I'm trying to get Node.js and CodeIgniter to share data with each other (sessions and DB data). I've googled around quite a bit for a solution to this problem, but I still haven't found a convenient way to do it. The most appropriate approach seems to be some sort of caching software, such as memcached or Redis, since wrappers are available for both Node and PHP.
This is what I've thought of so far:
1. Client logs in as normal on the CodeIgniter-powered website. A session is created and added to memcached.
2. Client connects to a secure socket.io server using SSL.
3. Client sends raw cookie data to the socket.io server.
4. Server does some string splitting on the cookie data and gets the session id.
5. Server checks the cache to see if the session id exists. If yes, the user is logged in!
6. User logs out on the CodeIgniter site. Session data is destroyed.
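On the Node side, the cookie-parsing and cache-check steps above could be sketched like this (the cookie name 'ci_session' is an assumption — use whatever sess_cookie_name is set to in your CodeIgniter config, and swap the plain object for a real memcached/Redis lookup):

```javascript
// Extract the CodeIgniter session id from a raw Cookie header.
function getSessionId(rawCookie, name) {
  name = name || 'ci_session'; // assumption: default CodeIgniter cookie name
  for (const pair of rawCookie.split(';')) {
    const eq = pair.indexOf('=');
    if (eq === -1) continue;
    if (pair.slice(0, eq).trim() === name) {
      return decodeURIComponent(pair.slice(eq + 1).trim());
    }
  }
  return null;
}

// The real check would hit memcached/Redis; a plain object stands in here.
function isLoggedIn(sessionStore, sessionId) {
  return sessionId !== null && sessionStore[sessionId] !== undefined;
}
```

With a real store, `isLoggedIn` would become an async cache GET, but the control flow stays the same.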
However, there are a few problems that I can think of with this approach:
1. How would I clean up expired sessions from memcached? From what I can tell, there is no way of detecting expired sessions in CodeIgniter. - Edit - Just realized that I could set a timeout (TTL) on the memcached data to solve this.
2. The CodeIgniter docs say that session ids change every five minutes. Wouldn't this kind of ruin my approach?
Is there a better solution out there? I'd like to hear what other options there are before I start implementing this.
Related
Hi, I have to retrieve data from several web servers. First I log in as a user to my web site. After a successful login I have to fetch data from different web servers and display it. How can I share a single session with multiple servers? How can I achieve this?
When I first log in, a session is created and its session file is saved in the temp folder of that server. When I try to access another server, how can I use the current session that was already created when I logged in? Can anybody suggest a solution?
You'll have to use another session handler.
You can:
build your own (see session_set_save_handler) or
use extensions that provide their own session handler, like memcached
To complement all these answers:
If you store sessions in a database, check that garbage collection of sessions in PHP is really activated (it's not the case on Debian-like distributions: they decided to garbage-collect sessions with their own cron job and altered php.ini so that it never launches any GC, so check session.gc_probability and session.gc_divisor). The main problem with session storage in a database is that it means a lot of write queries and a lot of conflicting access to the database. This is a great way of stressing a database server like MySQL. So IMHO using another solution is better; it keeps your read/write ratio closer to a typical web-database workload.
You could also keep the file storage system and simply share the session directory between servers with NFS. Alter the session.save_path setting to use something other than /tmp. But NFS is by definition not the fastest way of using a disk. Prefer memcached or MongoDB for fast access.
If the only thing you need to share between the servers is authentication, then instead of sharing the real session storage you could share authentication credentials, like the OpenID system on SO. This is what we call SSO; for the web part you have several solutions, from OpenID to CAS, and others. If the data is merged on the client side (AJAX, ESI gateway) then you do not really need a common session data store on the server side. This will avoid having three of your five impacted web applications writing data into the shared session at the same time. Other session-sharing techniques (database, NFS, even memcached) are mostly used to share your data between several servers because load-balancing tools can send your sequential HTTP requests from one server to another, but if you really mean parallel gathering of data you should really study SSO.
Another option would be to use memcached to store the sessions.
The important thing is that you must have a shared resource - be it a SQL database, memcached, a NoSQL database, etc. that all servers can access. You then use session_set_save_handler to access the shared resource.
Store sessions in a database which is accessible from the whole server pool.
Store it in a database - get all servers to connect to that same database. First result for "php store session in database"
Please, I need a push (or kick) because I am feeling lost.
I have to write some kind of portal, which I would like to do by using php+mysql via ajax.
There is no problem with that, but part of the portal should work in real time. Because I've been messing around with node.js & socket.io for a while, and I think it's pretty awesome, I'm going to use it.
The problematic part is where I'd like the push in the right direction:
I'm going to authenticate users in that PHP "portal thing" by setting and checking PHP sessions - a really simple way to log users in and out, saving a hash and a microtime hash to the database, etc.
But how should I use this kind of signature and authentication in socket communication?
Is something like what's shown in the diagram below reasonable and legitimate?
I'd be grateful if anyone could point me somewhere or point out risks and things I should be worried about.
It may all be stupid nonsense. My problem is that I'm a newbie with Node & sockets (I've only been coding some simple chats, etc.).
Thanks for any suggestions!
It's great that you are using node.js. I have been playing with it recently - it's awesome!
There is a way you can use the PHP session in Node.js. All you need to do is use a separate DB for session storage which can be accessed by both PHP and Node.js. This will fix your user authentication on both sides.
I chose Redis over memcached for custom session storage because you don't want to lose all your existing session data on a server restart. For Node.js you can easily find, install and configure a Redis client.
Refer here for more useful info http://ericterpstra.com/2013/03/use-redis-instead-of-mysql-for-codeigniter-session-data
I'm on board with the whole cookieless domains / CDN thing, and I understand how just sending cookies for requests to www.yourdomain.com, while setting up a separate domain like cdn.yourdomain.com to keep unnecessary cookies from being sent can help performance.
What I'm curious about is if using PHP's native sessions have a negative effect on performance, and if so, how? I know the session key is kept track of in a cookie, which is small, and so that seems fine.
I'm prompted to ask this question because in the past I've written my web apps and stored a lot of the user's active data, preferences, and authentication information in the $_SESSION variable. However, I notice that some popular web applications out there, like Wordpress, don't use $_SESSION at all. But sessions are easy to use and seem fairly secure, especially if you combine them with tracking user-agent / IP changes to prevent session hijacking. So why don't Wordpress and other web apps use PHP's sessions? Should I also stop using sessions?
Also, let me clarify that I do realize the server must load the session data to process a page request, but that's not what I'm asking about here. My question is about whether / how it impacts network performance, especially in regard to the headers being sent / received. For example, does using sessions prevent pages or images on the site from being served from the browser's cache? Is the PHPSESSID cookie the only additional header being sent? These sorts of things.
The standard store for $_SESSION is the file-system with one file per session. This comes with a price:
When two requests access the same session, one request will win over the other and the other request needs to wait until the first request has finished. A race condition controlled by file-locking.
When cookies are used to store the session data (Wordpress, CodeIgniter), the race condition is the same, but the locking is not as apparent; a browser might do locking within its cookie management.
Using cookies has the downside that you cannot store that much data, and that the data gets passed with each request and response. This is likely to raise security issues as well: steal the cookie and you've got the data. If it's encrypted, an attacker can try to decrypt it to get at the data stored inside.
The historical reason in Wordpress' case is that the platform never used PHP sessions. The project started around 2000, and it got a lot of traction in 2002 and 2004. Session handling only became available with PHP 4, and PHP 3 was much more popular at that time.
Later on, when $_SESSION was available, the main design of the application was already done, and it worked. Next to that, in 2004/2005 Wordpress decided to start a commercial multi-blog hosting service. This created a need to scale the application(s) across servers, and cookies+database looked easier for session/user handling than the $_SESSION implementation. In fact, this is pretty easy and just works, so there never was a need to change it.
For CodeIgniter I cannot say that much. I know that it stores all session information inside a cookie by default, so "session" is just another name for "cookie" there. Optionally it can be encrypted, but this needs configuration. IIRC it was said that this was done because "most users do not need sessions". For those who do, there is a database backend (requires additional configuration) so users can change from the cookie to the database store transparently within their application. There is also a new implementation available that allows you to change to any store you like, e.g. to native PHP sessions as well. This is done with so-called drivers.
However this does not mean that you can't achieve the same based on $_SESSION nowadays. You can replace the store with whatever you like (even cookies :) ) and the PHP implementation of it should be encapsulated anyway in a good program design.
That done you can implement a store you can better control locking on (e.g. a database) and that works across servers in a load balanced infrastructure that does not support sticky sessions.
Wordpress is a good example of an own implementation of session handling, totally agnostic to whatever PHP offers. That means the wheel has been re-invented. Viewed from today, I would not call their design explicitly innovative; it fulfills a very specific need in a very specific environment that you can only understand if you know about the project's roots.
Codeigniter is maybe a little step ahead (in an interface sense) as it offers some sort of (unstable) interface to sessions and it's possible to replace it with any implementation you like. That's much better for new developers but it's also sort of re-inventing the wheel because PHP does this already out of the box.
The best thing you can do in an application design is to make the implementation independent from system needs, so to make the storage mechanism of your session data independent from the rest of the program flow. PHP offers this with a pretty direct interface, the $_SESSION array and the session configuration.
As $_SESSION is a superglobal array, you might want to prevent your application from accessing it directly, as this would introduce global state. So in a good design you would have an interface to it, to be able to fully abstract away from the superglobal.
With that done, plus abstraction of the store, plus configuration (e.g. all in one session dependency container), you should be able to scale and maintain your application well over as many servers as you like, for whatever reason. Your implementation can then just use cookies if you think that's enough for you. You will still be able to switch to database-backed sessions in case you need them - without rewriting large parts of your application.
I'm not 100% confident this is the case, but one reason to avoid the built-in $_SESSION mechanism in PHP is if you want to deploy your web application in a high-availability web-farm scenario.
Because the default session behavior in PHP is to store session data in files on the server's local disk, it makes it hard (if not impossible) to have multiple servers processing requests from the same user. You would want this if you deploy your web application in a web-farm environment where a number of PHP web servers process requests for your app to balance the load.
So, while local session state is generally much faster than a database-based solution, the latter is preferable when you need to process a huge number of requests and serve the capacity a web-farm environment is used for.
As I said at the beginning, I'm not 100% sure whether PHP supports configuring the session state provider to be a database, or a session state server, instead of the local default.
I had some thoughts a while back about using memcached for session storage, but came to the conclusion that it wouldn't be sufficient in the event that one or more of the servers in the memcached pool went down.
A hybrid approach, to spare the main database (MySQL) the load caused by reads, would be to write a function that tries to fetch the data from the cache pool and, if that fails, gets it from the database.
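That read-through function might look like this sketch (the Map and the loadFromDb callback are stand-ins for a real memcached client and a MySQL query; all names are illustrative):

```javascript
// Read-through lookup: try the cache pool first; on a miss, fall back
// to the database and repopulate the cache for the next reader.
async function cachedFetch(cache, key, loadFromDb) {
  if (cache.has(key)) return cache.get(key);        // cache hit
  const value = await loadFromDb(key);              // e.g. SELECT ... WHERE session_id = ?
  if (value !== undefined) cache.set(key, value);   // repopulate on success
  return value;
}
```

A real memcached client makes both steps asynchronous, but the hit/miss/repopulate shape stays the same.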
After putting some more thought into it, I started to think about using APC cache for session related data. If our web server would go down, sessions would be lost either way, so storing them in a local APC or a localhost memcached server maybe isn't that bad?
What are your experiences?
Generally, session data is something which should be treated as volatile in any situation. The user can always choose to eliminate the cookie themselves at any point (if you are using cookies, of course). For this reason, I see no problem with using memcached for session data.
For me, I'd just keep it simple - no need for a DB fallback unless you absolutely must never lose the user's session in the event of a memcached server failure. As I said at the beginning, I always treat sessions as purely volatile in any case and don't really store anything of any significance in them.
That's my two cents anyways.
We have an old legacy PHP application. Now I want to write a new application module using Ruby on Rails.
Deployment is one problem. I guess it should be possible to run the PHP app (via mod_php) and the RoR app (via mod_proxy / mongrel) on one Apache server. I don't want to use mod_rails because it would require running PHP via FastCGI, so there is a risk of breaking something. Both PHP and RoR will use the same DB.
The tricky part is how to pass login info from the PHP application to the RoR app. Users log into the PHP app and their info is stored in PHP session data. The RoR app will be placed in a subdirectory of the main PHP app (e.g. www.example.com/railsapp), so RoR should receive all HTTP cookies. The question is how to extract the PHP session data from the RoR app.
This is just my first idea, and it's rather bad because of possible race conditions between mod_php and RoR. I can modify the PHP app to store some info in the DB when a user logs in, but I don't know how to handle the case where the PHP session data has expired and some data in the DB should be updated (logging the user out).
Has anyone solved a similar problem? Or can anyone at least point out the most promising direction?
Update: It should be possible to configure mod_php to store session data in an SQL DB. That way there will be no race conditions, since the DB engine prevents them.
Update 2: Actually it is possible to use mod_rails with Apache in prefork mode, which is what mod_php requires; running Apache with the worker MPM is merely recommended for mod_rails. So the whole deployment of the PHP / RoR apps is greatly simplified.
First, if you are placing the Rails app in a subdirectory, it is possible to use mod_rails. In the configuration for the PHP site you can map the sub-URI to the Rails app's public directory.
Alias /railsapp /.../app/public
RailsBaseURI /railsapp
<Directory /.../app/public>
    Options -MultiViews
</Directory>
(DocumentRoot is not valid inside <Location>, so an Alias plus Passenger's RailsBaseURI does the mapping instead.)
To get a session over to the Rails side, you could create a connect page in the Rails app and call it from the PHP side, passing in some data to log the user in. You just need to protect this page from any request not coming from localhost (easy to do).
You could also switch rails to use a database to store its sessions, you should then be able to generate a session id, store it in a cookie with the correct name and secret, and create a session in the database manually with that id.
You could also (and this is what I recommend) have a proxy page on the Rails side which logs the user in and redirects them to their desired page. You could do it like this (not actual working code, but you get the idea):
PHP
$key = md5($user_id . $user_password_hash . $timestamp);
$url = "/railsapp/proxy?userid=" . $user_id . "&key=" . $key . "&page=home%2Fwelcome";
Rails App
map.proxy 'proxy', :controller => 'proxy', :action => 'connect'

class ProxyController < ActionController::Base
  def connect
    key = ...
    if params[:key] == key
      login_user params[:userid]
      redirect_to params[:page]
    else
      render :nothing => true, :status => 403
    end
  end
end
I've done a mixed PHP/RoR app before (old PHP code, new hawt RoR features, as stuff needed fixing it got re-implemented). It's not really all that hard -- you just serve up the PHP as normal files, and use a 404 handler to redirect everything else to the Rails app.
As far as the session data goes, you could stuff it into a DB, but if you're willing to write/find routines to read and write PHP's marshalled data format, note that PHP uses flock() to ensure there are no race conditions when reading/writing the session data file. Do the same thing in your Rails app for minimal pain.
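For illustration, a minimal reader for PHP's default session serialization format (entries like user|s:5:"alice";count|i:3;) could look like this; it handles scalar string/int/float/bool values only, and parsePhpSession is a made-up helper, not a library function:

```javascript
// Parse PHP's default session format: key|<php-serialized-value> pairs.
// Scalars only -- arrays/objects would need a full unserializer.
function parsePhpSession(raw) {
  const out = {};
  let i = 0;
  while (i < raw.length) {
    const bar = raw.indexOf('|', i);
    if (bar === -1) break;
    const key = raw.slice(i, bar);
    i = bar + 1;
    const type = raw[i];
    if (type === 's') {
      // s:<len>:"<bytes>";
      const lenStart = i + 2;
      const colon = raw.indexOf(':', lenStart);
      const len = parseInt(raw.slice(lenStart, colon), 10);
      const strStart = colon + 2;                 // skip :"
      out[key] = raw.slice(strStart, strStart + len);
      i = strStart + len + 2;                     // skip ";
    } else if (type === 'i' || type === 'd') {
      const semi = raw.indexOf(';', i);
      out[key] = Number(raw.slice(i + 2, semi));  // i:<int>; or d:<float>;
      i = semi + 1;
    } else if (type === 'b') {
      const semi = raw.indexOf(';', i);
      out[key] = raw.slice(i + 2, semi) === '1';  // b:0; or b:1;
      i = semi + 1;
    } else {
      break; // arrays/objects not handled in this sketch
    }
  }
  return out;
}
```

String lengths in this format are byte counts, so a production version would operate on bytes rather than JavaScript characters for non-ASCII data.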
First, it seems like you're asking for trouble by mixing the two technologies. My first suggestion is "don't do that."
However, since you're probably not going to listen to that advice, I'll make a suggestion about your actual question. In PHP apps that I've seen store session data in the database, I've noticed two approaches to cleaning the data. Both involve timestamping the records so you know how old they are, and refreshing that timestamp from time to time while the user is active (sometimes every page view, sometimes less often, depending on expected load and query count).
If your app makes relatively few database calls, and therefore has a little time to spare, you can run an extra query against your session table on every page view, or at least on certain pages. On a busy application that extra query is the problem. The alternative tends to be a cron job that runs periodically to clean the table of expired records. These periodic cleaning jobs can also be run only when specific other tasks are done (like a user log-in, which is often a little slow anyway since you have to set up the session data).
Hey, I've always thought this is actually a pretty common problem for those moving to Ruby from PHP who have legacy apps/data in PHP, and I'm surprised it's not asked more.
I would be tempted to go with either of the following two approaches, both of which have worked well for me in the past. The second option will require a little more work, and both will require modifications to your existing login code:
1) Use OpenID to handle your login so you don't have to worry about rolling your own solution. Either run your own OpenID server or use Google, Yahoo, etc. Run each of your apps as a unique subdomain. There are OpenID plugins/code for Rails and PHP, and it is a tried and tested secure standard.
2) Use memcached to store login sessions (as opposed to a DB). Have a login page (either in your PHP app or your Rails app). Whether your Ruby app or your PHP app is accessed, the outcome would be similar to this:
a) User tries to access a login protected page
b) App checks its own session data to see if user is logged in
c) If correct session data exists then user is logged in and app proceeds
d) If the app can't find a current login session, it checks for a browser cookie
e) The app then checks memcached for this cookie key.
f) If the cookie key exists then the user must be logged in (otherwise it redirects to the login page)
g) The app grabs the user_id from memcached (stored under the cookie key) so it knows which user is logged in
h) The app sets its own login session so it doesn't have to go past c) again unless the session expires
This is an overly simplified version of what is happening but does work.
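The cache side of that flow can be sketched in Node.js with an in-memory Map standing in for memcached, including the per-key expiry a real memcached gives you for free (all names here are illustrative):

```javascript
// In-memory stand-in for memcached with per-key expiry.
const store = new Map();

function cacheSet(key, value, ttlMs) {
  store.set(key, { value, expires: Date.now() + ttlMs });
}

function cacheGet(key) {
  const entry = store.get(key);
  if (!entry) return null;
  if (Date.now() > entry.expires) { store.delete(key); return null; }
  return entry.value;
}

// Steps e)-g): look the browser's cookie key up in the cache; a hit
// yields the user_id, a miss means "redirect to the login page".
function resolveLogin(cookieKey) {
  const userId = cacheGet(cookieKey);
  return userId === null ? { loggedIn: false } : { loggedIn: true, userId };
}
```

With a real memcached client the expiry is handled server-side, so cacheSet/cacheGet collapse to plain set/get calls.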
I would use memcached in this scenario because it has auto-expiration of values (with a TTL you define) and is damn fast. Remember: don't pass usernames and passwords between apps, or even user ids for that matter. Pass a unique key which is merely a pointer to information stored in your DB/memcached.
1) I successfully use Passenger and mod_php simultaneously on a single prefork Apache, both on the development machine and on the server.
2) I'd connect the applications via an HTTP challenge/response sequence or maybe via shared memcached keys.