Multiple PHPSESSID collisions - php

I have noticed that multiple users per day are being assigned the same session_id. I am using PHP 7.2 now, but looking back through the history of user sessions, this has been happening since I was using PHP 5.4.
I am just using the PHP defaults with session_start(), no custom session handler.
I have read that the session_id is derived from a combination of the client IP and the time, but given that I am using a load balancer, could that be limiting the randomness of the IP addresses?
What is the proper way to increase the uniqueness of session_ids to prevent collisions when using a load balancer?

If you are using nginx, you may want to check whether FastCGI micro-caching is enabled and disable it. This has caused session ID collisions before, as noted in the PHP.net bug tracker for PHP 7.1 running behind nginx:
Bug #75496 Session ID Collision happened few times
After the first case [of collision] we changed the hash entropy settings in php.ini so the session_id is now 48 chars, but it didn't help to prevent the second case.
Solution:
FastCGI micro-caching at nginx was caching 200 responses together with their session cookies.
Of course, maybe it was misconfigured on our server, but it definitely has nothing to do with PHP.
Please see:
https://bugs.php.net/bug.php?id=75496
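If you do want to keep micro-caching enabled, a common mitigation (a sketch only, not taken from the bug report) is to bypass the FastCGI cache whenever a request carries a PHP session cookie:

fastcgi_cache_bypass $cookie_PHPSESSID;
fastcgi_no_cache $cookie_PHPSESSID;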

I'm assuming you're running PHP-FPM on each web node, but options 2 and 3 below would probably work running PHP as an Apache module as well.
You've got a few options here:
1. Keep your load-balanced web nodes running Apache and have each one point to the same upstream PHP-FPM server. Obviously the third box running the single PHP-FPM process might need to be beefier, since it's handling PHP parsing for both web nodes.
2. Change the session storage to point to a file location (probably an NFS or SMB share) that both servers can access. I'm not sure I've ever done this, honestly, but it seems like it would work. Really, your web files should probably be on an NFS/SMB share already so you can deploy changes to a single location.
3. Spin up a Redis server and have both web nodes' PHP-FPM processes use it for sessions.
Number three is probably the best option in most cases; a minimal configuration sketch follows.
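A minimal php.ini sketch for option 3, assuming the phpredis extension is installed on both web nodes (the host name is a placeholder):

session.save_handler = redis
session.save_path = "tcp://redis.internal.example:6379"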

Related

What to care about when using a load balancer?

I have a web application written in PHP; it is already deployed on an Apache server and works perfectly.
The application uses MySQL as its database; sessions are saved in a memcached server.
I am planning to move to an HAProxy environment with 2 servers.
What I know: I will deploy the application to the servers and configure HAProxy.
My question is: is there something I have to take care of or change in the code?
It depends.
Are you trying to solve a performance or redundancy problem?
If your database (MySQL) and session handler (memcached) are running on one or more servers separate from the two Apache servers, then the only major thing your code will have to do differently is manage the forwarded IP addresses (via X-FORWARDED-FOR), and HAProxy will happily round robin your requests between Apache servers.
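As a minimal sketch of handling that header in PHP (the load balancer address 10.0.0.1 is a placeholder; only trust the header when the request actually comes from your HAProxy box):

<?php
$clientIp = $_SERVER['REMOTE_ADDR'];
// Only trust X-Forwarded-For when the request came from the load balancer itself
if ($clientIp === '10.0.0.1' && !empty($_SERVER['HTTP_X_FORWARDED_FOR'])) {
    // The left-most entry is the original client; later entries are intermediate proxies
    $parts    = explode(',', $_SERVER['HTTP_X_FORWARDED_FOR']);
    $clientIp = trim($parts[0]);
}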
If your database and session handler are currently running on the same server, then you need to decide if the performance or redundancy problem you are trying to solve is with the database, the session management, or Apache itself.
The easiest solution, for a performance problem with a database/session-heavy web app, is to simply start by putting MySQL and memcached on the second server to separate your concerns. If this solves the performance problem you were having with one server, then you could consider the situation resolved.
If the above solution does not solve the performance problem, and you notice that Apache is having trouble serving your website files, then you would have the option of a "hybrid" approach where Apache would exist on both servers, but then you would also run MySQL/memcached on one of the servers. If you decided to use this approach, then you could use HAProxy and set a lower weight to the hybrid server.
If you are attempting to solve a redundancy issue, then your best bet will be to isolate each piece into logical groups (e.g. database cluster, memcached cluster, Apache cluster, and a redundant HAProxy pair), and add redundancy to each logical group as you see fit.
The biggest issue that you are going to run into is PHP sessions. By default, PHP stores session state in files on the local server. When you add the second server into the mix and start load balancing connections to both of them, the PHP session will not be valid on the second server that gets hit.
Load balancers like HAProxy expect a "stateless" application. To make PHP stateless you will more than likely need to use a different storage mechanism for your sessions. If you do not or cannot make your application stateless, then you can configure HAProxy to do sticky sessions, either off cookies or stick tables (source IP, etc.).
The next thing you will run into is that you will lose the original requester's IP address. This is because HAProxy (the load balancer) terminates the TCP session and then creates a new TCP session to Apache. In order to continue to see the original requester's IP address, you will need to look at using something like X-Forwarded-For. In the HAProxy config the option is:
option forwardfor
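As a rough example (the backend name, server names, and addresses are placeholders), forwardfor and cookie-based sticky sessions can be combined in one backend:

backend php_web
    balance roundrobin
    option forwardfor
    cookie SRV insert indirect nocache
    server web1 10.0.0.11:80 check cookie web1
    server web2 10.0.0.12:80 check cookie web2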
The last thing that you are likely to run into is how HAProxy handles keep-alives. HAProxy has ACLs, rules that determine where to route traffic. If keep-alives are enabled, HAProxy will only make the decision on where to send traffic based on the first request on the connection.
For example, let's say you have two paths and you want to send traffic to two different server farms (backends):
somedomain/foo -> BACKEND_serverfarm-foo
somedomain/bar -> BACKEND_serverfarm-bar
The first request for somedomain/foo goes to BACKEND_serverfarm-foo. The next request for somedomain/bar also goes to BACKEND_serverfarm-foo. This is because HAProxy only processes the ACLs for the first request when keep-alives are used. This may not be an issue for you because you only have 2 Apache servers, but if it is, then you will need to have HAProxy terminate the keep-alive session. HAProxy has several options for this, but these two make the most sense in this scenario:
option forceclose
option http-server-close
The high-level difference is that forceclose closes both the server-side and the client-side keep-alive sessions, while http-server-close only closes the server-side keep-alive session, which allows the client to maintain a keep-alive with HAProxy.
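A sketch of a frontend for the path-based example above, with server-side keep-alives closed so the ACLs are evaluated on every request (the bind address is a placeholder):

frontend http_in
    bind *:80
    option http-server-close
    acl is_foo path_beg /foo
    acl is_bar path_beg /bar
    use_backend BACKEND_serverfarm-foo if is_foo
    use_backend BACKEND_serverfarm-bar if is_bar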

clearstatcache + include_path + sessions

I'm having a problem when we run an upgrade for our web application.
After the upgrade script completes and we access the web app via the browser, we get file-not-found errors on require_once() because we shifted some files around and PHP still has the old directory structure cached.
If we wait out the default 120 seconds for the realpath_cache_ttl to expire, everything resolves itself, but this is not acceptable for obvious reasons.
So I tried using clearstatcache with limited success. I created a separate file (clearstatcache.php) that only calls this function (this is a one line file), and placed a call to it in our install script via curl:
<?php
clearstatcache(true);
This does not seem to work, however if I call this file via the browser it immediately begins to work.
I'm running PHP version 5.3
I started looking at the request header differences between my browser and curl, and the only thing I can see that might matter is the PHPSESSID cookie.
So my question is: does the current PHPSESSID matter? (I don't think it should.) Am I doing something wrong with my curl command? I am using:
curl -L http://localhost/clearstatcache.php
EDIT: Upon further research, I've decided this probably has something to do with multiple Apache processes running. clearstatcache will only clear the cache of the current Apache process; when the browser makes a request, a different Apache process serves it, and that process still has the old cache.
Given that the cache is part of the Apache child process thanks to mod_php, your solution here is probably going to be restarting the Apache server.
If you were using FastCGI (under Apache or another web server), the solution would probably be restarting whatever process manager you were using.
This step should probably become part of your standard rollout plan. Keep in mind that there may be other caches (the opcode cache, for example) that you might also need to clear.
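If you keep a flush endpoint around for deploys, a sketch along these lines (the file name is hypothetical) clears both the stat/realpath cache and, where available, the opcode cache; note it still only affects whichever worker process happens to serve the request:

<?php
// deploy-flush.php (hypothetical) - hit once per web node after a release
clearstatcache(true);           // clears the stat and realpath caches of this process
if (function_exists('opcache_reset')) {
    opcache_reset();            // clears the opcode cache if OPcache is loaded
}
echo "caches cleared\n";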

PHP Sessions to handle Multiple Servers

All,
I have a PHP 5 web application written with Zend Framework and MVC. This application is installed on 2 servers with the same setup. Server X has PHP 5/MySQL/Apache and Server Y has the same. We don't have a common DB server between the two.
My application works when accessed individually via HTTPS on Server X and Server Y. But when we turn on load balancing with both servers up, the sessions get lost.
How can I make sure my sessions persist across servers? Should I maintain my DB on a third server and write sessions to it? If so, what's the easiest and most secure way to do it?
Thanks
memcached is a popular way to solve this problem. You just need to get it up and running (easy) and update your php.ini file to tell PHP to use memcached as the session storage.
In php.ini you would modify:
session.save_handler = memcache
session.save_path = "tcp://localhost:11211"
For the general idea: PHP Sessions in Memcached.
There are any number of tutorials on setting up the Zend session handler to work with memcached. Take your pick.
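If editing php.ini isn't convenient, the same settings can be applied per application before the session starts; a minimal sketch (the host name is a placeholder, and the memcache extension must be installed on both servers):

<?php
ini_set('session.save_handler', 'memcache');
ini_set('session.save_path', 'tcp://memcache.internal.example:11211');
session_start();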
Should I maintain my db on a third server and write sessions to it?
Yes, one way to handle it is to have a 3rd machine running the database that both web servers use for the application. I've done that for several projects in the past and it's worked well. The question with that approach is: is the bottleneck at the web servers or the database? If it's at the database, you won't see much improvement by throwing load balancing of the web servers into the mix. You may need to instead think of mirroring schemes for the database.
Another option is to use the sticky sessions feature on your load balancer. What this will do is keep users on certain servers. So when user 1 comes to the site, they will be directed to server X. Every subsequent request will also be directed to server X. This allows you to not worry about persisting sessions between servers, as each user will continue to be directed to the server they have their session on.
The one downside of this is that when you take a web server out of the pool, half the users with a session will be logged out. So the effectiveness of this solution depends on how often you take servers out of the pool.

PHP running as a FastCGI application (php-cgi) - how to issue concurrent requests?

EDIT: Update - scroll down
EDIT 2: Update - problem solved
Some background information:
I'm writing my own webserver in Java and a couple of days ago I asked on SO how exactly Apache interfaces with PHP, so I can implement PHP support. I learnt that FastCGI is the best approach (since mod_php is not an option). So I have looked at the FastCGI protocol specification and have managed to write a working FastCGI wrapper for my server. I have tested phpinfo() and it works, in fact all PHP functions seem to work just fine (posting data, sessions, date/time, etc etc).
My webserver is able to serve requests concurrently (i.e. user1 can retrieve file1.html at the same time as user2 is requesting some_large_binary_file.zip). It does this by spawning a new Java thread for each user request (terminating when the request completes or the client connection is closed).
However, it cannot deal with 2 (or more) FastCGI requests at the same time. What it does is, it queues them up, so when request 1 is completed immediately thereafter it starts processing request 2. I tested this with 2 PHP pages, one contains sleep(10) and the other phpinfo().
How would I go about dealing with multiple requests? I know it can be done (PHP under IIS runs as FastCGI and it can deal with multiple requests just fine).
Some more info:
I am coding under Windows and my batch file used to execute php-cgi.exe contains:
set PHP_FCGI_CHILDREN=8
set PHP_FCGI_MAX_REQUESTS=500
php-cgi.exe -b 9000
But it does not spawn 8 children; the service simply terminates after 500 requests.
I have done research and from Wikipedia:
Processing of multiple requests simultaneously is achieved either by using a single connection with internal multiplexing (i.e. multiple requests over a single connection) and/or by using multiple connections.
Now clearly the multiple-connections approach isn't working for me: every time a client requests something that involves FastCGI, it creates a new socket to the FastCGI application, but the requests are not handled concurrently (they are queued up instead).
I know that internal multiplexing of FastCGI requests under the same connection is accomplished by issuing each unique FastCGI request with a different request ID.
(also see the last 3 paragraphs of 'The Communication Protocol' heading in this article).
I have not tested this, but how would I go about implementing that? I take it I need some kind of FastCGI Java thread which contains a Map of some sort and a static function which I can use to add requests to it. Then in the thread's run() function it would have a while loop, and on every cycle it would check whether the Map contains new requests; if so, it would assign them a request ID and write them to the FastCGI stream, and then wait for input, and so on. As you can see, this quickly becomes complicated.
Does anyone know the correct way of doing this? Or any thoughts at all? Thanks very much.
Note, if required I can supply the code for my FastCGI wrapper.
Update:
Basically, I downloaded nginx and set it up to use PHP as a FastCGI application, and it suffered from the same problem as my server: it could not handle concurrent PHP requests. This leads me to believe my code is in fact correct, so something is wrong with PHP or I am not setting it up correctly. Maybe it is because I am using Windows, since some lighttpd users claim Windows can't handle FastCGI properly (this doesn't make much sense). I'll install Linux sometime soon and report any progress with that.
Okay, I managed to find the cause of the problem. It wasn't my code at all. It's PHP: it cannot spawn additional php-cgi processes under Windows when running in FastCGI mode. Under Linux it works perfectly; I simply pointed my server to my Linux box's IP and it had no problems with concurrent FastCGI requests. Sucks, but I guess that's the way it is...
I did look deeper into the PHP source code after that and found that the section of code which responds to PHP_FCGI_CHILDREN has been wrapped in #ifndef WIN32, so the developers must be aware of the issue.
Hi, this comes a little late, but I've written a spawner for php-cgi.exe on Windows; it's not perfect, but it might be what you need. Check it out here.
re: spawn-php python script...
Thanks @nosam, that really helped.
For those wanting to get it working quickly, you'll need the following (on a 64-bit system):
ActivePython-2.7.2.5-win64-x64.msi
pywin32-217.win-amd64-py2.7.exe
ActivePython does not keep older versions of these on their website, so you will need to do a bit of googling around to find a working mirror (there are plenty out there).
Once you have downloaded the src from Bitbucket, you may need to edit spawn-php.py (to fix up the tab spacing), as Bitbucket seemed to mess up the tabs in the file, preventing it from running.
All in all, that saved my day for a busy little Windows website using nginx + FastCGI.
Thanks mate!

PHP sessions in a load balancing cluster - how?

OK, so I've got this totally rare and unique scenario of a load-balanced PHP website. The bummer is: it didn't use to be load balanced. Now we're starting to get issues...
Currently the only issue is with PHP sessions. Naturally nobody thought of this issue at first so the PHP session configuration was left at its defaults. Thus both servers have their own little stash of session files, and woe is the user who gets the next request thrown to the other server, because that doesn't have the session he created on the first one.
Now, I've been reading the PHP manual on how to solve this situation. There I found the nice function session_set_save_handler(). (And, coincidentally, this topic on SO.) Neat. Except I'll have to call this function in all the pages of the website. And developers of future pages would have to remember to call it all the time as well. Feels kinda clumsy, not to mention probably violating a dozen best coding practices. It would be much nicer if I could just flip some global configuration option and voilà - the sessions all get magically stored in a DB or a memory cache or something.
Any ideas on how to do this?
Added: To clarify - I expect this to be a standard situation with a standard solution. FYI - I have a MySQL DB available. Surely there must be some ready-to-use code out there that solves this? I can, of course, write my own session saving stuff, and the auto_prepend option pointed out by Greg seems promising - but that would feel like reinventing the wheel. :P
Added 2: The load balancing is DNS based. I'm not sure how this works, but I guess it should be something like this.
Added 3: OK, I see that one solution is to use auto_prepend option to insert a call to session_set_save_handler() in every script and write my own DB persister, perhaps throwing in calls to memcached for better performance. Fair enough.
Is there also some way that I could avoid coding all this myself? Like some famous and well-tested PHP plugin?
Added much, much later: This is the way I went in the end: How to properly implement a custom session persister in PHP + MySQL?
Also, I simply included the session handler manually in all pages.
You could set PHP to handle the sessions in the database, so all your servers share the same session information, since all servers use the same database for it.
A good tutorial for that can be found here.
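For reference, a minimal sketch of such a DB-backed handler (the DSN, credentials, and a sessions table with id/data/updated_at columns are all assumptions); registering it from an auto_prepend_file means individual pages don't need to change:

<?php
// Hypothetical shared-session bootstrap, e.g. loaded via auto_prepend_file
$pdo = new PDO('mysql:host=db.internal.example;dbname=app', 'appuser', 'secret');

session_set_save_handler(
    function () { return true; },                          // open
    function () { return true; },                          // close
    function ($id) use ($pdo) {                            // read session data by ID
        $stmt = $pdo->prepare('SELECT data FROM sessions WHERE id = ?');
        $stmt->execute(array($id));
        $row = $stmt->fetch(PDO::FETCH_ASSOC);
        return $row ? $row['data'] : '';
    },
    function ($id, $data) use ($pdo) {                     // write session data
        $stmt = $pdo->prepare('REPLACE INTO sessions (id, data, updated_at) VALUES (?, ?, NOW())');
        return $stmt->execute(array($id, $data));
    },
    function ($id) use ($pdo) {                            // destroy a session
        return $pdo->prepare('DELETE FROM sessions WHERE id = ?')->execute(array($id));
    },
    function ($maxlifetime) use ($pdo) {                   // gc: remove expired rows
        $stmt = $pdo->prepare('DELETE FROM sessions WHERE updated_at < ?');
        return $stmt->execute(array(date('Y-m-d H:i:s', time() - $maxlifetime)));
    }
);
session_start();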
The way we handle this is through memcached. All it takes is changing php.ini along the lines of the following:
session.save_handler = memcache
session.save_path = "tcp://path.to.memcached.server:11211"
We use AWS ElastiCache, so the server path is a domain, but I'm sure it'd be similar for local memcached as well.
This method doesn't require any application code changes.
You don't mention what technology you are using for load balancing (software, hardware, etc.), but in any case, the solution to your problem is to employ "sticky sessions" on the load balancer.
In summary, this means that when the first request from a "new" visitor comes in, they are assigned a specific server from the cluster: all future requests for the lifetime of their session are then directed to that server. In practice this means that applications written to work on a single server can be up-scaled to a balanced environment with zero/few code changes.
If you are using a hardware balancer, such as a Radware device, then the sticky sessions is configured as part of the cluster setup. Hardware devices usually give you more fine-grained control: such as which server a new user is assigned to (they can check for health status etc. and pick the most healthy / least utilised server), and more control of what happens when a server fails and drops out of the cluster. The drawback of hardware balancers is the cost - but they are worth it imho.
As for software balancers, it comes down to what you are using. For Apache there is the stickysession property on mod_proxy, and plenty of articles via Google to get this working with the PHP session (for example).
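A rough mod_proxy_balancer sketch (hostnames are placeholders; mod_headers and mod_proxy_balancer must be enabled) using a dedicated routing cookie for stickiness:

Header add Set-Cookie "ROUTEID=.%{BALANCER_WORKER_ROUTE}e; path=/" env=BALANCER_ROUTE_CHANGED
<Proxy "balancer://phpcluster">
    BalancerMember "http://web1.internal.example" route=web1
    BalancerMember "http://web2.internal.example" route=web2
    ProxySet stickysession=ROUTEID
</Proxy>
ProxyPass        "/" "balancer://phpcluster/"
ProxyPassReverse "/" "balancer://phpcluster/"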
Edit:
From other comments posted after the original question, it sounds like your "balancing" is done via Round Robin DNS, so the above probably won't apply. I'll refrain from commenting further and starting a flame against round robin dns.
The easiest thing to do is configure your load balancer to always send the same session to the same server.
If you still want to use session_set_save_handler then maybe take a look at auto_prepend.
If you have time and you still want to check more solutions, take a look at
http://redis4you.com/articles.php?id=01..
Using Redis, you are fault tolerant. From my point of view, it could be better than the memcache solutions because of this robustness.
If you are using PHP sessions, you could share the /tmp directory (where I think the sessions are stored) over NFS between all the servers in the cluster. That way you don't need a database.
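A minimal sketch, assuming the share is mounted at /mnt/nfs/php-sessions on every node (the mount point is an assumption; a dedicated exported directory is safer than sharing /tmp itself):

session.save_path = "/mnt/nfs/php-sessions"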
Edited: You can also use an external service like memcachedb (persistent and fast) and store the session info in the memcachedb index, identified by a hash of the content or even the session ID.
When we had this situation we implemented some code that lives in a common header.
Essentially, for each page we check if we know the session ID. If we don't, we check whether we're in the situation you describe by checking if we have stored session data in the DB. Otherwise we just start a new session.
Obviously this requires all relevant data to be copied to the DB, but if you encapsulate your session data in a separate class then it works OK.
You could also try using memcache as the session handler.
Might be too late, but check this out: http://www.pureftpd.org/project/sharedance
Sharedance is a high-performance server to centralize ephemeral key/data pairs on remote hosts, without the overhead and the complexity of an SQL database.
It was mainly designed to share caches and sessions between a pool of web servers. Access to a sharedance server is trivial through a simple PHP API and it is compatible with the expectations of PHP 4 and PHP 5 session handlers.
When it comes to PHP session handling in a load-balancing cluster, it's best to have sticky sessions. For that, ask the network or datacenter team maintaining the load balancer to enable sticky sessions. Once that is enabled, you won't need to worry about sessions on the PHP end.
