This is an issue I started to experience a few months ago and have been trying to fix without success ever since.
Symptoms: At random intervals symfony loses the session info and logs out the users. It seems to be connected to the load on the site: when the load is higher, users seem to be logged out more often, sometimes as fast as 30 seconds after logging in.
Environment: Since this started I've changed a lot of the setup, including the PHP version, the web server, the session storage, and the symfony version. Here is the current setup:
Ubuntu 10.04, php 5.4.0, symfony 1.4.17, nginx 1.0.15 with FPM. Here is how the session storage is configured in factories.yml:
user:
  class: myUser
  param:
    timeout: 86400
    use_flash: true
storage:
  class: sfCacheSessionStorage
  param:
    cache:
      class: sfMemcacheCache
      param:
        lifetime: 86400
        host: 192.168.1.3
        serializer: IGBINARY
        mode: compiled
        port: 11211
I should mention that I've also used Redis for session storage and still had the problem.
I really have no idea what to try next. Has anybody else experienced something similar? At this stage any hint would be much appreciated.
Update:
After months of searching and countless trials and errors, I think it could be a concurrency problem. Our site is quite heavy on AJAX requests, and I have learned that this can lead to session issues unless a proper locking mechanism is implemented in the session handler.
First I eliminated symfony from the equation by setting it to use plain PHP sessions. With the default file session storage I never lose any sessions. Then I configured PHP to use the memcache session storage, and sure enough we started to see lost sessions. I am 100% positive that memcached is not running out of memory: I installed an admin tool, and the memcached server is barely using 2% of the 8GB allocated to it (no waste, the memory is allocated as needed).
Then I added a second memcached server and configured the session handler to use redundancy. This helped a lot; I rarely see lost sessions now. For the moment this is an acceptable compromise.
For some reason, memcache seems to miss every now and again and create a new session, which causes the user to be logged out.
As Jestep suggested, you should prove this by taking memcache out of the equation and seeing whether the problem goes away.
If so, then the problem is either the way you are talking to memcache, or memcache itself.
Actually, the setup we have been using for the last few months is symfony configured to use normal PHP sessions (not any of the cache classes), with PHP set up to use the memcache extension (note there is another one called memcached) to store sessions on two redundant memcache servers, as sketched below. If I take out one of the memcache servers we immediately start to see lost sessions.
This was the only setup that really did the job.
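For reference, the php.ini directives for that kind of redundant setup look roughly like this. This is only a sketch based on the pecl memcache extension's session options; the server addresses are placeholders, and some 3.x builds of the extension reportedly need the redundancy value set one higher than the actual server count:

session.save_handler = memcache
; list both servers so the handler can fail over between them
session.save_path = "tcp://192.168.1.3:11211, tcp://192.168.1.4:11211"
; write every session to both servers so losing one node does not lose the session
memcache.session_redundancy = 2
memcache.allow_failover = 1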
Related
I have noticed that multiple users per day are being assigned the same session_id. I am using php 7.2 now, but looking back into the history of user sessions this has been happening since I was using php 5.4.
I am just using php defaults of session_start(), no custom session handler.
I have read that the session_id is a combination of the client IP and the time, but given that I am using a load balancer, could that be limiting the randomness of the IP addresses?
What is the proper way to increase the uniqueness of session_ids to prevent collisions when using a load balancer?
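For what it's worth, on PHP 7.1+ session IDs are generated from the OS CSPRNG rather than derived from the client IP, and their length and alphabet are controlled directly in php.ini. A sketch, with example values:

; 48-character IDs drawn from a 64-character alphabet
session.sid_length = 48
session.sid_bits_per_character = 6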
If you are using nginx, you may want to check whether FastCGI micro-caching is enabled and disable it. This has caused session ID collisions before, as noted in the PHP.net bug listings for PHP 7.1 running under nginx:
Bug #75496 Session ID Collision happened few times
After the first case [of collision] we changed the hash entropy settings in php.ini so the session_id is now 48 chars, but it didn't help to prevent the second case.
Solution:
FastCGI micro-caching in nginx was caching 200 responses together with their session cookies.
Of course maybe it was set up wrong on our server, but it definitely has nothing to do with PHP.
Please see:
https://bugs.php.net/bug.php?id=75496
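The failure mode described in that bug is easy to reproduce with a micro-cache that is told to ignore Set-Cookie: a 200 response that also sets PHPSESSID gets cached, and every visitor who hits the cache is handed the same session ID. A rough nginx sketch (the zone name and paths are made up for illustration):

fastcgi_cache_path /var/cache/nginx keys_zone=microcache:10m;

location ~ \.php$ {
    fastcgi_pass unix:/run/php/php-fpm.sock;
    include fastcgi_params;
    fastcgi_cache microcache;
    fastcgi_cache_valid 200 1s;
    # The dangerous part: by default nginx refuses to cache responses that
    # carry Set-Cookie; this directive overrides that, so session cookies
    # get cached and replayed to other users.
    fastcgi_ignore_headers Cache-Control Expires Set-Cookie;
}

Dropping Set-Cookie from fastcgi_ignore_headers (or disabling the micro-cache for authenticated traffic) avoids the collision.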
This assumes you're running PHP-FPM on each web node, but I guess options 2 and 3 would probably work running PHP as an Apache module as well.
You've got a few options here:
1) Keep your load-balanced web nodes with Apache and have each one point to the same upstream PHP-FPM server. Obviously the third box running the single PHP-FPM process might need to be beefier, since it's handling PHP parsing for both web nodes.
2) Change the session storage to point to a file location (probably an NFS or SMB share) that both servers can access. Honestly, I'm not sure I've ever done this, but it seems like it would work. Really, your web files should probably be on an NFS/SMB share already so you can deploy changes to only one location.
3) Spin up a redis server and have both web nodes' PHP-FPM processes use it for sessions.
Number three is probably the best option in most cases.
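For option 3 with Redis, the phpredis extension ships a session handler, so the php.ini on both web nodes would look something like this (a sketch; the address is a placeholder):

session.save_handler = redis
session.save_path = "tcp://10.0.0.10:6379"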
We have PHP5 FPM set up under Nginx. We use Memcached as our session handler.
session.save_handler = memcached
My expectation is that, without fail (notwithstanding some fatal error like the death of our Memcached server), all sessions should make it to Memcached and explicitly NOT to disk.
However, upon inspecting our application, I've found sessions in Memcached AND in /var/lib/php5/fpm/.
Some troubleshooting:
1. We are definitely getting new sessions set in Memcached. However, some sessions that I found on disk don't appear in Memcached.
2. The timestamps on the file-based sessions are definitely recent; there are files from the current minute.
3. Permissions on the files are for the installation user, not root.
4. Despite point 3 above, there are SOME files that have root user and group ownership. This I find weird. Why would there be sessions owned by root? It would mean that anything trying to read such a file (which has 0600 permissions, by the way) as the normal user would fail.
So, I guess my questions amount to:
Is there any scenario in which it is valid that new session files are created on disk despite the fact that we use Memcached?
Any idea why we'd have session files that have a root ownership?
For context: I'm researching very sporadic session expiry issues. Increasing Memcached memory limits and concurrent connections ultimately fixed a large number of the instances, but we're still experiencing a small number of session expiries. Anyway, that is simply context; it might not be important.
The session files were created by php-cli started by cron. The CLI config differs from the FPM one and uses the default file session handler.
Edit
Importantly, the cronjob must either be hitting a piece of code that manually starts the session
OR
the configuration directive session.auto_start must be set to true for PHP5-cli.
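A quick way to confirm what the CLI SAPI is actually doing, and to pin a cron job to the same handler as FPM (the paths and values here are illustrative):

# what cron jobs see:
php -r 'echo ini_get("session.save_handler"), " ", ini_get("session.save_path"), PHP_EOL;'

# or override the handler for a single cron job:
php -d session.save_handler=memcached -d session.save_path="localhost:11211" /path/to/job.php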
I have a web tier in AWS running nginx + PHP-FPM, using memcache on ElastiCache for sessions. Over the last 6 months or so we've been experiencing a very strange issue where every so often, perhaps every 6 weeks, the ElastiCache node runs out of memory and starts evicting keys, which leads to some users losing their sessions, being logged out, and of course getting frustrated and losing their place in the app.
I've tried several things. One was leveraging the php-memcached module in php.ini:
session.save_handler = memcached
session.save_path = "<aws elasticache dns:port>"
And yes I verified that the save_path url I'm actually using is correct and receiving network connections. I've also verified through CloudWatch metrics that the cache node is indeed receiving network connections and data.
This configuration did not work, so I replaced it with a Zend Framework session manager and save handler. I verified through phpinfo() that session.save_handler was set to user, and also verified that the browser is getting the right cookie, as configured in Zend session.
Still, we're having the same problem as illustrated in the following CloudWatch screenshot:
The vertical spikes in memory are, I believe, due to memcache clearing expired keys, which seems to happen every 24 hours. The very last (far right) spike is where I rebooted the node. The strange thing is that every time it clears keys, it doesn't clear enough. We end up with an ultimately downward trend in available memory, which at some point causes memory to run out and memcache to start evicting keys.
I'm at a loss as to what could be the problem and what to try next in an effort to debug. Any thoughts? Thanks!
This isn't a bug, it's just how Memcached is supposed to work. By the very nature of being a cache, the data should be (relatively) ephemeral. If your current node doesn't have enough memory to hold all the values you are trying to store, it has no choice but to evict keys. If you're only storing sessions and you're filling up an entire cache instance, your best option would be to upgrade the size of your cache node (that's a lot of sessions!) or, in AWS's case, add another node.
If you're storing other data on the cache node as well, set intelligent expiration times for those items so they expire and free space up periodically.
Update: I'll also add that if you're comfortable using cookies, a time-limited cookie used to recreate dropped sessions is a nice fill-in as well. Basic "keep me logged in" code should suffice.
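A minimal sketch of that idea; everything here is hypothetical (the cookie name, the secret, and how you load the user are your app's concern), and the point is only the shape of the technique:

<?php
$secret = 'replace-with-a-real-app-secret'; // hypothetical shared secret

// On login: issue a signed, time-limited "keep me logged in" cookie.
function issueRememberCookie(int $userId, string $secret): void
{
    $expires = time() + 60 * 60 * 24 * 14; // two weeks
    $payload = $userId . '|' . $expires;
    $sig = hash_hmac('sha256', $payload, $secret);
    // secure + httponly flags; requires HTTPS
    setcookie('remember', $payload . '|' . $sig, $expires, '/', '', true, true);
}

// On every request: if the session was evicted, rebuild it from the cookie.
session_start();
if (empty($_SESSION['user_id']) && isset($_COOKIE['remember'])) {
    list($userId, $expires, $sig) = explode('|', $_COOKIE['remember'], 3);
    $expected = hash_hmac('sha256', $userId . '|' . $expires, $secret);
    if (hash_equals($expected, $sig) && time() < (int) $expires) {
        $_SESSION['user_id'] = (int) $userId; // session silently recreated
    }
}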
How can I manage sessions between 2 or more web servers that are behind a load balancer?
The options that I found are:
Use database session CDbHttpSession
Use cache session CCacheHttpSession
Use Security manager CSecurityManager
As Yii project lead Qiang said, there is only one thing you need to be careful about, and that is the validationKey of CSecurityManager. By default, this key is automatically/randomly generated the first time and stored under the runtime directory. In a multiple-server environment you should explicitly configure this property so that all servers share the same key, since it is used widely to generate hash keys for various security-related measures.
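In Yii 1.x config terms, that advice looks roughly like this (a sketch; the key value and the component choices are placeholders):

// protected/config/main.php
return array(
    'components' => array(
        'securityManager' => array(
            // must be identical on every server behind the load balancer
            'validationKey' => 'the-same-shared-secret-on-all-servers',
        ),
        'session' => array(
            // store sessions in the shared database instead of local files
            'class' => 'CDbHttpSession',
            'connectionID' => 'db',
            'autoCreateSessionTable' => true,
        ),
    ),
);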
You have a couple of options:
1) If your load balancer supports it, you can enable session persistence so that a user is always sent to the same server they originally hit. The benefit is that it's easy to set up if you don't want to change any code. The downside is that if one of your servers goes down, you lose all the sessions on that node.
2) Set up a shared memcache (not memcached) session store between node1 and node2. The relevant settings are:
php.ini
session.save_handler = memcache
session.save_path = "tcp://<ip1>, tcp://<ip2>"
memcache.ini
memcache.allow_failover = 1
memcache.default_port = 11211
memcache.hash_strategy = standard
memcache.max_failover_attempts = 20
It's a little tricky to set up, but once you get it working you have full redundancy between both servers if one of them goes down.
3) Set up a third node to manage sessions and configure PHP's session.save_path to point to that server's IP. The benefit is that sessions are now managed by a third server. The downside is that you lose redundancy: if that server goes down, you lose sessions.
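For option 3, the php.ini on both web nodes would simply point at the dedicated session box, along these lines (a sketch):

session.save_handler = memcache
session.save_path = "tcp://<session server ip>:11211"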
This is the best answer that I got, but I can't use APC!
Are there any other methods?
I'm working with a couple of web servers behind a load balancer, and I can enable sticky sessions to pin a user to one specific web server; this will work.
I have been reading about PHP sessions and Memcache. I must say what I've read is a touch confusing, as some pages say it's a good idea and others say the opposite.
Questions:
Is it possible to keep PHP sessions in memcache?
Is it better to use sticky sessions than memcache?
What are the problems with PHP sessions in memcache? Note: I can get enough cache (Amazon, so it's expandable).
1: YES. And I strongly recommend storing PHP sessions in Memcached. Here's why:
Memcached is great for storing small chunks of data that would otherwise be fetched repeatedly from the database or filesystem.
Memcached was designed specifically for sessions. It was originally the brainchild of the lead developer of livejournal.com and was later also used to cache the content of users' posts. The benefit was immediate: most of the action was taking place in memory, and page load times greatly improved.
Thankfully, PHP and Apache have an easy implementation for handling sessions with Memcached. Simply install it with a few shell commands;
example for Debian:
sudo apt-get -t stable install php7.4-memcached
and then change your php.ini settings to something similar to:
(taken from https://www.php.net/manual/en/memcached.sessions.php)
session.save_handler = memcached
; change server:port to fit your needs...
session.save_path = "localhost:11211"
The key is session.save_path: it will no longer point to a relative file path on your server.
APC was mentioned; APC is for caching the compiled .php files used by the program. APC and Memcached together will reduce IO significantly and leave Apache/nginx free to serve resources, such as images, faster.
2: No
3: The fundamental disadvantage of using Memcached is data volatility.
Session data is not persistent in Memcached, so if and when the server crashes, all data in memory is lost and everyone will have to log in again.
And then you have memory consumption...
Remember: the sessions are stored in memory. If your website handles a large number of concurrent users, you may have to shell out a little extra money for a larger memory allocation.
1. Yes, it is possible to keep PHP sessions in memcached.
The memcache extension even comes with a session handler that takes very little configuration to get up and running. http://php.net/manual/en/memcached.sessions.php
2. Memcache/Sticky Sessions
I don't really know which is "better"; I feel this is going to be one of those "it depends" answers. It likely depends on your reasons for load balancing: whether a small number of users each cause a lot of load, or a large number of users each cause a little.
3. Cons of Memcache
There are probably 2 main cons to using memcache for sessions storage.
Firstly, it is volatile. This means that if one of your memcached instances is restarted or crashes, any sessions stored in that instance are lost, whereas with traditional file-based sessions they would still be there when the server returns.
Secondly, and probably more relevant, memcached doesn't guarantee persistence; it is only meant to be a cache. Data can be purged from memcached at any time, for any reason. In reality, the main reason data gets purged is that the cache is nearing its size limits, in which case the least recently accessed data is expelled first. Again, this might not be an issue, as the user is probably gone if their session is stale, but it depends on your needs.
If you want to use the "memcached" extension rather than "memcache" (they are two different extensions) for session handling, pay attention when you modify php.ini.
Most web resources that Google turns up are based on memcache, because it is the earlier of the two extensions. They will say the following:
session.save_handler = memcache
session.save_path = "tcp://localhost:11211"
But those settings are not valid when it comes to memcached.
You should modify php.ini like this instead:
session.save_handler = memcached
session.save_path = "localhost:11211"
There is no protocol identifier.
From: http://php.net/manual/en/memcached.sessions.php#99646
From my point of view, storing sessions in Memcached is not recommended. If a session disappears, the user is usually logged out; whereas if a portion of a cache disappears, due to a hardware crash or otherwise, it should not cause your users noticeable pain. According to the memcached site, "memcached is a high-performance, distributed memory object caching system, generic in nature, but intended for use in speeding up dynamic web applications by alleviating database load." So while developing your application, remember that you must have a fallback mechanism to retrieve the data when it is not found in the Memcached server.
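A minimal sketch of such a fallback, read-through style; the function, table, and column names are made up for illustration, and the point is that a memcached miss degrades to a database hit rather than to lost data:

<?php
// Hypothetical read-through helper: try memcached first, fall back to the
// authoritative store on a miss, then repopulate the cache.
function getUserProfile(Memcached $cache, PDO $db, int $userId): ?array
{
    $key = "profile:$userId";
    $data = $cache->get($key);
    if ($data !== false) {
        return $data; // cache hit
    }
    // Cache miss (evicted, expired, or node restarted): go to the database.
    $stmt = $db->prepare('SELECT * FROM profiles WHERE user_id = ?');
    $stmt->execute(array($userId));
    $data = $stmt->fetch(PDO::FETCH_ASSOC) ?: null;
    if ($data !== null) {
        $cache->set($key, $data, 3600); // repopulate for an hour
    }
    return $data;
}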