ini_set('session.cookie_lifetime', 259200);
ini_set('session.gc_maxlifetime', 259200);
session_start();
I have the above bit of code included on every page of my site. I want to keep the user logged in for three days after logging in, and if they visit the site again before the expiry, the clock should reset for another three days. In other words, the session should stay alive for three days from the user's last visit, and expire only if they don't return within that window.
I've noticed, however, that sessions are kept alive for about a day and then die, despite the ini_set calls above. I considered that perhaps it was my webhost's php.ini, but the same thing happens on my local machine.
Is there some other ini_set call I can make to get my desired effect? These don't seem to work, although they do keep the session alive for one day.
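One thing worth knowing: gc_maxlifetime is counted from the session file's last modification, so the server side already "slides" with each visit. The cookie, however, is only sent once with an absolute expiry, so to get a sliding three-day window you have to re-send it on every request. A minimal sketch (the cookie path '/' is an assumption; adjust to your site):

```php
<?php
// Sketch: refresh the session cookie on every page load so its
// expiry always sits three days past the user's last visit.
$lifetime = 60 * 60 * 24 * 3; // three days in seconds

ini_set('session.gc_maxlifetime', $lifetime);
session_set_cookie_params($lifetime);
session_start();

// PHP only sends the session cookie once, with an absolute expiry,
// so re-send it on each request to push the deadline forward.
setcookie(session_name(), session_id(), time() + $lifetime, '/');
```

Also note that if your sessions live in a shared save_path (common on shared hosts, and on Debian/Ubuntu where a cron job cleans the default directory), another application's or the distro's garbage collection can still delete them sooner; pointing session.save_path at a directory of your own avoids that.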
As the size of a session grows, you'll run into various quirks, though. I'm not sure about current versions, but PHP 5.1.x loaded the whole session into memory at session_start(). With a 20 MB session and 50 concurrent users, your scripts become severely limited by disk access speeds (a.k.a. "script startup is slow as molasses"): the sessions alone are hogging a GB of RAM, and you definitely don't want your server to start swapping. In the end, we dedicated a box to keeping as many sessions as possible in its RAM, and the frontend boxes accessed them over NFS (it helped in our case, but this may be overkill for you).
Note that with many concurrent users and session storage on disk, the number of session temporary files may run into filesystem limits (e.g. how many files can sit in one directory) or other limits (we once found out the hard way that a box was configured to allow only 4096 open files at a time). None of this is really session-specific, but it can be triggered by session handling.
Related
In my production environment I'm observing a sporadic issue where pages take a long time to load. In the error logs we see:
PHP Fatal error: Maximum execution time of 30 seconds exceeded
The affected line is where a session is being created for the user.
The directories are physical. There are over 3.5 million files in the directory. Garbage collection for sessions is set to 31 days in PHP.
The issue is sporadic, so I can't trigger it on demand, but the behavior is consistent: it is always the session start that takes over 30 seconds to execute. The lines before it run fine. Listing the contents of the sessions directory (ls /var/www/sessions/) takes over 45 seconds from the command line alone. Application monitoring would be good, but this seems to be an issue at the system level.
I've looked at the CloudWatch metrics but don't see a bottleneck in the disk reads there.
Could anyone advise on what issues we might be running into and how to resolve them?
PHP uses session.gc_probability to sporadically clean up the sessions folder. Make sure to set it to 0 in production so your API/page calls don't hang.
I suggest checking the session.gc_maxlifetime value; it will give you some idea of how long the files are kept.
You can call session_gc() to force cleanup manually (and probably reproduce your issue); see https://www.php.net/manual/en/function.session-gc.php for details. If it hangs for too long when run from the command line, you might consider deleting the entire session folder instead (WARNING: this will kill all users' sessions).
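A minimal CLI script you could run from cron, so web requests never pay the garbage-collection cost; the save path and lifetime below are assumptions that should match your setup:

```php
<?php
// Sketch of a cron-driven cleanup (run e.g. hourly via crontab),
// instead of probabilistic cleanup inside web requests.
// Path and lifetime are placeholders -- match your configuration.
ini_set('session.save_path', '/var/www/sessions');
ini_set('session.gc_maxlifetime', 31 * 24 * 60 * 60); // 31 days

session_start();          // session_gc() needs an active session
$deleted = session_gc();  // returns number of purged sessions (PHP 7.1+)
session_write_close();

echo "Purged $deleted stale session files\n";
```

With something like `0 * * * * php /path/to/session-cleanup.php` in crontab, you can keep session.gc_probability at 0 for the web-facing PHP.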
Note that some distros/packages automatically install a session garbage-collection cron job. I had issues a while ago with too many files in the folder, and the cron job simply hung (more details: https://serverfault.com/questions/511609/why-does-debian-clean-php-sessions-with-a-cron-job-instead-of-using-phps-built).
As a long-term solution, I would suggest moving away from file-based sessions and using Redis to handle sessions, especially on AWS, where disk performance is not the best. I'm not sure what framework you use (most modern ones have built-in support for this), but you can also find framework-less solutions online.
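With the phpredis extension installed, switching storage is mostly configuration; the endpoint below is a placeholder for your own Redis/ElastiCache host:

```php
<?php
// Sketch: file-less sessions via the phpredis extension.
// Hostname/port are assumptions -- use your ElastiCache endpoint.
ini_set('session.save_handler', 'redis');
ini_set('session.save_path', 'tcp://my-redis.example.com:6379');

session_start();
$_SESSION['user_id'] = 42; // stored as a Redis key with a TTL
```

Because Redis expires each session key via its TTL (derived from session.gc_maxlifetime), there is no directory scan at all, which is exactly the cost you're paying now.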
For the admin panel of our CMS, it turns out some customers want a lot longer than approximately 30 minutes to save data in the administration panel. They get distracted or get a phone call... and then, instead of the data being saved as they expect, they have to log in again and lose their changes.
I have tried to keep the session alive with an AJAX call (see: "Javascript does not call file").
That seemed to work at first, but ultimately does not have the desired effect: if I leave the browser alone, the session dies after 1-2 hours. (I save a timestamp to a text file every 5 minutes, so I can see exactly when the session stopped being alive.)
I have been reading a lot about this issue; apparently the garbage collector kills off sessions anyway if they are around longer than the session.gc_maxlifetime set in php.ini.
My idea is now to set session.gc_maxlifetime much higher in php.ini, and then set it lower in a PHP config file for the clients who do not need the long lifetime. For the frontend, too, I don't want the session to stay alive for hours. That way I turn it around and control the sessions that are not supposed to last longer than the default.
Would this be good practice? Will this create undesired effects?
Any advice on the path to take or possible other solutions?
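One gotcha with the per-client override approach: PHP's file-based garbage collector deletes every file in session.save_path that is older than the current request's gc_maxlifetime. So if the admin panel and the frontend share one session directory, whichever request happens to trigger GC applies its own (possibly shorter) lifetime to all sessions, which is likely why your keep-alive didn't help. Giving each part its own save_path avoids this; a sketch for the admin panel's bootstrap (paths and lifetimes are assumptions):

```php
<?php
// Sketch for the admin panel's bootstrap. The crucial part is the
// separate save_path: without it, a frontend request running GC with
// a short gc_maxlifetime would delete long-lived admin sessions too.
ini_set('session.save_path', '/var/www/sessions-admin');
ini_set('session.gc_maxlifetime', 8 * 60 * 60); // 8 hours for admins
session_set_cookie_params(8 * 60 * 60);         // cookie must match
session_start();
```

The frontend bootstrap would keep its own directory and the short default lifetime, so each pool of sessions is garbage-collected by its own rules.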
My project is in Zend Framework and I want to increase the user inactivity timeout to 1-2 weeks. The cookies in my browser are set correctly, but the session logs the user out after 8 hours, as I have set session.gc_maxlifetime to 28800. So I just wanted to confirm before moving forward: would it increase the server load if I increased PHP's session.gc_maxlifetime to 1-2 weeks?
By default, session data is stored as serialized data in plain-text files on the hard disk of your server. A higher session timeout therefore means more session files in the folder. You won't experience any significant increase in server load, but depending on the number of sessions you might hit filesystem limitations (scan time increases with the number of files in a folder).
Alternative session storage, such as MySQL, might be a solution.
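For illustration, a compact MySQL-backed handler built on PHP's SessionHandlerInterface (assumes PHP 8 and a hypothetical table `sessions (id VARCHAR(128) PRIMARY KEY, data BLOB, updated_at INT)`; this is a sketch, not production code, as it omits locking):

```php
<?php
// Minimal sketch of MySQL-backed sessions via SessionHandlerInterface.
class PdoSessionHandler implements SessionHandlerInterface
{
    public function __construct(private PDO $pdo) {}

    public function open(string $path, string $name): bool { return true; }
    public function close(): bool { return true; }

    public function read(string $id): string|false
    {
        $stmt = $this->pdo->prepare('SELECT data FROM sessions WHERE id = ?');
        $stmt->execute([$id]);
        return (string) $stmt->fetchColumn(); // '' when no row exists yet
    }

    public function write(string $id, string $data): bool
    {
        $stmt = $this->pdo->prepare(
            'REPLACE INTO sessions (id, data, updated_at) VALUES (?, ?, ?)');
        return $stmt->execute([$id, $data, time()]);
    }

    public function destroy(string $id): bool
    {
        return $this->pdo->prepare('DELETE FROM sessions WHERE id = ?')
                         ->execute([$id]);
    }

    public function gc(int $max_lifetime): int|false
    {
        // One indexed DELETE instead of scanning millions of files.
        $stmt = $this->pdo->prepare('DELETE FROM sessions WHERE updated_at < ?');
        $stmt->execute([time() - $max_lifetime]);
        return $stmt->rowCount();
    }
}

// Usage: register before session_start().
// session_set_save_handler(new PdoSessionHandler($pdo), true);
// session_start();
```

The practical win for long lifetimes is the gc() method: expiry becomes a single indexed DELETE rather than a scan of an ever-growing directory.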
Sessions are maintained on the server for each user. Increasing the session timeout prevents the server from releasing the storage allocated to inactive sessions.
So yes, if you have a lot of users and you keep their sessions for weeks, you will run into performance issues.
Yes, I've read "session_start seems to be very slow (but only sometimes)", but my problem is slightly different.
We have a PHP application that stores very simple sessions in memcached (ElastiCache, to be specific), and we have been monitoring our slowest-performing page loads. Almost all of the slow ones spend the majority of their time in Zend_Session::start, and we can't figure out why. It's a very AJAX-y front end, moving more and more toward a single-page app, making a number of simultaneous requests to the backend per page load, and some of the requests take three to four times as long as they should, based solely on this.
It's not every request, obviously, but enough of them that we're concerned. Has anyone else seen this behavior? We were under the impression that memcache is not blocking (how could it be?), so the very worst would be a user has a bum session, but multiple-second wait times in session_start seems inexplicable.
Take a look at your session garbage-collection mechanism (play with session.gc_probability and session.gc_divisor).
If GC is slowing down the app, consider cleaning up old sessions yourself (e.g. set gc_probability to 0 and run a GC script via cron).
Other possibilities:
the session is locked (PHP holds an exclusive lock on the session from session_start() until the script ends, so concurrent requests for the same session wait their turn)
too much data is being saved in the session
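The locking case fits an AJAX-heavy front end well: parallel requests from the same user serialize behind the session lock, and note that the memcached session handler does lock by default (see the memcached.sess_locking ini setting), so "memcache is not blocking" doesn't hold for sessions. A sketch of the standard mitigation:

```php
<?php
// Sketch: release the session lock as soon as you are done writing,
// so parallel AJAX requests from the same user stop queueing.
session_start();
$userId = $_SESSION['user_id'] ?? null;
$_SESSION['last_seen'] = time();
session_write_close(); // lock released here, not at script end

// ...any slower work below no longer blocks the user's other
// concurrent requests (but $_SESSION writes are no longer saved)...
```

If a request only reads the session, session_start(['read_and_close' => true]) achieves the same thing in one call.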
The first page I load from my site after not visiting it for 20+ mins is very slow. Subsequent page loads are 10-20x faster. What are the common causes of this symptom? Could my server be sleeping or something when it's not receiving http requests?
I will answer this question generally because I'm sure it's something that confuses a lot of newcomers.
The really short answer is: caching.
Just about every program in your computer uses some form of caching to remember data that has already been loaded/processed recently, so it doesn't have to do the work again.
The size of the cache is invariably limited, so stuff has to be thrown out. And 99% of the time the main criterion for expiring cache entries is: how long ago was this entry last used?
Your operating system caches file data that is read from disk
PHP keeps compiled scripts in memory (the opcode cache), so it doesn't have to re-parse them on every request
The CPU caches memory in its own special faster memory (although this may be less obvious to most users)
And some things that are not actually a cache, work in the same way as cache:
virtual memory, a.k.a. swap. When there is not enough memory available for certain programs, the operating system has to make room for them by moving chunks of memory onto disk. On more recent operating systems, the OS will do this just so it can make the disk cache bigger.
Some web servers run multiple copies of themselves and share the workload of requests between them. The copies individually cache things too, depending on the setup. When the workload is low enough, the server can terminate some of these processes to free up memory and be nice to the rest of the computer. Later on, if the workload increases, new processes have to be started and their memory loaded with various data.
It's probably not sleeping; it just hasn't been visited for a while and has released its resources. It takes time to get started again.
If the site is visited frequently by many users, it should respond quickly every time.
It sounds like it could be caching. Is the server running on the same machine as your browser? If not, what's the network configuration (same LAN, etc...)?