A basic page with just session_start(); loads just fine, but once I've set something, for example $_SESSION['pet'] = "dog";, the page load time is around 5 seconds.
I'm using AWS's memcached server, and the connection time to it from the EC2 instance is really fast. I'm not sure where the slowdown is coming from.
The session.save_handler is set to memcached and session.save_path is set to xxx.cfg.use1.cache.amazonaws.com:11211
phpinfo() also displays the Registered save handlers as files user memcache memcached.
EDIT :
I uploaded test files to demonstrate the issue. The first file is simply session_start(); print_r($_SESSION); (http://rr915webapi.us-east-1.elasticbeanstalk.com/session.php). The second file is session_start(); $_SESSION['pet']="dog"; $_SESSION['name']="bob"; (http://rr915webapi.us-east-1.elasticbeanstalk.com/session-set.php). After you load the second file, you can see the first takes a while longer to load than it initially did.
By setting the following in the PHP ini file, the response time was reduced to milliseconds.
session.lazy_write = 0
memcached.sess_locking = Off
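If you cannot edit the ini file, both directives appear to be changeable at runtime as well; an untested sketch (verify the values took effect with phpinfo() on your build):
<?php
// Untested sketch: apply the same two settings before the session starts.
ini_set('session.lazy_write', '0');     // write the session even when unchanged
ini_set('memcached.sess_locking', '0'); // disable memcached session locking
session_start();
?>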
Some possibilities:
If the PHP server running your code and your memcached server (cfg.use1.cache.amazonaws.com) are hosted in different regions, that alone can explain the delay.
There seems to be a bug in libmemcached 1.0.16; updating to 1.0.18 fixes the problem. See the comments at https://github.com/iuscommunity/wishlist/issues/143 and https://bugs.launchpad.net/libmemcached/+bug/1589344.
Related
We have an uploader which takes customer data, maps the data to our database structure and then uploads to the database. As part of the uploader, we load all the uploaded data into memory.
The issue we are having is that "large" datasets seem to cause session issues (the current failing example is around 800kb). In the code base there is a check on every page load for the contents of the session, to ensure the user is logged in, is valid, etc. When performing this upload and things go awry, that check fails upon form submission. Through troubleshooting we discovered that when the issue occurs the session is emptied, hence the check fails and the user is logged out.
The code works fine with the 800kb dataset in development and also works fine on live with around 100kb dataset. This points to the code being OK and the issue being "environmental".
Session code is:
// If the session hasn't been started yet.
if (session_status() == PHP_SESSION_NONE)
{
// Start the session.
session_start();
}
and also then in the file itself:
$_SESSION['csv_data'] = array_map('str_getcsv', file($file_path));
and finally MUCH later (doesn't get here):
// Remove the session data.
unset($_SESSION['csv_data']);
We tried raising memory_limit in PHP to 1GB and even 10GB to no avail; likewise we upped upload_max_filesize and post_max_size to 512MB, and the same thing happens (we restarted both httpd and PHP-FPM).
The server is quad-core, 32GB RAM with 4 x 256GB SSD in RAID 10. At no point does free memory drop below 28GB, and both uptime and top show low usage, so it doesn't seem to be a lack of traditional resources causing the issue. It is running CentOS 6.9 64-bit and Apache 2.4.9 with PHP 5.6.33.
Is there anything else on the server which could be setting a limit to the (PHP) session/memory size? What else can we try to figure out what might be causing this?
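One quick diagnostic (a sketch, not part of the original code) is to log the size of the serialized session right after the CSV is loaded, since session_encode() reflects what the save handler actually has to store:
<?php
// Diagnostic sketch: measure the serialized session payload.
$_SESSION['csv_data'] = array_map('str_getcsv', file($file_path));
error_log('Session payload: ' . strlen(session_encode()) . ' bytes');
?>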
I know this problem has been presented here on SO and I've tried the solutions, but it's still not fixed.
PHP is deleting the session after some period of inactivity (I assume 24 minutes, as that's the default and it seems to fit my testing).
I have the following code set in all the pages:
ini_set('display_errors', 0);
$sessionCookieExpireTime = 2880000;
session_set_cookie_params($sessionCookieExpireTime);
ini_set('session.gc_maxlifetime', $sessionCookieExpireTime);
session_start();
echo ini_get('session.gc_maxlifetime'); //echos 2880000 as expected
But the session still gets reset after 24 minutes (or so) of inactivity.
Any idea why this isn't working? (PHP 5.3.10)
Thanks
Although Marc B's answer shares some great insight, it wasn't working for me. I was pretty sure everything was fine with my script and that I had nothing messing with the session in my code.
After an epic struggle I discovered that my problem was actually due to shared hosting environment. From the PHP doc:
"If different scripts … share the same place for storing the session data then the script with the minimum value will [determine the session timeout]."
After this the problem was quite obvious. Some script (hosted on the same server) was using the default php.ini session.gc_maxlifetime and that was resetting my sessions.
The solution was to create a folder under the root of my hosting (make sure it's not web accessible), set the right permissions on it, and then use session.save_path to tell PHP where to store my sessions. Something like:
ini_set("session.gc_maxlifetime","21600"); // 6 hours
ini_set("session.save_path", "/your_home/your_sessions/");
session_start();
This website provided great insight: php sessions on shared hosting
So if you come across this issue, make sure you follow Marc B's recommendations, and if that doesn't work, try this out.
Best wishes!!
Are you running this code in EVERY script that uses sessions? ini_set() changes apply ONLY to the script they're executed in, and ONLY for the execution lifetime of that particular script.
If you want to make it a permanent global change, you'll have to modify php.ini, or put some php_value directives into httpd.conf/.htaccess.
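For example, assuming Apache with mod_php and AllowOverride enabled, the equivalent .htaccess directives would look like this (values illustrative):
php_value session.gc_maxlifetime 2880000
php_value session.cookie_lifetime 2880000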
I have a backup script which backs up all files for a website to a zip file (using a script similar to the answer to this question). However, for large sites the script times out before it can complete.
Is there any way I can extend the length of time available for the script to run? The websites run on shared Windows servers, so I don't have access to the php.ini file.
If you are in a shared server environment, and you don’t have access to the php.ini file, or you want to set php parameters on a per-site basis, you can use the .htaccess file (when running on an Apache webserver).
For instance, in order to change the max_execution_time value, all you need to do is edit .htaccess (located in the root of your website, usually accessible by FTP) and add this line:
php_value max_execution_time 300
where 300 is the number of seconds you wish to set the maximum execution time for a php script.
There is also another way: use the ini_set() function in the PHP file itself.
E.g., to set the execution time to 5 minutes, you can use:
ini_set('max_execution_time', 300); // 300 seconds = 5 minutes
Please let me know if you need any more clarification.
set_time_limit() comes to mind, but it may still be limited by php.ini settings:
set_time_limit(0);
http://php.net/manual/en/function.set-time-limit.php
Simply put; don't make a HTTP request to start the PHP script. The boundaries you're experiencing are set because you're using a HTTP request, which means you can have a time-out. A better solution would be to implement this using a "cronjob", or what Microsoft calls "Scheduled tasks". Most hosting providers will allow you to run such a task at set times. By calling the script from command line, you don't have to worry about the time-outs any more, but you're still at risk of running into memory issues.
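For example, a crontab entry along these lines (paths purely illustrative) runs the backup nightly from the command line, where the HTTP time-out does not apply:
# Illustrative crontab entry: run the backup script at 02:00 every night.
0 2 * * * /usr/bin/php /path/to/backup.php >> /var/log/backup.log 2>&1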
If you have a decent hosting provider though, why doesn't it provide daily backups to start with? :)
You can use the following in the start of your script:
<?php
if (!ini_get('safe_mode')) {
    set_time_limit(0); // 0 = no time limit, so the script can run indefinitely
}
?>
And at the end of the script, use the flush() function to tell PHP to send out what it has generated.
Hope this solves your problem.
Is the script giving the "Maximum execution time of xx seconds exceeded" error message, or is it displaying a blank page? If the latter, ignore_user_abort() might be what you're looking for. It tells PHP not to stop script execution if communication with the browser is lost, which may protect you from other timeout mechanisms involved in the communication.
Basically, I would do this at the beginning of your script:
set_time_limit(0);
ignore_user_abort(true);
This said, as advised by Berry Langerak, you shouldn't be using an HTTP call to run your backup. A cronjob is what you should be using. Along with a set_time_limit(0), it can run forever.
In shared hosting environments where a change to the max_execution_time directive might be disallowed, and where you probably don't have access to any kind of command line, I'm afraid there is no simple (and clean) solution to your problem; the simplest option is very often to use the backup solution provided by the host, if any.
Try the function:
set_time_limit(300);
On Windows, there is a slight possibility that your webhost allows you to override settings by uploading a php.ini file to the root directory of your webserver. If so, upload a php.ini file containing:
max_execution_time = 300
To check if the settings work, do a phpinfo() and check the Local Value for max_execution_time.
Option 1: Ask the hosting company to place the backups somewhere accessible by PHP, so the PHP file can redirect to the backup.
Option 2: Split the backup script into multiple parts; perhaps use some AJAX to call the script a few times in a row, give the user a nice progress bar, and combine the results of the script calls into a zip with PHP, offered as a download (a rough sketch follows).
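A rough sketch of option 2 (the helper and paths are hypothetical): each request adds a slice of the file list to the archive, and the client keeps calling the script until it reports that it is done.
<?php
// Hypothetical sketch: archive 50 files per request to stay under the time limit.
$files  = get_site_files(); // hypothetical helper returning all file paths
$offset = isset($_GET['offset']) ? (int) $_GET['offset'] : 0;
$slice  = array_slice($files, $offset, 50);

$zip = new ZipArchive();
// CREATE opens the existing archive on later calls; the first call could
// pass ZipArchive::OVERWRITE to start fresh (omitted for brevity).
$zip->open('/tmp/backup.zip', ZipArchive::CREATE);
foreach ($slice as $file) {
    $zip->addFile($file);
}
$zip->close();

// Tell the caller whether to request the next chunk.
echo json_encode(array(
    'next' => $offset + 50,
    'done' => $offset + 50 >= count($files),
));
?>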
For the past few hours, our server has hung every time we do a session_start().
For testing purposes I created a script that looks like this:
<?php
session_start();
?>
Calling it from the console hangs, and it can't even be stopped with Ctrl-C; only kill -9 works. The same happens when calling it via Apache. /var/lib/php/session/ stays empty, but permissions are absolutely fine: www can write and also has read permissions on all parent folders.
According to the admins, no changes were made on the server, and there is no special code registered for sessions. The server is CentOS 4 or 5, and yesterday everything was working perfectly. We rebooted the server and updated PHP, but nothing changed.
I've run out of ideas; any suggestions?
UPDATE
We solved this problem by moving the project to another server, so while the problem still exists on one server there is no immediate need for a solution anymore.
I will keep the question open in case someone has an idea for others having a similar problem in the future, though.
There are many reasons for that, here are a few of them:
A. The session file could be opened exclusively.
When the file lock is not released properly for whatever reason, it causes session_start() to hang indefinitely on any future script executions.
Workaround: use session_set_save_handler() and make sure the write function uses fopen($file, 'w') instead of fopen($file, 'x'). A minimal sketch follows.
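A minimal sketch of such a handler (PHP 5.4+; directory handling simplified, treat this as illustrative rather than production code):
<?php
// Illustrative file-based handler whose write() uses 'w' rather than 'x'.
class NonExclusiveSessionHandler implements SessionHandlerInterface
{
    private $path;

    public function open($savePath, $name)
    {
        $this->path = $savePath !== '' ? $savePath : sys_get_temp_dir();
        return true;
    }

    public function close()
    {
        return true;
    }

    public function read($id)
    {
        $file = $this->path . '/sess_' . $id;
        return is_file($file) ? (string) file_get_contents($file) : '';
    }

    public function write($id, $data)
    {
        // 'w' truncates or creates the file; 'x' fails when the file
        // already exists, which is what the workaround avoids.
        $fp = fopen($this->path . '/sess_' . $id, 'w');
        if ($fp === false) {
            return false;
        }
        fwrite($fp, $data);
        fclose($fp);
        return true;
    }

    public function destroy($id)
    {
        $file = $this->path . '/sess_' . $id;
        return !is_file($file) || unlink($file);
    }

    public function gc($maxlifetime)
    {
        foreach (glob($this->path . '/sess_*') ?: array() as $file) {
            if (filemtime($file) + $maxlifetime < time()) {
                unlink($file);
            }
        }
        return true;
    }
}

session_set_save_handler(new NonExclusiveSessionHandler(), true);
session_start();
?>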
B. Never point the session entropy file at /dev/random (as below, whether in php.ini or via ini_set()); /dev/random blocks when the kernel's entropy pool is empty, and this will cause your session_start() to hang:
<?php
ini_set("session.entropy_file", "/dev/random");
ini_set("session.entropy_length", "512");
?>
C. session_start() needs a directory to write to.
You can get Apache plus PHP running in a normal user account. Apache will then of course have to listen on a port other than 80 (for instance, 8080).
Be sure to do the following things:
- create a temporary directory PREFIX/tmp
- put php.ini in PREFIX/lib
- edit php.ini and set session.save_path to the directory you just created
Otherwise, your scripts will seem to 'hang' on session_start().
If this helps:
In my scenario, session_start() was hanging at the same time I was using the XDebug debugger within PHPStorm, the IDE, on Windows. I found that there was a clear cause: Whenever I killed the debug session from within PHPStorm, the next time I tried to run a debug session, session_start() would hang.
The solution, if this is your scenario, is to make sure to restart Apache every time you kill an XDebug session within your IDE.
I had a weird issue with this myself.
I am using CentOS 5.5x64, PHP 5.2.10-1. A clean ANSI file in the root with nothing other than session_start() was hanging. The session was being written to disk and no errors were being thrown. It just hung.
I tried everything suggested by Thariama, and checked PHP compile settings etc.
My Fix:
yum reinstall php; /etc/init.d/httpd restart
Hope this helps someone.
To everyone complaining about the 30 seconds of downtime being unacceptable, this was an inexplicable issue on a brand new, clean OS install, NOT a running production machine. This solution should NOT be used in a production environment.
OK, I faced the same problem on two PCs: a Mac mini with XAMPP and a Windows 10 machine with XAMPP.
On both, PHP took forever to run session_start(). Both run PHP 7.x.
I found that the session file was locked for reading and writing, so I added code to make PHP read the session file and release the lock immediately when done:
<?php
session_start([
'read_and_close' => true,
]);
?>
or
<?php
//For PHP 5.x
session_start();
session_write_close();
?>
After this, PHP unlocks the session file and the problem is solved.
The problem: -
I've experienced (and fixed) the problem where file-based sessions hang the request, and database-based sessions get out of sync by storing out-of-date session data (like storing each session save in the wrong order).
This is caused by any subsequent request that loads a session (simultaneous requests): AJAX, a video embed where the video file is delivered via a PHP script, a dynamic resource file (like a script or CSS) delivered via a PHP script, etc.
With file-based sessions, file locking prevents session writing, causing a deadlock between the simultaneous request threads.
With database-based sessions, the last request thread to complete becomes the most recent save; so, for example, a video-delivery script will complete long after the page request and overwrite the since-updated session with old session data.
The fix: -
If your AJAX or resource-delivery script doesn't need to use sessions, then it's easiest to just remove session usage from it.
Otherwise you'd best make yourself a coffee and do the following: -
Write or employ a session handler (if not already doing so) as per http://www.php.net//manual/en/class.sessionhandler.php (many other examples available via google search).
In your session handler function write() prepend the code ...
// processes may declare their session as read only ...
if (!empty($_SESSION['no_session_write'])) {
    unset($_SESSION['no_session_write']);
    return true;
}
In your ajax or resource delivery php script add the code (after the session is started) ...
$_SESSION['no_session_write'] = true;
I realise this seems like a lot of stuffing around for what should be a tiny fix, but unfortunately if you need to have simultaneous requests each loading a session then it is required.
NOTE: if your AJAX or resource-delivery script does actually need to write/save data, then you need to do it somewhere other than the session, like a database.
Just put session_write_close(); before session_start();
as below:
<?php
session_write_close();
session_start();
.....
?>
I don't know why, but changing this value in /etc/php/7.4/apache2/php.ini worked for me:
;session.save_path = "/var/lib/php/sessions"
session.save_path = "/tmp"
To throw another answer into the mix for those going bananas: I had session_start() dying only in particular cases and scripts. The reason was ultimately that I was storing a lot of data in the session after a particularly intensive script, and the call to session_start() was exhausting the memory_limit setting in php.ini.
After increasing memory_limit, those session_start() calls no longer killed my script.
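For reference, the directive lives in php.ini; the value below is only an example, pick one that fits your session data:
memory_limit = 256M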
For me, the problem seemed to originate from SELinux. The needed command was chcon -R -t httpd_sys_content_t [www directory] to give access to the right directory.
See https://askubuntu.com/questions/451922/apache-access-denied-because-search-permissions-are-missing
If you use pgAdmin 4 this can happen as well.
If you have File > Preferences > SQL Editor > Options > "Auto Commit" disabled, and you just ran a query using the query tool but didn't manually commit, then session_start() will freeze.
Enable auto commit, or manually commit, or just close pgAdmin, and it will no longer freeze.
In my case, it seems it was the NFS share that was locking the session. After restarting the NFS server and enabling only one web-client node, the sessions worked normally.
Yet another few cents that might help someone. In my case I was storing complex data in $_SESSION, with several different class objects inside, and session_start() couldn't handle the unserialization because not every class was loaded at session_start(). The solution in my case was to serialize/jsonify the data before saving it into $_SESSION and to reverse the process after getting the data out of the session.
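A sketch of that approach (the class and key names are made up for illustration): convert the objects to plain data before storing, and rebuild them after reading, so session_start() only ever unserializes scalars and arrays.
<?php
// Illustrative only: Filter, toArray() and fromArray() are hypothetical.
$_SESSION['filter'] = json_encode($filter->toArray()); // store plain JSON

// ... on a later request, after session_start() ...
$filter = Filter::fromArray(json_decode($_SESSION['filter'], true));
?>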
I have a page listing a few records from the database.
After upgrading to PHP 5.3, the page printing a long list of records is not displayed; Explorer says "Connection was reset".
I changed the SQL query in the code to limit the records, and then the page was shown correctly.
So it seems some kind of timeout is set.
I've tried to find settings in php.ini and httpd.conf; I changed everything that sounded related to timeouts, but nothing happened.
Any idea how to make it work?
EDIT
The page resets after ~2 secs, so no extremely long time is involved...
EDIT-2
I've tried setting the PHP variables max_execution_time, max_input_time, and memory_limit.
WAMPServer 2 (PHP 5.3, Apache 2.2.11)
At the top of your .php file, insert something like:
set_time_limit(120);
That sets the timeout for the script to 2 minutes. Increase it as you need.
I would recommend avoiding this problem by paginating your results, otherwise you're opening yourself up to a world of trouble. Slow pages are an open door for a denial of service attack by resource exhaustion.
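A minimal pagination sketch (the table and column names are illustrative, and $pdo is assumed to be an existing PDO connection):
<?php
// Fetch one page of records instead of the whole table.
$perPage = 50;
$page    = isset($_GET['page']) ? max(1, (int) $_GET['page']) : 1;
$offset  = ($page - 1) * $perPage;

$stmt = $pdo->prepare('SELECT id, name FROM records ORDER BY id LIMIT :limit OFFSET :offset');
$stmt->bindValue(':limit', $perPage, PDO::PARAM_INT);
$stmt->bindValue(':offset', $offset, PDO::PARAM_INT);
$stmt->execute();
$rows = $stmt->fetchAll(PDO::FETCH_ASSOC);
?>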