The situation is as follows:
Apache2 web server
PHP version 5.5.38 (module; an upgrade is not possible at the moment)
Test environment reachable at http://www.test-environment.com
Production environment reachable at https://www.productive-environment.com
(Fictional domains)
Both systems are located on the same machine:
/www/htdocs/myuser/test (Webroot of test environment, reachable over http)
/www/htdocs/myuser/productive (Webroot of productive environment, forced HTTPS)
I ran the following code on both systems, but they behave differently:
session_write_close_test.php
<?php
session_start();
// Some logic here, but it doesn't matter in this test case...
session_write_close();

set_time_limit(30);
$counter = 30;
while ($counter > 0) {
    sleep(1);
    $counter -= 1;
}
When I call the script on my test environment (HTTP only) via http://www.test-environment.com/session_write_close_test.php and open http://www.test-environment.com in another tab within those 30 seconds, everything works fine. No blocking. session_write_close() seems to work.
When I call https://www.productive-environment.com/session_write_close_test.php and open https://www.productive-environment.com in another tab within those 30 seconds: nothing happens until the test script ends. session_write_close() doesn't seem to work.
Any explanation for this behavior? Both systems use the same configuration. I already tried changing the session save path to rule out file write permission issues, sadly with no effect.
Thanks in advance
Related
I'm currently writing a PHP script which fetches a CSV file from a remote server, processes the data, then writes it to the local MySQL database. Because there is so much data to process and insert (50,000 lines), the script takes longer than 60 seconds to run. The problem is that the script times out after 60 seconds.
To make sure it's not a MySQL issue, I created another script that enters an infinite loop, and it too times out at 60 seconds.
I have tried increasing/changing the following settings on the Ubuntu server but it hasn't helped:
max_execution_time
max_input_time
mysql.connect_timeout
default_socket_timeout
the TimeOut value in the apache2.conf file.
Could it possibly be an issue because I'm accessing the PHP file from a web browser? Do web browsers have timeout limits?
Any help would be appreciated.
The simplest and least intrusive way to get over this limit is to add this line to your script:
ini_set('max_execution_time', -1);
That way you are only amending the execution time for this script, and not for all PHP scripts, which would be the case if you amended either of the two php.ini files.
When you were trying to amend the php.ini file, I would guess you were amending the wrong one; there are two: one used only by the PHP CLI and one used by PHP running with Apache.
For future reference, to find the actual file used by PHP under Apache, just run:
<?php
phpinfo();
?>
and look for "Loaded Configuration File".
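A hedged alternative, assuming PHP 5.2.4+ (which provides php_ini_loaded_file()), is to print the path directly:
<?php
// Prints the full path of the loaded php.ini, or false if none is loaded.
echo php_ini_loaded_file();
?>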
I finally worked out the reason the request times out: the problem lies with the virtual server hosting.
The request from the web browser is sent to the hosting server, which then directs the request to the virtual server (which acts like a separate server). Because the hosting server doesn't get a response back from the virtual server within 60 seconds, it times out and sends a response back to the web browser saying exactly that. Meanwhile, the virtual server is still processing the script.
When the virtual server finally finishes processing the script, it is too late as the hosting server has already returned a timeout error to the front-end user.
Because the hosting server is used to host many virtual servers (for multiple different users), it is generally not possible to change the timeout settings on this server.
So, final verdict: The timeout error cannot be avoided with virtual hosting. If this is a serious issue, you may need to look into getting dedicated server hosting.
Michael,
Your problem most likely comes from the PHP side and not from the web browser accessing it.
Did you try putting the following lines at the beginning of your PHP file?
set_time_limit(0);
ini_set('max_execution_time', 0);
PHP has two configuration files, one for Apache and one for the CLI, which explains why you don't hit the timeout when running the script from the command line. The phpinfo() output you gave me shows a max_execution_time of 6000.
See the set_time_limit() documentation.
For CentOS 8, the settings below worked for me:
sed -i 's/default_socket_timeout = 60/default_socket_timeout = 6000/g' /etc/php.ini
sed -i 's/max_input_time = 60/max_input_time = 30000/g' /etc/php.ini
sed -i 's/max_execution_time = 30/max_execution_time = 60000/g' /etc/php.ini
echo "Timeout 6000" >> /etc/httpd/conf/httpd.conf
Restarting Apache the usual way isn't good enough anymore; you now have to restart php-fpm as well:
systemctl restart httpd php-fpm
Synopsis:
If the script (PHP function) takes 61 seconds or more, you will get a gateway timeout error. The "gateway" here refers to the PHP worker, meaning the worker timed out because that's how it was configured. It has nothing to do with networking.
php-fpm is a new service in CentOS 8. From what I gathered from the internet (I have not verified this myself), it basically keeps executables (workers) running in the background, waiting for you to hand them PHP scripts to execute. The time saving is that the executables are always running, so you suffer no start-up penalty.
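A quick, hedged way to confirm whether a request is actually being served by php-fpm rather than mod_php is to print the SAPI name:
<?php
// Under php-fpm this typically prints "fpm-fcgi";
// under mod_php it prints "apache2handler".
echo php_sapi_name();
?>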
I have a PHP application that has multiple "nested" include() calls. For some reason the application stops after 60 seconds. I'm using set_time_limit(0), and I have also tested this without the include in the file, and it runs forever. I'm not sure what the issue is.
Working:
set_time_limit(0);
while (1 < 2) {
    echo 'hello';
}
Not Working:
// MASTER FILE
set_time_limit(0);
while (1 < 2) {
    include('file.php');
}

// INCLUDED FILE 'file.php'
echo 'hello';
First, it's bad practice to write infinite loops, especially in response to a web request. In general you also want your web requests to respond as quickly as possible and have long-running processes run separately.
That said, assuming you're running PHP behind Apache, you'll want to adjust your Apache TimeOut config. It defaults to 60 seconds.
I'm working in PHP, creating a system with a lot of PHP-driven elements, and I have noticed that some of my pages stop displaying text produced using the echo command.
I have made a small example of this. Of course, my program is not supposed to just print all the numbers from 1 to 10000, but this example demonstrates how the script simply terminates without any warnings.
Example code:
<?php
for ($i = 1; $i <= 10000; $i++) {
    echo $i, '<br>';
}
?>
Output:
1
2
More numbers...
8975
8976
8977
8
What is causing this? Is it a buffer issue, and how do I resolve it?
The fact that your code ran to completion on the CLI suggests to me that your script is exceeding the max_execution_time runtime configuration.
Note in the linked documentation that the value of this setting on the CLI is 0, which means it does not time out in that environment.
The default setting when run via a web server is 30 seconds.
You can show your current setting in the browser with:
echo ini_get('max_execution_time');
And you should be able to increase it with:
ini_set('max_execution_time', 0); // turns off timeout
If the script you have shown us behaves as you describe, then there's something very wrong going on. If this is a Unix or Linux based system and it's repeatedly exhibiting this behaviour, then the kernel is terminating the script; unless it has been configured not to do so, the kernel will be forcing a core dump of the process.
Either go build a new system to run your code on, or Google how to capture and diagnose a core dump on your operating system.
Update
If xdebug is reporting that the process is still running, then it probably hasn't dumped its core, but "not producing output" != "not running". What state is the process in? What happens when you redirect the output? What is the end-to-end output channel when it misbehaves?
The problem did not lie directly with my PHP installation or the application itself, but somewhere in my IDE, PHPStorm. When running the code with the same PHP interpreter outside of the IDE's wrappers, it all works fine. The procedures described by the many users here helped with that. Thank you.
For the past few hours, our server has been hanging every time session_start() is called.
For testing purposes I created a script which looks like this:
<?php
session_start();
?>
Calling it from the console hangs, and it can't even be stopped with Ctrl-C; only kill -9 works. The same happens when calling it via Apache. /var/lib/php/session/ stays empty, but permissions are absolutely fine: www can write, and it also has read permissions on all parent folders.
According to the admins, no changes were made on the server, and there is no custom session handler registered. The server is CentOS 4 or 5, and yesterday everything was working perfectly. We rebooted the server and updated PHP, but nothing changed.
I've run out of ideas; any suggestions?
UPDATE
We solved this problem by moving the project to another server, so while the problem still exists on one server there is no immediate need for a solution anymore.
I will keep the question open in case someone has an idea for others having a similar problem in the future, though.
There are many possible reasons for this; here are a few of them:
A. The session file could be opened exclusively.
When the file lock is not released properly for whatever reason, it causes session_start() to hang indefinitely on any future script execution.
Workaround: use session_set_save_handler() and make sure the write function uses fopen($file, 'w') instead of fopen($file, 'x').
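A minimal sketch of such a handler, assuming PHP 5.4+ for SessionHandlerInterface; the class name and path handling are illustrative, not a drop-in production handler:
<?php
// Minimal file-based handler whose write() uses the non-exclusive
// 'w' mode instead of 'x'. Names and paths are illustrative only.
class NonExclusiveFileSessionHandler implements SessionHandlerInterface
{
    private $savePath;

    public function open($savePath, $sessionName)
    {
        $this->savePath = $savePath;
        if (!is_dir($this->savePath)) {
            mkdir($this->savePath, 0700, true);
        }
        return true;
    }

    public function close()
    {
        return true;
    }

    public function read($id)
    {
        $file = $this->savePath . '/sess_' . $id;
        return is_readable($file) ? (string) file_get_contents($file) : '';
    }

    public function write($id, $data)
    {
        // 'w' creates or truncates the file; unlike 'x' it does not
        // fail when the file already exists.
        $fp = fopen($this->savePath . '/sess_' . $id, 'w');
        if ($fp === false) {
            return false;
        }
        fwrite($fp, $data);
        fclose($fp);
        return true;
    }

    public function destroy($id)
    {
        $file = $this->savePath . '/sess_' . $id;
        if (file_exists($file)) {
            unlink($file);
        }
        return true;
    }

    public function gc($maxlifetime)
    {
        // Remove session files older than the configured lifetime.
        foreach (glob($this->savePath . '/sess_*') as $file) {
            if (filemtime($file) + $maxlifetime < time()) {
                unlink($file);
            }
        }
        return true;
    }
}

session_set_save_handler(new NonExclusiveFileSessionHandler(), true);
session_start();
?>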
B. Never point the session entropy file at /dev/random (whether in php.ini or via ini_set() as below); /dev/random blocks when the kernel's entropy pool runs dry, which will cause session_start() to hang:
<?php
ini_set("session.entropy_file", "/dev/random");
ini_set("session.entropy_length", "512");
?>
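If an entropy source is wanted at all, a non-blocking alternative is /dev/urandom (assuming a Unix-like system; note these settings were removed entirely in PHP 7.1):
<?php
// /dev/urandom never blocks, unlike /dev/random, which stalls
// when the kernel entropy pool runs dry.
ini_set("session.entropy_file", "/dev/urandom");
ini_set("session.entropy_length", "512");
?>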
C.
session_start() needs a directory to write to.
You can get Apache plus PHP running under a normal user account. Apache will then of course have to listen on a port other than 80 (for instance, 8080).
Be sure to do the following things:
- create a temporary directory PREFIX/tmp
- put php.ini in PREFIX/lib
- edit php.ini and set session.save_path to the directory you just created
Otherwise, your scripts will seem to 'hang' on session_start().
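As a hedged runtime alternative to editing php.ini, the save path can also be set per script before the session starts; the path below is a placeholder for the PREFIX/tmp directory created above:
<?php
// Placeholder path; substitute your actual PREFIX/tmp directory.
session_save_path('/home/myuser/PREFIX/tmp');
session_start();
?>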
If this helps:
In my scenario, session_start() was hanging while I was using the XDebug debugger within PHPStorm, the IDE, on Windows. I found that there was a clear pattern: whenever I killed the debug session from within PHPStorm, the next time I tried to run a debug session, session_start() would hang.
The solution, if this is your scenario, is to make sure to restart Apache every time you kill an XDebug session within your IDE.
I had a weird issue with this myself.
I am using CentOS 5.5 x64 with PHP 5.2.10-1. A clean ANSI file in the web root containing nothing other than session_start() was hanging. The session was being written to disk and no errors were being thrown. It just hung.
I tried everything suggested by Thariama, and checked PHP compile settings etc.
My Fix:
yum reinstall php; /etc/init.d/httpd restart
Hope this helps someone.
To everyone complaining about the 30 seconds of downtime being unacceptable, this was an inexplicable issue on a brand new, clean OS install, NOT a running production machine. This solution should NOT be used in a production environment.
OK, I faced the same problem on two PCs: one a Mac mini running XAMPP, the other Windows 10 running XAMPP.
On both, PHP took forever to run session_start(). Both were on PHP 7.x.
I found that the session file was locked for reading and writing, so I added code to make PHP read the session file and immediately release the lock when done:
<?php
session_start([
    'read_and_close' => true,
]);
?>
or
<?php
//For PHP 5.x
session_start();
session_write_close();
?>
After this, PHP releases the session file lock and the problem is solved.
The problem:
I've experienced (and fixed) the problem where file-based sessions hang the request, and database-based sessions get out of sync by storing out-of-date session data (like storing each session save in the wrong order).
This is caused by any subsequent request that loads a session (simultaneous requests): AJAX, a video embed where the video file is delivered via a PHP script, a dynamic resource file (like a script or CSS) delivered via a PHP script, etc.
With file-based sessions, file locking prevents session writing, causing a deadlock between the simultaneous request threads.
With database-based sessions, the last request thread to complete becomes the most recent save; so, for example, a video delivery script will complete long after the page request and overwrite the since-updated session with old session data.
The fix:
If your AJAX or resource delivery script doesn't need to use sessions, it is easiest to just remove session usage from it.
Otherwise, you'd best make yourself a coffee and do the following (a sketch tying it all together follows at the end):
Write or employ a session handler (if not already doing so), as per http://www.php.net//manual/en/class.sessionhandler.php (many other examples are available via a Google search).
In your session handler's write() function, prepend this code:
// processes may declare their session as read only ...
if (!empty($_SESSION['no_session_write'])) {
    unset($_SESSION['no_session_write']);
    return true;
}
In your AJAX or resource delivery PHP script, add this code (after the session is started):
$_SESSION['no_session_write'] = true;
I realise this seems like a lot of stuffing around for what should be a tiny fix, but unfortunately, if you need simultaneous requests that each load a session, it is required.
NOTE: if your AJAX or resource delivery script does actually need to write/save data, then you need to do it somewhere other than the session, like a database.
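A short sketch tying the pieces above together; the endpoint and the user_id session key are hypothetical, and it assumes the custom handler with the prepended write() check is registered:
<?php
// Hypothetical AJAX endpoint: it loads the session but flags it as
// read-only so the custom handler's write() above skips saving.
session_start();
$_SESSION['no_session_write'] = true;

header('Content-Type: application/json');
// 'user_id' is a hypothetical session key, for illustration only.
$userId = isset($_SESSION['user_id']) ? $_SESSION['user_id'] : null;
echo json_encode(array('user' => $userId));
?>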
Just put session_write_close(); before session_start(), as below:
<?php
session_write_close();
session_start();
// ...
?>
I don't know why, but changing this value in /etc/php/7.4/apache2/php.ini worked for me:
;session.save_path = "/var/lib/php/sessions"
session.save_path = "/tmp"
To throw another answer into the mix for those going bananas: I had session_start() dying only in particular cases and scripts. The reason my session was dying was ultimately that I was storing a lot of data in it after a particularly intensive script, and the call to session_start() was exhausting the memory_limit setting in php.ini.
After increasing memory_limit, those session_start() calls no longer killed my script.
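A hedged way to test whether memory is the culprit is to raise the limit for one script only; 256M below is an arbitrary test value, not a recommendation:
<?php
// Raise the limit for this script only; pick a value comfortably
// above your current memory_limit before starting the session.
ini_set('memory_limit', '256M');
session_start();
?>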
For me, the problem seemed to originate from SELinux. The needed command was chcon -R -t httpd_sys_content_t [www directory] to give access to the right directory.
See https://askubuntu.com/questions/451922/apache-access-denied-because-search-permissions-are-missing
If you use pgAdmin 4 this can happen as well.
If you have File > Preferences > SQL Editor > Options > "Auto Commit" disabled, and you just ran a query using the query tool but didn't manually commit, then session_start() will freeze.
Enable auto commit, or manually commit, or just close pgAdmin, and it will no longer freeze.
In my case it seemed to be the NFS share that was locking the session. After restarting the NFS server and enabling only one node of web clients, the sessions worked normally.
Yet another few cents that might help someone. In my case I was storing complex data with several different class objects in $_SESSION, and session_start() couldn't handle the unserialization because not every class was loaded at session_start(). The solution in my case was to serialize/JSON-encode the data before saving it into $_SESSION and to reverse the process after getting the data back out of the session.
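A minimal sketch of that approach; the ShoppingCart class and its toArray()/fromArray() helpers are hypothetical stand-ins for the complex objects described above:
<?php
session_start();

// Hypothetical stand-in for the complex classes described above.
class ShoppingCart
{
    public $items = array();

    public function toArray()
    {
        return array('items' => $this->items);
    }

    public static function fromArray(array $data)
    {
        $cart = new self();
        $cart->items = $data['items'];
        return $cart;
    }
}

$cart = new ShoppingCart();
$cart->items[] = 'book';

// Store a plain JSON string instead of the object graph ...
$_SESSION['cart'] = json_encode($cart->toArray());

// ... and rebuild the object after reading the data back, so
// unserialization at session_start() never sees the class.
$cart = ShoppingCart::fromArray(json_decode($_SESSION['cart'], true));
?>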
I've installed AjaXplorer (a very nice web file explorer), written in PHP, on my IIS (Windows Server 2008 SP2 x64). It works too slowly for me.
What could be the cause? Are there some settings in php.ini? Or maybe something is wrong with IIS?
I use 32-bit PHP with php-cgi.exe as the interpreter.
Regards,
First off, CGI will always be slow: it needs to boot the entire PHP runtime for each request. Try using FastCGI instead (it is available for both IIS 7 and IIS 6)...
After that, try to see why it's slow. Is it because the PHP script takes a long time to execute (meaning it's a code issue), or is it because of the server config? To test, add this at the start of the entry point of the PHP program (index.php):
define('START_TIME_CUSTOM', microtime(true));

function onEndTimeCompute() {
    $timeTaken = microtime(true) - START_TIME_CUSTOM;
    echo "Completed In: " . number_format($timeTaken, 4) . " Seconds\n";
}
register_shutdown_function('onEndTimeCompute');
That writes "Completed In: n Seconds" at the end of the generated output (even if die() is called). It may cause some issues if AJAX calls are expected to return JSON, so don't do it as a rule; it's just for figuring out what's going on.
So, if the total request takes 1 second, yet you see "Completed In: 0.004 Seconds", you know that the PHP code itself is not the issue (it's either the setup of the interpreter by CGI, or somewhere else in IIS)...
That should at least show you where the problem is...