I've just finished a CakePHP website which works like a charm locally. So, I put it online.
The problem is that, although pages are generated pretty quickly (around 400 ms), my browser shows the loading indicator for 5 long seconds.
My Firefox dev console shows that the browser is in the "receiving" state for EXACTLY 5 seconds, every time. The page is actually displayed well before that, but it seems that for some reason the connection stays open for precisely 5 seconds.
It's really annoying because, quite apart from the loading indicator that keeps spinning even though the page is fully loaded and usable, AJAX calls also take 5 seconds. So AJAX content can't be displayed before 5 seconds have passed, which is obviously unacceptable for users.
I've tested the problem on several computers, browsers, and internet connections. I've also tested a vanilla CakePHP on the same host, and encountered the same problem.
So, do you have any idea which host setting could cause this? I believe this can only happen because the server is keeping the connection open, not the client, but I can't figure out why. I hope you can!
I had the same problem.
It's not the ideal solution, but you can force the PHP script to close the HTTP connection at the end of the AJAX action:
function AjaxEdit() {
    // ... do the actual work of the AJAX action ...

    // Ask the client to treat the response as finished and close the connection.
    header("Connection: Close");
}
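For the browser to actually stop waiting, the response usually needs a Content-Length as well, and the output has to be flushed explicitly. A minimal sketch of that variant (the echoed payload is just a placeholder; on PHP-FPM, fastcgi_finish_request() achieves the same thing more reliably):

<?php
// Buffer the AJAX response so its length is known up front.
ob_start();
echo json_encode(array('status' => 'ok')); // placeholder payload

header('Content-Length: ' . ob_get_length());
header('Connection: close');

ob_end_flush(); // send the buffered body
flush();        // push it out to the client

// Anything slow done after this point no longer keeps the browser waiting.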
I have a PHP script, which works fine. Sometimes, it needs to run for longer than 30 seconds, so I have increased the max_execution_time to 6000.
The problem I now have is that for long queries (longer than three minutes), I get a 404 error.
I know that the script is working, as I can see changes in the files that I am updating, but it does not complete.
I think (but am not sure) that the web page never gets a response (the script is still running, and although I do echo things, they are not reflected yet).
I am not that savvy when it comes to tracking these kinds of problems down, but while trying to google this issue I read that the network tab might be useful to look at.
Not sure if the following screen shots are useful, but thought they might be.
Wonder if anybody can help?
I am running Google Chrome (Version 85.0.4183.83 (Official Build) (64-bit)), but have also tried on Brave, with the same result.
If I have a PHP page that is doing a task that takes a long time, and I try to load another page from the same site at the same time, that page won't load until the first page has timed out. For instance if my timeout was set to 60 seconds, then I wouldn't be able to load any other page until 60 seconds after the page that was taking a long time to load/timeout. As far as I know this is expected behaviour.
What I am trying to figure out is whether an erroneous/long-loading PHP script that creates the above situation would also affect other people on the same network. I personally thought it was a browser issue (i.e. if I loaded http://somesite.com/myscript.php in Chrome and it started working its magic in the background, I couldn't then load http://somesite.com/myscript2.php until that had timed out, but I could load that page in Firefox). However, I've heard contradictory statements saying that the timeout would happen to everyone on the same network (IP address?).
My script works on some data imported from Sage and takes quite a long time to run - sometimes it can time out before it finishes (i.e. if the Sage import crashes over the weekend), so I run it again and it picks up where it left off. I am worried that other staff in the office will not be able to access the site while this is running.
The problem you have here is actually related to the fact that (I'm guessing) you are using sessions. This may be a bit of a stretch, but it would account for exactly what you describe.
This is not in fact "expected behaviour" unless your web server is set up to run a single process with a single thread, which I highly doubt. This would create a situation where the web server is only able to handle a single request at any one time, and this would affect everybody on the network. This is exactly why your web server probably won't be set up like this - in fact I suspect you will find it is impossible to configure your server like this, as it would make the server somewhat useless. And before some smart alec chimes in with "what about Node.js?" - that is a special case, as I am sure you are already well aware.
When a PHP script has a session open, it has an exclusive lock on the file in which the session data is stored. This means that any subsequent request will block at the call to session_start() while PHP tries to acquire that exclusive lock on the session data file - which it can't, because your previous request still holds it. As soon as your previous request finishes, it releases its lock on the file and the next request is able to complete. Since sessions are per-machine (in fact per browsing session, as the name suggests, which is why it works in a different browser) this will not affect other users of your network, but leaving your site set up so that this is an issue even just for you is bad practice and easily avoidable.
The solution to this is to call session_write_close() as soon as you have finished with the session data in a given script. This causes the script to close the session file and release its lock. You should try to either finish with the session data before you start the long-running process, or not call session_start() until after it has completed.
In theory you can call session_write_close() and then call session_start() again later in the script, but I have found that PHP sometimes exhibits buggy behaviour in this respect (I think this is cookie related, but don't quote me on that). Obviously, pay attention to the fact that setting cookies modifies the headers, so you have to call session_start() before you output any data, or enable output buffering.
For example, consider this script:
<?php
session_start();

if (!isset($_SESSION['someval'])) {
    $_SESSION['someval'] = 1;
} else {
    $_SESSION['someval']++;
}

echo "someval is {$_SESSION['someval']}";

// Simulate a long-running request while the session lock is still held.
sleep(10);
With the above script, you will have to wait 10 seconds before you are able to make a second request. However, if you add a call to session_write_close() after the echo line, you will be able to make another request before the previous request has completed.
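For clarity, the modified version would look roughly like this:

<?php
session_start();

if (!isset($_SESSION['someval'])) {
    $_SESSION['someval'] = 1;
} else {
    $_SESSION['someval']++;
}

echo "someval is {$_SESSION['someval']}";

// Release the lock on the session file so further requests from the
// same browser are not blocked while this one sleeps.
session_write_close();

sleep(10);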
Hmm... I did not check, but I think that each request to the webserver is handled in a thread of its own, so a different request should not be blocked. Just try it :-) Use a different browser and access your page while the big script is running!
Err... I just saw that this worked for you :-) And it should for others, too.
For some odd reason, just today our server decided to be very slow when starting sessions. For every session_start(), the server either times out after 30 seconds, or it takes about 20 seconds to start the session. This is very weird, seeing as it hasn't done this for a very long time (the last time our server did this was about 7 months ago). I've tried switching the sessions to run through a database instead, and that works fine; however, the way our current website is built, it would take days to go through every page and change the session loading to use a new session handler. Therefore my question remains:
Why is it so slow, and why only sometimes?
We run on a dedicated Hetzner server with 24 GB of RAM and a CPU that is easily fast enough to run a simple webserver (a Xeon, I believe, but I'm not sure). We run Debian on the server with an Apache + FastCGI + PHP 5 setup.
The server doesn't report much load, neither through server-status nor through the top command. vnstat reports no problem whatsoever with our network link (and in any case, that wouldn't result in slow local session handling). iotop shows no process hogging the entire hard drive. Writing to the tmp folder where the session files are located is fast if done through vim.
Again, to make this clear, my main concern here isn't whether or not we should switch to a DB or a memory-cached version of the sessions, it's simply to ask why this happens, because everything I take a look at seems to be working fine, except for the PHP itself.
EDIT:
The largest file in our PHP tmp directory is 2.9 MB, so nothing that should make an impact, I believe.
UPDATE: I never did figure out what was wrong or how to fix it, but the problem disappeared after we switched over to memcached/DB sessions.
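For anyone hitting the same wall: the session handler can usually be switched without touching page code, since it is an ini setting. A minimal sketch, assuming the pecl memcached extension is installed and a memcached server is running locally (the host and port are examples):

<?php
// In a file included before any session_start() call, or set directly in php.ini.
ini_set('session.save_handler', 'memcached');
ini_set('session.save_path', 'localhost:11211'); // example memcached host:port

session_start(); // sessions are now read from and written to memcached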
Have you tried session_write_close()?
This closes the session for writing, but you can still read data from it. Later, when you need to write a session variable again, reopen it.
I have also suffered from this problem, but this worked like a charm. This is what I do:
session_start(); //starts the session
$_SESSION['user']="Me";
session_write_close(); // close write capability
echo $_SESSION['user']; // you can still access it
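And if you later need to write to the session again in the same request, you can reopen it. A rough sketch (the work and the key written are just illustrative; as noted above, reopening only works cleanly if no output has been sent yet, or if output buffering is enabled):

session_start();
$_SESSION['user'] = "Me";
session_write_close();      // lock released; parallel requests are no longer blocked

do_long_running_work();     // hypothetical work that doesn't need the session

session_start();            // reopen the session to write again
$_SESSION['last_run'] = time();
session_write_close();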
I had the same problem: suddenly the server took 30 seconds to execute a request. I noticed it was because of session_start(). The first request was fast, but each subsequent request took about 30 seconds to execute.
I found that the session file in c:\wamp\tmp was locked by the first request for about 30 seconds. During this time the second request was waiting for the file to be unlocked.
I found out it had something to do with mod_rewrite and .htaccess. I disabled mod_rewrite and commented out every line in .htaccess, and now it works again like a charm. I don't know why this happened, because I don't remember changing any settings or configuration in WAMP.
I ran into this problem too. It was answered here:
Problem with function session_start() (works slowly)
Sessions are locked by PHP while one script is executing, so if scripts are stacked under the same session, they can cause these surprisingly long delays.
Each session is stored by PHP as a text file on the server (in the tmp directory by default).
When session_start() is used to resume an existing session (identified via a cookie, for example), maybe a big session file (a session with a lot of content in it) is slow to load?
If this is the case, your application is probably putting too much data into the session.
Please check whether you have the correct memcache settings, e.g. in /etc/php.d/memcached.ini.
I know this is an old question, but I've just fixed this issue on my server. All I did was turn on "bypass cache" for the domains in the cache manager in cPanel.
My sessions were taking ages to start and close; now they are instant.
Sessions may also start slowly if you put a lot of data in them. For example, 50 MB of data in the session in a Docker image may result in a session start time of around 3 seconds.
In order to do things quickly while developing, I'm keeping the browser window open on a particular page (which is reached after logging in and doing some other stuff that takes around a minute or two). I'm simply assigning the old session id in the test case setup, and it works fine with persistent settings.
However, there is one small problem. After (I think) around 30 minutes of inactivity, the browser session times out and running the test case takes 2 minutes. It's not a really big deal, but it's certainly annoying while developing. Is there a way to increase the browser's default session timeout?
I'm using PHP, but it really doesn't matter if I can find the solution for one language; it's easy to figure out for others.
I've written a PHP script that takes a long time to execute [image processing for thousands of pictures]. It's a matter of hours - maybe 5.
After 15 minutes of processing, I get the error:
ERROR
The requested URL could not be retrieved
The following error was encountered while trying to retrieve the URL: The URL which I clicked
Read Timeout
The system returned: [No Error]
A Timeout occurred while waiting to read data from the network. The network or server may be down or congested. Please retry your request.
Your cache administrator is webmaster.
What I need is to enable that script to run for much longer.
Now, here are all the technical info:
I'm writing in PHP and using the Zend Framework. I'm using Firefox. The long-running script is triggered by clicking a link. Obviously, since the script is not finished, I still see the web page the link was on, and the browser shows "waiting for ...".
After 15 minutes the error occurs.
I tried to make changes to Firefox through about:config, but without any success. I don't know - the changes might be needed somewhere else.
So, any ideas?
Thanks ahead.
set_time_limit(0) will only affect the server-side running of the script. The error you're receiving is purely browser/proxy-side. You have to send SOMETHING to keep the browser from deciding the connection is dead - even a single character of output (followed by a flush() to make sure it actually gets sent out over the wire) will do. Maybe once per processed image, or on a fixed time interval (if the last character was sent more than 5 minutes ago, output another one).
If you don't want any intermediate output, you could do ignore_user_abort(TRUE), which will allow the script to keep running even if the connection gets shut down from the client side.
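A rough sketch of both ideas; process_image() and $images are placeholders for whatever the real script does per picture:

<?php
set_time_limit(0);        // no server-side execution time limit
ignore_user_abort(true);  // keep running even if the client or proxy gives up

foreach ($images as $image) {   // $images: the list of pictures to process
    process_image($image);      // hypothetical per-image work

    echo '.';                   // a single character of output is enough
    if (ob_get_level() > 0) {
        ob_flush();             // flush PHP's output buffer, if one is active
    }
    flush();                    // push it over the wire so the proxy/browser sees activity
}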
If the process runs for hours then you should probably look into batch processing. So you just store a request for image processing (in a file, database or whatever works for you) instead of starting the image processing. This request is then picked up by a scheduled (cron) process running on the server, which will do the actual processing (this can be a PHP script, which calls set_time_limit(0)). And when processing is finished you could signal the user (by mail or any other way that works for you) that the processing is finished.
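A very small sketch of that pattern, with made-up table, script and function names:

<?php
// In the web request: just queue the job and return immediately.
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass'); // credentials are placeholders
$pdo->prepare("INSERT INTO image_jobs (status, created_at) VALUES ('pending', NOW())")->execute();
echo 'Your images have been queued; you will be emailed when processing finishes.';

<?php
// worker.php, run from cron, e.g.: */5 * * * * php /path/to/worker.php
set_time_limit(0);
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
$job = $pdo->query("SELECT id FROM image_jobs WHERE status = 'pending' LIMIT 1")->fetch();
if ($job) {
    process_images($job['id']); // hypothetical long-running image work
    $pdo->prepare("UPDATE image_jobs SET status = 'done' WHERE id = ?")->execute(array($job['id']));
    // notify the user here, e.g. with mail()
}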
Use set_time_limit().
Documentation here:
http://nl.php.net/manual/en/function.set-time-limit.php
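For example, at the top of the long-running script:

set_time_limit(0); // 0 means no limit on execution time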
If you can split your work into batches, then after processing X images, display a page with some JavaScript (or a META redirect) on it that opens the link http://server/controller/action/nextbatch/next_batch_id.
Rinse and repeat.
Batching the entire process also has the added benefit that if something goes wrong, you don't have to start the whole thing over.
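For instance, the page returned after each batch could contain something like the following (the URL pattern is taken from above; $batchId and $nextBatchId are placeholders filled in by the controller):

<?php $next = '/controller/action/nextbatch/' . $nextBatchId; ?>
<!-- a plain META redirect; assigning location.href in JavaScript works the same way -->
<meta http-equiv="refresh" content="1;url=<?php echo htmlspecialchars($next); ?>">
<p>Finished batch <?php echo (int) $batchId; ?>, continuing...</p>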
If you're running on a server of your own and can get out of safe_mode, then you could also fork background processes to do the actual heavy lifting, independently of your browser's view of things. If you're in a multicore or multiprocessor environment, you can even schedule more than one process to run at any time.
We've done something like that for large computation scripts; synchronization of the processes happened over a shared database, but luckily the processes were so independent that the only thing we needed to see was their completion or termination.
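A minimal sketch of kicking off such a background worker from the web request (the script path and log file are made up):

<?php
// Start the heavy lifting in a detached process and return to the browser right away.
// nohup plus the trailing '&' detach it from the web request; output goes to a log file.
exec('nohup php /var/www/scripts/process_images.php > /tmp/image_worker.log 2>&1 &');
echo 'Processing started in the background.';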