Do PHP timeouts stop people on the same network loading pages?

If I have a PHP page that is doing a task that takes a long time, and I try to load another page from the same site at the same time, the second page won't load until the first page has finished or timed out. For instance, if my timeout was set to 60 seconds, I wouldn't be able to load any other page until 60 seconds after requesting the page that was taking a long time. As far as I know this is expected behaviour.
What I am trying to figure out is whether an erroneous/long-loading PHP script that creates the above situation would also affect other people on the same network. I personally thought it was a browser issue (i.e. if I loaded http://somesite.com/myscript.php in Chrome and it started working its magic in the background, I couldn't then load http://somesite.com/myscript2.php until that had timed out, but I could load that page in Firefox). However, I've heard contradictory statements saying that the timeout would happen to everyone on the same network (IP address?).
My script works on some data imported from Sage and takes quite a long time to run - sometimes it can time out before it finishes (i.e. if the Sage import crashes over the weekend), so I run it again and it picks up where it left off. I am worried that other staff in the office will not be able to access the site while this is running.

The problem you have here is actually related to the fact that (I'm guessing) you are using sessions. This may be a bit of a stretch, but it would account for exactly what you describe.
This is not in fact "expected behaviour" unless your web server is set up to run a single process with a single thread, which I highly doubt. This would create a situation where the web server is only able to handle a single request at any one time, and this would affect everybody on the network. This is exactly why your web server probably won't be set up like this - in fact I suspect you will find it is impossible to configure your server like this, as it would make the server somewhat useless. And before some smart alec chimes in with "what about Node.js?" - that is a special case, as I am sure you are already well aware.
When a PHP script has a session open, it has an exclusive lock on the file in which the session data is stored. This means that any subsequent request will block at the call to session_start() while PHP tries to acquire that exclusive lock on the session data file - which it can't, because your previous request still has one. As soon as your previous request finishes, it releases its lock on the file and the next request is able to complete. Since sessions are per-machine (in fact per-browsing session, as the name suggests, which is why it works in a different browser) this will not affect other users of your network, but leaving your site set up so that this is an issue even just for you is bad practice and easily avoidable.
The solution to this is to call session_write_close() as soon as you have finished with the session data in a given script. This causes the script to close the session file and release its lock. You should try to either finish with the session data before you start the long-running process, or not call session_start() until after it has completed.
In theory you can call session_write_close() and then call session_start() again later in the script, but I have found that PHP sometimes exhibits buggy behaviour in this respect (I think this is cookie-related, but don't quote me on that). Obviously, pay attention to the fact that setting cookies modifies the headers, so you have to call session_start() before you output any data, or enable output buffering.
For example, consider this script:
<?php
session_start(); // acquires an exclusive lock on the session file

if (!isset($_SESSION['someval'])) {
    $_SESSION['someval'] = 1;
} else {
    $_SESSION['someval']++;
}

echo "someval is {$_SESSION['someval']}";
sleep(10); // the session lock is held for these 10 seconds too
With the above script, you will have to wait 10 seconds before you are able to make a second request. However, if you add a call to session_write_close() after the echo line, you will be able to make another request before the previous request has completed.
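For reference, here is the same script with the lock released early, as just described:
<?php
session_start();

if (!isset($_SESSION['someval'])) {
    $_SESSION['someval'] = 1;
} else {
    $_SESSION['someval']++;
}

echo "someval is {$_SESSION['someval']}";
session_write_close(); // lock released here; parallel requests can now proceed
sleep(10);             // still sleeping, but no longer blocking anyone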

Hmm... I did not check, but I think that each request to the webserver is handled in a thread of its own. Therefore a different request should not be blocked. Just try :-) Use a different browser and access your page while the big script is running!
Err.. I just see that this worked for you :-) And it should for others, too.

Related

LiteSpeed / PHP server parallel requests

On our PHP website we have noticed that if I request pages in 2 separate tabs, the second tab will never start loading until the first tab finishes processing on the PHP side (uses the LiteSpeed webserver). It's cloud webhosting that should handle a notable number of visitors. We observed the same behavior on our previous hosting with Apache.
Could the issue be in webserver configuration? Or could it be something in our application blocking the processing?
If your application uses PHP sessions (intentionally or not), this will generally be the behavior.
When an application calls session_start(), it generates a session for you as a user and places a file lock on this session until the script finishes executing; then the lock is removed, and the next script waiting for the same session can proceed.
If you're not going to write any data into the session, it's beneficial to call session_write_close() early on, since this will free the lock and allow other requests (your other tab) to be processed.
The locking happens because you obviously don't want two processes trying to write to the same file at once, possibly overwriting data that one of them wrote to the session.
Obviously it could also be something else in your application causing this, but you'd have to actually debug the code to figure it out. You could possibly use strace on the command line as well to try to gather more information.
PHP session locking is explained in depth in https://ma.ttias.be/php-session-locking-prevent-sessions-blocking-in-requests/ - it simply goes into a lot more detail than I did above.

configure Apache and MySQL for parallel and cancelable service to clients

We have a client-server architecture with Angular on the client side and Apache2, PHP, PDO, and MySQL on the server side. The server side exposes an API to clients that gives them data to show.
Some observations:
Some API calls can take a very long time to compute and return a response.
The server side seems to handle a single request per client at any given time (I'm seeing only one corresponding query being executed in MySQL); that limit comes from either Apache or MySQL, since the front end is definitely sending requests in parallel.
The front end cancels requests that are no longer relevant (the data being fetched will not be visible).
It seems that requests cancelled by the front end are not cancelled on the server side and continue to run anyway; I think even if they are queued they will still run when their turn arrives (even though they were cancelled on the client side).
Need help to understand:
What exactly is the cause of not having all requests (or at least X > 1 requests) run in parallel? Can it be changed?
What configuration should I change in either Apache or MySQL to overcome this?
Is there a way to make Apache drop cancelled requests? At least those that are still queued and not started?
Thanks!
EDIT
Following @Markus AO's comment (Thanks Markus!!!), this was session-blocking related... wish I knew about that before!
OP has a number of tangled problems on the table. However I feel these are worthwhile concerns (having wrestled with them myself), so let's take this apart. For great justice; main screen turn on:
Solving Concurrent Request Problems
There are several possible problems and solutions with concurrent connections in a (L)AMP stack. Before looking at tuning Apache and MySQL, however, let me gloss a common "mystery" issue that creates concurrency problems; namely, a necessary evil called "PHP Session Locking".
PHP Session Blocking & Concurrent Requests
In a nutshell: When you use sessions in your application, after calling session_start(), PHP locks the session file stored at your session.save_path directory. This file lock will remain in place until the script ends, or session_write_close() is called. Result: Any subsequent calls by the same user will be queued, rather than concurrently processed, to ensure there's no session data corruption. (Imagine parallel scripts writing into the same $_SESSION!)
An easy way to demonstrate this is to create a long-running script; then call it in your browser; and then open a new tab, and call it again (or in fact, call any script sharing the same session cookie/ID). You'll see that the second call won't execute until the first one is concluded. This is a common cause of strange AJAX lags, especially with parallel AJAX requests from a single page. Processing will be consecutive instead of concurrent. Then, 10 calls at 0.3 sec each will take a total of 3 sec to conclude, and so on. We don't want that, do we!
You can remedy request blocking caused by PHP session lock by ensuring that:
Scripts using sessions should call session_write_close() once done storing session data. The session lock will be immediately released.
Scripts that don't require sessions shouldn't start sessions to begin with.
Scripts that need to only read session data: Using session_start() with ['read_and_close' => true] option will give you a read-only (non-persistent) $_SESSION variable without session locking. (Available since PHP 7.)
Options 1 and 3 will leave you with read access for the $_SESSION variable and release/avoid the session lock. Any changes made to $_SESSION after the session is closed will be silently discarded; no warnings/errors are displayed.
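A minimal sketch of option 3 (assuming PHP ≥ 7.0; the session key is illustrative):
<?php
// Read-only session access: PHP reads the data into $_SESSION and
// releases the file lock immediately, so parallel requests aren't blocked.
session_start(['read_and_close' => true]);

// Read as usual; any writes here would be silently discarded.
echo $_SESSION['user_id'] ?? 'guest'; // 'user_id' is a hypothetical key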
The session lock request blocking issue is only consequential for a single user (using the same session). It has no impact on multi-user concurrency. For further reading, please see:
SO: Session (Auto)-Start, Performance & Session Locking
SO: PHP & Sessions: Is there any way to disable PHP session locking?
In-Depth: PHP Session Locking: How To Prevent Sessions Blocking in PHP requests.
Apache & MySQL Concurrent Requests
Once upon a time, before realizing PHP was the culprit behind blocking/queuing my concurrent calls, I spent a small aeon in tweaking Apache and MySQL and wondering, what happen?
Apache 2.4 supports 150 concurrent requests by default; any further requests will queue up. There are several settings under the MPM/Multi-Processing Module that you can tune to support the desired level of concurrent connections. Please see:
MPM Docs
Worker Docs
Overview at Oxpedia
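For orientation, these are the kinds of knobs involved; a sketch for the event MPM, with illustrative values only, not recommendations:
<IfModule mpm_event_module>
    # Total concurrent requests = MaxRequestWorkers,
    # bounded above by ServerLimit * ThreadsPerChild.
    ServerLimit          16
    ThreadsPerChild      25
    MaxRequestWorkers   400
</IfModule>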
MySQL has options for max_connections (default 151) and max_user_connections (default unlimited). If your application sends a lot of concurrent requests per user, you'll want to ensure the global max connections is high enough that a handful of users don't hog the entire DBMS.
Obviously, you'll further want to tune these settings in light of your server CPU/RAM specs. (The calculations for which are beyond this answer.) Your concurrency issues probably aren't caused by too many open TCP sockets, but hey, you never know...
Canceling Requests to Apache/PHP/MySQL
We don't have much to go on as far as your application's specific wiring, but I understand from the comments that as it stands, a user can cancel a request at the front-end, but no back-end action is taken. (I.e. any back-end response is simply ignored/discarded.)
"Is there a way to make Apache drop cancelled requests?" I'm assuming that your front-end sends the requests directly and without delay to Apache; and onward to PHP > MySQL > PHP > Apache. In that case, no, you can't really have Apache cancel the request that it's already received; or you could hit "stop", but chances are PHP and MySQL are already munching it away...
Holding a "Cancel Window"
However, you could program a "cancel window" lag into your front-end, where requests are only passed on to Apache after e.g. a 0.5-second sleep waiting for a possible cancel. This may or may not have a negative impact on the UX; it may be worth implementing to save server resources if a significant portion of requests are cancelled. This assumes a UI with Javascript. If you're getting direct HTTP calls to the API, you could have a "sleepy proxy receiver" instead.
Using a "Cancel Controller"
How would one cancel PHP/MySQL processes? This is obviously only feasible/doable if calls to your API result in a processing time of any significant duration. If the back-end takes 0.28 sec to process, and user cancels after 0.3 seconds, then there isn't much left to cancel, is there.
However, if you do have scripts that may run for longer, say into a couple of seconds, you could always find relevant break-points in your code where you have a "not-cancelled" check or a kill/rollback routine. Basically, you'd have the following flow:
Front-end sends request with unique ID to main script
PHP script begins the long march for building a response
On cancel: Front-end re-sends the ID to a light-weight cancel controller
Cancel controller logs ID to temporary file/database/wherever
PHP checks at break-points if there's a cancel request for current process
On cancel, PHP executes a kill/rollback routine instead of further processing
This sort of "cancel watch" will obviously create some overhead, and as such you may want to only incorporate this into heavier scripts, to ensure you actually save some processing time in the big picture. Further, you'd only want at most a couple of breakpoints at significant junctions. For read requests, you could just kill the process; but for write requests, you'd probably want to have a graceful rollback to ensure data integrity in your system.
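A minimal sketch of such a check-point, assuming a separate cancel controller that touches a marker file named after the request ID (all names and paths here are illustrative):
<?php
// Long-running worker script. The front-end passed a unique request ID;
// a hypothetical cancel.php creates /tmp/cancel-<id> when the user cancels.

function isCancelled(string $requestId): bool
{
    return file_exists(sys_get_temp_dir() . "/cancel-{$requestId}");
}

$requestId = preg_replace('/\W+/', '', $_GET['request_id'] ?? '');

for ($step = 0; $step < 100; $step++) { // stands in for the long job
    if (isCancelled($requestId)) {
        // kill/rollback routine instead of further processing
        exit('cancelled');
    }
    usleep(100000); // simulate 0.1 s of real work per step
}
echo 'done';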
You can also cancel/kill a long-running MySQL thread, already initiated by PHP, with mysqli::kill. For this to make sense, you'd want to run it as MYSQLI_ASYNC, so PHP's around to pull the plug. PDO doesn't seem to have a native equivalent for either async queries or kill. Came across $pdo->query('KILL CONNECTION_ID()'); and PHP Asynchronous MySQL Query (see answer for PDO). Haven't tested these myself. Also see: Kill MySQL query on user abort
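A rough sketch of that mysqli route (untested here as well; credentials and the query are placeholders, and MYSQLI_ASYNC requires the mysqlnd driver):
<?php
// Fire the slow query asynchronously so PHP keeps control.
$worker = new mysqli('localhost', 'user', 'pass', 'db');
$worker->query('SELECT SLEEP(30)', MYSQLI_ASYNC);
$threadId = $worker->thread_id; // the connection running the query

// Later, when a cancel signal arrives, kill it from a second connection.
// (If not cancelled, the result would be collected via mysqli_poll() and
// reap_async_query() instead.)
$killer = new mysqli('localhost', 'user', 'pass', 'db');
$killer->kill($threadId);
$killer->close();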
PHP Connection Handling
As an alternative to a controller that passes the cancel signal "from the side", you could look into PHP Connection Handling and poll for aborted connection status at your cancel check-points with connection_aborted(). (See "MySQL kill" link above for a code example.)
A CONNECTION_ABORTED state follows if a user clicks the "stop" button in their browser. PHP has an ignore_user_abort() setting, "Off" by default; when it is off, the script should be aborted on user-abort, once PHP notices the disconnect. (In my experience though, if I have a rogue script and session lock is on, I can't do anything until it times out, even when I hit "stop" in the browser. Go figure.)
If you have "ignore user abort" on false, i.e. the PHP script terminates on user abort, be aware that this will be a wholly uncontrolled termination, unless you have register_shutdown_function() implemented. Even so, you'd have to flag check-points in your code for your shutdown function to be able to "rewind the clock" from the termination point onward. Also note this caveat:
PHP will not detect that the user has aborted the connection until an attempt is made to send information to the client. Simply using an echo statement does not guarantee that information is sent, see flush(). ~ PHP Manual on ignore_user_abort
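A minimal sketch of polling at check-points, under the assumption that ignore_user_abort is enabled so the script controls its own shutdown:
<?php
ignore_user_abort(true); // keep running past a client disconnect
set_time_limit(0);       // no PHP execution time limit for this job

for ($step = 0; $step < 100; $step++) {
    // Per the manual caveat above, the abort is only detected when
    // output actually reaches the client, hence the echo + flush.
    echo ' ';
    @ob_flush(); // flush PHP's output buffer, if one is active
    flush();     // push the output out to the web server / client
    if (connection_aborted()) {
        // graceful rollback/cleanup, then stop
        exit;
    }
    sleep(1); // stands in for a slice of real work
}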
I have no experience with implementing "user abort" over AJAX/JS. For a starting point, see: Abort AJAX Request and Cancel an HTTP fetch() request. Not sure how/if they register with PHP. If you decide to travel down this road, please return and update us with your code / research!

Internal server error when user's IP changes (probably has something to do with PHP session file locking)

I have a website written in PHP. One of the users is in a building where they have two internet connections with two ISPs, but one network. So, any computer on the network may connect through either web connection, and it appears that the switch sometimes happens mid-request. Sometimes this leads to internal server errors in my script. Sadly, the logs on my shared host don't seem to have a lot of detail.
So, here's my guess. My script uses sessions. The user sends a request with internet connection 1 and this locks the session file. As that request is processing, internet connection 1 is shut off and internet connection 2 is turned on. Apache/PHP keeps trying to send the response back to internet connection 1 (which no longer exists). User tries to reload page via internet connection 2. PHP waits for the initial script to exit and unlock the session file, but it never does, so it eventually dies with an internal server error.
So, how might I get around this? Is there a way to force un-lock a session file if too much time has passed since it was last locked? (No script has ever taken more than 3 seconds to execute, so if the lock is more than, say, 15 seconds old, that means that the old script is waiting around to serve a file to the wrong IP address and could be killed.) Thanks!
I think I solved the problem. So, the issue, as far as I can tell, was that the server did not unlock the session file if a request was started but then failed due to the funky internet connection. So, the user sends a request, session_start() is called and the session file is locked, the server starts sending a response to the client, but the client half-disappears. Client sends a new request via the new IP address, session_start() is called, but that has to wait for the session file to be unlocked, so that just waits and waits and eventually times out. However, I solved this for my site. So, here's the way my site works:
1. Server gets request.
2. Server does processing of request and stores information for user in various variables rather than an echo-as-you-go method.
3. Then I include template.php, and my various saved variables are displayed in the correct location.
So, my code flow does all processing first, and then all user-echoing after the processing is done. So, at the beginning of the template.php file, I now included a "session_write_close();". That releases the session file manually before the response is sent to the user, instead of waiting until the script ends. So, if the user disappears during the transfer, the session file is unlocked and the user can send a new request and it won't bump into a locked session file.
Since I put in that fix, I haven't heard any more complaints. (As a side note, the user does have issues with other websites, but I want my site to react in the best manner possible to a sub-par internet connection.)
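For illustration, the top of such a template might look like this ($pageTitle and $pageBody are hypothetical saved variables; the point is simply that session_write_close() runs after all processing but before any output):
<?php
// template.php - included only after all processing is done
session_write_close(); // release the session lock before sending the page
?>
<html>
<body>
<h1><?= htmlspecialchars($pageTitle) ?></h1>
<div><?= $pageBody ?></div>
</body>
</html>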

Does PHP know when a connection has been closed?

Suppose a page takes a long time to generate, some large report for example, and the user closes the browser, or maybe they press refresh, does the PHP engine stop generating the page from the original request?
And if not, what can one do to cope with users refreshing a page a lot, which causes an expensive report to be generated repeatedly?
I have tried this and it seems that it does not stop any running query on the database. But that could be an engine problem, not PHP.
Extra info:
IIS7
MS SQL Server via ODBC
When you send a request to the server, it is executed on the server without any communication with the browser until information is sent back to the browser. If the browser has closed the connection, PHP will fail when it tries to send that data, and therefore the script will exit.
However, if you have a lot of code executing before any headers are sent, this will continue to execute until the headers are sent and a failed response is received.
PHP knows when a connection has been closed when it tries to output some data (and fails). echo, print, flush, etc. Aside from this, no, it doesn't; everything else is happening on the server end.
There is little in the way of passing back information about the browser state once a request has been made (or, in your case, is in progress).
To know if a user is still connected to your site, you will need to implement a long poll / comet or perhaps a web socket.
Alternatively - you may want to run the long query initiated via an ajax call - while keeping the main browser responsive (not white-screened). This allows you to detect if the browser is closed during the long query with the Javascript event onbeforeunload() to notify your backend that the user has left. (I'm not sure how you would interrupt a query in progress from another HTTP request though.)
PHP has two functions to control this. set_time_limit(num) is able to increase the limit before a page execution "dies". If you don't extend that limit, a page running "too long" will die. Bad for a long process. You also need ignore_user_abort(TRUE) so the server doesn't close the PHP process if it detects that the page has been closed on the client side.
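In other words, a long job would start with something like this (a minimal sketch of the two calls described above; the limit value is illustrative):
<?php
set_time_limit(600);     // allow up to 10 minutes instead of the default
ignore_user_abort(true); // keep going even if the client closes the page

// ... long-running work goes here ...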
You may also need to check for memory leaks if you are writing something that uses a lot of memory and runs for several hours.
When you send a request to the server, the server will go away and perform the appropriate actions. IIS/SQL Server does not know if the browser has been closed (and it is not IIS/SQL Server's responsibility to understand this), so it will execute the commands (as told to do so by the PHP engine) until it has finished or until the engine kills any transactions. Since your report could be dynamic, IIS will not cache page requests; SQL Server, however, can cache previously run queries, therefore you will see some performance gain from the database backend.

why does apache/server die when i make a typo and cause a never ending loop?

this is more of a fundamental question at how apache/threading works.
in this hypothetical (read: sometimes i suck and write terrible code), i write some code that enters the infinite-recursion phase of its life. then, what's expected, happens. the server stalls.
even if i close the tab, open up a new one, and hit the site again (locally, of course), it does nothing. even if i hit a different domain i'm hosting through a vhost declaration, nothing. i normally have to wait a number of seconds before apache can begin handling traffic again. most of the time i just get tired and restart the server manually.
can someone explain this process to me? i have the php runtime setting 'ignore_user_abort' set to true to allow ajax calls that are initiated to keep running even if they close their browser, but would this being set to false affect it?
any help would be appreciated. didn't know what to search for.
thanks.
ignore_user_abort() allows your script (and Apache) to ignore a user disconnecting (closing browser/tab, moving away from page, hitting ESC, etc.) and continue processing. This is useful in some cases - for instance in a shopping cart once the user hits "yes, place the order". You really don't want an order to die halfway through the process, e.g. the order's in the database, but the charge hasn't been sent to the payment facility yet. Or vice-versa.
However, while this script is busily running away in "the background", it will lock up resources on the server, especially the session file - PHP locks the session file to make sure that multiple parallel requests won't stomp all over the file, so while your infinite loop is running in the background, you won't be able to use any other session-enabled part of the site. And if the loop is intensive enough, it could tie up the CPU enough that Apache is unable to handle any other requests on other hosted sites, where the session lock might not apply.
If it is an infinite loop, you'll have to wait until PHP's own maximum allowed run time (set_time_limit() and the max_execution_time ini setting) kicks in and kills the script. There are also some server-side limiters, like Apache's RLimitCPU and TimeOut, that can handle situations like this.
Note that except on Windows, PHP doesn't count "external" time in the set_time_limit. So if your runaway process is doing database stuff, calling external programs via system() and the like, the time spent running those external calls is NOT accounted for in the parent's time limit.
If you write code that causes an (effectively) never-ending loop, then Apache will execute that, and be unable to respond to any additional requests for a page, because it's trying to determine the page content (for the served page which caused the never-ending loop) by executing the (non-terminating) PHP code.
Solution: don't write code that doesn't terminate (in a reasonable amount of time). Understand loop invariants.
