I use php 5.2.11 + php-fpm + nginx on my server.
If a user sends a time-consuming request "A" and then, before the response for "A" comes back, sends several other normal requests, oddly he gets no response to any of them until the response for "A" has been returned. It seems PHP-FPM is queuing the requests.
I checked the TCP connections: the requests are sent from different sockets and carry the same PHP session ID. On the server side, PHP-FPM also wrote the normal requests into the slow log.
I can not figure out how to resolve it, any suggestions?
It isn't PHP-FPM's fault. Since you're saying they share the same session, that's the culprit. Sessions in PHP have a per-session lock, so one user cannot load a page with a specific session ID while there are outstanding requests using the same session; the blocking happens when you call session_start(). This is to avoid having different requests editing the same session variables (which causes all sorts of problems). When a request ends and writes its resulting session data to the store, the next one can start.
If you want to be able to read session variables and start a time-consuming job, but don't want to block other requests from happening, just call session_write_close() after reading the session data you need. Be aware that after calling it you can no longer modify session data: $_SESSION stays readable, but any further changes are not persisted (unless you call session_start() again).
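A minimal sketch of that pattern for the long-running request "A" (the session key and work function are made up for illustration):
<?php
session_start();

// Read whatever the slow job needs from the session.
$userId = isset($_SESSION['user_id']) ? $_SESSION['user_id'] : null;   // hypothetical key

// Release the session lock so other requests with the same session can proceed.
session_write_close();

// The slow part of request "A" no longer blocks the user's other requests.
do_time_consuming_work($userId);   // placeholder for the actual work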
Related
We have a client-server architecture with Angular on the client side and Apache2, PHP (PDO) and MySQL on the server side. The server side exposes an API that gives the clients data to show.
Some observations:
Some API calls can take a very long time to compute and return a response.
The server side seems to handle only a single request per client at any given time (I see only one corresponding query being executed in MySQL). That limit comes either from Apache or from MySQL, since the front end is definitely sending requests in parallel.
The front end cancels requests that are no longer relevant (the data being fetched will not be visible).
Requests cancelled by the front end do not seem to be cancelled on the server side and continue to run anyway; I think even queued requests will still run when their turn arrives, despite having been cancelled on the client side.
Need help understanding:
What exactly is the cause of not having all requests (or at least more than one request) run in parallel? Can it be changed?
What configuration should I change in either Apache or MySQL to overcome this?
Is there a way to make Apache drop cancelled requests, at least those that are still queued and not yet started?
Thanks!
EDIT
Following Markus AO's comment (thanks Markus!!!), this turned out to be session-blocking related... wish I had known about that before!
OP has a number of tangled problems on the table. However, I feel these are worthwhile concerns (having wrestled with them myself), so let's take this apart. For great justice; main screen turn on:
Solving Concurrent Request Problems
There are several possible problems and solutions with concurrent connections in a (L)AMP stack. Before looking at tuning Apache and MySQL, however, let me gloss a common "mystery" issue that creates concurrence problems; namely, a necessary evil called "PHP Session Locking".
PHP Session Blocking & Concurrent Requests
In a nutshell: When you use sessions in your application, after calling session_start(), PHP locks the session file stored at your session.save_path directory. This file lock will remain in place until the script ends, or session_write_close() is called. Result: Any subsequent calls by the same user will be queued, rather than concurrently processed, to ensure there's no session data corruption. (Imagine parallel scripts writing into the same $_SESSION!)
An easy way to demonstrate this is to create a long-running script; then call it in your browser; and then open a new tab, and call it again (or in fact, call any script sharing the same session cookie/ID). You'll see that the second call won't execute until the first one is concluded. This is a common cause of strange AJAX lags, especially with parallel AJAX requests from a single page. Processing will be consecutive instead of concurrent. Then, 10 calls at 0.3 sec each will take a total of 3 sec to conclude, and so on. We don't want that, do we!
You can remedy request blocking caused by PHP session lock by ensuring that:
1. Scripts using sessions should call session_write_close() once done storing session data. The session lock will be released immediately.
2. Scripts that don't require sessions shouldn't start sessions to begin with.
3. Scripts that only need to read session data: using session_start() with the ['read_and_close' => true] option will give you a read-only (non-persistent) $_SESSION variable without session locking. (Available since PHP 7.)
Options 1 and 3 will leave you with read access for the $_SESSION variable and release/avoid the session lock. Any changes made to $_SESSION after the session is closed will be silently discarded; no warnings/errors are displayed.
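For instance, a minimal sketch of option 3 (the session key is made up for illustration):
<?php
// Read-only session access (PHP 7+): no lock is taken, so concurrent
// requests from the same user are not blocked by this script.
session_start(['read_and_close' => true]);
$userId = isset($_SESSION['user_id']) ? $_SESSION['user_id'] : null;   // hypothetical key
// Any writes to $_SESSION from here on are silently discarded.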
The session lock request blocking issue is only consequential for a single user (using the same session). It has no impact on multi-user concurrence. For further reading, please see:
SO: Session (Auto)-Start, Performance & Session Locking
SO: PHP & Sessions: Is there any way to disable PHP session locking?
In-Depth: PHP Session Locking: How To Prevent Sessions Blocking in PHP requests.
Apache & MySQL Concurrent Requests
Once upon a time, before realizing PHP was the culprit behind blocking/queuing my concurrent calls, I spent a small aeon in tweaking Apache and MySQL and wondering, what happen?
Apache 2.4 supports 150 concurrent requests by default; any further requests will queue up. There are several settings under the MPM/Multi-Processing Module that you can tune to support the desired level of concurrent connections. Please see:
MPM Docs
Worker Docs
Overview at Oxpedia
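For illustration only, here's a rough sketch of the event-MPM directives involved (the values are placeholders, not recommendations; the file location and defaults depend on your distribution):
<IfModule mpm_event_module>
    StartServers             2
    ServerLimit              16
    ThreadsPerChild          25
    # ServerLimit x ThreadsPerChild is the upper bound; raise MaxRequestWorkers for more concurrency
    MaxRequestWorkers        400
    MaxConnectionsPerChild   0
</IfModule>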
MySQL has options for max_connections (default 151) and max_user_connections (default unlimited). If your application sends a lot of concurrent requests per user, you'll want to ensure the global max connections is high enough to ensure a handful of users don't hog the entire DBMS.
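As a hedged example, the corresponding my.cnf entries might look like this (illustrative values only):
[mysqld]
max_connections      = 300    # global cap; the default is 151
max_user_connections = 50     # per-account cap; 0 (the default) means unlimited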
Obviously, you'll further want to tune these settings in light of your server CPU/RAM specs. (The calculations for which are beyond this answer.) Your concurrency issues probably aren't caused by too many open TCP sockets, but hey, you never know...
Canceling Requests to Apache/PHP/MySQL
We don't have much to go on as far as your application's specific wiring, but I understand from the comments that as it stands, a user can cancel a request at the front-end, but no back-end action is taken. (Ie. any back-end response is simply ignored/discarded.)
"Is there a way to make Apache drop cancelled requests?" I'm assuming that your front-end sends the requests directly and without delay to Apache; and onward to PHP > MySQL > PHP > Apache. In that case, no, you can't really have Apache cancel the request that it's already received; or you could hit "stop", but chances are PHP and MySQL are already munching it away...
Holding a "Cancel Window"
However, you could program a "cancel window" lag into your front-end, where requests are only passed on to Apache after e.g. a 0.5-second sleep waiting for a possible cancel. This may or may not have a negative impact on the UX; it may be worth implementing to save server resources if a significant portion of requests are cancelled. This assumes a UI with JavaScript. If you're getting direct HTTP calls to the API, you could have a "sleepy proxy receiver" instead.
Using a "Cancel Controller"
How would one cancel PHP/MySQL processes? This is obviously only feasible/doable if calls to your API result in a processing time of any significant duration. If the back-end takes 0.28 sec to process, and user cancels after 0.3 seconds, then there isn't much left to cancel, is there.
However, if you do have scripts that may run for longer, say into a couple of seconds, you could find relevant break-points in your code where you have a "not-cancelled" check or a kill/rollback routine. Basically, you'd have the following flow:
Front-end sends request with unique ID to main script
PHP script begins the long march for building a response
On cancel: Front-end re-sends the ID to a light-weight cancel controller
Cancel controller logs ID to temporary file/database/wherever
PHP checks at break-points if there's a cancel request for current process
On cancel, PHP executes a kill/rollback routine instead of further processing
This sort of "cancel watch" will obviously create some overhead, and as such you may want to only incorporate this into heavier scripts, to ensure you actually save some processing time in the big picture. Further, you'd only want at most a couple of breakpoints at significant junctions. For read requests, you could just kill the process; but for write requests, you'd probably want to have a graceful rollback to ensure data integrity in your system.
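A minimal sketch of such a break-point check, assuming the cancel controller simply creates a flag file named after the request ID (the path and helper names below are made up for illustration):
<?php
// Worker script: the front-end passes a unique request ID along with the request.
$requestId  = preg_replace('/[^a-f0-9]/i', '', $_GET['request_id']);
$cancelFlag = sys_get_temp_dir() . '/cancel_' . $requestId;

foreach ($workUnits as $unit) {          // $workUnits: placeholder for your processing steps
    process_unit($unit);                 // hypothetical heavy step

    clearstatcache(true, $cancelFlag);
    if (file_exists($cancelFlag)) {      // break-point: a cancel was requested
        rollback_partial_work();         // hypothetical graceful rollback for write requests
        exit;
    }
}
The cancel controller itself then only needs to touch() the matching flag file when the front-end re-sends the ID.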
You can also cancel/kill a long-running MySQL thread, already initiated by PHP, with mysqli::kill. For this to make sense, you'd want to run it as MYSQLI_ASYNC, so PHP's around to pull the plug. PDO doesn't seem to have a native equivalent for either async queries or kill. Came across $pdo->query('KILL CONNECTION_ID()'); and PHP Asynchronous MySQL Query (see answer for PDO). Haven't tested these myself. Also see: Kill MySQL query on user abort
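An untested sketch of the MYSQLI_ASYNC + kill idea (connection details and the cancel check are placeholders):
<?php
$worker   = new mysqli('localhost', 'user', 'pass', 'db');
$threadId = $worker->thread_id;                      // server-side thread ID of this connection

$worker->query('SELECT SLEEP(60)', MYSQLI_ASYNC);    // fire the long query, return immediately

while (true) {
    $links = $errors = $rejects = array($worker);
    if (mysqli_poll($links, $errors, $rejects, 1) > 0) {   // query finished normally
        $result = $worker->reap_async_query();
        break;
    }
    if (cancel_requested()) {                        // hypothetical check against the cancel controller
        $killer = new mysqli('localhost', 'user', 'pass', 'db');
        $killer->kill($threadId);                    // ask MySQL to terminate the long-running thread
        $killer->close();
        break;
    }
}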
PHP Connection Handling
As an alternative to a controller that passes the cancel signal "from the side", you could look into PHP Connection Handling and poll for aborted connection status at your cancel check-points with connection_aborted(). (See "MySQL kill" link above for a code example.)
A CONNECTION_ABORTED state follows if a user clicks the "stop" button in their browser. PHP has an ignore_user_abort() setting, default "Off", meaning the script is aborted when a user abort is detected. (In my experience though, if I have a rogue script and the session lock is on, I can't do anything until it times out, even when I hit "stop" in the browser. Go figure.)
If you have "ignore user abort" on false, ie. the PHP script terminates on user abort, be aware that this will be a wholly uncontrolled termination, unless you have register_shutdown_function() implemented. Even so, you'd have to flag check-points in your code for your shutdown function to be able to "rewind the clock" from the termination point onward. Also note this caveat:
PHP will not detect that the user has aborted the connection until an attempt is made to send information to the client. Simply using an echo statement does not guarantee that information is sent, see flush(). ~ PHP Manual on ignore_user_abort
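A rough sketch of polling for an aborted connection at check-points, keeping the script in control of its own shutdown (the work and rollback helpers are placeholders):
<?php
ignore_user_abort(true);          // keep running after the client disconnects...
set_time_limit(0);

foreach ($workUnits as $unit) {   // $workUnits: placeholder for your processing steps
    process_unit($unit);          // hypothetical heavy step

    echo ' ';                     // attempt to send something so PHP can notice the abort
    flush();
    if (connection_aborted()) {   // ...but bail out cleanly at our own check-points
        rollback_partial_work();  // hypothetical cleanup
        exit;
    }
}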
I have no experience with implementing "user abort" over AJAX/JS. For a starting point, see: Abort AJAX Request and Cancel an HTTP fetch() request. Not sure how/if they register with PHP. If you decide to travel down this road, please return and update us with your code / research!
I have php-fpm & nginx stack installed on my server.
I'm running a JS app which fires an AJAX request that internally connects to a third party service using curl. This service takes a long time to respond say approximately 150s.
Now, when I connect to the same page in another browser tab, it doesn't even return the JavaScript code on the page that triggers the AJAX requests. Basically, all subsequent requests keep loading until the curl call either returns a response or times out.
Here, I have proxy_read_timeout set to 300 seconds.
I want to know why nginx is holding the resource and not serving other clients.
The issue was due to the PHP session lock. When I made one of these requests, PHP locked the session file and released it only after the request was completed.
To avoid this, you may use session_write_close(). In my case, I switched to Redis-based session storage.
Problem solved!!!
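For reference, a minimal sketch of switching the session handler to Redis (this assumes the phpredis extension is installed; host and port are examples). Whether this removes the lock depends on the handler's locking settings, so session_write_close() remains the portable fix:
<?php
ini_set('session.save_handler', 'redis');
ini_set('session.save_path', 'tcp://127.0.0.1:6379');
session_start();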
I have a website written in PHP. One of the users is in a building where they have two internet connections with two ISP's, but one network. So, any computer on the network may connect through either web connection, and it appears that the switch sometimes happens mid-request. Sometimes this leads to internal server errors on my script. Sadly, the logs on my shared host don't seem to have a lot of detail.
So, here's my guess. My script uses sessions. The user sends a request with internet connection 1 and this locks the session file. As that request is processing, internet connection 1 is shut off and internet connection 2 is turned on. Apache/PHP keeps trying to send the response back to internet connection 1 (which no longer exists). User tries to reload page via internet connection 2. PHP waits for the initial script to exit and unlock the session file, but it never does, so it eventually dies with an internal server error.
So, how might I get around this? Is there a way to force un-lock a session file if too much time has passed since it was last locked? (No script has ever taken more than 3 seconds to execute, so if the lock is more than, say, 15 seconds old, that means that the old script is waiting around to serve a file to the wrong IP address and could be killed.) Thanks!
I think I solved the problem. So, the issue, as far as I can tell, was that the server did not unlock the session file if a request was started but then failed due to the funky internet connection. So, the user sends a request, session_start() is called and the session file is locked, the server starts sending a response to the client, but the client half-disappears. Client sends a new request via the new IP address, session_start() is called, but that has to wait for the session file to be unlocked, so that just waits and waits and eventually times out. However, I solved this for my site. So, here's the way my site works:
1. Server gets request.
2. Server does processing of request and stores information for user in various variables rather than an echo-as-you-go method.
3. Then I include template.php, and my various saved variables are displayed in the correct location.
So, my code flow does all processing first, and then all user-echoing after the processing is done. So, at the beginning of the template.php file, I now included a "session_write_close();". That releases the session file manually before the response is sent to the user, instead of waiting until the script ends. So, if the user disappears during the transfer, the session file is unlocked and the user can send a new request and it won't bump into a locked session file.
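In code, the flow looks roughly like this (file and function names are illustrative):
<?php
session_start();

// 1. + 2. Do all the processing first and store the output in variables.
$pageTitle = build_title();   // hypothetical processing functions
$pageBody  = build_body();

// Release the session lock before any output is sent to the user.
session_write_close();

// 3. The template echoes the saved variables in the right locations.
include 'template.php';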
Since I put in that fix, I haven't heard any more complaints. (As a side note, the user does have issues with other websites, but I want my site to react in the best manner possible to a sub-par internet connection.)
If I have a PHP page that is doing a task that takes a long time, and I try to load another page from the same site at the same time, that page won't load until the first page has timed out. For instance if my timeout was set to 60 seconds, then I wouldn't be able to load any other page until 60 seconds after the page that was taking a long time to load/timeout. As far as I know this is expected behaviour.
What I am trying to figure out is whether an erroneous/long-loading PHP script that creates the above situation would also affect other people on the same network. I personally thought it was a browser issue (i.e. if I loaded http://somesite.com/myscript.php in Chrome and it started working its magic in the background, I couldn't then load http://somesite.com/myscript2.php until that had timed out, but I could load that page in Firefox). However, I've heard contradictory statements saying that the timeout would happen to everyone on the same network (IP address?).
My script works on some data imported from Sage and takes quite a long time to run; sometimes it can time out before it finishes (i.e. if the Sage import crashes over the weekend), so I run it again and it picks up where it left off. I am worried that other staff in the office will not be able to access the site while this is running.
The problem you have here is actually related to the fact that (I'm guessing) you are using sessions. This may be a bit of a stretch, but it would account for exactly what you describe.
This is not in fact "expected behaviour" unless your web server is set up to run a single process with a single thread, which I highly doubt. This would create a situation where the web server is only able to handle a single request at any one time, and this would affect everybody on the network. This is exactly why your web server probably won't be set up like this - in fact I suspect you will find it is impossible to configure your server like this, as it would make the server somewhat useless. And before some smart alec chimes in with "what about Node.js?" - that is a special case, as I am sure you are already well aware.
When a PHP script has a session open, it has an exclusive lock on the file in which the session data is stored. This means that any subsequent request will block at the call to session_start() while PHP tries to acquire that exclusive lock on the session data file, which it can't, because your previous request still has one. As soon as your previous request finishes, it releases its lock on the file and the next request is able to complete. Since sessions are per-machine (in fact per-browsing-session, as the name suggests, which is why it works in a different browser) this will not affect other users of your network, but leaving your site set up so that this is an issue even just for you is bad practice and easily avoidable.
The solution to this is to call session_write_close() as soon as you have finished with the session data in a given script. This causes the script to close the session file and release its lock. You should try to either finish with the session data before you start the long-running process, or not call session_start() until after it has completed.
In theory you can call session_write_close() and then call session_start() again later in the script, but I have found that PHP sometimes exhibits buggy behaviour in this respect (I think it's cookie related, but don't quote me on that). Obviously, pay attention to the fact that setting cookies modifies the headers, so you have to call session_start() before you output any data or enable output buffering.
For example, consider this script:
<?php
session_start();
if (!isset($_SESSION['someval'])) {
    $_SESSION['someval'] = 1;
} else {
    $_SESSION['someval']++;
}
echo "someval is {$_SESSION['someval']}";
sleep(10);
With the above script, you will have to wait 10 seconds before you are able to make a second request. However, if you add a call to session_write_close() after the echo line, you will be able to make another request before the previous request has completed.
Hmm... I did not check, but I think that each request to the web server is handled in a thread of its own. Therefore a different request should not be blocked. Just try :-) Use a different browser and access your page while the big script is running!
Err.. I just see that this worked for you :-) And it should for others, too.
On executing two very simple AJAX POST requests (successively), the Apache server seems to always respond in the same order as they were requested, although the second request takes significantly less time to process than the first.
The time it takes the server to process Request1 is 30 seconds.
The time it takes the server to process Request2 is 10 seconds.
var deferred1 = dojo.xhrPost(xhrArgs1);
var deferred2 = dojo.xhrPost(xhrArgs2);
I expect Apache to achieve some "parallelization" on my dual core machine, which is obviously not happening.
When I execute the requests at the same time from separate browsers, it works OK: Request2 is returned first.
Facts:
httpd.conf has: ThreadsPerChild 50, MaxRequestsPerChild 50
PHP version : 5.2.5
Apache's access log states that both client requests are received at about the same time, which is as expected.
The PHP code on the server side is something as simple as sleep(30) / sleep(10).
Any idea about why I don't get the "parallelization" when run from the same browser?
Thanks
When your two requests are sent from the same browser, they both share the same session.
When sessions are stored in files (that's the default), there is a locking mechanism used to ensure that two scripts will not use the same session at the same time -- allowing that could result in the session data of the first script being overwritten by the second one.
That's why your second script doesn't start before the first one is finished : it's waiting for the lock (created by the first script) on the session data to be released.
For more information, take a look at the manual page of session_write_close() -- which might be a solution to your problem: close the session before the sleep() (quoting):
Session data is usually stored after your script terminated without the need to call session_write_close(), but as session data is locked to prevent concurrent writes only one script may operate on a session at any time. When using framesets together with sessions you will experience the frames loading one by one due to this locking. You can reduce the time needed to load all the frames by ending the session as soon as all changes to session variables are done.
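Applied to your example, a minimal sketch of that fix for the sleep(30) script (the session key is made up):
<?php
session_start();
$user = isset($_SESSION['user']) ? $_SESSION['user'] : null;   // read what you need first

session_write_close();   // release the lock; the second XHR can now be processed in parallel

sleep(30);               // the long part of Request1 no longer blocks Request2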
Browsers typically have a limit of two connections to the same site (although you may increase that limit in some browsers). Some browsers will keep one connection for downloading things like images etc. and another connection for XHR. Which means that your two XHR calls actually goes out in the same connection, one after the other.
Your browser will return immediately after each XHR call because they are async, but internally it may just batch up the requests.
When you run on two different browsers, obviously they each have the two connections, so the two XHR requests go out in different connections. No problem here.
Now it depends on the browser. If the browser allows you to occupy both connections with XHR calls, then you can get up to two requests running simultaneously. Then it will be up to the server which one to do first.
In any rate, if you try with three (or any number >2) XHR requests simultaneously, you will not get more than 2 executed on the server at the same time on modern browsers.