I have AMPPS installed.
My Apache server cannot handle multiple PHP requests at once: if I call localhost/script.php several times, the requests are processed one after another. script.php consists only of <?php sleep(10); ?>.
I read that the MaxClients directive controls concurrent access, but it is missing from my httpd.conf entirely.
Disabling Xdebug and adding session_write_close(); at the beginning of the script didn't help.
When I added session_start(); at the beginning of the file, my code looked like this:
<?php
session_start();
session_write_close();
sleep(10);
phpinfo();
echo "Done";
When making 5 requests to localhost/script.php, the last 4 waited for the first one to finish and then completed concurrently.
Please help me resolve this issue. If any information needed to diagnose the problem is missing, let me know and I will add it.
Apache can certainly handle multiple requests at the same time; something must be going wrong in your Apache configuration.
It depends on which version of Apache you are using and how it is configured, but a common default configuration uses multiple workers with multiple threads to handle simultaneous requests. See http://httpd.apache.org/docs/2.2/mod/worker.html for a rundown of how this works.
The likely reasons you are seeing this are:
There is a lock somewhere. This can happen, for instance, if the two requests come from the same client and you are using file-based sessions in PHP: while a script is executing, the session file is "locked", which means the second request has to wait until the first one finishes (and the file is unlocked) before it can open the session.
The requests come from the same client AND the same browser; most browsers will queue the requests in this case, even when nothing on the server side produces this behaviour.
Probably because of session locking. When you no longer need to write session variables, close the session.
http://php.net/manual/en/function.session-write-close.php
If you need to manipulate session data, write this at the start of script.php:
// write your session data, then unlock the session file
session_start();
$_SESSION['admin'] = 1;
$_SESSION['user'] = 'Username';
session_write_close(); // unlock the session file so another script can access it
// the rest of the script runs without holding the PHP session lock
sleep(30);
echo $_SESSION['user'];
// another script can now run without waiting for this script to finish
Have you tried making the simultaneous calls from different browser tabs/windows/instances?
Apache is multithreaded, so it can definitely handle your parallel requests. There are a few things to check:
Make requests with an appropriate testing client, like ApacheBench (ab); take a look at https://httpd.apache.org/docs/2.4/programs/ab.html and the example command after this list.
Check your Apache setup. Some misconfigurations can produce strange behaviour, like handling a single request at a time. Take a look at the prefork and worker parameters in httpd.conf. Suggestion: use all default parameters for testing.
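For example, to fire genuinely concurrent requests at the sleeping test script from the question (the URL and counts are just illustrative):

ab -n 20 -c 10 http://localhost/script.php

With a 10-second sleep per request, a fully parallel server finishes the run in roughly 2 x 10 seconds (two batches of 10), while a fully serialized server needs roughly 20 x 10 seconds. If the run takes the longer figure here as well, the serialization is happening on the server rather than in the browser.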
Apache provides a variety of multi-processing modules (Apache calls these MPMs) that dictate how client requests are handled. Basically, this allows administrators to swap out its connection handling architecture easily. These are:
mpm_prefork: This processing module spawns processes with a single thread each to handle requests. Each child can handle a single connection at a time.
mpm_worker: This module spawns processes that can each manage multiple threads, and each of these threads can handle a single connection. Since there are more threads than processes, new connections can immediately take a free thread instead of having to wait for a free process.
mpm_event: This module is similar to the worker module in most situations, but is optimized to handle keep-alive connections. When using the worker MPM, a connection will hold a thread for as long as the connection is kept alive, regardless of whether a request is actively being made.
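For reference, an illustrative mpm_worker block for Apache 2.4; the numbers are example values only, not tuning advice. Note that in Apache 2.4 the old MaxClients directive was renamed MaxRequestWorkers, which may explain why MaxClients is missing from a modern httpd.conf:

<IfModule mpm_worker_module>
    StartServers             3
    MinSpareThreads         75
    MaxSpareThreads        250
    ThreadsPerChild         25
    MaxRequestWorkers      400
    MaxConnectionsPerChild   0
</IfModule>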
Try placing the sleep and phpinfo() within the session, before calling session_write_close().
It looks like all five requests are treated as the same session and terminate together once the first one terminates. Maybe verify whether the session IDs are the same.
By keeping the session open you can then see whether they are actually handled concurrently.
I encountered a similar problem. Multiple requests kept hanging randomly while connecting to the server.
I tried changing the MPM configuration, to no avail.
Finally, this seems to have solved the problem for me (from https://serverfault.com/a/680075):
AcceptFilter http none
EnableSendfile Off
EnableMMAP off
You can move session storage from files to a database; then you can request your scripts all at once without waiting. Or, if you don't need the session in your script, turn it off (don't call session_start();).
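If you go the database route, a minimal sketch of a custom handler is below, assuming a PDO connection and a hypothetical sessions table with id and data columns (the class and table names are invented for illustration). Note that this naive version takes no lock at all, which is exactly what lets parallel requests through, but it also means concurrent writes to the same session can overwrite each other:

class DbSessionHandler implements SessionHandlerInterface
{
    private $pdo;

    public function __construct(PDO $pdo)
    {
        $this->pdo = $pdo;
    }

    public function open($path, $name): bool { return true; }
    public function close(): bool { return true; }

    public function read($id): string
    {
        $stmt = $this->pdo->prepare('SELECT data FROM sessions WHERE id = ?');
        $stmt->execute([$id]);
        return (string) $stmt->fetchColumn(); // empty string when the session is new
    }

    public function write($id, $data): bool
    {
        // MySQL-specific REPLACE keeps the sketch short; no lock is taken
        $stmt = $this->pdo->prepare('REPLACE INTO sessions (id, data) VALUES (?, ?)');
        return $stmt->execute([$id, $data]);
    }

    public function destroy($id): bool
    {
        return $this->pdo->prepare('DELETE FROM sessions WHERE id = ?')->execute([$id]);
    }

    public function gc($max_lifetime): int
    {
        return 0; // expiry handling omitted in this sketch
    }
}

session_set_save_handler(new DbSessionHandler($pdo), true);
session_start();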
Related
On our PHP website we have noticed that if I request pages in 2 separate tabs, the second tab never starts loading until the first tab finishes processing on the PHP side (the site uses the LiteSpeed web server). It's cloud web hosting that should handle a notable number of visitors. We observed the same behaviour on our previous hosting with Apache.
Could the issue be in the web server configuration, or could it be something in our application blocking the processing?
If your application uses PHP sessions (intentionally or not), this will generally be the behavior.
When an application calls session_start(), it generates a session for you as a user and holds a file lock on that session until execution of the script is done; then the lock is removed and the next script waiting for the same session can proceed.
If you're not going to write any data into the session, it's beneficial to call session_write_close() early on, since this frees the lock and lets other processes (your other tab) be handled.
The locking happens because you obviously don't want two processes trying to write to the same file at once, possibly overriding one of the processes data that was written to the session.
Obviously it could also be something else in your application causing this, but you'd have to actually debug the code to figure it out. You could also use strace on the command line to try to gather more information.
PHP session locking is explained in depth in https://ma.ttias.be/php-session-locking-prevent-sessions-blocking-in-requests/ - it simply goes into a lot more detail than I did above.
We have a client-server architecture with Angular on the client side and Apache2, PHP, PDO, and MySQL on the server side. The server side exposes an API that gives clients the data to display.
Some observations:
Some API calls can take a very long time to compute and return a response.
The server side seems to handle a single request per client at any given time (I see only one corresponding query executing in MySQL). That limit comes from either Apache or MySQL, since the front end is definitely sending requests in parallel.
The front end cancels requests that are no longer relevant (the data being fetched will not be visible).
Requests cancelled by the front end do not seem to be cancelled on the server side and continue to run anyway; I think even queued requests will still run when their turn arrives, even though they were cancelled on the client side.
I need help understanding:
What exactly is the cause of not having all requests (or at least more than one) run in parallel? Can it be changed?
What configuration should I change in Apache or MySQL to overcome this?
Is there a way to make Apache drop cancelled requests, at least those that are still queued and not yet started?
Thanks!
EDIT
Following Markus AO's comment (thanks Markus!!!), this was session-blocking related... wish I knew about that before!
OP has a number of tangled problems on the table. However I feel these are worthwhile concerns (having wrestled with them myself), so let's take this apart. For great justice; main screen turn on:
Solving Concurrent Request Problems
There are several possible problems and solutions with concurrent connections in a (L)AMP stack. Before looking at tuning Apache and MySQL, however, let me gloss a common "mystery" issue that creates concurrence problems; namely, a necessary evil called "PHP Session Locking".
PHP Session Blocking & Concurrent Requests
In a nutshell: When you use sessions in your application, after calling session_start(), PHP locks the session file stored at your session.save_path directory. This file lock will remain in place until the script ends, or session_write_close() is called. Result: Any subsequent calls by the same user will be queued, rather than concurrently processed, to ensure there's no session data corruption. (Imagine parallel scripts writing into the same $_SESSION!)
An easy way to demonstrate this is to create a long-running script; then call it in your browser; and then open a new tab, and call it again (or in fact, call any script sharing the same session cookie/ID). You'll see that the second call won't execute until the first one is concluded. This is a common cause of strange AJAX lags, especially with parallel AJAX requests from a single page. Processing will be consecutive instead of concurrent. Then, 10 calls at 0.3 sec each will take a total of 3 sec to conclude, and so on. We don't want that, do we!
You can remedy request blocking caused by PHP session lock by ensuring that:
Scripts using sessions should call session_write_close() once done storing session data. The session lock will be immediately released.
Scripts that don't require sessions shouldn't start sessions to begin with.
Scripts that need to only read session data: Using session_start() with ['read_and_close' => true] option will give you a read-only (non-persistent) $_SESSION variable without session locking. (Available since PHP 7.)
Options 1 and 3 will leave you with read access for the $_SESSION variable and release/avoid the session lock. Any changes made to $_SESSION after the session is closed will be silently discarded; no warnings/errors are displayed.
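A minimal sketch of options 1 and 3 in practice (the script names and the helper function are made up for illustration):

// report.php: needs to store something, so it releases the lock as early as possible
session_start();
$_SESSION['last_report'] = 'sales';
session_write_close();              // lock released; later $_SESSION writes would be discarded
$data = run_expensive_report();     // hypothetical slow work, no longer blocking other requests

// viewer.php: only reads, so it never takes the lock at all (PHP 7+)
session_start(['read_and_close' => true]);
$userId = $_SESSION['user_id'] ?? null;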
The session lock request blocking issue is only consequential for a single user (using the same session). It has no impact on multi-user concurrence. For further reading, please see:
SO: Session (Auto)-Start, Performance & Session Locking
SO: PHP & Sessions: Is there any way to disable PHP session locking?
In-Depth: PHP Session Locking: How To Prevent Sessions Blocking in PHP requests.
Apache & MySQL Concurrent Requests
Once upon a time, before realizing PHP was the culprit behind blocking/queuing my concurrent calls, I spent a small aeon in tweaking Apache and MySQL and wondering, what happen?
Apache 2.4 supports 150 concurrent requests by default; any further requests will queue up. There are several settings under the MPM/Multi-Processing Module that you can tune to support the desired level of concurrent connections. Please see:
MPM Docs
Worker Docs
Overview at Oxpedia
MySQL has options for max_connections (default 151) and max_user_connections (default unlimited). If your application sends a lot of concurrent requests per user, you'll want to ensure the global max connections is high enough to ensure a handful of users don't hog the entire DBMS.
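For example, you can check and raise the limit at runtime in MySQL (300 is just an illustrative value; a SET GLOBAL change is lost on restart unless you also put it in my.cnf):

SHOW VARIABLES LIKE 'max_connections';
SET GLOBAL max_connections = 300;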
Obviously, you'll further want to tune these settings in light of your server CPU/RAM specs. (The calculations for which are beyond this answer.) Your concurrency issues probably aren't caused by too many open TCP sockets, but hey, you never know...
Canceling Requests to Apache/PHP/MySQL
We don't have much to go on as far as your application's specific wiring, but I understand from the comments that, as it stands, a user can cancel a request at the front end, but no back-end action is taken. (I.e. any back-end response is simply ignored/discarded.)
"Is there a way to make Apache drop cancelled requests?" I'm assuming that your front-end sends the requests directly and without delay to Apache; and onward to PHP > MySQL > PHP > Apache. In that case, no, you can't really have Apache cancel the request that it's already received; or you could hit "stop", but chances are PHP and MySQL are already munching it away...
Holding a "Cancel Window"
However, you could program a "cancel window" lag into your front-end, where requests are only passed on to Apache after e.g. a 0.5-second sleep waiting for a possible cancel. This may or may not have a negative impact on the UX; it may be worth implementing to save server resources if a significant portion of requests are cancelled. This assumes a UI with JavaScript. If you're getting direct HTTP calls to the API, you could have a "sleepy proxy receiver" instead.
Using a "Cancel Controller"
How would one cancel PHP/MySQL processes? This is obviously only feasible/doable if calls to your API result in a processing time of any significant duration. If the back-end takes 0.28 sec to process, and user cancels after 0.3 seconds, then there isn't much left to cancel, is there.
However, if you do have scripts that may run longer, say into a couple of seconds, you could find relevant break-points in your code where you insert a "not-cancelled" check or a kill/rollback routine. Basically, you'd have the following flow:
Front-end sends request with unique ID to main script
PHP script begins the long march for building a response
On cancel: Front-end re-sends the ID to a light-weight cancel controller
Cancel controller logs ID to temporary file/database/wherever
PHP checks at break-points if there's a cancel request for current process
On cancel, PHP executes a kill/rollback routine instead of further processing
This sort of "cancel watch" will obviously create some overhead, and as such you may want to only incorporate this into heavier scripts, to ensure you actually save some processing time in the big picture. Further, you'd only want at most a couple of breakpoints at significant junctions. For read requests, you could just kill the process; but for write requests, you'd probably want to have a graceful rollback to ensure data integrity in your system.
You can also cancel/kill a long-running MySQL thread, already initiated by PHP, with mysqli::kill. For this to make sense, you'd want to run the query as MYSQLI_ASYNC, so PHP is around to pull the plug. PDO doesn't seem to have a native equivalent for either async queries or kill. I came across $pdo->query('KILL CONNECTION_ID()'); and PHP Asynchronous MySQL Query (see the answer for PDO), but haven't tested these myself. Also see: Kill MySQL query on user abort
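A rough, untested sketch of the MYSQLI_ASYNC plus kill idea (credentials and the query are placeholders; isCancelled() is the hypothetical check from the previous sketch). The long query is started asynchronously, the script keeps checking for a cancel signal while polling, and a second connection kills the first connection's thread if needed:

$db = new mysqli('localhost', 'user', 'pass', 'app');
$db->query('SELECT SLEEP(30)', MYSQLI_ASYNC);   // stand-in for the heavy report query
$threadId = $db->thread_id;                     // remember which thread to kill

do {
    if (isCancelled($cancelFlag)) {
        $killer = new mysqli('localhost', 'user', 'pass', 'app');
        $killer->kill($threadId);               // terminate the long-running query
        exit;
    }
    $read = $error = $reject = [$db];
} while (mysqli_poll($read, $error, $reject, 0, 200000) === 0); // re-check every 0.2 s

$result = $db->reap_async_query();              // the query finished normally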
PHP Connection Handling
As an alternative to a controller that passes the cancel signal "from the side", you could look into PHP Connection Handling and poll for aborted connection status at your cancel check-points with connection_aborted(). (See "MySQL kill" link above for a code example.)
A CONNECTION_ABORTED state follows if a user clicks the "stop" button in their browser. PHP has an ignore_user_abort() setting, default "Off", which means the script should be aborted on user abort. (In my experience though, if I have a rogue script and the session lock is on, I can't do anything until it times out, even when I hit "stop" in the browser. Go figure.)
If you have "ignore user abort" on false, ie. the PHP script terminates on user abort, be aware that this will be a wholly uncontrolled termination, unless you have register_shutdown_function() implemented. Even so, you'd have to flag check-points in your code for your shutdown function to be able to "rewind the clock" from the termination point onward. Also note this caveat:
PHP will not detect that the user has aborted the connection until an attempt is made to send information to the client. Simply using an echo statement does not guarantee that information is sent, see flush(). ~ PHP Manual on ignore_user_abort
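A tiny illustration of that caveat: the abort is only noticed after the script tries to send output, so a check-point has to echo and flush something first. Setting ignore_user_abort(true) keeps the script alive long enough to clean up on its own terms (the work and cleanup functions are hypothetical):

ignore_user_abort(true);     // keep running after the abort so we control the shutdown
while ($workLeft) {          // hypothetical long-running loop
    doNextChunk();           // hypothetical unit of work
    echo ' ';                // attempt to send something to the client...
    flush();                 // ...so connection_aborted() has a chance to notice the abort
    if (connection_aborted()) {
        rollbackAndExit();   // hypothetical graceful cleanup
    }
}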
I have no experience with implementing "user abort" over AJAX/JS. For a starting point, see: Abort AJAX Request and Cancel an HTTP fetch() request. Not sure how/if they register with PHP. If you decide to travel down this road, please return and update us with your code / research!
I have a webpage that when users go to it, multiple (10-20) Ajax requests are instantly made to a single PHP script, which depending on the parameters in the request, returns a different report with highly aggregated data.
The problem is that a lot of the reports require heavy SQL calls to get the necessary data, and in some cases, a report can take several seconds to load.
As a result, because one client is sending multiple requests to the same PHP script, you end up seeing the reports slowly load on the page one at a time. In other words, the generating of the reports is not done in parallel, and thus causes the page to take a while to fully load.
Is there any way to get around this in PHP and make it possible for all the requests from a single client to a single PHP script to be processed in parallel so that the page and all its reports can be loaded faster?
Thank you.
As far as I know, it is possible to do multi-threading in PHP.
Have a look at pthreads extension.
What you could do is have the report-generation part/function of the script execute in parallel. This will make sure that each function runs in a thread of its own and retrieves your results much sooner. Also, set the maximum number of concurrent threads to 10 or fewer so that it doesn't become a resource hog.
Here is a basic tutorial to get you started with pthreads.
And a few more examples which could be of help (Notably the SQLWorker example in your case)
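For illustration only, a minimal pthreads sketch; note that pthreads requires a thread-safe (ZTS) build of PHP and is meant for CLI scripts rather than a web SAPI. The ReportWorker class and its fake workload are invented here:

class ReportWorker extends Thread
{
    public $reportName;
    public $result;

    public function __construct($reportName)
    {
        $this->reportName = $reportName;
    }

    public function run()
    {
        // stand-in for one heavy report/SQL aggregation
        $this->result = "data for {$this->reportName}";
    }
}

$workers = [];
foreach (['sales', 'traffic', 'errors'] as $name) {
    $w = new ReportWorker($name);
    $w->start();              // each report now runs in its own thread
    $workers[] = $w;
}

foreach ($workers as $w) {
    $w->join();               // wait for the thread, then use its result
    echo $w->result, "\n";
}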
Server setup
This is more of a server configuration issue and depends on how PHP is installed on your system: If you use php-fpm you have to increase the pm.max_children option. If you use PHP via (F)CGI you have to configure the webserver itself to use more children.
Database
You also have to make sure that your database server allows that many concurrent processes to run. It won’t do any good if you have enough PHP processes running but half of them have to wait for the database to notice them.
In MySQL, for example, the setting for that is max_connections.
Browser limitations
Another problem you're facing is that browsers won't make 10-20 parallel requests to the same host. It depends on the browser, but to my knowledge modern browsers will only open 2-6 connections to the same host (domain) simultaneously. So any further requests will just get queued, regardless of the server configuration.
Alternatives
If you use MySQL, you could try to merge all your calls into one request and use parallel SQL queries using mysqli::poll().
If that’s not possible you could try calling child processes or forking within your PHP script.
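A rough sketch of the mysqli_poll() route: start each report's query on its own connection with MYSQLI_ASYNC, then collect the results as they become ready (credentials are placeholders and SELECT SLEEP(...) stands in for the heavy aggregates):

$queries = [
    'report_a' => 'SELECT SLEEP(2) AS waited, 1 AS id',
    'report_b' => 'SELECT SLEEP(2) AS waited, 2 AS id',
];

$pending = [];
foreach ($queries as $name => $sql) {
    $link = new mysqli('localhost', 'user', 'pass', 'app');
    $link->query($sql, MYSQLI_ASYNC);          // returns immediately
    $pending[$name] = $link;
}

$results = [];
while ($pending) {
    $read = $error = $reject = array_values($pending);
    if (!mysqli_poll($read, $error, $reject, 1)) {
        continue;                              // nothing ready yet, poll again
    }
    foreach ($read as $link) {
        $name = array_search($link, $pending, true);
        $results[$name] = $link->reap_async_query()->fetch_all(MYSQLI_ASSOC);
        unset($pending[$name]);
    }
}
// both reports finish in roughly the time of the slowest one, not the sum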
Of course PHP can execute multiple requests in parallel if it runs behind a web server like Apache or Nginx. PHP's built-in dev server is single-threaded, but it should only be used for development anyway. If you are using PHP's file-based sessions, however, access to the session is serialized, i.e. only one script can have the session file open at any time. Solution: fetch the information you need from the session at script start, then close the session.
I know PHP scripts can run concurrently, but I notice that when I run a script and then run another script after it, the second one waits for the first to finish before doing what it has to do.
Is there an Apache config I have to change, or do I need to use a different browser or something?
Thanks all
EDIT
If I go to a different browser, I can access the page/PHP script I need. Is something limiting the number of requests per browser? Apache?
Two things come to mind:
HTTP 1.1 has a recommended maximum limit of 2 simultaneous connections to any given domain. This would exhibit the problem where you can only open one page on a site at a time unless you switch browsers.
Edit: This is usually enforced by the client, not the server
If your site uses sessions, you'll find out that only one page will load at a time...
...unless you call session_write_close(), at which point the second page can now open the now-unlocked session file.
If you run PHP under FastCGI you can probably avoid this restriction.
I haven't had this problem with Apache/PHP. The project I work on runs 3 iframes, each of which is a PHP script, plus the main page, and Apache handles all 4 at the same time.
You may want to check your Apache configuration and see how the threading is set up.
This article may help:
http://www.devside.net/articles/apache-performance-tuning
I've noticed many places where some PHP scripts call exit. It seems to me that this will force an exit of the httpd/Apache child (of course another will be started if required for the next request).
But in the CMS, that next request will require the entire init.php initialization, and of course the overhead of cleaning up and starting PHP in the first place.
It seems that the php files usually start with
if ( !defined( 'SMARTY_DIR' ) ) {
    include_once( 'init.php' );
}
which suggests that somebody was imagining that one PHP process would serve multiple requests. But if every script exits, then each PHP/Apache process will serve one request only.
Any thoughts on the performance and security implications of removing many of the exit calls (especially from the most-frequently-called scripts like index.php etc) to allow one process to serve multiple requests?
Thanks, Peter
--ADDENDUM --
Thank you for the answers. That (PHP will never serve more than one request) is what I thought originally, until last week, when I was debugging a config variable that could only have been set in one script (because of the way the path was set up) but was still set in another script (this is on a web server with about 20 hits/sec). In that case, I did not have a PHP exit call in the one script that set up its config slightly differently. But when I added the exit call to that one script (in the alternate directory), it solved the misconfiguration I was experiencing in all my main scripts in the main directory (which was due to a CSS directory variable set erroneously in a previous page execution). So now I'm confused again, because according to all the answers so far, PHP should never serve more than one request.
exit does nothing to Apache processes (it certainly doesn't kill a worker!). It simply ends the execution of the PHP script and returns execution to the Apache process, which will send the results to the browser and continue on to the next request.
The Smarty code you've excerpted doesn't have anything to do with a PHP process serving multiple requests. It just ensures that Smarty is initialised at all times, which is useful if a PHP script might be included in another script or accessed directly.
I think your confusion comes from what include_once is for. PHP is basically a "shared-nothing" system, where there are no real persistent server objects. include_once doesn't mean once per Apache child, but once per web request.
PHP may hack up a hairball if you include the same file twice. For instance a function with a particular name can only be defined once. This led to people implementing a copy of the old C #ifndef-#define-#include idiom once for each included file. include_once was the fix for this.
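A hypothetical example of that hand-rolled guard, the kind of boilerplate include_once made unnecessary (the constant and function names are invented):

// helpers.php
if (!defined('HELPERS_PHP_INCLUDED')) {
    define('HELPERS_PHP_INCLUDED', true);

    function format_price($amount)
    {
        return number_format($amount, 2);
    }
}
// Including this file twice in one request no longer triggers a
// "cannot redeclare function" fatal error; include_once 'helpers.php'
// achieves the same effect without the boilerplate.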
Even if you do not call exit your PHP script is still going to end execution, at which point any generated HTML will be returned to the web server to send on to your browser.
The exit keyword allows you to signal to the PHP engine that your work is done and no further processing needs to take place.
Also note that exit is typically used for error handling and flow control - removing it from includes will likely break your application.