I have php-fpm & nginx stack installed on my server.
I'm running a JS app that fires an AJAX request, which in turn connects to a third-party service using cURL. This service takes a long time to respond, approximately 150 seconds.
Now, when I open the same page in another browser tab, it doesn't even return the JavaScript code on the page that triggers the AJAX requests. Basically, all subsequent requests keep loading until the cURL call either returns a response or times out.
Here, I have proxy_read_timeout set to 300 seconds.
I want to know why nginx is holding the resource and not serving other clients.
The issue was due to the PHP session lock. Whenever I made a request, PHP would lock the session file and release it only after the request completed.
To avoid this, you may use session_write_close(). In my case, I implemented Redis sessions.
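For reference, a minimal sketch of the Redis session setup, assuming the phpredis extension is installed (host and port are illustrative):

    <?php
    // Hand sessions to Redis instead of session files; the phpredis
    // handler does not lock sessions by default, which is what removes
    // the serialization between requests.
    ini_set('session.save_handler', 'redis');
    ini_set('session.save_path', 'tcp://127.0.0.1:6379');
    session_start();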
Problem solved!!!
I am running into this problem:
I am sending a request to the server using AJAX; it takes some parameters in and, on the server side, generates a PDF.
Generating the PDF can take a long time, depending on the data used.
After 60s of an "idle" connection, AWS's Elastic Load Balancer decides to drop the socket, and therefore my request fails in that case.
I know it's possible to increase the timeout in the ELB settings, but not only is my sysadmin against it, it's also a false solution and bad practice.
I understand the best way to solve the problem would be to send data through the socket to sort of "tell ELB" that I am still active. Sending a dummy request to the server every 30s doesn't work because of our architecture and the fact that the session is locked (i.e. we cannot have concurrent AJAX requests from the same session; otherwise one is pending until the other finishes).
I tried just doing a GET request to files on the server, but it doesn't make a difference; I assume the "socket" in question is the one used by the original AJAX call.
The function on the server is pretty linear and almost impossible to divide into multiple calls, and the idea of letting it run in the background and checking every 5 seconds until it's finished makes me uncomfortable in terms of resource control.
TL;DR: is there any elegant and efficient way to keep a socket active while an AJAX request is pending?
Many thanks if anyone can help with this. I have found a couple of similar questions on SO, but both are answered with "call the Amazon team and ask them to increase the timeout in your settings", which sounds very bad to me.
Another approach is to divide the whole operation into two services:
The first service accepts an HTTP request to generate a PDF document. It finishes immediately after the request is accepted and returns a UUID or URL for checking the result.
The second service accepts the UUID and returns the PDF document if it's ready. If the PDF document is not ready, this service can return an error code, such as HTTP 404.
Since you are using AJAX to call the server side, it should be easy to change your JavaScript to call the second service once the first service finishes successfully; see the sketch below. Will this work for your scenario?
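A minimal sketch of the two services in PHP (the file names, spool directories, and the background worker are all hypothetical):

    <?php
    // start-pdf.php -- first service: accept the job, return an ID immediately.
    $jobId = md5(uniqid(mt_rand(), true));
    // Hand the parameters to a background worker however you like; a spool
    // directory is used here purely for illustration.
    file_put_contents("/tmp/pdf-jobs/$jobId.json", json_encode($_POST));
    header('Content-Type: application/json');
    echo json_encode(array('id' => $jobId));

    <?php
    // pdf-status.php -- second service: 404 until the worker has written the file.
    $jobId = preg_replace('/[^a-f0-9]/', '', isset($_GET['id']) ? $_GET['id'] : '');
    $path = "/tmp/pdf-output/$jobId.pdf";
    if (!is_file($path)) {
        header('HTTP/1.1 404 Not Found'); // not ready yet; the client polls again
        exit;
    }
    header('Content-Type: application/pdf');
    readfile($path);

Each status poll is a short request, so the ELB idle timeout never comes into play; just remember the session-lock caveat and call session_write_close() early if these scripts start a session.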
Have you tried following the ELB troubleshooting guide? I've quoted the relevant part below:
HTTP 504: Gateway Timeout
Description: Indicates that the load balancer closed a connection because a request did not complete within the idle timeout period.
Cause 1: The application takes longer to respond than the configured idle timeout.
Solution 1: Monitor the HTTPCode_ELB_5XX and Latency metrics. If there is an increase in these metrics, it could be due to the application not responding within the idle timeout period. For details about the requests that are timing out, enable access logs on the load balancer and review the 504 response codes in the logs that are generated by Elastic Load Balancing. If necessary, you can increase your capacity or increase the configured idle timeout so that lengthy operations (such as uploading a large file) can complete.
Cause 2: Registered instances closing the connection to Elastic Load Balancing.
Solution 2: Enable keep-alive settings on your EC2 instances and set the keep-alive timeout to greater than or equal to the idle timeout settings of your load balancer.
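If the registered instances run nginx (as in some of the setups on this page), the relevant setting for Solution 2 looks roughly like this; 75s is illustrative and simply needs to be at or above the ELB's 60s idle timeout:

    # nginx.conf, http block: keep idle connections open longer than
    # the load balancer's idle timeout.
    keepalive_timeout 75s;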
I'm working on an application which gets some data from a web service using a PHP SOAP client. The web service accesses the client's SQL server, which has very slow performance (some requests take several minutes to run).
Everything works fine for the smaller requests, but if the browser has been waiting for 2 minutes, it prompts me to download a blank file.
I've increased PHP's max_execution_time, memory_limit and default_socket_timeout, but the browser always seems to stop waiting at exactly 2 minutes.
Any ideas on how to get the browser to hang around indefinitely?
You could change your architecture from pull to push. Then the user can carry on using your web application and be notified when the data is ready.
Or, as a simple workaround (not ideal): if you are able to modify the SOAP server, you could add another web service that checks whether the data is ready, and have the client call it every 30 seconds to keep checking for the data rather than waiting.
The web server was timing out - in my case, Apache. I initially thought it was something else as I increased the timeout value in httpd.conf, and it was still stopping after two minutes. However, I'm using Zend Server, which has an additional configuration file which was setting the timeout to 120 seconds - I increased this and the browser no longer stops after two minutes.
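For reference, the stock Apache directive involved is Timeout; Zend Server ships an override in its own configuration file, so the effective value may be set there instead of in httpd.conf:

    # httpd.conf -- seconds Apache will wait for the request to complete
    Timeout 300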
My web page uses an AJAX call to return data from a very long-running PHP script, so if I leave the page early and reload it, that PHP script is still executing, which causes me problems.
Is there a way to tell the server to abort the execution of the previous AJAX request, if one is still running?
thanks
Not directly. You will need to set up a scheme where the work is offloaded to an external (to the web server) process, and that process has a communication channel with the web server set up that enables it to check if it should drop what it's doing every so often (e.g. a simple but not ideal scheme would be checking for the last-modified time of a "lock file"; if it's more than X seconds in the past, abort the task).
Your web page would then make a call to a script that would then "keep alive" the background task appropriately (e.g. by touching the lock file of the previous example).
This way, when the task is initiated through an AJAX request, the client begins making "keep-alive" requests to the server and the server forwards the "keep-alive" message to the external process. If the user reloads the page the "keep-alive" requests stop and the worker process will abort when the keep-alive threshold elapses. If all goes well and the work completes, your server would detect this through the communication channel it has with the worker process and report this back to the client on their next keep-alive "ping".
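A rough sketch of that scheme in PHP, with every file name and the work loop hypothetical:

    <?php
    // keepalive.php -- the page pings this while the AJAX job is pending;
    // touching the lock file pushes its mtime forward.
    $jobId = preg_replace('/[^a-f0-9]/', '', isset($_GET['id']) ? $_GET['id'] : '');
    touch("/tmp/jobs/$jobId.lock");

    <?php
    // worker.php -- the external process; aborts if the pings stop coming.
    $lock = '/tmp/jobs/' . $argv[1] . '.lock';
    while (($chunk = getNextWorkChunk()) !== false) { // hypothetical work units
        clearstatcache();
        if (!file_exists($lock) || time() - filemtime($lock) > 30) {
            exit(1); // no keep-alive for 30s: the client is gone, drop the task
        }
        processChunk($chunk); // hypothetical
    }
    touch($lock . '.done'); // completion marker the web server can check for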
Maybe try using the set_time_limit() function in this script.
Or create a few PHP scripts and randomly generate a URL for each.
Did you try setting the XMLHttpRequest object to null when the page reloads?
I have a PHP script which opens a socket connection (using fsockopen) and takes about 15 seconds to complete and return the result to the browser. Meanwhile, if the browser sends a second request, it is serialized. This makes for a bad user experience: if the user clicks 3 times, the third request, sent after 30 seconds, is the one that gets the response; from the browser's perspective the first 2 requests are lost.
I do not have any session in my script, but I tried putting session_write_close() at the beginning of it, which didn't help.
Also, session.auto_start in php.ini is 0.
Any ideas as to how to make requests from the same browser run in parallel?
Thanks
Gary
1) Download and install Firefox
2) Download and install Firebug
3) Add a sleep(10) to your PHP script so that it pauses for 10 seconds before returning its response
4) Open up your webpage, and watch the outbound connections with Firebug. You should see several that are open and do not yet have a response. They should all return at about the same time, when each one finishes the 10 second delay.
If you do not see multiple connections open at the same time, and return at approximately the same time, then you need to look at your front end code. AJAX requests are asynchronous and can run in parallel. If you are seeing them run serially instead, then it means you need to fix your JavaScript code, not anything on the server.
Parallel asynchronous Ajax requests using jQuery
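If you need a stub endpoint for step 3, something along these lines works (the file name is made up; the session_write_close() call only matters if your app starts sessions, for the locking reason discussed elsewhere on this page):

    <?php
    // test-delay.php -- stub for watching parallel requests in Firebug.
    session_start();
    session_write_close(); // don't let the session lock serialize the test
    sleep(10);             // the artificial delay from step 3
    header('Content-Type: application/json');
    echo json_encode(array('done' => true));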
If at all possible (on *nix), you should install Redis.
Installing it is as simple as running:
make
With LPUSH/BRPOP you can handle this work asynchronously and keep the order intact. If you spawn a couple of workers, you can even handle multiple requests simultaneously; see the sketch below. The predis client library is pretty solid.
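A rough sketch with predis (the queue name and payload are made up; in practice the producer and the worker would be separate scripts, and BRPOP with a timeout of 0 blocks until a job arrives):

    <?php
    require 'vendor/autoload.php';
    $redis = new Predis\Client(); // defaults to tcp://127.0.0.1:6379

    // Producer (web request): push the job and return to the user immediately.
    $redis->lpush('jobs', json_encode(array('host' => 'example.com')));

    // Worker (long-running CLI process): jobs come out in FIFO order.
    while (true) {
        list(, $payload) = $redis->brpop('jobs', 0); // blocks until available
        $job = json_decode($payload, true);
        // ... do the slow fsockopen work here, outside the web request ...
    }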
I use PHP 5.2.11 + PHP-FPM + nginx on my server.
If a user sends a time-consuming request "A" and then, before getting the response for "A", sends several other normal requests, weirdly he cannot get any response until the response for "A" is returned; it seems PHP-FPM queues the requests.
I checked the TCP connections: the requests are sent from different sockets but carry the same PHP session cookie. On the server side, PHP-FPM also wrote the normal requests into the slow log.
I cannot figure out how to resolve this. Any suggestions?
It isn't PHP-FPM's fault. Since you're saying the requests share the same session, that's the culprit. Sessions in PHP have a per-session lock, so one user cannot load a page with a specific session ID while there are outstanding requests with the same session; the blocking happens when you call session_start(). This is to avoid having different requests edit the same session variables (which causes all sorts of problems). When a request ends and writes its resulting session data to the store, the next one can start.
If you want to read session variables and start a time-consuming job, but don't want to block other requests, just call session_write_close() after reading the session data you need. Be aware that after calling it you can no longer modify session data: $_SESSION remains readable, but any changes will not be persisted.
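A minimal sketch of that pattern (the endpoint name and the slow job function are hypothetical):

    <?php
    // long-task.php -- read what you need, then release the lock before
    // starting the slow work.
    session_start();
    $userId = isset($_SESSION['user_id']) ? $_SESSION['user_id'] : null;

    session_write_close(); // other requests in this session can proceed now

    // $_SESSION is still readable here, but changes will not be saved.
    $result = runTimeConsumingRequestA($userId); // hypothetical slow call

    header('Content-Type: application/json');
    echo json_encode(array('result' => $result));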