Increase idle timeout - php

I have an App Service in Azure: a PHP script that migrates data from one database (server1) to another database (an Azure DB on a virtual machine).
The script makes a lot of queries and requests, so it takes a long time, and the App Service returns:
"500 - The request timed out. The web server failed to respond within
the specified time."
I found that it's something about "idle timeout." I would like to know how to increase this time.

So far I have tried the following:
Adding ini_set('max_execution_time', 300); at the top of my PHP script.
Setting SCM_COMMAND_IDLE_TIMEOUT = 3600 in the App settings on the portal.
But nothing seems to work.
After some searching, I found a post by David Ebbo, in which he said:
There is a 230 second (i.e. a little less than 4 mins) timeout for
requests that are not sending any data back. After that, the client
gets the 500 you saw, even though in reality the request is allowed to
continue server side.
There is also a similar thread on SO, which you can refer to here.
The suggestion for the migration is to leverage WebJobs to run PHP scripts as background processes on App Service Web Apps.
For more details, you can refer to https://learn.microsoft.com/en-us/azure/app-service-web/web-sites-create-web-jobs.
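For example, a triggered WebJob can be as simple as a run.php at the root of the zip you upload. A minimal sketch, assuming a PDO-based batch copy (the connection strings, table name and batch logic below are placeholders, not your actual migration):

    <?php
    // run.php - executed by the WebJob host as a background process, so it is
    // not subject to the 230-second front-end request timeout a page hit has.
    set_time_limit(0);

    // Placeholder connection details - replace with your own servers/credentials.
    $src = new PDO('mysql:host=server1;dbname=source_db', 'user', 'pass');
    $dst = new PDO('mysql:host=azure-vm-host;dbname=target_db', 'user', 'pass');

    $offset = 0;
    $batch  = 500;
    do {
        $rows = $src->query("SELECT * FROM some_table LIMIT $batch OFFSET $offset")
                    ->fetchAll(PDO::FETCH_ASSOC);
        foreach ($rows as $row) {
            // ... transform and insert $row into $dst here ...
        }
        $offset += $batch;
        echo "migrated $offset rows\n";   // shows up in the WebJob log
    } while (count($rows) === $batch);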


Keep Elastic Load Balancer connection alive during long AJAX request

I am running into this problem:
I am sending a request to the server using AJAX, which takes some parameters in and will generate a PDF on the server side.
Generating the PDF can take a lot of time, depending on the data used.
AWS's Elastic Load Balancer, after 60s of an "idle" connection, decides to drop the socket, and therefore my request fails in that case.
I know it's possible to increase the timeout in the ELB settings, but not only is my sysadmin against it, it's also a false solution and bad practice.
I understand the best way to solve the problem would be to send data through the socket to sort of "tell ELB" that I am still active. Sending a dummy request to the server every 30s doesn't work because of our architecture and the fact that the session is locked (i.e. we cannot have concurrent AJAX requests from the same session, otherwise one is pending until the other one finishes).
I tried just doing a GET request for files on the server, but it doesn't make a difference; I assume the "socket" that matters is the one used by the original AJAX call.
The function on the server is pretty linear and almost impossible to divide into multiple calls, and the idea of letting it run in the background and checking every 5 seconds until it's finished makes me uncomfortable in terms of resource control.
TL;DR: is there any elegant and efficient solution to keep a socket active while an AJAX request is pending?
Many thanks if anyone can help with this. I have found a couple of similar questions on SO, but both are answered with "call the Amazon team to ask them to increase the timeout in your settings", which sounds very bad to me.
Another approach is to divide the whole operation into two services:
The first service accepts an HTTP request for generating a PDF document. It finishes immediately after the request is accepted and returns a UUID or URL for checking the result.
The second service accepts the UUID and returns the PDF document if it's ready. If the PDF document is not ready, this service can return an error code, such as HTTP 404.
Since you are using AJAX to call the server side, it will be easy to change your JavaScript to call the second service once the first service has finished successfully. Will this work for your scenario?
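A rough sketch of what those two endpoints might look like in PHP (the file names, paths and the background worker are assumptions for illustration, not a complete implementation):

    <?php
    // request-pdf.php - service 1: accept the job and return a UUID immediately.
    $jobId = bin2hex(random_bytes(16));
    file_put_contents("/var/pdf-jobs/$jobId.json", json_encode($_POST));
    // A background worker (cron, queue, ...) picks the job up and writes
    // /var/pdf-results/$jobId.pdf when it is done - not shown here.
    header('Content-Type: application/json');
    echo json_encode(array('job' => $jobId));

    <?php
    // fetch-pdf.php - service 2: return the PDF if it's ready, otherwise 404.
    $jobId = preg_replace('/[^a-f0-9]/', '', isset($_GET['job']) ? $_GET['job'] : '');
    $path  = "/var/pdf-results/$jobId.pdf";
    if (!is_file($path)) {
        http_response_code(404);   // the client keeps polling until the file exists
        exit;
    }
    header('Content-Type: application/pdf');
    readfile($path);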
Have you tried following the ELB troubleshooting guide? I've quoted the relevant part below:
HTTP 504: Gateway Timeout
Description: Indicates that the load balancer closed a connection
because a request did not complete within the idle timeout period.
Cause 1: The application takes longer to respond than the configured
idle timeout.
Solution 1: Monitor the HTTPCode_ELB_5XX and Latency metrics. If there
is an increase in these metrics, it could be due to the application
not responding within the idle timeout period. For details about the
requests that are timing out, enable access logs on the load balancer
and review the 504 response codes in the logs that are generated by
Elastic Load Balancing. If necessary, you can increase your capacity
or increase the configured idle timeout so that lengthy operations
(such as uploading a large file) can complete.
Cause 2: Registered instances closing the connection to Elastic Load
Balancing.
Solution 2: Enable keep-alive settings on your EC2 instances and set
the keep-alive timeout to greater than or equal to the idle timeout
settings of your load balancer.

Server overload due to multiple xhr requests

Recently I started experiencing performance issues with my online application hosted on Bluehost.
I have an online form that takes a company name, with an "onKeyUp" event handler tied to that field. Every time you type a character into the field, it sends a request to the server, which runs multiple MySQL queries to get the data. The MySQL queries all together take about 1-2 seconds. But since a request is sent after every character typed, this easily overloads the server.
My solution was to cancel the previous XHR request before sending a new one, and it seemed to work fine (for about a year) until today. I'm not sure whether Bluehost changed any configuration on the server (I have a VPS) or any PHP/Apache settings, but right now my application is very slow due to the number of users I have.
I would understand a gradual decrease in performance caused by database growth, but this happened suddenly over the weekend and speeds dropped about 10 times: a usual request that took about 1-2 seconds before now takes 10-16 seconds.
I connected to the server via SSH and ran a stress test, sending lots of queries to see what the process monitor (top) would show. As I expected, for every new request a PHP process was created and put in a queue for processing. This queueing, apparently, accounted for most of the wait time.
Now I'm confused: is it possible that before (hypothetical changes on the server) every XHR abort was actually causing the PHP process to quit, reducing the additional load on the server and therefore making it faster, and that for some reason this no longer happens?
I have WAMP installed on Windows 7 as my test environment, and when I export the same database and run the stress test locally it is fast, just like it used to be on the server before. But on Windows I don't have a process monitor as handy as top, so I cannot see whether PHP processes are actually created and killed accordingly.
I'm not sure how to troubleshoot this at this point.
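One thing I plan to try is a small probe script to check whether the PHP process notices the abort at all. The file name and timings below are made up; it's just a sketch:

    <?php
    // abort-probe.php - call it via XHR, abort the XHR after a few seconds,
    // then check the error log to see whether the process noticed.
    ignore_user_abort(true);       // keep running after the abort so we can log it
    set_time_limit(40);
    ob_implicit_flush(true);

    for ($i = 0; $i < 30; $i++) {
        echo ' ';                  // PHP only detects the abort while sending output
        flush();
        if (connection_aborted()) {
            error_log("abort-probe: client aborted after ~{$i}s");
            exit;
        }
        sleep(1);
    }
    error_log('abort-probe: ran to completion, no abort detected');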

Does PHP automatically close the TCP connection after every request?

I have Apache 2.4/PHP 5.6.13 running on Windows Server 2008 R2.
I have an API connector which makes 1 call per second per user to read a messaging queue.
I am using setInterval(...., 1000) to make an AJAX request to a handler which does the actual API call.
The handler makes cURL calls to the API service to read the messaging queue.
This works fine for 2 users, but now I have 10 users using the system, which means more API calls being sent from my server.
Many users, whether using the "API caller" or not, have been facing a timeout error. When I look at the PHP logs I see this fatal error:
[14-Aug-2015 16:37:08 UTC] PHP Fatal error: Uncaught exception 'PDOException' with message 'SQLSTATE[HY000] [2002] An operation on a socket could not be performed because the system lacked sufficient buffer space or because a queue was full.
I did some research on this issue and found out that it is not really a SQL error but rather a Windows error. It is explained here.
It seems that I will need to edit the Windows Registry to correct the issue, as explained here, but I don't like touching the Windows Registry, especially on a production server.
My question is: does PHP keep the TCP connection open, or does it close it after every request?
I have 10 users using the "API caller" and about 200 who are not. The only addition was 10 users / 10 API calls per second.
Assuming that PHP/cURL automatically closes the TCP connection, how could I be reaching 5000 connections from only 10 people using the API?
The problem lies in your application's architecture. AJAX polling is not scalable.
Short polling (what you do) is not scalable because it just floods the server with requests. You have one request per second per user, which already gives 10 requests per second for 10 users. You have set up a DoS attack against your own server!
Long polling (also called comet) means that your server does not immediately respond to the request, but waits until there is a message to send or until a timeout is reached. This is better because you have fewer requests now, but it is still not scalable, because on the server you will keep hammering the database.
WebSockets are what you are looking for. Your browser connects to the WebSocket server and keeps the connection open. It is a two-way communication channel that is always available to both sides. There are two more things to know:
You need another server for WebSockets; Apache can't do it.
On the server side you need an event system. Hammering the database is just not a solution.
Look into Ratchet as a PHP-based WebSocket daemon, and into Autobahn.js for the client side.
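For what it's worth, a minimal Ratchet daemon looks roughly like this (assuming the cboden/ratchet Composer package; the relay logic is only a placeholder for your own message handling):

    <?php
    // ws-server.php - run from the CLI: php ws-server.php
    require __DIR__ . '/vendor/autoload.php';

    use Ratchet\ConnectionInterface;
    use Ratchet\Http\HttpServer;
    use Ratchet\MessageComponentInterface;
    use Ratchet\Server\IoServer;
    use Ratchet\WebSocket\WsServer;

    class QueueRelay implements MessageComponentInterface
    {
        protected $clients;

        public function __construct()
        {
            $this->clients = new \SplObjectStorage();
        }

        public function onOpen(ConnectionInterface $conn)
        {
            $this->clients->attach($conn);
        }

        public function onMessage(ConnectionInterface $from, $msg)
        {
            // Placeholder: push new queue messages to every connected client
            // instead of letting each browser poll the server once a second.
            foreach ($this->clients as $client) {
                if ($client !== $from) {
                    $client->send($msg);
                }
            }
        }

        public function onClose(ConnectionInterface $conn)
        {
            $this->clients->detach($conn);
        }

        public function onError(ConnectionInterface $conn, \Exception $e)
        {
            $conn->close();
        }
    }

    // Listens on its own port (8080 here); Apache is not involved.
    IoServer::factory(new HttpServer(new WsServer(new QueueRelay())), 8080)->run();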
Edit: Ratchet is unfortunately no longer maintained. I switched to node.js.
Your PHP database connections use the PDO base class. By default they are closed each time a request finishes (when the PHP script ends). You can find more information on this here: http://php.net/manual/en/pdo.connections.php.
You can force your database connection to be persistent, which is normally beneficial if you are going to reuse the connection often.
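With PDO that is a single attribute on the connection (the DSN and credentials below are placeholders):

    <?php
    // A persistent connection is cached by the PHP worker and reused across
    // requests instead of opening a new TCP socket every time.
    $pdo = new PDO(
        'mysql:host=127.0.0.1;dbname=app',   // placeholder DSN
        'user',
        'pass',
        array(PDO::ATTR_PERSISTENT => true)
    );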
Apache is (I'm assuming) like other servers: it constantly listens on a given port for incoming connections. It establishes the connection, reads the request, sends out a response and then closes the connection.
Your error is caused either by taking up too many connections (the OS will only allow so many) or by overflowing the buffer for the connection. Both of these can be inferred from your error message.

Browser Timing out waiting for Soap Client

I'm working on an application which gets some data from a web service using a PHP SOAP client. The web service accesses the client's SQL Server, which has very slow performance (some requests take several minutes to run).
Everything works fine for the smaller requests, but if the browser has been waiting for 2 minutes, it prompts me to download a blank file.
I've increased the PHP max_execution_time, memory_limit and default_socket_timeout, but the browser always seems to stop waiting at exactly 2 minutes.
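For reference, the settings I raised look roughly like this (the values shown are just examples of the kind of increase):

    ini_set('max_execution_time', '600');       // example value
    ini_set('memory_limit', '512M');            // example value
    ini_set('default_socket_timeout', '600');   // socket read timeout used by the SOAP call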
Any ideas on how to get the browser to hang around indefinitely?
You could change your architecture from pull to push. Then the user can carry on using your web application and be notified when the data is ready.
Or, as a simple workaround (not ideal), if you are able to modify the SOAP server you could add another web service that checks whether the data is ready; on the client you could call this every 30 seconds to keep checking whether the data is available rather than waiting.
The web server was timing out; in my case, Apache. I initially thought it was something else because I had increased the timeout value in httpd.conf and it was still stopping after two minutes. However, I'm using Zend Server, which has an additional configuration file that was setting the timeout to 120 seconds. I increased this and the browser no longer stops after two minutes.

CURL fails after many runs saying "could not establish connection" or "connect() timed out"

I'm trying to index many hundreds of web pages.
In Short
Calling a PHP script using a cron job
Getting some (only around 15) of the least recently updated URLs
Querying these URLs using cURL
The Problem
In development everything went fine. But when I started to index much more than a few test pages, cURL refused to work after some runs. It does not get any data from the remote server.
Error messages
These are the errors cURL printed out (not all at once, of course):
couldn't connect to host
Operation timed out after 60000 milliseconds with 0 bytes received
I'm working on a V-Server and tried to connect to the remote server using Firefox or wget as well: also nothing. But when connecting to that remote server from my local machine, everything works fine.
After waiting a few hours, it works again for some runs.
To me it seems like a problem on the remote server, or some DDoS protection or something like that. What do you guys think?
You should be using proxies when you send out that many requests, as your IP can be blocked by the site's DDoS protection or similar setups.
Here are some things to note (what I used for scraping data from websites); a rough sketch follows the list:
1. Use proxies.
2. Use random user agents.
3. Use random referers.
4. Use a random delay in the cron schedule.
5. Use a random delay between requests.
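A rough sketch of points 1-5 in cURL (the user agents, referers and proxy entry are placeholder values, and $urls stands for the ~15 URLs fetched per run):

    <?php
    $userAgents = array('Mozilla/5.0 (Windows NT 10.0; Win64; x64)',
                        'Mozilla/5.0 (X11; Linux x86_64)');
    $referers   = array('https://www.google.com/', 'https://www.bing.com/');

    foreach ($urls as $url) {
        $ch = curl_init($url);
        curl_setopt_array($ch, array(
            CURLOPT_RETURNTRANSFER => true,
            CURLOPT_CONNECTTIMEOUT => 20,
            CURLOPT_TIMEOUT        => 60,
            CURLOPT_USERAGENT      => $userAgents[array_rand($userAgents)],
            CURLOPT_REFERER        => $referers[array_rand($referers)],
            // CURLOPT_PROXY       => 'proxy-host:port',   // point 1, if you use proxies
        ));
        $html = curl_exec($ch);
        if ($html === false) {
            error_log('cURL failed for ' . $url . ': ' . curl_error($ch));
        }
        curl_close($ch);
        sleep(rand(5, 20));            // point 5: random delay between requests
    }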
What I would do is make the script run forever and add a sleep in between:
    ignore_user_abort(1);    // keep running even if the visitor who triggered it leaves
    set_time_limit(0);       // remove PHP's execution time limit
Just trigger it by visiting the URL for a second and it will run forever.
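Expanded a little, that approach might look like this sketch (fetch_due_urls() and index_url() are placeholders for your own code):

    <?php
    ignore_user_abort(1);          // keep running after the triggering visitor leaves
    set_time_limit(0);             // no execution time limit

    while (true) {
        // Placeholder: load the ~15 least recently updated URLs from your database.
        $urls = fetch_due_urls(15);
        foreach ($urls as $url) {
            index_url($url);       // placeholder: your existing cURL + parsing code
            sleep(rand(3, 10));    // spread the requests out
        }
        sleep(60);                 // pause before checking for the next batch
    }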
How often is the script run? It really could be triggering some DoS-like protection. I would recommend implementing a random delay so the requests appear more "natural".
