I have Apache 2.4/PHP 5.6.13 running on Windows Server 2008 R2.
I have an API connector which makes 1 call per second per user to read a messaging queue.
I am using setInterval(..., 1000) to make an AJAX request to a handler which does the actual API call.
The handler makes cURL calls to the API service to read the messaging queue.
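For reference, a stripped-down sketch of what the handler does (the endpoint and auth token below are placeholders, not my real values):
<?php
// handler.php - simplified sketch; the real endpoint and credentials differ
$apiToken = getenv('API_TOKEN');                              // placeholder auth token
$ch = curl_init('https://api.example.com/messages/queue');    // placeholder endpoint
curl_setopt_array($ch, array(
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_TIMEOUT        => 5,
    CURLOPT_HTTPHEADER     => array('Authorization: Bearer ' . $apiToken),
));
$response = curl_exec($ch);
curl_close($ch);   // the cURL handle is released here; whether the OS socket lingers is the question

header('Content-Type: application/json');
echo $response;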
This worked fine for 2 users, but now I have 10 users on the system, which means more API calls being sent from my server.
Many users, whether they use the API caller or not, have been facing timeout errors. When I look at the PHP logs I see this fatal error:
[14-Aug-2015 16:37:08 UTC] PHP Fatal error: Uncaught exception 'PDOException' with message 'SQLSTATE[HY000] [2002] An operation on a socket could not be performed because the system lacked sufficient buffer space or because a queue was full.
I did some research on this issue and found out that it is not really a SQL error but rather a Windows error. It is explained here.
It seems that I will need to edit the Windows Registry to correct the issue, as explained here, but I don't like touching the Windows Registry, especially on a production server.
My question is: does PHP keep the TCP connection open, or does it close it after every request?
I have 10 users using the "API caller" and about 200 who are not. The only addition was 10 users, i.e. 10 API calls per second.
Assuming that PHP/cURL automatically closes the TCP connection, how could I be reaching 5000 connections from only 10 people using the API?
The problem lies in your application's architecture. Ajax polling is not scalable.
Short polling (what you do now) is not scalable, because it just floods the server with requests. You have one request per second per user, which already gives 10 requests per second for 10 users. You have effectively set up a DoS attack against your own server!
Long polling (also called Comet) means that your server does not immediately respond to the request, but waits until there is a message to send or until a timeout is reached. This is better, because you have fewer requests now. But it is still not scalable, because on the server you will continue to hammer the database.
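For illustration only, a minimal long-polling handler might look something like this (check_queue_for_user() is a hypothetical helper, not something from your code):
<?php
// long-poll.php - hold the request open until a message arrives or the deadline passes
$userId   = isset($_GET['user']) ? (int) $_GET['user'] : 0;
$deadline = time() + 25;          // stay below typical proxy/server timeouts
$message  = null;

while (time() < $deadline) {
    $message = check_queue_for_user($userId);   // hypothetical helper that reads the queue
    if ($message !== null) {
        break;                                  // got something: respond immediately
    }
    usleep(500000);                             // wait 0.5 s instead of issuing a new HTTP request
}

header('Content-Type: application/json');
echo json_encode(array('message' => $message));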
WebSockets are what you are looking for. Your browser connects to the WebSocket server and keeps the connection open indefinitely. It is a two-way communication channel that is always available to both sides. There are two more things to know:
you need another server for WebSockets; Apache can't do it.
on the server side you need an event system; hammering the database is just not a solution.
Look into Ratchet as a PHP-based WebSocket daemon, and into Autobahn.js for the client side.
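A minimal Ratchet-based server might look roughly like this (the class name and port are arbitrary; treat it as a sketch rather than a drop-in solution):
<?php
require __DIR__ . '/vendor/autoload.php';

use Ratchet\MessageComponentInterface;
use Ratchet\ConnectionInterface;
use Ratchet\Server\IoServer;
use Ratchet\Http\HttpServer;
use Ratchet\WebSocket\WsServer;

// Pushes queue messages to connected clients instead of letting them poll
class QueuePusher implements MessageComponentInterface {
    protected $clients;

    public function __construct() {
        $this->clients = new \SplObjectStorage();
    }

    public function onOpen(ConnectionInterface $conn) {
        $this->clients->attach($conn);
    }

    public function onMessage(ConnectionInterface $from, $msg) {
        // react to client messages here if needed
    }

    public function onClose(ConnectionInterface $conn) {
        $this->clients->detach($conn);
    }

    public function onError(ConnectionInterface $conn, \Exception $e) {
        $conn->close();
    }
}

// Runs on its own port; Apache keeps serving the normal pages
$server = IoServer::factory(new HttpServer(new WsServer(new QueuePusher())), 8080);
$server->run();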
Edit: Ratchet is unfortunately no longer maintained. I switched to node.js.
PHP database connections use the PDO base class. By default they are closed each time a request finishes (i.e. when the PHP script finishes). You can find more information about this at http://php.net/manual/en/pdo.connections.php.
You can force your database connection to be persistent, which is normally beneficial if you are going to reuse the connection often.
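For example (the DSN and credentials here are placeholders):
<?php
// Persistent connection: PHP reuses the underlying connection across requests
// instead of opening a new one every time the script runs.
$pdo = new PDO('mysql:host=127.0.0.1;dbname=app', 'user', 'secret', array(   // placeholder DSN/credentials
    PDO::ATTR_PERSISTENT => true,
    PDO::ATTR_ERRMODE    => PDO::ERRMODE_EXCEPTION,
));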
Apache is (I'm assuming) like other servers: it is constantly listening on a given port for incoming connections. It establishes the connection, reads the request, sends out a response, and then closes the connection.
Your error is caused by using up too many connections (the OS will only allow so many) OR by overflowing the buffer for the connection. Both of these can be inferred from your error message. Keep in mind that a closed TCP connection lingers in TIME_WAIT for several minutes by default on Windows, and each polling request can open several sockets (the browser request, the cURL call, and the database connection), so at 10 requests per second the available ephemeral ports fill up much faster than you might expect.
Related
First of all, I'm using pthreads. So the scenario is this: there are game servers that send logs over UDP to an IP and port you give them. I'm building an application that will receive those logs, process them, and insert them into a MySQL database. Since I'm using blocking sockets (the number of servers will never go over 20-30), I'm thinking that I will create a thread for each socket that will receive and process logs for that socket. All the MySQL information that needs to be inserted into the database will be sent to a Redis queue, where it will get processed by another PHP process. Is this OK, or better, is it reliable?
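A rough sketch of the per-socket thread idea (using the pthreads extension; the class name, ports, and queue key below are just placeholders):
<?php
// One thread per game server: block on its UDP socket and push raw log lines to Redis.
class LogReceiver extends Thread {
    private $ip;
    private $port;

    public function __construct($ip, $port) {
        $this->ip   = $ip;
        $this->port = $port;
    }

    public function run() {
        $redis = new Redis();                  // one Redis connection per thread
        $redis->connect('127.0.0.1', 6379);

        $sock = socket_create(AF_INET, SOCK_DGRAM, SOL_UDP);
        socket_bind($sock, $this->ip, $this->port);

        while (true) {                         // blocking read, as described above
            socket_recvfrom($sock, $line, 2048, 0, $from, $fromPort);
            $redis->rPush('log_queue', $line); // a separate PHP worker does the MySQL inserts
        }
    }
}

$threads = array();
foreach (array(27500, 27501) as $port) {       // placeholder ports, one per game server
    $t = new LogReceiver('0.0.0.0', $port);
    $t->start();
    $threads[] = $t;
}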
Don't use PHP for long-running processes (the PHP script used for inserting, in your diagram). The language is designed for web requests (which die after a couple of milliseconds, or at most a few seconds). You will run into memory problems all the time.
I have an App Service in Azure: a PHP script that migrates data from a database (server1) to another database (an Azure DB in a virtual machine).
This script makes a lot of queries and requests, so it takes a long time, and the server (App Service) returns:
"500 - The request timed out. The web server failed to respond within
the specified time."
I found that it's something about "idle timeout." I would like to know how to increase this time.
In my test, I have tried the following so far:
Add ini_set('max_execution_time', 300); at the top of my PHP script.
App settings on portal: SCM_COMMAND_IDLE_TIMEOUT = 3600.
But nothing seems to work.
After some searching, I found a post by David Ebbo, in which he said:
There is a 230 second (i.e. a little less than 4 mins) timeout for
requests that are not sending any data back. After that, the client
gets the 500 you saw, even though in reality the request is allowed to
continue server side.
There is also a similar thread on SO that you can refer to here.
The suggestion for the migration is that you can leverage WebJobs to run PHP scripts as background processes on App Service Web Apps.
For more details, you can refer to https://learn.microsoft.com/en-us/azure/app-service-web/web-sites-create-web-jobs.
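A WebJob for this could be as simple as a run.php that performs the migration outside the HTTP request pipeline, so no request timeout applies. This is only a sketch: the connection strings and table are placeholders.
<?php
// run.php - deployed as an App Service WebJob
set_time_limit(0);   // be explicit, even though the CLI default is already unlimited

$source = new PDO('mysql:host=server1;dbname=app', 'user', 'secret');                 // placeholder DSN
$target = new PDO('sqlsrv:Server=myvm.example.com;Database=app', 'user', 'secret');   // placeholder DSN

$insert = $target->prepare('INSERT INTO customers (id, name) VALUES (?, ?)');          // placeholder table

foreach ($source->query('SELECT id, name FROM customers') as $row) {
    $insert->execute(array($row['id'], $row['name']));
}

echo "Migration finished\n";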
I am running into this problem:
I am sending a request to the server using AJAX, which takes some parameters in and on the server side will generate a PDF.
The generation of the PDF can take a lot of time depending on the data used.
The Elastic Load Balancer of AWS, after 60s of "idle" connection decides to drop the socket, and therefore my request fails in that case.
I know it's possible to increase the timeout in the ELB settings, but not only is my sysadmin against it, it's also a false solution and bad practice.
I understand the best way to solve the problem would be to send data through the socket to sort of "tell ELB" that I am still active. Sending a dummy request to the server every 30s doesn't work because of our architecture and the fact that the session is locked (i.e. we cannot have concurrent AJAX requests from the same session, otherwise one is pending until the other one finishes).
I tried just doing a GET request for files on the server, but it doesn't make a difference; I assume the "socket" being kept alive is the one used by the original AJAX call.
The function on the server is pretty linear and almost impossible to divide into multiple calls, and the idea of letting it run in the background and checking every 5 seconds until it's finished makes me uncomfortable in terms of resource control.
TL;DR: is there any elegant and efficient solution to keep a socket active while an AJAX request is pending?
Many thanks if anyone can help with this. I have found a couple of similar questions on SO, but both are answered with "call the Amazon team to ask them to increase the timeout in your settings", which sounds very bad to me.
Another approach is to divide the whole operation into two services:
The first service accepts an HTTP request for generating a PDF document. This service finishes immediately after the request is accepted, and it returns a UUID or URL for checking the result.
The second service accepts the UUID and returns the PDF document if it's ready. If the PDF document is not ready, this service can return an error code, such as HTTP 404.
Since you are using AJAX to call the server side, it will be easy for you to change your JavaScript to call the 2nd service when the 1st service finishes successfully. Will this work for your scenario?
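A bare-bones sketch of the two services (file names, paths, and the separate background worker that actually renders the PDF are all assumptions):
<?php
// start-pdf.php (hypothetical first service): accept the job and return a token immediately
$jobId = bin2hex(openssl_random_pseudo_bytes(16));
file_put_contents("/var/pdf-jobs/$jobId.json", json_encode($_POST)); // a background worker picks this up
header('Content-Type: application/json');
echo json_encode(array('id' => $jobId, 'status_url' => '/pdf-status.php?id=' . $jobId));

<?php
// pdf-status.php (hypothetical second service): hand back the PDF once the worker has produced it
$jobId = isset($_GET['id']) ? preg_replace('/[^a-f0-9]/', '', $_GET['id']) : '';
$file  = "/var/pdf-output/$jobId.pdf";

if ($jobId !== '' && is_file($file)) {
    header('Content-Type: application/pdf');
    readfile($file);
} else {
    http_response_code(404);   // not ready yet; the client simply asks again later
}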
Have you tried following the troubleshooting guide for ELB? The relevant part is quoted below:
HTTP 504: Gateway Timeout
Description: Indicates that the load balancer closed a connection
because a request did not complete within the idle timeout period.
Cause 1: The application takes longer to respond than the configured
idle timeout.
Solution 1: Monitor the HTTPCode_ELB_5XX and Latency metrics. If there
is an increase in these metrics, it could be due to the application
not responding within the idle timeout period. For details about the
requests that are timing out, enable access logs on the load balancer
and review the 504 response codes in the logs that are generated by
Elastic Load Balancing. If necessary, you can increase your capacity
or increase the configured idle timeout so that lengthy operations
(such as uploading a large file) can complete.
Cause 2: Registered instances closing the connection to Elastic Load
Balancing.
Solution 2: Enable keep-alive settings on your EC2 instances and set
the keep-alive timeout to greater than or equal to the idle timeout
settings of your load balancer.
I'm using the JAXL library to implement a Jabber chat bot written in PHP, which is then run as a background process using the PHP CLI.
Things work quite well, but I've been having a hard time figuring out how to make the chat bot reconnect upon disconnection!
I notice that when I leave it running overnight it sometimes drops off and doesn't come back. I've experimented with $jaxl->connect(), $jaxl->startStream(), and $jaxl->startCore() after the jaxl_post_disconnect hook, but I think I'm missing something.
One solution would be to test your connection:
1) making a "ping" request to your page/controller or whatever
2) setTimeout(functionAjaxPing, 10000);
3) then read the Ajax response and if == "anyStringKey" then your connection works fine
4) else: reconnect() / errorMessage() / whatEver()
This is what IRC chat uses, I think.
But this will generate more traffic, since the ping/pong requests will be needed.
Hope this will help you a bit. :)
If you are using JAXL v3.x, all you need is to add a callback for the on_disconnect event.
You should also be using XEP-0199 XMPP Ping. What this XEP does is periodically send out XMPP pings to the connected Jabber server. It will also receive server pings and send back the required pong packet (for instance, if your client is not replying to server pings, jabber.org will drop your connection after some time).
Finally you MUST also use whitespace pings. A whitespace ping is a single space character sent to the server. This is often enough to make NAT devices consider the connection “alive”, and likewise for certain Jabber servers, e.g. Openfire. It may also make the OS detect a lost connection faster—a TCP connection on which no data is sent or received is indistinguishable from a lost connection.
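Something along these lines (a sketch based on JAXL v3's callback API; the account details are placeholders, exact option names can differ between versions, and whether start() can simply be called again from the callback depends on your setup):
<?php
require_once 'jaxl.php';

$client = new JAXL(array(
    'jid'  => 'bot@example.com',   // placeholder account
    'pass' => 'secret',
));

// XEP-0199 pings keep the connection from silently going stale
$client->require_xep(array('0199'));

// reconnect instead of exiting when the stream drops
$client->add_cb('on_disconnect', function() use ($client) {
    sleep(5);            // small back-off before retrying
    $client->start();
});

$client->start();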
What I ended up doing was creating a crontab that simply executed the PHP script again.
In the PHP script I read a specific file for the PID of the last fork. If it exists, the script attempts to kill it. Then the script uses pcntl_fork() to fork the process (which is useful for daemonizing a PHP script anyway) and captures the new PID to a file. The fork then logs in to Jabber with JAXL as usual.
After talking with the author of JAXL it became apparent this would be the easiest way to go about this, despite being hacky. The author may have worked on this particular flaw in more recent iterations, however.
One flaw of this particular method is that it requires pcntl_fork(), which is not compiled into PHP by default.
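For reference, the respawn logic looks roughly like this (paths are placeholders and error handling is trimmed):
<?php
$pidFile = '/var/run/jabber-bot.pid';   // placeholder path

// kill the previous fork if it is still running
if (is_file($pidFile)) {
    $oldPid = (int) file_get_contents($pidFile);
    if ($oldPid > 0 && posix_kill($oldPid, 0)) {   // signal 0 only checks that the process exists
        posix_kill($oldPid, SIGTERM);
    }
}

$pid = pcntl_fork();
if ($pid === -1) {
    exit(1);                                // fork failed
} elseif ($pid > 0) {
    file_put_contents($pidFile, $pid);      // parent records the child's PID and exits (cron run is done)
    exit(0);
}

// child process: daemonized, now log in to Jabber with JAXL as usual
// ... start the JAXL client here ...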
I'm running into a problem while running both a Flash socket and using Ajax to load pages. Both work fine separately. I am able to browse my site using the Ajax calls, or I am able to send/receive messages through the socket.
However, when the socket is connected, for some reason the Ajax calls seemingly get put into a queue and never actually finish until I stop the socket. If I disconnect from the socket, or close the socket on the server side, the Ajax call immediately finishes and loads the page. The Ajax call never times out; it just waits forever, right up until I close the socket connection.
In JavaScript, I'm using jQuery's $.getJSON() function to load the pages (which I thought were asynchronous calls).
In Flash I'm using the basic ActionScript 3 Socket class:
this._socket = new Socket();
this._socket.addEventListener(Event.CONNECT, onConnectHandler);
this._socket.addEventListener(Event.CLOSE, onCloseHandler);
this._socket.addEventListener(IOErrorEvent.IO_ERROR, onIOErrorHandler);
this._socket.addEventListener(ProgressEvent.SOCKET_DATA, onDataHandler);
this._socket.addEventListener(SecurityErrorEvent.SECURITY_ERROR, onSecurityErrorHandler);
EDIT:
I've verified that no HTTP request is being made. It is in fact being queued by the browser for some reason. I also noticed that not only will it queue the Ajax requests, but it also will queue a browser refresh. If I hit the refresh button it hangs forever as well.
EDIT 2:
Actually, I was checking port 80 when I should have been checking port 443. There is actually a request being made to the server; it's just hanging for some reason. This leads me to believe that there's an issue where the socket (which is using PHP) somehow makes the PHP processor queue the requests, or maybe Apache is queuing the requests since it sees PHP is being used by the socket. I'm still not sure why the additional requests to the PHP processor are not being fulfilled until the socket is closed, but I'm pretty sure it has something to do with the fact that the PHP socket is in an always-open state and blocking the other requests.
Apparently the issue is due to the two-connections-per-host limit that browsers implement. Since the socket uses one of the persistent connections to the host, the rest of the HTTP requests get bottlenecked.
The problem and solution are described here:
...if the browser was going to limit the number of
connections to a single host that the answer was simply to trick the
browser into thinking it was talking to more than one host. Turns out
doing this is rather trivial: simply add multiple CNAMEs for the same
host to DNS, and then reference those as the host for some of the
objects in the page.