Ajax calls queued while connected to Flash Socket - php

I'm running into a problem while running both a Flash socket and using Ajax to load pages. Both work fine separately. I am able to browse my site using the Ajax calls, or I am able to send/receive messages through the socket.
However, when the socket is connected, the Ajax calls seem to be put into a queue and never actually finish until I stop the socket. If I disconnect from the socket, or close the socket on the server side, the Ajax call immediately finishes and loads the page. The Ajax call never times out; it just waits, right up until I close the socket connection.
In JavaScript, I'm using jQuery's $.getJSON() function to load the pages (which I thought were asynchronous calls).
In Flash I'm using the basic ActionScript 3 Socket class:
import flash.events.Event;
import flash.events.IOErrorEvent;
import flash.events.ProgressEvent;
import flash.events.SecurityErrorEvent;
import flash.net.Socket;

this._socket = new Socket();
this._socket.addEventListener(Event.CONNECT, onConnectHandler);
this._socket.addEventListener(Event.CLOSE, onCloseHandler);
this._socket.addEventListener(IOErrorEvent.IO_ERROR, onIOErrorHandler);
this._socket.addEventListener(ProgressEvent.SOCKET_DATA, onDataHandler);
this._socket.addEventListener(SecurityErrorEvent.SECURITY_ERROR, onSecurityErrorHandler);
EDIT:
I've verified that no HTTP request is being made. It is in fact being queued by the browser for some reason. I also noticed that not only will it queue the Ajax requests, but it also will queue a browser refresh. If I hit the refresh button it hangs forever as well.
EDIT 2:
Actually, I was checking port 80 when I should have been checking port 443. A request is in fact being made to the server; it's just hanging for some reason. This leads me to believe there's an issue with the socket server (which is written in PHP) somehow making PHP queue the requests, or maybe Apache is queuing the requests because it sees PHP is already in use by the socket. I'm still not sure why the additional requests to PHP are not fulfilled until the socket is closed, but I'm fairly sure it has something to do with the PHP socket being held in an always-open state and blocking the other requests.

Apparently the issue is due to the two-connections-per-host limit that browsers implement. Since the socket uses one of the persistent connections to the host, the rest of the HTTP requests get bottlenecked.
The problem and solution are described here:
...if the browser was going to limit the number of
connections to a single host that the answer was simply to trick the
browser into thinking it was talking to more than one host. Turns out
doing this is rather trivial: simply add multiple CNAMEs for the same
host to DNS, and then reference those as the host for some of the
objects in the page.
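For illustration, a minimal sketch of that trick in PHP: page assets are referenced through a few hypothetical CNAME aliases (static1.example.com through static3.example.com) so the browser treats them as separate hosts and opens separate connection pools. The hostnames and the helper function are placeholders, not part of the original answer.

// Hypothetical helper: map each asset path to one of several CNAME aliases
// so requests are spread across hostnames and dodge the per-host limit.
function shardedUrl($path, $shards = 3)
{
    $n = (abs(crc32($path)) % $shards) + 1;        // stable alias per asset
    return "https://static{$n}.example.com" . $path;
}

echo shardedUrl('/js/app.js');                     // e.g. https://static2.example.com/js/app.js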

Related

How are realtime applications implemented? Websockets/PHP

I want to create a web application where the UI updates in real time (or as close to real time as you're going to get). Data for the UI comes from the server.
I don't want to use traditional HTTP requests where I constantly send requests to the server for new data. I'd rather have a connection open, and have the server push data to the client.
I believe this is the publisher/subscriber pattern.
I've heard people mention zeromq, React, and WebSockets, but from all the examples I've looked at I can't really find anything on this. For example, zeromq has examples that show a server and a client. Do I implement the server, and then use WebSockets on the UI end as the client?
How would something like this be implemented?
Traditional HTTP requests are still at the heart of all of this.
You can have regular HTTP requests:
- The user sends a request to the server
- The server responds to that request
There's also Ajax polling and Ajax long polling; the concepts are similar.
Ajax polling means an HTTP request is sent every X seconds to look for new information.
Example: fetching new comments for a comment section.
Ajax long polling is similar, but when you send a request to the server and there is no response ready to give to the client, you let the connection hang (for a defined period of time).
If new information comes in during that time, you are already waiting for it; otherwise, once the time expires, the process restarts. Instead of constantly going back and forth, you send a request, wait, and whether you receive a response or not, after a period of time you restart the process.
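As a rough sketch of the long-polling flow just described (the get_new_comments() function, the 30-second window, and the 0.5-second check interval are all made-up placeholders):

<?php
// Hypothetical long-poll endpoint: hold the request open until new data
// arrives or the window expires, then let the client immediately re-request.
set_time_limit(0);
$deadline = time() + 30;                    // assumed 30-second window

while (time() < $deadline) {
    $comments = get_new_comments();         // hypothetical data source
    if (!empty($comments)) {
        header('Content-Type: application/json');
        echo json_encode($comments);
        exit;                               // respond as soon as there is data
    }
    usleep(500000);                         // check again in 0.5 seconds
}

// Nothing new within the window: return an empty response so the client
// does not time out and can restart the poll.
header('Content-Type: application/json');
echo json_encode(array());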
A WebSocket connection still starts out as an HTTP request (the upgrade handshake).
The client does the initial work on the front end by opening a WebSocket connection to a destination.
This connection does not close, and it sends and receives real-time information back and forth.
Specific actions and replies from the server need to be programmed, with callbacks on the client side, for things to happen.
With WebSockets you can receive and transmit in real time; it's a full-duplex, bidirectional connection.
So yes, in case it wasn't clear:
You set up a WebSocket server that runs in a loop, waiting for connections.
When it receives one, there's a chat-like exchange between that server and the client; the client needs programmed callbacks to handle the server's responses.
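To make the "loop waiting for connections" part concrete, here is a bare-bones PHP accept loop using stream sockets. This is only the raw TCP part; a real WebSocket server also performs the HTTP upgrade handshake and frames every message (libraries such as Ratchet, mentioned in a later answer, handle that), and the port and broadcast behaviour here are just illustrative.

<?php
// Minimal illustration of a server loop accepting connections. Not a full
// WebSocket implementation (no handshake, no frame encoding), and it does
// not handle client disconnects.
$server = stream_socket_server('tcp://0.0.0.0:8080', $errno, $errstr);
if ($server === false) {
    die("Could not start server: $errstr ($errno)\n");
}

$clients = array();
while (true) {
    $conn = @stream_socket_accept($server, 1);        // wait up to 1s for a new client
    if ($conn !== false) {
        stream_set_blocking($conn, false);            // don't let reads stall the loop
        $clients[] = $conn;
    }
    foreach ($clients as $client) {
        $data = fread($client, 1024);                 // non-blocking read
        if ($data !== false && $data !== '') {
            foreach ($clients as $other) {
                fwrite($other, $data);                // push to every connected client
            }
        }
    }
}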

Does PHP automatically close the TCP connection after every request?

I have Apache 2.4/PHP 5.6.13 running on Windows Server 2008 R2.
I have an API connector that makes 1 call per second per user to read a messaging queue.
I am using setInterval(...., 1000) to make an Ajax request to a handler which does the actual API call.
The handler makes cURL calls to the API service to read the messaging queue.
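(For context, a stripped-down version of such a handler might look like the following; the API URL, token, and since parameter are placeholders, not the real service.)

<?php
// Hypothetical handler hit by setInterval(): one cURL call per poll.
$since = isset($_GET['since']) ? (int) $_GET['since'] : 0;

$ch = curl_init('https://api.example.com/messages?since=' . $since);
curl_setopt_array($ch, array(
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_HTTPHEADER     => array('Authorization: Bearer PLACEHOLDER_TOKEN'),
    CURLOPT_TIMEOUT        => 5,
));
$response = curl_exec($ch);
$error    = curl_error($ch);
curl_close($ch);                 // the handle and its TCP connection are released here

header('Content-Type: application/json');
echo $response !== false ? $response : json_encode(array('error' => $error));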
This works fine for 2 users, but now I have 10 users using the system, which means more API calls being sent from my server.
Many users (whether they are using the API caller or not) have been facing timeout errors. When I look at the PHP logs I see this fatal error:
[14-Aug-2015 16:37:08 UTC] PHP Fatal error: Uncaught exception 'PDOException' with message 'SQLSTATE[HY000] [2002] An operation on a socket could not be performed because the system lacked sufficient buffer space or because a queue was full.
I did some research on this issue and found out that it is not really a SQL error but rather a Windows error. It is explained here.
It seems that I will need to edit the Windows registry to correct the issue, as explained here, but I don't like touching the Windows registry, especially on a production server.
My question is: does PHP keep the TCP connection open, or does it close it after every request?
I have 10 users using the "API caller" and about 200 who are not. The only addition was 10 users making 10 API calls per second.
Assuming that PHP/cURL automatically closes the TCP connection, how could I be reaching some 5,000 connections from only 10 people using the API?
The problem lies in your application's architecture. Ajax polling is not scalable.
Short polling (what you do) is not scalable, because it just floods the server with requests. You have one request per second per user, which already gives 10 requests per second for 10 users. You are effectively running a DoS attack against your own server!
Long polling (also called comet) means that your server does not immediately respond to the request, but waits until there's a message to send or until a timeout is reached. This is better, because you have fewer requests now. But it is still not scalable, because on the server you will keep hammering the database.
WebSockets are what you are looking for. Your browser connects to the WebSocket server and keeps the connection open. It is a two-way communication channel that is always available to both sides. There are two more things to know:
- You need another server for WebSockets; Apache can't do it.
- On the server side you need an event system. Hammering the database is just not a solution.
Look into Ratchet as a PHP-based WebSocket daemon, and into Autobahn.js for the client side.
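For reference, a minimal Ratchet-style server sketch, assuming Ratchet has been installed via Composer (the Chat class and port 8080 are just illustrative):

<?php
// Sketch of a Ratchet-based chat server (assumes "composer require cboden/ratchet").
use Ratchet\MessageComponentInterface;
use Ratchet\ConnectionInterface;
use Ratchet\Server\IoServer;
use Ratchet\Http\HttpServer;
use Ratchet\WebSocket\WsServer;

require __DIR__ . '/vendor/autoload.php';

class Chat implements MessageComponentInterface {
    protected $clients;

    public function __construct() {
        $this->clients = new \SplObjectStorage();      // list of connected subscribers
    }
    public function onOpen(ConnectionInterface $conn) {
        $this->clients->attach($conn);
    }
    public function onMessage(ConnectionInterface $from, $msg) {
        foreach ($this->clients as $client) {
            $client->send($msg);                       // push to every subscriber
        }
    }
    public function onClose(ConnectionInterface $conn) {
        $this->clients->detach($conn);
    }
    public function onError(ConnectionInterface $conn, \Exception $e) {
        $conn->close();
    }
}

// Run with the PHP CLI, not through Apache.
IoServer::factory(new HttpServer(new WsServer(new Chat())), 8080)->run();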
Edit: Ratchet is unfortunately no longer maintained. I switched to node.js.
PHP database connections typically use the PDO base class. By default they are closed each time a request finishes (when the PHP script ends). You can find more information on this here: http://php.net/manual/en/pdo.connections.php.
You can force your database connection to be persistent, which is normally beneficial if you are going to reuse the database connection often.
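A brief sketch of what that looks like with PDO (the DSN and credentials are placeholders):

// Persistent connection: PHP tries to reuse the underlying database
// connection across requests instead of opening a new one every time.
$pdo = new PDO('mysql:host=127.0.0.1;dbname=app', 'user', 'secret', array(
    PDO::ATTR_PERSISTENT => true,
    PDO::ATTR_ERRMODE    => PDO::ERRMODE_EXCEPTION,
));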
Apache is (I'm assuming) like other servers: it constantly listens on a given port for incoming connections. It establishes the connection, reads the request, sends out a response, and then closes the connection.
Your error is caused either by using up too many connections (the OS will only allow so many) or by overflowing the buffer for the connection. Both of these can be inferred from your error message.

Does the server "shut down" when the browser is closed? PHP - MYSQL

I wrote a snippet of code that could update up to 10,000 rows and might take a few seconds to complete. If the file is accessed via an Ajax request (the POST query is sent to the PHP file) and the browser is then closed, does the file get fully executed? Assume it takes about 25 seconds to complete the request and the user might not wait that long. Is it good enough to "ping" this file and let the user browse along, or close the browser window, while the MySQL queries are taking place?
The request has 3 parts:
- A browser connected to the web server
- A PHP script that is executed by the server
- A query running in the DB server
When you close the browser, the connection with the server is closed. The server may or may not kill the running PHP script (if PHP is running as an Apache module, it will be killed unless ignore_user_abort is called). The web server may also have a time limit for the request and either kill the script, or just send the client a connection timeout message without killing the script, but without giving it the chance to send anything to the browser.
Here is the tricky part: the update is running in the database, and it won't be killed by the web server or by PHP.
So what you want is to ping a PHP script that executes a query, without the client waiting for the result. You may or may not want the query itself to be asynchronous (the PHP script not waiting for the query), but you have to tell the client that the request is fulfilled, for example by sending a Content-Length of 0 and flushing the output (the HTTP headers, actually), and run PHP with ignore_user_abort so it continues executing.
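A hedged sketch of that pattern (behaviour varies by SAPI; fastcgi_finish_request() exists only under PHP-FPM, while under mod_php the header/flush combination is what signals the client):

<?php
// Acknowledge the ping immediately, then keep working on the slow queries.
ignore_user_abort(true);        // keep running even if the browser disconnects
set_time_limit(0);              // no execution time limit for the long job

header('Content-Length: 0');
header('Connection: close');
while (ob_get_level() > 0) {
    ob_end_flush();             // flush any output buffers
}
flush();                        // push the headers / empty body to the client

if (function_exists('fastcgi_finish_request')) {
    fastcgi_finish_request();   // under PHP-FPM, formally finish the request
}

// ... the long-running MySQL update continues here ...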
Use ignore_user_abort to continue running the script even after the client has disconnected:
ignore_user_abort(true);
set_time_limit(0);
You can use connection_status to check whether the client has disconnected:
if (connection_status() != CONNECTION_NORMAL) { /* the client has disconnected */ }
Here's the answer for your question:
http://www.php.net/manual/en/features.connection-handling.php
Normally no, but your script passes into ABORTED status.
More details in the manual page about Connection handling:
http://www.php.net/manual/en/features.connection-handling.php
Internally in PHP a connection status is maintained. There are 3
possible states:
0 - NORMAL
1 - ABORTED
2 - TIMEOUT
When a PHP script is running normally, the NORMAL state is active. If the
remote client disconnects, the ABORTED state flag is turned on. A remote
client disconnect is usually caused by the user hitting his STOP button.
As soon as you close the browser, it disconnects from the server before getting the reply. I do not know exactly how different servers behave in this case, but I assume most servers will abort the thread that is working on the reply to the request.
Further, things can differ between operations, e.g. file I/O versus database operations. If it is an atomic database operation, my assumption is that it will complete anyhow.

Ajax Long Polling Restrictions

So a friend and I are building a web-based AJAX chat application with a jQuery and PHP core. Up to now, we've been using the standard procedure of calling the server every two seconds or so looking for updates. However, I've come to dislike this method, as it's not fast, nor is it "cost effective": there are tons of requests going back and forth to the server, even if no data is returned.
One of our project supporters recommended we look into a technique known as COMET, or more specifically, long polling. However, after reading about it in different articles and blog posts, I've found that it isn't all that practical when used with Apache servers. It seems that most people just say "It isn't a good idea", but don't give much in the way of specifics about how many requests Apache can handle at one time.
The whole purpose of PureChat is to provide people with a chat that looks great, goes fast, and works on most servers. As such, I'm assuming that about 96% of our users will be using Apache, and not Lighttpd or Nginx, which are supposedly better suited for long polling.
Getting to the Point:
In your opinion, is it better to continue using setInterval and repeatedly request new data? Or is it better to go with long polling, despite the fact that most users will be using Apache? Also, is it possible to get a more specific rundown on approximately how many people can be using the chat before an Apache server rolls over and dies?
As Andrew stated, a socket connection is the ultimate solution for asynchronous communication with a server, although only the most cutting-edge browsers support WebSockets at this point. socket.io is an open-source API you can use which will initiate a WebSocket connection if the browser supports it, but will fall back to a Flash alternative if the browser does not. This is transparent to the coder using the API, however.
Socket connections basically keep open communication between the browser and the server so that each can send messages to each other at any time. The socket server daemon would keep a list of connected subscribers, and when it receives a message from one of the subscribers, it can immediately send this message back out to all of the subscribers.
For socket connections, however, you need a socket server daemon running full time on your server. While this can be done with command-line PHP (no Apache needed), it is better suited to something like node.js, a non-blocking server-side JavaScript API.
node.js would also be better for what you are talking about, long polling. Basically, node.js is event-driven and single-threaded. This means you can keep many connections open without having to open as many threads, which would eat up tons of memory (Apache's problem). This allows for high availability. What you have to keep in mind, however, is that even if you were using a non-blocking web server like Nginx, PHP has many blocking network calls. Since it is running on a single thread, each (for instance) MySQL call would basically halt the server until a response for that MySQL call is returned. Nothing else would get done while this is happening, making your non-blocking server useless. If, however, you used a non-blocking language like JavaScript (node.js) for your network calls, this would not be an issue. Instead of waiting for a response from MySQL, it would set a handler function to handle the response whenever it becomes available, allowing the server to handle other requests while it is waiting.
For long polling, you would basically send a request and the server would wait up to 50 seconds before responding. It will respond sooner than 50 seconds if it has anything to report; otherwise it waits. If there is nothing to report after 50 seconds, it sends a response anyway so that the browser does not time out. The response triggers the browser to send another request, and the process starts over again. This allows for fewer requests and snappier responses, but again, it is not as good as a socket connection.

Forcing chat bot made with JAXL/XMPPHP to reconnect upon disconnection

I'm using the JAXL library to implement a Jabber chat bot written in PHP, which is then run as a background process using the PHP CLI.
Things work quite well, but I've been having a hard time figuring out how to make the chat bot reconnect upon disconnection!
I notice that when I leave it running overnight, it sometimes drops off and doesn't come back. I've experimented with $jaxl->connect(), $jaxl->startStream(), and $jaxl->startCore() after the jaxl_post_disconnect hook, but I think I'm missing something.
One solution would be to test your connection:
1) making a "ping" request to your page/controller or whatever
2) setTimeout(functionAjaxPing(), 10000);
3) then read the Ajax response and if == "anyStringKey" then your connection works find
4) else: reconnect() / errorMessage() / whatEver()
This is what IRC chat use i think.
But this will generate more traffic since the ping/ping request will be needed.
Hop this will help you a bit. :)
If you are using JAXL v3.x, all you need is to add a callback for the on_disconnect event.
You should also be using XEP-0199 (XMPP Ping). What this XEP does is periodically send XMPP pings to the connected Jabber server. It will also receive server pings and send back the required pong packet (for instance, if your client is not replying to server pings, jabber.org will drop your connection after some time).
Finally you MUST also use whitespace pings. A whitespace ping is a single space character sent to the server. This is often enough to make NAT devices consider the connection “alive”, and likewise for certain Jabber servers, e.g. Openfire. It may also make the OS detect a lost connection faster—a TCP connection on which no data is sent or received is indistinguishable from a lost connection.
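A rough sketch of registering that callback (the option and callback names follow the JAXL v3 examples as I remember them, so double-check against the library's README; the credentials are placeholders, and the "exit and let a supervisor restart the bot" strategy is only one option):

<?php
// Sketch only: verify the exact JAXL v3 API against its documentation.
require_once 'jaxl.php';

$client = new JAXL(array(
    'jid'  => 'bot@example.com',        // placeholder credentials
    'pass' => 'secret',
));

$client->add_cb('on_auth_success', function () {
    echo "connected and authenticated\n";
});

$client->add_cb('on_disconnect', function () {
    // Reconnect strategy: either build a fresh JAXL instance here, or simply
    // exit with a non-zero status and let cron/systemd restart the bot.
    exit(1);
});

$client->start();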
What I ended up doing was creating a crontab that simply executed the PHP script again.
In the PHP script I read a specific file for the PID of the last fork. If it exists, the script attempts to kill it. Then the script uses pcntl_fork() to fork the process (which is useful for daemonizing a PHP script anyway) and captures the new PID to a file. The forked process then logs in to Jabber with JAXL as usual.
After talking with the author of JAXL it became apparent this would be the easiest way to go about this, despite being hacky. The author may have worked on this particular flaw in more recent iterations, however.
One flaw of this particular method is that it requires pcntl_fork(), which is not compiled into PHP by default.
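A sketch of that kill-the-old-fork-and-record-the-new-PID flow (the PID file path is arbitrary, and it assumes the pcntl and posix extensions are available in the CLI build):

<?php
// Kill the previous instance if one is recorded, fork, and save the new PID.
$pidFile = '/tmp/chatbot.pid';                    // arbitrary location

if (file_exists($pidFile)) {
    $oldPid = (int) file_get_contents($pidFile);
    if ($oldPid > 0) {
        @posix_kill($oldPid, SIGTERM);            // best-effort kill of the old bot
    }
}

$pid = pcntl_fork();
if ($pid === -1) {
    fwrite(STDERR, "could not fork\n");
    exit(1);
} elseif ($pid > 0) {
    exit(0);                                      // parent exits so cron returns quickly
}

// Child process: record our PID, then log in to Jabber with JAXL as usual.
file_put_contents($pidFile, getmypid());
// ... JAXL connection code goes here ...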
