Struggling with (understanding) PHP Asynchronous Requests - php

I'm really struggling to understand how PHP asynchronous requests work. It seems to me that there cannot be any truly asynchronous behaviour in PHP, because to receive a response from an asynchronous request you have to delay execution of the script (blocking, essentially), which defeats the purpose, right?
What I'm trying to do from a page on my site is:
Initiate an AJAX request to a (local) controller to begin the remote API requests
Use a loop of GuzzleHTTP requests to the remote API
Upon completion of each request, send the result to a socket.io server, which then emits a message back to the listening client (the original page that initiated the AJAX request)
Now, I have this working - with one caveat: the GuzzleHTTP requests are not asynchronous, and the remote API is very slow (usually around 10 seconds per JSON response), so when there are 10 tasks my connection to the server is frozen for over a minute. Opening new tabs/windows and trying to access the site results in waiting until the original script run has completed.
This seems to indicate that I have misunderstood how HTTP requests work. I thought each individual browser window (or request) was completely separate as far as the server is concerned, but from what I'm seeing, perhaps that is not exactly the case?
This is why I started looking into asynchronous requests and discovered that they don't really seem all that asynchronous.
So what I'm really looking for is help filling the gaps/misunderstandings in my knowledge of:
Why the blocking occurs in the first place (in terms of new, completely separate requests)
Where I might look to implement a solution.
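Two notes may help fill those gaps. First, the cross-tab freezing described above is usually PHP's file-based session lock, not the web server serializing requests: each request that calls session_start() holds an exclusive lock on the session file until the script ends, and session_write_close() releases it early. Second, Guzzle can run the request loop concurrently with promises. A minimal sketch, assuming Guzzle 7 installed via Composer; the API URL and task list are placeholders:

```php
<?php
require 'vendor/autoload.php';

use GuzzleHttp\Client;
use GuzzleHttp\Promise\Utils;

// Release the session lock first, or other requests from the
// same browser session will block until this script finishes.
session_write_close();

$client = new Client(['timeout' => 30]);

// Hypothetical task list; each entry becomes one concurrent request.
$tasks = ['task1', 'task2', 'task3'];

$promises = [];
foreach ($tasks as $task) {
    $promises[$task] = $client->getAsync("https://api.example.com/{$task}")
        ->then(function ($response) use ($task) {
            // Forward each result to socket.io as soon as it arrives,
            // e.g. via an HTTP POST to the emitter endpoint.
            return [$task => json_decode((string) $response->getBody(), true)];
        });
}

// Wait for all requests; total wall time is roughly the slowest
// single request, not the sum of all of them.
$results = Utils::settle($promises)->wait();
```

With this shape, ten 10-second API calls complete in about 10 seconds rather than 100, and the session lock is released so other tabs stay responsive.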

Related

cURL proxy hangs for 30 seconds on the initial request

Firstly I will try to explain how the setup I am trying to debug works.
First off, an XHR/Ajax request is made from the client (JavaScript).
This request gets sent to a Windows/Apache/PHP server.
This server acts as a proxy: it accepts the request, processes the data, and forwards it to another server with cURL.
The cURL request is sent to a Windows IIS server, which accepts the data and returns some values to the PHP proxy server.
The PHP proxy server does some work with the values and then returns them to the client (JavaScript).
Some other information:
The purpose of this PHP cURL proxy is to act as a single sign-on authentication API.
The process is much inspired by this answer: How youtube gets logged in to gmail account without redirect?.
The PHP cURL proxy acts as the single sign-on API which stores authentication tokens.
The Windows IIS server is the place that username/passwords are sent to, and potentially returns an auth token.
The problem:
On the initial request there is a 30-second wait before a response is returned; it's almost as if the servers are asleep and need waking up. After the initial request successfully gets a response, subsequent requests return in less than a second. If no requests are made for a while (say 30 minutes), the slow initial request happens again.
A direct request to the PHP Auth API never has this long wait, and the same goes for the Windows IIS endpoint - but when cURL sits between the two, the long wait occurs.
Can anyone point me in the right direction here? Maybe it's something obvious that I haven't considered? Would "Keep connection alive" type parameters be of help?
I hope I have explained the problem properly and if more information is needed, just ask.
Thank you Stack Overflow!
Have you considered that, as this is the 'first' time you have called the API in a while, the API itself may have gone to sleep, and its own initialisation has startup steps which take a little time to run?
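If the delay turns out to be connection setup (slow DNS, or an IPv6 attempt timing out before falling back to IPv4 - a classic cause of an exactly-30-second stall) rather than the IIS application warming up, a few cURL options are worth experimenting with. A sketch, with the target URL as a placeholder:

```php
<?php
$ch = curl_init('https://iis.example.com/auth');

curl_setopt_array($ch, [
    CURLOPT_RETURNTRANSFER => true,
    // Skip the IPv6 attempt entirely; a fixed ~30 s stall on the
    // first request often points at an IPv6/DNS fallback.
    CURLOPT_IPRESOLVE      => CURL_IPRESOLVE_V4,
    // Keep the TCP connection alive for reuse within this script.
    CURLOPT_TCP_KEEPALIVE  => 1,
    // Fail fast instead of hanging indefinitely.
    CURLOPT_CONNECTTIMEOUT => 5,
    CURLOPT_TIMEOUT        => 15,
]);

$response = curl_exec($ch);
if ($response === false) {
    error_log('cURL error: ' . curl_error($ch));
}
curl_close($ch);
```

Timing the phases with curl_getinfo($ch) (e.g. the namelookup_time and connect_time entries) would also show which step is eating the 30 seconds.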

How are realtime applications implemented? Websockets/PHP

I want to create a web application where the UI updates in real time (or as close to real time as you're going to get). Data for the UI comes from the server.
I don't want to use traditional HTTP requests where I constantly send requests to the server for new data. I'd rather have a connection open, and have the server push data to the client.
I believe this is the publisher/subscriber pattern.
I've heard people mention zeromq, React, and WebSockets, but none of the examples I've looked at really cover this end to end. For example, zeromq has examples that show a server and a client. Do I implement the server, and then use WebSockets on the UI end as the client?
How would something like this be implemented?
All of this is still built on traditional HTTP requests.
You can have regular HTTP requests:
- User sends a request to the server
- Server responds to said request
There's also Ajax Polling and Ajax Long Polling, the concept is similar.
Ajax Polling means an HTTP request is sent every X seconds to look for new information.
Example: Fetch new Comments for a section.
Ajax Long Polling is similar, but when you send a request to the server and there is no response ready for the client, you let the connection hang (for a defined period of time).
If new information comes in during that time, you are already waiting for it; otherwise, after the time expires, the process restarts. Instead of constantly going back and forth, you send a request, wait, and - whether you receive a response or not - restart the process after a period of time.
A WebSocket connection still starts out as an HTTP request (the upgrade handshake), with the client opening the connection to a destination.
This connection will not close - it will receive and send real-time information back and forth.
Specific actions and replies from the server need to be programmed, with callbacks on the client side for things to happen.
With WebSockets you can receive and transmit in real time; it's a full-duplex, bi-directional connection.
So yes, in case it wasn't clear:
You set up a WebSocket server, running in a loop, waiting for connections.
When it receives one, there's a chat-like exchange between that server and the client, and the client needs programmed callbacks to handle the server's responses.
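As a concrete sketch of that server loop in PHP, here is a minimal chat-style server using the Ratchet library (one possible choice, not the only one; the class name and port are illustrative):

```php
<?php
require 'vendor/autoload.php';

use Ratchet\MessageComponentInterface;
use Ratchet\ConnectionInterface;
use Ratchet\Server\IoServer;
use Ratchet\Http\HttpServer;
use Ratchet\WebSocket\WsServer;

// Hypothetical handler: relays every message to all other clients.
class Chat implements MessageComponentInterface
{
    private \SplObjectStorage $clients;

    public function __construct()
    {
        $this->clients = new \SplObjectStorage();
    }

    public function onOpen(ConnectionInterface $conn): void
    {
        $this->clients->attach($conn); // new client joins the loop
    }

    public function onMessage(ConnectionInterface $from, $msg): void
    {
        foreach ($this->clients as $client) {
            if ($client !== $from) {
                $client->send($msg); // push to clients in real time
            }
        }
    }

    public function onClose(ConnectionInterface $conn): void
    {
        $this->clients->detach($conn);
    }

    public function onError(ConnectionInterface $conn, \Exception $e): void
    {
        $conn->close();
    }
}

// The loop itself: listen on port 8080 and wait for connections.
IoServer::factory(new HttpServer(new WsServer(new Chat())), 8080)->run();
```

On the UI end, the client would connect with `new WebSocket('ws://host:8080')` and register `onmessage`/`onopen` callbacks - those are the programmed callbacks mentioned above.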

Handling multiple HTTP GET requests on same TCP connection parallely

I am very new to learning PHP. I am trying to create a PHP script that would handle multiple JSON-encoded GET requests, coming from a client's software simultaneously over a single TCP connection to the PHP script.
Whilst reading around I encountered the "HTTP pipelining, processing requests in parallel" article on Stack Overflow. I would want to process the requests as and when they arrive, but by design pipelined requests are processed one by one.
The problem here is that if the client software makes 100 requests to the PHP script within a few milliseconds of each other, my PHP script takes some time to process each request, eventually adding an immense amount of time before the last request is processed and sent back to the requesting entity.
I am using the $_GET superglobal for retrieving the requests. I have looked for this information and can't seem to find anything substantial. I would appreciate any help on this; could anyone kindly guide me in the right direction?
Thank you in advance.
If you are using a web server like Apache, this is handled for you in exactly the manner you are describing: each incoming request is dispatched to its own worker process or thread.

PHP: how does http_get work?

Let's say I have an index.php where I use some form of HTTP GET/POST operation. How exactly is this executed by the server? Does it pause and wait for a response before completing execution? What if nothing is returned? What if I want the execution to continue and another script to be executed once the response arrives (as in Ajax)?
Enlightenment appreciated.
It's a simple matter of logic.
Does it pause and wait for a response before completing execution?
Yes.
What if nothing is returned?
Then you either get false or an empty string.
What if I want the execution to continue and another script to be executed once the response arrives (as in Ajax)?
You need to play with libevent (not for the soft-hearted - a lot harder than Ajax).
The server receives the request (let's say it's Apache); it recognizes that someone is requesting a .php file, so it knows it has to pass the request to the PHP engine. The PHP engine receives the request and parses the query string, request body and uploads into the $_GET / $_POST / $_FILES ($_REQUEST) superglobals so that they can be worked with.
During this time the execution is as follows:
Client requests a resource from the server.
Server receives it and does certain work to return response (in this case it invokes PHP engine).
The PHP engine does what it has to do and returns a result (be it a valid result or a parse error - the server doesn't care). Either way, the server will return a response with an appropriate status code (2xx, 3xx, 4xx, 5xx - you probably know of 404 already).
Once Apache receives the response from PHP, script execution stops.
It's not full-duplex communication where you can have socket open at all times to be used as a telephone wire (think Skype or any other IM).
In the case of JavaScript and async calls - since JS is an asynchronous language (it implements an event loop rather than a threaded model) - you specify a callback function to be executed when the response arrives. Depending on what you need, you can then send yet another request to the server.
However, there's the WebSocket protocol that enables full-duplex communication which leaves the connection open and where server can push the data to the client. It requires a different server than Apache / Nginx such as Node.js or a custom one.
Reading the docs, it seems that http_get is a blocking call, i.e. it will freeze your script until the HTTP transaction completes, fails, or times out. It seems you cannot set it to non-blocking mode, and PHP has no threads. I'm not an expert in PHP, but I think there's no easy way to continue the script.
Besides the question itself, if I were you I would really reconsider my choices. I feel like you're not thinking about it the right way, because I can hardly imagine a scenario where it's strictly necessary to perform an HTTP GET from PHP. It is done very, very rarely.
PHP scripts do not continue running in any fashion unless the page is still being passed to the browser. If your browser's "Loading" icon isn't spinning, then PHP has stopped being executed. They run and then terminate almost instantaneously (for reasonably-sized pages).
When you pass an HTTP GET/POST signal, you're passing it to a PHP script, but not one that is already running and waiting for a response. It's an entirely new instantiation of the script, which has to re-assign everything, re-include everything, and re-grab everything from the database, if you're using one.
The index.php will be executed and terminated right away.
If a request is POSTed to the PHP file, it will be executed (again, if it's index.php) and terminated.
You can use the exec() function to launch another script from your PHP file.
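For the fire-and-forget case - kick off a slow script without waiting for it - one common pattern with exec() is to background the worker (Unix-like systems only; the worker path here is a placeholder):

```php
<?php
// Redirecting output and appending '&' detaches the worker,
// so exec() returns immediately instead of waiting for it.
exec('php /path/to/worker.php > /dev/null 2>&1 &');

// The current script continues right away; the worker runs
// independently and can write its result to a file or database
// that a later (Ajax) request polls for.
echo "Job started\n";
```

Without the redirection and the trailing '&', exec() blocks until the command finishes, which is exactly the behaviour the question is trying to avoid.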

What does AJAX Push have to pay to get web pages more interactive?

(Sorry if the topic isn't titled properly; I'd appreciate it if somebody helped me make it more relevant to what I explain below.)
Recently I have become very interested in getting to know AJAX push and its basic ideas. I know the AJAX push technique makes web pages more interactive via the server side, but behind all the smooth interactivity there is apparently some hard work behind the scenes, both in the implementation and in how it deals with resources.
So in brief, let's forget about the implementation: I need somebody to explain to me how AJAX push works with respect to server connections, how much resource it uses, and any other aspects that should be taken into account when implementing this method.
I haven't done much research, so if you have any documents related to this, I'm more than happy to read them.
I don't really see how "AJAX push" is a distinct thing; I know long polling, and I think it's the same idea.
The downside of long polling is that your server can have a lot of unfinished requests open - not sockets, but actual requests. The idea of long polling:
Client makes request to Server
Server doesn't respond (doesn't finish the request)
Server waits until there's something to tell (to the Client)
When the server receives new information (from another Client or somewhere else) it prints that information to all waiting Clients (can be a lot!) and finishes the request
Client receives that information and immediately makes another request (with new timestamp and/or hash etc)
The downside: if 500 clients all do step 1 and nothing else, the server has 500 requests open and is just waiting to send some information and end those requests. Most servers don't allow 500 open HTTP requests...
If you have time, you might want to read this PDF. (It's long though.)
PS. The upside is that your server receives less HTTP requests (which means less HTTP overhead) and that information is only sent when there's something to send (which also means less overhead).
edit
Long polling example: http://hotblocks.nl/tests/ajax/poller/ with source http://hotblocks.nl/tests/ajax/poller/callback.php?source
explanation
The upside: less HTTP overhead, because there are fewer HTTP requests. Let's say the number of users is static (it is), at 500.
With long polling: 500 users make 1 request and then wait............ and then something changes and all 500 requests are finished (by the Server) and then 'renewed' (new HTTP request) by the Client.
Upside: less requests (1 per user per new information).
Downside: longer requests (very long idling, which means more open requests).
Without long polling: 500 users make a request; the server responds with "nothing new"; so 500 users make another request 500 ms/1 s/5 s later, and the server again responds with "nothing new", and so on until the server has actual news and the response contains something. And even then the clients immediately make a new request.
Upside: quick, short requests to the server that can be finished quickly.
Downside: many, many, many of those requests to the server (and every HTTP request => HTTP headers => MUCH overhead).
example explanation
The example is very (much too) easy:
You (Client) make a request to Server to fetch current info
Server gives you that info and a timestamp
Client receives info, uses it (show message) and makes new request with timestamp
Server compares Client timestamp with Server timestamp (filemtime of a file in this case)
If the file's change time is newer than the client's timestamp: print the new file contents
Client receives that info and the new Server timestamp
Step 3 again etc
The time between steps 4 and 5 can be very long. In an active chat it won't be (new information is added all the time). In a multiplayer game, it might be (seconds, not minutes).
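The filemtime comparison in steps 4-5 above can be sketched as a small helper; the file name, poll interval, and timeout are illustrative choices, not part of the original example:

```php
<?php
// Wait up to $timeout seconds for $file to become newer than the
// client's timestamp; return the new contents, or null on timeout.
function waitForUpdate(string $file, int $clientTs, int $timeout = 30): ?array
{
    $deadline = time() + $timeout;
    do {
        clearstatcache(true, $file);
        $serverTs = @filemtime($file) ?: 0;
        if ($serverTs > $clientTs) {
            // Step 5: file is newer, return contents + new timestamp.
            return ['ts' => $serverTs, 'data' => file_get_contents($file)];
        }
        usleep(250000); // re-check the file four times per second
    } while (time() < $deadline);
    return null; // timed out: client reconnects with its old timestamp
}
```

An endpoint would call something like waitForUpdate('messages.txt', (int) ($_GET['ts'] ?? 0)), emit the result as JSON, and the client would immediately re-request with the returned timestamp (step 3 again). Note that while this function idles, the request stays open - which is exactly the "many open requests" downside described earlier.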
This might get you started:
http://www.icefaces.org/main/ajax-java/ajaxpush.iface
However, this link is way better - it's in COMIC BOOK FORM =))
http://www.ape-project.org/comics/1/Chapter-I-common-let-s-push.html
Essentially it's very similar to AJAX, only the server can now talk to the clients rather than only responding to client requests.
