First, I'll try to explain how the setup I'm trying to debug works:
1. An XHR/Ajax request is made from the client (JavaScript).
2. The request is sent to a Windows/Apache/PHP server.
3. That server acts as a proxy: it accepts the request, processes the data, and forwards it on to another server with cURL.
4. The cURL request goes to a Windows IIS server, which accepts the data and returns some values to the PHP proxy server.
5. The PHP proxy server does some work with the values and then returns them to the client (JavaScript).
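Roughly, the proxy step (steps 3 and 4 above) looks like this; a simplified sketch where the URL and forwarded fields are placeholders, not the real setup:

```php
<?php
// Simplified sketch of the proxy step. The IIS URL and the forwarded
// fields are placeholders for the real setup.
$ch = curl_init('https://iis-server.example.com/auth');
curl_setopt_array($ch, [
    CURLOPT_POST           => true,
    CURLOPT_POSTFIELDS     => http_build_query($_POST), // pass the client's data on
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_TIMEOUT        => 60,
]);
$response = curl_exec($ch);
curl_close($ch);

// Hand the IIS server's values back to the XHR caller
header('Content-Type: application/json');
echo $response;
```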
Some other information:
- The PHP cURL proxy serves as a single sign-on authentication API.
- The process is largely inspired by this answer: How youtube gets logged in to gmail account without redirect?.
- The PHP cURL proxy acts as the single sign-on API that stores authentication tokens.
- The Windows IIS server is where usernames/passwords are sent, and it potentially returns an auth token.
The problem:
On the initial request, there is a 30-second wait before a response is returned; it's almost as if the servers are asleep and need waking up. After the initial request successfully gets a response, subsequent requests return in under a second. If no requests are made for a while (say 30 minutes), the slow initial request happens again.
A direct request to the PHP auth API never has this long wait, and the same goes for the Windows IIS endpoint; the long wait only occurs when there's a cURL hop between the two.
Can anyone point me in the right direction here? Maybe it's something obvious that I haven't considered? Would "keep connection alive" type parameters be of help?
I hope I have explained the problem properly; if more information is needed, just ask.
Thank you Stack Overflow!
Have you considered that, as this is the 'first' time you have called the API in a while, the API itself may have just gone to sleep, and its own initialisation has startup steps which take a little time to run?
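One way to check is to break the 30 seconds down with cURL's own timing info (a diagnostic sketch; the URL is a placeholder). If starttransfer_time dominates, the backend really is doing slow startup work; if namelookup_time or connect_time dominates, look at DNS or the network instead:

```php
<?php
// Diagnostic sketch: time each phase of the proxied request.
$ch = curl_init('https://iis-server.example.com/auth'); // placeholder URL
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_exec($ch);
$info = curl_getinfo($ch);
curl_close($ch);

printf(
    "DNS: %.3fs  connect: %.3fs  first byte: %.3fs  total: %.3fs\n",
    $info['namelookup_time'],    // DNS resolution
    $info['connect_time'],       // TCP connect (includes DNS)
    $info['starttransfer_time'], // time to first byte, i.e. server-side work
    $info['total_time']
);
```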
I'm really struggling to understand how PHP asynchronous requests work. It seems to me that there can't be any truly asynchronous behaviour in PHP, because to receive a response from an asynchronous request you have to delay execution of the script (blocking, essentially), which then defeats the purpose... right?
What I'm trying to do from a page on my site is:
1. Initiate an AJAX request to a (local) controller to begin the remote API requests
2. Use a loop of GuzzleHTTP requests to the remote API
3. Upon completion of each request, send the result to a socket.io server, which then emits a message back to the listening client (the original page that initiated the AJAX request)
Now, I have this working, with one caveat: the GuzzleHTTP requests are not asynchronous, and the remote API is very slow (it usually takes around 10 seconds to return the JSON response), so when there are 10 tasks, my connection to the server is frozen for over a minute. Opening new tabs/windows and trying to access the site results in waiting until the original script run has completed.
This seems to indicate that I misunderstood how HTTP requests work. I thought that each individual browser window (or request) was completely separate as far as the server is concerned, but from the result I'm seeing, perhaps this is not exactly the case?
This is why I started looking into asynchronous requests and discovered that they don't really seem all that... asynchronous.
So what I'm really looking for is help filling the gaps/misunderstandings in my knowledge of:
Why the blocking occurs in the first place (in terms of new, completely separate requests)
Where I might look to implement a solution.
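For what it's worth, the cross-tab freezing described above is a classic symptom of PHP's session file locking (concurrent requests sharing one session are serialised until the lock is released), and Guzzle can run requests concurrently through its promise API. A sketch under those assumptions, using a recent Guzzle; the endpoint URL is made up:

```php
<?php
// Sketch assuming Guzzle 6+ with a recent guzzlehttp/promises.
require 'vendor/autoload.php';

use GuzzleHttp\Client;
use GuzzleHttp\Promise\Utils;

// Release the session file lock first, so other requests from the same
// browser are not blocked while this script runs.
session_write_close();

$client = new Client(['base_uri' => 'https://slow-api.example.com/']); // placeholder

$promises = [];
foreach (range(1, 10) as $i) {
    $promises[$i] = $client->getAsync("task/{$i}"); // hypothetical endpoint
}

// All ten run concurrently (curl_multi under the hood): roughly 10 seconds
// in total instead of roughly 100 sequentially.
$responses = Utils::unwrap($promises);

foreach ($responses as $i => $response) {
    // This is where each result could be forwarded to the socket.io server.
    echo "task {$i}: " . $response->getStatusCode() . "\n";
}
```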
I want to create a web application where the UI updates in real time (or as close to real time as you're going to get). Data for the UI comes from the server.
I don't want to use traditional HTTP requests where I constantly send requests to the server for new data. I'd rather have a connection open, and have the server push data to the client.
I believe this is the publisher/subscriber pattern.
I've heard people mention ZeroMQ and React, and to use WebSockets. But from all the examples I've looked at, I can't really find anything on this. For example, ZeroMQ has examples that show a server and a client. Do I implement the server, and then use WebSockets on the UI end as the client?
How would something like this be implemented?
Traditional HTTP requests are still what all of this is built on.
You can have regular HTTP requests:
- User sends a request to the server
- Server responds to said request
There's also Ajax Polling and Ajax Long Polling; the concept is similar.
Ajax Polling means an HTTP request is sent every X seconds to look for new information.
Example: Fetch new Comments for a section.
Ajax Long Polling is similar, but when you send a request to the server, if there is no response ready for the client, you let the connection hang (for a defined period of time).
If new information comes in during that time, you are already waiting for it; otherwise, after the time expires, the process restarts. So instead of going back and forth constantly, you send a request and wait, and whether you receive a response or not, after a period of time you restart the process.
A WebSocket connection still starts out as an HTTP request (the Upgrade handshake).
The client does the work up front by opening a WebSocket connection to a destination.
This connection will not close, and it will receive and send real-time information back and forth.
Specific actions and replies from the server need to be programmed, with callbacks on the client side, for things to happen.
With WebSockets you can receive and transmit in real time; it's a full-duplex, bi-directional connection.
So yes, in case it wasn't clear:
You set up a WebSocket server, running in a loop, waiting for connections.
When it receives one, there's a chat-like communication between that server and the client; the client needs to have callbacks programmed for the server's responses.
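In PHP, one way to build such a server (my assumption; nothing above mandates a particular library) is the Ratchet library. A minimal broadcast sketch:

```php
<?php
// Minimal WebSocket broadcast server sketch using Ratchet
// (composer require cboden/ratchet). Port 8080 is arbitrary.
require 'vendor/autoload.php';

use Ratchet\ConnectionInterface;
use Ratchet\Http\HttpServer;
use Ratchet\MessageComponentInterface;
use Ratchet\Server\IoServer;
use Ratchet\WebSocket\WsServer;

class Broadcast implements MessageComponentInterface
{
    protected $clients;

    public function __construct()
    {
        $this->clients = new \SplObjectStorage(); // remember connected clients
    }

    public function onOpen(ConnectionInterface $conn)
    {
        $this->clients->attach($conn);
    }

    public function onMessage(ConnectionInterface $from, $msg)
    {
        // Push the message to every other client in real time
        foreach ($this->clients as $client) {
            if ($client !== $from) {
                $client->send($msg);
            }
        }
    }

    public function onClose(ConnectionInterface $conn)
    {
        $this->clients->detach($conn);
    }

    public function onError(ConnectionInterface $conn, \Exception $e)
    {
        $conn->close();
    }
}

// The loop that waits for connections
IoServer::factory(new HttpServer(new WsServer(new Broadcast())), 8080)->run();
```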
I have a website which spambots visit a lot. I would like to stop these spambots from skewing my website statistics with non-human requests.
I run IIS 7 (FastCGI), mainly with PHP as the server-side language. I have PHP code in place that detects a spambot; when it detects one, it doesn't log the entry to my request history log file. This solves the problem.
However, I would like to punish the spambots further and hopefully slow down/deter their requests.
Is there any way I can NOT send an HTTP response to the spambot's HTTP request?
In PHP I figured I would use a sleep(30) timeout before calling exit(), but I hear this still ties up some server resources during the sleep, i.e. keeping memory allocated to the script, etc. Is there any way I can exit the PHP script and essentially send nothing to the spambot, which will hopefully lock up its request thread until the client-side request timeout is reached?
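For reference, here's a sketch of the sleep-then-exit approach I'm considering (is_spambot() stands in for my detection code). As I understand it, PHP under FastCGI can't truly drop the connection, since the web server always produces some response, so the best this does is stall the bot:

```php
<?php
// Tarpit sketch: detect, free what we can, stall, then exit with an
// empty body. is_spambot() is a placeholder for the real detector.
if (is_spambot()) {
    session_write_close(); // don't hold a session lock while stalling

    sleep(30); // CPU-cheap, but the FastCGI worker and its memory stay tied up
    exit;      // end the script with an empty response body
}
```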
I'm making curl requests to a SOAP server and I'm getting sporadic timeouts from the server. How can I determine if this is a limitation of the number of concurrent requests from my server, or if it's the server I'm making a request to?
PS: the curl timeout I specified is 60 seconds, and it had been working consistently until about a week ago. While we have made changes to one of the applications using this, other applications have broken too, and they haven't had any updates in a while.
At the moment I'm implementing failure tracking so I can see the HTTP response codes, timestamps, and relevant variables in the requests, but if anyone has suggestions from similar past incidents, I'd greatly appreciate it.
Thanks.
To see everything that is going on, fire up Wireshark on your server, or some other packet-sniffing utility. Then you can see which connections are causing you trouble.
You could also try simply telnetting to the server, pasting the request in, and seeing what the response looks like.
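For the failure tracking mentioned in the question, something along these lines (a sketch; the URL and payload are placeholders) would record enough to tell a saturated client apart from a slow remote server:

```php
<?php
// Sketch: wrap the SOAP call and log the outcome and timing of every request.
function timedSoapRequest(string $url, string $xmlPayload): ?string
{
    $ch = curl_init($url);
    curl_setopt_array($ch, [
        CURLOPT_POST           => true,
        CURLOPT_POSTFIELDS     => $xmlPayload,
        CURLOPT_HTTPHEADER     => ['Content-Type: text/xml'],
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_TIMEOUT        => 60,
    ]);

    $body = curl_exec($ch);
    $info = curl_getinfo($ch);
    $err  = curl_error($ch);
    curl_close($ch);

    // A near-zero connect_time with total_time hitting 60s points at a slow
    // remote server; a large connect_time points at network/DNS trouble or
    // connection limits on your own side.
    error_log(sprintf(
        '[%s] code=%d connect=%.3fs total=%.3fs err=%s',
        date('c'),
        $info['http_code'],
        $info['connect_time'],
        $info['total_time'],
        $err ?: 'none'
    ));

    return $body === false ? null : $body;
}
```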
(Sorry if the topic isn't titled properly; I would appreciate it if somebody helped me make it more relevant to what I explain below.)
Recently I have become very interested in getting to know AJAX push and its basic ideas. I know the AJAX push technique makes web pages more interactive with the server side, but behind all the smooth interactivity there is apparently some hard work going on behind the scenes, both in the implementation and in how it deals with resources.
So, in brief, let's forget about the implementation: I need somebody to explain to me how AJAX push works with respect to server connections, how many resources it uses, and any other aspects that should be taken into account when implementing this method.
I haven't done much research, so if you have any documents related to this, I'm more than happy to read them.
I don't really see how "ajax push" is its own thing. I know long polling, and I think it's the same.
The downside of long polling is that your server can have a lot of unfinished requests open. Not sockets, but actual requests. The idea of long polling:
Client makes request to Server
Server doesn't respond (doesn't finish the request)
Server waits until there's something to tell (to the Client)
When the server receives new information (from another Client or somewhere else) it prints that information to all waiting Clients (can be a lot!) and finishes the request
Client receives that information and immediately makes another request (with new timestamp and/or hash etc)
The downside: if 500 clients all do step 1 and nothing else, the server has 500 requests open and is just waiting to send some information and end those requests. Most servers don't allow 500 open HTTP requests...
If you have time, you might want to read this PDF. (It's long though.)
PS. The upside is that your server receives fewer HTTP requests (which means less HTTP overhead), and information is only sent when there's something to send (which also means less overhead).
edit
Long polling example: http://hotblocks.nl/tests/ajax/poller/ with source http://hotblocks.nl/tests/ajax/poller/callback.php?source
explanation
The upside: less HTTP overhead, because there are fewer HTTP requests. Let's say the number of users is static (it is) and equal to 500.
With long polling: 500 users make 1 request and then wait... and then something changes, and all 500 requests are finished (by the Server) and then 'renewed' (a new HTTP request) by the Client.
Upside: less requests (1 per user per new information).
Downside: longer requests (very long idling, which means more open requests).
Without long polling: 500 users make a request, the server responds with "nothing new", so the 500 users make another request 500ms/1s/5s later, and the server again responds with "nothing new", etc., until the server has actual news and the response finally contains something. And even then the clients immediately make a new request.
Upside: quick, short requests to the server that can be finished quickly.
Downside: many, many, many of those requests to the server (and every HTTP request => HTTP headers => MUCH overhead).
example explanation
The example is very (much too) simple:
1. You (Client) make a request to the Server to fetch the current info
2. Server gives you that info and a timestamp
3. Client receives the info, uses it (shows the message) and makes a new request with the timestamp
4. Server compares the Client's timestamp with the Server's timestamp (the filemtime of a file in this case)
5. If the file change is newer than the Client's timestamp: print the new file contents
6. Client receives that info and the new Server timestamp
7. Step 3 again, etc.
The time between steps 4 and 5 can be very long. In an active chat, it won't be (new information is added all the time). In a multiplayer game, it might be (seconds, not minutes).
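A minimal server-side sketch of those steps; the file name and the JSON shape are my own choices, not necessarily what the linked example uses:

```php
<?php
// Long-polling endpoint sketch implementing the steps above.
// 'data.txt' stands in for whatever file holds the shared state.
$file = 'data.txt';
$clientTime = (int) ($_GET['timestamp'] ?? 0);

$timeout = 30;   // give up after 30s so the request eventually finishes
$start = time();

// Steps 4 and 5: wait until the file is newer than the client's timestamp
do {
    clearstatcache();             // don't let PHP cache filemtime results
    $serverTime = filemtime($file);
    if ($serverTime > $clientTime) {
        break;
    }
    usleep(250000);               // idle 250ms between checks, no busy-wait
} while (time() - $start < $timeout);

// Steps 5 and 6: return the (possibly unchanged) contents and new timestamp
header('Content-Type: application/json');
echo json_encode([
    'timestamp' => $serverTime,
    'data'      => $serverTime > $clientTime ? file_get_contents($file) : null,
]);
```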
This might get you started:
http://www.icefaces.org/main/ajax-java/ajaxpush.iface
However, this link is way better; it's in COMIC BOOK FORM =))
http://www.ape-project.org/comics/1/Chapter-I-common-let-s-push.html
Essentially it's very similar to AJAX, only the server can now talk to the clients rather than only responding to client requests.