Efficiently set a read timeout on HTTPS requests in PHP

Is there a way to efficiently set a read timeout on an HTTPS request in PHP?
There is a solution using cURL to simulate the SOAP request, but it is significantly slower.
Benchmarking it against a slow service shows an average of 6.5 seconds, versus the 5.5 seconds the SOAP client takes for the same request.
There is an alternative using a stream socket (implemented by Zend Framework in ZendHttpClientAdapterSocket), however the stream_set_timeout function does not seem to work on HTTPS connections.
Please note that the issue is the read timeout (time to get a response), not the connect timeout (time to do the handshake), which works on both HTTP and SOAP requests.
Finding a way to make cURL faster would resolve the issue as well.
USER STORY
I am making requests to an external HTTPS SOAP web service using the Zend SOAP client.
The service usually responds in about 5.5 seconds when the network is fine.
When there are network issues, however, some requests take 7 to 8 minutes, consuming server resources.
I can use cURL and force a timeout, which solves my problem when there are network issues with the web service.
However, my average response time then goes up to 6.5 seconds when the network is fine.
The business requirement is that requests taking longer than 30 seconds should be dropped in order to ensure the stability of the system.

That depends on what you're using other than cURL.
If you're using streams, you can just use stream_set_timeout (which sets the read/write timeout).
The connect timeout can be specified in fsockopen or however you create your stream.
See if you can specify a read/write timeout in your SOAP client.
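For illustration, a minimal sketch of the SoapClient side, assuming the endpoint URL and operation name are placeholders: the default_socket_timeout setting is what SoapClient's built-in HTTP transport uses as its read timeout, while the connection_timeout option only covers the connect phase.

<?php
// Minimal sketch: read timeout for SoapClient via default_socket_timeout.
// The WSDL URL and SomeOperation() are hypothetical placeholders.
ini_set('default_socket_timeout', 30); // read timeout (seconds) for the response

$client = new SoapClient('https://example.com/service?wsdl', [
    'connection_timeout' => 5,   // connect/handshake timeout only
    'exceptions'         => true,
]);

try {
    $result = $client->SomeOperation(['param' => 'value']);
} catch (SoapFault $e) {
    // A timed-out read surfaces as a SoapFault (e.g. "Error fetching http headers").
    error_log('SOAP call failed or timed out: ' . $e->getMessage());
}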

Related

Keep Elastic Load Balancer connection alive during long AJAX request

I am running into this problem:
I am sending a request to the server using AJAX, which takes some parameters and generates a PDF on the server side.
The generation of the PDF can take a long time depending on the data used.
AWS Elastic Load Balancer drops the socket after 60 seconds of an "idle" connection, so my request fails in that case.
I know it's possible to increase the timeout in the ELB settings, but not only is my sysadmin against it, it's also a false solution and bad practice.
I understand the best way to solve the problem would be to send data through the socket to "tell ELB" that I am still active. Sending a dummy request to the server every 30 seconds doesn't work because of our architecture and the fact that the session is locked (i.e. we cannot have concurrent AJAX requests from the same session, otherwise one is pending until the other one finishes).
I tried just doing a GET request for files on the server, but it doesn't make a difference; I assume the "socket" in question is the one used by the original AJAX call.
The function on the server is pretty linear and almost impossible to divide into multiple calls, and the idea of letting it run in the background and checking every 5 seconds until it's finished makes me uncomfortable in terms of resource control.
TL;DR: is there any elegant and efficient solution to keep a socket active while an AJAX request is pending?
Many thanks if anyone can help with this. I have found a couple of similar questions on SO, but both are answered with "call the Amazon team to ask them to increase the timeout in your settings", which sounds very bad to me.
Another approach is to divide the whole operation into two services:
The first service accepts an HTTP request to generate a PDF document. It returns immediately after the request is accepted, with a UUID or URL for checking the result.
The second service accepts the UUID and returns the PDF document if it's ready. If the PDF document is not ready, this service can return an error code such as HTTP 404.
Since you are using AJAX to call the server side, it will be easy to change your JavaScript to call the second service once the first one has returned (roughly as sketched below). Will this work for your scenario?
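A rough sketch of that split on the PHP side, under stated assumptions: the paths, the /tmp/pdf-jobs directory, and the idea that a separate background worker picks up the queued job and writes the PDF are all hypothetical, not part of the original answer.

<?php
// submit.php: accept the request, queue the work, return an ID immediately.
// Assumes /tmp/pdf-jobs exists and a separate worker consumes the .json files.
$jobId = bin2hex(random_bytes(16));
file_put_contents("/tmp/pdf-jobs/$jobId.json", json_encode($_POST));
header('Content-Type: application/json');
echo json_encode(['id' => $jobId, 'status_url' => "/status.php?id=$jobId"]);

// status.php: return 404 until the worker has written the PDF, then serve it.
$jobId = preg_replace('/[^a-f0-9]/', '', $_GET['id'] ?? '');
$pdf   = "/tmp/pdf-jobs/$jobId.pdf";
if (!is_file($pdf)) {
    http_response_code(404);   // not ready yet, the client polls again later
    exit;
}
header('Content-Type: application/pdf');
readfile($pdf);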
Have you tried following the ELB troubleshooting guide? The relevant part is quoted below:
HTTP 504: Gateway Timeout
Description: Indicates that the load balancer closed a connection because a request did not complete within the idle timeout period.
Cause 1: The application takes longer to respond than the configured idle timeout.
Solution 1: Monitor the HTTPCode_ELB_5XX and Latency metrics. If there is an increase in these metrics, it could be due to the application not responding within the idle timeout period. For details about the requests that are timing out, enable access logs on the load balancer and review the 504 response codes in the logs that are generated by Elastic Load Balancing. If necessary, you can increase your capacity or increase the configured idle timeout so that lengthy operations (such as uploading a large file) can complete.
Cause 2: Registered instances closing the connection to Elastic Load Balancing.
Solution 2: Enable keep-alive settings on your EC2 instances and set the keep-alive timeout to greater than or equal to the idle timeout settings of your load balancer.

What is the limit on scripts running at the same time on a nginx/php-fpm config?

The problem is I have to use cURL, and sometimes the cURL requests take a long time because of timeouts. I have set the timeouts to 1 second, so no request should take more than 1 second, but the server is still unable to process other PHP requests.
My question is: how many concurrent scripts (running at the same time) can nginx/php-fpm handle? What I see is that a few requests lasting 1 second make the whole server unresponsive. What settings can I change so that more requests can be processed at the same time?
Multi-cURL is indeed not the solution to your problem, but asynchronicity probably is. I am not sure that the solution is tweaking nginx. It would scale better if you were to consider one of the following options:
You can abstract cURL with Guzzle http://docs.guzzlephp.org/en/latest/ and use its approach to async calls and promises.
You can use Gearman http://gearman.org/getting-started/ which will enable you to send an async message to a remote server, which processes the instruction based on a script you register for your message. (I use this mechanism for non-blocking logging.)
Either way, your call will be made in milliseconds and won't block your nginx, but your code will have to change a little bit.
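To make the Guzzle option concrete, a minimal sketch assuming guzzlehttp/guzzle is installed via Composer; the URLs are placeholders, and Utils::settle is available in recent versions of guzzlehttp/promises.

<?php
require 'vendor/autoload.php';

use GuzzleHttp\Client;
use GuzzleHttp\Promise\Utils;

// Short timeouts so slow upstreams cannot hold a PHP-FPM worker for long.
$client = new Client(['timeout' => 1.0, 'connect_timeout' => 1.0]);

$promises = [
    'first'  => $client->getAsync('https://api.example.com/one'),
    'second' => $client->getAsync('https://api.example.com/two'),
];

// settle() waits for all promises without throwing on individual failures,
// so one slow or failing endpoint does not block handling of the others.
$results = Utils::settle($promises)->wait();

foreach ($results as $name => $result) {
    if ($result['state'] === 'fulfilled') {
        echo $name . ': ' . $result['value']->getStatusCode() . PHP_EOL;
    } else {
        echo $name . ' failed: ' . $result['reason']->getMessage() . PHP_EOL;
    }
}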
In my case, php-curl did not respond in a timely manner because of DNS.
The problem was that I had to access files from a CDN, but the IP behind the domain changed frequently, and unfortunately cURL keeps a DNS cache.
So from time to time it would try to access files at IPs that were no longer valid but were still in php-curl's DNS cache.
I had to drop php-curl completely and use a plain file_get_contents(...) request instead. This completely solved the problem.
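For completeness, a sketch of that file_get_contents() replacement with an explicit read timeout, so a slow response is dropped rather than hanging the worker; the URL is a placeholder.

<?php
// Stream context with an HTTP read timeout for file_get_contents().
$context = stream_context_create([
    'http' => [
        'method'  => 'GET',
        'timeout' => 5, // seconds to wait for the response
    ],
]);

$body = file_get_contents('https://cdn.example.com/asset.js', false, $context);
if ($body === false) {
    // Timed out or failed; handle the error instead of blocking the worker.
}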

PHP cUrl Rate Limiting per API

I am working on an API mashup in PHP of various popular APIs and would like to implement rate limiting to ensure I am playing nice.
I did some research and have taken a look at CURLOPT_MAXCONNECTS and CURLOPT_TIMEOUT, but I have some confusion about how they function.
As I understand it (likely incorrectly):
CURLOPT_MAXCONNECTS
---
Each script that makes a cURL request opens a connection.
When the MAXCONNECTS limit is reached, the server delays the request.
CURLOPT_TIMEOUT
---
The amount of time that the server will wait to make a connection.
Working with MAXCONNECTS, does that mean that cURL will make the listed number of connections and then wait up to TIMEOUT for an open thread?
So, I am obviously very confused about how cURL actually functions with these parameters. The application I am developing needs to limit cURL requests at a different rate for each API I am calling. As I understand things, the cURL options are server-wide? Is there some method of attaching a token to a specific cURL call and applying the limit per API that way? Do I need to work some global/shared-memory magic?
Yours truly and considerably confused,
Samantha.
CURLOPT_MAXCONNECTS just sets the maximum number of connections cURL keeps open in its connection cache for reuse; it is not a request or rate limit.
CURLOPT_TIMEOUT is the time cURL will wait before aborting the request when there is no answer.
You'll have to enforce your limits manually; a rough sketch of one way to do that follows.
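A minimal per-API throttle sketch, assuming APCu is available; the API names and limits are hypothetical, and the example URL is a placeholder. It uses a simple fixed-window counter keyed per API per minute.

<?php
// Fixed-window rate limiter backed by APCu (assumed installed and enabled).
function allowRequest(string $api, int $maxPerMinute): bool
{
    $key   = 'rate:' . $api . ':' . floor(time() / 60); // current minute window
    $count = apcu_fetch($key) ?: 0;
    if ($count >= $maxPerMinute) {
        return false; // over the limit, caller should wait or queue
    }
    apcu_store($key, $count + 1, 120); // keep the counter a bit past the window
    return true;
}

if (allowRequest('twitter', 15)) { // hypothetical API name and limit
    $ch = curl_init('https://api.example.com/endpoint');
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_TIMEOUT, 10);
    $response = curl_exec($ch);
    curl_close($ch);
} else {
    // Back off: sleep, queue the call, or return a "try later" response.
}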

HTTP Server Push with PHP / Apache2

Is it possible to perform HTTP push with Apache2 + PHP? I've done some Googling around, and the only thing close to what I was looking for was a PECL socket tutorial which didn't quite tackle it.
My application at the moment has a basic read GET API; the client makes a read request to the API once every 15 seconds. I think this is kind of silly, as an open port that just sends data when there is data to send seems like a much better method. My client is written in .NET.
Is this possible at all with these technologies? Or will I have to try to use Java/Comet, for which at the moment I just don't have the resources or infrastructure readily available?
More information on HTTP Server Push:
https://en.wikipedia.org/wiki/Push_technology#HTTP_server_push
When deciding between different technologies to report events from an HTTP server to a client, there are always trade-offs to make: 150 clients polling every 15 seconds, with each poll taking 1 second, will statistically tie up 10 connections; the same 150 clients with a server-push technology will tie up 150 connections, but with much less CPU.
IMHO long polling has the best balance if used in combination with Apache/PHP, as it allows the server to influence the clients if they are under your control: if the connection count on the server grows too high, it can simply complete the longest-running poll and tell the client not to re-poll immediately, but after some delay.
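A minimal long-poll endpoint sketch along those lines; checkForNewEvents() is a hypothetical stand-in for whatever cheap check fits the application (a DB query, a file mtime, a cache key), and the 25-second cap is an assumption meant to stay under typical proxy idle timeouts.

<?php
// Stub for the application-specific check; replace with the real lookup.
function checkForNewEvents(): array
{
    return [];
}

set_time_limit(40);
$deadline = time() + 25; // hold the request open for at most ~25 seconds

while (time() < $deadline) {
    $events = checkForNewEvents();
    if ($events) {
        header('Content-Type: application/json');
        echo json_encode(['events' => $events, 'retry_in' => 0]);
        exit;
    }
    usleep(500000); // sleep 0.5s between checks to keep CPU use low
}

// Nothing happened: tell the client to back off before re-polling, which
// lets the server shed load when connection counts climb.
header('Content-Type: application/json');
echo json_encode(['events' => [], 'retry_in' => 5]);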

KeepAlive packets over a Soap request

I've been debugging some SOAP requests we are making between two servers on the same VLAN. The app on one server is written in PHP, the app on the other in Java. I can control and make changes to the PHP code, but I can't affect the Java server. The PHP app builds the XML using DOMDocument objects, then sends the request using the cURL extension.
When the SOAP request took longer than 5 minutes to complete, it would always wait until the max timeout limit and exit with a message like this:
Operation timed out after 900000 milliseconds with 0 bytes received
After sniffing the packets being sent, it turned out that the problem was caused by a 5-minute timeout in the network that was closing what it thought was a stale connection. There were two ways to fix it: bump up the timeout in iptables, or start sending keepalive packets over the connection.
To be thorough, I would like to implement both solutions. Bumping up the timeout was easy for ops to do, but sending keepalive packets is turning out to be difficult. The cURL library itself supports this (see the --keepalive-time flag for the CLI app), but it doesn't appear that this has been implemented in the PHP cURL extension. I even checked the source to make sure it wasn't an undocumented feature.
So my question is this: how the heck can I get these packets sent? I see a few clear options, but I don't like any of them:
Write a wrapper that kicks off the request by shell_exec'ing the CLI app. This is a hack that I just don't like.
Update the cURL extension to support this. This is a non-option according to ops.
Open the socket myself. I know just enough to be dangerous. I also haven't seen a way to do this with fsockopen, but I could be missing something.
Switch to another library. What exists that supports this?
Thanks for any help you can offer.
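For reference, newer PHP releases (5.5+ built against a recent libcurl) do expose TCP keepalive options in the cURL extension; a minimal sketch, guarded with defined() in case the constants are missing on an older build. The endpoint URL and the envelope variable are placeholders.

<?php
$soapXml = '<soap:Envelope>...</soap:Envelope>'; // envelope built with DOMDocument, per the question

$ch = curl_init('https://internal.example.com/soap-endpoint');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, $soapXml);
curl_setopt($ch, CURLOPT_TIMEOUT, 900);

if (defined('CURLOPT_TCP_KEEPALIVE')) {
    curl_setopt($ch, CURLOPT_TCP_KEEPALIVE, 1);   // enable TCP keepalive probes
    curl_setopt($ch, CURLOPT_TCP_KEEPIDLE, 120);  // idle seconds before the first probe
    curl_setopt($ch, CURLOPT_TCP_KEEPINTVL, 60);  // seconds between probes
}

$response = curl_exec($ch);
curl_close($ch);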
