Timeout faster with memcache - PHP

I'm trying to force PHP's Memcache extension to time out almost immediately if a memcached server I'm connecting to isn't available (for whatever reason). I'd like to throw an exception in this case (which will be handled somewhere else).
I've been searching and trying different things without any luck. I'm adding servers (only one for now) to the pool with the standard:
$this->memcache->addServer($server['host'], $server['port']);
I then killed the memcached daemon (I also tried a wrong port & host) and opened my page. It just loads for a very long time and then nginx comes back with a 504 Gateway Time-out error.
How can I tell the memcache client to try for, say, 1 second and then give up, at which point I should be able to detect the timeout somehow?
The bottom line is that if our memcached server goes down, I'd like to display a user-friendly error page (already working for uncaught exceptions) as soon as possible, and not make the user wait 30 seconds before seeing a generic server error.

Just call:
Memcache::getServerStatus() or
Memcache::getExtendedStats()
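For example, a minimal sketch (host, port, and exception type are illustrative) that throws when the pool reports the server as failed:

$memcache = new Memcache;
$memcache->addServer('127.0.0.1', 11211);

// getServerStatus() returns 0 when the server is marked as failed.
// Note that it may still report the server as up until a connection
// attempt has actually failed.
if ($memcache->getServerStatus('127.0.0.1', 11211) === 0) {
    throw new RuntimeException('memcached server is unavailable');
}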
Also, this question is pretty much identical to yours.

Reduce the value of the max_failover_attempts memcache module configuration parameter; the default is too high.
You can also specify a timeout as the third parameter to the connect() method:
$memcache->connect('memcache_host', 11211, $timeout);
However, the default timeout should already be set to 1 second.
Another place to look is the TCP timeout parameters in the OS.
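Putting both together, a hedged sketch (host and values are illustrative, and it assumes memcache.max_failover_attempts can be changed at runtime; otherwise set it in php.ini):

// Fail over quickly instead of retrying many times.
ini_set('memcache.max_failover_attempts', '1');

$memcache = new Memcache;
// The third argument is the connect timeout in seconds.
if (!@$memcache->connect('memcache_host', 11211, 1)) {
    throw new RuntimeException('memcached unreachable within 1 second');
}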

Related

change the 'max number of clients reached' Redis error message to ''

Is it possible to change the error message 'max number of clients reached' to null or an empty string?
I'm using Redis as a cache for my DB values, and in cases where I can't get the values from the cache, I get them from the DB.
If I could configure this in Redis itself, that would be the best option for me, because my code wouldn't have to change to support that edge case.
If someone has tips on how to avoid such errors, that would be nice as well :) (I'm using PHP scripts with the Predis package.)
The error message 'max number of clients reached' clearly indicates that Redis has reached its client limit and is unable to serve any new requests.
This issue is probably related to incorrect use of Predis\Client in the code. Instead of creating a connection object once (a singleton) and using it across the process lifetime, the code probably creates a new object on every request to Redis and keeps all these connections open.
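A minimal sketch of the singleton approach (class name, host, and key are illustrative):

require 'vendor/autoload.php';

class RedisCache
{
    private static $client = null;

    public static function client()
    {
        // Create the connection once and reuse it for the process lifetime.
        if (self::$client === null) {
            self::$client = new Predis\Client('tcp://127.0.0.1:6379');
        }
        return self::$client;
    }
}

// Every caller shares the same connection:
$value = RedisCache::client()->get('some:key');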
Another thing worth checking is how PHP processes are managed by the web server. The web server (e.g. Apache prefork, nginx with php-fpm) might leave processes running for a long time, both holding connections to Redis and exhausting server resources (memory, CPU).
If nothing from the above is true, the issue (bug) might be in the Predis library.
Bottom line: the code/web server exhausts the maxclients limit.
If you don't have control over the code/web server (e.g. nginx), you can reduce the number of error messages as follows (a redis.conf sketch follows the list):
Increase maxclients above 10k (depending on your Redis server's resources). This will reduce the frequency of the error messages.
Consider enabling the connection timeout (disabled by default); use it with caution, as your code may assume that connections never time out. This will release old connections from the connection pool.
Decrease tcp-keepalive from 300 seconds to less than the timeout. This will close connections to dead peers (clients that cannot be reached even though they look connected).
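A hedged redis.conf sketch of those three settings (values are illustrative; tune them to your server's resources):

# Raise the client limit (default 10000).
maxclients 20000

# Close idle client connections after 300 seconds (0 = never, the default).
timeout 300

# Probe for dead peers more often than the default of 300 seconds.
tcp-keepalive 60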

Keep Elastic Load Balancer connection alive during long AJAX request

I am running into this problem:
I am sending a request to the server using AJAX; it takes some parameters in, and on the server side it generates a PDF.
The generation of the PDF can take a lot of time depending on the data used.
The Elastic Load Balancer of AWS, after 60s of an "idle" connection, decides to drop the socket, and therefore my request fails in that case.
I know it's possible to increase the timeout in the ELB settings, but not only is my sysadmin against it, it's also a false solution and bad practice.
I understand the best way to solve the problem would be to send data through the socket to sort of "tell ELB" that I am still active. Sending a dummy request to the server every 30s doesn't work because of our architecture and the fact that the session is locked (i.e. we cannot have concurrent AJAX requests from the same session; otherwise one is pending until the other one finishes).
I tried just doing a GET request to files on the server, but it doesn't make a difference; I assume the "socket" is the one used by the original AJAX call.
The function on the server is pretty linear and almost impossible to divide into multiple calls, and the idea of letting it run in the background and checking every 5 seconds until it's finished makes me uncomfortable in terms of resource control.
TL;DR: is there any elegant and efficient solution to keep a socket active while an AJAX request is pending?
Many thanks if anyone can help with this. I have found a couple of similar questions on SO, but both are answered with "call the Amazon team to ask them to increase the timeout in your settings", which sounds very bad to me.
Another approach is to divide the whole operation into two services:
The first service accepts an HTTP request to generate a PDF document. It returns immediately after the request is accepted, with a UUID or URL for checking the result.
The second service accepts the UUID and returns the PDF document if it's ready. If the PDF document is not ready, this service can return an error code, such as HTTP 404.
Since you are using AJAX to call the server side, it will be easy for you to change your JavaScript to call the second service once the first service has finished successfully. Will this work for your scenario?
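A hedged sketch of the two endpoints in PHP (the paths, the file-based job store, and the background worker are illustrative; a real setup would likely use a proper queue):

// generate.php - accept the job and return an ID immediately.
$jobId = bin2hex(random_bytes(16));
file_put_contents("/tmp/jobs/$jobId.pending", json_encode($_POST));
// A background worker picks up the .pending file and writes
// /tmp/jobs/$jobId.pdf when generation is done.
header('Content-Type: application/json');
echo json_encode(['id' => $jobId]);

// result.php?id=... - return the PDF if ready, else 404 so the client polls again.
$jobId = preg_replace('/[^a-f0-9]/', '', $_GET['id'] ?? '');
$path  = "/tmp/jobs/$jobId.pdf";
if (!is_file($path)) {
    http_response_code(404);
    exit;
}
header('Content-Type: application/pdf');
readfile($path);

If the session locking mentioned in the question is a concern, the polling endpoint can call session_write_close() early (or skip the session entirely) so concurrent requests from the same session are not serialized.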
Have you tried following the troubleshooting guide for ELB? The relevant part is quoted below:
HTTP 504: Gateway Timeout
Description: Indicates that the load balancer closed a connection because a request did not complete within the idle timeout period.
Cause 1: The application takes longer to respond than the configured idle timeout.
Solution 1: Monitor the HTTPCode_ELB_5XX and Latency metrics. If there is an increase in these metrics, it could be due to the application not responding within the idle timeout period. For details about the requests that are timing out, enable access logs on the load balancer and review the 504 response codes in the logs that are generated by Elastic Load Balancing. If necessary, you can increase your capacity or increase the configured idle timeout so that lengthy operations (such as uploading a large file) can complete.
Cause 2: Registered instances closing the connection to Elastic Load Balancing.
Solution 2: Enable keep-alive settings on your EC2 instances and set the keep-alive timeout to greater than or equal to the idle timeout settings of your load balancer.

What is the limit on scripts running at the same time on an nginx/php-fpm config?

The problem is I have to use curl, and sometimes the curl requests take a long time because of the timeouts. I have set the timeouts to 1 second, so no request should take more than 1 second, but the server is still unable to process other PHP requests.
My question is: how many concurrent scripts (running at the same time) can nginx/php-fpm handle? What I see is that a few requests lasting 1 second make the whole server unresponsive. What are the settings I can change so that more requests can be processed at the same time?
Multi-curl is indeed not the solution to your problem, but asynchronicity probably is. I am not sure that tweaking Nginx is the solution either. It would scale better if you were to consider one of the following options:
You can abstract curl with Guzzle (http://docs.guzzlephp.org/en/latest/) and use its approach to async calls and promises, as sketched after this list.
You can use Gearman (http://gearman.org/getting-started/), which will enable you to send an async message to a remote server that will process the instruction based on a script you register for your message. (I use this mechanism for non-blocking logging.)
Either way, your call will be made in milliseconds and won't block your nginx, but your code will have to change a little bit.
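A hedged sketch of the Guzzle approach (the URLs are illustrative; assumes guzzlehttp/guzzle is installed via Composer):

require 'vendor/autoload.php';

use GuzzleHttp\Client;
use GuzzleHttp\Promise\Utils;

$client = new Client(['connect_timeout' => 1.0, 'timeout' => 1.0]);

// Fire the requests concurrently instead of one after another.
$promises = [
    'feed'  => $client->getAsync('https://example.com/feed'),
    'stats' => $client->getAsync('https://example.com/stats'),
];

// settle() resolves even if some requests fail or time out.
foreach (Utils::settle($promises)->wait() as $name => $result) {
    if ($result['state'] === 'fulfilled') {
        echo $name, ': ', $result['value']->getStatusCode(), "\n";
    } else {
        echo $name, ': failed (', $result['reason']->getMessage(), ")\n";
    }
}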
php-curl did not respond in a timely manner because of DNS.
The problem was that I had to access files from a CDN, but the IP behind the domain changed frequently, and unfortunately curl keeps a DNS cache.
So from time to time it would try to access files at IPs that were no longer valid, but which were still in php-curl's DNS cache.
I had to drop php-curl completely and use a plain file_get_contents(...) request instead. This completely solved the problem.
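A hedged sketch of the replacement (the URL is illustrative); a stream context preserves the short timeout behaviour of the old curl call:

$context = stream_context_create([
    'http' => ['timeout' => 1.0], // seconds
]);
$body = @file_get_contents('https://cdn.example.com/data.json', false, $context);
if ($body === false) {
    // handle the failure (retry, fall back to the origin, etc.)
}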

PHP server timeout on script

I have a function for a WordPress plugin I'm developing that takes a lot of time.
It connects to TMDb (the movie database) and retrieves all movies one by one by ID (from 0 to 8000), creating an XML document that is saved on the local server.
Of course this takes a lot of time, and PHP says "504 Gateway Time-out: The server didn't respond in time."
What can I do? Any suggestions?
Assuming this is a one-time execution and it's bombing out on you, you can set set_time_limit() to 0 and allow it to execute:
<?php
set_time_limit(0); // impose no limit
?>
However, I would make sure this is not in production and that it will only be run when you want it to be (otherwise this will place, and continue to place, a large load on the server).
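Since the 504 comes from the gateway rather than from PHP itself, a hedged alternative is to run the import from the CLI, where no gateway timeout applies (import_movie() is a hypothetical stand-in for the plugin's fetch-and-append logic):

set_time_limit(0);        // no PHP execution limit
ignore_user_abort(true);  // keep running even if the client disconnects

for ($id = 0; $id <= 8000; $id++) {
    import_movie($id);    // hypothetical: fetch one movie from TMDb, append it to the XML
}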
Try setting:
set_time_limit(0);
at the head of the script. But I think it's the server's problem: the read takes too long. Try doing the read in a background thread or process.
I think this is not related to the script timeout.
A 504 Gateway Timeout is entirely due to slow IP communication between back-end computers, possibly including the web server.
Fix:
Either use proxies or increase your cache size limit (search for "cache" in your php.ini and experiment with it).

600+ memcache req/s problems - help!

I am running memcached on my server, and when it hits 600+ req/s it becomes unstable and causes a big load of problems. It appears that when the request rate gets that high, my PHP applications are at random times unable to connect to the memcache server, causing slow load times, which makes nginx and php-fpm freak out, and I receive a bunch of "104: Connection reset by peer" errors in my nginx logs.
I would like to point out that my memcache server holds 'hot objects': objects that at times receive 90% of the memcache requests. I also noticed that when so many requests hit a single object, it adds a little more load time to the overall page (when it manages to load).
I would greatly appreciate any help to this problem. Thanks so much!
Switch away from using TCP sockets and go to UNIX sockets (assuming you are on a Unix-based server).
Start memcached with a socket enabled:
Add -s /tmp/memcached.socket to your memcached startup line (note: using a socket disables networking support), for example:
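A hedged sample startup line (memory size, socket permissions, and user are illustrative):

memcached -d -m 64 -s /tmp/memcached.socket -a 0766 -u memcache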
Then, in PHP, connect to the new memcache socket using persistent connections:
$memcache_obj = new Memcache;
$memcache_obj->pconnect('unix:///tmp/memcached.socket', 0);
Another recommendation: if you have multiple "types" of cached objects, start a memcached instance for each "type" and distribute your hot items amongst them.
Drupal does this; you can see how their config file and memcached init are set up here.
Also, it sounds to me like your memcached timeout is set WAY too high. If it's anything above 1 or 2 seconds, scripts can lock up. When the timeout is reached, the script should fall back to retrieving the object via another method (SQL, a file, etc.).
The other thing is to verify that your memcache isn't being pushed into swap. If your cache is smaller than your average free RAM, try starting memcached with the -k option; this will force its cache to always stay in RAM so it can't be swapped.
If you have a multi-core server, also make sure memcached is compiled with thread support, and enable it using -t <numcores>.
600 requests per second is profoundly low for memcached.
If you're establishing a connection for every request, you'll spend more time connecting than requesting and burn through your ephemeral ports very rapidly, which might be the problem you're seeing.
There are a couple of things you could try:
If you have memcached running locally, you can use the named socket 'localhost' instead of '127.0.0.1'.
Use persistent connections, for example:
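A hedged sketch of both (host is illustrative):

$memcache = new Memcache;
// pconnect() reuses an open connection from the same PHP-FPM worker
// instead of establishing a new TCP connection on every request.
if (!@$memcache->pconnect('localhost', 11211)) {
    // fall back to the primary data store
}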
