I have worked with Node.js and Redis before. Since Node.js is a web server, I could maintain a single connection to Redis and have all HTTP requests use the same Redis client.
But in PHP, each page creates a new connection to the Redis server on every HTTP request, and this is slowing down performance. How do you maintain connection state in PHP? It must be the same issue with PHP and MySQL, so I guess there are solutions out there?
The way PHP works is that it is a program, not a server. Every time you request a page on your web server, PHP is called to run the program. Once the page is done loading, the process is ended. Because PHP is not a server, all connections associated with a page are terminated once it is done loading. Therefore, every time a page is requested, a new connection to the database has to be made. If you are noticing a performance issue when connecting, you should try phpredis if you are not already doing so.
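phpredis also supports persistent connections, so each long-lived php-fpm worker can reuse its socket across requests. A minimal sketch, assuming the phpredis extension (host and port are placeholders):

$redis = new Redis();
// pconnect() reuses a connection already held by this worker process
// instead of opening a fresh socket on every request
$redis->pconnect('127.0.0.1', 6379);
$redis->set('greeting', 'hello');
echo $redis->get('greeting');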
Let's say you are using php-fpm. php-fpm has a master process and runs multiple worker processes according to the pool configuration.
Each worker process is independent (but can use shared resources such as the opcache/APCu cache) and consumes CPU and memory (memory is the most important factor when tuning the pool's pm.max_children setting).
So yes, 1 HTTP request = 1 php-fpm worker (fresh or reused) = 1 new socket connection (or a reused persistent connection). To scale:
Redis cluster
A proxy such as HAProxy between PHP and Redis (so you can limit maxconn)
Use a local cache such as APCu to limit Redis access (more complicated, but the most powerful option; see the sketch after this list)
Check the OS ulimit and open file descriptors
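For the APCu option, the idea is a two-level cache: check the worker-local APCu first and fall back to Redis only on a miss. A minimal sketch, assuming the apcu and phpredis extensions (key names and TTLs are illustrative):

function cachedGet(Redis $redis, $key)
{
    // First level: APCu, shared by all php-fpm workers on this machine
    $value = apcu_fetch($key, $hit);
    if ($hit) {
        return $value;
    }
    // Second level: Redis; keep a short local TTL so updates propagate
    $value = $redis->get($key);
    if ($value !== false) {
        apcu_store($key, $value, 10); // 10 seconds, illustrative
    }
    return $value;
}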
Is it possible to change the error message 'max number of clients reached' to null or an empty string?
I'm using Redis as a cache for my DB values, and in cases where I can't get the values from the cache I get them from the DB.
If I could configure this in Redis itself, that would be the best option for me, because my code wouldn't have to change to support that edge case.
If someone has tips on how to avoid such errors, that would be nice as well :) (I'm using PHP scripts with the Predis package)
The error message max number of clients reached clearly indicates that Redis has reached its client limit and is unable to serve any new requests.
This issue is probably related to incorrect use of Predis\Client in the code. Instead of creating a connection object once (a singleton) and using it across the process lifetime, the code probably creates a new object on every request to Redis and keeps all of those connections open.
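For illustration, a minimal singleton sketch with Predis (the class name and connection parameters are placeholders):

require 'vendor/autoload.php';

class RedisClientHolder
{
    private static $client = null;

    public static function get()
    {
        // Create the Predis client once and reuse it for the
        // whole lifetime of the PHP process
        if (self::$client === null) {
            self::$client = new Predis\Client([
                'scheme' => 'tcp',
                'host'   => '127.0.0.1',
                'port'   => 6379,
            ]);
        }
        return self::$client;
    }
}

// Everywhere else in the code:
RedisClientHolder::get()->set('foo', 'bar');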
Another thing worth checking is how PHP processes are managed by the web server. The web server (e.g. Apache prefork, nginx with php-fpm) might keep processes alive for a long time, both holding connections to Redis and exhausting server resources (memory, CPU).
If none of the above is true, the issue (bug) might be in the Predis library.
Bottom line: the code/web server exhausts the maxclients limit.
If you don't have control over the code/web server (e.g. nginx), to reduce the number of error messages you can:
increase maxclients above 10k (depending on your Redis server's resources). This will reduce the frequency of the error messages.
consider enabling the connection timeout (disabled by default; use it with caution, as your code may assume that connections never time out). This will release old connections from a connection pool.
decrease tcp-keepalive from 300 seconds to less than the timeout. This will close connections to dead peers (clients that cannot be reached even though they look connected).
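For example, in redis.conf (the values are illustrative; tune them to your server):

maxclients 20000
# timeout is 0 (disabled) by default; this closes idle clients after 120s
timeout 120
# keep tcp-keepalive below the timeout so dead peers are detected first
tcp-keepalive 60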
Does anyone know if there is already a ZMQ module for Apache? If there is, please share the link or any reference. My scenario is as follows:
Server Config:
Apache 2.4.12, with prefork
PHP 5.5
ZMQ 4.0.X
My problem is that whenever I try to create a ZMQ socket (PUB) connection from my application to a separate service (SUB), with a streamer device in between, a new socket is created every time the application is initialized, since my Apache is in prefork mode and creates a new instance (child) on every request. How can I create a single context/socket where any number of PHP requests from subsequent Apache child processes can send data to the socket? This would avoid creating multiple sockets and exhausting system resources, and I believe it would also reduce the overhead of creating new sockets and make things faster.
As an alternative, is it possible to create an Apache module whose functions and resources I can access from the PHP application and use just to send data, where the context and socket are created only once and are persistent for the lifetime of Apache?
Short answer - you can't. Your problem here is Apache and how it works: it shuts down the PHP process after the request finishes. Also, you can't share a context or a socket created in one Apache process between PHP processes.
I don't know what you're trying to do or why you're exhausting system resources (quite odd), but if I were you I'd use a more advanced server that uses ZeroMQ internally for its transport layer: Mongrel2. You could create a PHP extension, serve PHP via FPM, and then have Apache proxy requests to your PHP-FPM, which can then pool the already existing ZMQ connections. However, I would expand the question with how the resources are exhausted that fast.
If that's all too much, then you can consider this (a rough sketch follows the list):
PHP processes spawned by Apache accept the data and fill some sort of storage (database, file, shared memory)
Once the makeshift queue has been populated, and before exiting, the PHP scripts raise SIGUSR2 for a daemon process that reads the queue
You have a daemon running that reads the queue, wakes up on SIGUSR2 and sends the data via a ZMQ socket - now you have a single process that uses ZMQ and multiple PHP processes that interact with it
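The producer side of that could be as small as this; it is only a sketch, assuming the posix and pcntl extensions, and the queue and PID file paths are hypothetical:

// Run inside each Apache/PHP request: append the job to a makeshift queue
$data = ['payload' => 'example'];
file_put_contents('/var/spool/zmq-queue/jobs',
    json_encode($data) . "\n", FILE_APPEND | LOCK_EX);

// Wake the long-running daemon that owns the single ZMQ context/socket
$pid = (int) file_get_contents('/var/run/zmq-daemon.pid');
posix_kill($pid, SIGUSR2);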
Since your requirement is a bit unclear, it's quite possible that all I wrote is for nothing, so if you can, expand your question with just a little more info.
I'm using Redis in a PHP project, with phpredis as the client. Sometimes, during long CLI scripts, I experience PHP segmentation faults.
I've seen before that phpredis has problems when the connection times out. As my Redis config automatically closes idle connections after 300 seconds, I guess that causes the segmentation fault.
In order to be able to choose whether to increase the connection timeout or default it to 0 (which means "never timeout"), I would like to know what the possible advantages and disadvantages are?
Why should I never close a connection?
Why should I make sure connections don't stay open?
Thanks
Generally, opening a connection is an expensive operation, so the modern best practice is to keep connections open. On the other hand, open connections require resources (from the database) to manage, so keeping a lot of idle connections open can also be problematic. This trade-off is usually resolved via the use of connection pools.
That said, what's more interesting is why PHP segfaults. The timeout is, evidently, caused by a long-running command (the CLI script in your case) that blocks Redis (which is mostly single-threaded) from attending to the PHP app's connections. While this is well-known Redis behavior, I would expect PHP (even without a reconnect feature in the client library) not to s**t its pants so miserably.
The answer to your question depends a lot on how Redis is used in your application. So, should you use an idle connection timeout?
In general, no; you should keep it at the default of 0. Why:
Any kind of long-living application, such as a CLI script or background worker, will break: phpredis does not have a built-in reconnection feature, so you would have to take care of reconnection yourself, or not use an idle timeout.
Each time a request finishes processing or a CLI script dies, all connections are closed by the PHP engine, and the Redis server closes the connections for the closed client sockets. You will have no problems like zombie connections. In addition, phpredis closes the connection in its destructor, so you can be sure connections don't stay open.
P.S. Of course, you can implement reconnection in some proxy class in PHP yourself. We run Redis in a high-load environment (~4000 connections per second per instance); since version 2.4 we have not used an idle connection timeout, and we have had no trouble with it.
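For illustration, a minimal sketch of such a proxy class around phpredis (the class name and the single-retry policy are my own, not a standard API):

class ReconnectingRedis
{
    private $redis;
    private $host;
    private $port;

    public function __construct($host = '127.0.0.1', $port = 6379)
    {
        $this->host = $host;
        $this->port = $port;
        $this->connect();
    }

    private function connect()
    {
        $this->redis = new Redis();
        $this->redis->connect($this->host, $this->port);
    }

    // Forward every command; on a dropped connection, reconnect and retry once
    public function __call($method, $args)
    {
        try {
            return call_user_func_array([$this->redis, $method], $args);
        } catch (RedisException $e) {
            $this->connect();
            return call_user_func_array([$this->redis, $method], $args);
        }
    }
}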
I am running memcached on my server, and when it hits 600+ req/s it becomes unstable and causes a big load of problems. When the request rate gets that high, my PHP applications are at random times unable to connect to the memcache server, causing slow load times, which makes nginx and php-fpm freak out, and I receive a bunch of 104: Connection reset by peer errors in my nginx logs.
I would like to point out that my memcache server has 'hot objects' - objects that at times receive 90% of the memcache requests. I also noticed that when so many requests hit a single object, it adds a little more load time to the overall page (when it manages to load).
I would greatly appreciate any help to this problem. Thanks so much!
Switch away from TCP sockets and go to UNIX sockets (assuming you are on a Unix-based server).
Start memcached with a socket enabled:
Add -s /tmp/memcached.socket to your memcached startup line (note: using a socket disables networking support)
Then in PHP, connect using persistent connections, and to the new memcache socket:
$memcache_obj = new Memcache;
// port is 0 because UNIX domain sockets don't use ports
$memcache_obj->pconnect('unix:///tmp/memcached.socket', 0);
Another recommendation: if you have multiple "types" of cached objects, start a memcached instance for each "type" and distribute your hot items amongst them.
Drupal does this; you can see how their config file and memcached init are set up here.
Also, it sounds to me like your memcached timeout is set WAY too high. If it's anything above 1 or 2 seconds, it can lock scripts up. When the timeout is reached, the script should fall back to retrieving the object via another method (SQL, file, etc.).
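For example, something along these lines, where loadFromDatabase() is a hypothetical fallback of your own:

$memcache = new Memcache;
// third argument is the connect timeout in seconds - keep it short
$memcache->pconnect('unix:///tmp/memcached.socket', 0, 1);

$value = $memcache->get('hot_object');
if ($value === false) {
    // cache miss or memcached unreachable: fall back to the slow path
    $value = loadFromDatabase('hot_object');
    $memcache->set('hot_object', $value, 0, 60); // 60s TTL, illustrative
}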
The other thing is to verify that your memcached memory isn't being swapped out. If your cache is smaller than your average free RAM, try starting memcached with the -k option; this will force its cache to always stay in RAM so it can't be swapped.
If you have a multi-core server, also make sure memcached is compiled with thread support, and enable it using -t <numcores>
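Putting those flags together, a startup line might look like this (memory size and thread count are illustrative):

memcached -d -s /tmp/memcached.socket -m 512 -k -t 4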
600 requests per second is profoundly low for memcached.
If you're establishing a connection for every request, you'll spend more time connecting than requesting, and you'll burn through your ephemeral ports very rapidly, which might be the problem you're seeing.
There's a couple of things you could try:
If you have memcached running locally, you can use the named socket 'localhost' instead of '127.0.0.1'
Use persistent connections
Apache/PHP/MySQL persistent connections have such a bad reputation because Apache handles each request in a child PHP process, each holding one persistent connection. When visitor traffic scales, MySQL reaches its max connections limit from all the Apache/PHP child processes, each with one persistent connection. There are also the issues of temporary tables, user variables, charsets, transactions and last-insert-id.
In my situation, we don't have to deal with the latter issues, because we are only READING from MySQL. There are no updates to the DB: those are handled by another set of scripts on a different machine. The scripts we want to have persistent connections for are the server end of AJAX requests, returning JSON data.
Each pageview makes 5 AJAX requests, so 6 different PHP child processes are started on the server for each page requested (5 AJAX, 1 HTML). Ideally, I could have ONLY one connection from the PHP/AJAX server to MySQL, shared by all PHP child processes.
So, how can I do this?
Should I use web server software other than Apache? nginx?
cheers
UPDATE: In this situation, the right way to connect to the MySQL server (http://bit.ly/15rBDP):
using the MySQL native driver (mysqlnd)
and mysqli
each client reconnecting using the mysql_change_user function
using MYSQLI_NO_CHANGE_USER_ON_PCONNECT in the PHP config ('coz we don't need the cleanup)
UPDATE 2:
To clarify my question, what I want is: ALL PHP client processes connecting through only ONE persistent connection. This connection is defined, run and stored somehow (my question), but all new PHP client processes know about it and can use it. The problem with Apache/PHP is that each PHP client process has one connection. If I serve 20,000 pages per minute, there will be 20,000 persistent connections. I want the 20,000 PHP child processes to connect to one unique, central, persistent connection to MySQL.
You do realize that having only one (persistent) connection for all your requests effectively serializes all requests to your server. Request C has to wait for request B to finish, which has to wait for request A to finish, and so on.
So having one connection turns your multi-threaded/multi-process webserver into a single-threaded application.
Read the accepted answer on this post: Which is better: mysql_connect or mysql_pconnect
Simply put, using MySQL persistent connections may be good or bad, depending on the hardware resources you have as well as the way you code your applications.
A PHP-native MySQL driver (mysqlnd) is included in PHP 5.3, and it has improved support for persistent connections.
http://dev.mysql.com/downloads/connector/php-mysqlnd/
http://blog.ulf-wendel.de/?p=211
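With mysqli on top of mysqlnd, you enable a persistent connection by prefixing the host with p: (host and credentials below are placeholders):

// 'p:' asks mysqli for a persistent connection; by default mysqlnd
// runs a change-user cleanup on reuse, unless PHP was compiled with
// MYSQLI_NO_CHANGE_USER_ON_PCONNECT
$mysqli = new mysqli('p:localhost', 'user', 'password', 'mydb');
$result = $mysqli->query('SELECT id, name FROM items LIMIT 10');
while ($row = $result->fetch_assoc()) {
    echo $row['name'], "\n";
}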