change the 'max number of clients reached' redis error message to '' - php

Is it possible to change the error message 'max number of clients reached' to null or an empty string?
I'm using Redis as a cache for my DB values, and in cases where I can't get the values from the cache I get them from the DB.
If I could configure this in Redis itself, that would be the best option for me, because my code wouldn't have to change to support that edge case.
If someone has tips on how to avoid such errors, that would be nice as well :) (I'm using PHP scripts with the Predis package.)
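Roughly, the pattern looks like this (loadFromDatabase() is just a placeholder for the real DB code, not from the original post):
<?php
require 'vendor/autoload.php';

// Placeholder for the real DB lookup.
function loadFromDatabase($key) { /* ... */ }

$redis = new Predis\Client(['host' => '127.0.0.1', 'port' => 6379]);

// When Redis is over its client limit, this call throws with
// 'max number of clients reached' instead of simply behaving like a cache miss.
$value = $redis->get('some:key');

if ($value === null) {
    $value = loadFromDatabase('some:key'); // normal cache-miss fallback
}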

The error message max number of clients reached clearly indicates that Redis has reached its client limit and is unable to serve any new requests.
This issue is probably related to incorrect use of Predis\Client in the code: instead of creating a connection object once (a singleton) and reusing it for the lifetime of the process, the code probably creates a new object on every request to Redis and keeps all of these connections open.
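For illustration, a minimal sketch of the intended pattern, one Predis\Client per PHP process (the RedisCache wrapper is just an example name, not taken from the asker's code):
<?php
require 'vendor/autoload.php';

// One Predis\Client per PHP process, created lazily and reused by every call site.
class RedisCache
{
    private static $client = null;

    public static function client()
    {
        if (self::$client === null) {
            self::$client = new Predis\Client(['host' => '127.0.0.1', 'port' => 6379]);
        }
        return self::$client;
    }
}

// Call sites reuse the same connection instead of opening a new one each time.
$value = RedisCache::client()->get('some:key');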
Another thing worth checking is how PHP processes are managed by the web server. The web server (e.g. Apache prefork, nginx with php-fpm) might keep processes around for a long time, both holding connections to Redis and exhausting server resources (memory, CPU).
If none of the above is true, the issue (bug) might be in the predis library.
Bottom line: the code/web server exhausts the maxclients limit.
If you don't have control over the code or the web server (e.g. nginx), you can reduce the number of error messages by tuning Redis itself (a redis.conf sketch follows this list):
increase maxclients above 10k (depending on your Redis server resources). This will reduce the frequency of the error messages.
consider enabling the connection timeout, which is disabled by default (use it with caution, as your code may assume that connections never time out). This will release old connections from a connection pool.
decrease tcp-keepalive from 300 seconds to less than the timeout. This will close connections to dead peers (clients that cannot be reached even though they look connected).
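A redis.conf sketch of those three settings (the values are examples only, not recommendations for your server):
# redis.conf -- example values
maxclients 20000     # allow more simultaneous clients (bounded by available file descriptors)
timeout 300          # close idle client connections after 300 seconds (default 0 = never)
tcp-keepalive 60     # detect and drop dead peers faster than the 300-second default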

Related

AWS EC2: Internal server error when huge number of api calls at the same time from jmeter

We have built the backend of a mobile app in Laravel and MySQL. The application is hosted on AWS EC2 and uses an RDS MySQL database.
We are stress testing the app using JMeter. When we send up to 1000 API requests from JMeter, it seems to work fine. However, when we send more than (roughly) 1000 requests in parallel, JMeter starts getting internal server errors (500) as the response for many requests. The percentage of 500 errors increases as we increase the number of API calls.
Normally, we would expect that if we increase the API calls, they should be queued and the responses should slow down if the server is out of resources. We also monitored the resources on the server and they never reached even 50% of the available resources.
Is there any timeout setting or any other setting that I could tweak so that we don't get the internal server error before reaching 80% resource usage?
Regards
Syed
500 is the externally visible symptom of some sort of failure in the server delivering your API. You should look at the error log of that server to see details of the failure.
If you are using php scripts to deliver the API, your mysql (rds) server may be running out of connections. Here's how that might work.
A php-driven web server under heavy load runs a lot of php instances. Each php instance opens up one or more connections to the mysql server. When (number of php instances) x (connections per instance) gets too large, the mysql server starts refusing new connections.
Here's what you need to do: restrict the number of php instances your web server is allowed to run at a time. When you restrict that number, incoming requests will queue up (in the TCP connect queue of your OS's networking stack). Then, when an instance becomes available, it serves the next item in the queue.
Apache has a MaxRequestWorkers parameter, with an extremely large default value of 256. Try setting it much lower, for example to 32, and see whether your problem changes.
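With the prefork MPM, for example, that setting might look like this (the file path varies by distribution, so treat it as an assumption):
# e.g. /etc/apache2/mods-available/mpm_prefork.conf
<IfModule mpm_prefork_module>
    MaxRequestWorkers 32
</IfModule>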
If you can shrink the number of request workers, you paradoxically may improve high-load performance. Serializing many requests often generates better throughput than trying to do many of them in parallel.
The same goes for the number of active connections to your MySQL server. It obviously depends on the nature of the queries you use, but generally speaking fewer concurrent queries improve performance. So, you won't solve a real-world problem by adding MySQL connections.
You should be aware that the kind of load imposed by server-hammering tools like jmeter is not representative of real world load. 1000 simultaneous jmeter operations without failure is a very good result. If your load-testing setup is robust and powerful, you will always be able to bring your server system to its knees. So, deciding when to stop is an important part of a load testing plan. If this were my system I would stop at 1000 for now.
For your app to be robust in the field, you probably should program it to respond to 500 status by waiting a random amount of time and trying again.
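Sketched here in PHP with cURL purely to illustrate the idea (a mobile client would do the same in its own language; the URL, attempt count, and delays are placeholders):
<?php
// Retry an API call on HTTP 500 after a short random back-off.
function callApiWithRetry($url, $maxAttempts = 3)
{
    $status = 0;
    $body = false;
    for ($attempt = 1; $attempt <= $maxAttempts; $attempt++) {
        $ch = curl_init($url);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        $body = curl_exec($ch);
        $status = curl_getinfo($ch, CURLINFO_HTTP_CODE);
        curl_close($ch);

        if ($status !== 500) {
            break; // success, or an error that retrying won't fix
        }
        usleep(random_int(100, 1000) * 1000); // wait a random 100-1000 ms before retrying
    }
    return [$status, $body];
}

list($status, $body) = callApiWithRetry('https://api.example.com/endpoint');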

How should I handle a "too many connections" issue with mysql?

The web startup I'm working at gets a spike in the number of concurrent web users, from 5,000 on a normal day to 10,000 on weekends. This Saturday the traffic was so high that we started getting a "too many connections" error intermittently. Our CTO fixed this by simply increasing the max_connections value on the database servers. I want to know whether using persistent connections would be a better solution here.
i.e. instead of using:
$db = new mysqli('db_server_ip', 'db_user', 'db_user_pass', 'db_name');
We use:
$db = new mysqli('p:db_server_ip', 'db_user', 'db_user_pass', 'db_name');
We're already using multiple MySQL servers and as well as multiple web servers (Apache + mod_php).
You should share the database connection across multiple web requests. Every process running on the application server should get its own MySQL connection, which is kept open for as long as the process is running and reused for every web request that comes in.
From the PHP Docs:
Persistent connections are good if the overhead to create a link to your SQL server is high.
And
Note, however, that this can have some drawbacks if you are using a database with connection limits that are exceeded by persistent child connections. If your database has a limit of 16 simultaneous connections, and in the course of a busy server session, 17 child threads attempt to connect, one will not be able to.
Persistent connections aren't the solution to your problem. Your problem is that your burst usage is beyond the limits set in your database configuration, and potentially your infrastructure. What your CTO did, increasing the connection limit, is a good first step. Now you need to monitor the resource utilization on your database servers to make sure they can handle the increased load from additional connections. If they can, you're fine. If you start seeing the database server running out of resources, you'll need to set up additional servers to handle the burst in traffic.
Too Many Connections
Cause
This error is caused by
a lot of simultaneous connections, or
by old connections not being released soon enough
You already did SHOW VARIABLES LIKE "max_connections"; and increased the value.
Permanent Connections
If you use permanent or persistent database connections, you always have to take the MySQL directive wait_timeout into account. Closing won't work, but you can lower the timeout so that used resources become available again sooner. Use netstat to find out exactly what's going on, as described here: https://serverfault.com/questions/355750/mysql-lowering-wait-timeout-value-to-lower-number-of-open-connections.
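For example, to lower it at runtime (the value is only an illustration; it applies to connections opened after the change):
SET GLOBAL wait_timeout = 300;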
Do not forget to free your result sets to reduce wasting of db server resources.
Be advised to use temporary, short-lived connections instead of persistent connections.
Introducing persistence goes pretty much against the whole web request-response flow, which is stateless. You know: one pconnect request leaves an 8-hour persistent connection dangling around on the db server, waiting for the next request, which never comes. Multiply that by the number of users and look at your resources.
Temporary connections
If you use mysql_connect(), do not forget to mysql_close().
Set new_link to false and pass the CLIENT_INTERACTIVE flag (see the sketch after this list).
You might also adjust interactive_timeout, which helps stop old connections from blocking up the work.
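A sketch with the legacy mysql_* extension (PHP 5 only; these functions were removed in PHP 7 -- host, credentials, and the query are placeholders):
<?php
// Short-lived connection: reuse an existing link (new_link = false), mark it
// interactive, free the result set, and close the link as soon as work is done.
$link = mysql_connect('db_server_ip', 'db_user', 'db_user_pass', false, MYSQL_CLIENT_INTERACTIVE);

$result = mysql_query('SELECT id, name FROM users LIMIT 10', $link);
while ($row = mysql_fetch_assoc($result)) {
    // ... use $row ...
}

mysql_free_result($result); // release the result set on the server
mysql_close($link);         // release the connection immediately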
If the problem persists, scale
If the problem remains, then decide to scale.
Either by adding another DB server and putting a proxy in front of it (MySQL works well with HAProxy), or by switching to an automatically scaling cloud service.
I really doubt that your stuff is correctly configured.
How can this be a problem, when you are already running multiple MySQL servers, as well as multiple web servers? Please describe your load balancing setup.
Sounds like Apache 2.2 + mod_php + MySQL + unknown balancer, right?
Maybe try
Apache 2.4 + mod_proxy_fcgi + PHP 5.5/5.6 (php-fpm) + MySQL (InnoDb) + HAProxy or
Nginx + PHP 5.5/5.6 (php-fpm) + MySQL (InnoDb) + HAProxy.

PHP max DB connections reached - using Kohana/ORM to initiate connections

In a load test of our PHP-based web application we can easily reach our DB's hard limit of 150 max connections. We run Kohana with ORM to manage the DB connections.
This causes connection exceptions (and thus failed transactions); mysql_pconnect seems to perform even worse.
We're looking for a solution to have graceful degradation under load. Options considered:
A DB connection pool (uh, that's not possible with PHP right?)
Re-try a failed connection when the failure was due to max connections reached
2 seems logical, but Kohana/ORM manages the DB connection process. Can we configure this somehow?
Is there something I'm not thinking of?
EDIT
This is an Amazon AWS RDS database instance, Amazon sets the 150 limit for me, and the server is most certainly configured correctly. I just want to ensure graceful degradation under load with whichever database I'm using. Clearly I can always upgrade the DB and have a higher connection limit, but I want to guard against a failure situation in case we do hit our limit unexpectedly. Graceful degradation under load.
When you say load testing, I am assuming you are pushing roughly 150 concurrent requests, not that you are hitting the connection limit because you make multiple connections within the same request. If so, check out mysql_pconnect. To enable it in Kohana, simply set persistent = true in the config/database file for your connections.
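A sketch of what that config might look like in Kohana 3.x (the exact keys and layout depend on your Kohana version and database driver, so treat this as an assumption):
<?php
// application/config/database.php
return array(
    'default' => array(
        'type'       => 'MySQL',
        'connection' => array(
            'hostname'   => 'your-rds-endpoint',
            'database'   => 'your_db',
            'username'   => 'db_user',
            'password'   => 'db_pass',
            'persistent' => TRUE, // tells the driver to use a persistent connection
        ),
        'table_prefix' => '',
        'charset'      => 'utf8',
    ),
);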
If that doesn't work, then you'll have to find an Amazon product that allows more connections since PHP does not share resources between threads.
This answers your question about PHP database connection pooling.
If the limit is 150 for connections (default for max_connections is 151), you are most likely running mysql without a config file
You will need to create a config file to raise that number
Create /etc/my.cnf and put in these two lines
[mysqld]
max_connections=300
You do not have to restart mysql (you could if you wish)
You could just run this MySQL command to raise it dynamically
SET GLOBAL max_connections = 300;
UPDATE 2012-04-06 12:39 EDT
Try using mysql_pconnect instead of mysql_connect. If Kohana can be configured to use mysql_pconnect, you are good to go.

MongoDB - Too Many Connection Error

We have developed a chat module using node.js and MongoDB sharding, and have gone live on the production server. But today it reached 20,000 connections in MongoDB and we are getting the error "too many connections" in the logs. After that we restarted the node server and started again; now it is back to normal. But we have to know how to solve this problem immediately.
Is there any configuration in MongoDB to kill a connection if it is not being used, or to set an expiry time when establishing the connection?
Please help us close this issue.
Regards,
Kumaran
You're probably not running into a MongoDB issue. There's a cap on the number of connections you can make to MongoDB, which is usually roughly equal to the maximum number of file descriptors available to it.
It sounds like there is a bug in your code (likely) or in mongoose (less likely) that either creates more connections than it closes, or never closes connections in the first place. In Java, for example, creating a new "Mongo" class instance for each query would result in this sort of problem, but I don't work with node.js/mongoose so I do not know what the JS equivalent of that is.
Keep an eye on mongostat and check whether the connection count always increases or whether it sometimes decreases. If it's the former, your code never releases connections for whatever reason. If it's the latter, you're simply creating them faster than idle connections are disconnected. That's usually due to doing something heavyweight (like the driver initialising its connection pool) for every query rather than once.

600+ memcache req/s problems - help!

I am running memcached on my server, and when it hits 600+ req/s it becomes unstable and causes a big load of problems. It appears that when the request rate gets that high, my PHP applications are at random times unable to connect to the memcache server, causing slow load times, which makes nginx and php-fpm freak out, and I receive a bunch of 104: Connection reset by peer errors in my nginx logs.
I would like to point out that my memcache server has 'hot objects' - objects that at times receive 90% of the memcache requests. I also noticed that when so many requests hit a single object, it adds a little more load time to the overall page (when it manages to load).
I would greatly appreciate any help to this problem. Thanks so much!
Switch away from using TCP sockets and go to UNIX sockets (assuming you are on a unix-based server).
Start memcached with a socket enabled:
Add -s /tmp/memcached.socket to your memcached startup line (note: enabling the socket disables networking support)
Then in PHP, connect using persistent connections, and to the new memcache socket:
$memcache_obj = new Memcache;
$memcache_obj->pconnect('unix:///tmp/memcached.socket', 0);
Another recommendation: if you have multiple "types" of cached objects, start a memcached instance for each "type" and distribute your hot items amongst them.
Drupal does this; you can see how their config file and memcached init are set up here.
Also, it sounds to me like your memcached timeout is set WAY too high. If it's anything above 1 or 2 seconds, you can lock scripts up. The timeout should be reached quickly, and the script should then fall back to retrieving the object via another method (SQL, file, etc.)
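A sketch of that fallback pattern with the Memcache extension (the 1-second timeout, socket path, key, and the loadFromDatabase() helper are all illustrative):
<?php
// Hypothetical fallback loader; replace with your own SQL/file lookup.
function loadFromDatabase($key) { /* ... */ }

$memcache = new Memcache;
// The third argument is the connection timeout in seconds: keep it low so a
// busy memcached cannot stall the whole page.
$connected = @$memcache->pconnect('unix:///tmp/memcached.socket', 0, 1);

$value = $connected ? $memcache->get('hot:object') : false;
if ($value === false) {
    $value = loadFromDatabase('hot:object'); // fall back instead of waiting on memcached
}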
The other thing is to verify that your memcache isn't being pushed into swap. If your cache is smaller than your average free RAM, try starting memcached with the -k option; this will force its cache to always stay in RAM so it can't be swapped.
If you have a multi-core server, also make sure memcached is compiled with thread support, and enable it using -t <numcores>
600 requests per second is profoundly low for memcached.
If you're establishing a connection for every request, you'll spend more time connecting than requesting and burn through your ephemeral ports very rapidly which might be the problem you're seeing.
There's a couple of things you could try:
If you have memcached running locally, you can use the named socket 'localhost' instead of '127.0.0.1'
Use persistent connections
