AWS Redis with Nginx and PHP + Too many connections - php

I'm using AWS ElastiCache Redis and I've been having issues with too many connections. Using "INFO ALL" I can see that total connections can grow to around 50,000 in a day.
Currently I'm using PHP to connect to Redis, and the connection code is duplicated across many different PHP functions, so it can be called many times for a single page request. PHP sessions are also stored in Redis.
I wanted to ask:
- Can I create a persistent connection to Redis from PHP, i.e. one connection for all requests to use?
- Should I use a global PHP variable to hold a Redis connection that the different functions share, rather than creating a new connection to Redis in each function?
- I've read about Nginx proxying Redis requests with "redis_pass" - would this connect just once and proxy requests through? Would that be a better solution?
Any other solutions?
Just looking for a sensible way to reduce the number of Redis connections from PHP (note: using PHP 7.0.6).
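For context, here is a minimal sketch of the shared-connection approach I'm considering (the RedisClient wrapper name is just illustrative, and it assumes the phpredis extension):

<?php
// Illustrative sketch: one shared phpredis connection per PHP-FPM worker.
class RedisClient
{
    private static $redis = null;

    public static function get()
    {
        if (self::$redis === null) {
            self::$redis = new Redis();
            // pconnect() keeps the TCP connection open in the worker
            // process so later requests reuse it instead of reconnecting.
            self::$redis->pconnect('my-cache.example.com', 6379);
        }
        return self::$redis;
    }
}

// Each function would use the shared instance rather than its own connection:
$value = RedisClient::get()->get('some:key');

If I read the phpredis docs correctly, sessions could reuse persistent connections too, via session.save_handler = redis and session.save_path = "tcp://my-cache.example.com:6379?persistent=1" in php.ini (hostname is a placeholder).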
thanks :)
Adam
Edit:
I noticed you can also get your PHP session data from Redis via Nginx - would this also reduce connections? I assume yes:
How can I get the value from Redis and put it in a variable in NGiNX?

Maybe not helpful, but do you have the Redis server secured from the internet? Redis servers are often a target for attackers trying to exploit vulnerabilities.

Related

redis : 40+ servers reading the same redis content

I'm gathering sports data every minute with PHP scripts and storing it in Redis. It's all done on one Ubuntu 16.04 server. Let's call it the collector server.
My goal is to make that Redis database available to our customers. The DB will be read-only for our customers.
The way we connect customers' servers to our Redis content is by pointing them directly at the Redis host:port of the collector server. If all our clients want to access the DB at once, I'm afraid the collector server would get stuck (40+ customers)...
That Redis content is updated every minute, and we are the owners of the customers' servers and content.
Is there a Redis setup, or some other way, to have 40+ external servers reading the same Redis DB without killing the collector server?
Before scaling, I recommend that you benchmark your application against Redis with real and/or simulated load - a single Redis server can handle an impressive load (see https://redis.io/topics/benchmarks), so you may be over-engineering this.
That said, to scale reads only, read about Redis replication. If you want to scale writes as well, read about Redis Cluster.
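Client-side, the read-scaling pattern is roughly this (hostnames are placeholders; the sketch assumes the phpredis extension):

<?php
// Sketch: the collector writes to the primary; customers read from replicas.
$primary = new Redis();
$primary->connect('collector.example.com', 6379);
$primary->set('match:1234:score', '2-1');    // collector writes here

$replica = new Redis();
$replica->connect('replica1.example.com', 6379);
$score = $replica->get('match:1234:score');  // customers read here

A replica is created by pointing a second Redis server at the primary (slaveof collector.example.com 6379 in its config; replicaof in newer releases); replicas are read-only by default, and you can add more of them to spread the 40+ readers.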
+1 for Itamar's answer. But one more important thing to keep in mind: letting your customers connect to your Redis resource directly is dangerous and should be avoided.
They will have your host:port and password, and they will be able to connect, write, modify, delete, and even shut down the server or change your password.
It is not scalable, and you'll probably notice it when it is already too late and too hard to change.
Some customers might have trouble getting through routers and firewalls with the non-standard TCP port.
You should have one or more app servers that do the Redis communication for your customers, as sketched below.
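For example, even a tiny read-only endpoint like this hypothetical sketch keeps customers off the Redis port entirely (host and key scheme are made up):

<?php
// Hypothetical read-only HTTP endpoint in front of Redis.
// Customers call this over HTTP(S); only the app server knows the Redis host.
$redis = new Redis();
$redis->pconnect('127.0.0.1', 6379);

$key = isset($_GET['key']) ? $_GET['key'] : '';
// Whitelist the key space so clients can only read what you allow.
if (!preg_match('/^sports:[a-z0-9:]+$/', $key)) {
    http_response_code(400);
    exit;
}
header('Content-Type: application/json');
echo json_encode(array('key' => $key, 'value' => $redis->get($key)));

This also gives you a place to add authentication, rate limiting, and caching later.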

phpredis persistent connect with nginx + PHP-FPM

I have a classical server configuration with Nginx + PHP-FPM. Most pages on my site contain data stored in Redis, so there are very many small, indirect (through PHP-FPM) requests to Redis from many independent users. I use the phpredis PHP extension to communicate with Redis from PHP code. Can I use phpredis's pconnect() method to decrease the number of TCP connections between my backend servers and the Redis server? And can I be sure that different users' data won't get mixed up over the shared connections?
PHP version is 5.3.x
phpredis version is 2.2.4
Using pconnect will reuse an existing connection and decrease the number of TCP connections, which improves performance.
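In code the change is a one-liner (host is a placeholder):

<?php
$redis = new Redis();
// connect() opens a new TCP connection on every request:
// $redis->connect('redis-host', 6379);

// pconnect() reuses a connection held open by the PHP-FPM worker process,
// so the connection count is bounded by the number of workers:
$redis->pconnect('redis-host', 6379);
$redis->set('foo', 'bar');

Note that each PHP-FPM worker keeps its own connection, so expect roughly pm.max_children connections per backend server. And since a worker handles one request at a time, different users' commands are never interleaved on a shared connection.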

How to design the system used for data query and data update

The target is simple: clients post HTTP requests to query data and update records by key. Peak requirement: 500 requests/sec (the higher the better, but ideally meeting this requirement while keeping the system simple and using fewer machines).
What I've done: Nginx + php-cgi (PHP) to serve HTTP requests; the PHP code uses Thrift RPC to retrieve data from a DB proxy that is solely responsible for querying and updating the DB (MySQL). The DB proxy uses a MySQL connection pool and Thrift's TNonblockingServer. (In my country there are two ISPs; the DB proxy will be deployed on multi-ISP machines, and so will the DB, while web servers can be deployed on single-ISP machines, based on experience.)
What troubles me: when I do a stress test (above 500 requests/sec), I find "TSocket: Could not connect to 172.19.122.32:9090 (Connection refused [111])" in the PHP log. I think it may be caused by running out of ports (possibly an incorrect conclusion). So I planned to use a Thrift connection pool to reduce the number of Thrift connections, but there is no connection pool in PHP (there seem to be some DB connection pool techniques) and PHP does not support the feature.
So I think maybe the project was designed the wrong way from the beginning (e.g. using PHP and Thrift). Is there a good way to solve this based on what I've done? I expect most people will doubt my awkward scheme, so a new scheme would also help a lot.
thanks.
"TSocket: Could not connect to 172.19.122.32:9090 (Connection refused [111])" from php log shows the ports running out because of too many short connections in a short time. So I config the tcp TIME_WAIT status to recycle port in time using:
sysctl -w net.ipv4.tcp_timestamps=1
sysctl -w net.ipv4.tcp_tw_recycle=1
It works!
What troubled me is solved, but changing these kernel parameters affects NAT (tcp_tw_recycle is known to break clients behind NAT), so it's not a perfect solution. I think a better design for this system is still worth discussing.
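One alternative that avoids touching kernel parameters: if I read the Apache Thrift PHP library correctly, TSocket can open a persistent socket (it uses pfsockopen() when its third constructor argument is true), so each PHP worker reuses one connection instead of burning an ephemeral port per request. A sketch under that assumption:

<?php
// Sketch; assumes Thrift's PHP library is autoloaded (e.g. via Composer)
// and that TSocket's third argument enables a persistent socket.
use Thrift\Transport\TSocket;
use Thrift\Transport\TBufferedTransport;
use Thrift\Protocol\TBinaryProtocol;

$socket    = new TSocket('172.19.122.32', 9090, true); // true = persistent
$transport = new TBufferedTransport($socket);
$protocol  = new TBinaryProtocol($transport);
$transport->open(); // reuses an already-open socket when one exists
// $client = new DBProxyClient($protocol); // hypothetical generated client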

Is it a good practice to close the external connections (e.g. memcached, redis) in PHP?

My Memcached and Redis servers are separate from my web server, so my PHP scripts have to open connections to these two external IPs.
My concern is that it would be better for my web server to keep its connections to the two Memcached/Redis servers open, so that when new users request a PHP page, the web server does not need to reconnect to Memcached/Redis again and again.
$redis = new Redis();
$redis->close();
(and similarly for Memcached and MySQL)
I am unsure what close actually means in this case. Does it mean closing the connection to the Redis server for this particular PHP script execution? Given my concern above, would calling close() in fact hurt my performance?
Nothing much happens on the network when you "close" the actual connection from your server. This is more of a memory-management issue within the application than a networking/infrastructure issue between servers. Consider a running program that may instantiate an arbitrary number of objects: the close() method allows these objects to be destructed and garbage-collected. If you created hundreds of instances without closing them when you were finished with them, you'd end up with memory leaks in your application.
If you'll consistently have just one connection and you're wondering whether to close and reopen it every time it's needed, rest assured: this is what connection pools are for. More info here. I know Predis uses connection pools; I'm not sure about whatever library PHP uses to interface with Memcached.
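To make the close() semantics concrete with phpredis (which the snippet in the question uses), a sketch based on my understanding of its documented behavior (host is a placeholder):

<?php
// With a regular connection, close() tears down the TCP socket:
$r = new Redis();
$r->connect('redis-host', 6379);
$r->get('key');
$r->close();                       // socket is closed

// With a persistent connection, close() only releases the PHP object;
// the underlying socket stays open for reuse by this worker's next request:
$p = new Redis();
$p->pconnect('redis-host', 6379);
$p->get('key');
$p->close();                       // socket survives in the worker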

(PHP) Choose memcache::connect or memcache::pconnect?

I'm using the PHP memcache module to connect to a local memcached server (127.0.0.1), but I don't know which I should use: memcache::connect() or memcache::pconnect()? Will memcache::pconnect consume a lot of the server's resources?
Thank you very much for your answer!
Memcached uses a TCP connection (the handshake is 3 extra packets, closing is usually 4 packets) and doesn't require any authentication. Therefore, the only upside to using a persistent connection is that you don't need to send those extra 7 packets and don't have to worry about a leftover TIME_WAIT port for a few seconds.
Sadly, the downsides of a persistent connection far outweigh those minor upsides. So I recommend not using persistent connections with memcached.
pconnect stands for persistent connection. This means that the client (in your case the script) will constantly keep a connection open to your server, which might not be a resource problem so much as a lack of available connections.
You should probably use the standard connect unless you know you need persistent connections.
As far as I know, the same rules that govern persistent vs. regular connections when connecting to MySQL apply to memcached as well. The upshot is, you probably shouldn't use persistent connections in either case.
"Consumes" TCP port.
In the application I'm developing I use pconnect, as it uses a connection pool; from the hardware point of view, one server keeps one connection to memcached. I don't know exactly how it works, but I think memcached is smart enough to track the IP of the client machine.
I've played with memcached for a long time and found (using Memcache::getStats()) that the connection count doesn't increase when using pconnect.
You can use a debug page that shows memcached stats, then try pconnect or connect and see what's going on.
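For example (a sketch; assumes the pecl memcache extension and a local daemon):

<?php
// Watch the daemon's connection count while using pconnect().
$mc = new Memcache();
$mc->pconnect('127.0.0.1', 11211); // persistent: reused across requests

$stats = $mc->getStats();
echo $stats['curr_connections'];   // stays flat under load with pconnect()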
One downside is that PHP gets no blatant error or warning if one or all of the persistently-connected memcached daemons vanish(es). That's a pretty darn big downside.
