I have a classic server configuration with Nginx + PHP-FPM. Most pages on my site contain data stored in Redis, so many independent users generate a large number of small, indirect (through PHP-FPM) requests to Redis. I use the phpredis PHP extension to communicate with Redis from PHP code. Can I use the phpredis pconnect() method to decrease the number of TCP connections between my backend servers and the Redis server? And can I rely on different users' data not getting mixed up within the shared connections?
PHP version is 5.3.x
phpredis version is 2.2.4
Using pconnect() will reuse an existing connection within the same PHP-FPM worker process instead of opening a new one on every request, which reduces the number of TCP connections and improves performance.
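For illustration, a minimal sketch with phpredis (the host and port are placeholders):

$redis = new Redis();
// pconnect() keeps the TCP connection open after the request ends and
// reuses it for later requests served by the same PHP-FPM worker process.
$redis->pconnect('127.0.0.1', 6379);
$redis->set('some:key', 'value');
echo $redis->get('some:key');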
I'm using AWS ElastiCache Redis and I've been having issues with too many connections to Redis. Using "INFO all" I can see the total number of connections can grow to around 50,000 in a day.
Currently I'm using PHP to connect to Redis, and the connection code is spread across many different PHP functions, so it can be called many times for a single page request. PHP sessions are also stored in Redis.
I wanted to ask:
- Can I create a persistent connection to redis from PHP? One connection for all requests to use.
- Should I use a global PHP variable as the Redis connection that the different functions share, rather than creating a new connection to Redis in each function? (See the sketch below.)
- I've read about Nginx proxying Redis requests with "redis_pass" - would this connect just once and proxy requests through? Would this be a better solution?
Any other solutions?
Just looking for a wise way to reduce the number of redis connections from PHP. (Note: using php 7.0.6).
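To make the second point concrete, this is roughly what I have in mind (a sketch only; the function name and host are my own, and it assumes the phpredis extension):

function redis_client() {
    static $redis = null;
    if ($redis === null) {
        $redis = new Redis();
        // pconnect() lets the same PHP-FPM worker reuse the TCP connection
        // across requests; the host and port here are placeholders.
        $redis->pconnect('my-redis-host.example.com', 6379);
    }
    return $redis;
}

// every function would then call redis_client() instead of
// opening its own connection
$value = redis_client()->get('some:key');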
thanks :)
Adam
Edit:
I noticed you can also get your PHP session data from Redis via Nginx - would this also reduce connections? I assume yes:
How can I get the value from Redis and put it in a variable in NGiNX?
Maybe not helpful, but do you have the redis server secured from the internet? They are often a target for attackers trying to exploit vulnerabilities.
I am experimenting with using Redis for a Drupal website, hosted on Ubuntu 14.04.
I have installed the redis drupal module and am using the Predis library. I have also installed the 'redis-server' Ubuntu package and left the default configuration.
Configuring the Drupal site to use Redis for its cache backend works fine and the pages are lightning fast.
The problem appeared when I spun up an m3.medium AWS instance and hosted the Redis server there. The reason for this is that we want to use one Redis server and connect to it from multiple servers (the live website is hosted on multiple instances behind a load balancer, so each instance should connect to the same Redis server).
I have set up the Redis server on the instance, modified the redis.conf file to bind the correct IP address so it can be accessed from the outside, opened up port 6379, and then tried connecting to it from my local computer:
redis-cli -h IP
It worked fine so I decided to flip my local site's configuration to point to the new redis server.
The moment I did that, the site became painfully slow; at first I thought it might not load at all. After almost a minute it finally loaded the home page. Clicking around the site was almost as slow, though the time dropped to maybe 10-15 seconds. That is still unacceptable and doesn't even compare to the lightning-fast page loads with the local Redis server.
My question is: is there some specific configuration I need to do to make the remote connection faster? Is there something preventing it from performing well? some bottleneck somewhere?
Let me know if you want me to add the drupal settings.php configuration, although I am using a pretty standard config.
I ran the same configuration as you are trying, for a PHP application, and had no issues hosting Redis on either a small or medium instance while handling large amounts of traffic. There must be a config issue somewhere. Another option for debugging would be to try switching to ElastiCache (AWS's Redis offering); it requires that all clients be within the same region, but it could make finding your problem very easy.
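If it isn't obvious where the time goes, one way to narrow it down is to time individual round trips from the web instance to the remote Redis server; for example with Predis (a sketch: the host is a placeholder and it assumes Predis is installed via Composer):

require __DIR__ . '/vendor/autoload.php';

$client = new Predis\Client(array(
    'scheme' => 'tcp',
    'host'   => 'REMOTE_REDIS_IP',
    'port'   => 6379,
));

// average the cost of 100 PING round trips
$start = microtime(true);
for ($i = 0; $i < 100; $i++) {
    $client->ping();
}
$ms = (microtime(true) - $start) / 100 * 1000;
printf("average round trip: %.2f ms\n", $ms);

A Drupal page can trigger many cache reads, so even a few milliseconds of network latency per round trip adds up quickly; if the measured average is high, the problem is the network path rather than Redis itself.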
I'm looking into using Heroku for a PHP app that uses Redis. I've seen the various addons for redis. With Redis To Go, for example, you can use an environment variable $_ENV['REDISTOGO_URL'] in your PHP code, as the URL of the Redis Server.
Most of these add-ons have their own pricing schemes, which I'd like to avoid. I'm a little confused about how Heroku works. Is there a way I can just install Redis on my own dynos without the add-ons?
For example, could I have one worker dyno act as the server and another act as a client? If possible, how would I go about:
- Installing and running the Redis server on a dyno? Is this just the same as installing on any other Unix box? Can I just SSH in and install whatever I want?
- Having one dyno connect to another via an IP/port over TCP? Do the worker dynos have their own referenceable IP addresses or named URLs that I can use? Can I get them dynamically from PHP somehow?
The PHP code for a Redis client assumes there is a host and port that you can connect to, but I have no idea what those would be:
require __DIR__ . '/vendor/autoload.php'; // Predis installed via Composer

$redis = new Predis\Client(array(
    "scheme" => "tcp",
    "host"   => $host, // how do I get the host/port of a dyno?
    "port"   => $port,
));
Running redis on a dyno is an interesting idea. You will probably need to create a redis buildpack so your dynos can download and run redis. As "redis has no dependencies other than a working GCC compiler and libc" this should be technically possible.
However, here are some problems you may run into:
Heroku dynos don't have a static IP address
"dynos don’t have static IP addresses .. you can never access a dyno directly by IP"
Even if you set up and run Redis on a dyno I am not aware of a way to locate that dyno instance and send it redis requests. This means your Redis server will probably have to run on the same dyno as your web server/main application.
This also means that if you attempt to scale your app by creating more web dynos you will also be creating more local redis instances. Data will not be shared between them. This does not strike me as a particularly scalable design, but if your app is small enough to only require one web dyno it may work.
Heroku dynos have an ephemeral filesystem
"no files that are written are visible to processes in any other dyno and any files written will be discarded the moment the dyno is stopped or restarted"
By default Redis writes its RDB file and AOF log to disk. You'll need to regularly back these up somewhere so you can fetch and restore after your dyno restarts. See the documentation on Redis persistence.
Heroku dynos are rebooted often
"Dynos are cycled at least once per day, or whenever the dyno manifold detects a fault in the underlying hardware"
You'll need to be able to start your redis server each time the dyno starts and restore the data.
Heroku dynos have 512MB of RAM
"Each dyno is allocated 512MB of memory to operate within"
If your Redis server is running on the same dyno as your web server, subtract the RAM needed for your main app. How much Redis memory do you need?
Here are some questions attempting to estimate and track Redis memory use:
Redis: Database Size to Memory Ratio?
Profiling Redis Memory Usage
--
Overall: I suggest reading up on 12 Factor Apps to understand a bit more about heroku's intended application model.
The short version is that dynos are intended to be independent workers that can be easily created and discarded to meet demand, and that dynos access various resources to read or write data and serve your app. A redis instance is an example of a resource. As you can see from the items above, by using a redis add-on you're getting something that's guaranteed to be static, stable, and accessible.
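For comparison, wiring an add-on into the Predis code from the question is a small change; a sketch, assuming the REDISTOGO_URL value mentioned above is a redis:// URL in the usual user:password@host:port form:

$url = parse_url($_ENV['REDISTOGO_URL']);

$redis = new Predis\Client(array(
    'scheme'   => 'tcp',
    'host'     => $url['host'],
    'port'     => $url['port'],
    'password' => isset($url['pass']) ? $url['pass'] : null,
));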
Reading material:
http://www.12factor.net/ - specifically Processes and Services
The Heroku Process Model
Heroku Blog - The Process Model
Redis has a client-server architecture: you can install it on one machine (in your case, a dyno) and access it from any client.
For more help on client libraries you can refer to this link,
or you can go through this Redis documentation, which is a simple case study of implementing a Twitter clone using Redis as the database and PHP.
Colleagues!
I'm running PHP 5.3 (5.3.8) with the memcache (2.2.6) client library (http://pecl.php.net/package/memcache) to talk to memcached servers.
My goal is to have failover solution for sessions engine, namely:
Only native php sessions support (no custom handlers)
Few memcached servers in the pool
What I expect is that if one of the memcached servers is down, PHP will try the second server in the pool (connect to it successfully and carry on). However, when the first memcached server in the pool is down I receive the following error:
Session start failed. Original message: session_start(): Server 10.0.10.111 (tcp 11211) failed with: Connection refused (111)
while the relevant PHP settings are:
session.save_handler memcache
session.save_path tcp://10.0.10.111:11211?persistent=1&weight=1&timeout=1&retry_interval=10, tcp://10.0.10.110:11211?persistent=1&weight=1&timeout=1&retry_interval=10
and the memcache settings (which I think are close to standard) are:
Directive Local Value
memcache.allow_failover 1
memcache.chunk_size 8192
memcache.default_port 11211
memcache.default_timeout_ms 1000
memcache.hash_function crc32
memcache.hash_strategy standard
memcache.max_failover_attempts 20
memcached is still running on the second server and is perfectly accessible from the web server:
telnet 10.0.10.110 11211
Trying 10.0.10.110...
Connected to 10.0.10.110 (10.0.10.110).
Escape character is '^]'.
get aaa
END
quit
Connection closed by foreign host.
So in other words, instead of trying all of the listed servers in turn, it fails after the unsuccessful attempt to connect to the first server in the list. Finally, I do realize that 3.0.x releases of the client library are available, but they don't look reliable enough to me as 3.0.x is still in beta.
Please advise how I can get the desired behavior with standard PHP, the client library and the server.
Thanks a lot!
Best,
Eugene
Use the Memcached extension. Note that there are two memcache plugins for PHP. One is called Memcache, the other is called Memcached. Yes, that's confusing, but true anyway.
The Memcache plugin supports those complex URLs you're using, with the protocol identifier (tcp) and the parameters (persistence and so on), while the Memcached plugin supports connection pools.
The documentation you're mentioning in the comments above (http://www.php.net/manual/en/memcached.sessions.php) is about the Memcached extension, not about Memcache.
Update: Some interesting read: https://serverfault.com/questions/164350/can-a-pool-of-memcache-daemons-be-used-to-share-sessions-more-efficiently
I would like to thank everybody who participated in this question. The answer is the following: in reality memcache (not memcached) as a session handler does support comma-separated servers in session.save_path, and moreover it does support failover. The error mentioned above, Session start failed. Original message: session_start(): Server 10.0.10.111 (tcp 11211) failed with: Connection refused (111), is raised only at notice level (E_NOTICE, value 8). The engine just informs you that one of the servers is unavailable (which is logical; otherwise how would you know?) and then successfully connects to the second server and uses it.
So all of the misunderstanding was caused by weak documentation, the memcache/memcached confusion and the paranoid (E_ALL) settings of my custom error handler. In the meantime the issue has been resolved by ignoring notices about the Connection refused (111) error in the session-establishing context.
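For reference, the workaround looks roughly like this (a sketch only; the matched string is taken from the notice quoted above):

// suppress only the failover notice raised while the memcache session
// handler tries the first (dead) server; let everything else through
set_error_handler(function ($errno, $errstr) {
    if ($errno === E_NOTICE && strpos($errstr, 'Connection refused (111)') !== false) {
        return true;  // swallow the failover notice
    }
    return false;     // defer to the normal error handling
});

session_start();

restore_error_handler();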
You must change the hash strategy
Change your config to
memcache.hash_strategy consistent
With the consistent hashing strategy, keys keep mapping to the same servers when the pool changes, so if one of the servers goes down only the keys that lived on it are redistributed to the remaining servers on the next request, instead of most keys being rehashed.
I have 2 load balanced web servers and a DB server. Each one has 6GB of ram dedicated to memcache.
On the 2 web servers, I'm having issues with memcache where sometimes they don't seem to see the same pool of data.
Currently I have it set up so that each of the 2 web servers connects to localhost first and then adds the other 2 servers to the pool. Should I keep the connection string the same on both and have each of them connect to the DB server's memcache instance first, and then add themselves to the pool afterwards, in the same order?
The order of the memcached servers in your list is important. Also important: never use "localhost". The key hashes are computed from the pool of servers you have provided; if the server list differs between machines, the hashes come out differently and the two web servers end up looking for the same key on different servers.
http://code.google.com/p/memcached/wiki/NewConfiguringClient#Configuring_Servers_Consistently
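For what it's worth, a sketch of a consistent pool setup with the pecl/memcache extension (the IPs are placeholders): the same list, in the same order, with real addresses instead of "localhost", on every web server:

$mc = new Memcache();
// identical order on web server 1, web server 2 and the DB server
$mc->addServer('10.0.0.11', 11211); // web server 1
$mc->addServer('10.0.0.12', 11211); // web server 2
$mc->addServer('10.0.0.20', 11211); // DB server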