I am using PHP-memcache on various web servers to connect to memcache servers.
I connect like this:
$memcache = new Memcache;
$memcache->addServer('memcache_host', 11211);
$memcache->addServer('memcache_host2', 11211);
Then I fetch or set data using get and set.
It works fine in most cases, but if something slows down I see a sudden increase in memcache connections, which creates issues.
I think this is because addServer creates persistent connections by default and may not be closing them quickly after serving the request.
A similar issue has been reported here also.
So please let me know: is this only because of the default behavior of the addServer function? Should I use non-persistent connections by passing false as the third argument to addServer?
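To be concrete, I mean switching to something along these lines (as far as I understand, the third parameter of addServer toggles persistence, so this is just a sketch of what I have in mind):
$memcache = new Memcache;
// third argument false = non-persistent connection (the default is true)
$memcache->addServer('memcache_host', 11211, false);
$memcache->addServer('memcache_host2', 11211, false);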
Because open memcached connections might be kept by the kernel in keepalive mode if not explicitly closed by the client, lowering the following parameters might help. They will affect every other TCP connection as well, though, such as SSH, so setting tcp_keepalive_time too low is not a good idea.
Create the following file:
vim /etc/sysctl.d/low-tcp-timeout.conf
# Keep connections in keepalive for 600 seconds. Default 7200s = 2h.
net.ipv4.tcp_keepalive_time = 600
# 0 probes. Default 9
net.ipv4.tcp_keepalive_probes = 0
# Default 75 seconds between each probe
net.ipv4.tcp_keepalive_intvl = 75
Then run sysctl -p to apply these values.
You can also have a look at net.ipv4.tcp_fin_timeout.
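To see what you are currently running with before changing anything, something like this should work (these are the standard sysctl/procfs locations):
sysctl net.ipv4.tcp_keepalive_time net.ipv4.tcp_fin_timeout
# or read a value directly
cat /proc/sys/net/ipv4/tcp_fin_timeout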
I have configured Oracle to use DRCP and setup PHP to connect to the pooled connection, but when I pull the connection stats there are always zero hits while the number of requests and number of misses continues to climb.
CCLASS_NAME NUM_REQUESTS NUM_HITS NUM_MISSES NUM_WAITS WAIT_TIME CLIENT_REQ_TIMEOUTS
BIGTUNACAN.drcp_pooling_test 9828 0 9828 6 0 0
My connection in tnsnames.ora is using SERVER = POOLED and my php.ini has the connection class (oci8.connection_class) set to drcp_pooling_test.
I'm at a loss right now why cached connections would never be used.
TNS entry below
TESTPOOL.WORLD =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = test-db.bigtunacan.com)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SID = TEST)
      (SERVER = POOLED)
    )
  )
The problem won't be the use of SID (although you should still change that, since you potentially lose a bunch of inherent functionality), but that you are using oci_connect(). Use oci_pconnect() instead. This is a guess, since you didn't include a test case.
NUM_HITS is the "Total number of times client requests found matching pooled servers and sessions in the pool", but oci_connect() has to recreate the session, so it won't register a hit. See table 11 on page 261 of The Underground PHP and Oracle Manual, which says that oci_connect() "Gets a pooled server from the DRCP pool and creates a brand new session." You will get some benefit from reusing a pooled server, but not the full benefit that oci_pconnect() can give you.
However, you should step back and really review why you want DRCP. If you're not already using oci_pconnect() then your PHP connection calls will be slow. Change to use oci_pconnect(). You may then be able to reduce the number of Apache processes needed, which will reduce the number of concurrent connections needed. Implement other best practices such as using bind variables. Only if your database host doesn't have enough memory to handle all the open connections concurrently would you then move to use Shared Servers or DRCP. DRCP is a pool solution, so there is some overhead (and a small amount of extra administration).
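As a rough sketch of the change on the PHP side (the credentials are placeholders; I'm only assuming the TESTPOOL.WORLD alias from your tnsnames.ora), it is just a different call:
// oci_pconnect() caches the connection/session in the PHP process between
// requests, so DRCP can find a matching pooled session instead of creating one
$conn = oci_pconnect('myuser', 'mypassword', 'TESTPOOL.WORLD');
if (!$conn) {
    $e = oci_error();
    trigger_error($e['message'], E_USER_ERROR);
}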
I'm facing really weird behaviour while testing persistent connections from PHP to MySQL. I have a small script that looks like this:
<?php
$db = new mysqli('p:db-host','user','pass','schema');
$res = $db->query('select * from users limit 1');
print_r($res->fetch_assoc());
My setup is :
OS: CentOS 7.3
PHP/7.1.18
php-fpm
nginx/1.10.2
MySQL-5.6.30
I tried to do some requests with ab:
$ ab -c 100 -n 500 http://mysite/my_test_script.php
PHP-FPM was configured to have 150 workers ready, and I saw what I was expecting: 150 established connections to MySQL, which stayed open after ab finished. I launched ab once again, and the behaviour was still the same: 150 connections, no new connections were opened. All fine.

Then I created a script which did exactly the same requests, same IP, same HTTP headers, but used curl to make the request, and BOOM, I had 300 connections on MySQL instead of 150. I launched the script again and still got 300 connections. Subsequent runs of the same script didn't increase the number of connections.

Did anyone ever face anything like this? Does anyone know what could make PHP open more connections than needed? Am I missing something obvious?
If it's not clear what I'm asking, please comment below and I will try to explain my problem better.
P.S. I tried this with PDO too, same behaviour.
EDIT: My tests were not accurate
After further testing I noticed that my first tests were not accurate. I was in a multi-tenant environment and different connections (different schemas) were initialized when I launched ab. In my case the PHP documentation was a bit misleading; it says:
PHP checks if there's already an identical persistent connection (that remained open from earlier) - and if it exists, it uses it. If it does not exist, it creates the link. An 'identical' connection is a connection that was opened to the same host, with the same username and the same password (where applicable).
http://php.net/manual/en/features.persistent-connections.php
Maybe it's obvious to everyone, I don't know; it was not for me. Passing the 4th parameter to mysqli made PHP consider the connections not identical. Once I changed my code to something like this:
<?php
$db = new mysqli('p:db-host','user','pass');
$db->select_db('schema');
$res = $db->query('select * from users limit 1');
print_r($res->fetch_assoc());
The application started to behave as I expected: one connection per worker.
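For anyone hitting the same thing with PDO, a rough equivalent of the workaround (a sketch with placeholder credentials, not something I have load-tested) is to keep the DSN identical across tenants and select the schema afterwards:
<?php
// persistent connection via PDO::ATTR_PERSISTENT; dbname is left out of the
// DSN so connections to different schemas still count as "identical"
$db = new PDO('mysql:host=db-host', 'user', 'pass', array(PDO::ATTR_PERSISTENT => true));
$db->exec('USE `schema`');
$res = $db->query('select * from users limit 1');
print_r($res->fetch(PDO::FETCH_ASSOC));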
I've done quite a bit of reading before asking this, so let me preface by saying I am not running out of connections, or memory, or CPU, and from what I can tell, I am not running out of file descriptors either.
Here's what PHP throws at me when MySQL is under heavy load:
Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (11 "Resource temporarily unavailable")
This happens randomly under load - but the more I push, the more frequently PHP throws this at me. While this is happening I can always connect locally through the console, and from PHP through 127.0.0.1 instead of "localhost" (which uses the faster Unix socket).
Here's a few system variables to weed out the usual problems:
cat /proc/sys/fs/file-max = 4895952
lsof | wc -l = 215778 (during "outages")
Highest usage of available connections: 26% (261/1000)
InnoDB buffer pool / data size: 10.0G/3.7G (plenty o room)
soft nofile 999999
hard nofile 999999
I am actually running MariaDB (Server version: 10.0.17-MariaDB MariaDB Server)
These results are generated both under normal load and by running mysqlslap during off hours, so slow queries are not an issue - just high connection counts.
Any advice? I can report additional settings/data if necessary - mysqltuner.pl says everything is a-ok
And again, the revealing thing here is that connecting via IP works just fine and is fast during these outages - I just can't figure out why.
Edit: here is my my.ini (some values may seem a bit high due to my recent troubleshooting changes; please keep in mind that there are no errors in the MySQL logs, system logs, or dmesg):
[mysqld]
socket=/var/lib/mysql/mysql.sock
skip-external-locking
skip-name-resolve
table_open_cache=8092
thread_cache_size=16
back_log=3000
max_connect_errors=10000
interactive_timeout=3600
wait_timeout=600
max_connections=1000
max_allowed_packet=16M
tmp_table_size=64M
max_heap_table_size=64M
sort_buffer_size=1M
read_buffer_size=1M
read_rnd_buffer_size=8M
join_buffer_size=1M
innodb_log_file_size=256M
innodb_log_buffer_size=8M
innodb_buffer_pool_size=10G
[mysql.server]
user=mysql
[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
open-files-limit=65535
Most likely it is due to net.core.somaxconn
What is the value of /proc/sys/net/core/somaxconn?
net.core.somaxconn
# The maximum number of "backlogged sockets". Default is 128.
These are connections in the queue that are not yet accepted; anything above that queue limit will be rejected. I suspect this is the problem in your case. Try increasing it according to your load.
As the root user, run:
echo 1024 > /proc/sys/net/core/somaxconn
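That echo only changes the running kernel. To keep the value across reboots you would also add it to /etc/sysctl.conf (or a drop-in file under /etc/sysctl.d/) and reload, roughly like this:
echo 'net.core.somaxconn = 1024' >> /etc/sysctl.conf
sysctl -p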
This is something that can and should be solved by analysis. Learning how to do this is a great skill to have.
Analysis to find out just what is happening under heavy load (number of queries, execution times) should be your first step. Determine the load and then make the proper DB config settings. You might find you need to optimize the SQL queries instead!
Then make sure the PHP DB driver settings are aligned as well, so the database connections are fully utilized; see the example below.
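For instance, if you stay on persistent mysqli connections, the driver has its own limits in php.ini that should roughly match what the database allows (the values here are only illustrative):
mysqli.allow_persistent = On
mysqli.max_persistent = -1    ; -1 means no per-process limit on persistent links
mysqli.max_links = -1         ; total links (persistent + non-persistent) per process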
Here is a link to the MariaDB threadpool documentation. I know it says version 5.5, but it's still relevant and the page does reference version 10. There are settings listed there that may not be in your .cnf file which you can use.
https://mariadb.com/kb/en/mariadb/threadpool-in-55/
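As a starting point, the relevant my.cnf settings look roughly like this (the values are only illustrative; see the page above for details):
[mysqld]
thread_handling = pool-of-threads
thread_pool_size = 8            # typically around the number of CPU cores
thread_pool_max_threads = 500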
Off the top of my head, I can think of max_connections as a possible source of the problem. I'd increase the limit, if only to eliminate the possibility.
Hope it helps.
We are facing a performance issue with our web server. We are using an Apache server (2.4.4) with PHP 5.4.14 (it's a UniServer package) and a PostgreSQL 9.2 database. It's on a Windows system (can be XP, 7 or Server…).
The problem is that responses from the web server are too slow; we have done some profiling and found that the database connection takes around 20 ms (milliseconds).
We are using PDO like this:
$this->mConnexion = new \PDO("pgsql:host=127.0.0.1;dbname=", $pUsername, $pPassword, array(\PDO::ATTR_PERSISTENT => false));
We have made some time profiling like this:
echo "Connecting to db <br>";$time_start = microtime();
$this->mConnexion = new \PDO(…
$time_end = microtime();$time = $time_end - $time_start;
echo "Connecting to db done in $time sec<br>";
We did a test with ATTR_PERSISTENT set to true and the connection was much faster: the code reports a connection time of 2e-5 seconds (whereas it is 0.020 s with persistent set to false).
Is 20 ms a normal value (and do we have to move to persistent connections)?
We have also done a test with MySQL; the connection time for a non-persistent connection is around 2 ms.
We have these options set in the PostgreSQL configuration file:
listen_addresses = '*'
port = 5432
max_connections = 100
SSL = off
shared_buffers = 32MB
EDIT
We do not use persistent connections (yet) because there are some drawbacks: if a script fails, the connection can be left in a bad state (so we will have to handle those cases, and that is what we will end up doing…). I would like more points of view on this database connection time before switching directly to persistent connections.
To answer Daniel Vérité's question: SSL is off (I had already checked this option during my earlier research on the subject).
@Daniel: I tested on an Intel Core 2 Extreme CPU X9100 @ 3.06GHz with 4GB RAM.
Try using a Unix domain socket by leaving the host empty. It's a little bit faster.
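With PDO that simply means dropping host= from the DSN so libpq falls back to the local socket, roughly like this (an untested sketch with a placeholder dbname):
$this->mConnexion = new \PDO("pgsql:dbname=mydb", $pUsername, $pPassword, array(\PDO::ATTR_PERSISTENT => false));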
I'm having an issue with memcached. I'm not sure if it's memcached, PHP, or TCP sockets, but every time I run a benchmark with a concurrency of 50 or more against a page that uses memcached, some of those requests fail under Apache ab. I get the (99) Cannot assign requested address error.
When I do a concurrency test of 5000 against a regular phpinfo() page, everything is fine; no failed requests.
It seems like memcached cannot handle high concurrency, or am I missing something? I'm running memcached with the -c 5000 flag.
Server: (2) Quad Core Xeon 2.5Ghz, 64GB ram, 4TB Raid 10, 64bit OpenSUSE 11.1
Ok, I've figured it out. Maybe this will help others who have the same problem.
It seems like the issue can be a combination of things.
Set server.max-worker in lighttpd.conf to a higher number.
Original: 16, now: 32.
Turn off keep-alive in lighttpd.conf; it was keeping connections open for too long.
server.max-keep-alive-requests = 0
Change ulimit -n open files to a higher number.
ulimit -n 65535
If you're on Linux, use:
server.event-handler = "linux-sysepoll"
server.network-backend = "linux-sendfile"
Increase max-fds
server.max-fds = 2048
Lower the TCP TIME_WAIT before recycling; this closes connections faster.
In /etc/sysctl.conf add:
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_fin_timeout = 3
Make sure you force it to reload with: /sbin/sysctl -p
After I've made the changes, my server is now running 30,000 concurrent connections and 1,000,000 simultaneous requests without any issue, failed requests, or write errors with apache ab.
Command used to benchmark: ab -n 1000000 -c 30000 http://localhost/test.php
My Apache can't even get close to this benchmark. Lighttpd makes me laugh at Apache now; Apache crawls at around 200 concurrency.
I'm using just a 4-byte integer as a page counter for testing purposes. Other PHP pages work fine even with 5,000 concurrent connections and 100,000 requests. This server has a lot of horsepower and RAM, so I know that's not the issue.
The page that seems to die has nothing but 5 lines of code to test the page counter using memcached. Making the connection gives me this error: (99) Cannot assign requested address.
This problem starts to arise at 50 concurrent connections.
I'm running memcached with -c 5000 for 5000 concurrency.
Everything is on one machine (localhost)
The only processes running are SSH, Lighttpd, PHP, and Memcached
There are no users connected to this box (test machine)
The Linux nofile limit is set to 32000
That's all I have for now; I'll post more information as I find more. It seems like there are a lot of people with this problem.
I just tested something similar with a file:
$mc = memcache_connect('localhost', 11211);
$visitors = memcache_get($mc, 'visitors') + 1;
memcache_set($mc, 'visitors', $visitors, 0, 30);
echo $visitors;
running on a tiny virtual machine with nginx, php-fastcgi, and memcached.
I ran ab -c 250 -t 60 http://testserver/memcache.php from my laptop in the same network without seeing any errors.
Where are you seeing the error? In your php error log?
This is what I used for Nginx/php-fpm, adding these lines to /etc/sysctl.conf on Rackspace dedicated servers with Memcached/Couchbase/Puppet:
# Memcached fix
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_fin_timeout = 3
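As with the earlier sysctl changes, these only take effect after reloading the settings:
sysctl -p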
I hope it helps.