postgresql pdo very slow connect - php

We are facing a performance issue with our web server. We are using an Apache server (2.4.4) with PHP 5.4.14 (it's a UniServer package) and a PostgreSQL 9.2 database. It's on a Windows system (can be XP, 7 or Server…).
The problem is that responses from the web server are too slow; we have done some profiling and found that the database connection takes around 20 ms (milliseconds).
We are using PDO like this:
$this->mConnexion = new \PDO("pgsql:host=127.0.0.1;dbname=", $pUsername, $pPassword, array(\PDO::ATTR_PERSISTENT => false));
We have measured the connection time like this:
echo "Connecting to db <br>";
$time_start = microtime(true);
$this->mConnexion = new \PDO(…
$time_end = microtime(true);
$time = $time_end - $time_start;
echo "Connecting to db done in $time sec<br>";
We made a test with ATTR_PERSISTENT set to true and the connection came up much faster: the code reports a connection time of 2e-5 seconds (whereas it is 0.020 s with persistent set to false).
Is 20 ms a normal value (and do we therefore have to move to persistent connections)?
We have also made a test with MySQL: the connection time for a non-persistent connection is around 2 ms.
We have these options set in the PostgreSQL configuration file:
listen_addresses = '*'
port = 5432
max_connections = 100
SSL = off
shared_buffers = 32MB
EDIT
We do not use persistent connections (yet) because there are some drawbacks: if a script fails, the connection can be left in a bad state, so we would have to manage these cases (and that is what we will have to do… a rough sketch of that is shown below). I would like more points of view on this database connection time before switching directly to persistent connections.
To answer Daniel Vérité's question: SSL is off (I had already checked that option during my earlier research on the subject).
@Daniel: I have tested on an Intel Core 2 Extreme CPU X9100 @ 3.06 GHz with 4 GB RAM.
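To illustrate the "bad state" concern above, here is a minimal sketch of a persistent connection with a cheap health check. It is not the poster's code; the dbname is a placeholder and the fallback policy is only one possible choice.
$dsn = "pgsql:host=127.0.0.1;dbname=mydb";   // placeholder dbname
$options = array(
    \PDO::ATTR_PERSISTENT => true,
    \PDO::ATTR_ERRMODE    => \PDO::ERRMODE_EXCEPTION,
);
try {
    $this->mConnexion = new \PDO($dsn, $pUsername, $pPassword, $options);
    $this->mConnexion->query('SELECT 1');    // fails if the recycled handle is broken
} catch (\PDOException $e) {
    // fall back to a fresh, non-persistent connection
    $this->mConnexion = new \PDO($dsn, $pUsername, $pPassword,
        array(\PDO::ATTR_PERSISTENT => false));
}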

Try using a Unix-domain socket by leaving the host empty. It's a little bit faster.
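With pdo_pgsql, leaving the host out of the DSN makes libpq use the default Unix-domain socket, for example (the dbname is a placeholder, and this only applies when PHP and PostgreSQL run on a Unix-like host):
$this->mConnexion = new \PDO("pgsql:dbname=mydb", $pUsername, $pPassword);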

Related

PHP mysql persistent connection not reused ( opens more than one connection per fpm-worker )

I'm facing a really weird behaviour while testing persistent connections from php to mysql. I have a small script that looks like this:
<?php
$db = new mysqli('p:db-host','user','pass','schema');
$res = $db->query('select * from users limit 1');
print_r($res->fetch_assoc());
My setup is:
OS: CentOS 7.3
PHP/7.1.18
php-fpm
nginx/1.10.2
MySQL-5.6.30
I tried to do some requests with ab:
$ ab -c 100 -n 500 http://mysite/my_test_script.php
PHP-FPM was configured to have 150 workers ready, and I saw what I was expecting: 150 established connections to MySQL, which stayed open after ab finished. I launched ab once again, and the behaviour was still the same: 150 connections, no new connections were opened. All fine. Then I created a script which did the exact same requests, same IP, same HTTP headers, but used curl to make the request, and BOOM, I had 300 connections on MySQL instead of 150. I launched the script again, and I still got 300 connections. Subsequent runs of the same script didn't increase the number of connections. Has anyone ever faced anything like this? Does anyone know what could make PHP open more connections than needed? Am I missing something obvious?
If it's not clear what I'm asking, please comment below and I will try to explain my problem better.
P.S. I tried this with PDO too, same behaviour.
EDIT: My tests were not accurate
After further testing I noticed that my first tests were not accurate. I was in a multi-tenant environment, and connections to different schemas were initialized when I launched ab. In my case the PHP documentation was a bit misleading; it says:
PHP checks if there's already an identical persistent connection (that remained open from earlier) - and if it exists, it uses it. If it does not exist, it creates the link. An 'identical' connection is a connection that was opened to the same host, with the same username and the same password (where applicable).
http://php.net/manual/en/features.persistent-connections.php
Maybe it's obvious to everyone else, I don't know; it was not for me. Passing the 4th parameter (the schema) to mysqli made PHP consider the connections not identical. Once I changed my code to something like this:
<?php
$db = new mysqli('p:db-host','user','pass');
$db->select_db('schema');
$res = $db->query('select * from users limit 1');
print_r($res->fetch_assoc());
The application started to behave as I expected: one connection per worker.
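For anyone reproducing this, one way to watch the server-side connection count between ab runs is to poll Threads_connected; a minimal sketch (host and credentials are placeholders):
<?php
$db = new mysqli('db-host', 'user', 'pass');
$res = $db->query("SHOW STATUS LIKE 'Threads_connected'");
print_r($res->fetch_assoc()); // e.g. Array ( [Variable_name] => Threads_connected [Value] => 150 )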

Debugging php imap_open function - possible networking error

BACKGROUND: I just migrated my project to a different server (a shared DigitalOcean one, the cheapest offer) running CentOS, and noticed that PHP's imap functions take much longer on the new server than on the old one (about 10x more).
The old server is a dedicated server, and the mail server I am performing imap actions on is also hosted on that old server.
IMPORTANT: I don't think this slowdown has anything to do with no longer connecting to the mail server from the same physical machine, given how severely the run time of every imap function throughout my project has increased, but anyone can prove me wrong, because unfortunately I don't know anything about networking :(
I've made some simple runtime tests on both servers by creating two PHP files and measuring the execution time of each: one using only PHP's imap functions (imap_open), and another one using the PEAR Net_IMAP package. Here are the two scripts:
SCRIPT 1:
$start=microtime(true);
$temp=$start;
$mbox = imap_open ("{mymailserver:143/novalidate-cert}INBOX", "email#address.com", "password");
telltime($start, "Connect and Login", $temp);
imap_reopen($mbox, "{mymailserver:143/novalidate-cert}INBOX.Sent");
telltime($start, "Changefolder", $temp);
telltime($start, "End script");
SCRIPT 2:
require_once 'Net/IMAP.php';
$imapServer = new Net_IMAP('mymailserver', 143);
telltime($start, "Connect", $temp);
$loggedIn = $imapServer->login('email#address.com' , 'password');
telltime($start, "Login", $temp);
$imapServer->selectMailbox("INBOX.Sent");
telltime($start, "Change folder", $temp);
telltime($start, "End script");
I ran these scripts in the following ways, with the following results:
SCRIPT 1 AS IS - old server
Connect and Login: 0.124350070953
Changefolder: 0.00585293769836
Whole time: 0.130313158035
SCRIPT 1 AS IS - new server
Connect and Login: 0.63277888298035
Changefolder: 0.15838479995728
Whole time: 0.79174709320068
SCRIPT 1 /novalidate-cert changed to /notls - old server
Connect and Login: 0.112071990967
Changefolder: 0.00407910346985
Whole time: 0.116246938705
SCRIPT 1 /novalidate-cert changed to /notls - new server
Connect and Login: 0.50686407089233
Changefolder: 0.17428183555603
Whole time: 0.68127012252808
SCRIPT 2 AS IS - new server
Connect: 0.42295503616333
Login: 0.4013729095459
Change folder: 0.057337045669556
End script: 0.88185501098633
The project also has a console-based debugging system, from which I've managed to gather the following information:
- an average search in the mailbox takes around 0.01-0.02 seconds on the old server, while the same search takes around 7-8x longer on the new server
- fetching a single e-mail from the mail server takes between 0.05 s and 0.1 s on the old server, while on the new server some e-mails (mainly those with text/HTML bodies and attached image files) take 4 seconds to fetch
Based on these results I presume I am facing a networking problem, but that is just a wild guess, as I have never debugged networking issues before; I have already taken my PHP scripts apart and I can't find any error in them.
I tried traceroute-ing and ping-ing the mail server from the new project environment, and I've got the following results:
traceroute to mymailserver (xxx.xxx.xxx.xx), 30 hops max, 60 byte packets
1 xxx.xxx.x.xxx 0.658 ms xxx.xxx.x.xxx 0.510 ms xxx.xxx.x.xxx 0.471 ms
2 xxx.xxx.xxx.xxx 0.434 ms xxx.xxx.xxx.xxx 0.333 ms xxx.xxx.xxx.xxx 0.247 ms
3 xxx.xxx.xxx.xx 0.984 ms 0.986 ms xxx.xxx.xxx.xxx 0.270 ms
4 xxx.xxx.xxx.xx 0.964 ms xxx.xxx.xx.xxx 1.414 ms 1.449 ms
5 xxx.xxx.xx.xxx 1.253 ms 1.211 ms xxx.xxx.xx.xxx 22.078 ms
6 xxx.xxx.xx.xxx 43.920 ms 41.971 ms 44.860 ms
7 xx.xx.xx.xxx 45.835 ms xxx.xxx.xx.xxx 42.055 ms 41.254 ms
8 * xxx.xxx.xxx.xxx 42.999 ms *
9 xxx.xxx.xxx.xx 41.989 ms 42.235 ms 44.925 ms
Yes, traceroute sometimes reports lost packets, but not always; unfortunately the rest of this information is gibberish to me, because I don't know what to look for, and I didn't find any usable traceroute tutorial on the internet.
OTHER INFORMATION BASED ON HELP FROM OTHER STACK OVERFLOW USERS:
I've also downloaded Xdebug and tried function tracing the two scripts mentioned above. While I didn't get any useful information from tracing the imap_open function, I noticed that when I trace the PEAR package, the fgets() calls performed somewhere in the class take considerably more time than any other function being run (around 0.05 seconds each).
My question(s):
- Am I right in assuming from this information that this has to be a networking problem?
- If yes, how could I solve it, or what are the giveaways that this is indeed a networking problem?
- If no, how could I isolate the problem better, and obtain a solution to it?
I am offering:
- a thank you to anyone who helps me determine whether this is a networking problem or something else
- a bounty to anyone who manages to help me solve this issue and speed up the processing of the imap functions
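One way to separate raw network latency from IMAP/PHP overhead would be to time the TCP connect and the server greeting by hand from both servers; a rough sketch (the hostname is a placeholder):
$start = microtime(true);
$fp = fsockopen('mymailserver', 143, $errno, $errstr, 10); // plain TCP connect to the IMAP port
echo "TCP connect: " . (microtime(true) - $start) . " s\n";
echo "Greeting: " . fgets($fp);                            // IMAP banner, e.g. "* OK ..."
echo "Total: " . (microtime(true) - $start) . " s\n";
fclose($fp);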

PHP / MYSQL connection failures under heavy load through mysql.sock

I've done quite a bit of reading before asking this, so let me preface by saying I am not running out of connections, or memory, or cpu, and from what I can tell, I am not running out of file descriptors either.
Here's what PHP throws at me when MySQL is under heavy load:
Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (11 "Resource temporarily unavailable")
This happens randomly under load, but the more I push, the more frequently PHP throws this at me. While it is happening, I can always connect locally through the console, and from PHP through 127.0.0.1 instead of "localhost" (which uses the faster Unix socket).
Here's a few system variables to weed out the usual problems:
cat /proc/sys/fs/file-max = 4895952
lsof | wc -l = 215778 (during "outages")
Highest usage of available connections: 26% (261/1000)
InnoDB buffer pool / data size: 10.0G/3.7G (plenty o room)
soft nofile 999999
hard nofile 999999
I am actually running MariaDB (Server version: 10.0.17-MariaDB MariaDB Server)
These results were generated both under normal load and by running mysqlslap during off hours, so slow queries are not an issue - just high connection counts.
Any advice? I can report additional settings/data if necessary - mysqltuner.pl says everything is a-ok
And again, the revealing thing here is that connecting via IP works just fine and is fast during these outages - I just can't figure out why.
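Based on that observation, a possible stop-gap while investigating would be to retry over TCP when the socket connect fails; a rough sketch (credentials are placeholders, and this works around the symptom rather than fixing the cause):
$db = @new mysqli('localhost', 'user', 'pass', 'schema'); // 'localhost' goes through mysql.sock
if ($db->connect_errno) {
    // the unix socket is refusing connections; retry over TCP, which stays responsive
    $db = new mysqli('127.0.0.1', 'user', 'pass', 'schema');
}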
Edit: here is my my.ini (some values may seem a bit high from my recent troubleshooting changes, and please keep in mind that there are no errors in the MySQL logs, system logs, or dmesg)
socket=/var/lib/mysql/mysql.sock
skip-external-locking
skip-name-resolve
table_open_cache=8092
thread_cache_size=16
back_log=3000
max_connect_errors=10000
interactive_timeout=3600
wait_timeout=600
max_connections=1000
max_allowed_packet=16M
tmp_table_size=64M
max_heap_table_size=64M
sort_buffer_size=1M
read_buffer_size=1M
read_rnd_buffer_size=8M
join_buffer_size=1M
innodb_log_file_size=256M
innodb_log_buffer_size=8M
innodb_buffer_pool_size=10G
[mysql.server]
user=mysql
[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
open-files-limit=65535
Most likely it is due to net.core.somaxconn.
What is the value of /proc/sys/net/core/somaxconn?
net.core.somaxconn
# The maximum number of "backlogged sockets". Default is 128.
These are connections in the queue that have not yet been accepted; anything beyond that queue length will be rejected. I suspect this is what's happening in your case. Try increasing it according to your load.
As the root user, run:
echo 1024 > /proc/sys/net/core/somaxconn
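To make the change survive a reboot, the same value can be put in a sysctl configuration file (the file name here is arbitrary) and reloaded:
# /etc/sysctl.d/90-somaxconn.conf
net.core.somaxconn = 1024
then run sysctl --system (or sysctl -p /etc/sysctl.d/90-somaxconn.conf) to apply it.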
This is something that can and should be solved by analysis. Learning how to do this is a great skill to have.
Analysis of just what is happening under heavy load - number of queries, execution times - should be your first step. Determine the load and then make the proper DB config settings. You might find you need to optimize the SQL queries instead!
Then make sure the PHP DB driver settings are in alignment as well, to fully utilize the database connections.
Here is a link to the MariaDB thread pool documentation. I know it says version 5.5, but it's still relevant, and the page does reference version 10. There are settings listed there that may not be in your .cnf file that you can use.
https://mariadb.com/kb/en/mariadb/threadpool-in-55/
Off the top of my head, I can think of max_connections as a possible source of the problem. I'd increase the limit, if only to eliminate the possibility.
Hope it helps.

PHP memcache not closing connections?

I am using PHP-memcache on various web servers to connect to memcache servers.
I connect like this:
$memcache = new Memcache;
$memcache->addServer('memcache_host', 11211);
$memcache->addServer('memcache_host2', 11211);
Then I fetch or set the data using get and set.
It works fine in most cases, but if something slows down I see a sudden increase in memcache connections, which creates issues.
I think this is because addServer creates persistent connections by default and may not be closing them quickly after serving the request.
A similar issue has been reported here also.
So please let me know: is this only because of the default behaviour of the addServer function? Should I use non-persistent connections by passing false as the third argument to addServer?
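For reference, the persistent flag is the third parameter of Memcache::addServer, so forcing non-persistent connections would look roughly like this:
$memcache = new Memcache;
$memcache->addServer('memcache_host', 11211, false);  // false = do not use a persistent connection
$memcache->addServer('memcache_host2', 11211, false);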
Because open memcached connections might be kept by the kernel in keepalive mode if not explicitly closed by the client, lowering the following parameters might help, but it will affect every other connection as well, such as SSH. So setting tcp_keepalive_time too low is not a good idea.
Create the following file :
vim /etc/sysctl.d/low-tcp-timeout.conf
# Keep connections in keepalive for 600 seconds. Default 7200s = 2h.
net.ipv4.tcp_keepalive_time = 600
# 0 probes. Default 9
net.ipv4.tcp_keepalive_probes = 0
# Default 75 seconds between each probe
net.ipv4.tcp_keepalive_intvl = 75
and run sysctl -p to apply these values.
You can also have a look at net.ipv4.tcp_fin_timeout

PHP Mysql vs Mysqli in windows

I am using PHP 5.3.10 on Windows (Windows 7 64-bit) with Apache and mod_php.
I'm in the process of deciding which library I should use, so I am testing both MySQL and MySQLi.
I created two pages for the test:
$mysqli = new mysqli("127.0.0.1", "root2", "secretsquirrel", "test");
for ($i=0;$i<100;$i++) {
$result = $mysqli->query("SELECT * from test");
// var_dump($result);
echo "ok $i";
$result->close();
}
And
$dbh=mysql_connect("127.0.0.1","root2","secretsquirrel",true);
mysql_select_db("test",$dbh);
for ($i=0;$i<100;$i++) {
$result=@mysql_query("SELECT * from test",$dbh);
echo "ok";
mysql_free_result($result);
}
In both tests, I can connect without any problem and can fetch information.
However, if I do a concurrency test (5 concurrent users), MySQLi is painfully slow.
And worse, if I do a concurrency test with 10 concurrent users, MySQLi crashes Apache:
Faulting application name: httpd.exe, version: 2.2.22.0, time stamp: 0x4f4a84ad
Faulting module name: php5ts.dll, version: 5.3.10.0, time stamp: 0x4f2ae5d1
Exception code: 0xc0000005
Fault offset: 0x0000c7d7
Faulting process id: 0x1250
Faulting application start time: 0x01cd037de1e2092d
Faulting application path: C:\apache2\bin\httpd.exe
Faulting module path: C:\php53\php5ts.dll
Report Id: 1fb70b72-6f71-11e1-a64d-005056c00008
With MySQL, everything works perfectly, even with 1000 concurrent users.
Question: am I doing something wrong?
Here is my php.ini configuration:
[MySQLi]
mysqli.max_persistent = -1
;mysqli.allow_local_infile = On
mysqli.allow_persistent = On
mysqli.max_links = -1
mysqli.cache_size = 2000
mysqli.default_port = 3306
mysqli.default_socket =
mysqli.default_host =
mysqli.default_user =
mysqli.default_pw =
mysqli.reconnect = Off
PS: As expected, PDO is even worse.
The code (maybe the test is wrong?):
$dbo = new PDO("mysql:host=127.0.0.1;dbname=test", "root2", "secretsquirrel" );
for ($i=0;$i<100;$i++) {
$dbo->query("SELECT * from test");
echo "ok $i";
}
The result is worse than MySQLi.
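For what it's worth, the PDO loop above never frees its result sets the way the other two tests do ($result->close() / mysql_free_result()); a closer equivalent might be this sketch:
$dbo = new PDO("mysql:host=127.0.0.1;dbname=test", "root2", "secretsquirrel");
for ($i = 0; $i < 100; $i++) {
    $stmt = $dbo->query("SELECT * from test");
    echo "ok $i";
    $stmt->closeCursor();  // free the result set, like $result->close() in the mysqli test
}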
Update:
I did the same test on Linux (Red Hat), and MySQLi/PDO are more stable
(1000 concurrent calls; with fewer there is no noticeable difference).
Module Sum(ms) Min(ms) Max(ms)
MySQLi 66986 265 1762
MySQL 64521 234 1388
PDO 75426 249 1809
(lower is better).
Well, apparently there is no definitive answer. Under Windows, MySQLi and PDO are a big no (except for development). On Linux the three are roughly the same; however, for a busy server (many concurrent users), MySQL is the best, MySQLi is close (3% slower) and PDO is a big no (+10% slower).
That said, it is not a rigorous test, so mileage may vary; still, the results are consistent with the popular belief: MySQL > MySQLi > PDO.
The single feature in mysqli that should make all the difference is parametrized queries. The mysql interface doesn't have them, which means you will be interpolating values into queries yourself, and this in turn means you have a big potential for SQL injection vulnerabilities - it turns out that securing your query concatenation isn't as trivial as it sounds.
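For example, a parametrized query with mysqli keeps user input out of the SQL string entirely; a short sketch (table and column names are illustrative):
$stmt = $mysqli->prepare("SELECT id, name FROM users WHERE id = ?");
$stmt->bind_param("i", $id);          // "i" = bind $id as an integer
$stmt->execute();
$stmt->bind_result($rowId, $rowName); // bind the output columns
while ($stmt->fetch()) {
    echo "$rowId $rowName\n";
}
$stmt->close();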
BTW, have you considered PDO? It offers the same features mysqli does, but it can connect to any of the supported (and configured) databases, so if at any point you decide to migrate to, say, PostgreSQL, SQLite or SQL Server, you have only the SQL dialect differences to worry about, instead of porting everything to a different API.
Apache may not support everything; I've faced many problems with password encryption and decryption under Apache, but found a solution by hosting on IIS.
Try running your application on IIS: http://learn.iis.net/page.aspx/246/using-fastcgi-to-host-php-applications-on-iis/ will help you host PHP on IIS 7, and http://www.websitehosting.com/apache-vs-iis-web-server/ will clarify the difference between Apache and IIS with respect to PHP.
