BACKGROUND: I just migrated my project to a different server (a shared DigitalOcean droplet, the cheapest offer) running CentOS, and noticed that PHP's IMAP functions take roughly 10x longer on the new server than on the old one.
The old server is a dedicated server that also hosts the mail server I am performing the IMAP actions against.
IMPORTANT: I don't think this slowdown is simply due to no longer connecting to the mail server from the same physical machine, because of how severely the runtime of every IMAP function throughout my project has increased. Anyone is welcome to prove me wrong, though, because unfortunately I know nothing about networking :(
I ran some simple runtime tests on both servers by creating two PHP files and measuring the execution time of each: one using only PHP's IMAP functions (imap_open), and another using the PEAR Net_IMAP package. Here are the two scripts:
SCRIPT 1:
$start = microtime(true);
$temp = $start;
$mbox = imap_open("{mymailserver:143/novalidate-cert}INBOX", "email@address.com", "password");
telltime($start, "Connect and Login", $temp);
$mbox = imap_reopen($mbox, "{mymailserver:143/novalidate-cert}INBOX.Sent");
telltime($start, "Changefolder", $temp);
telltime($start, "End script");
SCRIPT 2:
require_once 'Net/IMAP.php';
$start = microtime(true);
$temp = $start;
$imapServer = new Net_IMAP('mymailserver', 143);
telltime($start, "Connect", $temp);
$loggedIn = $imapServer->login('email@address.com', 'password');
telltime($start, "Login", $temp);
$imapServer->selectMailbox("INBOX.Sent");
telltime($start, "Change folder", $temp);
telltime($start, "End script");
I ran these scripts in the following configurations, with the following results:
SCRIPT 1 AS IS - old server
Connect and Login: 0.124350070953
Changefolder: 0.00585293769836
Whole time: 0.130313158035
SCRIPT 1 AS IS - new server
Connect and Login: 0.63277888298035
Changefolder: 0.15838479995728
Whole time: 0.79174709320068
SCRIPT 1 /novalidate-cert changed to /notls - old server
Connect and Login: 0.112071990967
Changefolder: 0.00407910346985
Whole time: 0.116246938705
SCRIPT 1 /novalidate-cert changed to /notls - new server
Connect and Login: 0.50686407089233
Changefolder: 0.17428183555603
Whole time: 0.68127012252808
SCRIPT 2 AS IS - new server
Connect: 0.42295503616333
Login: 0.4013729095459
Change folder: 0.057337045669556
End script: 0.88185501098633
The project also has a console-based debugging system, from which I've gathered the following information:
- an average search of the mailbox takes around 0.01-0.02 seconds on the old server, while on the new server the same search takes around 7-8x longer
- fetching a single e-mail from the mail server takes 0.05s-0.1s on the old server, while on the new server there are e-mails (mainly those with text/HTML parts and attached image files) that take 4 seconds to fetch
Based on these results, I presume I am facing a networking problem, but that is just a wild guess, as I have never debugged networking issues before. I have already taken my PHP scripts apart and can't find any error in them.
I tried traceroute-ing and pinging the mail server from the new project environment, with the following results:
traceroute to mymailserver (xxx.xxx.xxx.xx), 30 hops max, 60 byte packets
1 xxx.xxx.x.xxx 0.658 ms xxx.xxx.x.xxx 0.510 ms xxx.xxx.x.xxx 0.471 ms
2 xxx.xxx.xxx.xxx 0.434 ms xxx.xxx.xxx.xxx 0.333 ms xxx.xxx.xxx.xxx 0.247 ms
3 xxx.xxx.xxx.xx 0.984 ms 0.986 ms xxx.xxx.xxx.xxx 0.270 ms
4 xxx.xxx.xxx.xx 0.964 ms xxx.xxx.xx.xxx 1.414 ms 1.449 ms
5 xxx.xxx.xx.xxx 1.253 ms 1.211 ms xxx.xxx.xx.xxx 22.078 ms
6 xxx.xxx.xx.xxx 43.920 ms 41.971 ms 44.860 ms
7 xx.xx.xx.xxx 45.835 ms xxx.xxx.xx.xxx 42.055 ms 41.254 ms
8 * xxx.xxx.xxx.xxx 42.999 ms *
9 xxx.xxx.xxx.xx 41.989 ms 42.235 ms 44.925 ms
Yes, traceroute sometimes reports lost packets, but not always. The rest of this output is unfortunately gibberish to me, because I don't understand what I should be looking for, and I haven't found any usable traceroute tutorial on the internet.
**OTHER INFORMATION BASED ON THE HELP FROM OTHER STACK OVERFLOW USERS:**
I've also installed Xdebug and tried function tracing the two scripts above. While I didn't get any useful information from tracing the imap_open function, I noticed when tracing the PEAR package that the fgets() calls performed somewhere in the class take considerably more time than any other function being run (around 0.05 seconds each).
My question(s):
- Am I right in assuming from this information that this has to be a networking problem?
- If yes, how could I solve it, and what are the giveaways that this is indeed a networking problem?
- If no, how could I isolate the problem better and find a solution?
I am offering:
- a thank you to anyone who helps me determine whether this is a networking problem or something else
- a bounty if someone manages to help me solve this issue and speed up the IMAP functions
Related
I've got a PHP socket server that sits around waiting for connections, then talks to a database to resolve them. It works fine when I'm testing, but then when I leave it sitting around, the next morning it no longer talks to the database.
When I look at my logs, I see this:
200327 11:54:37 24 Connect dbuser@localhost as anonymous on dbname
24 Quit
Where I'm expecting to see something more like this:
200327 11:54:20 23 Connect dbuser@localhost as anonymous on dbname
23 Query SELECT * FROM table1 WHERE num=4
23 Query SELECT * FROM table2 WHERE num='4' AND info='deleted'
23 Query SELECT * FROM table3 WHERE num='4'
23 Quit
But for some reason, after the server has been running for a while, the queries never go through after that initial connection.
The only thing I can think of is that maybe my PDO object is somehow timing out, as I create it once when I fire up the server.
$dbh = new PDO($dbName,$dbUser,$dbPass);
Any thoughts on what might be going on, and if creating the PDO object at the start of the process isn't correct, how to better manage that resource?
PHP is 7.0.33; the database is 10.1.44-MariaDB-0+deb9u1 on Debian 9.11.
The MySQL server will close an inactive connection after wait_timeout seconds. The variable is described here:
https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html#sysvar_wait_timeout
I would suggest opening a new connection on the client side (PHP) every time a request comes in on the socket, and closing it after the job is done, because increasing wait_timeout could lead to too many hanging connections (unless the PHP server never goes down and always reuses the same DB connection).
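A minimal sketch of that per-request pattern (assuming a blocking socket loop and PDO; $serverSocket, $dsn, the credentials, and handleRequest() are placeholders, not names from the question):

```php
<?php
// Open a fresh PDO connection per incoming socket request and close it
// when done, so MySQL's wait_timeout on idle connections never applies.
while ($client = socket_accept($serverSocket)) {
    $dbh = new PDO($dsn, $dbUser, $dbPass, [
        PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
    ]);
    try {
        handleRequest($client, $dbh);
    } finally {
        $dbh = null;            // release the DB connection
        socket_close($client);
    }
}
```

The per-request connect costs a few milliseconds on localhost, which is usually a fair trade against connections silently dying overnight.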
I'm facing a really weird behaviour while testing persistent connections from php to mysql. I have a small script that looks like this:
<?php
$db = new mysqli('p:db-host','user','pass','schema');
$res = $db->query('select * from users limit 1');
print_r($res->fetch_assoc());
My setup is :
OS: CentOS 7.3
PHP/7.1.18
php-fpm
nginx/1.10.2
MySQL-5.6.30
I tried to do some requests with ab:
$ ab -c 100 -n 500 http://mysite/my_test_script.php
PHP-FPM was configured to have 150 workers ready, and I saw what I was expecting: 150 established connections to MySQL, which stayed open after ab finished. I launched ab once again and the behaviour was still the same: 150 connections, no new connections were opened. All fine. Then I created a script which made the exact same requests, same IP, same HTTP headers, but used curl to make the request, and BOOM, I had 300 connections on MySQL instead of 150. I launched the script again and still got 300 connections. Subsequent runs of the same script didn't increase the number of connections. Has anyone ever faced anything like this? Does anyone know what could make PHP open more connections than needed? Am I missing something obvious?
If it's not clear what I'm asking, please comment below and I will try to explain my problem better.
P.S. I tried this with PDO too, same behaviour.
EDIT: My tests were not accurate
After further testing I noticed that my first tests were not accurate. I was in a multi-tenant environment, and connections to different schemas were initialized when I launched ab. In my case the PHP documentation was a bit misleading; it says:
PHP checks if there's already an identical persistent connection (that remained open from earlier) - and if it exists, it uses it. If it does not exist, it creates the link. An 'identical' connection is a connection that was opened to the same host, with the same username and the same password (where applicable).
http://php.net/manual/en/features.persistent-connections.php
Maybe it's obvious to everyone else, I don't know, but it was not to me: passing the 4th parameter to mysqli made PHP consider the connections not identical. Once I changed my code to something like this:
<?php
$db = new mysqli('p:db-host','user','pass');
$db->select_db('schema');
$res = $db->query('select * from users limit 1');
print_r($res->fetch_assoc());
The application started to behave as I expected: one connection per worker.
I have a report that generates an array of data from a MySQL server by looping in PHP code (Laravel framework). However, the maximum the server can handle is an array of 400 rows, each containing 61 child values.
[
    [1, ..., 61], // row 1
    .
    .
    .
    [1, ..., 61]  // row 400
]
Each value is calculated by running a loop that retrieves data from the MySQL server.
There is no load balancer.
I tried increasing max_execution_time = 600 (10 minutes), but it still shows the connection timeout problem. Any thoughts? Thanks.
Connection Timed Out
Description: Connection Timed Out
Server version: Apache/2.4.7 (Ubuntu) - PHP 5.6
Would need more info for a definitive answer...
What is the Apache/httpd version (there have been some bugs that relate to this)?
Is there a firewall or load balancer in the mix?
If you are sure it is still a timeout error and not, say, memory, then it is probably httpd's TimeOut directive. It defaults to 300 seconds.
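For example, in the Apache configuration (the value here is illustrative, not a recommendation):

```apacheconf
# In httpd.conf / apache2.conf: raise the request timeout
# from the 300-second default.
TimeOut 600
```

Remember to reload Apache after changing it.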
If you're still stuck, paste the exact error you are seeing.
My PHP version was 5.6. After upgrading to PHP 7, my application's speed increased significantly. Everything works fine now.
We are facing a performance issue with our web server. We are using an Apache server (2.4.4) with PHP 5.4.14 (it's a UniServer package) and a PostgreSQL 9.2 database, on Windows (can be XP, 7 or Server...).
The problem is that responses from the web server are too slow. We did some profiling and found that the database connection takes around 20 ms (milliseconds).
We are using PDO like this:
$this->mConnexion = new \PDO("pgsql:host=127.0.0.1;dbname=", $pUsername, $pPassword, array(\PDO::ATTR_PERSISTENT => false));
We profiled the time like this (note microtime(true), so that a float is returned and the subtraction is meaningful):
echo "Connecting to db <br>";
$time_start = microtime(true);
$this->mConnexion = new \PDO(…
$time_end = microtime(true);
$time = $time_end - $time_start;
echo "Connecting to db done in $time sec<br>";
We made a test with ATTR_PERSISTENT set to true and the connection was much faster: the code reports a connection time of 2e-5 seconds (whereas it's 0.020 s with persistent set to false).
Is 20 ms a normal value (meaning we have to move to persistent connections)?
We also made a test with MySQL; there, the connection time for a non-persistent connection is around 2 ms.
We have these options set in postgresql configuration file :
listen_addresses = '*'
port = 5432
max_connections = 100
ssl = off
shared_buffers = 32MB
EDIT
We do not use persistent connections (yet) because they have some drawbacks: if a script fails, the connection can be left in a bad state (so we will have to handle those cases, and that's what we will have to do...). I would like more points of view on this database connection time before switching straight to persistent connections.
To answer Daniel Vérité's question: SSL is off (I had already checked this option during my earlier research on the subject).
@Daniel: I tested on an Intel Core 2 Extreme CPU X9100 @ 3.06GHz with 4GB RAM.
Try using a Unix domain socket by leaving the host empty. It's a little bit faster.
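For example, with PDO the pgsql driver connects over the local Unix domain socket when no host= is given in the DSN, skipping the TCP handshake (database name and credentials below are placeholders):

```php
<?php
// No host= in the DSN: libpq uses the local Unix domain socket
// instead of TCP to 127.0.0.1.
$pdo = new \PDO('pgsql:dbname=mydb', $pUsername, $pPassword);
```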
I've been using the same DB abstraction library for years. But today it started writing these Notice (8) messages in my log.
The application is working correctly but every time a script connects to the DB the same notice is logged.
I cannot think what might have changed. This is happening on my local dev machine.
OS X 10.6.2
PHP 5.3.0 (cli)
mysql Ver 14.12 Distrib 5.0.87
mysqlnd 5.0.5-dev - 081106 - $Revision: 1.3.2.27 $
If someone is struggling with this issue, here is the fix:
Try changing/setting wait_timeout in your MySQL my.cnf config file:
wait_timeout=3600
This config file is located at /etc/mysql/my.cnf (Ubuntu/Debian) or /usr/local/mysql/my.cnf (OS X).
Restart the MySQL server and it should work.
The only solution I've found so far is changing
// From
PDO::ATTR_PERSISTENT => true
// To
PDO::ATTR_PERSISTENT => false
Not so happy with this, but works in the meantime.
I'm using a very old PC for this personal project, so I'm guessing the problem could be a lack of resources.
I am using PHP 5.6.20, PDO (throwing exceptions only), and MySQL 5.6.28 with persistent connections and EVERYTHING is utf8mb4. My entire stack is set up for utf-8 (dsn string settings, connections, database server databases, tables, columns, Apache 2.4.12, PHP, all webpages, CSS ... you name it).
I get the following error message intermittently and it is mystifying and annoying.
Notice: PDO::__construct(): send of 5 bytes failed with errno=32 Broken pipe in file /foo/bar/baz
Assuming a persistent connection is a noninteractive one, the MySQL 5.6 manual (5.1.4 Server System Variables) says the following about the server system variable wait_timeout:
The number of seconds the server waits for activity on a
noninteractive connection before closing it.
Default: 28800 sec
28800 sec × (1 hour / 3600 sec) = 8 hours
Max: 31536000 sec
31536000 sec × (1 hr / 3600 sec) × (1 day / 24 hrs) = 365 days
Therefore, check wait_timeout in your my.cnf and decide if persistent connections are what you need. Also, you'll have to invest in making your application more robust to account for a persistent connection that has been torn down. Clearly, you do not want your client to come back the next day (having gone home for the night) and say "What the heck?!"
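One defensive pattern is to catch the construction failure and retry once with a fresh handle (a sketch only; $dsn and the credentials are placeholders, and whether the retry transparently replaces the dead cached connection depends on the driver):

```php
<?php
// Retry once when the cached persistent connection turns out to have
// been torn down server-side (e.g. "errno=32 Broken pipe").
function connectDb(string $dsn, string $user, string $pass): PDO
{
    return new PDO($dsn, $user, $pass, [
        PDO::ATTR_PERSISTENT => true,
        PDO::ATTR_ERRMODE    => PDO::ERRMODE_EXCEPTION,
    ]);
}

try {
    $dbh = connectDb($dsn, $user, $pass);
} catch (PDOException $e) {
    // The stale connection failed; a second attempt gets a fresh one.
    $dbh = connectDb($dsn, $user, $pass);
}
```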
It may be because your data contains UTF-8 characters; I had a similar issue caused by that.
Exception: mysql_query(): send of 1462592 bytes failed with errno=32 Broken pipe
I used
mysql -u username -p database < dump_file # this is bad
to import an SQL file containing a lot of UTF-8 characters (Thai language), but I hadn't set default-character-set=utf8 under [mysql]. The incorrectly encoded data that ended up in the database caused the issue.
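Re-importing with the client character set set explicitly avoids the mis-encoding (names below are the same placeholders as above):

```shell
# Force the mysql client to talk UTF-8 so the dump's Thai text
# arrives in the database intact.
mysql --default-character-set=utf8 -u username -p database < dump_file
```

Setting default-character-set=utf8 under [mysql] in my.cnf achieves the same thing permanently.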
Just remove the mysqlnd driver and use mysqli.
Yes, mysqlnd is more modern, but what about stability?
The following commands fixed the problem for me:
apt-get remove php5-mysqlnd
apt-get install php5-pdo-mysql