I've got a PHP socket server that sits around waiting for connections, then talks to a database to resolve them. It works fine while I'm testing, but when I leave it running overnight, by the next morning it can no longer talk to the database.
When I look at my logs, I see this:
200327 11:54:37 24 Connect dbuser@localhost as anonymous on dbname
24 Quit
Where I'm expecting to see something more like this:
200327 11:54:20 23 Connect dbuser@localhost as anonymous on dbname
23 Query SELECT * FROM table1 WHERE num=4
23 Query SELECT * FROM table2 WHERE num='4' AND info='deleted'
23 Query SELECT * FROM table3 WHERE num='4'
23 Quit
But for some reason, after the server has been running for a while, the queries never go through after that initial connection.
The only thing I can think of is that maybe my PDO object is somehow timing out, as I create it once when I fire up the server.
$dbh = new PDO($dbName,$dbUser,$dbPass);
Any thoughts on what might be going on, and if creating the PDO object at the start of the process isn't correct, how to better manage that resource?
PHP is PHP 7.0.33, MySQL is 10.1.44-MariaDB-0+deb9u1 Debian 9.11.
The MySQL server closes inactive connections after "wait_timeout" seconds. The variable is described here:
https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html#sysvar_wait_timeout
I would suggest opening a new connection on the client side (PHP) each time a request comes in on the socket, and closing it once the job is done, because increasing wait_timeout could lead to too many hanging connections (unless the PHP server never goes down and always reuses the same DB connection).
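A minimal sketch of that per-request pattern (the DSN, credentials, and query below are placeholders, not the asker's actual code):

```php
<?php
// Hypothetical per-request handler: open a fresh PDO connection for each
// request that arrives on the socket, run the queries, then release it.
function handleRequest(string $num): array
{
    // Placeholder DSN and credentials -- substitute your own.
    $dbh = new PDO('mysql:host=localhost;dbname=dbname', 'dbuser', 'dbpass', [
        PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
    ]);

    $stmt = $dbh->prepare('SELECT * FROM table1 WHERE num = ?');
    $stmt->execute([$num]);
    $rows = $stmt->fetchAll(PDO::FETCH_ASSOC);

    $dbh = null; // drop the reference so the connection closes
    return $rows;
}
```

This way no connection ever sits idle long enough to hit wait_timeout, at the cost of one TCP/auth handshake per request.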
Related
I'm facing really weird behaviour while testing persistent connections from PHP to MySQL. I have a small script that looks like this:
<?php
$db = new mysqli('p:db-host','user','pass','schema');
$res = $db->query('select * from users limit 1');
print_r($res->fetch_assoc());
My setup is :
OS: CentOS 7.3
PHP/7.1.18
php-fpm
nginx/1.10.2
MySQL-5.6.30
I tried to do some requests with ab:
$ ab -c 100 -n 500 http://mysite/my_test_script.php
PHP-FPM was configured to have 150 workers ready, and I saw what I was expecting: 150 established connections to MySQL, which stayed open after ab finished. I launched ab once again, and the behaviour was still the same: 150 connections, no new connections were opened. All fine. Then I created a script which made the exact same requests, same IP, same HTTP headers, but used curl to make the request, and BOOM, I had 300 connections on MySQL instead of 150. I launched the script again, and I still had 300 connections. Subsequent runs of the same script didn't increase the number of connections. Has anyone ever faced anything like this? Does anyone know what could make PHP open more connections than needed? Am I missing something obvious?
If it's not clear what I'm asking, please comment below and I will try to explain my problem better.
P.S. I tried this with PDO too, same behaviour.
EDIT: My tests were not accurate
After further testing I noticed that my first tests were not accurate. I was in a multi-tenant environment, and different connections (to different schemas) were initialized when I launched ab. In my case the PHP documentation was a bit misleading; it says:
PHP checks if there's already an identical persistent connection (that remained open from earlier) - and if it exists, it uses it. If it does not exist, it creates the link. An 'identical' connection is a connection that was opened to the same host, with the same username and the same password (where applicable).
http://php.net/manual/en/features.persistent-connections.php
Maybe it's obvious to everyone else, I don't know; it was not obvious to me. Passing the 4th parameter to mysqli made PHP consider the connections not identical. Once I changed my code to something like this:
<?php
$db = new mysqli('p:db-host','user','pass');
$db->select_db('schema');
$res = $db->query('select * from users limit 1');
print_r($res->fetch_assoc());
The application started to behave as I expected: one connection per worker.
BACKGROUND: I just migrated my project to a different server (a shared DigitalOcean one, the cheapest offer) with CentOS, and noticed that PHP's imap functions take much longer on the new server than on the old one (around 10x longer).
The old server is a dedicated server, and a mail server is also hosted on the old server, the one which I am trying to perform imap actions on.
IMPORTANT: I don't think this slowdown has anything to do with no longer connecting from the same physical server to the mail server, because of the severe increase in the time it takes to run any imap function throughout my project, but anyone can prove me wrong, because unfortunately I don't know anything about networking :(
I made some simple runtime tests on both servers by creating two PHP files and measuring the execution time of each: one using only PHP's imap functions (imap_open), and another using the PEAR Net_IMAP package. Here are the two scripts:
SCRIPT 1:
$start=microtime(true);
$temp=$start;
$mbox = imap_open ("{mymailserver:143/novalidate-cert}INBOX", "email@address.com", "password");
telltime($start, "Connect and Login", $temp);
$mbox = imap_reopen ($mbox,"{mymailserver:143/novalidate-cert}INBOX.Sent");
telltime($start, "Changefolder", $temp);
telltime($start, "End script");
SCRIPT 2:
require_once 'Net/IMAP.php';
$imapServer = new Net_IMAP('mymailserver', 143);
telltime($start, "Connect", $temp);
$loggedIn = $imapServer->login('email@address.com', 'password');
telltime($start, "Login", $temp);
$imapServer->selectMailbox("INBOX.Sent");
telltime($start, "Change folder", $temp);
telltime($start, "End script");
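Both scripts call a telltime() helper that isn't shown in the question; a plausible reconstruction, inferred purely from how it is called above (so a hypothetical implementation, not the asker's actual code), would be:

```php
<?php
// Hypothetical reconstruction of the telltime() helper used above: prints
// the time elapsed since the previous checkpoint ($temp) when one is given,
// otherwise since the start of the script, then advances the checkpoint.
function telltime(float $start, string $label, float &$temp = null): void
{
    $now = microtime(true);
    $since = ($temp !== null) ? $now - $temp : $now - $start;
    printf("%s: %s\n", $label, $since);
    $temp = $now;
}

$start = microtime(true);
$temp = $start;
usleep(10000);                    // stand-in for some IMAP work
telltime($start, 'Step', $temp);  // time since the last checkpoint
telltime($start, 'End script');   // time since the beginning
```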
I ran these scripts in the following ways, with the following results:
SCRIPT 1 AS IS - old server
Connect and Login: 0.124350070953
Changefolder: 0.00585293769836
Whole time: 0.130313158035
SCRIPT 1 AS IS - new server
Connect and Login: 0.63277888298035
Changefolder: 0.15838479995728
Whole time: 0.79174709320068
SCRIPT 1 /novalidate-cert changed to /notls - old server
Connect and Login: 0.112071990967
Changefolder: 0.00407910346985
Whole time: 0.116246938705
SCRIPT 1 /novalidate-cert changed to /notls - new server
Connect and Login: 0.50686407089233
Changefolder: 0.17428183555603
Whole time: 0.68127012252808
SCRIPT 2 AS IS - new server
Connect: 0.42295503616333
Login: 0.4013729095459
Change folder: 0.057337045669556
End script: 0.88185501098633
The project also has a console based debugging system, from which I've managed to gather the following information:
- an average search of the mailbox takes around 0.01-0.02 seconds on the old server, while on the new server the same search takes around 7-8 times longer
- fetching a single e-mail from the mail server takes between 0.05 s and 0.1 s on the old server, while on the new server there are e-mails (mainly those with Text/HTML parts and attached image files) which take 4 seconds to fetch
Based on these results, I presume I am facing a networking problem, but that is just a wild guess, as I have never debugged networking errors before; I have already taken apart my PHP scripts and can't find any error in them.
I tried traceroute-ing and ping-ing the mail server from the new project environment, and I've got the following results:
traceroute to mymailserver (xxx.xxx.xxx.xx), 30 hops max, 60 byte packets
1 xxx.xxx.x.xxx 0.658 ms xxx.xxx.x.xxx 0.510 ms xxx.xxx.x.xxx 0.471 ms
2 xxx.xxx.xxx.xxx 0.434 ms xxx.xxx.xxx.xxx 0.333 ms xxx.xxx.xxx.xxx 0.247 ms
3 xxx.xxx.xxx.xx 0.984 ms 0.986 ms xxx.xxx.xxx.xxx 0.270 ms
4 xxx.xxx.xxx.xx 0.964 ms xxx.xxx.xx.xxx 1.414 ms 1.449 ms
5 xxx.xxx.xx.xxx 1.253 ms 1.211 ms xxx.xxx.xx.xxx 22.078 ms
6 xxx.xxx.xx.xxx 43.920 ms 41.971 ms 44.860 ms
7 xx.xx.xx.xxx 45.835 ms xxx.xxx.xx.xxx 42.055 ms 41.254 ms
8 * xxx.xxx.xxx.xxx 42.999 ms *
9 xxx.xxx.xxx.xx 41.989 ms 42.235 ms 44.925 ms
Yes, traceroute sometimes reports lost packets, but not always; the rest of this information is unfortunately only gibberish to me, because I don't know what to look for, and I didn't find any usable tutorial for traceroute on the internet.
**OTHER INFORMATION BASED ON THE HELP FROM OTHER STACKOVERFLOW USERS:**
I've also downloaded Xdebug and tried function-tracing the two scripts above. While I didn't get any useful information from tracing the imap_open function, I noticed that when I trace the PEAR package, the fgets() calls performed somewhere in the class take considerably more time than any other function being run (around 0.05 seconds).
My question(s):
- Am I right in assuming from this information that this has to be a networking problem?
- If yes, how could I solve it, and what are the giveaways that this is indeed a networking problem?
- If no, how could I isolate the problem better and find a solution?
I am offering a:
- a thank you, to someone who helps me isolate if this is a networking problem, or something else
- a bounty, if someone manages to help me solve this issue, and speed up the processing of imap functions
We are facing a performance issue with our web server. We are using an Apache server (2.4.4) with PHP 5.4.14 (it's a UniServer package) and a PostgreSQL 9.2 database, on Windows (can be XP, 7 or Server…).
The problem is that responses from the web server are too slow; we did some profiling and found that the database connection takes around 20 ms (milliseconds).
We are using PDO like this:
$this->mConnexion = new \PDO("pgsql:host=127.0.0.1;dbname=", $pUsername, $pPassword, array(\PDO::ATTR_PERSISTENT => false));
We have made some time profiling like this:
echo "Connecting to db <br>";
$time_start = microtime(true);
$this->mConnexion = new \PDO(…
$time_end = microtime(true);
$time = $time_end - $time_start;
echo "Connecting to db done in $time sec<br>";
We made a test with ATTR_PERSISTENT set to true and the connection became much faster: the code reports a connection time of 2e-5 s (whereas it is 0.020 s with ATTR_PERSISTENT set to false).
Is 20 ms a normal value (meaning we have to move to persistent connections)?
We also made a test with MySQL; the connection time for a non-persistent connection is around 2 ms.
We have these options set in postgresql configuration file :
listen_addresses = '*'
port = 5432
max_connections = 100
SSL = off
shared_buffers = 32MB
EDIT
We do not use persistent connections (yet) because there are some drawbacks: if the script fails, the connection can be left in a bad state (so we would have to manage those cases, and that is what we will have to do…). I would like more points of view on this database connection time before switching straight to persistent connections.
To answer Daniel Vérité's question: SSL is off (I had already checked this option during my earlier research on the subject).
@Daniel: I tested on an Intel Core 2 Extreme CPU X9100 @ 3.06GHz with 4GB RAM.
Try using a Unix-domain socket by leaving the host empty. It's a little bit faster.
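As a sketch (the database name and credentials are placeholders): leaving host= out of the DSN makes the PostgreSQL driver connect over the local Unix-domain socket instead of TCP:

```php
<?php
// Placeholder credentials. With no host= in the DSN, libpq connects through
// the local Unix-domain socket, skipping the TCP handshake. (Note: on the
// asker's Windows setup this only applies where such a socket is available.)
$dbh = new \PDO('pgsql:dbname=mydb', 'user', 'password', [
    \PDO::ATTR_PERSISTENT => false,
]);
```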
I'm using PHP and a PDO object to connect to mysql. I have 3 DB servers that my php code can connect to. If I try to connect to DB server #1 and the connection fails I would like to immediately try to connect to DB server #2. The lowest I can set the connection timeout time is 1 second with the code below.
$DBH = new PDO("mysql:host=$host;dbname=$dbname", $username, $password,array(PDO::ATTR_TIMEOUT => "1",PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION));
Ideally I'd like to set the timeout time to <50 milliseconds. Or 0ms if possible. Is there any way to do this?
This is not possible because the underlying MySQL driver won't allow it:
Request #60716: Ability to set PDO connection timeout in milliseconds
I think it is impossible to set a timeout in milliseconds. Refer to the PHP documentation of mysql.connect_timeout:
mysql.connect_timeout:
Connect timeout in seconds. On Linux this timeout is also used for waiting for the first answer from the server.
By the way, what you are about to do sounds a little hacky to me. If you have professional requirements, I would use a load balancer instead. You can follow this tutorial
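Within that one-second floor, a client-side failover loop is still possible; a sketch, with hostnames and credentials as placeholders:

```php
<?php
// Try each DB server in order. PDO::ATTR_TIMEOUT is capped at whole
// seconds by the MySQL driver, so 1 s is the fastest failover it allows.
// Hostnames and credentials below are placeholders.
$hosts = ['db1.example.com', 'db2.example.com', 'db3.example.com'];
$dbh = null;
foreach ($hosts as $host) {
    try {
        $dbh = new PDO("mysql:host=$host;dbname=mydb", 'user', 'password', [
            PDO::ATTR_TIMEOUT => 1,
            PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
        ]);
        break; // connected
    } catch (PDOException $e) {
        continue; // try the next host
    }
}
if ($dbh === null) {
    throw new RuntimeException('All database servers are unreachable');
}
```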
I am using MySQL 5.0 for a site that is hosted by GoDaddy (Linux).
I was doing some testing on my web app and suddenly noticed that the pages were refreshing really slowly. Finally, after a long wait, I got to a page that said something along the lines of "MySQL Error, Too many connections...", and it pointed to my config.php file, which connects to the database.
It has just been me connecting to the database, no other users. On each of my pages, I include the config.php file at the top, and close the mysql connection at the end of the page. There may be several queries in between. I fear that I am not closing mysql connections enough (mysql_close()).
However, when I try to close the connection after running a query, I get connection errors on the page. My pages are PHP and HTML. When I close it after one query, it seems the next one won't connect. Would I have to include config.php again after the close in order to reconnect?
This error scared me because in 2 weeks, about 84 people start using this web application.
Thanks.
EDIT:
Here is some pseudo-code of my page:
<?php
require_once('../scripts/config.php');
mysql_query..
if(this button is pressed){
mysql_query...
}
if(this button is pressed){
mysql_query...
}
if(this button is pressed){
mysql_query...
}
?>
some html..
..
..
..
..
<?php
another mysql_query...
?>
some more html..
..
..
<?php mysql_close(); ?>
I figured that this way, each time the page opens, the connection opens, and then the connection closes when the page is done loading. Then, the connection opens again when someone clicks a button on the page, and so on...
EDIT:
Okay, so I just got off the phone with GoDaddy. Apparently, with my Economy Package, I'm limited to 50 connections at a time. While my issue today happened with only me accessing the site, they said that they were having some server problems earlier. However, seeing as how I am going to have 84 users for my web app, I should probably upgrade to "Deluxe", which allows for 100 connections at a time. On a given day, there may be around 30 users accessing my site at a time, so I think the 100 would be a safer bet. Do you guys agree?
Shared-hosting providers generally allow a pretty small number of simultaneous connections for the same user.
What your code does is:
open a connection to the MySQL server
do its stuff (generating the page)
close the connection at the end of the page.
The last step, done at the end of the page, is not mandatory (quoting mysql_close's manual):
Using mysql_close() isn't usually necessary, as non-persistent open links are automatically closed at the end of the script's execution.
But note you probably shouldn't use persistent connections anyway...
Two tips :
use mysql_connect instead of mysql_pconnect (already OK for you)
set the fourth parameter of mysql_connect to false (already OK for you, as it's the default value); quoting the manual:
If a second call is made to mysql_connect() with the same arguments, no new link will be established, but instead, the link identifier of the already opened link will be returned. The new_link parameter modifies this behavior and makes mysql_connect() always open a new link, even if mysql_connect() was called before with the same parameters.
What could cause the problem, then ?
Maybe you are trying to access several pages in parallel (using multiple tabs in your browser, for instance), which will simulate several users using the website at the same time ?
If you have many users using the site at the same time and the code between mysql_connect and the closing of the connection takes lots of time, it will mean many connections being opened at the same time... And you'll reach the limit :-(
Still, as you are the only user of the application, considering you have up to 200 simultaneous connections allowed, there is something odd going on...
Well, thinking about "too many connections" and "max_connections"...
If I remember correctly, max_connections does not limit the number of connections you alone can open to the MySQL server, but the total number of connections that can be opened to that server by anyone connecting to it.
Quoting MySQL's documentation on Too many connections :
If you get a Too many connections error when you try to connect to the mysqld server, this means that all available connections are in use by other clients. The number of connections allowed is controlled by the max_connections system variable. Its default value is 100. If you need to support more connections, you should set a larger value for this variable.
So, actually, the problem might not come from you or your code (which looks fine, actually): it might "just" be that you are not the only one trying to connect to that MySQL server (remember, "shared hosting"), and that there are too many people using it at the same time...
...and if I'm right about that, there's nothing you can do to solve the problem: as long as there are too many databases/users on that server and max_connections is set to 200, you will keep suffering...
As a sidenote : before going back to GoDaddy asking them about that, it would be nice if someone could validate what I just said ^^
I had about 18 months of dealing with this (http://ianchanning.wordpress.com/2010/08/25/18-months-of-dealing-with-a-mysql-too-many-connections-error/)
The solutions I had (that would apply to you) in the end were:
tune the database according to MySQLTuner.
defragment the tables weekly based on this post
Defragmenting bash script from the post:
#!/bin/bash
# Get a list of all fragmented tables, as "schema.table" strings.
# (The SQL must be passed in double quotes, not backticks, and the
# tab separating mysql's columns is replaced by a dot.)
FRAGMENTED_TABLES="$( mysql -e "SELECT TABLE_SCHEMA, TABLE_NAME
  FROM information_schema.TABLES
  WHERE TABLE_SCHEMA NOT IN ('information_schema','mysql')
  AND Data_free > 0;" | grep -v '^+' | sed 's,\t,.,' )"
for fragment in $FRAGMENTED_TABLES; do
  database="$( echo $fragment | cut -d. -f1 )"
  table="$( echo $fragment | cut -d. -f2 )"
  # Skip the header row, then optimize each fragmented table.
  [ $fragment != "TABLE_SCHEMA.TABLE_NAME" ] && mysql -e "USE $database; OPTIMIZE TABLE $table;" > /dev/null 2>&1
done
Make sure you are not using persistent connections. This is usually a bad idea...
Once that's the case, you will at most need as many connections as you have Apache processes. Are you able to change the max_connections setting?
Are you completely sure that the database server is completely dedicated to you?
Log on to the database as root and use "SHOW PROCESSLIST" to see who's connected. Ideally hook this into your monitoring system to track how many connections there are over time and alert if there are too many.
The maximum database connections can be configured in my.cnf, but watch out for running out of memory or address space.
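For example, in my.cnf (the value here is illustrative; size it against the memory each connection consumes):

```ini
[mysqld]
# Each connection costs memory (thread stack, per-connection buffers),
# so raise this only as far as the server's RAM allows.
max_connections = 200
```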
If you have shell access, use netstat to see how many sockets are opened to your database and where they come from.
On Linux, type:
netstat -n -a |grep 3306
On windows, type:
netstat -n -a |findstr 3306
The solution could be one of these; I came across this in an MCQ test, and even I did not understand which one is right!
Set this in my.cnf "set-variable=max_connections=200"
Execute the command "SET GLOBAL max_connections = 200;"
Always use the mysql_connect() function to connect to the MySQL server
Always use the mysql_pconnect() function to connect to the MySQL server
The following are possible solutions:
1) Increase the max connections setting by setting the global variable in MySQL:
SET GLOBAL max_connections = 200;
Note: this will increase the server load.
2) Empty your connection pool as below:
FLUSH HOSTS;
3) Check your process list and kill specific processes if you don't want any of them.
You may refer to this:
article link