I'm having trouble understanding an example of when this function would be used, and how it would be implemented. Could anyone provide some clarity on the subject? The PHP manual provides this information, but I'd really appreciate it if someone could break it down "barney style" for me.
Thanks in advance.
Checks whether the connection to the server is working. If it has gone down, and global option mysqli.reconnect is enabled, an automatic reconnection is attempted.
This function can be used by clients that remain idle for a long while, to check whether the server has closed the connection and reconnect if necessary.
http://php.net/manual/en/mysqli.ping.php
Let's say you have a PHP job that runs from a crontab under Linux.
The script may take a long time to run.
Plus, more than one instance of the script may be running at the same time.
Within the script you connect to your DB at the beginning, then the script does a lot of work (maybe downloading large data, preparing large data, ...) and it uses the database here and there. But in some cases the database connection is lost because the idle time was too long (a database configuration setting). One instance may need 1 minute for its download, while another needs more than 5 hours.
This is where the mysqli_ping function comes in and handles that. Instead of always reconnecting to the database (before each query, to be really, really sure it's connected), mysqli_ping can test whether the connection is still working; if not, you can then reconnect.
The relevant MySQL settings here are connect_timeout, wait_timeout and max_allowed_packet; see the MySQL documentation.
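To illustrate, here is a minimal sketch of that pattern. The host, credentials, table and the fake "work" are placeholders, not anything from the question:

<?php
// Sketch of a long-running cron job that pings before each round of
// queries and reconnects only when the idle connection was dropped.
function db_connect(): mysqli
{
    return mysqli_connect('localhost', 'user', 'secret', 'jobs');
}

$db = db_connect();

while (true) {
    sleep(600); // stand-in for a long unit of work (downloading, preparing data, ...)

    // The server may have closed the idle connection (wait_timeout);
    // test it cheaply and reconnect only when necessary.
    if (!mysqli_ping($db)) {
        $db = db_connect();
    }

    mysqli_query($db, 'UPDATE jobs SET done = done + 1');
}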
Kindly, Barney
If you have a long-running script, for example some back-end process such as a cron job, where there can be a long span between connecting and issuing queries, mysqli_ping comes in handy for checking database connection availability.
Related
I have a script that remotely calls a database on another server using PDO. At some point in the future I will thread this process, but for now that is not a luxury I have.
Basically, if the connection is good at all it's going to go through in less than a second; maybe 3-4 if there's a delay on the network but rarely that.
However, if the connection is bad, i.e. remote server down, PDO is going to keep trying - which will cause user frustration.
I would like to give PDO say 5 seconds to connect, and if it can't, die and go to exception handling and proceed on. (Since the contents of remote database are not essential to the application)
Any way to do this? set_time_limit() will not work, it will limit time for the script as a whole!
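One approach worth sketching: the pdo_mysql driver applies PDO::ATTR_TIMEOUT to the connection attempt, so you can cap it at 5 seconds and catch the failure. The DSN and credentials below are placeholders:

<?php
// Sketch: cap the connection attempt at ~5 seconds and treat failure as
// non-fatal, since the remote data isn't essential to the application.
try {
    $pdo = new PDO(
        'mysql:host=remote.example.com;dbname=stats',
        'user',
        'secret',
        [
            PDO::ATTR_TIMEOUT => 5,
            PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
        ]
    );
} catch (PDOException $e) {
    $pdo = null; // remote DB unreachable: carry on without it
}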
Possible Duplicate:
MySQL server has gone away - in exactly 60 seconds
We're using the mysql_connect function in PHP to create database handles for a daemon. The problem is that a connection might go unused for more than 8 hours (sometimes for more than a few weeks).
We're running into issues where MySQL will end the session because the wait_timeout is reached.
http://dev.mysql.com/doc/refman/5.0/en/gone-away.html
We don't want to increase this value for ALL connections. Is there a way to increase the timeout for that handle only through PHP? Through MySQL?
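For what it's worth, wait_timeout is a session-scoped server variable, so one option is to raise it for just that one handle right after connecting. A sketch (using mysqli and placeholder values rather than the old mysql_* functions):

<?php
// Sketch: raise wait_timeout for this one connection only (session scope),
// leaving the server-wide default untouched. The value is a placeholder.
$db = mysqli_connect('localhost', 'daemon', 'secret', 'app');
mysqli_query($db, 'SET SESSION wait_timeout = 86400'); // 24 hours, this handle only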
It's not good to hang onto DB connections for long periods because the DB only provides a fixed number of connections at any one time; if you're using one up for ages it means your DB has less capacity to deal with other requests, even if you're not actually doing anything with that connection.
I suggest dropping the connection if the program is finished using it for the time being, and re-connect when the time comes to do more DB work.
In addition, this solution will protect your program from possible database downtime, e.g. if you need to reboot your DB server (it happens, even on the best-supported network). If you keep the connection alive (i.e. with a DB ping, as per the other answers), an event like that will leave you with exactly the same problem you have now. With a properly managed connection that is dropped when not needed, you can safely leave your daemon running even through planned downtime on your DB; as long as it remains idle for the duration, it needn't be any the wiser.
(As an aside, I'd also question the wisdom of writing a PHP program that runs continuously; PHP is designed for short-duration web requests. It may be capable of running long-term daemon programs, but there are better tools for the job.)
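A minimal sketch of that connect-on-demand pattern, where wait_for_next_task() is a hypothetical stand-in for the daemon's own blocking logic and the credentials are placeholders:

<?php
// Sketch: open a connection only around each burst of DB work,
// then release it so nothing is held while the daemon idles.
while (true) {
    $taskId = wait_for_next_task(); // hypothetical: blocks, possibly for hours

    $db = mysqli_connect('localhost', 'daemon', 'secret', 'app');
    mysqli_query($db, "UPDATE tasks SET state = 'done' WHERE id = " . (int) $taskId);
    mysqli_close($db); // hold no connection while idle
}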
For this problem, the best way is to use mysql_ping().
Check out the mysql_ping() page in the PHP manual.
I have a PHP script that runs indefinitely, performing a specific task every 5-10 seconds (a do-while loop, checks the database at the end of every iteration to determine whether or not it should continue). This task includes MySQL database queries. What is the best way to handle the database connection? Should I:
a.) disconnect and then reconnect to the database every iteration?
b.) set the connection timeout to an indefinite limit?
c.) ping the database to make sure I'm still connected, and reconnect if necessary before executing any queries?
d.) Something else?
EDIT: To clarify, the script sends a push notification to users' iPhones.
The suggestion that you cannot run your PHP script as a daemon is ridiculous. I've done it several times, and it works quite well. There is some example code to get you started. (Requires PEAR... if you're not a fan, roll your own.)
Now, on to your actual question. If you are making regular queries, your MySQL connection will not timeout on you. That timeout is for idle connections. Definitely stay connected... there is no reason for the overhead of disconnecting and reconnecting. In any case, on a database failure, since your script is running as a daemon, you probably don't want to immediately kill the process.
I recommend handling the exception and reconnecting. If your reconnect fails, back off for a little while longer before trying again. After a handful of failures (whatever is appropriate), you may kill the process at that point, as something is probably broken that requires human intervention.
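A rough sketch of that catch-and-reconnect approach. The retry limit, delays and credentials are placeholders, and mysqli is put into exception mode so failures are catchable:

<?php
// Sketch: retry a query with backoff, reconnecting between attempts.
mysqli_report(MYSQLI_REPORT_ERROR | MYSQLI_REPORT_STRICT);

function query_with_retry(mysqli &$db, string $sql): mysqli_result|bool
{
    for ($attempt = 1; ; $attempt++) {
        try {
            return mysqli_query($db, $sql);
        } catch (mysqli_sql_exception $e) {
            if ($attempt >= 5) {
                throw $e; // give up: this needs human intervention
            }
            sleep($attempt * 10); // back off a little longer each time
            try {
                $db = mysqli_connect('localhost', 'daemon', 'secret', 'app');
            } catch (mysqli_sql_exception $e) {
                // still down; loop around and back off again
            }
        }
    }
}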
So I was wondering whether I should or should not ping the MySQL server (mysqli_ping) to ensure that the server is always alive before running a query?
You shouldn't ping MySQL before a query, for three reasons:
It's not a reliable way of checking that the server will be up when you attempt to execute your query; it could very well go down in the time between the ping response and the query.
Your query may fail even if the server is up.
As the amount of traffic to your website scales up, you will be adding a lot of extra overhead to the database. It's not uncommon in enterprise apps that have used this method to see a huge amount of the database's resources wasted on pings.
The best way to deal with database connections is error handling (try/catch), retries and transactions.
More on this on the MySQL performance blog:
Checking for a live database connection considered harmful
In that blog post you'll see 73% of the load on that instance of MySQL was caused by applications checking if the DB was up.
I don't do this. I rely on the fact that I'll have a connection error if the server's gone and I try to do something.
Doing the ping might save you a bit of time and appear to be more responsive to the user, but a fast connection error isn't much better than waiting a few seconds followed by a connection error. Either way, the user can't do anything about it.
No.
Do you ping SO before you navigate there in a browser, just to be sure the server is running?
So I was wondering whether I should or should not ping the MySQL server (mysqli_ping) to ensure that the server is always alive before running a query?
Not really. If it is not live, you will find out through the error messages coming from your queries or when connecting to the database. You can get the MySQL error with:
mysql_error()
Example:
mysql_connect(......) or die(mysql_error());
This is not the standard way of dealing with it... If there's an exception, you'll deal with it then.
It's somewhat similar to the difference between checking that a file exists before trying to open it, or catching the file-not-found exception when it occurs... If it's a very, very common and likely error it may be worth it to check before, but usually execution should try to go normally and exceptions should be caught and handled when they occur.
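A minimal sketch of that "try it and handle the exception when it occurs" approach with PDO. The DSN, credentials and query are placeholders:

<?php
// Sketch: attempt the work normally; handle the failure where it happens,
// with no ping beforehand.
try {
    $pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'secret',
                   [PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION]);
    $rows = $pdo->query('SELECT id FROM users')->fetchAll();
} catch (PDOException $e) {
    // the dead connection (or failed query) is dealt with here,
    // at the moment it actually occurs
    error_log('DB error: ' . $e->getMessage());
}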
Generally speaking, no.
However, if you have a long-running script, for example some back-end process that's called as a cron job, where there may be a long span between connecting and subsequent queries, mysqli_ping() may be useful.
Setting mysqli.reconnect to true in php.ini is useful in this case (though note that this directive is only honored when PHP is built against libmysqlclient; the default mysqlnd driver ignores it).
No.
Just because the ping succeeds doesn't mean the query will. What if the server becomes unavailable between the time you ping it and the time you execute the query?
For this reason, you'll have to have proper error-catching around the query anyway.
And if you do, you might as well simply rely on this as your primary error trap.
Adding the ping just adds unnecessary round-trips, ultimately slowing down your code.
The only time I can think of to do this is if the database is:
1. non-critical to the functioning of your app, and
2. prone to being offline.
Other than that, no.
The only time in which it would be worthwhile to use ping would be if you were implementing your own db connection pooling system. Even in that case, I wouldn't ping before every query, just on each "connect" / checkout from the pool.
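A rough sketch of what ping-on-checkout could look like in a hand-rolled pool. The Pool class itself is hypothetical and the credentials are placeholders:

<?php
// Sketch: ping only when a connection is checked out of the pool,
// never per query.
class Pool
{
    /** @var mysqli[] */
    private array $idle = [];

    public function checkout(): mysqli
    {
        while ($conn = array_pop($this->idle)) {
            if (mysqli_ping($conn)) {   // ping here, on checkout only
                return $conn;
            }
            mysqli_close($conn);        // stale; discard and try the next
        }
        return mysqli_connect('localhost', 'user', 'secret', 'app');
    }

    public function checkin(mysqli $conn): void
    {
        $this->idle[] = $conn;
    }
}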
We have an application that comprises a couple of off-the-shelf PHP applications (ExpressionEngine and XCart) as well as our own custom code.
I did not do the actual analysis, so I don't know precisely how it was determined, but I am not surprised to hear that too many MySQL connections are being left unclosed. (I am not surprised because I have been seeing significant memory leakage on our dev server, where over the course of a day or two, starting from 100 MB upon initial boot, the entire gig of RAM gets consumed, and very little of it is cached.)
So, how do we go about determining precisely which PHP code is the culprit? I've got prior experience with XDebug, and have suggested that, once we've gotten our separate staging environment reasonably stable, we retrofit XDebug on dev and use it to do some analysis. Is this reasonable, and/or does anybody have more specific and/or additional suggestions?
You can use the
SHOW PROCESSLIST
SQL command to see what processes are running. It will tell you the username, host, database, etc. in use by each process. That should give you some idea of what's going on, especially if you have a number of databases being accessed.
More here: https://dev.mysql.com/doc/refman/8.0/en/show-processlist.html
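If you'd rather inspect it from PHP, here's a small sketch (credentials are placeholders; you need the PROCESS privilege to see other users' threads):

<?php
// Sketch: run SHOW PROCESSLIST from PHP and print who is holding connections.
$db = mysqli_connect('localhost', 'root', 'secret');
$result = mysqli_query($db, 'SHOW PROCESSLIST');
while ($row = mysqli_fetch_assoc($result)) {
    // Columns include Id, User, Host, db, Command, Time, State, Info
    printf("%6d %-12s %-20s %-10s %s\n",
        $row['Id'], $row['User'], $row['Host'],
        $row['db'] ?? '(none)', $row['Command']);
}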
This should not be caused by PHP code, because MySQL connections are supposed to be closed automatically.
cf. http://www.php.net/manual/function.mysql-connect.php :
The link to the server will be closed as soon as the execution of the script ends, unless it's closed earlier by explicitly calling mysql_close().
Some suggestions:
Do your developers technically have direct access to your production MySQL server? If yes, then they are probably just leaving their MySQL manager open :)
Do you have some daily batch process? If yes, maybe there are some zombie processes in memory.
PHP automatically closes any MySQL connections when the page ends. The only reason a PHP web application would have too many unclosed MySQL connections is either 1) you're using connection pooling, or 2) there's a bug in the MySQL server or the connector.
But if you really want to look at your code to find where it's connecting, see http://xdebug.org/docs/profiler
As others said, PHP terminates MySQL connections created through mysql_connect or the mysqli/PDO equivalents.
However, you can create persistent connections with mysql_pconnect. It will look for an existing open connection and use it; if it can't find one, it will open a new one. If you had a lot of requests at once, that could have caused loads of connections to open and stay open.
You could lower the maximum number of connections, or lower the timeout for persistent connections. See the comments at the bottom of the man page for more details.
I used to run a script that polled SHOW STATUS for thread count and I noticed that using mysql_pconnect always encouraged high numbers of threads. I found that very disconcerting because then I couldn't tell when my connection rate was actually dropping. So I made sure to centralize all the places where mysql_connect() was called and eliminate mysql_pconnect().
The next thing I did was look at the connection timeouts and adjust them down to more like 30 seconds. So I adjusted my my.cnf with
connect-timeout=30
so I could actually see the number of connections drop off. The number of connections you need open depends on how many Apache workers you're running times the number of database connections each of them will open.
The other thing I started doing was adding a note to my queries in order to spot them in SHOW PROCESSLIST or mytop. I would add a note column to my results like:
$q = "SELECT '" . __FILE__ . '.' . __LINE__ . "' AS _info, t.* FROM table t ...";
This would show me the file issuing the query when I looked at mytop, and it didn't foil the MySQL query cache like using
/* __FILE__.'.'.__LINE__ */
at the start of my query would.
I suppose another couple of things I can do, with regard to the general memory issue (as opposed to MySQL specifically), and particularly within the context of our own custom code, would be to wrap our code with calls to one or the other of the following PHP built-in functions:
memory_get_usage
memory_get_peak_usage
In particular, since I am currently working on logging from some custom code, I can log the memory usage while I'm at it.
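For example, a small sketch of what that wrapper could look like, where log_message() is a hypothetical stand-in for our existing logger:

<?php
// Sketch: fold memory figures into existing custom logging.
function log_with_memory(string $msg): void
{
    log_message(sprintf(
        '%s [mem: %.1f MB, peak: %.1f MB]',
        $msg,
        memory_get_usage(true) / 1048576,
        memory_get_peak_usage(true) / 1048576
    ));
}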