Should I ping mysql server before each query? - php

So I was wondering whether I should or should not ping the MySQL server (mysqli_ping) to ensure that the server is always alive before running a query?

You shouldn't ping MySQL before a query, for three reasons:
It's not a reliable way of checking that the server will be up when you attempt to execute your query; it could very well go down in the time between the ping response and the query.
Your query may fail even if the server is up.
As the traffic to your website scales up, you will be adding a lot of extra overhead to the database. It's not uncommon in enterprise apps that have used this method to see a huge amount of the database's resources wasted on pings.
The best way to deal with database connections is error handling (try/catch), retries and transactions.
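For example, here is a minimal sketch of query-level error handling with a retry; the helper name, retry count and back-off are illustrative, not a prescribed pattern:
function runQuery(PDO $pdo, string $sql, array $params = [], int $retries = 2): array
{
    // Assumes PDO::ERRMODE_EXCEPTION (the default since PHP 8).
    for ($attempt = 0; ; $attempt++) {
        try {
            $stmt = $pdo->prepare($sql);
            $stmt->execute($params);
            return $stmt->fetchAll(PDO::FETCH_ASSOC);
        } catch (PDOException $e) {
            if ($attempt >= $retries) {
                throw $e;   // give up after the last retry
            }
            usleep(100000); // brief back-off before retrying
        }
    }
}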
More on this on the MySQL performance blog:
Checking for a live database connection considered harmful
In that blog post you'll see that 73% of the load on that MySQL instance was caused by applications checking whether the DB was up.

I don't do this. I rely on getting a connection error if the server's gone when I try to do something.
Doing the ping might save you a bit of time and appear to be more responsive to the user, but a fast connection error isn't much better than waiting a few seconds followed by a connection error. Either way, the user can't do anything about it.

No.
Do you ping SO before you navigate there in a browser, just to be sure the server is running?

So I was wondering whether I should or should not ping the mysql server (mysqli_ping) to ensure that the server is always alive before running query?
Not really. If it is not live, you will come to know through the error messages from your queries or when connecting to the database. You can get the MySQL error with:
mysql_error()
Example:
mysql_connect(......) or die(mysql_error());

This is not the standard way of dealing with it... If there's an exception, you'll deal with it then.
It's somewhat similar to the difference between checking that a file exists before trying to open it and catching the file-not-found exception when it occurs... If it's a very common and likely error, it may be worth checking beforehand, but usually execution should proceed normally and exceptions should be caught and handled when they occur.
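To make the analogy concrete, a tiny sketch (the path is hypothetical; SplFileObject throws a RuntimeException if the file can't be opened):
try {
    $file = new SplFileObject('/tmp/report.csv');
} catch (RuntimeException $e) {
    // Handle the missing file here, when it actually happens, instead of
    // calling file_exists() first and hoping nothing changes in between.
    $file = null;
}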

Generally speaking, no.
However, if you have a long-running script, for example some back-end process run as a cron job, where there may be a long gap between connecting and subsequent queries, mysqli_ping() may be useful.
Setting mysqli.reconnect to true in php.ini is useful in this case.
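A minimal sketch of such a long-running worker, assuming mysqli.reconnect = 1 so that mysqli_ping() can transparently re-establish a dropped connection; the credentials and the next_job() queue function are placeholders:
$db = mysqli_connect('localhost', 'user', 'pass', 'mydb');
while ($job = next_job()) {
    if (!mysqli_ping($db)) {
        // Automatic reconnect failed; fall back to a brand-new connection.
        $db = mysqli_connect('localhost', 'user', 'pass', 'mydb');
    }
    mysqli_query($db, $job->sql);
    sleep(60); // the long gaps between queries are why the ping is needed here
}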

No.
Just because the ping succeeds doesn't mean the query will. What if the server becomes unavailable between the time you ping it and the time you execute the query?
For this reason, you'll have to have proper error-catching around the query anyway.
And if you do, you might as well simply rely on this as your primary error trap.
Adding the ping just adds unnecessary round-trips, ultimately slowing down your code.

The only time I can think of to do this is if the database is
1. non-critical to the functioning of your app, and
2. prone to being offline.
Other than that, no.

The only time in which it would be worthwhile to use ping would be if you were implementing your own db connection pooling system. Even in that case, I wouldn't ping before every query, just on each "connect" / checkout from the pool.
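A rough sketch of what that checkout might look like; the class and method names are illustrative, not a real pooling library:
class ConnectionPool
{
    private array $idle = [];

    public function checkout(): mysqli
    {
        // Ping only when handing out a cached connection, never per-query.
        while ($conn = array_pop($this->idle)) {
            if (@mysqli_ping($conn)) {
                return $conn;    // still alive, reuse it
            }
            mysqli_close($conn); // stale, discard it
        }
        return mysqli_connect('localhost', 'user', 'pass', 'mydb');
    }

    public function checkin(mysqli $conn): void
    {
        $this->idle[] = $conn;
    }
}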

Related

Getting "MySQL Server has gone away" in PHP, even if I'm closing my database connection

I have a PHP Socket Server that I can connect to via Telnet. Once connected, I am able to send messages in a certain format and they're saved in the database.
What happens is that when the PHP socket receives a message, it opens a database connection, executes the query, then closes the connection. When it receives another message, it opens the connection again, executes the query, then closes the connection again.
So far, this works when I'm sending messages at intervals of 5-10 minutes. However, when the interval grows to over an hour or so, I get a MySQL Server has gone away error.
Upon doing some research, the common solution seems to be increasing the wait timeout, which is not an option for me. The PHP socket server is supposed to be open 24/7, and I doubt there's a way to increase the wait time to infinity.
The other option is to check in PHP itself if the MySQL Server has gone away or not, and depending on the result, reset the MySQL Server, and try to establish the connection again.
Does anyone know how to do this in PHP? Or if anyone has other methods in keeping a MySQL Server constantly alive in a PHP Socket server, I would also be open to that idea.
You'll get this error if the database connection has timed out (for example, in a long-running process where there has been a long gap since the connection was last used).
You can easily check the connection and restore it if necessary with a single command:
$mysqli->ping()
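In a socket server like this, a hedged sketch would be to verify the connection after a long idle period and reconnect manually if the ping fails (credentials are placeholders):
function ensureConnected(mysqli $mysqli): mysqli
{
    if ($mysqli->ping()) {
        return $mysqli; // connection is still alive
    }
    // The connection timed out while the server sat idle; open a new one.
    return new mysqli('localhost', 'user', 'pass', 'mydb');
}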
This error means that the connection has been established to the DBMS but has subsequently been lost. I've run sites with hundreds of concurrent connections to the same database instance handling thousands of queries per minute and have only seen this error when a query has been deliberately killed by an admin. You should check your config for the interactive timeout and read your server logs.
Given your description of the error (you really need to gather more data and characterize the circumstances of the error), the only explanation which springs to mind is that there is a deadlock somewhere.
I would question whether there is any benefit to closing the connection after each message (depending on the usage of the system). Artistic phoenix's comment is somewhat confused. If there are capacity issues, then I'd suggest using persistent connections with your existing open/close model, but I doubt that is relevant to the problem you describe here.

Aborting a Select Query if it Takes Too long

I'm having a web application written in PHP.
One function of this application is a document archive, which is a MySQL database on another server. This archive server is pretty unreliable performance-wise, but it is not under my control. The archive server often has long table locks, which results in getting a connection but not getting any data back.
This often leads to open MySQL connections which saturate the resources of the web-application server. As a result, the whole web application becomes slow or inaccessible.
I would like to decouple the two systems.
I thought the logical way would be for my PHP application to abort a SELECT query if it takes longer than 1 or 2 seconds, to free up resources and present the user with a message that the remote system is not responding in time.
But how is it best to implement such a solution?
UPDATE: the set_time_limit() option looks promising, but is not fully satisfying, as I'm not able to present the user with a message; at least it might help to prevent the saturation of the resources.
I think you should use the maximum execution time limit provided in PHP.
You can set the MySQL timeout in the server configuration, or you can set it in code when you open the connection, as in the sketch below.
I think the second solution might be better for you.
Then, if the timeout error is raised, you can tell the user that the server did not respond.
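A sketch along those lines using mysqli's client-side timeout options; the host, credentials, table and two-second values are illustrative, and MYSQLI_OPT_READ_TIMEOUT requires PHP 7.2+ with mysqlnd:
$mysqli = mysqli_init();
$mysqli->options(MYSQLI_OPT_CONNECT_TIMEOUT, 2); // give up connecting after 2s
$mysqli->options(MYSQLI_OPT_READ_TIMEOUT, 2);    // abort reads that stall
if (!$mysqli->real_connect('archive-host', 'user', 'pass', 'archive')) {
    exit('The remote archive is not responding, please try again later.');
}
$result = $mysqli->query('SELECT id, title FROM documents');
if ($result === false) {
    $mysqli->close(); // free the connection instead of holding it open
    exit('The remote archive did not answer in time.');
}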

What are "many successive interrupted connection requests" in MySQL?

I regularly have the following error:
PHP Fatal error: Uncaught exception 'PDOException' with message 'SQLSTATE[HY000] [1129] Host 'MY SERVER' is blocked because of many connection errors; unblock with 'mysqladmin flush-hosts'
It is easy to work around the problem with a regular mysqladmin flush-hosts command (e.g. from crontab) or by increasing the max_connect_errors system variable, as written here.
BUT! What are "many successive interrupted connection requests", and why is this happening?
I'd rather prevent the problem upstream than keep correcting the blocking after the fact.
MySQL version : 5.5.12. I'm using Zend Framework 1.11.10 and Doctrine 2.1.6.
There are no mysql_close() or mysqli_close() calls in my PHP code.
max_connect_errors has the default value, 10, and I don't want to increase it yet; I want to understand why I'm getting the errors. I use a cron job, every 5 minutes, which does a mysqladmin flush-hosts command.
This response is by design, as a security measure, and is the result of reaching the max_connect_errors value in MySQL. Here's a link Oracle provides which details most of the possible causes and solutions.
Ultimately this means that there are so many successive connection failures that MySQL stops responding to connection attempts.
I use a cron, every 5 minutes which does a mysqladmin flush-hosts command.
As you are reaching this limit so quickly, there are only a few likely culprits:
1. The server is not correctly configured to use PDO.
2. Your running code creates new connections very frequently, quickly reaching the max_connections value and causing all subsequent connection attempts to fail... thus quickly reaching the max_connect_errors limit.
3. Code is hitting an infinite loop or a cascading failure. An obvious possibility, but it must be mentioned (e.g. pageA calls pageB and pageC, and pageC calls pageA).
4. PDO is running fine, but some scripts take a long time to run, or never end. The easiest way to catch this is to turn down max_execution_time.
It is likely that, whatever the case, this will be difficult to track down.
Log a stack trace of every MySQL connection attempt to find the code that is causing this (see the sketch below).
Check the mysql.err logfile.
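For instance, a sketch of that stack-trace logging, assuming you can route all connection attempts through a single helper (the function name and DSN are illustrative):
function connectWithTrace(): PDO
{
    // Record who is asking for a connection before actually connecting.
    error_log("MySQL connect from:\n" . (new Exception())->getTraceAsString());
    return new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
}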
While PDO does not require explicitly closing MySQL connections, for cases like this there are a few practices that can prevent such server-admin hunts:
Always explicitly close MySQL connections.
Build a simple class to handle all connections: open, return an array, close (a sketch follows after this list). The only time you need to keep a connection open is for cursors.
Always define connection arguments in one and only one file, included everywhere it is needed.
Never increase max_execution_time unless you know you need it and you know the server can handle it. If you need it, explicitly increase the value only for the script that needs it. php.net/manual/en/function.set-time-limit.php
If you increase max_execution_time, increase max_connections as well.
dev.mysql.com/doc/refman/5.0/en/cursors.html
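A sketch of the kind of class meant above, open / run / return an array / close; the class name, DSN and credentials are illustrative:
class Db
{
    public static function fetchAll(string $sql, array $params = []): array
    {
        $pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
        $stmt = $pdo->prepare($sql);
        $stmt->execute($params);
        $rows = $stmt->fetchAll(PDO::FETCH_ASSOC);
        $stmt = null; // release the statement...
        $pdo = null;  // ...and the connection, as soon as the data is in memory
        return $rows;
    }
}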
It means that mysqld has received many connection requests from the given host that were interrupted in the middle. Check the MySQL documentation on blocked hosts for more info.

MongoDB - Too Many Connection Error

We have developed a chat module using Node.js and MongoDB sharding and gone live on our production server. But today it reached 20,000 connections in MongoDB and we are getting a "Too many connections" error in the logs. After that we restarted the Node server and it is back to normal, but we need to know how to solve this problem for good.
Is there any configuration in MongoDB to kill a connection if it is not being used, or to set an expiry time when establishing the connection?
Please help us to close this issue.
Regards,
Kumaran
You're probably not running into a MongoDB issue. There's a cap on the number of connections you can make to MongoDB, usually roughly equal to the maximum number of file descriptors available to it.
It sounds like there is a bug in your code (likely) or in mongoose (less likely) that either creates more connections than it closes, or never closes connections in the first place. In Java, for example, creating a new "Mongo" class instance for each query would result in this sort of problem, but I don't work with node.js/mongoose, so I do not know what the JS equivalent of that is.
Keep an eye on mongostat and check whether the connection count always increases, or decreases sometimes. If it's the former, your code never releases connections for whatever reason. If it's the latter, you're simply creating them faster than idle connections are disconnected. That's usually due to doing something heavyweight (like the driver initialising its connection pool) for every query rather than once.

how to determine which PHP code opens MySQL connections that aren't getting closed

We have an application that is comprised of a couple of off-the-shelf PHP applications (ExpressionEngine and XCart) as well as our own custom code.
I did not do the actual analysis, so I don't know precisely how it was determined, but I am not surprised to hear that too many MySQL connections are being left unclosed. (I am not surprised because I have been seeing significant memory leakage on our dev server, where over the course of a day or two, starting from 100MB upon initial boot, the entire gig of RAM gets consumed, and very little of it is cached.)
So, how do we go about determining precisely which PHP code is the culprit? I've got prior experience with XDebug, and have suggested that, once we've got our separate staging environment reasonably stable, we retrofit XDebug on dev and use it to do some analysis. Is this reasonable, and does anybody have more specific or additional suggestions?
You can use the SHOW PROCESSLIST SQL command to see what processes are running. That will tell you the username, host, database, etc. in use by each process. That should give you some idea of what's going on, especially if you have a number of databases being accessed.
More here: https://dev.mysql.com/doc/refman/8.0/en/show-processlist.html
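If it helps, you can also snapshot the process list from PHP itself; this sketch assumes the connecting user has the PROCESS privilege to see other sessions (credentials are placeholders):
$pdo = new PDO('mysql:host=localhost', 'root', 'pass');
foreach ($pdo->query('SHOW PROCESSLIST') as $row) {
    // Id, User, Host, db, Time and State are standard processlist columns.
    printf("%s %s@%s db=%s time=%ss state=%s\n",
        $row['Id'], $row['User'], $row['Host'],
        $row['db'] ?? '-', $row['Time'], $row['State']);
}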
This should not be caused by PHP code, because MySQL connections are supposed to be closed automatically.
cf. http://www.php.net/manual/function.mysql-connect.php :
The link to the server will be closed as soon as the execution of the script ends, unless it's closed earlier by explicitly calling mysql_close().
Some suggestions:
Do your developers have direct access to your production MySQL server? If yes, then they probably just leave their MySQL manager open :)
Do you have some daily batch process? If yes, maybe there are some zombie processes in memory.
PHP automatically closes any MySQL connections when the page ends. The only reason a PHP web application would have too many unclosed MySQL connections is either 1) you're using connection pooling, or 2) there's a bug in the MySQL server or the connector.
But if you really want to look at your code to find where it's connecting, see http://xdebug.org/docs/profiler
As others said, PHP terminates MySQL connections created through mysql_connect or the mysqli/PDO equivalents.
However, you can create persistent connections with mysql_pconnect. It will look for an existing open connection and use it; if it can't find one, it will open a new one. If you had a lot of requests at once, it could have caused loads of connections to open and stay open.
You could lower the maximum number of connections, or lower the timeout for persistent connections. See the comments at the bottom of the man page for more details.
I used to run a script that polled SHOW STATUS for thread count and I noticed that using mysql_pconnect always encouraged high numbers of threads. I found that very disconcerting because then I couldn't tell when my connection rate was actually dropping. So I made sure to centralize all the places where mysql_connect() was called and eliminate mysql_pconnect().
The next thing I did was look at the connection timeouts and adjust them to more like 30 seconds, so I could actually see the number of connections drop off. I adjusted my my.cnf with
connect-timeout=30
How many connections you need open depends on how many Apache workers you're running, times the number of database connections each will open.
The other thing I started doing was adding a note to my queries in order to spot them in SHOW PROCESSLIST or mytop; I would add a note column to my results like:
$q = "SELECT '".__FILE__.'.'.__LINE__."' as _info, * FROM table ...";
This would show me the file issuing the query when I looked at mytop, and it didn't foil the MySQL query cache the way putting /* __FILE__.'.'.__LINE__ */ at the start of my query would.
I suppose another couple of things I can do, with regard to the general memory issue (as opposed to MySQL specifically), and particularly within the context of our own custom code, would be to wrap our code with calls to one or other of the following PHP built-in functions:
memory_get_usage
memory_get_peak_usage
In particular, since I am currently working on logging from some custom code, I can log the memory usage while I'm at it.
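A small sketch of what that wrapping could look like; the job function and log format are illustrative:
$before = memory_get_usage();
run_import_job(); // placeholder for the custom code being measured
error_log(sprintf(
    '%s: used %d bytes (peak %d bytes)',
    __FILE__,
    memory_get_usage() - $before,
    memory_get_peak_usage()
));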
