I have a script that remotely calls a database on another server using PDO. At some point in the future I will thread this process, but for now that is not a luxury I have.
Basically, if the connection is good at all it's going to go through in less than a second; maybe 3-4 if there's a delay on the network but rarely that.
However, if the connection is bad, i.e. remote server down, PDO is going to keep trying - which will cause user frustration.
I would like to give PDO, say, 5 seconds to connect, and if it can't, give up, drop into exception handling, and carry on (the contents of the remote database are not essential to the application).
Any way to do this? set_time_limit() will not work; it limits the running time of the script as a whole!
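Roughly, what I'm hoping for is something like the sketch below. I'm assuming here that PDO::ATTR_TIMEOUT is honoured as a connect timeout by the MySQL driver; the DSN and credentials are placeholders.

$user = 'app';      // placeholder credentials
$pass = 'secret';
try {
    $remote = new PDO(
        'mysql:host=remote.example.com;dbname=reports',   // hypothetical remote DSN
        $user,
        $pass,
        array(
            PDO::ATTR_TIMEOUT => 5,                        // give up after ~5 seconds
            PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,   // throw instead of hanging silently
        )
    );
} catch (PDOException $e) {
    // The remote data is non-essential, so log the failure and carry on without it.
    error_log('Remote DB unavailable: ' . $e->getMessage());
    $remote = null;
}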
I have a PHP Socket Server that I can connect to via Telnet. Once connected, I am able to send messages in a certain format and they're saved in the database.
What happens is that when the PHP Socket receives a message, it opens a Database connection, executes the query, then closes the connection. When it receives another message, it opens the connection again, executes the query, then closes the connection.
So far, this works when I'm sending messages at an interval of 5-10 minutes. However, when the interval increases to over an hour or so, I get a "MySQL server has gone away" error.
Upon doing some research, the common solution seems to be increasing the wait time, which is not an option for me. The PHP Socket server is supposed to be open 24/7, and I doubt there's a way to increase the wait time to infinity.
The other option is to check in PHP itself if the MySQL Server has gone away or not, and depending on the result, reset the MySQL Server, and try to establish the connection again.
Does anyone know how to do this in PHP? Or if anyone has other methods in keeping a MySQL Server constantly alive in a PHP Socket server, I would also be open to that idea.
You'll get this error if the database connection has timed out (perhaps from a long-running process since the last time the connection was used).
You can easily check the connection and restore it if necessary with a single command:
$mysqli->ping()
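A minimal sketch of that check-and-restore idea; the function and connection details are made up, and it reconnects by hand rather than relying on the mysqli.reconnect ini setting:

function ensure_connection($mysqli, $host, $user, $pass, $db)
{
    if ($mysqli instanceof mysqli && @$mysqli->ping()) {
        return $mysqli;                               // the link is still alive, reuse it
    }
    return new mysqli($host, $user, $pass, $db);      // gone away: open a fresh link
}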
This error means that the connection has been established to the DBMS but has subsequently been lost. I've run sites with hundreds of concurrent connections to the same database instance handling thousands of queries per minute and have only seen this error when a query has been deliberately killed by an admin. You should check your config for the interactive timeout and read your server logs.
Given your description of the error (you really need to gather more data and characterize the circumstances of the error), the only explanation which springs to mind is that there is a deadlock somewhere.
I would question whether there is any benefit to closing the connection after each message (depending on the usage of the system). Artistic phoenix's comment is somewhat confused. But if there are capacity issues then I'd suggest using persistent connections with your existing open/close model, but I doubt that is relevant to the problem you describe here.
I'm having trouble understanding an example of when this function would be used, and how it would be implemented. Could anyone provide some clarity on the subject? The php manual provides this information but I'd really appreciate it if someone could break it down "barney style" for me.
Thanks in advance.
Checks whether the connection to the server is working. If it has gone down, and the global option mysqli.reconnect is enabled, an automatic reconnection is attempted.
This function can be used by clients that remain idle for a long while, to check whether the server has closed the connection and reconnect if necessary.
http://php.net/manual/en/mysqli.ping.php
Let's say you have a PHP job that runs from a crontab under Linux.
The script may take a long time to run.
On top of that, more than one instance of the script may be running at the same time.
Within the script you connect to your DB at the beginning, then the script does a lot of work (maybe downloading large data, preparing large data, ...) and only uses the database here and there. In some cases the database connection is lost because the idle time was too long (database configuration). One instance may need only a minute for its download while another needs more than 5 hours.
Here is where the mysqli_ping function comes in. Instead of always reconnecting to the database (before each query, just to be really, really sure it's connected), mysqli_ping can test whether the connection is still working; if not, you can reconnect.
Topics here: the MySQL wait_timeout, interactive_timeout and max_connections settings
see the MySQL documentation
Kindly, Barney
If you have a long-running script, for example some back-end process such as a cron job where there could be a time span between connecting and applying queries, mysqli_ping comes in handy for checking whether the DB connection is still available.
I have a PHP script that runs indefinitely, performing a specific task every 5-10 seconds (a do-while loop, checks the database at the end of every iteration to determine whether or not it should continue). This task includes MySQL database queries. What is the best way to handle the database connection? Should I:
a.) disconnect and then reconnect to the database every iteration?
b.) set the connection timeout to an indefinite limit?
c.) ping the database to make sure I'm still connected, and reconnect if necessary before executing any queries?
d.) Something else?
EDIT: To clarify, the script sends a push notification to users' iPhones.
The suggestion that you cannot run your PHP script as a daemon is ridiculous. I've done it several times, and it works quite well. There is some example code to get you started. (Requires PEAR... if you're not a fan, roll your own.)
Now, on to your actual question. If you are making regular queries, your MySQL connection will not timeout on you. That timeout is for idle connections. Definitely stay connected... there is no reason for the overhead of disconnecting and reconnecting. In any case, on a database failure, since your script is running as a daemon, you probably don't want to immediately kill the process.
I recommend handling the exception and reconnecting. If your reconnect fails, fall back for a little while longer before trying again. After a handful of failures (whatever is appropriate), you may kill the process at that time, as something is probably broken that requires human intervention.
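A rough sketch of that reconnect-and-back-off idea, assuming PDO and made-up connection details:

$user = 'app';      // placeholder credentials
$pass = 'secret';
$attempts = 0;
$db = null;

while ($db === null) {
    try {
        $db = new PDO('mysql:host=localhost;dbname=app', $user, $pass,
            array(PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION));
    } catch (PDOException $e) {
        if (++$attempts >= 5) {
            // Something is probably broken that needs a human; let the daemon die.
            fwrite(STDERR, 'Giving up on the database: ' . $e->getMessage() . PHP_EOL);
            exit(1);
        }
        sleep($attempts * 10);   // fall back for a little while longer each time
    }
}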
In PDO, a connection can be made persistent using the PDO::ATTR_PERSISTENT attribute. According to the php manual -
Persistent connections are not closed at the end of the script, but are cached and re-used when another script requests a connection using the same credentials. The persistent connection cache allows you to avoid the overhead of establishing a new connection every time a script needs to talk to a database, resulting in a faster web application.
The manual also recommends not to use persistent connection while using PDO ODBC driver, because it may hamper the ODBC Connection Pooling process.
So apparently there seem to be no drawbacks to using persistent connections in PDO, except in that last case. However, I would like to know if there are any other disadvantages of using this mechanism, i.e., a situation where it results in performance degradation or something like that.
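For reference, this is the sort of thing I mean; a minimal sketch with placeholder connection details:

$db = new PDO('mysql:host=localhost;dbname=test', 'user', 'pass', array(
    PDO::ATTR_PERSISTENT => true,   // must be set in the constructor options array
));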
Please be sure to read this answer below, which details ways to mitigate the problems outlined here.
The same drawbacks exist using PDO as with any other PHP database interface that does persistent connections: if your script terminates unexpectedly in the middle of database operations, the next request that gets the leftover connection will pick up where the dead script left off. The connection is held open at the process manager level (Apache for mod_php, the current FastCGI process if you're using FastCGI, etc.), not at the PHP level, and PHP doesn't tell the parent process to let the connection die when the script terminates abnormally.
If the dead script locked tables, those tables will remain locked until the connection dies or the next script that gets the connection unlocks the tables itself.
If the dead script was in the middle of a transaction, that can block a multitude of tables until the deadlock timer kicks in, and even then, the deadlock timer can kill the newer request instead of the older request that's causing the problem.
If the dead script was in the middle of a transaction, the next script that gets that connection also gets the transaction state. It's very possible (depending on your application design) that the next script might not actually ever try to commit the existing transaction, or will commit when it should not have, or roll back when it should not have.
This is only the tip of the iceberg. It can all be mitigated to an extent by always trying to clean up after a dirty connection on every single script request, but that can be a pain depending on the database. Unless you have identified creating database connections as the one thing that is a bottleneck in your script (this means you've done code profiling using xdebug and/or xhprof), you should not consider persistent connections as a solution to anything.
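For what it's worth, one MySQL-specific way to attempt that cleanup is to clear leftover state as soon as a request picks up a persistent connection. This is only a sketch with placeholder connection details; both statements are harmless no-ops if the connection is actually clean:

$db = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass',
    array(PDO::ATTR_PERSISTENT => true));

$db->exec('ROLLBACK');        // abandon any transaction a dead script left open
$db->exec('UNLOCK TABLES');   // release any explicit table locks still held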
Further, most modern databases (including PostgreSQL) have their own preferred ways of performing connection pooling that don't have the immediate drawbacks that plain vanilla PHP-based persistent connections do.
To clarify a point, we use persistent connections at my workplace, but not by choice. We were encountering weird connection behavior, where the initial connection from our app server to our database server was taking exactly three seconds, when it should have taken a fraction of a fraction of a second. We think it's a kernel bug. We gave up trying to troubleshoot it because it happened randomly and could not be reproduced on demand, and our outsourced IT didn't have the concrete ability to track it down.
Regardless, when the folks in the warehouse are processing a few hundred incoming parts, and each part is taking three and a half seconds instead of a half second, we had to take action before they kidnapped us all and made us help them. So, we flipped a few bits on in our home-grown ERP/CRM/CMS monstrosity and experienced all of the horrors of persistent connections first-hand. It took us weeks to track down all the subtle little problems and bizarre behavior that happened seemingly at random. It turned out that those once-a-week fatal errors that our users diligently squeezed out of our app were leaving locked tables, abandoned transactions and other unfortunate wonky states.
This sob-story has a point: It broke things that we never expected to break, all in the name of performance. The tradeoff wasn't worth it, and we're eagerly awaiting the day we can switch back to normal connections without a riot from our users.
In response to Charles' problem above,
From: http://www.php.net/manual/en/mysqli.quickstart.connections.php -
A common complaint about persistent connections is that their state is not reset before reuse. For example, open and unfinished transactions are not automatically rolled back. But also, authorization changes which happened in the time between putting the connection into the pool and reusing it are not reflected. This may be seen as an unwanted side-effect. On the contrary, the name persistent may be understood as a promise that the state is persisted.
The mysqli extension supports both interpretations of a persistent connection: state persisted, and state reset before reuse. The default is reset. Before a persistent connection is reused, the mysqli extension implicitly calls mysqli_change_user() to reset the state. The persistent connection appears to the user as if it was just opened. No artifacts from previous usages are visible.
The mysqli_change_user() function is an expensive operation. For best performance, users may want to recompile the extension with the compile flag MYSQLI_NO_CHANGE_USER_ON_PCONNECT being set.
It is left to the user to choose between safe behavior and best performance. Both are valid optimization goals. For ease of use, the safe behavior has been made the default at the expense of maximum performance.
Persistent connections are a good idea only when it takes a (relatively) long time to connect to your database. Nowadays that's almost never the case. The biggest drawback to persistent connections is that it limits the number of users you can have browsing your site: if MySQL is configured to only allow 10 concurrent connections at once then when an 11th person tries to browse your site it won't work for them.
PDO does not manage the persistence. The MySQL driver does. It reuses connections when they are available and the host/user/password/database match; if any of these change, it will not reuse a connection. The best-case net effect is that the connections you have will be started and stopped so often, because you have different users on the site, that making them persistent doesn't do any good.
The key thing to understand about persistent connections is that you should NOT use them in most web applications. They sound enticing but they are dangerous and pretty much useless.
I'm sure there are other threads on this but a persistent connection is dangerous because it persists between requests. If, for example, you lock a table during a request and then fail to unlock then that table is going to stay locked indefinitely. Persistent connections are also pretty much useless for 99% of your apps because you have no way of knowing if the same connection will be used between different requests. Each web thread will have it's own set of persistent connections and you have no way of controlling which thread will handle which requests.
The procedural mysql library of PHP has a feature whereby subsequent calls to mysql_connect will return the same link rather than opening a new connection (as one might expect). This has nothing to do with persistent connections and is specific to the mysql library; PDO does not exhibit this behaviour.
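To illustrate, a tiny sketch relying on the old ext/mysql behaviour described above (the extension has since been removed from PHP, so this is for illustration only; credentials are placeholders):

$a = mysql_connect('localhost', 'user', 'pass');
$b = mysql_connect('localhost', 'user', 'pass');        // returns the same link as $a
$c = mysql_connect('localhost', 'user', 'pass', true);  // the fourth ($new_link) argument forces a new link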
In general, you could use this as a rough ruleset:
YES, use persistent connections, if:
- There are only a few applications/users accessing the database, i.e. you will not end up with 200 open (but probably idle) connections because there are 200 different users sharing the same host.
- The database is running on another server that you are accessing over the network.
- An (one) application accesses the database very often.

NO, don't use persistent connections, if:
- Your application only needs to access the database 100 times an hour.
- You have many, many webservers accessing one database server.
Using persistent connections is considerably faster, especially if you are accessing the database over a network. It doesn't make as much difference if the database is running on the same machine, but it is still a little bit faster. However, as the name says, the connection is persistent, i.e. it stays open even when it is not being used.
The problem with that is that, in the default configuration, MySQL only allows a limited number of parallel connections (max_connections, 151 by default in recent versions); after that, new connections are refused (you can raise this setting). So if you have, say, 20 webservers, each with 100 clients on them, and every one of them makes just one page access per hour, simple math shows you'll need 2000 parallel connections to the database. That won't work.
Ergo: Only use it for applications with lots of requests.
In my tests I had a connection time of over a second to my localhost, which made me assume I should use a persistent connection. Further tests showed the problem was with 'localhost':
Test results in seconds (measured by php microtime):
hosted web: connectDB: 0.0038912296295166
localhost: connectDB: 1.0214691162109 (over one second: do not use localhost!)
127.0.0.1: connectDB: 0.00097203254699707
Interestingly: The following code is just as fast as using 127.0.0.1:
$host = gethostbyname('localhost'); // resolve 'localhost' to an IP address once, up front
// echo "<p>$host</p>";
$db = new PDO("mysql:host=$host;dbname=" . DATABASE . ';charset=utf8', $username, $password,
    array(PDO::ATTR_EMULATE_PREPARES => false,
          PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION));
Persistent connections should give a sizable performance boost. I disagree with the assessment that you should "avoid" persistence.
It sounds like the complaints above come from someone using MyISAM tables and hacking in their own version of transactions by grabbing table locks. Well, of course you're going to deadlock! Use PDO's beginTransaction() and move your tables over to InnoDB.
It seems to me that having a persistent connection would eat up more system resources. Maybe a trivial amount, but still...
The reason for using persistent connections is obviously to reduce the number of connects, which are rather costly, even though they are considerably faster with MySQL than with most other databases.
The first trouble with persistent connections...
If you are creating thousands of connections per second, you normally don't keep each one open for very long, but the operating system does. Under TCP/IP, ports can't be recycled instantly; they have to spend a while waiting in the FIN/TIME_WAIT states before they can be reused.
The second problem: using too many MySQL server connections.
Some people simply don't realize that you can increase the max_connections variable and get well over 100 concurrent connections with MySQL; others were bitten by older Linux problems that made it impossible to have more than 1024 connections with MySQL.
Now let's talk about why persistent connections were disabled in the mysqli extension. Even though you can misuse persistent connections and get poor performance, that was not the main reason. The actual reason is that you can get a lot more issues with them.
Persistent connections were put into PHP in the days of MySQL 3.22/3.23, when MySQL was simple enough that you could recycle connections easily without problems. In later versions a number of problems arose, however: if you recycle a connection that has uncommitted transactions, you run into trouble. If you recycle connections with custom character-set configurations you're at risk again, not to mention per-session variables that may have been changed.
One more trouble with persistent connections is that they don't really scale that well. If you have 5000 people connected, you'll need 5000 persistent connections. Take away the requirement for persistence and you may be able to serve 10,000 people with the same number of connections, because they can share those connections when they're not using them.
I was just wondering whether a partial solution would be to have a pool of use-once connections. You could spend time creating a connection pool when the system is at low usage, up to a limit, hand the connections out and kill them when they have either completed or timed out. In the background you create new connections as existing ones are taken. In the worst case this should only be as slow as creating the connection without the pool, assuming that establishing the link is the limiting factor?
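Very roughly, something like the sketch below is what I have in mind. The class and names are made up, and refilling the pool in the background would need a separate worker process, since a plain PHP request won't do that on its own:

class UseOncePool
{
    private $factory;
    private $idle = array();

    public function __construct($size, callable $factory)
    {
        $this->factory = $factory;
        for ($i = 0; $i < $size; $i++) {
            $this->idle[] = $factory();   // pre-build connections while the system is quiet
        }
    }

    public function take()
    {
        // Hand out a pre-built connection if one is ready; otherwise build one on
        // demand, which is no slower than having no pool at all.
        $conn = array_pop($this->idle);
        return $conn !== null ? $conn : call_user_func($this->factory);
    }
}

// Usage: $pool = new UseOncePool(10, function () { return new PDO('mysql:host=localhost;dbname=app', 'user', 'pass'); });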
So I was wondering whether I should or should not ping the mysql server (mysqli_ping) to ensure that the server is always alive before running query?
You shouldn't ping MySQL before a query for three reasons:
It's not a reliable way of checking that the server will be up when you attempt to execute your query; it could very well go down in the time between the ping response and the query.
Your query may fail even if the server is up.
As the traffic to your website scales up, you will be adding a lot of extra overhead to the database. It's not uncommon in enterprise apps that have used this method to see a huge amount of the database's resources wasted on pings.
The best way to deal with database connections is error handling (try/catch), retries and transactions.
More on this on the MySQL performance blog:
Checking for a live database connection considered harmful
In that blog post you'll see 73% of the load on that instance of MySQL was caused by applications checking if the DB was up.
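As suggested above, a sketch of the try/catch-and-retry approach with PDO; the function and its parameters are illustrative, and if the connection itself has gone away you would also reconnect inside the catch block:

function run_with_retry(PDO $db, $sql, array $params = array(), $retries = 2)
{
    for ($attempt = 0; ; $attempt++) {
        try {
            $stmt = $db->prepare($sql);
            $stmt->execute($params);
            return $stmt;
        } catch (PDOException $e) {
            if ($attempt >= $retries) {
                throw $e;          // give up and let the caller handle the error
            }
            usleep(250000);        // brief pause before retrying
        }
    }
}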
I don't do this. I rely on the fact that I'll have a connection error if the server's gone and I try to do something.
Doing the ping might save you a bit of time and appear to be more responsive to the user, but a fast connection error isn't much better than waiting a few seconds followed by a connection error. Either way, the user can't do anything about it.
No.
Do you ping SO before you navigate there in a browser, just to be sure the server is running?
So I was wondering whether I should or should not ping the mysql server (mysqli_ping) to ensure that the server is always alive before running query?
Not really. If it is not live, you will find out through the error messages coming from your queries or when connecting to the database. You can get the MySQL error with:
mysql_error()
Example:
mysql_connect(......) or die(mysql_error());
This is not the standard way of dealing with it... If there's an exception, you'll deal with it then.
It's somewhat similar to the difference between checking that a file exists before trying to open it, or catching the file-not-found exception when it occurs... If it's a very, very common and likely error it may be worth it to check before, but usually execution should try to go normally and exceptions should be caught and handled when they occur.
Generally speaking, no.
However, if you have a long-running script, for example some back-end process that's called as a cron job where there may be a time span between connecting and subsequent queries, mysqli_ping() may be useful.
Setting mysqli.reconnect to true in php.ini is useful in this case.
No.
Just because the ping succeeds doesn't mean the query will. What if the server becomes unavailable between the time you ping it and the time you execute the query?
For this reason, you'll have to have proper error-catching around the query anyway.
And if you do, you might as well simply rely on this as your primary error trap.
Adding the ping just adds unnecessary round-trips, ultimately slowing down your code.
The only time I can think of to do this is if the database is
1. non-critical to the functioning of your app, and
2. has a tendency to be offline.
Other than that, no.
The only time in which it would be worthwhile to use ping would be if you were implementing your own db connection pooling system. Even in that case, I wouldn't ping before every query, just on each "connect" / checkout from the pool.