Best practice for running multiple PHP processes using PDO objects? - php

I'm running 100 PHP scripts simultaneously on a Linux machine with MySQL.
I'm using PDO with the ATTR_PERSISTENT attribute set to false. Each process typically executes a few SQL commands and then sleeps for 30 seconds.
Looking at the process list using top, I see a lot of mysqld processes, each taking a substantial amount of memory space.
I understand this problem can be solved by redesigning the system to use queues and/or shared connections, but I'm looking for a temporary fix until I'm ready with a better setup.
What will be the best remedy for handling such a setup?
Should I destroy and recreate each PDO object while the process sleeps?
Am I missing some basic configuration option either in PDO, or MySQL?

In terms of PDO and sleeping: if you do not plan on reusing the connection, kill it as early as you can. The reason is that while your script sleeps (effectively for nothing), the corresponding worker thread on the MySQL side still exists.
If you are NOT going to waste the sleeping processes and will reuse their connections, I recommend going for a persistent connection, reused within each PHP process. This, however, opens a new can of worms: deadlocks.
The best way to close the connection is to destroy the PDO object, i.e. $yourPdoObject = null;.
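For instance, a minimal sketch of that pattern; the DSN, credentials, and jobs table below are invented placeholders:

<?php
// Hypothetical worker loop: drop the connection before sleeping,
// reopen it when work resumes.
$dsn  = 'mysql:host=127.0.0.1;dbname=mydb;charset=utf8mb4'; // placeholder DSN
$user = 'user';
$pass = 'pass';

while (true) {
    $pdo = new PDO($dsn, $user, $pass, array(PDO::ATTR_PERSISTENT => false));
    $pdo->exec('UPDATE jobs SET processed = 1 WHERE processed = 0'); // example work
    $pdo = null;  // destroy the object; the MySQL worker thread goes away
    sleep(30);    // nothing is held server-side while we sleep
}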

Related

Should I $mysqli->close a connection after each page load, if PHP runs via FCGI?

I run PHP via FCGI - that is my web server spawns several PHP processes and they keep running for like 10,000 requests until they get recycled.
My question is: if I have a $mysqli->connect at the top of my PHP script, do I need to call $mysqli->close when the script is about to finish?
Since the PHP processes stay alive for a long time, I'd imagine each $mysqli->connect would leak one connection, because the process keeps running and nobody closes the connection.
Am I right in my thinking or not? Should I call $mysqli->close?
When PHP exits it closes the database connections gracefully.
The only reason to use the close method is when you want to terminate a database connection that you'll not use anymore while the script still has a lot of work left to do, such as processing and streaming data. If that remaining work is quick, you can forget about the close call.
Putting it at the very end of a script is redundant: there is no performance or memory gain.
In a bit more detail, specifically about FastCGI:
FastCGI keeps PHP processes running between requests. FastCGI is good at reducing CPU usage by leveraging your server's available RAM to keep PHP scripts in memory instead of having to start a separate PHP process for each and every request.
FastCGI starts a master process and as many forks of that master process as you have configured, and yes, those forked processes may live for a long time. In effect, the process doesn't have to go through a complete PHP start-up each time it needs to execute a script. But that doesn't mean your scripts are running all the time: there is still a start-up and shutdown phase each time a script is executed. At that point things like the superglobals (e.g. $_POST and $_GET) are populated. You can run functions each time a script shuts down via register_shutdown_function().
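As an illustration (not taken from the original answer), a shutdown hook that closes the connection at the end of each request might look like this; the credentials are placeholders:

<?php
$mysqli = new mysqli('127.0.0.1', 'user', 'pass', 'mydb'); // placeholder credentials

register_shutdown_function(function () use ($mysqli) {
    // Runs during the shutdown phase of each request handled by this process.
    $mysqli->close();
});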
If you aren't using persistent database connections and aren't closing database connections, nothing bad will happen. As Colin Schoen explained, PHP will eventually close them during shutdown.
Still, I highly encourage you to close your connections because a correctly crafted program knows when the lifetime of an object is over and cleans itself up. It might give you exactly the milli- or nanosecond that you need to deliver something in time.
Simply always create self-contained objects that also clean up after themselves once they are finished with whatever they did.
I've never trusted FCGI to close my database connections for me. One habit I learned from a beginners' book many years ago is to always explicitly close my database connections.
Is saving sixteen keystrokes worth a possible memory and connection leak? As far as I'm concerned, it's cheap insurance.
If you have long running FastCGI processes, via e.g. php-fpm, you can gain performance by reusing your database connection inside each process and avoiding the cost of opening one.
Since you are most likely opening a connection at some point in your code, you should read up on how to have mysqli open a persistent connection and return it to you on subsequent requests managed by the same process.
http://php.net/manual/en/mysqli.quickstart.connections.php
http://php.net/manual/en/mysqli.persistconns.php
In this case you don't want to close the connection, else you are defeating the purpose of keeping it open. Also, be aware that each PHP process will use a separate connection so your database should allow for at least that number of connections to be opened simultaneously.
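As a hedged illustration of what the linked manual pages describe, the p: prefix is all that is needed; the host and credentials below are placeholders:

<?php
// Prefixing the host with 'p:' asks mysqli for a persistent connection;
// subsequent requests served by this same process get the cached connection back.
$mysqli = new mysqli('p:127.0.0.1', 'user', 'pass', 'mydb');

if ($mysqli->connect_errno) {
    die('Connect failed: ' . $mysqli->connect_error);
}
// Note: no $mysqli->close() here - closing would defeat the purpose.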
You're right in your way of thinking. It is still important to close the connection to prevent memory and connection leaks.
You can also lower the number of requests each process serves before it gets recycled, so that connections are closed and reopened more often.
For example: after every 2,500 script runs, stop, close the connection, and reopen it.
Also recommended: back up data frequently.
Hope I helped. Phantom

MySQL(i) "Too many connections" what to do?

I'm writing a huge MySQLi/PHP application and am experiencing problems with my database: it seems that there are too many connections open (250) after it has been running for a couple of hours.
I'm using a very fast external database server on my network. I'm reaching about 1,000 queries per second and the server does not seem impressed (its load is close to 0).
In my application the MySQLi link is closed by the destructor of the database class (this seems to work properly).
I'm using prepared statements and also have a couple of daemons running infinite while loops with some queries inside them (the loops are throttled with usleep() to prevent overuse, and I should note that mysqli_connect() is only called once, when the daemon starts).
But it seems that I never close my prepared statements with stmt->close(). Under the query stats in my database I can see that the number of stmt->close() calls equals the number of stmt->execute() calls. So could this be the problem, and when do I have to close my statements, for example? I don't know where to find a solution for this problem.
Software versions
PHP 5.5 under CentOS 6.5 with MySQL 5.6
Here are some things to try:
First: in your infinite-loop daemon processes, close your connections before sleeping and open them again upon waking (see the sketch after this list). Don't try to hold database connections open for a long time. There is all kinds of timeout logic in the client-server connection that may activate when you don't want it to, giving you unpredictable failures. Opening connections, using them, then closing them avoids that.
Second: try using so-called persistent connections. In mysqli you can prepend p: to your hostname to do this. Read this: http://www.php.net/manual/en/mysqli.persistconns.php
Third: it is good practice to close() your prepared statements explicitly when you're done with them, and to reset() them between uses if you reuse them. The mysqli destructor is supposed to do this automatically, but it's still good practice.
Fourth: you may want to configure your Apache or nginx server software to spawn fewer instances and threads. These instances and/or threads are serially reusable resources, and Linux's TCP stack does a good job of queueing up connect requests for them. This should reduce the number of connections MySQL needs to handle.
Fifth: do you need to change your MySQL configuration to allow more than 250 connections? If you're load-balancing your web traffic across lots of web servers, you may need to do that.
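As a sketch of the first and third points; the host, credentials, and queue query are invented:

<?php
while (true) {
    $mysqli = new mysqli('db.example.com', 'user', 'pass', 'mydb'); // placeholders

    $stmt = $mysqli->prepare('SELECT id FROM queue WHERE done = 0');
    $stmt->execute();
    $stmt->close();    // third point: release the statement explicitly

    $mysqli->close();  // first point: hold no idle connection while sleeping
    usleep(500000);    // sleep, then reconnect on the next iteration
}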
Congratulations on getting a lot of traffic! Now for some real fun. bwahahaha.

Recommended practice for multiple MySQL Connections with PHP

I am currently accessing a SOAP web service through a PHP script using SoapClient. My script calls multiple subscripts (~30 per second) that each send a request and then push the response to a MySQL database. My process attempts to emulate an "asynchronous" request/response mechanism.
In each subscript I connect to MySQL and close the connection once it is complete. With about 30 subscripts running per second, I'm running into an issue where I am maxing out my MySQL connections.
I don't want to increase the maximum number of connections, as I feel this is bad practice. Is there a better way to approach this problem? I am thinking I can somehow share a single MySQL connection between the main script and its subscripts.
If all subscripts run in sequence, in one thread, then you can connect to MySQL once and pass this connection to all of them.
If the subscripts run in parallel, then it depends on whether your MySQL library is thread-safe. If it is, you can pass one connection to all of them. But if not, you have no choice but one connection per script. This should be mentioned in the library's documentation.
If only some of the scripts need to run in parallel and the rest can wait a while, you can prepare a pool of a few connections (10 or so) and run only 10 scripts at once. When one script ends, you launch the next one and reuse the old connection.
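For the sequential case, a sketch of passing one connection to every subscript; runSubscript(), the responses table, and the credentials are invented for illustration:

<?php
function runSubscript(PDO $db, $payload)
{
    // Invented worker: one SOAP response pushed to the database.
    $stmt = $db->prepare('INSERT INTO responses (payload) VALUES (?)');
    $stmt->execute(array($payload));
}

$db = new PDO('mysql:host=127.0.0.1;dbname=mydb', 'user', 'pass'); // placeholders
foreach (array('a', 'b', 'c') as $payload) {  // stand-in for the ~30 subscript calls
    runSubscript($db, $payload);              // same connection reused each time
}
$db = null; // one connect/disconnect per batch instead of one per subscript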
You can try connection pooling. I am not sure whether this is possible in PHP and whether there are frameworks already available for it.
If it's not available, you can use a singleton class which holds a list of connections, and let that class close connections that have been idle for N seconds. That way your 30 subscripts can reuse connections which are not currently used by other scripts.
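A rough sketch of such a singleton; the class name is invented and the idle-timeout handling is omitted for brevity:

<?php
class ConnectionPool
{
    private static $instance = null;
    private $connections = array();

    public static function get()
    {
        if (self::$instance === null) {
            self::$instance = new self();
        }
        return self::$instance;
    }

    // Hand back an existing connection for these credentials, or open a new one.
    public function acquire($dsn, $user, $pass)
    {
        $key = $dsn . '|' . $user;
        if (!isset($this->connections[$key])) {
            $this->connections[$key] = new PDO($dsn, $user, $pass);
        }
        return $this->connections[$key];
    }
}

$db = ConnectionPool::get()->acquire('mysql:host=127.0.0.1;dbname=mydb', 'user', 'pass');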
Did you try a persistent connection (in mysqli, prefix the hostname with p:)? How do you spawn your subprocesses? Can't you open the database connection in the main process and pass it to the subprocesses? Any code examples of what you are doing?

What are the disadvantages of using persistent connection in PDO

In PDO, a connection can be made persistent using the PDO::ATTR_PERSISTENT attribute. According to the PHP manual:
Persistent connections are not closed at the end of the script, but are cached and re-used when another script requests a connection using the same credentials. The persistent connection cache allows you to avoid the overhead of establishing a new connection every time a script needs to talk to a database, resulting in a faster web application.
The manual also recommends not using persistent connections with the PDO ODBC driver, because doing so may hamper the ODBC connection pooling process.
So apparently there seem to be no drawbacks to using persistent connections in PDO, except in that last case. However, I would like to know if there are any other disadvantages of using this mechanism, i.e. a situation where it results in performance degradation or something like that.
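For reference, the mechanism in question is enabled with a single constructor attribute (it cannot be set later with setAttribute()); the DSN and credentials below are placeholders:

<?php
$db = new PDO('mysql:host=127.0.0.1;dbname=mydb', 'user', 'pass', array(
    PDO::ATTR_PERSISTENT => true, // connection survives the end of the script
));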
Please be sure to read this answer below, which details ways to mitigate the problems outlined here.
The same drawbacks exist with PDO as with any other PHP database interface that does persistent connections: if your script terminates unexpectedly in the middle of database operations, the next request that gets the leftover connection will pick up where the dead script left off. The connection is held open at the process-manager level (Apache for mod_php, the current FastCGI process if you're using FastCGI, etc.), not at the PHP level, and PHP doesn't tell the parent process to let the connection die when the script terminates abnormally.
If the dead script locked tables, those tables will remain locked until the connection dies or the next script that gets the connection unlocks the tables itself.
If the dead script was in the middle of a transaction, that can block a multitude of tables until the deadlock timer kicks in, and even then, the deadlock timer can kill the newer request instead of the older request that's causing the problem.
If the dead script was in the middle of a transaction, the next script that gets that connection also gets the transaction state. It's very possible (depending on your application design) that the next script might not actually ever try to commit the existing transaction, or will commit when it should not have, or roll back when it should not have.
This is only the tip of the iceberg. It can all be mitigated to an extent by always trying to clean up after a dirty connection on every single script request, but that can be a pain depending on the database. Unless you have identified creating database connections as the one thing that is a bottleneck in your script (this means you've done code profiling using xdebug and/or xhprof), you should not consider persistent connections as a solution to anything.
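That per-request clean-up might, for MySQL, look something like the following defensive sketch; it assumes $db is a freshly acquired persistent PDO connection and is by no means a complete list of state worth resetting:

<?php
$db = new PDO('mysql:host=127.0.0.1;dbname=mydb', 'user', 'pass', array(
    PDO::ATTR_PERSISTENT => true,
    PDO::ATTR_ERRMODE    => PDO::ERRMODE_EXCEPTION,
));

try {
    $db->exec('ROLLBACK');      // abandon any transaction a dead script left open
    $db->exec('UNLOCK TABLES'); // release any leftover explicit table locks
} catch (PDOException $e) {
    $db = null;                 // connection is unusable; discard it and reconnect
}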
Further, most modern databases (including PostgreSQL) have their own preferred ways of performing connection pooling that don't have the immediate drawbacks that plain vanilla PHP-based persistent connections do.
To clarify a point, we use persistent connections at my workplace, but not by choice. We were encountering weird connection behavior, where the initial connection from our app server to our database server was taking exactly three seconds, when it should have taken a fraction of a fraction of a second. We think it's a kernel bug. We gave up trying to troubleshoot it because it happened randomly and could not be reproduced on demand, and our outsourced IT didn't have the concrete ability to track it down.
Regardless, when the folks in the warehouse were processing a few hundred incoming parts, and each part was taking three and a half seconds instead of half a second, we had to take action before they kidnapped us all and made us help them. So, we turned a few bits on in our home-grown ERP/CRM/CMS monstrosity and experienced all of the horrors of persistent connections first-hand. It took us weeks to track down all the subtle little problems and bizarre behavior that happened seemingly at random. It turned out that those once-a-week fatal errors that our users diligently squeezed out of our app were leaving locked tables, abandoned transactions and other unfortunate wonky states.
This sob-story has a point: It broke things that we never expected to break, all in the name of performance. The tradeoff wasn't worth it, and we're eagerly awaiting the day we can switch back to normal connections without a riot from our users.
In response to Charles' problem above, from http://www.php.net/manual/en/mysqli.quickstart.connections.php:
A common complaint about persistent connections is that their state is not reset before reuse. For example, open and unfinished transactions are not automatically rolled back. But also, authorization changes which happened in the time between putting the connection into the pool and reusing it are not reflected. This may be seen as an unwanted side-effect. On the contrary, the name persistent may be understood as a promise that the state is persisted.
The mysqli extension supports both interpretations of a persistent connection: state persisted, and state reset before reuse. The default is reset. Before a persistent connection is reused, the mysqli extension implicitly calls mysqli_change_user() to reset the state. The persistent connection appears to the user as if it was just opened. No artifacts from previous usages are visible.
The mysqli_change_user() function is an expensive operation. For best performance, users may want to recompile the extension with the compile flag MYSQLI_NO_CHANGE_USER_ON_PCONNECT set.
It is left to the user to choose between safe behavior and best performance. Both are valid optimization goals. For ease of use, the safe behavior has been made the default at the expense of maximum performance.
Persistent connections are a good idea only when it takes a (relatively) long time to connect to your database. Nowadays that's almost never the case. The biggest drawback of persistent connections is that they limit the number of users you can have browsing your site: if MySQL is configured to allow only 10 concurrent connections, then when an 11th person tries to browse your site it won't work for them.
PDO does not manage the persistence; the MySQL driver does. It reuses a connection when (a) one is available and (b) the host/user/password/database match. If any of these differ, it will not reuse a connection. The net effect in the best case is that the connections you have will be started and stopped so often, because you have different users on the site, that making them persistent doesn't do any good.
The key thing to understand about persistent connections is that you should NOT use them in most web applications. They sound enticing but they are dangerous and pretty much useless.
I'm sure there are other threads on this, but a persistent connection is dangerous because it persists between requests. If, for example, you lock a table during a request and then fail to unlock it, that table is going to stay locked indefinitely. Persistent connections are also pretty much useless for 99% of your apps because you have no way of knowing if the same connection will be used between different requests. Each web thread will have its own set of persistent connections and you have no way of controlling which thread will handle which request.
The procedural mysql library of PHP has a feature whereby subsequent calls to mysql_connect will return the same link, rather than opening a different connection (as one might expect). This has nothing to do with persistent connections and is specific to the mysql library. PDO does not exhibit this behaviour.
In general you could use this as a rough ruleset:
YES, use persistent connections, if:
- There are only a few applications/users accessing the database, i.e. you will not end up with 200 open (but probably idle) connections because 200 different users share the same host.
- The database is running on another server that you are accessing over the network.
- An (one) application accesses the database very often.
NO, don't use persistent connections, if:
- Your application only needs to access the database 100 times an hour.
- You have many, many web servers accessing one database server.
Using persistent connections is considerably faster, especially if you are accessing the database over a network. It doesn't make as much difference if the database is running on the same machine, but it is still a little bit faster. However, as the name says, the connection is persistent, i.e. it stays open even while it is not being used.
The problem with that is that, in its "default configuration", MySQL only allows 1000 parallel "open channels"; after that, new connections are refused (you can tweak this setting). So if you have, say, 20 web servers with 100 clients on each of them, and every one of those clients has just one page access per hour, simple math shows that you'll need 2000 parallel connections to the database. That won't work.
Ergo: only use persistent connections for applications with lots of requests.
In my tests I had a connection time of over a second to localhost, and thus assumed I should use a persistent connection. Further tests showed that the problem was 'localhost':
Test results in seconds (measured by php microtime):
hosted web: connectDB: 0.0038912296295166
localhost: connectDB: 1.0214691162109 (over one second: do not use localhost!)
127.0.0.1: connectDB: 0.00097203254699707
Interestingly: The following code is just as fast as using 127.0.0.1:
$host = gethostbyname('localhost'); // resolve the name once, skipping the slow lookup
// echo "<p>$host</p>";
$db = new PDO("mysql:host=$host;dbname=" . DATABASE . ';charset=utf8', $username, $password,
    array(PDO::ATTR_EMULATE_PREPARES => false,
          PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION));
Persistent connections should give a sizable performance boost. I disagree with the assessment that you should "avoid" persistence.
It sounds like the complaints above are driven by someone using MyISAM tables and hacking in their own version of transactions by grabbing table locks. Well, of course you're going to deadlock! Use PDO's beginTransaction() and move your tables over to InnoDB.
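Along those lines, a hypothetical transfer between two rows of an invented InnoDB accounts table, using PDO transactions instead of manual table locks:

<?php
$db = new PDO('mysql:host=127.0.0.1;dbname=mydb', 'user', 'pass',
    array(PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION)); // placeholders

try {
    $db->beginTransaction();
    $db->exec('UPDATE accounts SET balance = balance - 100 WHERE id = 1');
    $db->exec('UPDATE accounts SET balance = balance + 100 WHERE id = 2');
    $db->commit();   // row-level locks are released here
} catch (PDOException $e) {
    $db->rollBack(); // InnoDB undoes the partial work
    throw $e;
}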
It seems to me that having a persistent connection would eat up more system resources. Maybe a trivial amount, but still...
The rationale for using persistent connections is obviously to reduce the number of connects, which are rather costly, even though they are considerably faster with MySQL than with most other databases.
The first problem with persistent connections: if you are creating thousands of connections per second, you may not keep each one open for long, but the operating system does. Under TCP/IP, ports can't be recycled instantly; they have to spend some time in the FIN/TIME_WAIT stage before they can be reused.
The second problem: using a lot of MySQL server connections. Many people simply don't realize that you can raise the max_connections variable and get well over the default 100 concurrent connections with MySQL; others were bitten by older Linux problems that made it impossible to have more than 1024 connections to MySQL.
Now let's talk about why persistent connections were disabled in the mysqli extension. Even though you can misuse persistent connections and get poor performance, that was not the main reason. The actual reason is that you can get a lot more issues with them.
Persistent connections were put into PHP during the days of MySQL 3.22/3.23, when MySQL was simple enough that you could recycle connections easily with no problems. In later versions, however, a number of problems arose: if you recycle a connection that has uncommitted transactions, you run into trouble. If you recycle connections with custom character-set configurations you're in danger again, and the same goes for possibly changed per-session variables.
One more trouble with persistent connections is that they don't really scale that well. If you have 5000 people connected, you need 5000 persistent connections. If you remove the requirement for persistence, you may be able to serve 10000 people with the same number of connections, because they are able to share those connections while not using them.
I was just wondering whether a partial solution would be to have a pool of use-once connections. You could spend time creating the connection pool when the system is at low usage, up to a limit, hand the connections out, and kill them when they have either completed or timed out. In the background you create new connections as others are taken. At worst this should only be as slow as creating a connection without the pool, assuming that establishing the link is the limiting factor.

Is closing the mysql connection important?

Is it crucial, efficiency-wise, to close MySQL connections, or do they close automatically once the PHP file has finished running?
From the documentation:
Note: The link to the server will be closed as soon as the execution of the script ends, unless it's closed earlier by explicitly calling mysql_close().
If your script has a fair amount of processing to perform after fetching the result, and it has retrieved the full result set, you definitely should close the connection. If you don't, there's a chance the MySQL server will reach its connection limit while the web server is under heavy usage. If you can't close the MySQL connection until near the end of the script, it's cleaner, though unnecessary, to do so explicitly.
I'm not certain how FastCGI affects things. One page claims that a build of PHP that supports FastCGI will create persistent connections, even for mysql_connect. This contradicts the documentation insofar as the connection is closed when the process, rather than the script, ends. Rather than testing it, I'm going to recommend using mysql_close(). Actually, I recommend using PDO, if it's available.
Is it crucial? Not so much.
Is it considered good practice to follow? Yes.
I don't see why you wouldn't want to close it.
When using something like CGI, it's completely unnecessary to close your MySQL connections since they close automatically at the end of script execution. When using persistent technologies like mod_perl and others, which maintain your connections between requests, it's important to keep track of connections, global variables, etc.
Basically, for persistent data, clean up after yourself. For trivial, non-persistent data, it'll all go away when the request finishes anyway. Either way, best practice is to always close your connections.
The connection gets closed as soon as the script completes execution, unless you've opened a persistent connection.
Ideally you should release a resource (here, a connection) as soon as you are done with it, unless there is a good chance that you will need it again very soon in the execution.
Connection pooling or using persistent connections (if that's what you meant) is a good idea if you are behind a single database server.
However, if there are more servers and you are load balancing, it might hurt the distribution of work: typically some clients run heavy queries while others run lighter ones, so if the same connection is used over and over, some servers will hit heavy load while others are underutilized.
Consider using smaller TTLs and a variable connection pool size.
Most CMSes close the MySQL connection at the end of the request, which is really meaningless, because PHP will do it anyway.
However, if you have a script where the connection is no longer needed, say, towards the middle of the script, and other heavy activities then take place, it's a good idea to explicitly close the connection; this will free some resources. A sketch of that situation follows.
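For instance, a minimal sketch, assuming an invented report_data table and placeholder credentials:

<?php
$db = new PDO('mysql:host=127.0.0.1;dbname=mydb', 'user', 'pass');
$rows = $db->query('SELECT * FROM report_data')->fetchAll(PDO::FETCH_ASSOC);
$db = null; // connection no longer needed; free it before the heavy work

foreach ($rows as $row) {
    // render, aggregate, stream to the client, etc. - no database access here
}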
Now, much has been said about the benefits of closing a connection, but nearly nothing about the benefits of not closing it. Essentially, not closing connections at the end of your scripts saves a little work. Imagine a web application receiving 100 pageviews per second: that is 100 invocations of mysqli_close every second, 100 unnecessary round trips to the database server to close connections that PHP would check for and close during shutdown anyway. From a performance perspective, this is pure overhead.
Note: the answer above assumes that you are not using persistent connections (persistent connections are not used in any of the major CMSs).
