Is it possible to have something like a "keepalive" for a PHP-MySQL connection, to avoid opening many, many connections to the database?
I need to log some events to a database. These events can be fired very often from a command-line script, so I think it would be better to keep a connection alive for some time instead of opening a new connection for every event.
Please note that I'm asking about a pure PHP script called by a shell command, not a web server script:
> php /var/scripts/log.php
Is this possible with standard PHP?
If you are using PHP 5.3 or later, then according to the php.net documentation the connection is persistent by default and cleanup is provided as well.
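Note that a persistent connection only survives within a single PHP process, so separate invocations of php /var/scripts/log.php will each open their own connection anyway. One way around that is a single long-running CLI process that keeps one connection open and logs many events over it. A minimal sketch, assuming a hypothetical events table and placeholder credentials:

```php
<?php
// Sketch: a long-running CLI logger that reuses one connection instead of
// reconnecting per event. Host, credentials and table name are placeholders.
$db = mysqli_connect('localhost', 'log_user', 'secret', 'logs');
if ($db === false) {
    fwrite(STDERR, "connect failed: " . mysqli_connect_error() . "\n");
    exit(1);
}
$stmt = mysqli_prepare($db, 'INSERT INTO events (message) VALUES (?)');

// Read events from STDIN (e.g. piped from another process) and log each
// one over the same open connection.
while (($line = fgets(STDIN)) !== false) {
    $msg = rtrim($line, "\n");
    mysqli_stmt_bind_param($stmt, 's', $msg);
    mysqli_stmt_execute($stmt);
}
mysqli_close($db);
```

Other processes can then pipe events into this script instead of each opening their own connection.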
Related
Is it possible to maintain a single DB connection or object across separate cron jobs? The script is written in PHP.
I have multiple independent cron jobs that insert into/update the DB, each running every 15 minutes. Recently the max DB connections were exceeded.
Is there any service to maintain a DB connection, for example using Node.js or JavaScript?
Or is connection pooling possible using PHP?
I tried a persistent connection in PHP like this:
$link = mysqli_connect('p:localhost', 'fake_user', 'my_password', 'my_db');
But it is not working as expected: each cron job generates a separate connection to MySQL.
There is no connection pooling feature in PHP, but you can achieve something close to it by using mysql_pconnect(). At the same time, you have to be very careful while using mysql_pconnect() (note also that the old mysql_* extension was removed in PHP 7; mysqli and PDO provide persistent connections instead).
According to the PHP manual for mysql_pconnect():
mysql_pconnect() acts very much like mysql_connect() with two major differences.
First, when connecting, the function would first try to find a (persistent) link that's already open with the same host, username, and password. If one is found, an identifier for it will be returned instead of opening a new connection.
Second, the connection to the SQL server will not be closed when the execution of the script ends. Instead, the link will remain open for future use (mysql_close() will not close links established by mysql_pconnect()).
This type of link is therefore called 'persistent'.
More info is in the PHP manual for mysql_pconnect().
One more thing you can do is close every DB connection that has been idle for more than 30 seconds or a minute; this can be done through cron itself.
Secondly, open a single connection in your cron script and close it at the end, once the script has finished executing.
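For the first suggestion, idle connections show up in MySQL's processlist and can be killed from a cron-driven script. A sketch, with placeholder credentials; the account needs the PROCESS privilege, plus SUPER (or CONNECTION_ADMIN on newer MySQL versions) to kill other users' connections:

```php
<?php
// Sketch: kill MySQL connections that have been idle ("Sleep") for more
// than 60 seconds. Run this from cron; credentials are placeholders.
$db = new mysqli('localhost', 'admin_user', 'secret');
$result = $db->query('SHOW FULL PROCESSLIST');
while ($row = $result->fetch_assoc()) {
    if ($row['Command'] === 'Sleep' && (int)$row['Time'] > 60) {
        // KILL takes the connection id from the processlist.
        $db->query('KILL ' . (int)$row['Id']);
    }
}
$db->close();
```

Be careful with the threshold: killing a connection that a slow but legitimate job is still holding will make that job fail.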
I am currently accessing a SOAP Web Service through a PHP script using SoapClient. My script calls multiple subscripts (~30 a second) that each send a request and then pushes the response to a MySQL Database. My process attempts to emulate an "asynchronous" request/response mechanism.
In my subscript I connect to mysql and close the connection once it is complete. I'm running about 30 subscripts per second. I'm running into an issue where I am maxing out my MySQL connections.
I don't want to increase the maximum number of connections as I feel this is bad practice. Is there a better way to approach this problem? I am thinking I can somehow share a single mysql connection between the subscript and script.
If all subscripts are run in sequence, in one thread, then you can connect to MySQL once and pass this connection to all of them.
If subscripts are run in parallel, then it depends on whether your MySQL library is thread-safe. If it is, then you can pass one connection to all of them. But if not, you have no choice but one connection per script. This information should be mentioned in its documentation.
If you need to run only some of the scripts in parallel and some can wait a while, then you can prepare a pool of a few connections (10 or so) and run only 10 scripts at once. When one script ends, you launch the next and reuse the old connection.
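The batching idea above can be sketched without any special library by letting the parent script keep at most N children alive with proc_open(); each child opens its own connection, so the total connection count is capped at the pool size. The script names here are placeholders:

```php
<?php
// Sketch: run the subscripts at most $poolSize at a time, so no more than
// $poolSize database connections exist at once.
$scripts  = ['sub1.php', 'sub2.php', 'sub3.php' /* ... */];
$poolSize = 10;
$running  = [];

while ($scripts || $running) {
    // Start new children while we have free slots.
    while ($scripts && count($running) < $poolSize) {
        $running[] = proc_open('php ' . escapeshellarg(array_shift($scripts)),
                               [], $pipes);
    }
    // Reap finished children to free their slots (and their DB connections).
    foreach ($running as $i => $proc) {
        $status = proc_get_status($proc);
        if (!$status['running']) {
            proc_close($proc);
            unset($running[$i]);
        }
    }
    usleep(100000); // poll every 100 ms
}
```

This caps concurrency rather than truly reusing connections, but it keeps the server's connection count bounded, which is usually what matters here.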
You can try connection pooling. I am not sure whether this is possible in PHP or whether there are frameworks already available for it.
If it's not available, you can use a Singleton class which contains a list of connections. Let the connections be closed by this class if they are idle for more than N seconds. This means that your 30 subscripts can reuse connections which are not being used by other scripts.
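A minimal sketch of such a singleton (holding one shared handle rather than a whole list, for brevity); the SQLite in-memory DSN is only for illustration, swap in your MySQL DSN and credentials:

```php
<?php
// Sketch of a singleton connection holder, so subscripts running in the
// same process share one PDO handle instead of each opening their own.
class Db
{
    private static ?PDO $instance = null;

    public static function get(): PDO
    {
        if (self::$instance === null) {
            // Placeholder DSN: use e.g. 'mysql:host=localhost;dbname=app'
            // with a username and password in real code.
            self::$instance = new PDO('sqlite::memory:');
            self::$instance->setAttribute(PDO::ATTR_ERRMODE,
                                          PDO::ERRMODE_EXCEPTION);
        }
        return self::$instance;
    }

    // Explicitly drop the handle, e.g. after an idle timeout.
    public static function close(): void
    {
        self::$instance = null;
    }
}

// Every caller gets the same underlying connection.
$a = Db::get();
$b = Db::get();
var_dump($a === $b); // bool(true)
```

Note this only shares a connection within one process; separate cron-spawned processes still each get their own.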
Did you try mysqli_connect() with the persistent 'p:' host prefix? How do you spawn your subprocesses? Can't you open the database connection in the main process and pass it to the subprocesses? Any code examples of what you are doing?
I regularly have the following error:
PHP Fatal error: Uncaught exception 'PDOException' with message 'SQLSTATE[HY000] [1129] Host 'MY SERVER' is blocked because of many connection errors; unblock with 'mysqladmin flush-hosts'
It is easy to work around the problem with a regular (e.g. crontab) mysqladmin flush-hosts command or by increasing the max_connect_errors system variable, as written here.
BUT! What are "many successive interrupted connection requests", and why is this happening?
I'd rather prevent the problem upstream than keep correcting the blocking after the fact.
MySQL version : 5.5.12. I'm using Zend Framework 1.11.10 and Doctrine 2.1.6.
There are no mysql_close() nor mysqli_close() in my PHP Code.
max_connect_errors has the default value, 10, and I don't want to increase it yet; I want to understand why I get the errors. I run a cron job every 5 minutes which does a mysqladmin flush-hosts command.
This response is by design, as a security measure, and is the result of reaching MySQL's max_connect_errors value. Here's a link Oracle provides which details most of the possible causes and solutions.
Ultimately this means that there are so many successive connection failures that MySQL stops responding to connection attempts.
> I use a cron, every 5 minutes, which does a mysqladmin flush-hosts command.
As you are reaching this limit so quickly, there are only a few likely culprits:
Server is not correctly configured to use PDO.
Running code includes very frequently creating new connections.
Results in quickly reaching the max_connections value, causing all subsequent connection attempts to fail... thus quickly reaching the max_connection_errors limit.
Code is hitting an infinite loop, or cascading failure.
Obvious possibility, but must be mentioned.
(i.e: pageA calls pageB and pageC, and pageC calls PageA)
PDO is running fine, but some scripts take a long time to run, or never end.
Easiest way to catch this is turn down the max_execution_time.
It is likely that whatever the case, this will be difficult to track down.
Log a stack-trace of every mysql connection attempt to find what code is causing this.
Check the mysql.err logfile
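The stack-trace logging can be done by routing all connections through a thin PDO subclass, a sketch of which follows; the log path here is a placeholder:

```php
<?php
// Sketch: subclass PDO so every connection attempt logs a stack trace,
// making it possible to find which code path opens connections.
class TracingPDO extends PDO
{
    public function __construct($dsn, $user = null, $pass = null,
                                $options = null)
    {
        // Append the current backtrace to a log file (placeholder path).
        error_log(date('c') . " PDO connect\n" .
                  (new Exception())->getTraceAsString() . "\n",
                  3, sys_get_temp_dir() . '/pdo_connects.log');
        parent::__construct($dsn, $user, $pass, $options ?? []);
    }
}
```

Use new TracingPDO(...) everywhere new PDO(...) appears (or bind it through your DI/Doctrine connection configuration), then inspect the log to see which pages connect most often.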
While PDO does not require explicitly closing MySQL connections, for cases like this there are a few practices that can prevent such server-admin hunts.
Always explicitly close mysql connections.
Build a simple Class to handle all connections. Open, return array, close.
The only time you need to keep a connection open is for cursors.
Always define connection arguments in one and only one file included everywhere it is needed.
Never increase max_execution_time unless you know you need it and you know the server can handle it. IF you need it, explicitly increase the value only for the script that needs it. php.net/manual/en/function.set-time-limit.php
If you increase max_execution_time, increase max_connections.
dev.mysql.com/doc/refman/5.0/en/cursors.html
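The "open, return array, close" helper from point 2 above could look like the following sketch; the class and parameter names are illustrative:

```php
<?php
// Sketch: every call opens a fresh connection, fetches all rows, and
// drops the handle, so no script leaves a connection dangling.
class ShortLivedDb
{
    public static function fetchAll(string $dsn, string $sql,
                                    array $params = [],
                                    ?string $user = null,
                                    ?string $pass = null): array
    {
        $pdo = new PDO($dsn, $user, $pass,
                       [PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION]);
        $stmt = $pdo->prepare($sql);
        $stmt->execute($params);
        $rows = $stmt->fetchAll(PDO::FETCH_ASSOC);
        $stmt = null;
        $pdo  = null; // dropping the last reference closes the connection
        return $rows;
    }
}
```

Example use with a MySQL DSN (placeholders): ShortLivedDb::fetchAll('mysql:host=localhost;dbname=app', 'SELECT * FROM t WHERE id = ?', [$id], 'user', 'secret'). The trade-off is a connect per call, which is exactly why it should not be used inside tight loops.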
It means that mysqld has received many connection requests from the given host that were interrupted in the middle. Check out this link from the documentation for more info.
PHP supports the use of persistent SQLite connections. However, a problem arises in trying to run maintenance scripts (like a file backup), with such a connection opened. These scripts are likely to be run without a server restart every now and then, in periods of low traffic.
How can I check if there is currently an SQLite persistent connection, without opening / calling one (hence creating the connection)?
See: php SQLite persistent connection
If you have access to shell_exec() or exec(), you can run a shell command to check whether an SQLite process is running, using something like top, or maybe a command like lsof -i -n -P | grep sqlite, assuming sqlite is the name of the process.
If you use PDO you can just check whether the handler is null before you run the maintenance scripts. That way you won't be creating a connection as you would with sqlite_popen().
Create a persistent connection with PDO if you want: $handler = new PDO('sqlite:test.db', NULL, NULL, array(PDO::ATTR_PERSISTENT => TRUE));
Then you can just close the connection before the maintenance script is called, assuming it is on some sort of schedule:
if (!is_null($handler)) {
    $handler = null;
    // run maintenance script, recreate connection once finished
}
To quote the manual at https://www.php.net/manual/en/features.persistent-connections.php:
'When a persistent connection is requested, PHP checks if there's already an identical persistent connection (that remained open from earlier) - and if it exists, it uses it. If it does not exist, it creates the link. An 'identical' connection is a connection that was opened to the same host, with the same username and the same password (where applicable).'
That seems to imply that there is no need to check prior to trying to connect.
Note that the article goes into a lot of detail about how persistent connections behave with web servers, particularly about how subsequent processes attempting to reuse a connection may be blocked by state that a fault in an earlier process left behind, implying that persistent connections may not be reliable in a web server environment, given that web requests can terminate at any time.
In particular, it recommends that persistent connections are not used for scripts that use table locks or transactions.
Usually closing a connection is simply done with oci_close($connection); or, at worst, the connection goes away when the PHP script ends.
In my case however, I face a different behavior.
If I access my application, which uses PHP 5.2.8, Apache 2.2.11 and oci8 1.2.5, the connection is kept open for several minutes.
At least it seems to be: if I run netstat -b, I see that the httpd.exe process remains in the ESTABLISHED state towards the database's address for a while (a few minutes).
Could someone enlighten me on that behavior?
P.S. I do not use persistent connections.
P.P.S. As asked here is the code used to connect and close (this is a legacy application):
connection: a function is called whose connection-related code is $connection = @ocilogon("$username", "$password", "$database");
closing: the responsibility of every page we develop, but typically it would be oci_close($connection)
From the docs on oci_connect() here (ocilogon() calls the same function):
http://www.php.net/manual/en/oci8.connection.php
It implies that you can close a connection explicitly via oci_close(), or that it is closed automatically at the end of the page being rendered. I would imagine that if you aren't closing explicitly, it might take some time to time out. Is it possible that some of the pages without oci_close() calls are causing the open connections you see?
If you create a standalone page with only an oci_connect() and an oci_close() and then execute it multiple times, do you see the connection count rise directly with how many times you executed the page and stay up before eventually coming back down?
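Such a standalone test page could be as small as this sketch (connection details are placeholders); run it repeatedly and watch the session count or netstat while it executes:

```php
<?php
// Sketch: connect and close immediately, then check whether the server
// still shows the session. Credentials and connect string are placeholders.
$conn = oci_connect('username', 'password', '//dbhost:1521/ORCL');
if ($conn === false) {
    $e = oci_error();
    exit('connect failed: ' . $e['message']);
}
oci_close($conn);
echo "connected and closed\n";
```

If connections from this page also linger in ESTABLISHED, the behavior is on the driver/server side rather than in the legacy pages.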
Also, what indicator are you looking at to see that the connection is remaining open?
If you were on higher versions, it might be Oracle 11g Database Resident Connection Pooling, but that doesn't exist on the versions you are currently using.