Is it possible to maintain a single DB connection or object across separate cron jobs? The scripts are written in PHP.
I have multiple independent cron jobs that insert into/update the DB, each running every 15 minutes. Recently the maximum number of DB connections was exceeded.
Is there any service to maintain a DB connection, e.g. using Node.js or JavaScript?
Or is connection pooling possible using PHP?
I tried a persistent connection in PHP like this:
$link = mysqli_connect('p:localhost', 'fake_user', 'my_password', 'my_db');
But it is not working as expected: each cron job still generates a separate connection to MySQL.
There is no connection pooling feature in PHP, but to some extent you can achieve it by using mysql_pconnect. At the same time, you have to be very careful while using mysql_pconnect.
According to the PHP documentation for mysql_pconnect:
mysql_pconnect() acts very much like mysql_connect() with two major differences.
First, when connecting, the function would first try to find a (persistent) link that's already open with the same host, username, and password. If one is found, an identifier for it will be returned instead of opening a new connection.
Second, the connection to the SQL server will not be closed when the execution of the script ends. Instead, the link will remain open for future use (mysql_close() will not close links established by mysql_pconnect()).
This type of link is therefore called 'persistent'.
More info: see the mysql_pconnect entry in the PHP manual.
One more thing you can do is close all DB connections that have been idle for more than 30 seconds or a minute; this can be done through cron itself.
Secondly, open a single connection in your cron script and close it at the end, after the script has finished executing.
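The second point can be sketched like this. The memoizing helper stands in for a shared mysqli handle; the injectable factory callable and all names here are hypothetical, chosen only to show the "connect once per script" idea:

```php
<?php
// Sketch: open one DB connection per cron run and reuse it everywhere.
// db() memoizes whatever the factory returns, so the real factory
// (e.g. function () { return mysqli_connect(...); }) runs only once
// per script, no matter how many helpers call db().
function db(callable $factory) {
    static $conn = null;
    if ($conn === null) {
        $conn = $factory();   // first call: actually connect
    }
    return $conn;             // later calls: reuse the same handle
}

// At the end of the cron script you would close it explicitly,
// e.g. mysqli_close(db($factory));
```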
Related
Is it possible to have something like "keepalive" for a PHP-MySQL connection, to avoid opening many, many connections to the DB?
I need to log some events to the database. These events can be fired very often from a command-line script, so I think it would be better to keep a connection alive for some time instead of opening a new connection for every event.
Please note that I'm asking about a pure PHP script called by a shell command, not a web server script:
> php /var/scripts/log.php
Is this possible with standard PHP?
If you are using PHP 5.3 or later then, according to the php.net documentation, persistent connections are supported out of the box and cleanup is handled for you.
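A minimal keepalive sketch for such a CLI logger, assuming the connection object exposes ping() the way mysqli does; the $reconnect callable is a placeholder for the real mysqli_connect() call:

```php
<?php
// Reuse one connection across many log events; reopen it only when
// the server has dropped it. $conn is assumed to behave like mysqli
// (i.e. it has a ping() method); $reconnect wraps the real connect call.
function ensure_alive($conn, callable $reconnect) {
    if ($conn === null || !$conn->ping()) {
        return $reconnect();   // stale or missing: open a fresh connection
    }
    return $conn;              // still alive: keep reusing it
}
```

In the logging loop you would call `$conn = ensure_alive($conn, $reconnect);` before each insert, so a dropped connection is replaced transparently instead of failing the write.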
I have an issue concerning PDO persistent connections. Now, this may not be an actual problem, but I can't seem to find any post addressing this behavior.
I'm using the good old PDO in a persistent connection mode for my web app. Now I'm creating a new connection via new PDO(...).
When I run this script, a new connection (C#1) gets established, along with a MySQL process (P#1) to accommodate the persistent connection.
So I run the script again, creating a new connection (C#2) and expecting C#2 to reuse P#1 from the last connection. But every time I run the script, a new process appears while the last one stays alive (in sleep mode).
On my production server there are about 350 processes (in sleep) at any given time, from 3 different users (all users connect from the same Apache server).
The question: is this situation valid?
Found my answer:
They cause the child process to simply connect only once for its entire lifespan, instead of every time it processes a page that requires connecting to the SQL server. This means that for every child that opened a persistent connection will have its own open persistent connection to the server. For example, if you had 20 different child processes that ran a script that made a persistent connection to your SQL server, you'd have 20 different connections to the SQL server, one from each child.
http://php.net/manual/en/features.persistent-connections.php
A framework or an application connects to the database automatically, and you just use the database object for DB-related operations. In CMSes and frameworks, the term "connection pooling" is very popular, so you can opt for a PHP CMS or framework.
What is connection pooling?
Can someone describe this with an example?
What is the advantage of connection pooling?
Without connection pooling:
Every time you want to talk to the database, you have to open a connection, use it, then close it again.
With connection pooling:
The connections are kept open all the time (in a pool). When you want to talk to the database, you take an already-open connection that isn't currently in use, use it, then put it back.
This is more efficient than opening and closing them all the time.
Connection pooling generally refers to, well, having a pool of connections which is being reused. To contrast this with non-pooled connections: typically every program instance connects to the database by itself every time it is run. In a PHP program, you just have the line $db = new PDO(...), which connects to the database. If you have 100 simultaneous visitors, 100 separate instances of that script will be run simultaneously, and 100 separate connections will be established to the database simultaneously. This may be very inefficient and/or temporarily overwhelm the database server.
A connection pool works by establishing, say, 50 permanent connections to the database which stay open the whole time. A PHP script would then simply pick one of these open connections to talk to the database and drop it back into the pool when it's done. If suddenly more than 50 PHP scripts try to use connections from this pool at once, the first 50 will succeed, and the rest will have to wait in line until an unused connection becomes available. This is more efficient, because connections aren't opened and torn down all the time, and it doesn't overwhelm the database server when sudden spikes occur.
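To make that concrete, here is a minimal (single-threaded, non-blocking) pool sketch. The factory callable is injectable; in real use it would be something like `function () { return new mysqli(...); }`, and all names here are illustrative:

```php
<?php
// Minimal connection-pool sketch: reuse idle connections, open new
// ones lazily up to a cap, and fail fast when the cap is reached.
class ConnectionPool {
    private $factory;   // creates a new connection, e.g. new mysqli(...)
    private $max;       // hard cap on simultaneously open connections
    private $idle = []; // open connections not currently in use
    private $open = 0;  // total connections opened so far

    public function __construct(callable $factory, int $max = 50) {
        $this->factory = $factory;
        $this->max = $max;
    }

    public function acquire() {
        if ($this->idle) {
            return array_pop($this->idle);   // reuse an open connection
        }
        if ($this->open >= $this->max) {
            throw new RuntimeException('pool exhausted');
        }
        $this->open++;
        return ($this->factory)();           // open a new one lazily
    }

    public function release($conn) {
        $this->idle[] = $conn;               // put it back for the next caller
    }
}
```

A production pool would block or queue instead of throwing when exhausted, but the reuse mechanics are the same.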
I am currently accessing a SOAP Web Service through a PHP script using SoapClient. My script calls multiple subscripts (~30 a second) that each send a request and then pushes the response to a MySQL Database. My process attempts to emulate an "asynchronous" request/response mechanism.
In my subscript I connect to mysql and close the connection once it is complete. I'm running about 30 subscripts per second. I'm running into an issue where I am maxing out my MySQL connections.
I don't want to increase the maximum number of connections as I feel this is bad practice. Is there a better way to approach this problem? I am thinking I can somehow share a single mysql connection between the subscript and script.
If all subscripts are run in sequence, in one thread, then you can connect to MySQL once and pass this connection to all of them.
If the subscripts run in parallel, then it depends on whether your MySQL library is thread-safe. If it is, you can pass one connection to all of them; if not, you have no choice but one connection per script. This should be mentioned in its documentation.
If you need to run only some of the scripts in parallel and others can wait a while, then you can prepare a pool of a few connections (10 or so) and run only 10 scripts at once. When one script ends, you launch the next one and reuse the old connection.
You can try connection pooling. I am not sure whether this is possible in PHP or whether there are frameworks already available for it.
If it's not available, you can use a singleton class that contains a list of connections. Let the connections be closed by this class if they are idle for N seconds. This means your 30 subscripts can reuse connections that are not being used by other scripts.
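A sketch of that singleton idea, with timestamps passed in explicitly so the idle check is easy to follow; in real code $now would be time() and the unset() would be preceded by mysqli_close(). The class and method names are made up for illustration:

```php
<?php
// Singleton registry of named connections; connections idle longer
// than $maxIdle seconds are dropped by closeIdle().
class DbRegistry {
    private static $instance = null;
    private $conns = [];   // name => ['conn' => ..., 'lastUsed' => timestamp]

    private function __construct() {}

    public static function get(): self {
        return self::$instance ??= new self();
    }

    public function put(string $name, $conn, int $now) {
        $this->conns[$name] = ['conn' => $conn, 'lastUsed' => $now];
    }

    public function take(string $name, int $now) {
        if (!isset($this->conns[$name])) {
            return null;
        }
        $this->conns[$name]['lastUsed'] = $now;   // mark as recently used
        return $this->conns[$name]['conn'];
    }

    public function closeIdle(int $maxIdle, int $now): int {
        $closed = 0;
        foreach ($this->conns as $name => $entry) {
            if ($now - $entry['lastUsed'] > $maxIdle) {
                unset($this->conns[$name]);   // real code: mysqli_close() first
                $closed++;
            }
        }
        return $closed;
    }
}
```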
Did you try mysql_pconnect (or the p: host prefix with mysqli)? How do you spawn your subprocesses? Can't you open the database connection in the main process and pass it to the subprocesses? Any code examples of what you are doing?
Possible Duplicate:
close mysql connection important?
How important is it to close a MySQL connection on a website, and why?
Every database engine has a limit on the maximum number of simultaneous connections. So if you don't close connections, MySQL can run out of available ones (the default max_connections is 100). In addition, each connection you hold consumes server resources: memory, a thread to listen, and possibly open file handles.
CAVEAT
This does NOT hold true if the only things opening connections are web apps from ONE server and they use pooled connections. In that case, you don't risk opening more and more new connections (since every time your app needs one, it picks an available one from the pool), and closing and re-opening the pool's connections just wastes resources.
I don't know about other languages, but PHP closes the connection automatically at the end of script execution.
In the general case, you close the connection so that the SQL server doesn't have to wait for more commands from the website (which it won't receive, because the script has finished executing) and doesn't hit its connection quota (if it has one).
PHP will automatically close the connection when the script exits, so I wouldn't normally worry about it too much.
The database server will have a finite possible number of simultaneous connections, so on a very heavily loaded site it might help to free the connection as soon as possible.
If you don't properly close your connections, you could get into trouble: you will probably receive "Too many connections" errors.
Like so often, it depends.
What could happen:
The connection could stay open and you could run out of the maximum number of open connections
Your changes aren't committed
It depends on the programming language, middleware, ...
Because there is a limit to the number of socket connections that can be opened.
I'm not THE programmer, but I think it's better to close it as soon as you don't need it. Say you open the connection, execute the query, and for some reason working with that data takes a long time. If you haven't closed the connection yet (and the program/script hasn't ended either, so it hasn't been closed automatically), there is a busy resource doing nothing in MySQL. A connection is open but you're not using it, which means another client using the service will probably see the great "Error: Can't connect to the DB" message.
So, in my humble opinion, it's better to connect to the DB, gather the required data, close the connection, and then process the data. That at least leaves one connection available for future clients (which is really important in high-concurrency apps).
Nevertheless, that kind of behaviour depends on the programming language, but every one I know can retrieve the data into a variable that is connection-independent, so I can just close the connection and keep working with the data while leaving it available to another client.
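The fetch-first, close-early pattern described above can be sketched like this. The $connect and $query callables are placeholders standing in for mysqli_connect() and a mysqli_fetch_all() call, kept injectable only so the ordering (close before processing) is visible:

```php
<?php
// Fetch all rows into PHP memory, release the connection, then do
// the slow processing with no connection held open.
function fetch_then_process(callable $connect, callable $query, callable $process): array {
    $link = $connect();          // e.g. mysqli_connect('localhost', ...)
    $rows = $query($link);       // e.g. mysqli_fetch_all($result, MYSQLI_ASSOC)
    $link->close();              // connection freed before any slow work
    $out = [];
    foreach ($rows as $row) {
        $out[] = $process($row); // long-running work, connection already gone
    }
    return $out;
}
```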