I have an unusual case: my website runs two totally different PHP processes through a sandbox. The normal website runs through FastCGI, and in the middle of handling a request that FastCGI process executes one sandboxed script through the CLI. Both of those processes require a MySQL connection, and I was wondering if there is a way to share that connection: while the sandboxed script is running, the FastCGI process is just waiting for it to finish, so there would be no concurrency.
This would make much better use of my hardware, since I would only need one MySQL connection per client instead of the two connections I need at the moment.
I could always code some kind of multiplexing proxy to this effect, but is there any run-of-the-mill solution? I would really appreciate it.
Regards.
Use database connection-pooling middleware or a proxy, for example:
sqlrelay
mysql-proxy
proxysql
It might be worth having a look at Persistent Connections for this.
Basically, the connection is automatically re-used if it already exists. Note that this refers to the resource itself; it does not persist any state from one process to the other.
Before deciding to use Persistent Connections, you should be aware of the pitfalls of using them incorrectly. See this question.
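For reference, a minimal sketch of opening a persistent connection with PDO (the hostname, credentials, and database name are hypothetical placeholders):

```php
<?php
// Persistent connection: PHP looks up an idle connection opened with the
// same credentials and reuses it instead of opening a new one.
// Host, user, password and database name are hypothetical placeholders.
$pdo = new PDO(
    'mysql:host=localhost;dbname=mydb;charset=utf8mb4',
    'dbuser',
    'dbpassword',
    [PDO::ATTR_PERSISTENT => true]
);
```

Note that the persistent pool lives inside each PHP process, so this helps the FastCGI side, but the sandboxed CLI script is a separate, short-lived process and will still open its own connection.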
I read some threads here about PDO::lastInsertId() and its safety. It returns the last inserted ID from the current connection, so it's safe for a multi-user app as long as there is one connection per user/script run.
I'm just wondering whether it's possible to get an invalid ID when there is only one DB connection per long script (lots of SQL requests) on a multicore server system. The question is more theoretical than practical.
I think a PHP script runs linearly, but maybe I'm wrong.
PDO itself is not thread safe. You must provide your own thread safety if you use PDO connections from a threaded application.
The best, and in my opinion the only maintainable, way to do this is to make your connections thread-private.
If you try to use one connection from more than one thread, your MySQL server will probably throw Packet Out of Order errors.
The Last Insert ID functionality ensures multiple connections to MySQL get their own ID values even if multiple connections do insert operations to the same table.
For a typical PHP web application, a multicore server allows it to handle more web-browser requests. A multicore server doesn't make PHP programs multithreaded. Each PHP program, handling each web request, allocates its own PDO connections. As you put it, each PHP script run is "linear". The multiple cores allow multiple scripts to run at the same time, but independently.
Last Insert ID is designed to be safe for that scenario.
Under some circumstances a PHP program may leave the MySQL connection open when it's done, so another PHP program can use it. This is called a persistent connection, or connection pooling. It helps performance when a web site has many users connecting to it. The generic term for a reusable connection is "serially reusable resource."
Some php programs may use threads. In this case the program must avoid allowing more than one thread to use the same connection at the same time, or get the dreaded Packet Out of Order errors.
(Virtually all machines have multiple cores.)
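As a minimal sketch of why this is safe (table and column names are hypothetical):

```php
<?php
// LAST_INSERT_ID() is tracked by MySQL per connection, so this value is
// unaffected by inserts happening concurrently on other connections.
$pdo = new PDO('mysql:host=localhost;dbname=test', 'user', 'pass');

$pdo->exec("INSERT INTO orders (customer) VALUES ('alice')");
$id = $pdo->lastInsertId(); // the ID of *this* connection's insert

echo "Last insert id on this connection: $id\n";
```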
I run PHP via FastCGI - that is, my web server spawns several PHP processes and they keep running for something like 10,000 requests each until they get recycled.
My question is: if I have a $mysqli->connect() at the top of my PHP script, do I need to call $mysqli->close() when I'm about to finish the script?
Since the PHP processes stay alive for a long time, I'd imagine each $mysqli->connect() would leak one connection, because the process keeps running and no one closes the connection.
Am I right in my thinking or not? Should I call $mysqli->close()?
When PHP exits it closes the database connections gracefully.
The only reason to use the close method is when you want to terminate a database connection that you'll not use anymore while the script still has lots of work left to do, like processing and streaming the data. If that remaining work is quick, you can forget about the close statement.
Putting it at the end of a script is redundant; there is no performance or memory gain.
In a bit more detail, specifically about FastCGI:
FastCGI keeps PHP processes running between requests. It is good at reducing CPU usage, leveraging your server's available RAM to keep PHP scripts in memory instead of having to start up a separate PHP process for each and every request.
FastCGI will start a master process and as many forks of that master process as you have defined, and yes, those forked processes may live for a long time. In effect this means a process doesn't have to go through the complete PHP start-up each time it needs to execute a script. But it's not as if your scripts are now running all the time: there is still a start-up and shutdown phase each time a script is executed. At this point things like global variables (e.g. $_POST and $_GET) are populated, etc. You can run functions each time your request shuts down via register_shutdown_function(), as sketched below.
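A hedged sketch of that shutdown hook (the credentials are placeholders):

```php
<?php
$mysqli = new mysqli('localhost', 'dbuser', 'dbpassword', 'mydb');

// Runs at the end of each request handled by this FastCGI process,
// even though the process itself stays alive for the next request.
register_shutdown_function(function () use ($mysqli) {
    $mysqli->close();
});
```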
If you aren't using persistent database connections and aren't closing database connections, nothing bad will happen. As Colin Schoen explained, PHP will eventually close them during shutdown.
Still, I highly encourage you to close your connections because a correctly crafted program knows when the lifetime of an object is over and cleans itself up. It might give you exactly the milli- or nanosecond that you need to deliver something in time.
Simply always create self-contained objects that also clean up after themselves once they have finished whatever they were doing.
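A rough sketch of such a self-contained object (the class and method names are hypothetical):

```php
<?php
// Wrapper that cleans up after itself: the connection is closed in the
// destructor when the object's lifetime ends, rather than relying on
// PHP's implicit shutdown cleanup.
class Database
{
    private mysqli $conn;

    public function __construct(string $host, string $user, string $pass, string $db)
    {
        $this->conn = new mysqli($host, $user, $pass, $db);
    }

    public function query(string $sql): mysqli_result|bool
    {
        return $this->conn->query($sql);
    }

    public function __destruct()
    {
        $this->conn->close();
    }
}
```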
I've never trusted FCGI to close my database connections for me. One habit I learned in a beginners book many years ago is to always explicitly close my database connections.
Are the sixteen keystrokes of typing it not worth avoiding a possible memory and connection leak? As far as I'm concerned, it's cheap insurance.
If you have long running FastCGI processes, via e.g. php-fpm, you can gain performance by reusing your database connection inside each process and avoiding the cost of opening one.
Since you are most likely opening a connection at some point in your code, you should read up on how to have mysqli open a persistent connection and return it to you on subsequent requests managed by the same process.
http://php.net/manual/en/mysqli.quickstart.connections.php
http://php.net/manual/en/mysqli.persistconns.php
In this case you don't want to close the connection, or else you defeat the purpose of keeping it open. Also, be aware that each PHP process will use a separate connection, so your database should allow at least that number of connections to be open simultaneously.
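For illustration, a minimal sketch of a persistent mysqli connection (the credentials are placeholders):

```php
<?php
// Prefixing the hostname with "p:" makes mysqli keep the connection open
// inside this php-fpm worker and reuse it on subsequent requests.
// MySQL's max_connections must cover at least pm.max_children workers.
$mysqli = new mysqli('p:localhost', 'dbuser', 'dbpassword', 'mydb');

// Deliberately no $mysqli->close() here: closing would defeat the
// purpose of keeping the connection open for the next request.
```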
You're right in your way of thinking. It is still important to close connections, to prevent memory/data leaks and corruption.
You can also lower the number of requests a process serves before it gets recycled, stopping to close the connection.
For example: every 2,500 script runs, stop, close the connection, and reopen it.
Also recommended: back up your data frequently.
Hope I helped. Phantom
I am currently accessing a SOAP web service through a PHP script using SoapClient. My script calls multiple subscripts (~30 a second) that each send a request and then push the response to a MySQL database. My process attempts to emulate an "asynchronous" request/response mechanism.
In each subscript I connect to MySQL and close the connection once it is complete. Running about 30 subscripts per second, I'm hitting an issue where I am maxing out my MySQL connections.
I don't want to increase the maximum number of connections as I feel this is bad practice. Is there a better way to approach this problem? I am thinking I can somehow share a single mysql connection between the subscript and script.
If all subscripts are run in sequence, in one thread, then you can connect to MySQL once and pass this connection to all of them.
If the subscripts run in parallel, then it depends on whether your MySQL library is thread safe. If it is, you can pass one connection to all of them; if not, you have no choice but one connection per script. This should be mentioned in its documentation.
If only some of the scripts need to run in parallel and some can wait a while, then you can prepare a pool of a few connections (10 or so) and run only 10 scripts at once. When one script ends, you launch the next one and reuse the old connection.
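For the sequential case, a minimal sketch (the subscript callables and credentials are hypothetical):

```php
<?php
// Open one connection and hand it to every subscript in turn,
// instead of each subscript opening and closing its own.
$mysqli = new mysqli('localhost', 'dbuser', 'dbpassword', 'mydb');

$subscripts = [
    function (mysqli $db) { $db->query("INSERT INTO log (msg) VALUES ('a')"); },
    function (mysqli $db) { $db->query("INSERT INTO log (msg) VALUES ('b')"); },
];

foreach ($subscripts as $subscript) {
    $subscript($mysqli); // every subscript reuses the shared connection
}

$mysqli->close();
```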
You can try connection pooling. I am not sure whether this is possible in php and whether there are frameworks already available for that.
If it's not available, you can use a singleton class which contains a list of connections. Let this class close connections that have been idle for N seconds. That way your 30 subscripts can reuse connections which are not currently used by other scripts.
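A rough, non-production sketch of that singleton (all names are hypothetical, and note it only helps if the subscripts run inside the same PHP process):

```php
<?php
// Hands out idle connections for reuse; closes ones idle for too long.
class ConnectionPool
{
    private static ?ConnectionPool $instance = null;
    /** @var array<int, array{conn: mysqli, lastUsed: int, busy: bool}> */
    private array $pool = [];
    private int $maxIdleSeconds = 30;

    public static function get(): ConnectionPool
    {
        return self::$instance ??= new self();
    }

    public function acquire(): mysqli
    {
        $this->closeIdle();
        foreach ($this->pool as &$slot) {
            if (!$slot['busy']) {          // reuse a free connection
                $slot['busy'] = true;
                return $slot['conn'];
            }
        }
        unset($slot);
        $conn = new mysqli('localhost', 'dbuser', 'dbpassword', 'mydb');
        $this->pool[] = ['conn' => $conn, 'lastUsed' => time(), 'busy' => true];
        return $conn;
    }

    public function release(mysqli $conn): void
    {
        foreach ($this->pool as &$slot) {
            if ($slot['conn'] === $conn) { // mark it free for the next caller
                $slot['busy'] = false;
                $slot['lastUsed'] = time();
            }
        }
        unset($slot);
    }

    private function closeIdle(): void
    {
        foreach ($this->pool as $i => $slot) {
            if (!$slot['busy'] && time() - $slot['lastUsed'] > $this->maxIdleSeconds) {
                $slot['conn']->close();
                unset($this->pool[$i]);
            }
        }
    }
}
```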
Did you try a persistent connection (what mysql_pconnect did for the old mysql extension)? How do you spawn your sub-processes? Can't you open the database connection in the main process and pass it to the sub-processes? Any code examples of what you are doing?
Apache/PHP/MySQL persistent connections have such a bad reputation because Apache handles each request in a child PHP process, each holding one persistent connection. When visitor numbers scale, MySQL reaches its max-connections limit from all the Apache/PHP child processes, each with one persistent connection. There are also the issues of temporary tables, user variables, charsets, transactions, and last-insert-id.
In my situation, we don't have to deal with the latter issues, because we are only READING from MySQL. There are no updates to the db: these are handled by another set of scripts on a different machine. The scripts we want to have persistent connections are the server-end of AJAX connections, returning JSON data.
Each pageview has 5 AJAX requests, so 6 different php child processes are started on the server for each page requested (5 ajax, 1 html). Ideally, I could have ONLY 1 connection from the php/ajax server to MySQL. This connection would be shared by all php child processes.
So, how can I do this?
Should I use web server software other than Apache? nginx?
cheers
UPDATE: In this situation, the right way to connect to the MySQL server (http://bit.ly/15rBDP), sketched below:
- using the MySQL native driver (mysqlnd) and mysqli
- each client reconnecting using the mysqli_change_user() function
- using MYSQLI_NO_CHANGE_USER_ON_PCONNECT in the PHP build config ('cos we don't need the cleanup).
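A sketch of that recipe, assuming PHP was built with mysqlnd (the credentials are placeholders):

```php
<?php
// Persistent mysqli connection via the "p:" host prefix. On reuse,
// mysqlnd normally calls mysqli_change_user() internally to reset session
// state; building PHP with MYSQLI_NO_CHANGE_USER_ON_PCONNECT skips that
// cleanup step, which is acceptable here since the scripts only read.
$mysqli = new mysqli('p:localhost', 'dbuser', 'dbpassword', 'mydb');
$result = $mysqli->query('SELECT 1');
```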
UPDATE 2:
To clarify my question, what I want is: ALL PHP client processes connecting through only ONE persistent connection. This connection is defined, run, and stored somehow (that is my question), but all new PHP client processes know it and can use it. The problem with Apache/PHP is that each PHP client process holds one connection. If I serve 20,000 pages per minute, there will be 20,000 persistent connections. I want the 20,000 PHP child processes connecting to one single, central, persistent connection to MySQL.
You do realize that having only one (persistent) connection for all your requests effectively serializes all requests to your server? So request C has to wait for request B to finish, which has to wait for request A to finish, etc.
So having one connection turns your multi-threaded/multi-process webserver into a single-threaded application.
Read the accepted answer on this post : Which is better: mysql_connect or mysql_pconnect
Simply, using mysql persistent connections may be good or bad, depending on the hardware resources that you have as well as the way you code your applications.
A native MySQL driver for PHP (mysqlnd) is included in PHP 5.3 and has improved support for persistent connections.
http://dev.mysql.com/downloads/connector/php-mysqlnd/
http://blog.ulf-wendel.de/?p=211
My understanding of PHP's p* connections is that they keep a connection persistent between page loads to the service (be it memcache, a socket, etc.). But are these connections thread safe? What happens when two pages try to access the same connection at the same time?
In the typical unix deployment, PHP is installed as a module that runs inside the apache web server, which in turn is configured to dispatch HTTP requests to one of a number of spawned children.
For the sake of efficiency, apache will often spawn these processes ahead of time (pre-forking them) and maintain them, so that they can dispatch more than one request, and save the overhead of starting up a process for every request that comes in.
PHP works on the principle of starting every request with a clean environment; no script variables persist between page loads. (Contrast this with mod_perl or python, where applications often manifest subtle bugs due to unexpected state hangovers).
This means that the typical resource allocated by a PHP script, be it an image handle for GD or a database connection, will be released at the end of a request.
Some resources, particularly Oracle database connections, have quite a high cost to establish, so it is desirable to somehow cache that connection between dispatched web requests.
Enter persistent resources.
The way these work is that any given apache child process may maintain a resource beyond the scope of a request by registering it in a "persistent list" of resources. The persistent list is not cleaned up at the end of the request (known as RSHUTDOWN internally). When you use a pconnect function, it will look up the persistent list entry for a given set of unique credentials and return that, if it exists, or establish a new connection with those credentials.
If you have configured apache to maintain 200 child processes, you should expect to see that many connections established from your web server to your database machine.
If you have many web servers and a single database machine, you may end up loading your database machine much more than you anticipated.
With a threaded SAPI, the persistent list is maintained per thread, so it should be thread safe and have similar benefits. But the usual caveat about PHP not being recommended for threaded SAPIs applies: while PHP is itself thread safe, many of the libraries it uses may have thread-safety problems of their own and cause you a good number of headaches.
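As a small (hypothetical) probe of this behaviour: request the script below repeatedly, and requests served by the same child process report the same MySQL connection id, while a freshly spawned child gets a new one.

```php
<?php
// Persistent connection plus the server-side id of the session it maps to.
$mysqli = new mysqli('p:localhost', 'dbuser', 'dbpassword', 'mydb');
$row = $mysqli->query('SELECT CONNECTION_ID()')->fetch_row();
echo 'pid=' . getmypid() . ' mysql_connection_id=' . $row[0] . "\n";
```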
The manual's page on Persistent Database Connections might give you some information about persistent connections.
It doesn't say anything specific about thread safety, though; I've pretty much never seen anything about that anywhere, as far as I remember, so I suppose it "just works OK". My guess would be that a connection is re-used only if it's not already being used by another thread at the same time, but that's just a (logical) wild guess...
Generally speaking, PHP will make one persistent connection per process or thread running on the webserver. Because of this, a process or thread will not access the connection of another process or thread.
Instead, when you make a database connection PHP will check to see if one is already open (in the process or thread that is handling the page request) and if it is then it will use it, otherwise it will just initialize a new one.
So to answer your question, they aren't necessarily thread safe but because of how they operate there isn't a situation where two threads or processes will access the same connection.
Generally speaking, when a PHP script requests a persistent connection, PHP will look for one in the connection pool with the same connection parameters.
If one is found that is NOT being used, it is given to the script, and returned to the pool at the end of the script.