The situation is: I have one Debian server running LAMP, with one virtual host serving one website. My MySQL server has only one user, which is used by that website.
In this case would I benefit from using a persistent connection?
The PHP documentation seems to advise against persistent connections in any case.
Thanks
Edit: Yes, the MySQL server is on the same machine.
There's a discussion here http://groups.google.com/group/comp.databases.mysql/browse_thread/thread/4ae68befe1b488e7/e843f0b9e59ad710?#e843f0b9e59ad710 :
"No, it is not (better). Contrary, using mysql_pconnect() is considered harmful, as it tends to hog the MySQL server with idle connections."
If you connect via 'localhost', the connection will automatically be established via the MySQL socket, which is really cheap anyways.
(Groups link taken from MySQL Persistent Connections)
You can get some performance benefit from using a persistent connection, but if the MySQL server is on the same machine and you're not experiencing problems, it is probably not worth it. It is too easy to accidentally leave connections open, and the actual performance benefit only becomes noticeable at high volumes.
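For reference, a minimal sketch of both options using mysqli (credentials and database name are placeholders); prefixing the host with p: is how mysqli requests a persistent connection:

```php
<?php
// Ordinary connection: 'localhost' goes through the local MySQL socket,
// so connection setup is already very cheap.
$db = new mysqli('localhost', 'site_user', 'secret', 'site_db');

// Persistent connection: the "p:" prefix asks mysqli to keep the link
// open and reuse it for later requests handled by the same PHP process.
$pdb = new mysqli('p:localhost', 'site_user', 'secret', 'site_db');

$result = $db->query('SELECT 1');
var_dump($result->fetch_row());
```

For the single-site, same-machine setup described in the question, the ordinary connection over the socket is almost certainly fast enough.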
Related
I'm using a self-made customer system written in PHP that runs against a local MySQL database.
Now I have a second computer at a different location which has to use this database too, so I put the MySQL database on a server reachable over the Internet.
My problem now is that the first computer often has problems with its Internet connection, and then the program does not work. But it has to work every time!
I do not know how I should handle this problem.
A local database and one on the Internet, but how would I merge them?
Should I have a local DB per computer and merge them together into one?
I also want to move the framework behind this system to Symfony2, so is there a way to solve this problem with that framework (e.g. Doctrine)?
Thanks for your help!
Update:
My limitation is the Internet connection on the first computer, which cannot be eliminated.
If you really have the limitations of (1) not being able to move the database off of the machine with the bad connection and (2) not being able to fix the bad connection, you are going to have to keep some sort of local instance on the second machine.
I would try to set up master-master replication from the first machine with the bad connection to the second machine. I'm not sure how reliable this will be, considering the replication will be failing often due to the first machine's bad connection. The problem may be exacerbated if one or both machines are using old versions of MySQL. MySQL 5.5, for example, can be configured to actively monitor replication connectivity.
If the majority of your application does READS instead of WRITES, perhaps you could install Memcached (or something similar) on the second machine so that the application can pull data from local memory without requiring a connection to the MySQL server.
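As a rough sketch of that caching idea (assuming the pecl memcached extension; the table, key names, TTL and query are made up for illustration):

```php
<?php
// Read-through cache: serve reads from a local memcached instance and only
// go to the remote MySQL server on a cache miss.
$cache = new Memcached();
$cache->addServer('127.0.0.1', 11211);

function getCustomer(Memcached $cache, mysqli $db, $id)
{
    $key = 'customer:' . $id;
    $row = $cache->get($key);
    if ($row !== false) {
        return $row;                  // cache hit: no remote round trip needed
    }

    // Cache miss: fetch from the remote database and store the result locally.
    $stmt = $db->prepare('SELECT * FROM customers WHERE id = ?');
    $stmt->bind_param('i', $id);
    $stmt->execute();
    $row = $stmt->get_result()->fetch_assoc();
    if ($row) {
        $cache->set($key, $row, 300); // keep it for 5 minutes
    }
    return $row;
}
```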
There are a few ways to achieve what you want (although maybe not exactly how you described), but the best way is definitely to host the database on a server that doesn't have Internet connectivity problems. Look for hosting that allows remote MySQL connections.
If MySQL is dropping connections in a PHP application, and the MySQL connection limit is set above the number of concurrent users of the application, which other factors can contribute to this behavior? Also, analyzing the Moodle logs (the only app running on these servers), yesterday I had 4 times more activity and it didn't drop anything, but today there were times when the number of dropped connections was frustrating.
My main question here is why the database is rejecting connections when it didn't before, with 4 times more activity (and everything is the same; I didn't change anything in between).
Some background: I've got 2 servers contracted at my hosting:
shared server running Debian Linux/PHP 5.3 (FastCGI)
VPS running Debian Linux/MySQL 5.1.39
In this environment I'm running only Moodle 1.9.12 (which uses ADOdb and persistent connections for the database), with the PHP part on the shared server and the database on the VPS. I suspect that, because PHP runs on a shared server, other hosting accounts are affecting me (I mean the database rejecting connections; I really don't care about RAM/CPU).
Reading about the issue, I've seen in various places that persistent connections don't work well with PHP as CGI/FastCGI, and that if both servers are on the same LAN it doesn't really matter whether you use persistent connections or not, because the connection is going to be quick anyway. So now I'm using non-persistent connections. I guess that may be part of the problem, but I fail to understand why it worked under more load. Which PHP/Apache settings are involved here?
Since your database and web server are on two different machines, there are two other possible causes: the network in between, and the network layer of the operating system.
If you did not change anything in your configuration and it worked with higher loads before, it is more likely to be an issue with the network connectivity between the two machines. Since the database machine is a VPS you also don't know how much real network load it is handling. If your ISP has competent support personnel (which unfortunately isn't always the case) it can't hurt to ask them if they have an explanation.
The same goes for your "shared" web server. While it is unlikely, it is not impossible that it is an issue of too many connections on that machine.
It would also help to know how exactly you are measuring dropped connections. If you are looking at MySQL's aborted-connections counters, they are not necessarily a measure of an actual problem: http://dev.mysql.com/doc/refman/5.1/en/communication-errors.html. A user simply aborting a page load may already increase this counter.
If PHP throws an error because it could not connect to the server, or lost the connection to the server during a query, then it is an issue.
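A quick way to look at those counters from PHP (host and credentials are placeholders): Aborted_connects counts failed connection attempts, while Aborted_clients counts established connections that died without a proper close.

```php
<?php
// Inspect MySQL's connection-error counters to see what kind of
// "dropped connections" are actually occurring.
$db = new mysqli('vps.example.com', 'monitor', 'secret');

$result = $db->query("SHOW GLOBAL STATUS LIKE 'Aborted%'");
while ($row = $result->fetch_assoc()) {
    echo $row['Variable_name'], ' = ', $row['Value'], "\n";
}

// Compare against the configured connection limit.
$result = $db->query("SHOW VARIABLES LIKE 'max_connections'");
print_r($result->fetch_assoc());
```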
Hello, I have a database engine sitting on a remote server, while my web server is local. I have mostly worked with a client-server architecture where the same server has both the web server and the database engine. Now I need to connect to an Oracle database which sits on a different server.
Can anybody give me any suggestions? I believe ODBC_CONNECT might not work. Do I use the OCI8 drivers? How would I connect to my database server?
Also, I would have a very high number of database calls going back and forth, so is it good to go with a persistent connection, or do I still open individual connections for my database calls?
If you're using ODBC, then you need to use PHP's ODBC driver rather than the OCI8 driver. Otherwise, you need the Oracle client installed on your web server (even if it's just Oracle's Instant Client), and then you can use OCI8.
EDIT
Personally I wouldn't recommend persistent connections. While there is a slowdown when connecting to a database (especially a remote database), persistent connections can cause more issues if you have a high hit count (exceeding the number of persistent connections available), or if there's a network hiccup of any kind that leaves orphaned connections on the database, and potentially orphaned pconnections as well.
The Oracle client is available for each platform. In summary, it is a collection of the files needed to talk to Oracle plus a command-line utility. Just go to oracle.com and look under Downloads.
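For what it's worth, a minimal OCI8 sketch along those lines, assuming the Instant Client is installed and using placeholder host/service/credentials:

```php
<?php
// Connecting to a remote Oracle database with OCI8 from a local web server.
$conn = oci_connect(
    'app_user',
    'secret',
    '//db.example.com:1521/ORCL',  // Easy Connect string: //host:port/service
    'AL32UTF8'
);
if (!$conn) {
    $e = oci_error();
    trigger_error($e['message'], E_USER_ERROR);
}

$stmt = oci_parse($conn, 'SELECT sysdate FROM dual');
oci_execute($stmt);
print_r(oci_fetch_assoc($stmt));

// oci_pconnect() takes the same arguments and keeps the link open between
// requests; given the caveats above, oci_connect() is the safer default.
oci_close($conn);
```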
I read the documentation of PHP's mysql_pconnect function.
It says that with mysql_pconnect, the connection to the SQL server is not closed even when the execution of the script ends.
If the connection is not closed, does it stay open until MySQL is restarted?
I think, and I may be wrong, but...
I think pconnect gives a persistent connection held not just for the run of one script, but for the lifetime of the PHP/MySQL session: a socket connection is maintained by PHP regardless of which script is being run. A bit like having multiple documents open in Word instead of multiple instances of Notepad running. By reusing a common database link, the processing cost of creating individual links is avoided.
However, after reading, I think that it only seems to be of benefit if PHP is being run as an Apache module, not in CGI mode.
Call me out on my misinformation.
mysql_pconnect allows Apache's mod_php module to do connection pooling. You can still call mysql_close, but mod_php will not actually close it; it will just invalidate your resource handle. Unfortunately, the module doesn't have any configuration for this, so pooled connections are reaped by the MySQL server via its wait_timeout parameter. The default value of this is quite high so if you want to take advantage of it, you will probably want to lower that variable.
Connection pooling saves two things: connection setup and MySQL thread creation. Connection setup is very fast with MySQL compared to other databases, but a highly in-demand web site could still see a benefit from reducing this step. The cost of thread creation in MySQL is more dependent on the underlying OS, but it can still be a win for a busy site.
Both aspects need to be looked at in the bigger picture of website speed and the load it presents on your database. It is possible to run out of connection threads on the database with a busy enough site using connection pooling. There is also the aspect that your application needs to do its best to leave the connection in a consistent state, as you can no longer rely on closing the connection to do things like unlock tables and rollback transactions.
There is more information in the PHP documentation.
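A small sketch of the point about connection state, using the legacy mysql_* API this question is about (removed in PHP 7; credentials are placeholders):

```php
<?php
// With mysql_pconnect() the link outlives the script, so reset any
// per-connection state yourself rather than relying on the connection
// being closed.
$link = mysql_pconnect('localhost', 'app_user', 'secret');
mysql_select_db('app_db', $link);

mysql_query('BEGIN', $link);
// ... do the work ...
mysql_query('ROLLBACK', $link);      // don't leave an open transaction behind
mysql_query('UNLOCK TABLES', $link); // don't leave table locks behind

mysql_close($link); // only invalidates the handle; the pooled link stays open
```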
Yes, but you cannot close it using mysql_close.
It will close when you restart the MySQL server, or when the connection stays idle (unused) for a specific amount of time, defined by the configuration variable wait_timeout.
I'm using PHP's Memcache module to connect to a local memcached server (127.0.0.1), but I don't know which one I should use: Memcache::connect() or Memcache::pconnect()? Will Memcache::pconnect consume a lot of the server's resources?
Thank you very much for your answer!
Memcached uses a TCP connection (handshake is 3 extra packets, closing is usually 4 packets) and doesn't require any authentication. Therefore, the only upside to using a persistent connection is that you don't need to send those extra 7 packets and don't have to worry about having a leftover TIME-WAIT port for a few seconds.
Sadly, the downside of sacrificing those resources is far greater than the minor upsides. So I recommend not using persistent connections in memcached.
pconnect stands for persistent connection. This means that the client (in your case the script) will constantly have a connection open to your server, which might not be so much a resource problem as a limit on the number of connections available.
You probably want the standard connect unless you know you need persistent connections.
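For illustration, the two variants with the pecl memcache extension (host and port are the usual local defaults):

```php
<?php
// Regular vs persistent connection to a local memcached.
$mc = new Memcache();

// Regular connection: opened now, closed when the script ends or close() is called.
$mc->connect('127.0.0.1', 11211);

// Persistent variant (same arguments): reuses an already-open link for this
// host/port within the same PHP process, if one exists.
// $mc->pconnect('127.0.0.1', 11211);

$mc->set('greeting', 'hello', 0, 60); // flags = 0, expire after 60 seconds
var_dump($mc->get('greeting'));
$mc->close(); // has no effect on persistent connections
```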
As far as I know, the same rules that govern persistent vs. regular connections when connecting to MySQL apply to memcached as well. The upshot is, you probably shouldn't use persistent connections in either case.
"Consumes" TCP port.
In the application I'm developing I use pconnect, since it uses a connection pool: from the hardware's point of view, one server keeps one connection to memcached. I don't know exactly how it works, but I think memcached is smart enough to track the IP of the memcached client machine.
I've played with memcached for a long time and found, using Memcache::getStats, that the connection count doesn't increase when using pconnect.
You can use a debug page that shows the memcached stats, try switching between pconnect and connect, and see what's going on.
One downside is that PHP gets no blatant error or warning if one or all of the persistently-connected memcached daemons vanish(es). That's a pretty darn big downside.