I'm currently trying to fix an issue where our production server can't keep up with SQL queries.
Looking at the process list, MySQL is taking 120+ seconds to complete queries that, when I run them myself through HeidiSQL, finish in under a second. Why would the same queries take significantly longer (and in most cases time out) when they come from PHP than when they are run directly from HeidiSQL?
You are probably using a persistent connection, which can cause exactly this kind of problem when a previous PHP script that used the connection was stopped mid-execution and never finished cleanly.
Read more here: What are the disadvantages of using persistent connection in PDO
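If that's the case, a quick way to test is to turn persistence off explicitly and see whether the problem disappears. A minimal sketch, assuming PDO; the DSN and credentials are placeholders:

<?php
// Minimal sketch: force a fresh (non-persistent) connection per request.
// DSN and credentials are hypothetical.
$pdo = new PDO(
    'mysql:host=localhost;dbname=app_db;charset=utf8mb4',
    'db_user',
    'db_pass',
    [
        PDO::ATTR_PERSISTENT => false,                 // the default, made explicit
        PDO::ATTR_ERRMODE    => PDO::ERRMODE_EXCEPTION,
    ]
);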
Turns out the problem was that the server where PHP was running (with a different hosting provider, as we're migrating to the cloud) had a throttled network connection and couldn't handle all of the data being sent back from MySQL. Turning on caching on the PHP side solved the problem.
I have a PHP Socket Server that I can connect to via Telnet. Once connected, I am able to send messages in a certain format and they're saved in the database.
Whenever the PHP socket server receives a message, it opens a database connection, executes the query, and then closes the connection; the same cycle repeats for every message.
So far this works when I'm sending messages at an interval of 5-10 minutes. However, when the interval grows to an hour or more, I get a "MySQL server has gone away" error.
Upon doing some research, the common solution seems to be increasing the wait timeout, which is not an option for me: the PHP socket server is supposed to run 24/7, and I doubt there's a way to raise the timeout to infinity.
The other option is to check from PHP itself whether the MySQL server has gone away and, if it has, re-establish the connection before running the query.
Does anyone know how to do this in PHP? Or, if anyone has another method of keeping a MySQL connection alive indefinitely in a PHP socket server, I'm open to that too.
You'll get this error if the database connection has timed out (perhaps from a long-running process since the last time the connection was used).
You can easily check the connection and restore it if necessary with a single command:
$mysqli->ping()
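A minimal sketch of how you might use it in a long-running socket server; the credentials and table are placeholders:

<?php
// Minimal sketch: ping before each query and reconnect if the server
// has dropped the connection. Credentials and table are hypothetical.
function ensureAlive(?mysqli $db): mysqli
{
    try {
        if ($db !== null && $db->ping()) {
            return $db;                       // connection still usable
        }
    } catch (mysqli_sql_exception $e) {
        // In exception mode, ping() throws when the server is gone.
    }
    return new mysqli('localhost', 'db_user', 'db_pass', 'app_db');
}

$db = new mysqli('localhost', 'db_user', 'db_pass', 'app_db');
// ... the socket server may now sit idle for hours ...
$db = ensureAlive($db);
$db->query("INSERT INTO messages (body) VALUES ('hello')");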
This error means that the connection to the DBMS was established but subsequently lost. I've run sites with hundreds of concurrent connections to the same database instance handling thousands of queries per minute, and I have only seen this error when a query was deliberately killed by an admin. You should check your config for the interactive and wait timeouts and read your server logs.
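You can inspect those settings from PHP itself. A minimal sketch; credentials are placeholders:

<?php
// Minimal sketch: print the server's timeout-related variables.
// Credentials are hypothetical.
$db = new mysqli('localhost', 'db_user', 'db_pass');
$result = $db->query("SHOW VARIABLES LIKE '%timeout%'");
while ($row = $result->fetch_assoc()) {
    echo $row['Variable_name'], ' = ', $row['Value'], PHP_EOL;
}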
Given your description of the error (you really need to gather more data and characterize the circumstances under which it occurs), the only explanation that springs to mind is that there is a deadlock somewhere.
I would question whether there is any benefit to closing the connection after each message (it depends on how the system is used). Artistic phoenix's comment is somewhat confused. If there are capacity issues, I'd suggest using persistent connections with your existing open/close model, but I doubt that is relevant to the problem you describe here.
An Apache Bench test revealed that high throughput on the database caused the following:
Apache threads stuck in "Sending Reply" state, all related to a particular PHP file (as seen in Apache extended status).
MySQL sleeping connections with the same user as used by the PHP file.
Note: Apache Bench was run from a remote location purely as a mechanism for stressing the database, i.e. the script under test just connected to the DB and ran 5 queries per page load.
Running the same Apache Bench tests but adding a mysql_close() at the end of the script solved the problem. What I'd like to understand is why this happens.
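For reference, the change amounted to the following. A sketch only, using the old mysql_* API the site already uses; credentials and queries are placeholders:

<?php
// Sketch: close the connection explicitly instead of relying on
// script shutdown. Credentials and queries are hypothetical.
$link = mysql_connect('localhost', 'db_user', 'db_pass');
mysql_select_db('app_db', $link);

for ($i = 0; $i < 5; $i++) {
    mysql_query('SELECT 1', $link);   // stand-in for the real queries
}

mysql_close($link);   // frees the server-side thread immediately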
Our leading internal theory is that:
The increased throughput on the database somehow prevented Apache from serving its requests properly; a buffer somewhere, either in MySQL or at the OS level, filled up.
The Apache requests therefore got stuck in the "Sending Reply" state, and because there was no mysql_close() at the end of the PHP file, a database connection remained open, presumably only expiring once MySQL's connection timeout was reached and the connection was closed server-side. To be clear, we are not using mysql_pconnect(); we are using mysql_connect().
As above, whilst we've "solved" the problem, we'd love to get to the bottom of it with a solid answer rather than an educated guess.
Has anyone experienced this before and found a solution? Any tricks, thoughts, or methods for identifying the cause would be welcome.
We have developed a chat module using Node.js and MongoDB sharding and have gone live on our production server. Today it reached 20,000 connections in MongoDB and we started getting "too many connections" errors in the logs. After restarting the Node server everything returned to normal, but we need to know how to solve this problem properly.
Is there any MongoDB configuration that will kill a connection when it is idle, or set an expiry time when the connection is established?
Please help us resolve this issue.
You're probably not running into a MongoDB issue. There is a cap on the number of connections you can make to MongoDB, usually roughly equal to the maximum number of file descriptors available to it.
It sounds like there is a bug in your code (likely) or in mongoose (less likely) that either creates more connections than it closes, or never closes connections in the first place. In Java, for example, creating a new Mongo class instance for each query would cause this sort of problem, but I don't work with node.js/mongoose, so I don't know what the JS equivalent of that is.
Keep an eye on mongostat and check whether the connection count always increases or sometimes decreases. If it's the former, your code never releases connections for whatever reason. If it's the latter, you're simply creating them faster than idle connections are disconnected; that's usually due to doing something heavyweight (like the driver initialising its connection pool) for every query rather than once.
I am having a problem with a website that connects to a MySQL database using two types of connection in different parts: some PDO, some mysql_connect().
The first part of the website queries MySQL using the very classic mysql_query() PHP function. This part runs some heavy queries on geographical data, and some of these requests (already optimized) take a long time.
Another part of the site is more recent and built using Doctrine via a PDO connection.
The problem is that when one of the big processes is being run in one browser page (it can take around a minute to process and return the page), if the user opens another page, the PDO connection sits in sleep mode and the whole page is held back from loading. After 60 seconds (MySQL's wait_timeout) the connection is killed, and PDO gets a "MySQL server has gone away" exception.
What is strange is that other pages using only the classic mysql_connect() and mysql_query() run in parallel without a problem; only the PDO queries are held back and eventually die.
Any input would be really appreciated.
Closing this question: it was in fact related to the PHP session being locked on write, which prevented the other process from running. session_write_close() resolved it.
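For anyone who hits the same symptom, the shape of the fix looks like this. A minimal sketch; the session key, DSN and query are placeholders:

<?php
// Minimal sketch: release the session lock before the slow query so
// parallel requests aren't blocked. All names are hypothetical.
session_start();
$userId = $_SESSION['user_id'] ?? null;   // read what the page needs first
session_write_close();                    // releases the session lock

// The long-running query no longer blocks other pages for this user.
$pdo  = new PDO('mysql:host=localhost;dbname=geo_db', 'db_user', 'db_pass');
$stmt = $pdo->query('SELECT 1 /* stand-in for the slow geo query */');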
There are various reasons that a connection gets closed.
Reference:
https://dev.mysql.com/doc/refman/5.0/en/gone-away.html
I too faced a similar problem using PDO, where the hosting administrator kills any connection that sleeps for more than a minute. So I came up with my own class that wraps PDO: it detects whether the connection has been closed and tries to reconnect on query execution.
See the full answer here: PDO: MySQL server has gone away
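The linked answer has the full class; the core idea is roughly the following. A sketch with hypothetical names, retrying once when the "server has gone away" error surfaces:

<?php
// Sketch of a reconnecting PDO wrapper. All names are hypothetical.
class ReconnectingPdo
{
    private PDO $pdo;

    public function __construct(
        private string $dsn,
        private string $user,
        private string $pass
    ) {
        $this->connect();
    }

    private function connect(): void
    {
        $this->pdo = new PDO($this->dsn, $this->user, $this->pass, [
            PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
        ]);
    }

    public function query(string $sql): PDOStatement
    {
        try {
            return $this->pdo->query($sql);
        } catch (PDOException $e) {
            // MySQL error 2006 ("server has gone away"): reconnect, retry once.
            if (!str_contains($e->getMessage(), 'server has gone away')) {
                throw $e;
            }
            $this->connect();
            return $this->pdo->query($sql);
        }
    }
}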
I'm trying to debug an error I got on a production server. Sometimes MySQL gives up and my web app can't connect to the database (I'm getting the "too many connections" error). The server has a few thousand visitors a day, and at night I run a few cron jobs which sometimes do some heavy MySQL work (looping through 50,000 rows, inserting and deleting duplicates, etc.).
The server runs both Apache and MySQL on the same machine.
MySQL has a pretty standard configuration (max connections).
The web app is using PHP
How do I debug this issue? Which log files should I read? How do I find the "evil" script? The strange thing is that if I restart the MySQL server, it starts working again.
Edit:
Different apps/scripts use different connectors to the database (mostly mysqli, but also Zend_Db).
First, use innotop (Google for it) to monitor your connections. It's mostly geared to InnoDB statistics, but it can be set to show all connections, including those not in a transaction.
Otherwise, the following are helpful: Use persistent connections / connection pools in your web apps. Increase your max connections.
It's not necessarily a long-running SQL query.
If you open a connection at the start of a page, it won't be released until the PHP script terminates - even if there is no query running.
You should add some stats to your pages to find out the slowest ones, and the most-hit ones. Closing the connection early would help, if possible.
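In other words, open late and close early. A minimal sketch using mysqli; credentials and the query are placeholders:

<?php
// Minimal sketch: open the connection as late as possible and close it
// as early as possible. Credentials and query are hypothetical.
$db   = mysqli_connect('localhost', 'db_user', 'db_pass', 'app_db');
$rows = mysqli_query($db, 'SELECT 1 /* stand-in for the page query */')
            ->fetch_all(MYSQLI_ASSOC);
mysqli_close($db);   // released before any rendering starts

// ... templating and output below run without holding a connection ...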
Try using persistent connections (mysql_pconnect); they help reduce the server load caused by constantly opening and closing MySQL connections.
The starting point is probably to use mysqladmin processlist to get a list of the processes on the MySQL server. The next step depends on what you find.
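The same list is available via SQL, so you can also snapshot it from a PHP script or cron job. A minimal sketch; credentials are placeholders and the account needs the PROCESS privilege:

<?php
// Minimal sketch: dump the current MySQL process list.
// Credentials are hypothetical.
$db  = new mysqli('localhost', 'db_user', 'db_pass');
$res = $db->query('SHOW FULL PROCESSLIST');
while ($row = $res->fetch_assoc()) {
    printf("%-8s %-12s %6ss  %s\n",
        $row['Id'], $row['User'], $row['Time'], $row['Info'] ?? '');
}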