MySQL - failed to acquire exclusive database access - PHP

Getting the error sporadically:
"Failed to acquire exclusive database access"
in my PHP application on the live server.
Can't see anything in the logs.
Checked the server settings and found:
max_connections: 100
max_user_connections: 0
As I understand from the documentation, 0 means no limit and is the default.
Anyone got any ideas?
Thanks.

I used to see an error similar to this all the time in Access but haven't seen it with MySQL before. I don't think your max_user_connections is the problem because the key word in that error message is "exclusive" meaning you need to have the ONLY access to a table.
My guess would be it's something with InnoDB and its locking mechanism. If you don't require transaction-safe records, try switching your table to MyISAM and see if the error persists.
Take a look here for InnoDB locking:
http://dev.mysql.com/doc/refman/5.0/en/innodb-locks-set.html
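If you want to test that theory, here is a minimal sketch of the switch; the table name and credentials are placeholders, not taken from the question:

$db = new mysqli('localhost', 'username', 'password', 'database');
$res = $db->query("SHOW TABLE STATUS LIKE 'tickets'");
$row = $res->fetch_assoc();
echo 'Current engine: ' . $row['Engine'] . "\n";
// Only switch if you can live without transactions and row-level locking:
$db->query('ALTER TABLE tickets ENGINE = MyISAM');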

Related

Laravel SQL Chunk gives -902: Error reading data from the connection

I'm currently querying a huge Firebird (v2.5) table (with millions of rows) in order to perform some row-level operations. To achieve that, the code is using chunking from Laravel 5.1, somewhat like this:
DB::connection('USER_DB')
    ->table($table->name)
    ->chunk(min(5000, floor(65500 / count($table->fields))), function ($data) {
        // running code and saving
    });
For some reason, I keep receiving the following error:
SQLSTATE[HY000]: General error: -902 Error reading data from the connection.
I've already tried changing the chunk size, and different code, but the error still appears. Sometimes it happens at the beginning of the table, and sometimes after parsing several hundred thousand or even millions of rows. The thing is that I need to parse the rows within this one transaction (so I can't stop and reopen the script).
Tested memory on the server (which runs in a different place than the database), and it is barely using any of it.
While writing this, I rechecked the Firebird log and found the following entry:
INET/inet_error: read errno = 10054
As far as I could find, this isn't actually a Firebird problem but a winsock reset error, is that correct? If so, how could I prevent this from happening during the chunk query? And how can I check whether the problem is with Windows or the firewall?
Update I
Digging into firebird2.5.log on the PHP server, I found these errors:
INET/inet_error: send errno = 104
REMOTE INTERFACE/gds__detach: Unsuccesful detach from database.
I have found the root of my problem: the server was resetting the idle connection. To avoid that, I added a "heartbeat" query that runs every few minutes. With this strategy I was able to prevent the connection from being reset.
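A hedged sketch of that heartbeat idea applied to the chunked query above; the 5-minute interval, the chunk size, and the SELECT 1 FROM RDB$DATABASE probe are assumptions, since the answer doesn't show its actual code:

$lastPing = time();
DB::connection('USER_DB')
    ->table($table->name)
    ->chunk(5000, function ($data) use (&$lastPing) {
        // ... row-level processing and saving ...
        if (time() - $lastPing > 300) {
            // trivial Firebird query just to keep the connection busy
            DB::connection('USER_DB')->select('SELECT 1 FROM RDB$DATABASE');
            $lastPing = time();
        }
    });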

PHP and SQL - Database overload

I'm building a web app that makes lots of requests to my database. Everything was working perfectly smoothly until half an hour ago, when the requests stopped returning... I checked the PHP file directly and it displays the following:
Warning: mysql_connect() [function.mysql-connect]: Too many connections in /home/sanity/public_html/dev/forest/js/database.php on line 7
Unable to connect to MySQL
So I figured let's check phpMyAdmin, but it's not showing me ANYTHING except for a big red box that says:
SQL query:
SET CHARACTER SET 'utf8';
MySQL said:
#1045 - Access denied for user 'root'@'localhost' (using password: NO)
Between the last time it worked and now I haven't changed any configuration or code. How do I begin to fix this?
Could this be caused by the fact my PHP files don't close the connection after using it? If so should I be closing the connection after every query? I figured the connection would close automatically when the user leaves the web site.
EDIT: The requests are sending through now and phpMyAdmin is back up, but how do I prepare this site for heavier traffic?
When I started my job, one of my first tasks was to continue working on what one of the directors had started coding. In his code, I saw this monstrosity:
function getTicket($id) {
    mysql_connect("localhost", "username", "password");
    mysql_select_db("database");
    $sql = mysql_query("select * from tickets where id = " . intval($id));
    return mysql_fetch_assoc($sql);
}
In other words, he was creating a whole new database connection every single time he wanted something from the database, and never closing any of them (instead letting them be closed automatically at the end of the script).
This was fine for basic testing, but as soon as I started writing more advanced stuff (before I'd discovered this piece of code), things broke badly with the same "too many connections" error you're seeing.
The solution was simple: restart the server to clear all pending connections, and fix the code to only connect once per script execution.
This is what you should do. There should only ever be one call to mysql_connect (or indeed any database library's connect function) in your script (and I don't mean in a function that gets called several times!)
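A minimal sketch of that pattern, reusing the function from above (credentials are placeholders):

// Connect once at the top of the script and reuse the link everywhere.
$db = mysql_connect("localhost", "username", "password");
mysql_select_db("database", $db);

function getTicket($id) {
    global $db; // reuse the single shared connection
    $sql = mysql_query("select * from tickets where id = " . intval($id), $db);
    return mysql_fetch_assoc($sql);
}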
You should also check the database's configuration, just in case someone accidentally set the maximum connections too low, but generally this shouldn't be a problem if you manage your connections properly.
Though the mysql_* functions are deprecated (use a modern driver like PDO, for instance), you should take a look at the mysql_close($con) function; see its page in the PHP manual.
EDIT
If you are not using the mysql_pconnect function, then your connection should be closed automatically at the end of your script's execution.
Apparently, one cause of this error is shared hosting. If you are on a shared host, the maximum number of connections to the server allowed by the host is generally not very high.
If you can change the max_connections system variable, try raising it:
max_connections = 200
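If you have the privileges, a sketch of checking and raising it at runtime; this needs the SUPER privilege, the change is lost on server restart unless it is also written into my.cnf, and the credentials are placeholders:

$pdo = new PDO('mysql:host=localhost', 'root', 'secret');
$row = $pdo->query("SHOW VARIABLES LIKE 'max_connections'")->fetch();
echo $row['Variable_name'] . ' = ' . $row['Value'] . "\n";
$pdo->exec('SET GLOBAL max_connections = 200');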

PHP Redis timeout, read error on connection?

"PHP Fatal error: Uncaught exception 'RedisException' with message 'read error on connection'"
The driver here is phpredis
$redis->blpop('a', 0);
This always times out after ~1 minute. My redis.conf says timeout 0, and $redis->getOption(Redis::OPT_READ_TIMEOUT) returns double(0).
If I do this, it has never timed out: $redis->setOption(Redis::OPT_READ_TIMEOUT, -1);
Why do I need -1? Redis documentation says timeout 0 in redis.conf should never time me out.
"By default recent versions of Redis don't close the connection with the client if the client is idle for many seconds: the connection will remain open forever."
The current solution I know of is to disable persistent connections for phpredis, as they have been reported as buggy since October 2011. If you’re using php-fpm or other threaded models, the library specifically disables persistent connections.
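In practice that means swapping pconnect() for connect(); a sketch with placeholder host and port:

$redis = new Redis();
// $redis->pconnect('127.0.0.1', 6379); // persistent: reported buggy
$redis->connect('127.0.0.1', 6379);     // open a fresh connection instead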
Reducing the frequency of this error might be possible by adjusting the php.ini default_socket_timeout value.
Additionally, read timeout configurations in phpredis are not universally supported. The feature (look for OPT_READ_TIMEOUT) was introduced in tag 2.2.3.
$redis->connect($host, $port, $timeout1);
// ...
$redis->blpop($key, $timeout2);
where $timeout1 must be longer than $timeout2.
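A concrete instance of that rule, with placeholder host, port, and key:

$redis = new Redis();
$redis->connect('127.0.0.1', 6379, 10);        // $timeout1: 10 seconds
$item = $redis->blPop(array('queue:jobs'), 5); // $timeout2: block at most 5 s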
After a lot of study of articles and doing my own straces of redis and php, it seemed the issue was easily fixed by this solution. The main issue in my use case was that the redis server was not able to fork a process to save the in-memory writes to the on-disk DB.
I left all the timeout values in php.ini and redis.conf as they were, skipped the hacky changes suggested elsewhere, and tried the above solution alone; the 'read error on connection' issue, which had resisted every suggestion about changing timeout values across the php and redis conf files, went away.
I also saw suggestions about increasing the file descriptor limit to 100000, etc. I am running my use case on a cloud server with the file descriptor limit at 1024, and it runs perfectly even with that limit.
I added ini_set('default_socket_timeout', -1) to my PHP program, but I found it didn't work immediately.
However, when I ran the PHP program again 3 minutes later, I finally found the reason: the redis connection was not persistent.
So I set timeout=0 in my redis.conf, and the problem was solved!

MySQL: User some_user_name already has more than 'max_user_connections' active connections

I'm using Zend Framework for my website, and sometimes I get the following exception:
Message: SQLSTATE[42000] [1203] User elibrary_books already has more than 'max_user_connections' active connections
As I know, Zend Framework uses PDO to connect to the database.
How can I resolve this problem?
Always close your connection. If you are using the Sql class, it looks like this:
$sql->getAdapter()->getDriver()->getConnection()->disconnect();
Sometimes a MySQL connection thread is not thrown away even though you've torn down the socket cleanly; it still hangs around waiting to be reaped.
Check your setting for wait_timeout. The default value is unreasonably long (28800 seconds, i.e. eight hours); an optimal value might be around 20 seconds. You will probably also want it this low if you're using persistent connections.
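A short sketch of the session-level version; $pdo is assumed to be an already-open PDO connection, and a server-wide change belongs in my.cnf instead:

$pdo->exec('SET SESSION wait_timeout = 20');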
Try setting the persistent flag in your database driver configuration to false:
resources.db.params.persistent = 0
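The same idea outside the Zend config, at the PDO level (DSN and credentials are placeholders):

$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass', array(
    PDO::ATTR_PERSISTENT => false, // explicit, though false is already the default
));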

identify mysql problem

I've written a web application in PHP which has 30 tables and views. From time to time my application doesn't work, and I think this is related to the MySQL DB.
Unfortunately I can't see the errors from the browser on that server, because php.ini disables error display. Also, when I try to connect to the MySQL DB using phpMyAdmin, the connection fails when I try to select my DB.
How can I see what the problem with my MySQL DB is? It works from time to time, but I don't understand why.
You could look in the MySQL error log - it's under /var/log/mysqld.log on my setup...
Your PHP errors may be getting sent to another log file - try the Apache / IIS error log (global or for the particular vhost, depending on your config for Apache - I can't say for IIS) or to the system log - /var/log/messages
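If you can edit the PHP files, a hedged debugging sketch that surfaces errors without touching php.ini; the credentials are placeholders:

ini_set('display_errors', '1'); // temporary: don't leave this on in production
error_reporting(E_ALL);
$link = mysqli_connect('localhost', 'user', 'pass', 'database');
if (!$link) {
    error_log('MySQL connect failed: ' . mysqli_connect_error());
    die('Unable to connect to MySQL: ' . mysqli_connect_error());
}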
