MySQL Error "Too many connections" - php

I am using MySQL 5.0 for a site that is hosted by GoDaddy (linux).
I was doing some testing on my web app, and suddenly I noticed that the pages were refreshing really slowly. Finally, after a long wait, I got to a page that said something along the lines of "MySQL Error, Too many connections...", and it pointed to my config.php file which connects to the database.
It has just been me connecting to the database, no other users. On each of my pages, I include the config.php file at the top, and close the mysql connection at the end of the page. There may be several queries in between. I fear that I am not closing mysql connections enough (mysql_close()).
However, when I try to close them after running a query, I receive connection errors on the page. My pages are PHP and HTML. When I try to close a query, it seems that the next one won't connect. Would I have to include config.php again after the close in order to connect?
This error scared me because in 2 weeks, about 84 people will start using this web application.
Thanks.
EDIT:
Here is some pseudo-code of my page:
<?php
require_once('../scripts/config.php'); // opens the MySQL connection
mysql_query...
if (this button is pressed) {
    mysql_query...
}
if (this button is pressed) {
    mysql_query...
}
if (this button is pressed) {
    mysql_query...
}
?>
some html..
..
..
..
..
<?php
another mysql_query...
?>
some more html..
..
..
<?php mysql_close(); ?>
I figured that this way, each time the page opens, the connection opens, and then the connection closes when the page is done loading. Then, the connection opens again when someone clicks a button on the page, and so on...
EDIT:
Okay, so I just got off the phone with GoDaddy. Apparently, with my Economy Package, I'm limited to 50 connections at a time. While my issue today happened with only me accessing the site, they said that they were having some server problems earlier. However, seeing as how I am going to have 84 users for my web app, I should probably upgrade to "Deluxe", which allows for 100 connections at a time. On a given day, there may be around 30 users accessing my site at a time, so I think the 100 would be a safer bet. Do you guys agree?

Shared-hosting providers generally allow a pretty small amount of simultaneous connections for the same user.
What your code does is:
open a connection to the MySQL server
do its stuff (generating the page)
close the connection at the end of the page
The last step, done at the end of the page, is not mandatory (quoting mysql_close's manual):
Using mysql_close() isn't usually necessary, as non-persistent open links are automatically closed at the end of the script's execution.
But note you probably shouldn't use persistent connections anyway...
Two tips:
use mysql_connect instead of mysql_pconnect (already OK for you)
set the fourth parameter of mysql_connect to false (already OK for you, as it's the default value) (quoting the manual):
If a second call is made to mysql_connect() with the same arguments, no new link will be established, but instead, the link identifier of the already opened link will be returned.
The new_link parameter modifies this behavior and makes mysql_connect() always open a new link, even if mysql_connect() was called before with the same parameters.
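To make this concrete, here is a minimal sketch (hypothetical host name and credentials, using the old ext/mysql API from the question) of a page that opens one non-persistent connection, reuses it everywhere, and lets PHP close it automatically at the end of the script:
<?php
// config.php (hypothetical): open ONE non-persistent connection per request.
$link = mysql_connect('db-host.example.com', 'db_user', 'db_pass'); // new_link defaults to false
if (!$link) {
    die('Could not connect: ' . mysql_error());
}
mysql_select_db('my_database', $link);

// Later in the page, calling mysql_connect() again with the same arguments
// returns the SAME link instead of opening a new one.
$result = mysql_query('SELECT 1', $link);

// No mysql_close() needed: non-persistent links are closed automatically
// when the script finishes.
?>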
What could cause the problem, then?
Maybe you are trying to access several pages in parallel (using multiple tabs in your browser, for instance), which would simulate several users using the website at the same time?
If you have many users using the site at the same time and the code between mysql_connect and the closing of the connection takes lots of time, it will mean many connections being opened at the same time... And you'll reach the limit :-(
Still, as you are the only user of the application, considering you have up to 200 simultaneous connections allowed, there is something odd going on...
Well, thinking about "too many connections" and "max_connections"...
If I remember correctly, max_connections does not limit the number of connections you can open to the MySQL server, but the total number of connections that can be opened to that server, by anyone connecting to it.
Quoting MySQL's documentation on Too many connections :
If you get a Too many connections error when you try to connect to the mysqld server, this means that all available connections are in use by other clients.
The number of connections allowed is controlled by the max_connections system variable. Its default value is 100. If you need to support more connections, you should set a larger value for this variable.
So, actually, the problem might not come from you or your code (which looks fine, actually): it might "just" be that you are not the only one trying to connect to that MySQL server (remember, "shared hosting"), and that there are too many people using it at the same time...
... And if I'm right about that, there's nothing you can do to solve the problem: as long as there are too many databases / users on that server and max_connections stays at 200, you will keep suffering...
As a sidenote : before going back to GoDaddy asking them about that, it would be nice if someone could validate what I just said ^^

I had about 18 months of dealing with this (http://ianchanning.wordpress.com/2010/08/25/18-months-of-dealing-with-a-mysql-too-many-connections-error/)
The solutions I had (that would apply to you) in the end were:
tune the database according to MySQLTuner.
defragment the tables weekly based on this post
Defragmenting bash script from the post:
#!/bin/bash
# Get a list of all fragmented tables, as schema.table pairs
FRAGMENTED_TABLES="$( mysql -e "SELECT TABLE_SCHEMA, TABLE_NAME
FROM information_schema.TABLES
WHERE TABLE_SCHEMA NOT IN ('information_schema', 'mysql')
AND Data_free > 0;" | grep -v '^+' | sed 's,\t,.,' )"

for fragment in $FRAGMENTED_TABLES; do
    database="$( echo $fragment | cut -d. -f1 )"
    table="$( echo $fragment | cut -d. -f2 )"
    # Skip the header row, then defragment each fragmented table
    [ $fragment != "TABLE_SCHEMA.TABLE_NAME" ] && mysql -e "USE $database;
OPTIMIZE TABLE $table;" > /dev/null 2>&1
done

Make sure you are not using persistent connections. This is usually a bad idea..
If you've got that covered, then at most you will need to support as many connections as you have Apache processes. Are you able to change the max_connections setting?

Are you completely sure that the database server is dedicated to you?
Log on to the database as root and use "SHOW PROCESSLIST" to see who's connected. Ideally, hook this into your monitoring system to view how many connections there are over time and alert if there are too many.
The maximum database connections can be configured in my.cnf, but watch out for running out of memory or address space.
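If it helps, here is a rough PHP sketch of the kind of check you could run from a cron job or monitoring system (hypothetical host and credentials; assumes the account may run SHOW STATUS and SHOW VARIABLES):
<?php
// Hypothetical monitoring probe: report current vs. maximum connections.
$link = mysqli_connect('db-host.example.com', 'monitor_user', 'monitor_pass');
if (!$link) {
    die('Connect failed: ' . mysqli_connect_error());
}

$threads = mysqli_fetch_assoc(mysqli_query($link, "SHOW STATUS LIKE 'Threads_connected'"));
$max     = mysqli_fetch_assoc(mysqli_query($link, "SHOW VARIABLES LIKE 'max_connections'"));

printf("%d of %d connections in use\n", $threads['Value'], $max['Value']);

// Alert (here: just a warning line) when usage crosses 80% of the limit.
if ($threads['Value'] > 0.8 * $max['Value']) {
    echo "WARNING: connection usage above 80%\n";
}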

If you have shell access, use netstat to see how many sockets are opened to your database and where they come from.
On Linux, type:
netstat -n -a |grep 3306
On Windows, type:
netstat -n -a |findstr 3306

The solution could be one of these; I came across this in an MCQA test, and even I did not understand which one is right!
Set this in my.cnf: "set-variable=max_connections=200"
Execute the command "SET GLOBAL max_connections = 200"
Always use the mysql_connect() function to connect to the MySQL server
Always use the mysql_pconnect() function to connect to the MySQL server

The following are possible solutions:
1) Increase the max connections setting by setting the global variable in MySQL:
set global max_connections = 200;
Note: this will increase the server load.
2) Empty your connection pool as below:
FLUSH HOSTS;
3) Check your process list and kill specific processes if you don't want any of them.
You may refer to this article link.


PDOException (2006) SQLSTATE[HY000] [2006] MySQL server has gone away [duplicate]

I get this error when I try to source a large SQL file (a big INSERT query).
mysql> source file.sql
ERROR 2006 (HY000): MySQL server has gone away
No connection. Trying to reconnect...
Connection id: 2
Current database: *** NONE ***
ERROR 2006 (HY000): MySQL server has gone away
No connection. Trying to reconnect...
Connection id: 3
Current database: *** NONE ***
Nothing in the table is updated. I've tried deleting and undeleting the table/database, as well as restarting MySQL. None of these things resolve the problem.
Here is my max-packet size:
+--------------------+---------+
| Variable_name | Value |
+--------------------+---------+
| max_allowed_packet | 1048576 |
+--------------------+---------+
Here is the file size:
$ ls -s file.sql
79512 file.sql
When I try the other method...
$ ./mysql -u root -p my_db < file.sql
Enter password:
ERROR 2006 (HY000) at line 1: MySQL server has gone away
max_allowed_packet=64M
Adding this line to the my.cnf file solved my problem.
This is useful when the columns have large values which cause the issue; you can find the explanation here.
On Windows this file is located at: "C:\ProgramData\MySQL\MySQL Server 5.6"
On Linux (Ubuntu): /etc/mysql
You can increase Max Allowed Packet
SET GLOBAL max_allowed_packet=1073741824;
http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html#sysvar_max_allowed_packet
The global update and the my.cnf settings didn't work for me for some reason. Passing the max_allowed_packet value directly to the client worked here:
mysql -h <hostname> -u username -p --max_allowed_packet=1073741824 <databasename> < db.sql
In general the error:
Error: 2006 (CR_SERVER_GONE_ERROR) - MySQL server has gone away
means that the client couldn't send a question to the server.
mysql import
In your specific case, while importing the database file via mysql, this most likely means that some of the queries in the SQL file are too large to import and could not be executed on the server, so the client fails on the first error that occurs.
So you have the following possibilities:
Add the force option (-f) for mysql to proceed and execute the rest of the queries.
This is useful if the database has some large queries related to cache which aren't relevant anyway.
Increase max_allowed_packet and wait_timeout in your server config (e.g. ~/.my.cnf).
Dump the database using the --skip-extended-insert option to break down the large queries. Then import it again.
Try applying the --max-allowed-packet option for mysql.
Common reasons
In general this error could mean several things, such as:
a query to the server is incorrect or too large,
Solution: Increase the max_allowed_packet variable.
Make sure the variable is under the [mysqld] section, not [mysql].
Don't be afraid to use large numbers for testing (like 1G).
Don't forget to restart the MySQL/MariaDB server.
Double check the value was set properly by:
mysql -sve "SELECT @@max_allowed_packet" # or:
mysql -sve "SHOW VARIABLES LIKE 'max_allowed_packet'"
You got a timeout from the TCP/IP connection on the client side.
Solution: Increase wait_timeout variable.
You tried to run a query after the connection to the server has been closed.
Solution: A logic error in the application should be corrected.
Host name lookups failed (e.g. DNS server issue), or server has been started with --skip-networking option.
Another possibility is that your firewall blocks the MySQL port (e.g. 3306 by default).
The running thread has been killed, so retry again.
You have encountered a bug where the server died while executing the query.
A client running on a different host does not have the necessary privileges to connect.
And many more, so learn more at: B.5.2.9 MySQL server has gone away.
Debugging
Here are a few expert-level debugging ideas:
Check the logs, e.g.
sudo tail -f $(mysql -Nse "SELECT @@GLOBAL.log_error")
Test your connection via mysql, telnet or ping functions (e.g. mysql_ping in PHP; a short PHP sketch follows this list).
Use tcpdump to sniff the MySQL communication (won't work for socket connection), e.g.:
sudo tcpdump -i lo0 -s 1500 -nl -w- port mysql | strings
On Linux, use strace. On BSD/Mac use dtrace/dtruss, e.g.
sudo dtruss -a -fn mysqld 2>&1
See: Getting started with DTracing MySQL
Learn more how to debug MySQL server or client at: 26.5 Debugging and Porting MySQL.
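As a rough illustration of the mysql_ping idea mentioned above (hypothetical host and credentials; mysqli::ping() wraps the same underlying C call):
<?php
// Check whether the server connection is still alive before running a query.
$db = new mysqli('db-host.example.com', 'db_user', 'db_pass', 'my_database');
if ($db->connect_errno) {
    die('Connect failed: ' . $db->connect_error);
}

if ($db->ping()) {
    echo "Connection is alive\n";
} else {
    // Error 2006 here would mean "MySQL server has gone away".
    printf("Ping failed: (%d) %s\n", $db->errno, $db->error);
}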
For reference, check the source code in sql-common/client.c file responsible for throwing the CR_SERVER_GONE_ERROR error for the client command.
MYSQL_TRACE(SEND_COMMAND, mysql, (command, header_length, arg_length, header, arg));
if (net_write_command(net, (uchar) command, header, header_length,
                      arg, arg_length))
{
  set_mysql_error(mysql, CR_SERVER_GONE_ERROR, unknown_sqlstate);
  goto end;
}
I solved the error ERROR 2006 (HY000) at line 97: MySQL server has gone away and successfully migrated a >5GB sql file by performing these two steps in order:
Created /etc/my.cnf as others have recommended, with the following contents:
[mysql]
connect_timeout = 43200
max_allowed_packet = 2048M
net_buffer_length = 512M
debug-info = TRUE
Appending the flags --force --wait --reconnect to the command (i.e. mysql -u root -p -h localhost my_db < file.sql --verbose --force --wait --reconnect).
Important Note: It was necessary to perform both steps, because if I didn't bother making the changes to /etc/my.cnf file as well as appending those flags, some of the tables were missing after the import.
System used: OSX El Capitan 10.11.5; mysql Ver 14.14 Distrib 5.5.51 for osx10.8 (i386)
Just in case, to check variables you can use
$> mysqladmin variables -u user -p
This will display the current variables, in this case max_allowed_packet, and as someone said in another answer you can set it temporarily with
mysql> SET GLOBAL max_allowed_packet=1072731894
In my case the cnf file was not taken into account and I don't know why, so the SET GLOBAL code really helped.
You can also log into the database as root (or with the SUPER privilege) and run:
set global max_allowed_packet=64*1024*1024;
This doesn't require a MySQL restart. Note that you should also fix your my.cnf file as outlined in other solutions:
[mysqld]
max_allowed_packet=64M
And confirm the change after you've restarted MySQL:
show variables like 'max_allowed_packet';
You can use the command-line as well, but that may require updating the start/stop scripts which may not survive system updates and patches.
As requested, I'm adding my own answer here. Glad to see it works!
The solution is increasing the values of the wait_timeout and connect_timeout parameters in your options file, under the [mysqld] tag.
I had to recover a 400MB mysql backup and this worked for me (the values I've used below are a bit exaggerated, but you get the point):
[mysqld]
port=3306
explicit_defaults_for_timestamp = TRUE
connect_timeout = 1000000
net_write_timeout = 1000000
wait_timeout = 1000000
max_allowed_packet = 1024M
interactive_timeout = 1000000
net_buffer_length = 200M
net_read_timeout = 1000000
set GLOBAL delayed_insert_timeout=100000
I had the same problem, but changing max_allowed_packet in the my.ini/my.cnf file under [mysqld] did the trick.
Add a line:
max_allowed_packet=500M
Now restart the MySQL service once you are done.
A couple of things could be happening here:
Your INSERT is running long, and the client is disconnecting. When it reconnects, it's not selecting a database, hence the error. One option here is to run your batch file from the command line and select the database in the arguments, like so:
$ mysql db_name < source.sql
Another is to run your command via PHP or some other language. After each long-running statement, you can close and re-open the connection, ensuring that you're connected at the start of each query.
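Here is a hedged PHP sketch of that reconnect-per-statement idea (hypothetical file name, host and credentials; the naive split on ';' assumes the dump has no semicolons inside string literals):
<?php
// Import a dump one statement at a time, reconnecting between statements.
$statements = array_filter(array_map('trim', explode(';', file_get_contents('source.sql'))));

foreach ($statements as $sql) {
    // Fresh connection for each statement, so a timeout on one long
    // statement does not kill the rest of the import.
    $db = new mysqli('db-host.example.com', 'db_user', 'db_pass', 'db_name');
    if ($db->connect_errno) {
        die('Connect failed: ' . $db->connect_error);
    }
    if (!$db->query($sql)) {
        printf("Failed: %s\n%s\n", $db->error, $sql);
    }
    $db->close();
}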
If you are on Mac and installed mysql through brew like me, the following worked.
cp $(brew --prefix mysql)/support-files/my-default.cnf /usr/local/etc/my.cnf
Source: For homebrew mysql installs, where's my.cnf?
add max_allowed_packet=1073741824 to /usr/local/etc/my.cnf
mysql.server restart
I had the same problem in XAMPP.
Method 1: I changed max_allowed_packet in the D:\xampp\mysql\bin\my.ini file like below:
max_allowed_packet=500M
Finally restart the MySQL service once and done.
Method 2:
The easier way if you are using XAMPP: open the XAMPP control panel and click on the config button in the MySQL section.
Now click on my.ini and it will open in the editor. Update max_allowed_packet to your required size.
Then restart the MySQL service: click stop on the MySQL service, then click start again. Wait for a few minutes.
Then try to run your MySQL query again. Hope it will work.
I encountered this error when using MySQL Cluster; I do not know whether this question is about a cluster setup or not, but as the error is exactly the same, I'll give my solution here.
I was getting this error because the data nodes suddenly crashed. When the nodes crash, you can still get a correct result using the command:
ndb_mgm -e 'ALL REPORT MEMORYUSAGE'
And mysqld also appears to work correctly, so at first I could not understand what was wrong. About 5 minutes later, the ndb_mgm result showed no data node working, and then I realised the problem. So, try to restart all the data nodes; then the MySQL server comes back and everything is OK.
One thing was weird to me, though: after I lost the MySQL server for some queries, when I ran a command like show tables, I could still get a return like 33 rows in set (5.57 sec), but no table info was displayed.
This error message also occurs when you created the SCHEMA with a different COLLATION than the one which is used in the dump. So, if the dump contains
CREATE TABLE `mytab` (
..
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;
you should also reflect this in the SCHEMA collation:
CREATE SCHEMA myschema COLLATE utf8_unicode_ci;
I had been using utf8mb4_general_ci in the schema, because my script came from a fresh v8 installation; loading the DB on an old 5.7 server crashed and drove me nearly crazy.
So, maybe this helps you save some frustrating hours... :-)
(MacOS 10.3, mysql 5.7)
Add max_allowed_packet=64M to [mysqld]
[mysqld]
max_allowed_packet=64M
Restart the MySQL server.
If it's reconnecting and getting connection ID 2, the server has almost definitely just crashed.
Contact the server admin and get them to diagnose the problem. No non-malicious SQL should crash the server, and the output of mysqldump certainly should not.
It is probably the case that the server admin has made some big operational error such as assigning buffer sizes of greater than the architecture's address-space limits, or more than virtual memory capacity. The MySQL error-log will probably have some relevant information; they will be monitoring this if they are competent anyway.
This is more of a rare issue but I have seen this if someone has copied the entire /var/lib/mysql directory as a way of migrating their DB to another server. The reason it doesn't work is because the database was running and using log files. It doesn't work sometimes if there are logs in /var/log/mysql. The solution is to copy the /var/log/mysql files as well.
For amazon RDS (it's my case), you can change the max_allowed_packet parameter value to any numeric value in bytes that makes sense for the biggest data in any insert you may have (e.g.: if you have some 50mb blob values in your insert, set the max_allowed_packet to 64M = 67108864), in a new or existing parameter-group. Then apply that parameter-group to your MySQL instance (may require rebooting the instance).
For Drupal 8 users looking for a solution to a DB import failure:
At the end of the SQL dump file there can be commands inserting data into the "webprofiler" table.
That is, I guess, some debug log table and not really important for the site to work, so all of it can be removed. I deleted all those inserts including LOCK TABLES and UNLOCK TABLES (and everything in between). It's at the very bottom of the SQL file. The issue is described here:
https://www.drupal.org/project/devel/issues/2723437
But there is no solution for it besides truncating that table.
BTW, I tried all the solutions from the answers above and nothing else helped.
I tried all of the above solutions, and all failed.
I ended up using -h 127.0.0.1 instead of the default var/run/mysqld/mysqld.sock.
If you have tried all these solutions, especially increasing max_allowed_packet up to the maximum supported amount of 1GB, and you are still seeing these errors, it might be that your server literally does not have enough free RAM available...
The solution: upgrade your server with more RAM, and try again.
Note: I'm surprised this simple solution has not been mentioned after 8+ years of discussion on this thread... sometimes we developers tend to overthink things.
Eliminating the errors that triggered warnings was the final solution for me. I also changed max_allowed_packet, which helped with smaller files that contained errors. Eliminating the errors also sped up the process considerably.
If none of these answers solves the problem for you, I solved it by removing the tables and creating them again automatically in this way:
When creating the backup, first back up the structure and be sure to add:
DROP TABLE / VIEW / PROCEDURE / FUNCTION / EVENT
CREATE PROCEDURE / FUNCTION / EVENT
IF NOT EXISTS
AUTO_INCREMENT
Then just use this backup with your DB and it will remove and recreate the tables you need.
Then back up just the data, do the same, and it will work.
How about using the mysql client like this:
mysql -h <hostname> -u username -p <databasename> < file.sql

PHP mysql persistent connection not reused ( opens more than one connection per fpm-worker )

I'm facing really weird behaviour while testing persistent connections from PHP to MySQL. I have a small script that looks like this:
<?php
$db = new mysqli('p:db-host','user','pass','schema');
$res = $db->query('select * from users limit 1');
print_r($res->fetch_assoc());
My setup is :
OS: CentOS 7.3
PHP/7.1.18
php-fpm
nginx/1.10.2
MySQL-5.6.30
I tried to do some requests with ab:
$ ab -c 100 -n 500 http://mysite/my_test_script.php
PHP-FPM was configured to have 150 workers ready, and I saw what I was expecting: 150 established connections to MySQL, which stayed open after ab finished. I launched ab once again, and the behaviour was still the same: 150 connections, no new connections were opened. All fine. Then I created a script which did the same exact requests, same IP, same HTTP headers, but used curl to make the request, and BOOM, I had 300 connections on MySQL instead of 150. I launched the script again and still had 300 connections. Subsequent runs of the same script didn't increase the number of connections. Has anyone ever faced anything like this? Does anyone know what could make PHP open more connections than needed? Am I missing something obvious?
If it's not clear what I'm asking, please comment below and I will try to explain my problem better.
P.S. I tried this with PDO too, same behaviour.
EDIT: My tests were not accurate
After further testing I noticed that my first tests were not accurate. I was in a multi-tenant environment, and different connections (different schemas) were initialized when I launched ab. In my case the PHP documentation was a bit misleading; it says:
PHP checks if there's already an identical persistent connection (that remained open from earlier) - and if it exists, it uses it. If it does not exist, it creates the link. An 'identical' connection is a connection that was opened to the same host, with the same username and the same password (where applicable).
http://php.net/manual/en/features.persistent-connections.php
Maybe it's obvious to everyone, I don't know; it was not for me. Passing the 4th parameter to mysqli made PHP consider the connections not identical. Once I changed my code to something like this:
<?php
$db = new mysqli('p:db-host','user','pass');
$db->select_db('schema');
$res = $db->query('select * from users limit 1');
print_r($res->fetch_assoc());
The application started to behave as I expected: one connection per worker.

General error: 5 database is locked in PDO using sqlite [duplicate]

When I enter this query:
sqlite> DELETE FROM mails WHERE (id = 71);
SQLite returns this error:
SQL error: database is locked
How do I unlock the database so this query will work?
On Windows you can try this program http://www.nirsoft.net/utils/opened_files_view.html to find out which process is handling the db file. Close that program to unlock the database.
On Linux and macOS you can do something similar; for example, if your locked file is development.db:
$ fuser development.db
This command will show what process is locking the file:
> development.db: 5430
Just kill the process...
kill -9 5430
...And your database will be unlocked.
I caused my SQLite db to become locked by crashing an app during a write. Here is how I fixed it:
echo ".dump" | sqlite old.db | sqlite new.db
Taken from: http://random.kakaopor.hu/how-to-repair-an-sqlite-database
The SQLite wiki DatabaseIsLocked page offers an explanation of this error message. It states, in part, that the source of contention is internal (to the process emitting the error). What this page doesn't explain is how SQLite decides that something in your process holds a lock and what conditions could lead to a false positive.
This error code occurs when you try to do two incompatible things with a database at the same time from the same database connection.
Changes related to file locking were introduced in v3 and may be useful for future readers; they can be found here: File Locking And Concurrency In SQLite Version 3
If you want to remove a "database is locked" error then follow these steps:
Copy your database file to some other location.
Replace the database with the copied database. This will dereference all processes which were accessing your database file.
Deleting the -journal file sounds like a terrible idea. It's there to allow sqlite to roll back the database to a consistent state after a crash. If you delete it while the database is in an inconsistent state, then you're left with a corrupted database. Citing a page from the sqlite site:
If a crash or power loss does occur and a hot journal is left on the disk, it is essential that the original database file and the hot journal remain on disk with their original names until the database file is opened by another SQLite process and rolled back. [...]
We suspect that a common failure mode for SQLite recovery happens like this: A power failure occurs. After power is restored, a well-meaning user or system administrator begins looking around on the disk for damage. They see their database file named "important.data". This file is perhaps familiar to them. But after the crash, there is also a hot journal named "important.data-journal". The user then deletes the hot journal, thinking that they are helping to cleanup the system. We know of no way to prevent this other than user education.
The rollback is supposed to happen automatically the next time the database is opened, but it will fail if the process can't lock the database. As others have said, one possible reason for this is that another process currently has it open. Another possibility is a stale NFS lock, if the database is on an NFS volume. In that case, a workaround is to replace the database file with a fresh copy that isn't locked on the NFS server (mv database.db original.db; cp original.db database.db). Note that the sqlite FAQ recommends caution regarding concurrent access to databases on NFS volumes, because of buggy implementations of NFS file locking.
I can't explain why deleting a -journal file would let you lock a database that you couldn't before. Is that reproducible?
By the way, the presence of a -journal file doesn't necessarily mean that there was a crash or that there are changes to be rolled back. Sqlite has a few different journal modes, and in PERSIST or TRUNCATE modes it leaves the -journal file in place always, and changes the contents to indicate whether or not there are partial transactions to roll back.
The SQLite db files are just files, so the first step would be to make sure the file isn't read-only. The other thing to do is to make sure that you don't have some sort of GUI SQLite DB viewer with the DB open. You could have the DB open in another shell, or your code may have the DB open. Typically you would see this if a different thread, or an application such as SQLite Database Browser, has the DB open for writing.
My lock was caused by the system crashing and not by a hanging process. To resolve this, I simply renamed the file then copied it back to its original name and location.
Using a Linux shell that would be:
mv mydata.db temp.db
cp temp.db mydata.db
If a process has a lock on an SQLite DB and crashes, the DB stays locked permanently. That's the problem. It's not that some other process has a lock.
I had this problem just now, using an SQLite database on a remote server, stored on an NFS mount. SQLite was unable to obtain a lock after the remote shell session I used had crashed while the database was open.
The recipes for recovery suggested above did not work for me (including the idea to first move and then copy the database back). But after copying it to a non-NFS system, the database became usable and no data appears to have been lost.
Some functions, like INDEX'ing, can take a very long time - and it locks the whole database while it runs. In instances like that, it might not even use the journal file!
So the best/only way to check if your database is locked because a process is ACTIVELY writing to it (and thus you should leave it the hell alone until its completed its operation) is to md5 (or md5sum on some systems) the file twice.
If you get a different checksum, the database is being written, and you really really REALLY don't want to kill -9 that process because you can easily end up with a corrupt table/database if you do.
I'll reiterate, because it's important - the solution is NOT to find the locking program and kill it - it's to find if the database has a write lock for a good reason, and go from there. Sometimes the correct solution is just a coffee break.
The only way to create this locked-but-not-being-written-to situation is if your program runs BEGIN EXCLUSIVE, because it wanted to do some table alterations or something, then for whatever reason never sends an END afterwards, and the process never terminates. All three conditions being met is highly unlikely in any properly-written code, and as such 99 times out of 100 when someone wants to kill -9 their locking process, the locking process is actually locking your database for a good reason. Programmers don't typically add the BEGIN EXCLUSIVE condition unless they really need to, because it prevents concurrency and increases user complaints. SQLite itself only adds it when it really needs to (like when indexing).
Finally, the 'locked' status does not exist INSIDE the file as several answers have stated - it resides in the Operating System's kernel. The process which ran BEGIN EXCLUSIVE has requested from the OS a lock be placed on the file. Even if your exclusive process has crashed, your OS will be able to figure out if it should maintain the file lock or not!! It is not possible to end up with a database which is locked but no process is actively locking it!!
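A tiny PHP sketch of that "checksum it twice" check (hypothetical file path; it simply compares two checksums taken a few seconds apart):
<?php
// If the checksum changes between the two reads, something is actively
// writing to the database file, so leave it alone.
$file = '/path/to/mydata.db';

$first = md5_file($file);
sleep(5);
$second = md5_file($file);

echo ($first === $second)
    ? "No writes detected in the last 5 seconds\n"
    : "Database is being written to; do not kill the locking process\n";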
When it comes to seeing which process is locking the file, it's typically better to use lsof rather than fuser (this is a good demonstration of why: https://unix.stackexchange.com/questions/94316/fuser-vs-lsof-to-check-files-in-use). Alternatively if you have DTrace (OSX) you can use iosnoop on the file.
I added "Pooling=true" to connection string and it worked.
This error can be thrown if the file is in a remote folder, like a shared folder. I changed the database to a local directory and it worked perfectly.
I found the documentation of the various states of locking in SQLite to be very helpful. Michael, if you can perform reads but can't perform writes to the database, that means that a process has gotten a RESERVED lock on your database but hasn't executed the write yet. If you're using SQLite3, there's a new lock called PENDING where no more processes are allowed to connect but existing connections can still perform reads, so if this is the issue you should look at that instead.
I had this problem in an app which accesses SQLite from 2 connections: one read-only, and a second for reading and writing. It looks like the read-only connection was blocking writes from the second connection. In the end, it turned out to be necessary to finalize or, at least, reset prepared statements IMMEDIATELY after use. While a prepared statement stays open, it causes the database to be blocked for writing.
DON'T FORGET TO CALL:
sqlite3_reset(xxx);
or
sqlite3_finalize(xxx);
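For what it's worth, the PDO equivalent of that advice might look like this (a hedged sketch with a hypothetical file path; closeCursor() releases the statement's read lock, much like resetting or finalizing the statement in the C API):
<?php
$pdo = new PDO('sqlite:/path/to/mydata.db');

$stmt = $pdo->query('SELECT id FROM mails WHERE id = 71');
$row  = $stmt->fetch(PDO::FETCH_ASSOC);
$stmt->closeCursor();   // release the read lock held by the open statement

// The write no longer competes with the still-open SELECT above.
$pdo->exec('DELETE FROM mails WHERE id = 71');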
I just had something similar happen to me - my web application was able to read from the database, but could not perform any inserts or updates. A reboot of Apache solved the issue at least temporarily.
It'd be nice, however, to be able to track down the root cause.
The lsof command on my Linux environment helped me figure out that a hung process was keeping the file open.
I killed the process and the problem was solved.
This link solved the problem for me: When Sqlite gives : Database locked error
It solved my problem and may be useful to you.
You can also use begin transaction and end transaction so that the database does not stay locked in the future.
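A minimal sketch of wrapping a write in an explicit transaction with PDO (hypothetical file path); the lock is held for a short, well-defined window and always released:
<?php
$pdo = new PDO('sqlite:/path/to/mydata.db');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$pdo->beginTransaction();          // BEGIN TRANSACTION
try {
    $pdo->exec('DELETE FROM mails WHERE id = 71');
    $pdo->commit();                // END TRANSACTION (commit)
} catch (Exception $e) {
    $pdo->rollBack();              // release the lock even on failure
    throw $e;
}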
It should be an internal problem of the database...
For me it manifested after trying to browse the database with "SQLite Manager"...
So, if you can't find another process connected to the database and you just can't fix it, try this radical solution:
Export your tables (you can use "SQLite Manager" on Firefox)
If the migration altered your database schema, delete the last failed migration
Rename your "database.sqlite" file
Execute "rake db:migrate" to make a new working database
Set the right permissions on the database for importing the tables
Import your backed-up tables
Write the new migration
Execute it with "rake db:migrate"
In my experience, this error is caused by having multiple connections open, e.g.:
1 or more sqlitebrowser (GUI) instances
1 or more electron threads
a rails thread
I am not sure about the details of how SQLite3 handles multiple threads/requests, but when I close sqlitebrowser and the electron thread, rails runs fine and doesn't block any more.
I ran into this same problem on Mac OS X 10.5.7 running Python scripts from a terminal session. Even though I had stopped the scripts and the terminal window was sitting at the command prompt, it would give this error the next time it ran. The solution was to close the terminal window and then open it up again. Doesn't make sense to me, but it worked.
I just had the same error.
After 5 minutes of googling I found that I hadn't closed one shell which was using the db.
Just close it and try again ;)
I had the same problem. Apparently the rollback function seems to overwrite the db file with the journal, which is the same as the db file but without the most recent change. I've implemented this in my code below and it's been working fine since then, whereas before my code would just get stuck in the loop as the database stayed locked.
Hope this helps
My Python code:
import sqlite3 as sqlite
import time

db_path = "my_database.db"  # path to your database file

##############
#### Defs ####
##############
def conn_exec(connection, cursor, cmd_str):
    done = False
    try_count = 0.0
    while not done:
        try:
            cursor.execute(cmd_str)
            done = True
        except sqlite.IntegrityError:
            # Ignore this error because it means the item already exists in the database
            done = True
        except Exception as error:
            if try_count % 60.0 == 0.0:  # print error every minute
                print("\t", "Error executing command", cmd_str)
                print("Message:", error)
            if try_count % 120.0 == 0.0:  # if waited for 2 minutes, roll back
                print("Forcing Unlock")
                connection.rollback()
            time.sleep(0.05)
            try_count += 0.05

def conn_commit(connection):
    done = False
    try_count = 0.0
    while not done:
        try:
            connection.commit()
            done = True
        except sqlite.IntegrityError:
            # Ignore this error because it means the item already exists in the database
            done = True
        except Exception as error:
            if try_count % 60.0 == 0.0:  # print error every minute
                print("\t", "Error committing transaction")
                print("Message:", error)
            if try_count % 120.0 == 0.0:  # if waited for 2 minutes, roll back
                print("Forcing Unlock")
                connection.rollback()
            time.sleep(0.05)
            try_count += 0.05

##################
#### Run Code ####
##################
connection = sqlite.connect(db_path)
cursor = connection.cursor()
# Create tables if the database does not exist
conn_exec(connection, cursor, '''CREATE TABLE IF NOT EXISTS fix (path TEXT PRIMARY KEY);''')
conn_exec(connection, cursor, '''CREATE TABLE IF NOT EXISTS tx (path TEXT PRIMARY KEY);''')
conn_exec(connection, cursor, '''CREATE TABLE IF NOT EXISTS completed (fix DATE, tx DATE);''')
conn_commit(connection)
One common reason for getting this exception is when you are trying to do a write operation while still holding resources for a read operation. For example, if you SELECT from a table, and then try to UPDATE something you've selected without closing your ResultSet first.
I was having "database is locked" errors in a multi-threaded application as well, which appears to be the SQLITE_BUSY result code, and I solved it with setting sqlite3_busy_timeout to something suitably long like 30000.
(On a side-note, how odd that on a 7 year old question nobody found this out already! SQLite really is a peculiar and amazing project...)
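In PHP's PDO, which the question uses, the equivalent knob is PDO::ATTR_TIMEOUT; for the SQLite driver it sets the busy timeout, in seconds. A minimal sketch (hypothetical file path):
<?php
$pdo = new PDO('sqlite:/path/to/mydata.db');

// Wait up to 30 seconds for a lock instead of failing immediately
// with "General error: 5 database is locked".
$pdo->setAttribute(PDO::ATTR_TIMEOUT, 30);

$pdo->exec('DELETE FROM mails WHERE id = 71');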
Before going down the reboot option, it is worthwhile to see if you can find the user of the sqlite database.
On Linux, one can employ fuser to this end:
$ fuser database.db
$ fuser database.db-journal
In my case I got the following response:
philip 3556 4700 0 10:24 pts/3 00:00:01 /usr/bin/python manage.py shell
Which showed that I had another Python program with pid 3556 (manage.py) using the database.
An old question with a lot of answers. Here are the steps I recently followed after reading the answers above; in my case the problem was due to cifs resource sharing. This case has not been reported previously, so I hope it helps someone.
Check no connections are left open in your java code.
Check no other processes are using your SQLite db file with lsof.
Check the user owner of your running jvm process has r/w permissions over the file.
Try to force the lock mode on the connection opening with
final SQLiteConfig config = new SQLiteConfig();
config.setReadOnly(false);
config.setLockingMode(LockingMode.NORMAL);
connection = DriverManager.getConnection(url, config.toProperties());
If you're using your SQLite db file over an NFS shared folder, check this point of the SQLite FAQ, and review your mounting configuration options to make sure you're avoiding locks, as described here:
//myserver /mymount cifs username=*****,password=*****,iocharset=utf8,sec=ntlm,file,nolock,file_mode=0700,dir_mode=0700,uid=0500,gid=0500 0 0
I got this error in a scenario a little different from the ones described here.
The SQLite database rested on an NFS filesystem shared by 3 servers. On 2 of the servers I was able to run queries on the database successfully; on the third one, though, I was getting the "database is locked" message.
The thing with this 3rd machine was that it had no space left on /var. Every time I tried to run a query in ANY SQLite database located on this filesystem I got the "database is locked" message and also this error in the logs:
Aug 8 10:33:38 server01 kernel: lockd: cannot monitor 172.22.84.87
And this one also:
Aug 8 10:33:38 server01 rpc.statd[7430]: Failed to insert: writing /var/lib/nfs/statd/sm/other.server.name.com: No space left on device
Aug 8 10:33:38 server01 rpc.statd[7430]: STAT_FAIL to server01 for SM_MON of 172.22.84.87
After the space situation was handled everything got back to normal.
If you're trying to unlock the Chrome database to view it with SQLite, then just shut down Chrome.
Windows
%userprofile%\Local Settings\Application Data\Google\Chrome\User Data\Default\Web Data
or
%userprofile%\Local Settings\Application Data\Google\Chrome\User Data\Default\Chrome Web Data
Mac
~/Library/Application Support/Google/Chrome/Default/Web Data
From your previous comments you said a -journal file was present.
This could mean that you have opened an (EXCLUSIVE?) transaction and have not yet committed the data. Did your program or some other process leave the -journal behind?
Restarting the sqlite process will look at the journal file and clean up any uncommitted actions and remove the -journal file.
As Seun Osewa has said, sometimes a zombie process will sit in the terminal with a lock acquired, even if you don't think it possible. Your script runs, crashes, and you go back to the prompt, but there's a zombie process spawned somewhere by a library call, and that process has the lock.
Closing the terminal you were in (on OSX) might work. Rebooting will work. You could look for "python" processes (for example) that are not doing anything, and kill them.

PHP / MYSQL connection failures under heavy load through mysql.sock

I've done quite a bit of reading before asking this, so let me preface by saying I am not running out of connections, or memory, or cpu, and from what I can tell, I am not running out of file descriptors either.
Here's what PHP throws at me when MySQL is under heavy load:
Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (11 "Resource temporarily unavailable")
This happens randomly under load - but the more I push, the more frequently php throws this at me. While this is happening I can always connect locally through the console and from PHP through 127.0.0.1 instead of "localhost" which uses the faster unix socket.
Here's a few system variables to weed out the usual problems:
cat /proc/sys/fs/file-max = 4895952
lsof | wc -l = 215778 (during "outages")
Highest usage of available connections: 26% (261/1000)
InnoDB buffer pool / data size: 10.0G/3.7G (plenty o room)
soft nofile 999999
hard nofile 999999
I am actually running MariaDB (Server version: 10.0.17-MariaDB MariaDB Server)
These results are generated both under normal load, and by running mysqlslap during off hours, so, slow queries are not an issue - just high connections.
Any advice? I can report additional settings/data if necessary - mysqltuner.pl says everything is a-ok
and again, the revealing thing here is that connecting via IP works just fine and is fast during these outages - I just can't figure out why.
Edit: here is my my.ini (some values may seem a bit high from my recent troubleshooting changes, and please keep in mind that there are no errors in the MySQL logs, system logs, or dmesg)
socket=/var/lib/mysql/mysql.sock
skip-external-locking
skip-name-resolve
table_open_cache=8092
thread_cache_size=16
back_log=3000
max_connect_errors=10000
interactive_timeout=3600
wait_timeout=600
max_connections=1000
max_allowed_packet=16M
tmp_table_size=64M
max_heap_table_size=64M
sort_buffer_size=1M
read_buffer_size=1M
read_rnd_buffer_size=8M
join_buffer_size=1M
innodb_log_file_size=256M
innodb_log_buffer_size=8M
innodb_buffer_pool_size=10G
[mysql.server]
user=mysql
[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
open-files-limit=65535
Most likely it is due to net.core.somaxconn.
What is the value of /proc/sys/net/core/somaxconn?
net.core.somaxconn
# The maximum number of "backlogged sockets". Default is 128.
These are connections in the queue which are not yet accepted. Anything above that queue will be rejected. I suspect this in your case. Try increasing it according to your load.
as root user run
echo 1024 > /proc/sys/net/core/somaxconn
This is something that can and should be solved by analysis. Learning how to do this is a great skill to have.
Analysis to find out just what is happening under heavy load (number of queries, execution times) should be your first step. Determine the load and then make the proper db config settings. You might find you need to optimize the SQL queries instead!
Then make sure the PHP db driver settings are in alignment as well to fully utilize the database connections.
Here is a link to the MariaDB threadpool documentation. I know it says version 5.5, but it's still relevant and the page does reference version 10. There are settings listed that may not be in your .cnf file that you can use.
https://mariadb.com/kb/en/mariadb/threadpool-in-55/
From the top of my head, I can think of max_connections as a possible source of the problem. I'd increase the limit, to at least eliminate the possibility.
Hope it helps.

PHP: Long page not displayed / Connection was reset

I have a page listing a few records from the db.
After upgrading to PHP 5.3, a page printing a long list of records is not displayed; Explorer says "Connection was reset".
I changed the SQL query in the code to limit the records and then the page was shown correctly.
So it seems to be some kind of timeout.
I've tried to find settings in php.ini and httpd.conf and changed everything that sounded related to a timeout, but nothing happened.
Any idea how to make it work?
EDIT
The page resets after ~2 secs, so it is not an extremely long time...
EDIT-2
I've tried setting the PHP vars: max_execution_time, max_input_time, memory_limit
WAMPServer 2 (PHP 5.3, Apache 2.2.11)
At the top of your .php file, insert something like:
set_time_limit(120);
That sets the timeout for the script to 2 minutes. Increase it as you need.
I would recommend avoiding this problem by paginating your results, otherwise you're opening yourself up to a world of trouble. Slow pages are an open door for a denial of service attack by resource exhaustion.
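A rough sketch of that pagination idea (hypothetical table and column names; PDO with the page size and offset cast to plain integers before being placed in the query):
<?php
// Show 50 records per page instead of the whole table at once.
$perPage = 50;
$page    = isset($_GET['page']) ? max(1, (int) $_GET['page']) : 1;
$offset  = ($page - 1) * $perPage;

$pdo = new PDO('mysql:host=localhost;dbname=my_database', 'db_user', 'db_pass');

// $perPage and $offset are plain integers, so they are safe to inline here.
$stmt = $pdo->query("SELECT id, title FROM records ORDER BY id LIMIT $perPage OFFSET $offset");

foreach ($stmt as $row) {
    echo htmlspecialchars($row['title']), "<br>\n";
}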
