How to monitor MySQL queries - php

Say I browse a PHP page, how do I know what database queries are run?
I think that if I could log all the queries to a .txt file, that would solve my problem. I tried to set up logging but failed. I just want to see the queries (SQL strings) sent to the database.
I'm using WinXP and Apache.

One way to do it is to go into your my.cnf configuration file and activate the general log. As hinted, it is a performance killer, so never activate it in production. For development it is perfectly OK, though; on my laptop it is on all the time.
#
# * Logging and Replication
#
# Both location gets rotated by the cronjob.
# Be aware that this log type is a performance killer.
# As of 5.1 you can enable the log at runtime!
general_log_file = /var/log/mysql/mysql.log
general_log = 1
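As the comment above notes, from 5.1 on you can also switch the general log on at runtime instead of editing my.cnf; a quick sketch from the mysql client (the file path is just an example):
SET GLOBAL log_output = 'FILE';
SET GLOBAL general_log_file = '/var/log/mysql/mysql.log';
SET GLOBAL general_log = 'ON';
-- browse your PHP page, watch the log file, then turn it back off:
SET GLOBAL general_log = 'OFF';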

Find your mysql log files directory - mine was:
C:\Program Files\MySQL\MySQL Server 5.0\data
Look for the most recently updated file. It should be NAME OF YOUR COMPUTER.log - that will be your default log file** - MySQL is generally shipped with basic logging enabled, IIRC.
Otherwise, as has been said, set up logging in your .cnf file and restart the MySQL service.
** Make yourself a shortcut to this file on your desktop; you should be looking at it often.
When the file gets too big, delete it and restart the MySQL service; it will start a new one with the same name.

Just copy the string you pass to mysql_query() into an echo right before the query.
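If you'd rather append every query to a .txt file (as the question asks), here is a minimal sketch of a logging wrapper around the legacy mysql_query() API used above; the log path is just an example:
<?php
// Hypothetical helper: append the SQL string to a text file, then run it.
function logged_query($sql) {
    file_put_contents('C:/query_log.txt', $sql . PHP_EOL, FILE_APPEND);
    return mysql_query($sql);
}
Call logged_query($sql) wherever you currently call mysql_query($sql).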

Related

PDOException (2006) SQLSTATE[HY000] [2006] MySQL server has gone away [duplicate]

I get this error when I try to source a large SQL file (a big INSERT query).
mysql> source file.sql
ERROR 2006 (HY000): MySQL server has gone away
No connection. Trying to reconnect...
Connection id: 2
Current database: *** NONE ***
ERROR 2006 (HY000): MySQL server has gone away
No connection. Trying to reconnect...
Connection id: 3
Current database: *** NONE ***
Nothing in the table is updated. I've tried deleting and re-creating the table/database, as well as restarting MySQL. None of these things resolves the problem.
Here is my max-packet size:
+--------------------+---------+
| Variable_name | Value |
+--------------------+---------+
| max_allowed_packet | 1048576 |
+--------------------+---------+
Here is the file size:
$ ls -s file.sql
79512 file.sql
When I try the other method...
$ ./mysql -u root -p my_db < file.sql
Enter password:
ERROR 2006 (HY000) at line 1: MySQL server has gone away
max_allowed_packet=64M
Adding this line to the my.cnf file solved my problem.
This is useful when the columns have large values, which cause the issue; you can find the explanation here.
On Windows this file is located at: "C:\ProgramData\MySQL\MySQL Server 5.6"
On Linux (Ubuntu): /etc/mysql
You can increase Max Allowed Packet
SET GLOBAL max_allowed_packet=1073741824;
http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html#sysvar_max_allowed_packet
The global update and the my.cnf settings didn't work for me for some reason. Passing the max_allowed_packet value directly to the client worked here:
mysql -h <hostname> -u username -p --max_allowed_packet=1073741824 <databasename> < db.sql
In general the error:
Error: 2006 (CR_SERVER_GONE_ERROR) - MySQL server has gone away
means that the client couldn't send a question to the server.
mysql import
In your specific case, while importing the database file via mysql, this most likely means that some of the queries in the SQL file are too large to import and couldn't be executed on the server; the client therefore fails on the first error that occurs.
So you have the following possibilities (see the example after this list):
Add the force option (-f) for mysql to proceed and execute the rest of the queries.
This is useful if the database has some large queries related to cache which aren't relevant anyway.
Increase max_allowed_packet and wait_timeout in your server config (e.g. ~/.my.cnf).
Dump the database using the --skip-extended-insert option to break down the large queries, then import it again.
Try applying the --max-allowed-packet option for mysql.
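For example, a re-dump and forced re-import along the lines above might look like this (database name and packet size are placeholders):
mysqldump --skip-extended-insert -u root -p mydb > mydb.sql
mysql -f --max-allowed-packet=268435456 -u root -p mydb < mydb.sql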
Common reasons
In general this error could mean several things, such as:
a query to the server is incorrect or too large,
Solution: Increase max_allowed_packet variable.
Make sure the variable is under [mysqld] section, not [mysql].
Don't be afraid to use large numbers for testing (like 1G).
Don't forget to restart the MySQL/MariaDB server.
Double check the value was set properly by:
mysql -sve "SELECT ##max_allowed_packet" # or:
mysql -sve "SHOW VARIABLES LIKE 'max_allowed_packet'"
You got a timeout from the TCP/IP connection on the client side.
Solution: Increase wait_timeout variable.
You tried to run a query after the connection to the server has been closed.
Solution: A logic error in the application should be corrected.
Host name lookups failed (e.g. DNS server issue), or server has been started with --skip-networking option.
Another possibility is that your firewall blocks the MySQL port (e.g. 3306 by default).
The running thread has been killed, so retry again.
You have encountered a bug where the server died while executing the query.
A client running on a different host does not have the necessary privileges to connect.
And many more, so learn more at: B.5.2.9 MySQL server has gone away.
Debugging
Here are few expert-level debug ideas:
Check the logs, e.g.
sudo tail -f $(mysql -Nse "SELECT @@GLOBAL.log_error")
Test your connection via mysql, telnet or ping functions (e.g. mysql_ping in PHP; see the PHP sketch below).
Use tcpdump to sniff the MySQL communication (won't work for socket connection), e.g.:
sudo tcpdump -i lo0 -s 1500 -nl -w- port mysql | strings
On Linux, use strace. On BSD/Mac use dtrace/dtruss, e.g.
sudo dtruss -a -fn mysqld 2>&1
See: Getting started with DTracing MySQL
Learn more how to debug MySQL server or client at: 26.5 Debugging and Porting MySQL.
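To illustrate the ping idea from the debugging list, here is a minimal PHP sketch (host, credentials and database name are placeholders):
<?php
// Open a connection, then check whether the server is still reachable.
$mysqli = new mysqli('127.0.0.1', 'user', 'pass', 'db');
if ($mysqli->connect_errno) {
    die('Connect failed: ' . $mysqli->connect_error);
}
// ping() returns false once the server has "gone away".
var_dump($mysqli->ping());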
For reference, check the source code in sql-common/client.c file responsible for throwing the CR_SERVER_GONE_ERROR error for the client command.
MYSQL_TRACE(SEND_COMMAND, mysql, (command, header_length, arg_length, header, arg));
if (net_write_command(net, (uchar) command, header, header_length,
                      arg, arg_length))
{
  set_mysql_error(mysql, CR_SERVER_GONE_ERROR, unknown_sqlstate);
  goto end;
}
I solved the error ERROR 2006 (HY000) at line 97: MySQL server has gone away and successfully migrated a >5GB sql file by performing these two steps in order:
Created /etc/my.cnf as others have recommended, with the following contents:
[mysql]
connect_timeout = 43200
max_allowed_packet = 2048M
net_buffer_length = 512M
debug-info = TRUE
Appended the flags --force --wait --reconnect to the command (i.e. mysql -u root -p -h localhost my_db < file.sql --verbose --force --wait --reconnect).
Important Note: It was necessary to perform both steps, because if I didn't bother making the changes to /etc/my.cnf file as well as appending those flags, some of the tables were missing after the import.
System used: OSX El Capitan 10.11.5; mysql Ver 14.14 Distrib 5.5.51 for osx10.8 (i386)
Just in case, to check variables you can use
$> mysqladmin variables -u user -p
This will display the current variables, in this case max_allowed_packet, and as someone said in another answer you can set it temporarily with
mysql> SET GLOBAL max_allowed_packet=1072731894
In my case the cnf file was not taken into account and I don't know why, so the SET GLOBAL code really helped.
You can also log into the database as root (or SUPER privilege) and do
set global max_allowed_packet=64*1024*1024;
This doesn't require a MySQL restart. Note that you should still fix your my.cnf file as outlined in other solutions:
[mysqld]
max_allowed_packet=64M
And confirm the change after you've restarted MySQL:
show variables like 'max_allowed_packet';
You can use the command-line as well, but that may require updating the start/stop scripts which may not survive system updates and patches.
As requested, I'm adding my own answer here. Glad to see it works!
The solution is to increase the values of the wait_timeout and connect_timeout parameters in your options file, under the [mysqld] tag.
I had to recover a 400MB mysql backup and this worked for me (the values I've used below are a bit exaggerated, but you get the point):
[mysqld]
port=3306
explicit_defaults_for_timestamp = TRUE
connect_timeout = 1000000
net_write_timeout = 1000000
wait_timeout = 1000000
max_allowed_packet = 1024M
interactive_timeout = 1000000
net_buffer_length = 200M
net_read_timeout = 1000000
Additionally, run this at runtime from the mysql client (it is a statement, not a config entry):
set GLOBAL delayed_insert_timeout=100000;
I had the same problem, but changing max_allowed_packet in the my.ini/my.cnf file under [mysqld] did the trick.
Add the line
max_allowed_packet=500M
then restart the MySQL service once you are done.
A couple of things could be happening here:
Your INSERT is running long, and the client is disconnecting. When it reconnects, it's not selecting a database, hence the error. One option here is to run your batch file from the command line, and select the database in the arguments, like so:
$ mysql db_name < source.sql
Another is to run your command via PHP or some other language. After each long-running statement, you can close and re-open the connection, ensuring that you're connected at the start of each query.
If you are on Mac and installed mysql through brew like me, the following worked.
cp $(brew --prefix mysql)/support-files/my-default.cnf /usr/local/etc/my.cnf
Source: For homebrew mysql installs, where's my.cnf?
add max_allowed_packet=1073741824 to /usr/local/etc/my.cnf
mysql.server restart
I had the same problem in XAMPP.
Method 1: I changed max_allowed_packet in the D:\xampp\mysql\bin\my.ini file as below:
max_allowed_packet=500M
Finally, restart the MySQL service once and you're done.
Method 2:
This is the easier way if you are using XAMPP. Open the XAMPP control panel and click on the Config button in the MySQL section.
Now click on my.ini and it will open in the editor. Update max_allowed_packet to your required size.
Then restart the MySQL service: click Stop on the MySQL service, then click Start again and wait a few minutes.
Then try to run your MySQL query again. Hopefully it will work.
I encountered this error when using MySQL Cluster. I do not know whether this question is about a cluster setup or not, but as the error is exactly the same, I'll give my solution here.
I was getting this error because the data nodes had suddenly crashed. When the nodes crash, you can still get a correct result using the command:
ndb_mgm -e 'ALL REPORT MEMORYUSAGE'
and mysqld also works correctly, so at first I could not understand what was wrong. About 5 minutes later, the ndb_mgm result showed no data nodes working; then I realized the problem. So, try restarting all the data nodes; then the MySQL server comes back and everything is OK.
One thing that was weird to me: after I lost the MySQL server for some queries, when I used a command like show tables, I could still get a return like 33 rows in set (5.57 sec), but no table info was displayed.
This error message also occurs when you created the SCHEMA with a different COLLATION than the one which is used in the dump. So, if the dump contains
CREATE TABLE `mytab` (
..
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;
you should also reflect this in the SCHEMA collation:
CREATE SCHEMA myschema COLLATE utf8_unicode_ci;
I had been using utf8mb4_general_ci in the schema, because my script came from a fresh v8 installation; loading the dump onto an old 5.7 server crashed and drove me nearly crazy.
So, maybe this helps you save some frustrating hours... :-)
(MacOS 10.3, mysql 5.7)
Add max_allowed_packet=64M to [mysqld]
[mysqld]
max_allowed_packet=64M
Restart the MySQL server.
If it's reconnecting and getting connection ID 2, the server has almost definitely just crashed.
Contact the server admin and get them to diagnose the problem. No non-malicious SQL should crash the server, and the output of mysqldump certainly should not.
It is probably the case that the server admin has made some big operational error such as assigning buffer sizes of greater than the architecture's address-space limits, or more than virtual memory capacity. The MySQL error-log will probably have some relevant information; they will be monitoring this if they are competent anyway.
This is more of a rare issue but I have seen this if someone has copied the entire /var/lib/mysql directory as a way of migrating their DB to another server. The reason it doesn't work is because the database was running and using log files. It doesn't work sometimes if there are logs in /var/log/mysql. The solution is to copy the /var/log/mysql files as well.
For Amazon RDS (my case), you can change the max_allowed_packet parameter value to any numeric value in bytes that makes sense for the biggest data in any insert you may have (e.g. if you have some 50 MB blob values in your inserts, set max_allowed_packet to 64M = 67108864), in a new or existing parameter group. Then apply that parameter group to your MySQL instance (this may require rebooting the instance).
For Drupal 8 users looking for a solution to a DB import failure:
At the end of the SQL dump file there can be commands inserting data into the "webprofiler" table.
That is, I guess, some debug log table and is not really important for the site to work, so all of it can be removed. I deleted all those inserts, including LOCK TABLES and UNLOCK TABLES (and everything between); it's at the very bottom of the SQL file. The issue is described here:
https://www.drupal.org/project/devel/issues/2723437
But there is no solution for it besides truncating that table.
BTW, I tried all the solutions from the answers above and nothing else helped.
I tried all of the above solutions; all failed.
I ended up using -h 127.0.0.1 instead of the default var/run/mysqld/mysqld.sock.
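Using -h 127.0.0.1 makes the client connect over TCP instead of the Unix socket, so the full import command looks like this (names as in the question):
mysql -h 127.0.0.1 -u root -p my_db < file.sql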
If you have tried all these solutions, especially increasing max_allowed_packet up to the maximum supported amount of 1GB, and you are still seeing these errors, it might be that your server literally does not have enough free RAM available.
The solution: upgrade your server with more RAM, and try again.
Note: I'm surprised this simple solution has not been mentioned after 8+ years of discussion on this thread... sometimes we developers tend to overthink things.
Eliminating the errors which triggered warnings was the final solution for me. I also changed max_allowed_packet, which helped with smaller files with errors. Eliminating the errors also sped up the process incredibly.
If none of these answers solves the problem, I solved it by removing the tables and creating them again automatically, in this way:
When creating the backup, first back up the structure only, and be sure to add:
DROP TABLE / VIEW / PROCEDURE / FUNCTION / EVENT
CREATE PROCEDURE / FUNCTION / EVENT
IF NOT EXISTS
AUTO_INCREMENT
Then just use this backup with your db and it will remove and recreate the tables you need.
Then back up just the data, do the same with it, and it will work.
How about using the mysql client like this:
mysql -h <hostname> -u username -p <databasename> < file.sql

How to make a copy of large database from phpmyadmin?

I want to create a dev environment of my website on the same server. But I have a 7 GB database which contains 479 tables, and I want to make a copy of that database to a new DB.
I have tried this with the help of the phpMyAdmin >> Operations >> "Copy database to" functionality, but every time it fails and returns the error
Error in processing request Error code: 500 Error text: Internal Error.
Please let me know if there is any other method/solution to copy this database to a new database from cPanel. Please advise.
Create an export of your database. This should be easily done through the phpMyAdmin interface. Once you have downloaded the DB export, you need to create a new DB where you will put your exported data. This, too, should be easily done through the phpMyAdmin user interface.
To upload it, we cannot use Import -> Browse your computer, because it has a limit of 2MB. One solution is to use Import -> Select from the web server upload directory /var/lib/phpMyAdmin/upload/. Upload your exported data to this directory. After that, your uploaded data should be listed in the dropdown next to it.
If this fails too, you can use the command line import.
mysql -u user -p db_name < /path/to/file.sql
Limited to phpMyAdmin? Don't do it all at once
Large data sets shouldn't be dumped (unless it's for a backup); instead, export the database without data, then copy one table at a time (DB to DB directly).
Export/Import Schema
First, export only the database schema via phpMyAdmin (uncheck data in the export options). Then import that onto a new database name.
Alternatively, once you've created the DB, you could use statements like the one below to generate the tables. The catch with this method is that you're likely to lose constraints, sprocs, and the like.
CREATE TABLE `devDB`.`table` LIKE `prodDB`.`table`;
Copy data, one table at a time.
Use a good editor to create the 479 INSERT statements you need. Start with a list of table names, and use good old find-and-replace.
INSERT INTO `devDB`.`table` SELECT * FROM `prodDB`.`table`;
This may choke, depending on your environment. If it does, drop and recreate the dev database (or empty all tables via phpMyAdmin). Then, run the INSERT commands a few tables at a time.
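If hand-editing hundreds of statements is tedious, one hedged alternative is to let MySQL generate them from information_schema (schema names devDB/prodDB as above):
SELECT CONCAT('INSERT INTO `devDB`.`', table_name,
              '` SELECT * FROM `prodDB`.`', table_name, '`;')
FROM information_schema.tables
WHERE table_schema = 'prodDB';
Save the output of that query and you have your per-table INSERT script.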
Database Administration requires CLI
The real problem you're facing here is that you're trying to do database administration without access to the Command Line Interface. There are significant complicated details to migrating large sets of data efficiently, most of which can only be solved using tools like mysqldump.
NOTE: I have just read your comment, and as I can understand you don't have access to command line. Please check Solution Two, this will definitely work.
The only solution that will work for you (and which worked for me on a 12GB database) is directly from the command line:
Solution One
mysql -u root -p
set global net_buffer_length=1000000; -- Set network buffer length to a large byte number
set global max_allowed_packet=1000000000; -- Set maximum allowed packet size to a large byte number
SET foreign_key_checks = 0; -- Disable foreign key checking to avoid delays, errors and unwanted behavior
source file.sql -- Import your sql dump file
SET foreign_key_checks = 1; -- Remember to re-enable foreign key checks when the procedure is complete!
If you have root access, you can create a bash script:
#!/bin/sh
# store start date to a variable
imeron=`date`
echo "Import started: OK"
dumpfile="/home/bob/bobiras.sql"
ddl="set names utf8; "
ddl="$ddl set global net_buffer_length=1000000;"
ddl="$ddl set global max_allowed_packet=1000000000; "
ddl="$ddl SET foreign_key_checks = 0; "
ddl="$ddl SET UNIQUE_CHECKS = 0; "
ddl="$ddl SET AUTOCOMMIT = 0; "
# if your dump file does not create a database, select one
ddl="$ddl USE jetdb; "
ddl="$ddl source $dumpfile; "
ddl="$ddl SET foreign_key_checks = 1; "
ddl="$ddl SET UNIQUE_CHECKS = 1; "
ddl="$ddl SET AUTOCOMMIT = 1; "
ddl="$ddl COMMIT ; "
echo "Import started: OK"
time mysql -h 127.0.0.1 -u root -proot -e "$ddl"
# store end date to a variable
imeron2=`date`
echo "Start import:$imeron"
echo "End import:$imeron2"
Source
Solution Two
Also, there is another option which is very good for those who are on shared hosting and don't have command line access. This solution worked for me on 4-5GB files:
MySQL Dumper: Download (you will be able to backup/restore SQL files directly from "MySQL Dumper"; you don't need phpMyAdmin anymore).
Big Dump: Download (just restore from a compressed file or SQL file; big imports need a small BIGDUMP PHP file edit: change $linespersession = 3000; to $linespersession = 30000;).
Solution Three:
This solution definitely works; it is slow, but it works.
Download the trial version (32 or 64 bit) of Navicat MySQL Version 12.
Install it and run it as a trial.
After that, add your computer's IP (internet IP, not local IP) to MySQL Remote in cPanel (new database/hosting). You can use a wildcard IP in cPanel to access MySQL from any IP.
Go to Navicat MySQL: click on Connection and enter a connection name.
In the next field, "Hostname/IP", add your "Hosting IP Address" (don't use localhost).
Leave the port as it is (if your hosting defined a different port, put that one here).
Add your database username and password.
Click Test Connection; if it's successful, click "OK".
Now on the main screen you will see all the databases connected with the username in the left-side column.
Double click on the database where you want to import the SQL file:
the icon color of the database will change and you will see "Tables/Views/Functions etc.".
Now right click on the database and select "Execute SQL file" (http://prntscr.com/gs6ef1).
Choose the file, choose "continue on error" if you want to, and finally run it. It takes some time depending on your network connection speed and computer performance.
The easiest way is to try exporting the data from phpMyAdmin. It will create a backup of your data.
But sometimes, transferring a large amount of data via import/export results in errors.
You can try mysqldump to back up the data as well.
I found a few links for you here and here.
This is the mysqldump database backup documentation.
Hope it helps. :D
You can use mysqldump as follows:
mysqldump --user= --password= --default-character-set=utf8
You can also make use of my shell script, which I actually wrote long back for creating backups of a MySQL database on a regular basis using a cron job.
#!/bin/sh
now="$(date +'%d_%m_%Y_%H_%M_%S')"
filename="db_backup_$now".gz
backupfolder=""
fullpathbackupfile="$backupfolder/$filename"
logfile="$backupfolder/"backup_log_"$(date +'%Y_%m')".txt
echo "mysqldump started at $(date +'%d-%m-%Y %H:%M:%S')" >> "$logfile"
mysqldump --user= --password= --default-character-set=utf8 | gzip > "$fullpathbackupfile"
echo "mysqldump finished at $(date +'%d-%m-%Y %H:%M:%S')" >> "$logfile"
chown "$fullpathbackupfile"
chown "$logfile"
echo "file permission changed" >> "$logfile"
find "$backupfolder" -name db_backup_* -mtime +2 -exec rm {} \;
echo "old files deleted" >> "$logfile"
echo "operation finished at $(date +'%d-%m-%Y %H:%M:%S')" >> "$logfile"
echo "*****************" >> "$logfile"
exit 0
I have already written an article on Schedule MySQL Database backup on CPanel or Linux.
Here's how I handled that problem when I faced it... Unfortunately this only works for Mac OS.
Download Sequel Pro - Completely free, and it has worked really well for me for over a year now.
Remotely connect to your server's database. You will probably need to add your ip address to the "Remote MYSQL" section in CPANEL. If you don't have the credentials, you can probably get them from your website's config file.
Once you're in the server, you can select all of your tables, secondary click, and select Export > As SQL Dump. You probably won't need to edit any of the settings. Click "Export".
Log in to your local server's database, and select "Query" from the top menu.
Drag and drop the file that was downloaded from the export and it will automatically setup the database from the sql dump.
I hope this helps. It's a little workaround, but it's worked really well for me, especially when PMA has failed.
Since the requirements include PHPMyAdmin, my suggestion is to:
select the database you need
go to the "Export" tab
click the "Custom - display all possible options" radio button
in the "Save output to a file" radio button options, select "gzipped" for "Compression:"
Remove the "Display comments" tick (to save some space)
Finish the export
Then try to import the generated file in the new Database you have (if you have sufficient resources - this should be possible).
Note: my previous experience shows that using compression allows larger DB export/import operations, but I have not tested the upper limit in shared hosting environments (which I assume you're on, given your comment about cPanel).
Edit: When your export file is created, select the new database (assuming it is already created), go to the "Import" tab, select the file created from the export and start the import process.
If you have your database on your local server, you can export it and use BigDump to insert it into the new database on the global server: BigDump
I doubt that PHPMyAdmin will handle databases of that size (PHP upload/download limits, memory constraints, script execution time).
If you have access to the console, I would recommend doing the export/import via the mysql command line:
Export:
$ mysqldump -u <user> -p<pass> <liveDatabase> | gzip > export.sql.gz
And Import:
$ gunzip < export.sql.gz | mysql -u <user> -p<pass> <devDatabase>
after you have created the new dev database in e.g. PHPMyAdmin or via command line.
Otherwise, if you only have access to an Apache/PHP environment, I would look for an export utility that splits export in smaller chunks. MySQLDumper comes to mind, but it's a few years old and AFAIK it is no longer actively maintained and is not compatible with PHP 7+.
But I think there is at least a pull request out there that makes it work with PHP7 (untested).
Edit based on your comment:
If the export already exists and the error occurs on import, you could try to increase the limits of your PHP environment, either via entries in .htaccess, by changing php.ini, or via ini_set, whatever is available in your environment. For setting via .htaccess, the relevant settings are e.g. (keep in mind this works only for Apache environments with mod_php, and can also be restricted by your hoster):
php_value max_execution_time 3600
php_value post_max_size 8000M
php_value upload_max_filesize 8000M
php_value max_input_time 3600
This may or may not work, depending on x32/x64 issues and/or your hosters restrictions.
Additionally, you need to adjust the PHPmyadmin settings for ExecTimeLimit - usually found in the config.default.php for your PHPMyAdmin installation:
Replace
$cfg['ExecTimeLimit'] = 300;
with
$cfg['ExecTimeLimit'] = 0;
And finally, you probably need to adjust your MySQL config to allow larger packets and get rid of the 'lost connection' error:
In the [mysqld] section of my.ini:
max_allowed_packet=256M

PDO connection error - server has gone away [duplicate]

I tried to import a large sql file through phpMyAdmin...But it kept showing error
'MySql server has gone away'
What to do?
As stated here:
Two most common reasons (and fixes) for the MySQL server has gone away (error 2006) are:
Server timed out and closed the connection. How to fix: check that the wait_timeout variable in your mysqld's my.cnf configuration file is large enough. On Debian: sudo nano /etc/mysql/my.cnf, set wait_timeout = 600 seconds (you can tweak/decrease this value when error 2006 is gone), then sudo /etc/init.d/mysql restart. I didn't check, but the default value for wait_timeout might be around 28800 seconds (8 hours).
Server dropped an incorrect or too large packet. If mysqld gets a packet that is too large or incorrect, it assumes that something has gone wrong with the client and closes the connection. You can increase the maximal packet size limit by increasing the value of max_allowed_packet in the my.cnf file. On Debian: sudo nano /etc/mysql/my.cnf, set max_allowed_packet = 64M (you can tweak/decrease this value when error 2006 is gone), then sudo /etc/init.d/mysql restart.
Edit:
Notice that MySQL option files do not have their commands already available as comments (like in php.ini, for instance). So you must type any change/tweak in my.cnf or my.ini yourself and place it under the proper group of options, such as [client] or [mysqld], with the file in the mysql/data directory or in any of the other standard paths. For example:
[mysqld]
wait_timeout = 600
max_allowed_packet = 64M
Then restart the server. To get their values, type in the mysql client:
> select @@wait_timeout;
> select @@max_allowed_packet;
For me this solution didn't work out so I executed
SET GLOBAL max_allowed_packet=1073741824;
in my SQL client.
If you are not able to change this while the MySQL service is running, stop the service and change the variable in the "my.ini" file.
For example:
max_allowed_packet=20M
If you are working on XAMPP, then you can fix the MySQL server has gone away issue with the following changes:
open your my.ini file
my.ini location is (D:\xampp\mysql\bin\my.ini)
change the following variable values
max_allowed_packet = 64M
innodb_lock_wait_timeout = 500
If you are running with default values then you have a lot of room to optimize your mysql configuration.
The first step I recommend is to increase the max_allowed_packet to 128M.
Then download the MySQL Tuning Primer script and run it. It will provide recommendations to several facets of your config for better performance.
Also look into adjusting your timeout values both in MySQL and PHP.
How big (file size) is the file you are importing and are you able to import the file using the mysql command line client instead of PHPMyAdmin?
If you are using MAMP on OS X, you will need to change the max_allowed_packet value in the template for MySQL.
You can find it at: File > Edit template > MySQL my.cnf
Then just search for max_allowed_packet, change the value, and save.
I had this error and other related ones when I imported a 16 GB SQL file. For me, the fix was editing my.ini and setting the following (based on several different posts) in the [mysqld] section:
max_allowed_packet = 110M
innodb_buffer_pool_size=511M
innodb_log_file_size=500M
innodb_log_buffer_size = 800M
net_read_timeout = 600
net_write_timeout = 600
If you are running under Windows, go to the control panel, services, and look at the details for MySQL and you will see where my.ini is. Then after you edit and save my.ini, restart the mysql service (or restart the computer).
If you are using HeidiSQL, you can also set some or all of these using that.
I solved my issue with this short /etc/mysql/my.cnf file:
[mysqld]
wait_timeout = 600
max_allowed_packet = 100M
The other reason this can happen is running out of memory. Check /var/log/messages and make sure that your my.cnf is not set up to cause mysqld to allocate more memory than your machine has.
Your mysqld process can actually be killed by the kernel and then re-started by the "safe_mysqld" process without you realizing it.
Use top and watch the memory allocation while it's running to see what your headroom is.
make a backup of my.cnf before changing it.
I got the same issue with
$image_base64 = base64_encode(file_get_contents($_FILES['file']['tmp_name']) );
$image = 'data:image/jpeg;base64,'.$image_base64;
$query = "insert into images(image) values('".$image."')";
mysqli_query($con,$query);
In the \xampp\mysql\bin\my.ini file used by phpMyAdmin, we get only
[mysqldump]
max_allowed_packet=110M
which applies just to mysqldump -u root -p dbname. I resolved my issue by replacing the above with
max_allowed_packet=110M
[mysqldump]
max_allowed_packet=110M
I updated "max_allowed_packet" to 1024M, but it still wasn't working. It turns out my deployment script was running:
mysql --max_allowed_packet=512M --database=mydb -u root < .\db\db.sql
Be sure to explicitly specify a bigger number from the command line if you are doing it this way.
If your data includes BLOB data:
Note that an import of data from the command line seems to choke on BLOB data, resulting in the 'MySQL server has gone away' error.
To avoid this, re-create the mysqldump but with the --hex-blob flag:
http://dev.mysql.com/doc/refman/5.7/en/mysqldump.html#option_mysqldump_hex-blob
which will write out the data file with hex values rather than binary amongst other text.
PhpMyAdmin also has the option "Dump binary columns in hexadecimal notation (for example, "abc" becomes 0x616263)" which works nicely.
Note that there is a long-standing bug (as of December 2015) which means that GEOM columns are not converted:
Back up a table with a GEOMETRY column using mysqldump?
so using a program like PhpMyAdmin seems to be the only workaround (the option noted above does correctly convert GEOM columns).
If it takes a long time to fail, then enlarge the wait_timeout variable.
If it fails right away, enlarge the max_allowed_packet variable; if it still doesn't work, make sure the command is valid SQL. Mine had unescaped quotes, which screwed everything up.
Also, if feasible, consider limiting the number of inserts of a single SQL command to, say, 1000. You can create a script that creates multiple statements out of a single one by reintroducing the INSERT... part every n inserts.
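A naive sketch of such a splitter in PHP, assuming the dump is a single extended INSERT and that no value contains the text '),(' (so it is not safe for arbitrary string data; the file name and chunk size are placeholders):
<?php
// Break one huge extended INSERT into chunks of $n rows each.
$n = 1000;
$sql = file_get_contents('big_insert.sql');
if (preg_match('/^(INSERT INTO .+? VALUES\s*)(\(.*\));\s*$/s', $sql, $m)) {
    $rows = explode('),(', substr($m[2], 1, -1)); // strip the outer parens
    foreach (array_chunk($rows, $n) as $chunk) {
        echo $m[1] . '(' . implode('),(', $chunk) . ");\n";
    }
}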
I got a similar error. To solve it, just open the my.ini file; there (at line 36 in my case) change the value of the maximum allowed packet size, i.e. max_allowed_packet = 20M.
Make sure the mysqld process is not being restarted by service managers like systemd.
I had this problem in Vagrant with CentOS 7. Configuration tweaks didn't help; it turned out it was systemd, which killed the mysqld service every time it took too much memory.
I had a similar error today when duplicating a database (MySQL server has gone away...), but when I tried mysql.server restart I got the error
ERROR! The server quit without updating PID ...
This is how I solved it:
I opened up Applications/Utilities/ and ran Activity Monitor,
quit mysqld,
and then was able to solve the problem with
mysql.server restart
I do some large calculations which involve the MySQL connection staying open for a long time with heavy data. I was facing this "MySQL server has gone away" issue, so I tried to optimize the queries, but that didn't help; then I increased the MySQL variable limits, which are set to low values by default:
wait_timeout
max_allowed_packet
Set these to whatever limit suits you (the values are in bytes, i.e. some number * 1024). You can log in to the terminal using the 'mysql -u username -p' command and check and change these variable limits.
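For example, from the mysql prompt (64M written out as 64 * 1024 * 1024 bytes):
SHOW VARIABLES LIKE 'wait_timeout';
SHOW VARIABLES LIKE 'max_allowed_packet';
SET GLOBAL wait_timeout = 28800;
SET GLOBAL max_allowed_packet = 67108864;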
For GoDaddy shared hosting
On GoDaddy shared hosting accounts, it is tricky to tweak PHP.ini and similar files. However, there is another way, and it just worked perfectly for me. (I had just successfully uploaded a 3.8 MB .sql text file, containing 3100 rows and 145 columns. Using the IMPORT command in phpMyAdmin, I was getting the dreaded MySQL server has gone away error, and no further information.)
I found that Matt Butcher had the right answer. Like Matt, I had tried all kinds of tricks, from exporting MySQL databases in bite-sized chunks, to writing scripts that break large imports into smaller ones. But here is what worked:
(1) CPANEL ---> FILES (group) ---> BACKUP
(2a) Under "Partial Backups" heading...
(2b) Under "Download a MySQL Database Backup"
(2c) Choose your database and download a backup (this step optional, but wise)
(3a) Directly to the right of 2b, under heading "Restore a MySQL Database Backup"
(3b) Choose the .SQL import file from your local drive
(3c) True happiness will be yours (shortly....) Mine took about 5 seconds
I was able to use this method to import a single table. Nothing else in my database was affected -- but that is what step (2) above is intended to protect against.
Notes:
a. If you are unsure how to create a .SQL import file, use phpMyAdmin to export a table and modify that file structure.
SOURCE:
Matt Butcher 2010 Article
If increasing max_allowed_packet doesn't help:
I was getting the same error as you when importing a .sql file into my database via Sequel Pro.
The error still persisted after upping the max_allowed_packet to 512M so I ran the import in the command line instead with:
mysql --verbose -u root -p DatabaseName < MySQL.sql
It gave the following error:
ASCII '\0' appeared in the statement, but this is not allowed unless option --binary-mode is enabled
I found a couple helpful StackOverflow questions:
Enable binary mode while restoring a Database from an SQL dump
Mysql ERROR: ASCII '\0' while importing sql file on linux server
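Both of those point at the mysql client's --binary-mode option, so if you hit that error the rerun looks like this (same file and database names as above):
mysql --binary-mode --verbose -u root -p DatabaseName < MySQL.sql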
In my case, my .sql file was a little corrupt or something. The MySQL dump we get comes in two zip files that need to be concatenated together and then unzipped. I think the unzipping was interrupted initially, leaving the file with some odd characters and encodings. Getting a fresh MySQL dump and unzipping it properly worked for me.
Just wanted to add this here in case others find that increasing the max_allowed_packet variable was not helping.
None of the solutions regarding packet size or timeouts made any difference for me. I needed to disable SSL:
mysql -u -p -hmyhost.com --disable-ssl db < file.sql
https://dev.mysql.com/doc/refman/5.7/en/encrypted-connections.html

mysql into outfile creating error [duplicate]

For some reason my production DB decided to spew out this message. All application calls fail to the DB with the error:
PreparedStatementCallback; SQL [ /*long sql statement here*/ ];
Can't create/write to file '/tmp/#sql_3c6_0.MYI' (Errcode: 2);
nested exception is java.sql.SQLException: Can't create/write to file '/tmp/#sql_3c6_0.MYI' (Errcode: 2)
I have no idea what this even means. There is no file #sql_3c6_0.MYI in /tmp, and I can't create one with a # character for some reason. Has anyone heard about it or seen this error? What could be wrong, and what are some possible things to look at?
The MySQL DB seems to be up and running and can be queried via the console, but the application can't seem to get through to it. There was no change to the application code/files; it just happened out of the blue. So I'm not even sure where to start looking or what resolution tactics to apply. Any ideas?
I met this error too when running WordPress on my Fedora system.
I googled it and found a way to fix it. Maybe this will help you too.
Check the MySQL config (my.cnf):
cat /etc/my.cnf | grep tmpdir
I couldn't see anything in my my.cnf.
Add tmpdir=/tmp to my.cnf under [mysqld].
Restart the web/app and MySQL servers:
/etc/init.d/mysqld restart
Often this means your /tmp partition has run out of space and the file can't be created, or, for whatever reason, the mysqld process cannot write to that directory because of permission problems. Sometimes this is the case when selinux rains on your parade.
Any operation that requires a "temp file" will go into the /tmp directory by default. The name you're seeing is just some internal random name.
On Fedora with systemd, MySQL gets a private /tmp directory. In /proc/PID_of_MySQL/mountinfo you will find a line like:
156 129 8:1 /tmp/systemd-namespace-AN7vo9/private /tmp rw,relatime - ext4 /dev/sda1 rw,seclabel,data=ordered
This means a temporary folder /tmp/systemd-namespace-AN7vo9/private is mounted as /tmp in private namespace of MySQL process. Unfortunately this folder is deleted by tmpwatch if not used frequently.
I modified /etc/cron.daily/tmpwatch and inserted the exclude pattern -X '/tmp/systemd-namespace*' like this:
/usr/sbin/tmpwatch "$flags" -x /tmp/.X11-unix -x /tmp/.XIM-unix \
-x /tmp/.font-unix -x /tmp/.ICE-unix -x /tmp/.Test-unix \
-X '/tmp/systemd-namespace*' \
-X '/tmp/hsperfdata_*' 10d /tmp
The side effect is that unused private namespace folders will not be deleted automatically.
The filename looks like a temporary table created by a query in MySQL. These files are often very short-lived, they're created during one specific query and cleaned up immediately afterwards.
Yet they can get very large, depending on the amount of data the query needs to process in a temp table. Or you may have multiple concurrent queries creating temp tables, and if enough of these queries run at the same time, they can exhaust disk space.
I do MySQL consulting, and I helped a customer who had intermittent disk full errors on his root partition, even though every time he looked, he had about 6GB free. After we examined his query logs, we discovered that he sometimes had four or more queries running concurrently, each creating a 1.5GB temp table in /tmp, which was on his root partition. Boom!
Solutions I gave him:
Increase the MySQL config variables tmp_table_size and max_heap_table_size so MySQL can create really large temp tables in memory. But it's not a good idea to allow MySQL to create 1.5GB temp tables in memory, because there's no way to limit how many of these are created concurrently. You can exhaust your memory pretty quickly this way.
Set the MySQL config variable tmpdir to a directory on another disk partition with more space.
Figure out which of your queries is creating such big temp tables, and optimize the query. For example, use indexes to help that query reduce its scan to a smaller slice of the table, or else archive some of the data in the table so the query doesn't have so many rows to scan.
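To check whether on-disk temp tables are actually the culprit, you can inspect the current tmpdir and the server's temp-table counters from the mysql client:
SHOW VARIABLES LIKE 'tmpdir';
SHOW GLOBAL STATUS LIKE 'Created_tmp%';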
Tremendous thanks to ArturZ for pointing me in the right direction on this. I don't have tmpwatch installed on my system so that isn't the cause of the problem in my case. But the end result is the same: The private /tmp that systemd creates is getting removed. Here's what happens:
1. systemd creates a new process via clone() with the CLONE_NEWNS flag to obtain a private namespace. Or maybe it calls unshare() with CLONE_NEWNS. Same thing.
2. systemd creates a subdirectory in /tmp (e.g. /tmp/systemd-namespace-XRiWad/private) and mounts it on /tmp. Because CLONE_NEWNS was set in #1, this mountpoint is invisible to all other processes.
3. systemd then invokes mysqld in this private namespace.
4. Some specific database operations (e.g. "describe ;") create & remove temporary files, which has the side effect of updating the timestamp on /tmp/systemd-namespace-XRiWad/private. Other database operations execute without using /tmp at all.
5. Eventually 10 days go by where, even though the database itself remains active, no operations occur that update the timestamp on /tmp/systemd-namespace-XRiWad/private.
6. /bin/systemd-tmpfiles comes along and removes the "old" /tmp/systemd-namespace-XRiWad/private directory, effectively rendering the private /tmp unusable for mysqld, while the public /tmp remains available for everything else on the system.
Restarting mysqld works because this starts everything over again at step #1, with a brand new private /tmp directory. However, the problem eventually comes back again. And again.
The simple solution is to configure /bin/systemd-tmpfiles so that it preserves anything in /tmp with the name /tmp/systemd-namespace-*. I did this by creating /etc/tmpfiles.d/privatetmp.conf with the following contents:
x /tmp/systemd-namespace-*
x /tmp/systemd-namespace-*/private
Problem solved.
For me this issue came after a long period of not using MySQL or the webserver, so I was sure that my settings were correct; simply restarting the service fixed the issue. The weird part about this issue is that one can still connect to the database, and even query/add tables using the mysql tool, for example:
mysql -u root -p
I restarted using:
systemctl start mysqld.service
or
service mysqld restart
or
/etc/init.d/mysqld restart
Note: depending on the machine/environment, one of these commands should restart the service.
A better way that worked for me:
chown root:root /tmp
chmod 1777 /tmp
/etc/init.d/mysqld restart
That is it.
See here: http://nixcraft.com/databases-servers/14260-error-1-hy000-cant-create-write-file-tmp-sql_9f3_0-myi-errcode-13-a.html
and http://smashingweb.info/solved-mysql-tmp-error-cant-createwrite-to-file-tmpmykbo3bl-errcode-13/
It's very easy: just grant 777 permissions on the /tmp folder.
Just type:
chmod -R 777 /tmp
On an Ubuntu box, I started getting this error after moving /tmp to a different volume (symlink). Even after setting the required permission 1777, the issue was not resolved.
MySQL is protected by AppArmor, which was disallowing writes to the new tmp location /mnt/tmp.
I had to add the following lines to /etc/apparmor.d/abstractions/user-tmp to fix this
owner /mnt/tmp/** rwkl,
/mnt/tmp/ rw,
On Debian 7.5 I got the same error. I realized that the /tmp folder owner and permissions were off. As another answer suggested, I did the following (as root):
chown root:root /tmp && chmod 1777 /tmp
I did not even have to restart mysql daemon.
I'm using MariaDB. When I put these lines in /etc/my.cnf:
[mysqld]
tmpdir=/tmp
it solved the /tmp-related error generated from the website frontend.
But there was still a backend problem with /tmp. For example, when I tried to back up the database from the backend, it couldn't read the /tmp dir and generated a similar error:
mysqldump: Couldn't execute 'show fields from `wp_autoupdate`': Can't create/write to file '/tmp/#sql_1680_0.MAI' (Errcode: 2 "No such file or directory") (1)
So this one works for both the frontend and the backend:
1. mkdir /var/lib/mysql/tmp
2. chown mysql:mysql /var/lib/mysql/tmp
3. Add the following line to the [mysqld] section:
tmpdir = /var/lib/mysql/tmp
4. Restart mysqld (e.g. on CentOS 7: systemctl restart mysqld)
It's due to access control security policies, specifically when SELinux is enabled: it won't allow external executables to create temporary files in system locations.
Disable SELinux by issuing the command below:
echo 0 > /selinux/enforce
You can now start MySQL; it won't give any permission-related error while reading/writing to /tmp or system directories.
In case you wish to re-enable SELinux security, change 0 to 1 in the above command.
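Equivalently, on systems with the SELinux utilities installed, setenforce toggles the same setting (and only lasts until reboot):
setenforce 0   # switch SELinux to permissive mode
setenforce 1   # switch back to enforcing mode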
Check for permission issues and review the MySQL config.
Also check whether you have reached disk space or quota limits.
Note: some systems limit the number of files (not just space); deleting some old session files fixed the issue in my case.
For those using VPS / virtual hosting.
I was using a VPS, getting errors with MySQL not being able to write to /tmp, and everything looked correct.
I had enough free space, enough free inodes, and correct permissions. It turned out the problem was outside my VPS: it was the machine hosting the VPS that was full. I only had "virtual space" in my file system, but the machine in the background which hosted the VPS had no "physical space" left. I had to contact the VPS company, and they fixed it.
If you think this might be your problem, you could test writing a larger file to /tmp (1GB):
dd if=/dev/zero of=/tmp/file.txt count=1024 bs=1048576
I got a No space left on device error message, which was a giveaway that it was a disk/volume in the background that was full.
I had the same issue, and it was caused by our DB server running out of space. Clearing up some disk space solved the issue.
