Importing large SQL file in phpMyAdmin [duplicate] - php

This question already has answers here:
What is the easy way to import large database to mysql using phpmyadmin?
(4 answers)
Closed 8 years ago.
I am trying to import a 5MB file in phpMyAdmin.
When I follow the normal import procedure it just makes me wait and nothing happens. The file has been importing since yesterday morning, and the page still shows the spinning circle above the Browse button.
Now when I followed the other method:
1. Open C:\wamp\apps\phpmyadmin3.2.0.1\config.inc.php
2. Find the line with $cfg['UploadDir'] on it and update it to:
3. $cfg['UploadDir'] = 'upload';
4. Create a directory called 'upload' within the phpMyAdmin directory.
5. Copy the SQL file into C:\wamp\apps\phpmyadmin3.2.0.1\upload\
I got this error -
Fatal error: Maximum execution time of 300 seconds exceeded in C:\wamp\apps\phpmyadmin3.5.1\libraries\import\sql.php on line 135
I checked the C:\wamp\apps\phpmyadmin3.5.1\libraries\import\sql.php file but couldn't find anything like 300 or execution time.
Guys, please help me with how I can import this 5MB file quickly so that I can proceed with my tool development!

Try this solution. Use the commands below:
mysql -u root -p dbname
mysql> SHOW DATABASES;
mysql> USE dbname;
mysql> SHOW TABLES;
mysql> SOURCE D:/your_file_name.sql; -- give the correct path to your file

Use the MySQL console to import a large .sql file.
Type the command below in the MySQL console:
SOURCE <path to your file, including the file name>
For example: SOURCE C:\Users\test\Desktop\your_file.sql
Or: update the values in php.ini:
upload_max_filesize=128M
post_max_size = 128M
max_execution_time = 300
max_input_time = 300
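If the new limits don't seem to take effect, confirm which php.ini your web server actually loads (WAMP/XAMPP ship more than one). A quick check, assuming php is on your PATH:
php --ini
This prints the "Loaded Configuration File" for the CLI; for the web server, browse to a script containing <?php phpinfo(); and look for the same entry. Remember to restart Apache after editing php.ini.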
Thanks

I think you are using Windows, so the following steps may solve your problem:
Open cmd.
Navigate to the MySQL installation directory [type the command cd "C:\wamp\bin\mysql\mysql5.6.12\bin" (your version may differ)]
Now run the command mysql -u <user> -p<password> <dbname> < C:/path/of/file.sql (note: no space after -p, or omit the password entirely to be prompted for it)
This trick worked for me.
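Putting the steps together as a concrete sketch (the version folder name is an assumption; check what actually exists under C:\wamp\bin\mysql):
cd "C:\wamp\bin\mysql\mysql5.6.12\bin"
mysql -u root -p mydb < C:\path\of\file.sql
Because the shell streams the file into the client, this route avoids phpMyAdmin's upload and execution-time limits entirely.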

Ensure that these options in your php.ini file match your needs:
upload_max_filesize=2M
max_execution_time=300
post_max_size=8M

Related

phpmyadmin not opening due to change in config file

I was trying to upload a large .sql file with more than 14,000 rows, which caused a maximum-execution-time error when importing the .sql data file. I searched Google to resolve the issue and followed the answers on this question, so I made a change in the C:\xampp\phpMyAdmin\libraries\config.default.php file by setting $cfg['ExecTimeLimit'] = 0;.
Now the issue is that phpMyAdmin is not opening on localhost. How can I resolve this?
Remove the $cfg['ExecTimeLimit'] change, then restart XAMPP and try again.
A better way to import a large DB:
copy the file into the mysql -> bin folder,
run the command below from within that folder:
mysql -u root -p -v databasename < dbfiletoimport.sql
This problem caught me out too... I was able to start phpMyAdmin after opening config.default.php in WordPad and saving it again; it seems using Notepad stops it from working (most likely Notepad saved the file with a BOM or a different encoding, which breaks the PHP file). I hope this saves someone hours of frustration!

phpMyAdmin - Error Incorrect format parameter

I tried to import a large .sql file but it's not getting imported, and it shows the following error:
phpMyAdmin - Error
Incorrect format parameter
I have been using XAMPP (PHP 5.6) on Ubuntu 16.04.
I have already tried the links given below, but none of them worked.
https://www.webtrickshome.com/forum/how-to-fix-phpmyadmin-error-incorrect-format-parameter-that-appeared-while-importing-a-database
importing db phpMyAdmin - Error Incorrect format parameter
Importing large database file in MAMP phpMyAdmin
You can easily import using the command prompt:
mysql -u username -p database_name < file_path/file_name.sql
(use your phpMyAdmin/MySQL username). You can also make changes in the php.ini file to increase phpMyAdmin's upload size.
In the php.ini of your PHP installation (note: depending on whether you want it for CLI, Apache, or Nginx, find the right php.ini to edit), set:
post_max_size=500M
upload_max_filesize=500M
memory_limit=900M
or set other values.
Restart/reload Apache if you use Apache, or php-fpm if you use Nginx.
Remote server?
Increase max_execution_time as well, as it will take time to upload the file.
Nginx installation?
You will have to add client_max_body_size 912M; to the http {...} block in /etc/nginx/nginx.conf.
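A minimal sketch of that nginx change (912M mirrors the value above; anything at least as large as your dump works), followed by a config test and reload:
# /etc/nginx/nginx.conf
http {
    client_max_body_size 912M;
    # ... rest of your existing http block ...
}
sudo nginx -t && sudo nginx -s reload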
I was facing the same problem when I was trying to import a 90MB database. You can solve this problem by opening
xampp -> phpMyAdmin -> libraries -> config.default.php
After opening the file, search for $cfg['ExecTimeLimit'] = 300;
Here you can set the time after the equals sign to 1000 or whatever you want. The default value is 300 (seconds).
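A sketch of that change. Putting the override in phpMyAdmin's config.inc.php instead of config.default.php achieves the same thing and survives upgrades (the path below assumes a default XAMPP install):
// C:\xampp\phpMyAdmin\config.inc.php
$cfg['ExecTimeLimit'] = 1000; // import time limit in seconds; 0 disables the limit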
The file you are trying to import may actually be in .sql.zip format rather than plain .sql. That happened to me.

Php & Mysql // FATAL ERROR [duplicate]

I tried to import a large .sql file through phpMyAdmin, but it kept showing the error
'MySQL server has gone away'
What should I do?
As stated here:
The two most common reasons for (and fixes of) "MySQL server has gone away" (error 2006) are:
Server timed out and closed the connection. How to fix: check that the wait_timeout variable in your mysqld's my.cnf configuration file is large enough. On Debian: sudo nano /etc/mysql/my.cnf, set wait_timeout = 600 seconds (you can tweak/decrease this value once error 2006 is gone), then sudo /etc/init.d/mysql restart. I didn't check, but the default value for wait_timeout might be around 28800 seconds (8 hours).
Server dropped an incorrect or too large packet. If mysqld receives a packet that is too large or incorrect, it assumes that something has gone wrong with the client and closes the connection. You can increase the maximal packet size limit by increasing the value of max_allowed_packet in the my.cnf file. On Debian: sudo nano /etc/mysql/my.cnf, set max_allowed_packet = 64M (you can tweak/decrease this value once error 2006 is gone), then sudo /etc/init.d/mysql restart.
Edit:
Notice that MySQL option files do not ship with all their options already present as comments (as php.ini does, for instance). So you must type any change/tweak into my.cnf or my.ini yourself and place the file in the mysql/data directory (or one of the other search paths), under the proper group of options such as [client], [mysqld], etc. For example:
[mysqld]
wait_timeout = 600
max_allowed_packet = 64M
Then restart the server. To get their values, type in the mysql client:
> SELECT @@wait_timeout;
> SELECT @@max_allowed_packet;
For me this solution didn't work out so I executed
SET GLOBAL max_allowed_packet=1073741824;
in my SQL client.
If you cannot change this while the MySQL service is running, stop the service and change the variable in the my.ini file.
For example:
max_allowed_packet=20M
If you are working on XAMPP then you can fix the 'MySQL server has gone away' issue with the following changes.
Open your my.ini file
(my.ini location is, e.g., D:\xampp\mysql\bin\my.ini)
Change the following variable values:
max_allowed_packet = 64M
innodb_lock_wait_timeout = 500
If you are running with default values then you have a lot of room to optimize your mysql configuration.
The first step I recommend is to increase the max_allowed_packet to 128M.
Then download the MySQL Tuning Primer script and run it. It will provide recommendations to several facets of your config for better performance.
Also look into adjusting your timeout values both in MySQL and PHP.
How big (file size) is the file you are importing and are you able to import the file using the mysql command line client instead of PHPMyAdmin?
If you are using MAMP on OS X, you will need to change the max_allowed_packet value in the template for MySQL.
You can find it at: File > Edit template > MySQL my.cnf
Then just search for max_allowed_packet, change the value, and save.
I had this error and other related ones when I imported a 16 GB SQL file. For me, editing my.ini and setting the following (based on several different posts) in the [mysqld] section fixed it:
max_allowed_packet = 110M
innodb_buffer_pool_size=511M
innodb_log_file_size=500M
innodb_log_buffer_size = 800M
net_read_timeout = 600
net_write_timeout = 600
If you are running under Windows, go to the control panel, services, and look at the details for MySQL and you will see where my.ini is. Then after you edit and save my.ini, restart the mysql service (or restart the computer).
If you are using HeidiSQL, you can also set some or all of these using that.
I solved my issue with this short /etc/mysql/my.cnf file:
[mysqld]
wait_timeout = 600
max_allowed_packet = 100M
The other reason this can happen is running out of memory. Check /var/log/messages and make sure that your my.cnf is not set up to cause mysqld to allocate more memory than your machine has.
Your mysqld process can actually be killed by the kernel and then re-started by the "safe_mysqld" process without you realizing it.
Use top and watch the memory allocation while it's running to see what your headroom is.
Make a backup of my.cnf before changing it.
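To check that headroom concretely, a quick sketch (the log path varies by distro; /var/log/messages is the one mentioned above):
grep -i -E 'out of memory|killed process' /var/log/messages   # did the OOM killer hit mysqld?
top -p "$(pidof mysqld)"                                      # watch resident memory during the import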
I got the same issue with:
// read an uploaded image and insert it as base64 text
$image_base64 = base64_encode(file_get_contents($_FILES['file']['tmp_name']));
$image = 'data:image/jpeg;base64,' . $image_base64;
$query = "INSERT INTO images(image) VALUES ('" . $image . "')";
mysqli_query($con, $query);
In the \xampp\mysql\bin\my.ini file we get only
[mysqldump]
max_allowed_packet=110M
which applies just to mysqldump -u root -p dbname. I resolved my issue by also setting the variable for the server itself, under [mysqld]:
[mysqld]
max_allowed_packet=110M
[mysqldump]
max_allowed_packet=110M
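As an aside: the base64 string is what makes the packet exceed max_allowed_packet in the code above. A parameterized sketch of the same insert (assuming the same images table and $con connection) also avoids quoting problems with very large strings:
$stmt = mysqli_prepare($con, 'INSERT INTO images(image) VALUES (?)');
mysqli_stmt_bind_param($stmt, 's', $image); // 's' = bind as a string
mysqli_stmt_execute($stmt);
mysqli_stmt_close($stmt);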
I updated max_allowed_packet to 1024M, but it still wasn't working. It turned out my deployment script was running:
mysql --max_allowed_packet=512M --database=mydb -u root < .\db\db.sql
Be sure to explicitly specify a bigger number on the command line if you are doing it this way, since the value passed there takes precedence for that client session.
If your data includes BLOB data:
Note that an import of data from the command line seems to choke on BLOB data, resulting in the 'MySQL server has gone away' error.
To avoid this, re-create the mysqldump but with the --hex-blob flag:
http://dev.mysql.com/doc/refman/5.7/en/mysqldump.html#option_mysqldump_hex-blob
which will write out the data file with hex values rather than binary amongst other text.
PhpMyAdmin also has the option "Dump binary columns in hexadecimal notation (for example, "abc" becomes 0x616263)" which works nicely.
Note that there is a long-standing bug (as of December 2015) which means that GEOM columns are not converted:
Back up a table with a GEOMETRY column using mysqldump?
so using a program like PhpMyAdmin seems to be the only workaround (the option noted above does correctly convert GEOM columns).
If it takes a long time to fail, then enlarge the wait_timeout variable.
If it fails right away, enlarge the max_allowed_packet variable; if it still doesn't work, make sure the command is valid SQL. Mine had unescaped quotes, which screwed everything up.
Also, if feasible, consider limiting the number of inserts in a single SQL command to, say, 1000. You can create a script that produces multiple statements out of a single one by reintroducing the INSERT... part every n inserts, as in the sketch below.
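A naive PHP sketch of such a splitter (the file name is hypothetical, and it assumes the character sequence "),(" never occurs inside your string data; for real dumps it is safer to regenerate them with mysqldump --skip-extended-insert):
<?php
// split_inserts.php - re-chunk one extended INSERT into batches of $n rows
$n = 1000;
$sql = file_get_contents('big_insert.sql'); // hypothetical input file
if (preg_match('/^(INSERT INTO .+? VALUES\s*)\((.*)\);\s*$/s', $sql, $m)) {
    $rows = explode('),(', $m[2]); // naive split; see the assumption above
    foreach (array_chunk($rows, $n) as $chunk) {
        echo $m[1], '(', implode('),(', $chunk), ");\n";
    }
}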
I got a similar error. To solve it, just open the my.ini file; there (at line 36 in my copy) change the value of the maximum allowed packet size, i.e. max_allowed_packet = 20M.
Make sure the mysqld process is not being restarted by a service manager like systemd.
I had this problem in Vagrant with CentOS 7. Configuration tweaks didn't help; it turned out systemd was killing the mysqld service every time it took too much memory.
I had a similar error today when duplicating a database ('MySQL server has gone away...'), but when I tried mysql.server restart I got the error
ERROR! The server quit without updating PID ...
This is how I solved it:
I opened Applications/Utilities/ and ran Activity Monitor,
quit mysqld,
then was able to solve the problem with
mysql.server restart
I do some large calculations that require the MySQL connection to stay open for a long time with heavy data. I was facing this 'MySQL gone away' issue. I tried to optimize the queries, but that didn't help, so I increased the limits of the MySQL variables, which are set low by default:
wait_timeout
max_allowed_packet
Set them to whatever suits you; the value should be some number * 1024 bytes for max_allowed_packet. You can log in with the mysql -u username -p command and then check and change these variable limits.
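A sketch of checking and raising them from the mysql client (values illustrative; SET GLOBAL requires the SUPER privilege and only affects connections opened afterwards):
SHOW VARIABLES LIKE 'wait_timeout';
SHOW VARIABLES LIKE 'max_allowed_packet';
SET GLOBAL wait_timeout = 600;
SET GLOBAL max_allowed_packet = 128 * 1024 * 1024; -- 128M, in bytes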
For GoDaddy shared hosting
On GoDaddy shared hosting accounts, it is tricky to tweak the php.ini and other files. However, there is another way, and it just worked perfectly for me. (I just successfully uploaded a 3.8MB .sql text file, containing 3,100 rows and 145 columns. Using the IMPORT command in phpMyAdmin, I was getting the dreaded 'MySQL server has gone away' error, and no further information.)
I found that Matt Butcher had the right answer. Like Matt, I had tried all kinds of tricks, from exporting MySQL databases in bite-sized chunks, to writing scripts that break large imports into smaller ones. But here is what worked:
(1) CPANEL ---> FILES (group) ---> BACKUP
(2a) Under "Partial Backups" heading...
(2b) Under "Download a MySQL Database Backup"
(2c) Choose your database and download a backup (this step optional, but wise)
(3a) Directly to the right of 2b, under heading "Restore a MySQL Database Backup"
(3b) Choose the .SQL import file from your local drive
(3c) True happiness will be yours (shortly....) Mine took about 5 seconds
I was able to use this method to import a single table. Nothing else in my database was affected -- but that is what step (2) above is intended to protect against.
Notes:
a. If you are unsure how to create a .SQL import file, use phpMyAdmin to export a table and modify that file structure.
SOURCE:
Matt Butcher 2010 Article
If increasing max_allowed_packet doesn't help.
I was getting the same error as you when importing a .sql file into my database via Sequel Pro.
The error still persisted after upping the max_allowed_packet to 512M so I ran the import in the command line instead with:
mysql --verbose -u root -p DatabaseName < MySQL.sql
It gave the following error:
ASCII '\0' appeared in the statement, but this is not allowed unless option --binary-mode is enabled
I found a couple helpful StackOverflow questions:
Enable binary mode while restoring a Database from an SQL dump
Mysql ERROR: ASCII '\0' while importing sql file on linux server
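In short, those links suggest re-running the import with the client's binary mode enabled, a sketch (the --binary-mode option exists in the mysql client from version 5.6 on):
mysql --binary-mode=1 --verbose -u root -p DatabaseName < MySQL.sql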
In my case, my .sql file was a little corrupt or something. The MySQL dump we get comes in two zip files that need to be concatenated together and then unzipped. I think the unzipping was interrupted initially, leaving the file with some odd characters and encodings. Getting a fresh MySQL dump and unzipping it properly worked for me.
Just wanted to add this here in case others find that increasing the max_allowed_packet variable was not helping.
None of the solutions regarding packet size or timeouts made any difference for me. I needed to disable SSL:
mysql -u <user> -p -h myhost.com --disable-ssl db < file.sql
https://dev.mysql.com/doc/refman/5.7/en/encrypted-connections.html

Importing 500MB .db file into phpmyadmin using a SQL dump

I have a ~450MB .db (SQLite 3) file that I need to import into my local phpMyAdmin server on XAMPP. I went through the following steps to make this happen, but all of them failed.
- I used sqlitebrowser to create a SQL dump of the .db file.
- When directly importing into phpMyAdmin did not work, I edited php.ini to have the following properties:
upload_max_filesize = 500MB
post_max_size = 500MB
memory_limit = 512MB
max_execution_time = 3600
Then I tried again, without success. Even though phpMyAdmin says I can import files of 500MB, it still gives me an error that my file might be too large.
- I tried using both BigDump and a manual CLI script for importing the dump, but both of them return errors stating 'the system cannot find the file specified' and 'Can't open eve.sql for import'.
At this point I am out of ideas. I would prefer to make the changes to my local server so that it can import files of such size in the future, as I plan to parse some YAML files and use them to update the database with a PHP script.
How about using the mysql command-line utility?
mysql -p -u user -h 127.0.0.1 database < data.sql
It usually handles big imports without problems.
If you just have one big database, use a SQL splitter, or use MySQLDumper for import, export and queries. It works on local and live environments without the need for shell (SSH) access.

How to import a .sql file of size 150MB from the terminal or phpMyAdmin

I am trying to upload a .sql file of size 150MB using the terminal or phpMyAdmin, but I am getting errors.
This is what I did before importing the file:
In the php.ini file:
1) post_max_size = 20000M
2) upload_max_filesize = 20000M
3) max_execution_time = 50000
4) max_input_time = 50000
5) memory_limit = 20000M
and in /etc/mysql/my.cnf:
6) max_allowed_packet = 2G
and in /usr/share/phpmyadmin/libraries/config.default.php:
$cfg['ExecTimeLimit'] = '0'; // to make it unlimited; the default was 300
Even after all these settings I am getting errors.
When I tried from the terminal with
mysql -u root -p dbname < mydbfile.sql and then entered the password, I got the
error: ERROR 2006 (HY000) at line 23: MySQL server has gone away
When I tried to import the database file using phpMyAdmin, after taking 3-4 hrs it also results in errors
like: No data received
Is there any other way, like reading the .sql file using PHP and inserting into the database one by one? Is that a good approach?
Any idea what the problem could be?
Thanks in advance!
I think you should split the .sql insert commands and import them in multiple phases; a streaming sketch of that idea follows below. Do check the "max connection time" setting.
You can also try http://www.ozerov.de/bigdump/
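This is essentially what BigDump does, and it also answers the "read the .sql file using PHP and insert one by one" idea from the question. A minimal sketch, assuming statements end with ';' at the end of a line and no string literal puts a ';' at a line end; host, credentials and file name are placeholders:
<?php
// stream_import.php - execute a dump one statement at a time
$mysqli = new mysqli('localhost', 'root', '', 'dbname');
$handle = fopen('mydbfile.sql', 'r');
$stmt = '';
while (($line = fgets($handle)) !== false) {
    $trim = trim($line);
    if ($trim === '' || substr($trim, 0, 2) === '--') continue; // skip blanks and comments
    $stmt .= $line;
    if (substr($trim, -1) === ';') {              // end of a statement
        if (!$mysqli->query($stmt)) {
            die('Import error: ' . $mysqli->error);
        }
        $stmt = '';
    }
}
fclose($handle);
$mysqli->close();
Because each statement travels in its own packet, this keeps every packet small and works around both max_allowed_packet and PHP's upload limits.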
This may be because of max_allowed_packet.
Change it in the my.ini/my.cnf file. Include this single line under [mysqld] in your file:
max_allowed_packet=500M
Now restart the MySQL service once you are done.
You can see its current value in MySQL like this:
SHOW VARIABLES LIKE 'max_allowed_packet';
You can try to change it like this, but it's unlikely this will work on shared hosting:
SET GLOBAL max_allowed_packet=16777216;
You can read about it here http://dev.mysql.com/doc/refman/5.1/en/packet-too-large.html
