phpmyadmin import & overwrite table - php

This has been annoying me for weeks and I can't find a proper solution.
I'm running a VPS with:
CentOS 7 (and aaPanel, which is not relevant here)
PHP 7.4
MySQL 5.7
phpMyAdmin 5.0
I've gone into phpMyAdmin, exported a table, updated 5,000 rows, and now I want to import the file and overwrite the old data in the same table.
The 'Browse' and Import route is not an option (the VPS throws a 503 error / has too little RAM to load it), so I've tried doing it from the SQL tab:
LOAD DATA LOCAL INFILE '/database/links.csv'
INTO TABLE links
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n';
which fails with permission denied.
Yes, I've added
[MySQLi]
mysqli.allow_local_infile = On
to my.cnf, also tried adding it to php.ini,
restarted Apache, and even tried removing LOCAL (saw that on Stack Overflow too), all to no avail.
Does anyone have an updated approach, or know of a solid solution to this annoying but should-be-easy problem?
EDIT
Using the root user fixes the permission denied issue... but:
LOAD DATA INFILE
error:
#1290 - The MySQL server is running with the --secure-file-priv option so it cannot execute this statement
--secure-file-priv has been removed from my.cnf and Apache restarted, and the error still appears.
LOAD DATA LOCAL INFILE
error:
#2000 - LOAD DATA LOCAL INFILE is forbidden, check mysqli.allow_local_infile
mysqli.allow_local_infile = On is still in my.cnf.
The file has full permissions (777) and I've tried changing the owner (www/root/mysql).

Carefully read the difference between the LOCAL and non-LOCAL versions of the LOAD DATA command: https://dev.mysql.com/doc/refman/5.7/en/load-data.html#load-data-local
For the non-LOCAL version of the command you need to check the state of the secure_file_priv config variable:
mysql> select @@secure_file_priv;
+-----------------------+
| @@secure_file_priv    |
+-----------------------+
| /var/lib/mysql-files/ |
+-----------------------+
The file must be located in that directory.
For the LOCAL version of the command you must check that the client has permission to read the file. In the case of phpMyAdmin, PHP is the client.
In both cases, double-check that the MySQL server or PHP has enough permissions to enter the directory and read the file. The easiest way is to log in as the system user MySQL or PHP runs under and simply try to read the file:
sudo -u mysql_or_php_user /bin/bash
or
su -s /bin/bash mysql_or_php_user
then
head /database/links.csv
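If you end up driving the import from PHP instead of the phpMyAdmin UI, here is a minimal sketch of the LOCAL variant using mysqli, with local infile explicitly enabled on the client side. The credentials are placeholders; it reuses the /database/links.csv path and links table from the question, and it assumes local_infile is also enabled on the server:
<?php
// Minimal sketch: LOAD DATA LOCAL INFILE from PHP via mysqli.
$conn = mysqli_init();
// Enable LOCAL INFILE for this client connection (server must allow it too).
mysqli_options($conn, MYSQLI_OPT_LOCAL_INFILE, true);
mysqli_real_connect($conn, 'localhost', 'db_user', 'db_pass', 'db_name');
$sql = "LOAD DATA LOCAL INFILE '/database/links.csv'
        INTO TABLE links
        FIELDS TERMINATED BY ','
        LINES TERMINATED BY '\\n'";
if (!mysqli_query($conn, $sql)) {
    echo mysqli_error($conn);   // e.g. the 1290/2000 errors described above
}
mysqli_close($conn);
To actually overwrite existing rows rather than append, either TRUNCATE the table first or use the REPLACE keyword (LOAD DATA LOCAL INFILE '...' REPLACE INTO TABLE links ...), which replaces rows that collide on a primary or unique key.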

Related

load data infile is not allowed MariaDB

I created a PHP script that imports posts from a CSV file into a WordPress website.
To do this, I first bulk import the posts into a table of the WP website database and then the PHP script creates the posts.
The bulk insert MYSQL query I use is the following:
load data local infile '/var/www/vhosts/sitenamehere.test/test.csv' into table test_table character set latin1 fields terminated by ';' lines terminated by '\r\n' ignore 1 lines;
When I run the script from the server I get the following error:
"the used command is not allowed with this MariaDB version for the query load data local infile..."
The problem occurs only when I execute the script from the server; in fact, if I run the same query from phpMyAdmin, it lets me import the file.
Since my script not only imports but also updates posts, the intention was to create a cron job so that the script is executed multiple times a day. Obviously this is not possible if I keep getting the same error.
I tried adding:
the line local-infile=1 under the [client] and [mysqld] sections of my.cnf
the line mysql.allow_local_infile=On under the [mysql] section of
my.cnf
the line mysql.allow_local_infile=On under the [MySQLi] section of php.ini located at /opt/plesk/php/7.1/etc
But nothing helped. Any ideas?
You must add AllowLoadLocalInfile=true; to your MySQL/MariaDB server connection string when you want to load a local file.
If you're running something like a LOAD DATA LOCAL INFILE command from the mysql client, add --local_infile=1 to the client invocation and it should work.
In recent versions of both servers this functionality is disabled by default and should only be enabled when necessary.
The guide at
https://mariadb.com/kb/en/library/load-data-infile/
says
If the local_infile system variable is set to 0, attempts to perform a LOAD DATA LOCAL will fail with an error message.
Your best bet is to change the my.ini/my.cnf file that's being used.
Moreover the used database user needs the FILE privilege.
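For a cron-driven PHP script like the one described, a hedged sketch along these lines enables local infile on the PDO connection itself, which is often the missing piece even after the my.cnf/php.ini changes. Host, credentials and database name below are placeholders; the CSV path and table come from the question:
<?php
// Sketch only: LOAD DATA LOCAL INFILE through PDO with local infile enabled.
$pdo = new PDO(
    'mysql:host=localhost;dbname=wp_database',
    'db_user',
    'db_pass',
    array(PDO::MYSQL_ATTR_LOCAL_INFILE => true)
);
$pdo->exec("LOAD DATA LOCAL INFILE '/var/www/vhosts/sitenamehere.test/test.csv'
    INTO TABLE test_table
    CHARACTER SET latin1
    FIELDS TERMINATED BY ';'
    LINES TERMINATED BY '\\r\\n'
    IGNORE 1 LINES");
The server-side local_infile setting still has to allow it; the connection option only covers the client half.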

PDOException (2006) SQLSTATE[HY000] [2006] MySQL server has gone away [duplicate]

I get this error when I try to source a large SQL file (a big INSERT query).
mysql> source file.sql
ERROR 2006 (HY000): MySQL server has gone away
No connection. Trying to reconnect...
Connection id: 2
Current database: *** NONE ***
ERROR 2006 (HY000): MySQL server has gone away
No connection. Trying to reconnect...
Connection id: 3
Current database: *** NONE ***
Nothing in the table is updated. I've tried deleting and undeleting the table/database, as well as restarting MySQL. None of these things resolve the problem.
Here is my max-packet size:
+--------------------+---------+
| Variable_name | Value |
+--------------------+---------+
| max_allowed_packet | 1048576 |
+--------------------+---------+
Here is the file size:
$ ls -s file.sql
79512 file.sql
When I try the other method...
$ ./mysql -u root -p my_db < file.sql
Enter password:
ERROR 2006 (HY000) at line 1: MySQL server has gone away
max_allowed_packet=64M
Adding this line to the my.cnf file solved my problem.
This is useful when the columns have large values, which is what causes the issue.
On Windows this file is located at: "C:\ProgramData\MySQL\MySQL Server 5.6"
On Linux (Ubuntu): /etc/mysql
You can increase Max Allowed Packet
SET GLOBAL max_allowed_packet=1073741824;
http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html#sysvar_max_allowed_packet
The global update and the my.cnf settings didn't work for me for some reason. Passing the max_allowed_packet value directly to the client worked here:
mysql -h <hostname> -u username -p --max_allowed_packet=1073741824 <databasename> < db.sql
In general the error:
Error: 2006 (CR_SERVER_GONE_ERROR) - MySQL server has gone away
means that the client couldn't send a question to the server.
mysql import
In your specific case while importing the database file via mysql, this most likely mean that some of the queries in the SQL file are too large to import and they couldn't be executed on the server, therefore client fails on the first occurred error.
So you have the following possibilities:
Add force option (-f) for mysql to proceed and execute rest of the queries.
This is useful if the database has some large queries related to cache which aren't relevant anyway.
Increase max_allowed_packet and wait_timeout in your server config (e.g. ~/.my.cnf).
Dump the database using --skip-extended-insert option to break down the large queries. Then import it again.
Try applying --max-allowed-packet option for mysql.
Common reasons
In general this error could mean several things, such as:
a query to the server is incorrect or too large,
Solution: Increase max_allowed_packet variable.
Make sure the variable is under [mysqld] section, not [mysql].
Don't be afraid to use large numbers for testing (like 1G).
Don't forget to restart the MySQL/MariaDB server.
Double check the value was set properly by:
mysql -sve "SELECT ##max_allowed_packet" # or:
mysql -sve "SHOW VARIABLES LIKE 'max_allowed_packet'"
You got a timeout from the TCP/IP connection on the client side.
Solution: Increase wait_timeout variable.
You tried to run a query after the connection to the server has been closed.
Solution: A logic error in the application should be corrected.
Host name lookups failed (e.g. DNS server issue), or server has been started with --skip-networking option.
Another possibility is that your firewall blocks the MySQL port (e.g. 3306 by default).
The running thread has been killed, so retry again.
You have encountered a bug where the server died while executing the query.
A client running on a different host does not have the necessary privileges to connect.
And many more, so learn more at: B.5.2.9 MySQL server has gone away.
Debugging
Here are few expert-level debug ideas:
Check the logs, e.g.
sudo tail -f $(mysql -Nse "SELECT @@GLOBAL.log_error")
Test your connection via mysql, telnet or ping functions (e.g. mysql_ping in PHP).
Use tcpdump to sniff the MySQL communication (won't work for socket connection), e.g.:
sudo tcpdump -i lo0 -s 1500 -nl -w- port mysql | strings
On Linux, use strace. On BSD/Mac use dtrace/dtruss, e.g.
sudo dtruss -a -fn mysqld 2>&1
See: Getting started with DTracing MySQL
Learn more how to debug MySQL server or client at: 26.5 Debugging and Porting MySQL.
For reference, check the source code in sql-common/client.c file responsible for throwing the CR_SERVER_GONE_ERROR error for the client command.
MYSQL_TRACE(SEND_COMMAND, mysql, (command, header_length, arg_length, header, arg));
if (net_write_command(net, (uchar) command, header, header_length,
                      arg, arg_length))
{
  set_mysql_error(mysql, CR_SERVER_GONE_ERROR, unknown_sqlstate);
  goto end;
}
I solved the error ERROR 2006 (HY000) at line 97: MySQL server has gone away and successfully migrated a >5GB sql file by performing these two steps in order:
Created /etc/my.cnf as others have recommended, with the following contents:
[mysql]
connect_timeout = 43200
max_allowed_packet = 2048M
net_buffer_length = 512M
debug-info = TRUE
Appended the flags --force --wait --reconnect to the command (i.e. mysql -u root -p -h localhost my_db < file.sql --verbose --force --wait --reconnect).
Important Note: It was necessary to perform both steps, because if I didn't bother making the changes to /etc/my.cnf file as well as appending those flags, some of the tables were missing after the import.
System used: OSX El Capitan 10.11.5; mysql Ver 14.14 Distrib 5.5.51 for osx10.8 (i386)
Just in case, to check variables you can use
$> mysqladmin variables -u user -p
This will display the current variables, in this case max_allowed_packet, and as someone said in another answer you can set it temporarily with
mysql> SET GLOBAL max_allowed_packet=1072731894
In my case the cnf file was not taken into account and I don't know why, so the SET GLOBAL code really helped.
You can also log into the database as root (or SUPER privilege) and do
set global max_allowed_packet=64*1024*1024;
This doesn't require a MySQL restart. Note that you should still fix your my.cnf file as outlined in other solutions:
[mysqld]
max_allowed_packet=64M
And confirm the change after you've restarted MySQL:
show variables like 'max_allowed_packet';
You can use the command-line as well, but that may require updating the start/stop scripts which may not survive system updates and patches.
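If you want to confirm from PHP (rather than the mysql client) what value your application connections actually see, a tiny sketch like this works; the credentials are placeholders. Remember that SET GLOBAL only affects new connections, so reconnect before checking:
<?php
// Report the max_allowed_packet value the server advertises to this connection.
$mysqli = new mysqli('localhost', 'db_user', 'db_pass');
$row = $mysqli->query("SHOW VARIABLES LIKE 'max_allowed_packet'")->fetch_assoc();
echo $row['Variable_name'] . ' = ' . $row['Value'] . " bytes\n";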
As requested, I'm adding my own answer here. Glad to see it works!
The solution is increasing the values given the wait_timeout and the connect_timeout parameters in your options file, under the [mysqld] tag.
I had to recover a 400MB mysql backup and this worked for me (the values I've used below are a bit exaggerated, but you get the point):
[mysqld]
port=3306
explicit_defaults_for_timestamp = TRUE
connect_timeout = 1000000
net_write_timeout = 1000000
wait_timeout = 1000000
max_allowed_packet = 1024M
interactive_timeout = 1000000
net_buffer_length = 200M
net_read_timeout = 1000000
delayed_insert_timeout = 100000
I had the same problem, but changing max_allowed_packet in the my.ini/my.cnf file under [mysqld] did the trick.
Add a line
max_allowed_packet=500M
then restart the MySQL service once you are done.
A couple of things could be happening here:
Your INSERT is running long, and the client is disconnecting. When it reconnects it doesn't select a database, hence the error. One option here is to run your batch file from the command line, and select the database in the arguments, like so:
$ mysql db_name < source.sql
Another is to run your command via PHP or some other language. After each long-running statement, you can close and re-open the connection, ensuring that you're connected at the start of each query (a rough sketch follows).
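A rough sketch of that second approach, with placeholder credentials; the naive split on ";\n" is only for illustration and would break on semicolons inside string literals:
<?php
// Run each statement on a fresh connection so a dropped connection
// never leaves us without a selected database.
$statements = array_filter(array_map('trim',
    explode(";\n", file_get_contents('source.sql'))));
foreach ($statements as $statement) {
    $mysqli = new mysqli('localhost', 'db_user', 'db_pass', 'db_name');
    $mysqli->query($statement);
    $mysqli->close();
}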
If you are on Mac and installed mysql through brew like me, the following worked.
cp $(brew --prefix mysql)/support-files/my-default.cnf /usr/local/etc/my.cnf
Source: For homebrew mysql installs, where's my.cnf?
add max_allowed_packet=1073741824 to /usr/local/etc/my.cnf
mysql.server restart
I had the same problem in XAMPP.
Method 1: I changed max_allowed_packet in the D:\xampp\mysql\bin\my.ini file as below:
max_allowed_packet=500M
Finally, restart the MySQL service and you're done.
Method 2:
The easier way if you are using XAMPP: open the XAMPP control panel and click on the Config button in the MySQL section.
Now click on my.ini and it will open in the editor. Update max_allowed_packet to your required size.
Then restart the MySQL service: click Stop on the MySQL service, click Start again, and wait a few minutes.
Then try to run your MySQL query again. Hopefully it will work.
I encountered this error when using MySQL Cluster; I do not know whether this question is about a cluster setup or not, but as the error is exactly the same, I'll give my solution here.
I was getting this error because the data nodes suddenly crashed. When the nodes crash, you can still get a correct result using the command:
ndb_mgm -e 'ALL REPORT MEMORYUSAGE'
And mysqld also kept working correctly, so at first I could not understand what was wrong. About 5 minutes later, the ndb_mgm result showed no data nodes working, and then I realized the problem. So, restart all the data nodes; then the MySQL server comes back and everything is OK.
One thing that was weird to me: after I lost the MySQL server for some queries, when I used a command like show tables, I could still get a return like 33 rows in set (5.57 sec), but no table info was displayed.
This error message also occurs when you created the SCHEMA with a different COLLATION than the one which is used in the dump. So, if the dump contains
CREATE TABLE `mytab` (
..
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;
you should also reflect this in the SCHEMA collation:
CREATE SCHEMA myschema COLLATE utf8_unicode_ci;
I had been using utf8mb4_general_ci in the schema, because my script came from a fresh v8 installation; loading the DB into an old 5.7 server crashed and drove me nearly crazy.
So maybe this helps you save some frustrating hours... :-)
(MacOS 10.3, mysql 5.7)
Add max_allowed_packet=64M to [mysqld]
[mysqld]
max_allowed_packet=64M
Restart the MySQL server.
If it's reconnecting and getting connection ID 2, the server has almost definitely just crashed.
Contact the server admin and get them to diagnose the problem. No non-malicious SQL should crash the server, and the output of mysqldump certainly should not.
It is probably the case that the server admin has made some big operational error such as assigning buffer sizes of greater than the architecture's address-space limits, or more than virtual memory capacity. The MySQL error-log will probably have some relevant information; they will be monitoring this if they are competent anyway.
This is more of a rare issue, but I have seen it when someone copied the entire /var/lib/mysql directory as a way of migrating their DB to another server. The reason it doesn't work is that the database was running and using log files. Sometimes it also fails if there are logs in /var/log/mysql; the solution is to copy the /var/log/mysql files as well.
For Amazon RDS (my case), you can change the max_allowed_packet parameter value to any numeric value in bytes that makes sense for the biggest data in any insert you may have (e.g. if you have some 50MB blob values in your insert, set max_allowed_packet to 64M = 67108864), in a new or existing parameter group. Then apply that parameter group to your MySQL instance (this may require rebooting the instance).
For Drupal 8 users looking for a solution to a DB import failure:
At the end of the SQL dump file there can be commands inserting data into the "webprofiler" table.
That is, I guess, some debug log data and is not really important for the site to work, so all of it can be removed. I deleted all those inserts, including LOCK TABLES and UNLOCK TABLES (and everything in between). It's at the very bottom of the SQL file. The issue is described here:
https://www.drupal.org/project/devel/issues/2723437
But there is no solution for it besides truncating that table.
BTW, I tried all the solutions from the answers above and nothing else helped.
I tried all of the above solutions and all of them failed.
I ended up using -h 127.0.0.1 instead of the default var/run/mysqld/mysqld.sock.
If you have tried all these solutions, especially increasing max_allowed_packet up to the maximum supported value of 1GB, and you are still seeing these errors, it might be that your server literally does not have enough free RAM available...
The solution: upgrade your server to more RAM and try again.
Note: I'm surprised this simple solution has not been mentioned after 8+ years of discussion on this thread... sometimes we developers tend to overthink things.
Eliminating the errors which triggered warnings was the final solution for me. I also changed max_allowed_packet, which helped with smaller files that had errors. Eliminating the errors also sped up the process incredibly.
If none of these answers solves the problem for you, I solved it by removing the tables and creating them again automatically in this way:
When creating the backup, first back up the structure and be sure to add:
DROP TABLE / VIEW / PROCEDURE / FUNCTION / EVENT
CREATE PROCEDURE / FUNCTION / EVENT
IF NOT EXISTS
AUTO_INCREMENT
Then just use this backup with your DB and it will remove and recreate the tables you need.
Then back up just the data, do the same, and it will work.
How about using the mysql client like this:
mysql -h <hostname> -u username -p <databasename> < file.sql

How to make a copy of large database from phpmyadmin?

I want to create a dev environment of my website on the same server, but I have a 7 GB database containing 479 tables and I want to make a copy of that database to a new DB.
I have tried this with phpMyAdmin's Operations >> "Copy database to" functionality, but every time it fails and returns the error:
Error in processing request Error code: 500 Error text: Internal Error.
Please let me know if there is any other method/solution to copy this database to a new database from cPanel. Please advise.
Create an export of your database. This should be easily done through the phpMyAdmin interface. Once you have downloaded the DB export, you need to create a new DB where you will put your exported data. This, too, should be easily done through the phpMyAdmin user interface.
To upload it, we cannot use Import -> Browse your computer because it has a limit of 2MB. One solution is to use Import -> Select from the web server upload directory /var/lib/phpMyAdmin/upload/. Upload your exported data to this directory. After that, your uploaded file should be listed in the dropdown next to it.
If this fails too, you can use the command line import.
mysql -u user -p db_name < /path/to/file.sql
Limited to phpMyAdmin? Don't do it all at once
Large data sets shouldn't be dumped (unless it's for a backup); instead, export the database without data, then copy one table at a time (DB to DB directly).
Export/Import Schema
First, export only the database schema via phpMyAdmin (uncheck data in the export options). Then import that onto a new database name.
Alternatively, once you've created the DB, you could generate statements like the one below. The catch with this method is that you're likely to lose constraints, sprocs, and the like.
CREATE TABLE [devDB].[table] LIKE [prodDB].[table]
Copy data, one table at a time.
Use a good editor to create the 470 insert statements you need. Start with a list of table names, and use the good old find-and-replace.
INSERT INTO [devDB].[table] SELECT * FROM [prodDB].[table];
This may choke, depending on your environment. If it does, drop and recreate the dev database (or empty all tables via phpMyAdmin). Then, run the INSERT commands a few tables at a time.
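If hand-writing the statements in an editor gets tedious, a small hedged PHP sketch can generate and run the same per-table copies. The database names and credentials are placeholders; it assumes the dev tables already exist and are empty:
<?php
// Copy rows from prodDB to devDB one table at a time,
// so no single statement grows too large for phpMyAdmin to handle.
$mysqli = new mysqli('localhost', 'db_user', 'db_pass', 'prodDB');
$result = $mysqli->query('SHOW TABLES');
while ($row = $result->fetch_row()) {
    $table = $row[0];
    // One INSERT ... SELECT per table keeps each statement small.
    $mysqli->query("INSERT INTO `devDB`.`$table` SELECT * FROM `prodDB`.`$table`");
}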
Database Administration requires CLI
The real problem you're facing here is that you're trying to do database administration without access to the Command Line Interface. There are significant complicated details to migrating large sets of data efficiently, most of which can only be solved using tools like mysqldump.
NOTE: I have just read your comment, and as I understand it, you don't have access to the command line. Please check Solution Two; this will definitely work.
The only solution that will work for you (and which worked for me with a 12GB database) is directly from the command line:
Solution One
mysql -u root -p
set global net_buffer_length=1000000; -- Set network buffer length to a large byte number
set global max_allowed_packet=1000000000; -- Set maximum allowed packet size to a large byte number
SET foreign_key_checks = 0; -- Disable foreign key checking to avoid delays, errors and unwanted behavior
source file.sql -- Import your sql dump file
SET foreign_key_checks = 1; -- Remember to enable foreign key checks when the procedure is complete!
If you have root access you can create bash script:
#!/bin/sh
# store start date to a variable
imeron=`date`
echo "Import started: OK"
dumpfile="/home/bob/bobiras.sql"
ddl="set names utf8; "
ddl="$ddl set global net_buffer_length=1000000;"
ddl="$ddl set global max_allowed_packet=1000000000; "
ddl="$ddl SET foreign_key_checks = 0; "
ddl="$ddl SET UNIQUE_CHECKS = 0; "
ddl="$ddl SET AUTOCOMMIT = 0; "
# if your dump file does not create a database, select one
ddl="$ddl USE jetdb; "
ddl="$ddl source $dumpfile; "
ddl="$ddl SET foreign_key_checks = 1; "
ddl="$ddl SET UNIQUE_CHECKS = 1; "
ddl="$ddl SET AUTOCOMMIT = 1; "
ddl="$ddl COMMIT ; "
echo "Import started: OK"
time mysql -h 127.0.0.1 -u root -proot -e "$ddl"
# store end date to a variable
imeron2=`date`
echo "Start import:$imeron"
echo "End import:$imeron2"
Source
Solution Two
Also, there is another option which is very good for those who are on shared hosting and don't have command line access. This solution worked for me on 4-5GB files:
MySQL Dumper: Download (you will be able to back up/restore an SQL file directly from MySQL Dumper; you don't need phpMyAdmin anymore).
BigDump: Download (restores from a compressed file or plain SQL file; for a big import you need to edit the BigDump PHP file and change $linespersession = 3000; to $linespersession = 30000;). A bare-bones sketch of the same chunking idea follows below.
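For what it's worth, the chunking idea behind BigDump can be sketched in plain PHP: execute the dump a limited number of statements per request and remember the file offset between runs. This is a deliberately simplified illustration with placeholder credentials and file name, not a replacement for those tools; it assumes each statement ends with ";" at the end of a line:
<?php
// Minimal chunked-import sketch: run at most 3000 statements per request,
// then report the offset so the next request can resume from there.
$mysqli = new mysqli('localhost', 'db_user', 'db_pass', 'db_name');
$handle = fopen('dump.sql', 'r');
fseek($handle, isset($_GET['offset']) ? (int) $_GET['offset'] : 0);
$query = '';
$executed = 0;
while ($executed < 3000 && ($line = fgets($handle)) !== false) {
    if (trim($line) === '' || strpos($line, '--') === 0) {
        continue;                      // skip blank lines and SQL comments
    }
    $query .= $line;
    if (substr(rtrim($line), -1) === ';') {
        $mysqli->query($query);        // one complete statement
        $query = '';
        $executed++;
    }
}
echo 'Resume from offset: ' . ftell($handle);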
Solution Three:
This solution definitely works; it is slow, but it works.
Download the trial version (32 or 64 bit) of Navicat MySQL Version 12.
Install it and run it as a trial.
After that, add your computer's IP (internet IP, not local IP) to Remote MySQL in cPanel (new database/hosting). You can use a wildcard IP in cPanel to allow MySQL access from any IP.
Go to Navicat MySQL: click on Connection and enter a connection name.
In the next field, "Hostname/IP", add your hosting IP address (don't use localhost).
Leave the port as it is (if your hosting defined a different port, put that one here).
Add your database username and password.
Click Test Connection; if it's successful, click "OK".
Now on the main screen you will see all the databases connected with that username in the left-hand column.
Double click on the database into which you want to import the SQL file:
the icon color of the database will change and you will see "Tables/Views/Functions etc.".
Now right click on the database and select "Execute SQL file" (http://prntscr.com/gs6ef1).
Choose the file, choose "continue on error" if you want to, and finally run it. It takes some time depending on your network connection speed and computer performance.
The easiest way is to try exporting the data from phpMyAdmin. It will create a backup of your data.
But sometimes, transferring a large amount of data via import/export does result in errors.
You can try mysqldump to back up the data as well; see the mysqldump database backup documentation for details.
Hope it helps. :D
You can use mysqldump as follows:
mysqldump --user= --password= --default-character-set=utf8
You can also make use of my shell script, which I actually wrote long back for creating backups of a MySQL database on a regular basis using a cron job.
#!/bin/sh
now="$(date +'%d_%m_%Y_%H_%M_%S')"
filename="db_backup_$now".gz
backupfolder=""
fullpathbackupfile="$backupfolder/$filename"
logfile="$backupfolder/"backup_log_"$(date +'%Y_%m')".txt
echo "mysqldump started at $(date +'%d-%m-%Y %H:%M:%S')" >> "$logfile"
mysqldump --user= --password= --default-character-set=utf8 | gzip > "$fullpathbackupfile"
echo "mysqldump finished at $(date +'%d-%m-%Y %H:%M:%S')" >> "$logfile"
chown "$fullpathbackupfile"
chown "$logfile"
echo "file permission changed" >> "$logfile"
find "$backupfolder" -name db_backup_* -mtime +2 -exec rm {} \;
echo "old files deleted" >> "$logfile"
echo "operation finished at $(date +'%d-%m-%Y %H:%M:%S')" >> "$logfile"
echo "*****************" >> "$logfile"
exit 0
I have already written an article on Schedule MySQL Database backup on CPanel or Linux.
Here's how I handled that problem when I faced it... Unfortunately this only works for Mac OS.
Download Sequel Pro - Completely free, and it has worked really well for me for over a year now.
Remotely connect to your server's database. You will probably need to add your ip address to the "Remote MYSQL" section in CPANEL. If you don't have the credentials, you can probably get them from your website's config file.
Once you're in the server, you can select all of your tables, secondary click, and select Export > As SQL Dump. You probably won't need to edit any of the settings. Click "Export".
Login to your local servers database, and select "Query" from the top menu.
Drag and drop the file that was downloaded from the export and it will automatically setup the database from the sql dump.
I hope this helps. It's a little work around, but it's worked really well for me, especially when PMA has failed.
Since the requirements include PHPMyAdmin, my suggestion is to:
select the database you need
go to the "Export" tab
click the "Custom - display all possible options" radio button
in the "Save output to a file" radio button options, select "gzipped" for "Compression:"
Remove the "Display comments" tick (to save some space)
Finish the export
Then try to import the generated file in the new Database you have (if you have sufficient resources - this should be possible).
Note: My previous experience shows that using compression allows larger DB exports/import operations but have not tested what is the upper limit in shared hosting environments (assuming this by your comment for cPanel).
Edit: When your export file is created, select the new database (assuming it is already created), go to the "Import" tab, select the file created from the export and start the import process.
If you have your database on your local server, you can export it and use BigDump to insert it into the new database on the remote server: BigDump
I doubt that phpMyAdmin will handle databases of that size (PHP upload/download limits, memory constraints, script execution time).
If you have access to the console, I would recommend doing the export/import via the mysql command line:
Export:
$ mysqldump -u <user> -p<pass> <liveDatabase> | gzip > export.sql.gz
And Import:
$ gunzip < export.sql.gz | mysql -u <user> -p<pass> <devDatabase>
after you have created the new dev database in e.g. PHPMyAdmin or via command line.
Otherwise, if you only have access to an Apache/PHP environment, I would look for an export utility that splits export in smaller chunks. MySQLDumper comes to mind, but it's a few years old and AFAIK it is no longer actively maintained and is not compatible with PHP 7+.
But I think there is at least a pull request out there that makes it work with PHP7 (untested).
Edit based on your comment:
If the export already exists and the error occurs on import, you could try to increase the limits on your PHP environment, either via entries in .htaccess, changing php.ini or ini_set, whatever is available in your environment. The relevant settings are e.g. for setting via .htaccess (keep in mind, this will work only for apache environments with mod_php and also can be controlled by your hoster):
php_value max_execution_time 3600
php_value post_max_size 8000M
php_value upload_max_filesize 8000M
php_value max_input_time 3600
This may or may not work, depending on x32/x64 issues and/or your hosters restrictions.
Additionally, you need to adjust the PHPmyadmin settings for ExecTimeLimit - usually found in the config.default.php for your PHPMyAdmin installation:
Replace
$cfg['ExecTimeLimit'] = 300;
with
$cfg['ExecTimeLimit'] = 0;
And finally, you probably need to adjust your MySQL config to allow larger packets and get rid of the 'lost connection' error:
[mysqld] section in my.ini :
max_allowed_packet=256M

form on this page has more than 1000 fields

I have a database on my localhost; when I tried to export the database it gave me this error:
Warning: a form on this page has more than 1000 fields. On submission,
some of the fields might be ignored, due to PHP's max_input_vars
configuration.
So I changed max_input_vars = 1000 to max_input_vars = 10000.
Now I am able to download the database, but it is taking a lot of time to import on the server and it might also corrupt my database.
Is there any other option to get my database working? I have around 450 tables and some tables have around 4000-5000 entries.
I am working on Windows 7 with a XAMPP server and I created this database for a Magento website.
This is 100% working.
First, find max_input_vars in the php.ini file
and change
;max_input_vars = 1000
to
max_input_vars = 1000
This works fine.
The following was the solution for me:
First of all, be sure that you are editing the right php.ini file. To do that, you can check which one it is with phpinfo().
Then restart PHP to reload php.ini; depending on how you installed it, this could be done like so:
brew services restart php72
Restart Apache as well, to be sure:
sudo apachectl -k restart
Then put the following statement in a test.php file:
echo ini_get('max_input_vars'); and check it.
At this point you will see it is correctly updated.
If phpMyAdmin still shows the error after that, you should hard refresh your phpMyAdmin page: this can be done with CMD+SHIFT+R on a Mac.
The reason for this is that phpMyAdmin's local JavaScript file is passed the value of max_input_vars from PHP and is responsible for deciding whether to display the error. So if the local JavaScript file isn't updated, for caching reasons for example, you just have to refresh it and the problem is gone :)
You should provide more information about what system you are on and the method you tried to export the database, otherwise answers can't be specific.
mysqldump is what you need for exports.
On Linux (and probably on Apple computers too) you can simply use:
mysqldump -u YOUR_USER -p YOUR_DATABASE > DESIRED_FILE_NAME.sql
mysqldump should also be available for Windows.

LOAD DATA LOCAL INFILE fails - from php, to mysql (on Amazon rds)

We're moving our database from being on the webserver to a separate server (from an Amazon EC2 webserver to an RDS instance.)
We have a LOAD DATA INFILE that worked before that is going to need the LOCAL keyword added now that the database will be on a different machine to the webserver.
Testing on my dev server, it turns out that it doesn't work:
I can still LOAD DATA INFILE from php as I have been
I can LOAD DATA LOCAL INFILE from mysql commandline (with --local_infile=1)
I can't LOAD DATA LOCAL INFILE from php.
Between those 2 things that do work, it rules out:
problems with the sql or php code
problems with the upload file, including syntax and file permissions
mysql server settings problems
The error I get is:
ERROR 1148 (42000): The used command is not allowed with this MySQL version
(I get that error from the mysql commandline if I don't use --local_infile=1)
A few other bits of relevant info:
Ubuntu 12.04, mysql 5.5.24, php 5.3.10
I'm using php's mysql_connect (instead of mysqli, because we're planning on using facebook's hiphop compiler which doesn't support mysqli.)
Because of that, the connect command needs an extra flag set:
mysql_connect($dbHost, $dbUser, $dbPass, false, 128);
I've used phpinfo() to confirm that mysql.allow_local_infile = On
I've tried it on Amazon RDS (in case it was a problem in my dev server) and it doesn't work there either. (With the local_infile param turned on.)
The only thing I've read about that I haven't tried is to compile mysql server on my dev server with the flag turned on to allow local infile... but even if I get that working on my dev server it's not going to help me with Amazon RDS. (Besides which, LOAD DATA LOCAL INFILE does work from the mysql commandline.)
It seems like it's specifically a problem with php's mysql_connect()
Anybody using LOAD DATA LOCAL INFILE (maybe from Amazon RDS) that knows the trick to getting this to work?
I've given up on this, as I think it's a bug in php - in particular the mysql_connect code, which is now deprecated. It could probably be solved by compiling php yourself with changes to the source using steps similar to those mentioned in the bug report that #eggyal mentioned: https://bugs.php.net/bug.php?id=54158
Instead, I'm going to work around it by doing a system() call and using the mysql command line:
$sql = "LOAD DATA LOCAL INFILE '$csvPathAndFile' INTO TABLE $tableName FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '\\\"' ESCAPED BY '\\\\\\\\' LINES TERMINATED BY '\\\\r\\\\n';";
system("mysql -u $dbUser -h $dbHost --password=$dbPass --local_infile=1 -e \"$sql\" $dbName");
That's working for me.
Here's a check list to rule out this nasty bug:
1- Grant the user FILE privileges in MySQL; phpMyAdmin generally does not cover this privilege:
GRANT FILE ON *.* TO 'db_user'@'localhost';
2- Edit my.cnf in /etc/mysql/ or your mysql path:
[mysql]
local-infile=1
[mysqld]
local-infile=1
3- In php.ini at /etc/php5/cli/ or similar:
mysql.allow_local_infile = On
Optionally you can run ini_set in your script:
ini_set('mysql.allow_local_infile', 1);
4- The database handler library must use the correct options.
PDO:
new PDO('mysql:host='.$db_host.';dbname='.$db_name, $db_user, $db_pass,
    array(PDO::MYSQL_ATTR_LOCAL_INFILE => 1));
mysqli:
$conn = mysqli_init();
mysqli_options($conn, MYSQLI_OPT_LOCAL_INFILE, true);
mysqli_real_connect($conn, $db_host, $db_user, $db_pass, $db_name);
5- Make sure that the INFILE command uses the absolute path to the file and that it exists:
$sql = "LOAD DATA INFILE '".realpath(is_file($file))."'";
6- Check that the target file and parent directory are readable by PHP and by MySQL.
$ sudo chmod 777 file.csv
7- If you are working locally you can remove the LOCAL from your SQL:
LOAD DATA INFILE
Instead of:
LOAD DATA LOCAL INFILE
Note: Remember to restart the MySQL and PHP services if you edit their configuration files.
Hope this helps someone.
As mentioned in this post, adding the 3rd and 4th parameters to mysql_connect is required to get LOAD DATA LOCAL INFILE working. It helped me. Other suggestions (AppArmor, local-infile=1 in my.cnf, widely discussed on the internet) did not help. The following PHP code worked for me!
mysql_connect(HOST,USER,PASS,false,128);
True, this is in the manual, too.
Use the following so that the client connects with local-infile enabled:
mysql --local-infile=1 -u root -p
If you're doing this in 2020, a tip for you: check your phpinfo.php or php --ini for the location of the configuration file. For me, I was using Virtualmin and changing the PHP ini file, but my site had its own specific ini file. Once I located it and changed it, everything went back to normal.
