We're moving our database off the webserver onto a separate server (from an Amazon EC2 webserver to an RDS instance).
We have a LOAD DATA INFILE query that worked before and will need the LOCAL keyword added now that the database is on a different machine from the webserver.
Testing on my dev server, it turns out that it doesn't work:
I can still LOAD DATA INFILE from php as I have been
I can LOAD DATA LOCAL INFILE from mysql commandline (with --local_infile=1)
I can't LOAD DATA LOCAL INFILE from php.
Between the two things that do work, that rules out:
problems with the sql or php code
problems with the upload file, including syntax and file permissions
mysql server settings problems
The error I get is:
ERROR 1148 (42000): The used command is not allowed with this MySQL version
(I get that error from the mysql commandline if I don't use --local_infile=1)
A few other bits of relevant info:
Ubuntu 12.04, mysql 5.5.24, php 5.3.10
I'm using PHP's mysql_connect (instead of mysqli, because we're planning on using Facebook's HipHop compiler, which doesn't support mysqli).
Because of that, the connect command needs an extra flag set:
mysql_connect($dbHost, $dbUser, $dbPass, false, 128);
I've used phpinfo() to confirm that mysql.allow_local_infile = On
I've tried it on Amazon RDS (in case it was a problem in my dev server) and it doesn't work there either. (With the local_infile param turned on.)
The only thing I've read about that I haven't tried is compiling the MySQL server on my dev server with the flag that allows local infile turned on... but even if I got that working on my dev server, it wouldn't help with Amazon RDS. (Besides which, LOAD DATA LOCAL INFILE already works from the mysql command line.)
It seems to be specifically a problem with PHP's mysql_connect().
Anybody using LOAD DATA LOCAL INFILE (maybe from Amazon RDS) that knows the trick to getting this to work?
I've given up on this, as I think it's a bug in PHP - in particular the mysql_connect code, which is now deprecated. It could probably be solved by compiling PHP yourself with changes to the source, using steps similar to those mentioned in the bug report that @eggyal mentioned: https://bugs.php.net/bug.php?id=54158
Instead, I'm going to work around it by doing a system() call and using the mysql command line:
$sql = "LOAD DATA LOCAL INFILE '$csvPathAndFile' INTO TABLE $tableName FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '\\\"' ESCAPED BY '\\\\\\\\' LINES TERMINATED BY '\\\\r\\\\n';";
system("mysql -u $dbUser -h $dbHost --password=$dbPass --local_infile=1 -e \"$sql\" $dbName");
That's working for me.
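If you go this route, it's worth shell-escaping everything you interpolate into the command; here's a minimal sketch of the same workaround using escapeshellarg() (the table name, file path and credentials below are placeholders, and the escaping is my addition rather than part of the original workaround):

<?php
// Placeholder values; substitute your own connection details and file path.
$csvPathAndFile = '/tmp/import.csv';
$tableName      = 'my_table';
$dbHost = 'example.rds.amazonaws.com';
$dbUser = 'db_user';
$dbPass = 'db_pass';
$dbName = 'db_name';

$sql = "LOAD DATA LOCAL INFILE '" . addslashes($csvPathAndFile) . "' INTO TABLE $tableName
        FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '\"' ESCAPED BY '\\\\'
        LINES TERMINATED BY '\\r\\n'";

// escapeshellarg() wraps each argument in single quotes so spaces or quotes
// in credentials, paths or the SQL itself can't break the shell command.
$cmd = sprintf(
    'mysql -u %s -h %s --password=%s --local-infile=1 -e %s %s',
    escapeshellarg($dbUser),
    escapeshellarg($dbHost),
    escapeshellarg($dbPass),
    escapeshellarg($sql),
    escapeshellarg($dbName)
);

system($cmd, $exitCode);
if ($exitCode !== 0) {
    error_log("LOAD DATA LOCAL INFILE via the mysql CLI failed with exit code $exitCode");
}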
Here's a checklist to rule out this nasty bug:
1- Grant the user the FILE privilege in MySQL; phpMyAdmin generally does not cover this privilege:
GRANT FILE ON *.* TO 'db_user'@'localhost';
2- Edit my.cnf in /etc/mysql/ or your mysql path:
[mysql]
local-infile=1
[mysqld]
local-infile=1
3- In php.ini at /etc/php5/cli/ or similar:
mysql.allow_local_infile = On
Optionally you can run ini_set in your script:
ini_set('mysql.allow_local_infile', 1);
4- The database handler library must use the correct options (a combined sketch follows this checklist).
PDO:
new PDO('mysql:host='.$db_host.';dbname='.$db_name, $db_user, $db_pass,
    array(PDO::MYSQL_ATTR_LOCAL_INFILE => 1));
mysqli:
$conn = mysqli_init();
mysqli_options($conn, MYSQLI_OPT_LOCAL_INFILE, true);
mysqli_real_connect($conn, $server, $user, $password, $database);
5- Make sure that the INFILE command uses the absolute path to the file and that it exists:
$sql = "LOAD DATA INFILE '".realpath(is_file($file))."'";
6- Check that the target file and parent directory are readable by PHP and by MySQL.
$ sudo chmod 777 file.csv
7- If you are working locally you can remove the LOCAL from your SQL:
LOAD DATA INFILE
Instead of:
LOAD DATA LOCAL INFILE
Note: Remember to restart the MySQL and PHP services if you edit their configuration files.
Hope this helps someone.
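Pulling items 4 and 5 together, here is a minimal mysqli sketch; the host, credentials, table name and file path are placeholders, so treat it as an illustration of the order of calls rather than a drop-in snippet:

<?php
// Placeholder connection details and file path.
$host = 'localhost';
$user = 'db_user';
$pass = 'secret';
$db   = 'db_name';
$file = '/var/www/uploads/data.csv';

$conn = mysqli_init();
// Item 4: enable LOCAL INFILE on the client side before connecting.
mysqli_options($conn, MYSQLI_OPT_LOCAL_INFILE, true);
mysqli_real_connect($conn, $host, $user, $pass, $db)
    or die('Connect failed: ' . mysqli_connect_error());

// Item 5: pass the absolute path and confirm the file exists first.
if (!is_file($file)) {
    die("File not found: $file");
}
$path = mysqli_real_escape_string($conn, realpath($file));

$sql = "LOAD DATA LOCAL INFILE '$path'
        INTO TABLE my_table
        FIELDS TERMINATED BY ','
        LINES TERMINATED BY '\\n'";

mysqli_query($conn, $sql) or die('Import failed: ' . mysqli_error($conn));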
As mentioned in this post, adding the 4th and 5th parameters to mysql_connect is required to get LOAD DATA LOCAL INFILE working. It helped me. None of the other suggestions (AppArmor, local-infile=1 in my.cnf, widely discussed on the internet) helped. The following PHP code worked for me!
mysql_connect(HOST,USER,PASS,false,128);
True, this is in the manual, too.
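For completeness, a minimal sketch of the full flow with the old mysql_* extension; 128 is the CLIENT_LOCAL_FILES client flag, and the host, credentials, table name and file path below are placeholders:

<?php
// 128 = CLIENT_LOCAL_FILES: allows LOAD DATA LOCAL INFILE on this connection.
$link = mysql_connect('db_host', 'db_user', 'db_pass', false, 128)
    or die('Connect failed: ' . mysql_error());
mysql_select_db('db_name', $link);

$file = mysql_real_escape_string('/tmp/import.csv', $link);
$sql  = "LOAD DATA LOCAL INFILE '$file'
         INTO TABLE my_table
         FIELDS TERMINATED BY ','
         LINES TERMINATED BY '\\n'";

mysql_query($sql, $link) or die('Import failed: ' . mysql_error($link));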
Use the following so that the client starts with local-infile enabled:
mysql --local-infile=1 -u root -p
If you're doing this in 2020, a tip for you: check phpinfo() or php --ini for the location of the configuration file. For me, I was using Virtualmin and changing the PHP ini file, but my site had its own specific ini file. Once I located it and changed it, everything went back to normal.
Related question:
This has been annoying me for weeks and I can't find a proper solution.
I'm running a VPS:
CentOS 7 (and aaPanel, which has no relevance)
PHP 7.4
MySQL 5.7
phpMyAdmin 5.0
I've gone into phpMyAdmin, exported a table, updated 5000 rows, and now I want to import and overwrite the old data in the same table.
The 'browse' and import option is not possible (the VPS throws a 503 error / has too little RAM to load it), so I've tried to do it from the SQL tab:
LOAD DATA LOCAL INFILE '/database/links.csv'
INTO TABLE links
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n';
This fails with permission denied.
Yes, I've added
[MySQLi]
mysqli.allow_local_infile = On
to my.conf, and even tried adding it to php.ini, and restarted Apache, and even tried removing LOCAL (saw that on Stack too), all to no avail.
Does anyone have an updated approach, or know of a solid solution to this annoying but supposedly simple problem?
EDIT
Using the root user fixes the permission denied issue... but...
LOAD DATA INFILE
error
#1290 - The MySQL server is running with the --secure-file-priv option so it cannot execute this statement
--secure-file-priv has been removed from my.conf and Apache restarted, and the error still appears.
LOAD DATA LOCAL INFILE
error
#2000 - LOAD DATA LOCAL INFILE is forbidden, check mysqli.allow_local_infile
mysqli.allow_local_infile = On is still in my.conf
The file has full permissions (777) and I've tried changing the owner (www/root/mysql).
Carefully read the difference between the LOCAL and non-LOCAL versions of the LOAD DATA command: https://dev.mysql.com/doc/refman/5.7/en/load-data.html#load-data-local
For the non-LOCAL version of the command you need to check the state of the secure_file_priv config variable:
mysql> SELECT @@secure_file_priv;
+-----------------------+
| @@secure_file_priv    |
+-----------------------+
| /var/lib/mysql-files/ |
+-----------------------+
The file must be located there.
For the LOCAL version of the command you must check that the client has permission to read the file. In the case of phpMyAdmin, PHP is the client.
In both cases, double check that the MySQL server or PHP has enough permissions to enter the directory and read the file. The easiest way is to log in as the system user for mysql or php and just try to read the file:
sudo -u mysql_or_php_user /bin/bash
or
su -s /bin/bash mysql_or_php_user
then
head /database/links.csv
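For the non-LOCAL variant, one way to satisfy secure_file_priv is to copy the file into that directory and load it from there; a sketch assuming the /var/lib/mysql-files/ value shown above, the links table from the question, and a placeholder database name:

# Copy the CSV into the directory allowed by secure_file_priv so the server can read it.
sudo cp /database/links.csv /var/lib/mysql-files/
sudo chown mysql:mysql /var/lib/mysql-files/links.csv

mysql -u root -p -e "
  LOAD DATA INFILE '/var/lib/mysql-files/links.csv'
  INTO TABLE links
  FIELDS TERMINATED BY ','
  LINES TERMINATED BY '\n';" your_database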
I am trying to use LOAD DATA INFILE to insert some records into a table. Unfortunately, it's not working.
Here are some details
If I use this instruction:
LOAD DATA INFILE 'file.txt'
INTO TABLE table_ex
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
(field1, field2, field3, field4);
It works using the MySQL client program and from a PHP application. In this case it looks for the file in the data directory of my MySQL installation.
Now if I try to execute the instructions using the LOCAL option, it only works if I use the mysql client, but not from PHP:
LOAD DATA LOCAL INFILE 'path/to/file/file.txt'
INTO TABLE table_ex
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
(field1, field2, field3, field4);
Again, it works with the MySQL client but not from the PHP application... I get this error:
LOAD DATA LOCAL INFILE forbidden in /path/to/my/application
I read that the problem is related to the compilation of PHP and using mysqlnd. I am using PHP 5.3.8 and MySQL 5.5.15, but I haven't found a solution.
Additional information: until now the only help I've found was an open PHP bug:
Check docs http://php.net/manual/en/ref.pdo-mysql.php.
Basically you need:
PDO::MYSQL_ATTR_LOCAL_INFILE => true
Set at instantiation.
Example:
$conn = new \PDO("mysql:host=$server;dbname=$database", $user, $password, array(
    PDO::MYSQL_ATTR_LOCAL_INFILE => true,
));
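Once the connection is created with that attribute, the statement can be issued on it; a minimal usage sketch (the table name and file path are placeholders):

$conn->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

// quote() adds the surrounding quotes and escapes the path for the SQL literal.
$file = $conn->quote('/tmp/import.csv');
$rows = $conn->exec("LOAD DATA LOCAL INFILE $file
                     INTO TABLE my_table
                     FIELDS TERMINATED BY ','
                     LINES TERMINATED BY '\\n'");
echo "Imported $rows rows\n";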
had this problem today and solved it by setting the following in php.ini
mysqli.allow_local_infile = On
I didn't get the exact error you got, but you need to ensure the following:
Enable by adding to your my.cnf:
[mysql]
local-infile=1
[mysqld]
local-infile=1
Tell the connection in PHP that it may use LOCAL INFILE
Using mysql:
mysql_connect($server, $user, $password, false, 128); // 128 (CLIENT_LOCAL_FILES) enables LOCAL INFILE
mysql_select_db($database);
Using mysqli:
$conn = mysqli_init();
mysqli_options($conn, MYSQLI_OPT_LOCAL_INFILE, true);
mysqli_real_connect($conn, $server, $user, $password, $database);
Give MySQL user FILE permission
When using LOCAL this shouldn't be necessary, though. LOCAL means the file is located on the client machine (where PHP is installed); otherwise MySQL looks on the server (where MySQL is installed).
GRANT FILE ON *.* TO 'mysql_user'@'localhost';
Easier work around is to use exec()
exec("mysql -u myuser -pMyPass -e \"USE mydb;TRUNCATE mytable;LOAD DATA INFILE '" . $file . "' IGNORE INTO TABLE mytable;\"; ");
2019+ relevant answer with a bit more background:
In PHP 7.2.16+ and 7.3.3+, the default ini configuration of mysqli.allow_local_infile, which controls this, changed from '1' to '0' (so it is now disabled by default).
This directive is only configurable via PHP_INI_SYSTEM so ini_set() will not work.
The only option is to add the following directive to your php.ini file, not forgetting to reload apache.
[MySQLi]
mysqli.allow_local_infile = On
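Since PHP_INI_SYSTEM directives can't be changed at runtime, a quick way to confirm the php.ini change actually took effect for the SAPI your code runs under (Apache/FPM can use a different ini file than the CLI) is to read the value back; a small sketch:

<?php
// ini_get() reports "1" when the directive is On and "" or "0" when it is Off.
if (ini_get('mysqli.allow_local_infile')) {
    echo "mysqli.allow_local_infile is enabled for this SAPI\n";
} else {
    echo "mysqli.allow_local_infile is disabled; edit the right php.ini and reload Apache/PHP-FPM\n";
}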
According to the MySQL manual MySQL must be compiled with --enable-local-infile. From a comment at that link:
You MUST have compiled PHP using the full path to MySQL, otherwise it
will use its internal handlers, which don't work with the "new" LOAD
DATA.
--with-mysql=/usr/local/mysql (assuming your MySQL is located here)
You MUST start the MySQL daemon with the option '--local-infile=1'
The solution which worked for me is below. Adding mysqli_options() was required on the second server where I set up the same script.
$mysqli = new mysqli("$db_server_name", "$db_user_name", "$db_password", "$database_name");
// force LOCAL_INFILE
mysqli_options($mysqli, MYSQLI_OPT_LOCAL_INFILE, true);
LOAD DATA LOCAL INFILE executes regardless of warnings. It works in the mysql client because the client executes the query anyway, ignoring warnings (though it prints them out afterwards). PHP refuses, though, because a warning will halt the script.
The easiest solution, which may work on some servers, is to remove LOCAL:
Original: LOAD DATA LOCAL INFILE
New (it should be): LOAD DATA INFILE
Strange, but I found this solution to work on my local machine with XAMPP, yet it did not work on a live server with CentOS, so I had to revert the code and add 'LOCAL' back.
I had exactly the same problem on an EC2 Ubuntu 12.04 LTS instance accessing MySQL on RDS: LOAD DATA LOCAL INFILE... works fine from a mysql console but not from PHP. Accidentally I found out that it worked fine on another, almost identical machine that used MariaDB (a binary-compatible drop-in replacement for MySQL).
So I replaced the MySQL clients with the ones from MariaDB and it worked.
If you use an Ubuntu server, you can try to install php5-mysqlnd :
sudo apt-get install php5-mysqlnd
To resolve the same problem in a PHP Symfony application, this flag needs to be enabled in the YAML config file. Here is an example:
# Doctrine Configuration
doctrine:
    dbal:
        driver: pdo_mysql
        options:
            !php/const PDO::MYSQL_ATTR_LOCAL_INFILE: true
        # Skip the rest
Also note how the PHP constant is referenced in the YAML file; this format is for Symfony 3.4. For older versions, check the Symfony docs.
Uncomment 'mysqli.allow_local_infile = On' in php.ini.
I get this error when I try to source a large SQL file (a big INSERT query).
mysql> source file.sql
ERROR 2006 (HY000): MySQL server has gone away
No connection. Trying to reconnect...
Connection id: 2
Current database: *** NONE ***
ERROR 2006 (HY000): MySQL server has gone away
No connection. Trying to reconnect...
Connection id: 3
Current database: *** NONE ***
Nothing in the table is updated. I've tried deleting and undeleting the table/database, as well as restarting MySQL. None of these things resolve the problem.
Here is my max-packet size:
+--------------------+---------+
| Variable_name | Value |
+--------------------+---------+
| max_allowed_packet | 1048576 |
+--------------------+---------+
Here is the file size:
$ ls -s file.sql
79512 file.sql
When I try the other method...
$ ./mysql -u root -p my_db < file.sql
Enter password:
ERROR 2006 (HY000) at line 1: MySQL server has gone away
max_allowed_packet=64M
Adding this line to the my.cnf file solved my problem.
This is useful when the columns have large values that cause the issue; you can find the explanation here.
On Windows this file is located at: "C:\ProgramData\MySQL\MySQL Server 5.6"
On Linux (Ubuntu): /etc/mysql
You can increase Max Allowed Packet
SET GLOBAL max_allowed_packet=1073741824;
http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html#sysvar_max_allowed_packet
The global update and the my.cnf settings didn't work for me for some reason. Passing the max_allowed_packet value directly to the client worked here:
mysql -h <hostname> -u username -p --max_allowed_packet=1073741824 <databasename> < db.sql
In general the error:
Error: 2006 (CR_SERVER_GONE_ERROR) - MySQL server has gone away
means that the client couldn't send a question to the server.
mysql import
In your specific case, while importing the database file via mysql, this most likely means that some of the queries in the SQL file are too large to import and couldn't be executed on the server, so the client fails on the first error it encounters.
So you have the following possibilities:
Add the force option (-f) for mysql to proceed and execute the rest of the queries.
This is useful if the database has some large queries related to cache which aren't relevant anyway.
Increase max_allowed_packet and wait_timeout in your server config (e.g. ~/.my.cnf).
Dump the database using the --skip-extended-insert option to break down the large queries, then import it again (see the sketch after this list).
Try applying the --max-allowed-packet option for mysql.
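For the --skip-extended-insert route, a sketch of the dump/re-import cycle (the database name and credentials are placeholders):

# One INSERT per row keeps every statement well below max_allowed_packet.
mysqldump -u root -p --skip-extended-insert my_db > my_db.sql

# Re-import, optionally raising the client-side packet limit as well.
mysql -u root -p --max-allowed-packet=256M my_db < my_db.sql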
Common reasons
In general this error could mean several things, such as:
a query to the server is incorrect or too large,
Solution: Increase max_allowed_packet variable.
Make sure the variable is under [mysqld] section, not [mysql].
Don't be afraid to use large numbers for testing (like 1G).
Don't forget to restart the MySQL/MariaDB server.
Double check the value was set properly by:
mysql -sve "SELECT ##max_allowed_packet" # or:
mysql -sve "SHOW VARIABLES LIKE 'max_allowed_packet'"
You got a timeout from the TCP/IP connection on the client side.
Solution: Increase wait_timeout variable.
You tried to run a query after the connection to the server has been closed.
Solution: A logic error in the application should be corrected.
Host name lookups failed (e.g. DNS server issue), or server has been started with --skip-networking option.
Another possibility is that your firewall blocks the MySQL port (e.g. 3306 by default).
The running thread has been killed, so retry again.
You have encountered a bug where the server died while executing the query.
A client running on a different host does not have the necessary privileges to connect.
And many more, so learn more at: B.5.2.9 MySQL server has gone away.
Debugging
Here are few expert-level debug ideas:
Check the logs, e.g.
sudo tail -f $(mysql -Nse "SELECT @@GLOBAL.log_error")
Test your connection via mysql, telnet or ping functions (e.g. mysql_ping in PHP).
Use tcpdump to sniff the MySQL communication (won't work for socket connection), e.g.:
sudo tcpdump -i lo0 -s 1500 -nl -w- port mysql | strings
On Linux, use strace. On BSD/Mac use dtrace/dtruss, e.g.
sudo dtruss -a -fn mysqld 2>&1
See: Getting started with DTracing MySQL
Learn more how to debug MySQL server or client at: 26.5 Debugging and Porting MySQL.
For reference, check the source code in the sql-common/client.c file, responsible for throwing the CR_SERVER_GONE_ERROR error for the client command.
MYSQL_TRACE(SEND_COMMAND, mysql, (command, header_length, arg_length, header, arg));
if (net_write_command(net,(uchar) command, header, header_length,
arg, arg_length))
{
set_mysql_error(mysql, CR_SERVER_GONE_ERROR, unknown_sqlstate);
goto end;
}
I solved the error ERROR 2006 (HY000) at line 97: MySQL server has gone away and successfully migrated a >5GB sql file by performing these two steps in order:
Created /etc/my.cnf as others have recommended, with the following contents:
[mysql]
connect_timeout = 43200
max_allowed_packet = 2048M
net_buffer_length = 512M
debug-info = TRUE
Appended the flags --force --wait --reconnect to the command (i.e. mysql -u root -p -h localhost my_db < file.sql --verbose --force --wait --reconnect).
Important Note: It was necessary to perform both steps, because if I didn't bother making the changes to /etc/my.cnf file as well as appending those flags, some of the tables were missing after the import.
System used: OSX El Capitan 10.11.5; mysql Ver 14.14 Distrib 5.5.51 for osx10.8 (i386)
Just in case, to check variables you can use
$> mysqladmin variables -u user -p
This will display the current variables, in this case max_allowed_packet, and as someone said in another answer you can set it temporarily with
mysql> SET GLOBAL max_allowed_packet=1072731894
In my case the cnf file was not taken into account and I don't know why, so the SET GLOBAL code really helped.
You can also log into the database as root (or a user with the SUPER privilege) and run
set global max_allowed_packet=64*1024*1024;
This doesn't require a MySQL restart. Note that you should still fix your my.cnf file as outlined in other answers:
[mysqld]
max_allowed_packet=64M
And confirm the change after you've restarted MySQL:
show variables like 'max_allowed_packet';
You can use the command-line as well, but that may require updating the start/stop scripts which may not survive system updates and patches.
As requested, I'm adding my own answer here. Glad to see it works!
The solution is increasing the values given the wait_timeout and the connect_timeout parameters in your options file, under the [mysqld] tag.
I had to recover a 400MB mysql backup and this worked for me (the values I've used below are a bit exaggerated, but you get the point):
[mysqld]
port=3306
explicit_defaults_for_timestamp = TRUE
connect_timeout = 1000000
net_write_timeout = 1000000
wait_timeout = 1000000
max_allowed_packet = 1024M
interactive_timeout = 1000000
net_buffer_length = 200M
net_read_timeout = 1000000
set GLOBAL delayed_insert_timeout=100000
I had the same problem, but changing max_allowed_packet in the my.ini/my.cnf file under [mysqld] did the trick.
Add a line:
max_allowed_packet=500M
Now restart the MySQL service once you are done.
A couple of things could be happening here:
Your INSERT is running long and the client is disconnecting. When it reconnects, it doesn't select a database, hence the error. One option here is to run your batch file from the command line and select the database in the arguments, like so:
$ mysql db_name < source.sql
Another is to run your commands via PHP or some other language. After each long-running statement, you can close and re-open the connection, ensuring that you're connected at the start of each query.
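A minimal sketch of that second suggestion using mysqli, closing and re-opening the connection around each statement (connection details are placeholders, and the one-statement-per-line assumption is only for illustration):

<?php
// Placeholder: assumes source.sql holds one complete statement per line.
$statements = file('source.sql', FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);

foreach ($statements as $sql) {
    // A fresh connection per statement means a server-side timeout on one
    // long-running query can't leave later queries without a selected database.
    $conn = new mysqli('localhost', 'db_user', 'db_pass', 'db_name');
    if ($conn->connect_error) {
        die('Connect failed: ' . $conn->connect_error);
    }
    $conn->query($sql) or error_log('Failed: ' . $conn->error);
    $conn->close();
}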
If you are on a Mac and installed MySQL through Homebrew like me, the following worked.
cp $(brew --prefix mysql)/support-files/my-default.cnf /usr/local/etc/my.cnf
Source: For homebrew mysql installs, where's my.cnf?
add max_allowed_packet=1073741824 to /usr/local/etc/my.cnf
mysql.server restart
I had the same problem in XAMPP.
Method 1: I changed max_allowed_packet in the D:\xampp\mysql\bin\my.ini file as shown below:
max_allowed_packet=500M
Finally, restart the MySQL service and you're done.
Method 2:
This is the easier way if you are using XAMPP. Open the XAMPP control panel and click the Config button in the MySQL section.
Now click on my.ini and it will open in the editor. Update max_allowed_packet to your required size.
Then restart the MySQL service: click Stop on the MySQL service, then click Start again. Wait a few minutes.
Then try to run your MySQL query again. Hopefully it will work.
I encountered this error when using MySQL Cluster. I don't know whether this question relates to cluster usage or not, but as the error is exactly the same, I'll give my solution here.
I was getting this error because the data nodes suddenly crashed. When the nodes crash you can still get a correct result using the command:
ndb_mgm -e 'ALL REPORT MEMORYUSAGE'
And mysqld also appeared to work correctly, so at first I could not understand what was wrong. About 5 minutes later, the ndb_mgm result showed no data nodes working, and then I realized the problem. So, try restarting all the data nodes; then the MySQL server comes back and everything is OK.
One thing is still weird to me: after losing the MySQL server for some queries, a command like SHOW TABLES would still return info like 33 rows in set (5.57 sec), but no table info was displayed.
This error message also occurs when you created the SCHEMA with a different COLLATION than the one which is used in the dump. So, if the dump contains
CREATE TABLE `mytab` (
..
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;
you should also reflect this in the SCHEMA collation:
CREATE SCHEMA myschema COLLATE utf8_unicode_ci;
I had been using utf8mb4_general_ci in the schema because my script came from a fresh MySQL 8 installation; loading the dump into an old 5.7 server crashed and drove me nearly crazy.
So, maybe this saves you some frustrating hours... :-)
(MacOS 10.3, mysql 5.7)
Add max_allowed_packet=64M to [mysqld]
[mysqld]
max_allowed_packet=64M
Restart the MySQL server.
If it's reconnecting and getting connection ID 2, the server has almost definitely just crashed.
Contact the server admin and get them to diagnose the problem. No non-malicious SQL should crash the server, and the output of mysqldump certainly should not.
It is probably the case that the server admin has made some big operational error, such as assigning buffer sizes greater than the architecture's address-space limits, or more than the virtual memory capacity. The MySQL error log will probably have some relevant information; they will be monitoring this if they are competent, anyway.
This is a rarer issue, but I have seen it when someone copied the entire /var/lib/mysql directory as a way of migrating their DB to another server. The reason it doesn't work is that the database was running and using log files. It sometimes doesn't work if there are logs in /var/log/mysql; the solution is to copy the /var/log/mysql files as well.
For Amazon RDS (my case), you can change the max_allowed_packet parameter value to any numeric value in bytes that makes sense for the biggest data in any insert you may have (e.g. if you have some 50 MB blob values in your inserts, set max_allowed_packet to 64M = 67108864), in a new or existing parameter group. Then apply that parameter group to your MySQL instance (this may require rebooting the instance).
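If you prefer scripting it, the same change can be made with the AWS CLI; a sketch with a placeholder parameter group name (the group still has to be attached to the instance):

# Set max_allowed_packet to 64 MB in an existing custom parameter group.
aws rds modify-db-parameter-group \
    --db-parameter-group-name my-mysql-params \
    --parameters "ParameterName=max_allowed_packet,ParameterValue=67108864,ApplyMethod=immediate"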
For Drupal 8 users looking for a solution to a DB import failure:
At the end of the SQL dump file there can be commands inserting data into the "webprofiler" table.
That is, I guess, some debug log table and not really important for the site to work, so all of it can be removed. I deleted all those inserts, including LOCK TABLES and UNLOCK TABLES (and everything in between). It's at the very bottom of the SQL file. The issue is described here:
https://www.drupal.org/project/devel/issues/2723437
But there is no solution for it besides truncating that table.
BTW, I tried all the solutions from the answers above and nothing else helped.
I tried all of the above solutions, and all of them failed.
I ended up using -h 127.0.0.1 instead of the default socket at /var/run/mysqld/mysqld.sock.
If you have tried all these solutions, especially increasing max_allowed_packet up to the maximum supported value of 1GB, and you are still seeing these errors, it might be that your server literally does not have enough free RAM available...
The solution: upgrade your server with more RAM and try again.
Note: I'm surprised this simple solution has not been mentioned after 8+ years of discussion on this thread... sometimes we developers tend to overthink things.
Eliminating the errors which triggered warnings was the final solution for me. I also changed max_allowed_packet, which helped with smaller files that had errors. Eliminating the errors also sped up the process incredibly.
If none of these answers solves the problem for you: I solved it by removing the tables and creating them again automatically, in this way:
When creating the backup, first back up the structure only and be sure to add:
DROP TABLE / VIEW / PROCEDURE / FUNCTION / EVENT
CREATE PROCEDURE / FUNCTION / EVENT
IF NOT EXISTS
AUTO_INCREMENT
Then just run this backup against your DB and it will remove and recreate the tables you need.
Then back up just the data, do the same, and it will work.
How about using the mysql client like this:
mysql -h <hostname> -u username -p <databasename> < file.sql
Alright, so after a week of trying all the different ideas and answers I have found, with no success, I am going to ask. This is for LOAD DATA LOCAL INFILE in MySQL. There have been many posts regarding this, but I am still unable to get it to work from the web browser, although I am able to run it from the mysql command prompt on the server, with the same user and password connecting to the same database. What I have tried so far:
MySQL version 5.5
Ubuntu 12.04
PHP version 5.3
In my.cnf:
local-infile=1
under [mysqld], [mysql] and [mysql-safe], and
loose-local-infile=1
under [client].
Restarted the MySQL server. At this point I was able to run the query from the command prompt, which I previously had not been able to do.
I have given 777 access to the directory from which the files are being pulled.
I have confirmed that php.ini has the local infile parameter enabled.
I have updated apparmor.
Actual Query:
LOAD DATA LOCAL INFILE '/var/www/ui/uploads/r_import.csv' INTO TABLE r_data FIELDS TERMINATED BY ',' ENCLOSED BY '"' LINES TERMINATED BY '\n' IGNORE 1 ROWS (first_name,last_name,apt,user_id)
The above query works from the mysql command line with no special arguments in the connection to the server.
If anyone has any more ideas on this, I would be happy to try anything.
Thanks in advance.
<?php
include 'includes/header.php';

if ($_FILES['file']['type'] != "application/vnd.ms-excel") {
    die("This is not a CSV file.");
}
elseif (is_uploaded_file($_FILES['file']['tmp_name'])) {
    $filename = $_FILES['file']['tmp_name'];
    $name     = $_FILES['file']['name'];
    copy($filename, 'uploads/'.$name) or die("Could not copy file!");
    $file_to_import = '/var/www/ui/uploads/'.$name;
    $query = 'LOAD DATA LOCAL INFILE \''.$file_to_import.'\' INTO TABLE r_data FIELDS TERMINATED BY \',\' ENCLOSED BY \'"\' LINES TERMINATED BY \'\n\' IGNORE 1 ROWS (first_name,last_name,apt,user_id)';
    echo $query;
    $result = mysqli_query($link, $query) or die(mysqli_error($link));
}
else {
    die("You shouldn't be here");
}
?>
$link = mysqli_init();
mysqli_options($link, MYSQLI_OPT_LOCAL_INFILE, true);
mysqli_real_connect($link, 'localhost', $username, $password, $database);
This connection code is what worked for me. I found it in the last comment in the link given by developerwjk.
Your problem is the use of the LOCAL keyword. When you use LOCAL the server expects the MySQL client to read the file and send it. This applies when the client software is running on a remote machine, or when you're running the MySQL client on the server itself. (This is why you can run your query from the server command line).
If you're running PHP there is no client software involved. PHP makes calls directly to the server, so the LOCAL keyword is invalid in this context.
To use LOAD DATA INFILE from PHP you must make sure that the file is placed in a location in the server filesystem that the MySQL server has read access to, and that the full path to that file is passed as part of your query. Don't use the LOCAL keyword.
If you're trying to load a file from a remote client you'll need to upload the file to the file system first, then execute your query.
Note this sentence from the MySQL manual: If LOCAL is specified, the file is read by the client program on the client host and sent to the server.
Reference: http://dev.mysql.com/doc/refman/5.5/en/load-data.html
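A minimal sketch of that approach, assuming the file has already been moved to a location the MySQL server can read (the path is a placeholder; check secure_file_priv as discussed earlier in the thread, and note the path is on the database server's filesystem, not the web server's):

<?php
// Placeholder connection details.
$link = mysqli_connect('localhost', 'db_user', 'db_pass', 'db_name')
    or die('Connect failed: ' . mysqli_connect_error());

// Server-side path: the MySQL server process must be able to read this file.
$serverPath = '/var/lib/mysql-files/r_import.csv';
$sql = "LOAD DATA INFILE '" . mysqli_real_escape_string($link, $serverPath) . "'
        INTO TABLE r_data
        FIELDS TERMINATED BY ',' ENCLOSED BY '\"'
        LINES TERMINATED BY '\\n'
        IGNORE 1 ROWS
        (first_name, last_name, apt, user_id)";

mysqli_query($link, $sql) or die(mysqli_error($link));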