I'm using a PHP script that utilizes mysqldump to back up my SQL databases remotely: http://www.dagondesign.com/files/backup_dbs.txt
I tried to add --lock-tables=false since I'm using MyISAM tables, but I still got an error.
exec( "$MYSQL_PATH/mysqldump --lock-tables=false $db_auth --opt $db 2>&1 >$BACKUP_TEMP/$db.sql", $output, $res);
error:
mysqldump: Couldn't execute 'show fields from `advisory_info`': Can't create/write to file 'E:\tmp\#sql_59c_0.MYD' (Errcode: 17) (1)
Someone told me this file was the lock file itself, and I was able to find it on the server I wanted to back up.
So is this the lock file? Does a remote dump lock the database regardless of the --lock-tables=false option? Or should the file not be there at all, since a lot of people work on the server and someone else might have created it?
It's likely --lock-tables=false isn't doing what you think it's doing. Since you're passing --lock-tables, mysqldump probably assumes you do want to lock the tables (even though this is the default), so it locks them. On Linux, flags are generally not disabled by appending something like =false or =0, but by a corresponding --skip-X or --no-X option.
You might want to try --skip-opt:
--skip-opt Disable --opt. Disables --add-drop-table, --add-locks,
--lock-tables, --set-charset, and --disable-keys.
Because --opt is enabled by default, you can pass --skip-opt and then add back any individual flags you want.
On Windows 7 using Wamp, the option is --skip-lock-tables
Taken from this answer.
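For example, the exec() call from the question could be adapted like this (just a sketch reusing the question's variables; note that --skip-lock-tables comes after --opt, since later options override earlier ones):
exec( "$MYSQL_PATH/mysqldump --opt --skip-lock-tables $db_auth $db 2>&1 >$BACKUP_TEMP/$db.sql", $output, $res);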
Related
I am trying to create periodic backups (poor man's cron) of my database using mysqldump with the exec() function. I am using XAMPP/PHP7 on macOS.
$command = "$mysqldump_location -u$db_user -h$db_host -p$db_password $db_name > $backup_file_location";
exec($command);
When I run the PHP script, I get no SQL dump in the path mentioned in $backup_file_location but if I execute the same $command string on the terminal directly I get the desired SQL file in the desired location.
I am unable to understand what could be the problem here. Also open to suggestions on better ways to dump the entire DB.
Edit 1:
The value of $mysqldump_location is /Applications/XAMPP/xamppfiles/bin/mysqldump
The value of $backup_file_location is /Applications/XAMPP/xamppfiles/htdocs/app5/data/sqldumps/sql_data.sql
/app5/ is the folder in which I am developing my app.
Edit 2:
The possible-duplicate suggestion does not apply, since the issue here was not how to dump SQL backups. The key issue was that the mysqldump backup worked from the terminal but not through PHP's exec() function.
The resolution of the issue, from above comments, was that the PHP request executes in XAMPP as a user that has limited privileges, and the mysqldump process inherits those privileges.
Checking the exit status of the process run by exec() confirmed that mysqldump exited with a nonzero exit status, indicating it failed for some reason.
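For reference, a minimal sketch of that kind of check, reusing the variable names from the question (placing 2>&1 before the file redirect routes mysqldump's stderr back into exec()'s $output while the dump itself still goes to the file):
$command = "$mysqldump_location -u$db_user -h$db_host -p$db_password $db_name 2>&1 > $backup_file_location";
exec($command, $output, $status);
if ($status !== 0) {
    // A nonzero exit status means mysqldump failed; $output holds its error text
    error_log("mysqldump exited with status $status: " . implode("\n", $output));
}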
Opening write privileges to 777 on the directory where the mysqldump process tries to write resolved the error.
It should also be adequate to figure out the specific uid and gid of the Apache processes (check the User and Group values in the Apache config file, e.g. xampp-home/apache/conf/httpd.conf) and make the output directory writable by that uid or gid.
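For example, on XAMPP for macOS the Apache processes typically run as the daemon user (confirm against the User and Group values in httpd.conf); assuming that, something along these lines is tighter than opening the directory to 777:
sudo chown -R daemon:daemon /Applications/XAMPP/xamppfiles/htdocs/app5/data/sqldumps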
I have a 28 MB SQL file I need to import into MySQL.
First, I used XAMPP to import it and it failed, so I changed max_file_uploads and post_size (something like that) in php.ini-development and php.ini-production to 40 MB, but it still showed "max: 2048kb" and the import failed again.
From research, I learned I could import using mysql.exe, so I opened mysql.exe and typed the command line (MS-DOS) below:
-u root -p dbname < C:\xxx\xxx\sqlfile.sql
but it still failed, again and again.
What is the problem? XAMPP, or my MySQL settings?
Try this:
mysql -uroot -p --max_allowed_packet=24M dbname
Once you log into the database:
source C:\xxx\xxx\sqlfile.sql
I think that you should be able to load your file.
How large is your file? You might as well do it from a console:
mysql -u##USER## -p ##YOUR_DATABASE## < ##PATH_TO_YOUR_FILE##
Do that without launching the mysql.exe client first: just "cd" into the directory and run the command.
It will ask for your password and start importing right away. Don't forget to create the database or delete all tables if it's already there.
I always found this approach quick, painless, and easier than rolling around with PHP directives, phpMyAdmin configuration, or external applications. It's right there, built into the MySQL core.
You should increase max_allowed_packet in MySQL.
Just execute this command before importing your file:
set global max_allowed_packet=1000000000;
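Note that SET GLOBAL only affects connections opened after the change, so set it first and then import in a fresh session. One possible sequence (keeping the placeholder path from the question):
mysql -u root -p -e "SET GLOBAL max_allowed_packet=1000000000;"
mysql -u root -p dbname < C:\xxx\xxx\sqlfile.sql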
I also ran into a similar problem. At first I concluded that a large SQL file could never be imported this way, because it always gave a timeout error.
Then I found a solution.
There is a piece of software called HeidiSQL.
Follow the steps below:
1) Download the software.
2) Install the software.
3) Create a new session in HeidiSQL and open the session.
4) Go to Tools -> Load SQL File -> Browse.
That's it. This solution worked best for me.
Check the link here.
I found the only solution was to log in to MySQL from the command line and use the 'source' command:
1) cd to the directory containing your SQL file for import, then log into MySQL:
#> mysql -u YOURUSERNAME -p -h localhost
2) use MySQL commands to import the data:
mysql> use NAMEOFYOURDB;
mysql> source NAMEOFFILETOIMPORT.sql
This also feeds back info about progress to your terminal, which is reassuring.
I am trying to execute a few PostgreSQL DB commands from a web interface.
I use proc_open() to pipe to the Windows command prompt.
Because psql (and all the other Postgres commands) do not accept the password as an option, I must send the password to the write stream.
The code below causes the browser to hang. Either the resource is not being created, or the password is not being piped properly. Any suggestions are welcome at this point.
$cmd = '""C:\\Program files\\PostgreSQL\\9.0\\bin\\psql.exe" --host localhost --port 5432 -U postgres --dbname $database_name --command "$query""';
$p=proc_open($cmd,
array(array("pipe","r"), array("pipe","w"), array("pipe","w")),
$pipes);
if (is_resource($p)){
fwrite($pipes[0],"mypassword");
fclose($pipes[0]);
proc_terminate($p);
proc_close($p);
}
[You'll notice the crazy double-double-quoting in the command -- this is apparently needed on Windows for some reason.]
Work-arounds to this problem are welcome:
I previously tried using system() and exec(), but gave up since they don't handle interactive prompts. Is there a better option in PHP for interactive use?
pg_query() is the main function for interacting with the Postgres DB, but pg_dump and pg_restore operations are not supported through it. Is there another way to back up and restore binary Postgres .backup files that can be accomplished with PHP?
Don't mess around with the password prompt; it's better to put an entry into %APPDATA%\postgresql\pgpass.conf. The format is:
hostname:port:database:username:password
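For example, an entry matching the question's setup might look like this (the password value is the placeholder from the question; * matches any database):
localhost:5432:*:postgres:mypassword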
Make sure to pick the %APPDATA% of the user running the webserver process.
If you're really set on interacting with the prompt, you could try the Expect library, which people often use for such tasks... disclaimer: I've never used it on Windows and have no idea how well it works there, or if it really is necessary. Maybe your fwrite is just missing a terminating newline.
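That is, something like:
fwrite($pipes[0], "mypassword\n"); // psql reads the password line by line, so the trailing newline matters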
As @c-ramseyer suggested, messing around with simulating an interactive prompt via proc_open() was a non-starter. PostgreSQL offers two methods to get around providing the password through the interactive prompt. Method (1) is to provide it via environment variables. Method (2) is to create a pgpass.conf file in the DB user's %APPDATA% directory, as suggested by the other answer. (To find that directory, run echo %APPDATA% from the Windows command prompt.) See the PostgreSQL docs for how to write this one-line conf file. Neither of these methods worked for me, for reasons I still don't understand.
To solve the problem, I had to modify the pg_hba.conf file (the PostgreSQL Client Authentication Configuration File) to disable authentication. That file is located in the postgresql/data directory. I commented out the 2 lines of default settings at the bottom and replaced them with
#TYPE  DATABASE  USER  CIDR-ADDRESS  METHOD
host   all       all   127.0.0.1/32  trust
host   all       all   ::1/128       trust
Now when I call Postgres from PHP I include the --no-password option, and the sucker works. Note that this solution is not secure; it only makes sense in my case because it is being used for an internal company application, with machines running offline. This method should not be used for a production site, or your DB will get hacked. Here's the PHP code.
$commande_restore = '""'.$postgres_bin_directory.'pg_restore" --host 127.0.0.1 --port 5432 -U postgres --no-password -d '.$database_name.' '.$restore_file.'"';
$this->execute($commande_restore);
function execute($cmd, $env = null){
    // Spawn the process with pipes for stdin, stdout and stderr.
    // Note: the original passed "$env = null" here, which always sent null
    // and silently ignored the function's $env parameter; pass $env instead.
    $proc = proc_open($cmd,
        array(0 => array('pipe', 'r'), 1 => array('pipe', 'w'), 2 => array('pipe', 'w')),
        $pipes, null, $env);
    //fwrite($pipes[0], $stdin); // add to argument if you want to pass stdin
    fclose($pipes[0]);
    $stdout = stream_get_contents($pipes[1]);
    fclose($pipes[1]);
    $stderr = stream_get_contents($pipes[2]);
    fclose($pipes[2]);
    $return = proc_close($proc);
    return array('stdout' => $stdout, 'stderr' => $stderr, 'return' => $return);
}
It took me close to 2 weeks to solve this, so I hope it helps someone.
Is there any way I can import a huge database into my local server?
The database is 1.9 GB, and importing it into my local server is causing me a lot of problems.
I have tried importing an SQL dump without success, and I have also tried changing the php.ini settings.
Please let me know if there is any other way of getting this done.
I have used BigDump and also SQL Dump Splitter, but I am still not able to find a solution.
mysql -u #username# -p #database# < #dump_file#
Navigate to your MySQL bin directory and log in to MySQL.
Select the database.
Use the source command to import the data.
[user@localhost] mysql -u root -h localhost   // assuming no password
mysql> use mydb;                              // mydb is the database name
mysql> source /home/user/datadump.sql
Restoring a backup of that size is going to take a long time. There's some great advice here: http://vitobotta.com/smarter-faster-backups-restores-mysql-databases-with-mysqldump/ which essentially gives you some additional options you can use to speed up both the initial backup and the subsequent restore.
Update: I finally got this thing working, but I'm still not sure what the problem was. I am using a WAMP server that I access through a networked folder.
The problem that still exists is that to execute the mysqldump, I have to access the PHP file from the actual machine hosting the WAMP server.
End of update
I am running a WAMP server and trying to use mysqldump to back up a MySQL database I have. The following is the PHP code I am using to run mysqldump.
exec("mysqldump backup -u$user -p$pass > $sql_file");
When I run the script, the page just loads indefinitely and the backup is not created.
A blank file is created, so I know something is happening.
Extra info:
* exec() is not disabled
* PHP is not running in safe mode
Any ideas??
Win XP, WAMP, MYSQL 5.0.51b
mysqldump is likely to exceed the maximum time PHP is allowed to run on your system. Try running the command in cmd, or increase max_execution_time in your php.ini.
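Alternatively, the script itself can lift the limit before calling exec(); a minimal sketch, reusing the exec() line from the question:
set_time_limit(0); // 0 removes PHP's script execution time limit
exec("mysqldump backup -u$user -p$pass > $sql_file");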
Are you sure $pass is defined and doesn't have a space character at the start?
If it wasn't, mysqldump would be waiting for command line entry of the password.
I had the same thing happen a while back. A co-worker pointed me to the MySQL GUI tools and I have been making backups with that. The Query Browser that comes with it is nice, too.
MySQL GUI tools
It might help to look at the stderr output from mysqldump:
$cmd = "mysqldump backup -u$user -p$pass 2>&1 > $sql_file";
exec($cmd, $output, $return);
if ($return != 0) { //0 is ok
die('Error: ' . implode("\r\n", $output));
}
Also you should use escapeshellarg() if $user or $pass are user-supplied.
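A sketch of the escaped version:
$cmd = "mysqldump backup -u" . escapeshellarg($user)
     . " -p" . escapeshellarg($pass)
     . " 2>&1 > " . escapeshellarg($sql_file);
exec($cmd, $output, $return);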
I've also struggled with using the mysqldump utility. A few things to check/try based on my experience:
Is your server set up to allow programs to run programs with an exec command? (My webhost's server won't let me.) Test with a different command.
Is the mysqldump utility installed? Check with whereis mysqldump.
Try adding the optimization argument --opt.