Restore a PostgreSQL dump (.sql file) without the command line? - php

Scenario:
I have built a PHP framework that uses a PostgreSQL database. The framework ships with a .sql file, which is a dump of the default tables and data that the framework requires.
I want to be able to run the SQL file from the client (PHP), rather than the command line, in order to import the data. I have come across server setups where accessing the command line is not a possibility, and/or where running certain commands isn't possible (pg_restore may not be accessible to the PHP user, for example).
I have tried simply splitting up the .sql file and running it as a query using the pgsql PHP extension; however, because the dump file uses COPY commands to create the data, this doesn't work. It appears that because COPY is used, the .sql file expects to be imported using the pg_restore command (unless I am missing something?).
Question:
So the question is, how can I restore the .sql dump, or create the .sql dump in a way that it can be restored via the client (PHP) rather than the command line?
For example:
<?php pg_query(file_get_contents($sqlFile)); ?>
Rather than:
$ pg_restore -d dbname filename
Example of the error:
I am using pgAdmin III to generate the .sql dump, using the "plain" setting. In the .sql file, the data that will be inserted into a table looks like this:
COPY core_classes_models_api (id, label, class, namespace, description, "extensionName", "readAccess") FROM stdin;
1 data Data \\Core\\Components\\Apis\\Data The data api Core 310
\.
If I then run the above SQL within a pgAdmin III query window, I get the following error:
ERROR: syntax error at or near "1"
LINE 708: 1 data Data \\Core\\Components\\Apis\\Data The data api Core...

This was a bit tricky to find, but after some investigation it appears that pg_dump's "plain" format (a plain-text SQL file) uses COPY commands rather than INSERT commands by default.
Looking at the pg_dump documentation, I found the --inserts option. Passing this option makes the dump emit INSERT commands where it would normally emit COPY commands.
The specification does state:
This will make restoration very slow; it is mainly useful for making dumps that can be loaded into non-PostgreSQL databases. However, since this option generates a separate command for each row, an error in reloading a row causes only that row to be lost rather than the entire table contents.
This works for my purposes, however, and hopefully will help others with the same problem!
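For anyone scripting this: once the dump is generated with pg_dump --inserts (or pgAdmin's equivalent option), the whole file can be executed through the pgsql extension in a single call, since pg_query() accepts multiple semicolon-separated statements. A minimal sketch; the connection string and file path are placeholders:
<?php
// assumes the dump was generated with `pg_dump --inserts dbname`,
// so it contains INSERT statements rather than COPY blocks
$conn = pg_connect('host=localhost dbname=mydb user=me password=secret');
$sql  = file_get_contents('/path/to/dump.sql');

// pg_query() executes multiple semicolon-separated statements at once
if (pg_query($conn, $sql) === false) {
    echo pg_last_error($conn);
}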

Related

Automated or regular backup of mysql data

I want to take regular backups of some tables in my mysql database using <insert favorite PHP framework here> / plain php / my second favorite language. I want it to be automated so that the backup can be restored later on in case something goes wrong.
I tried executing a query and saving the results to a file, and ended up with code that looks somewhat like this:
$sql = 'SELECT * FROM my_table ORDER BY id DESC';
$result = mysqli_query( $connect, $sql );
if( mysqli_num_rows( $result ) > 0 ){
    $output = fopen( '/tmp/dumpfile.csv', 'w+' );
    /* loop through the recordset and append each row to the file */
    while( $row = mysqli_fetch_array( $result ) ) {
        fputcsv( $output, $row, ',', '"' );
    }
    fclose( $output );
}
I set up a cron job on my local machine to hit the web page with this code. I also tried writing a cron job on the server to run the script from the CLI. But it's causing all sorts of problems, including:
Sometimes the data is not consistent
The file appears to be truncated
The output cannot be imported into another database
Sometimes the script times out
I have also heard about mysqldump. I tried to run it with exec but it produces an error.
How can I solve this?
CSV and SELECT INTO OUTFILE
http://dev.mysql.com/doc/refman/5.7/en/select-into.html
SELECT ... INTO OUTFILE writes the selected rows to a file. Column and
line terminators can be specified to produce a specific output format.
Here is a complete example:
SELECT * FROM my_table
INTO OUTFILE '/tmp/my_table.csv'
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n';
The file is saved on the database server, and the chosen path needs to be writable by the MySQL server process. Though this query can be executed through PHP and a web request, it is best executed through the mysql console.
The data that's exported in this manner can be imported into another database using LOAD DATA INFILE, as sketched below.
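A hedged sketch of the matching import; the file path, table name, and terminators mirror the export example above:
LOAD DATA INFILE '/tmp/my_table.csv'
INTO TABLE my_table
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n';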
While this method is superior to iterating through a result set and saving it to a file row by row, it's not as good as using....
mysqldump
mysqldump is superior to SELECT INTO OUTFILE in many ways; producing CSV is just one of the many things this command can do.
The mysqldump client utility performs logical backups, producing a set
of SQL statements that can be executed to reproduce the original
database object definitions and table data. It dumps one or more MySQL
databases for backup or transfer to another SQL server. The mysqldump
command can also generate output in CSV, other delimited text, or XML
format.
Ideally, mysqldump should be invoked from your shell. It is possible to use exec() in PHP to run it, but since producing the dump might take a long time depending on the amount of data, and PHP scripts usually run for only 30 seconds, you would need to run it as a background process, for example:
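A minimal sketch; the database name and paths are placeholders, and credentials are assumed to come from an option file such as ~/.my.cnf:
<?php
// redirecting output and appending & lets exec() return immediately
// instead of blocking until mysqldump finishes;
// --single-transaction gives a consistent snapshot for InnoDB tables
exec('mysqldump --single-transaction mydb > /tmp/mydb.sql 2> /tmp/mydb.err &');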
mysqldump isn't without its fair share of problems.
It is not intended as a fast or scalable solution for backing up
substantial amounts of data. With large data sizes, even if the backup
step takes a reasonable time, restoring the data can be very slow
because replaying the SQL statements involves disk I/O for insertion,
index creation, and so on.
For a classic example, see this question: Server crash on MySQL backup using python, where one mysqldump run seems to start before the earlier one has finished, rendering the website completely unresponsive.
MySQL replication
Replication enables data from one MySQL database server (the master)
to be copied to one or more MySQL database servers (the slaves).
Replication is asynchronous by default; slaves do not need to be
connected permanently to receive updates from the master. Depending on
the configuration, you can replicate all databases, selected
databases, or even selected tables within a database.
Thus replication operates differently from SELECT INTO OUTFILE or mysqldump. It is ideal for keeping the data in the local copy almost up to date (I would have said perfectly in sync, but there is something called slave lag). On the other hand, if you use a scheduled task to run mysqldump once every 24 hours, imagine what can happen if the server crashes after 23 hours.
Each time you run mysqldump you produce a large amount of data; keep doing it regularly and you will find your hard disk filling up, or your file storage bills hitting the roof. With replication, only the changes are passed on to the server (using the so-called binlog).
XtraBackup
An alternative to replication is to use Percona XtraBackup.
Percona XtraBackup is an open-source hot backup utility for MySQL-based servers that doesn't lock your database during the backup.
Though made by Percona, it's compatible with MySQL and MariaDB. It can do incremental backups, the lack of which is the biggest limitation of mysqldump; a sketch follows.
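A rough sketch of that workflow; the target directories are placeholders and the flag names follow recent Percona XtraBackup releases:
# take a full base backup first
xtrabackup --backup --target-dir=/data/backups/base
# then take an incremental backup holding only the changes since the base
xtrabackup --backup --target-dir=/data/backups/inc1 --incremental-basedir=/data/backups/base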
I suggest taking the database backup with the command-line utility via a script file instead of a PHP script.
Make a ~/.my.cnf file to store the configuration
Create a file called .my.cnf in the user's home directory holding the default DB username, password, and host, so the script will take the username, password, and hostname from this file:
[client]
user = <db_user_name>
password = <db_password>
host = <db_host>
Create a shell script called backup.sh
#!/bin/sh
#
# script to take a backup every day
# change to your backup directory
cd /path_of_your_directory
# dump the application database (credentials come from ~/.my.cnf)
mysqldump <your_database_name> > tmp_db.sql
# compress it into a dated zip file
zip app_database-$(date +%Y-%m-%d).sql.zip tmp_db.sql
# remove the plain sql file
rm -f tmp_db.sql
Give the script execute permission
chmod +x backup.sh
Set up a cron job (this example runs daily at 02:00)
0 2 * * * sh /<script_path>/backup.sh >/dev/null 2>&1
That's all. Good luck!

How to use PHP to access MySQL shell?

Currently I have a shell script within my application that, depending on the argument you pass it, logs you into either the live or the development MySQL database shell. It works fine; no problem there.
The issue/challenge I am having is that if my MySQL credentials (host/port) change for either the live or the development database, I will have to edit this shell script manually, updating it with the new arguments. I have done this before, but I never want to have to do it again.
What I would like is a PHP script (mysql.sh.php) that, when executed, logs you into either the live or the development database shell depending on the argument passed to it. Where it differs from the shell script is that it pulls the current credentials (host and port included) from a PHP configuration class and passes those as arguments to a shell command that logs into the respective database.
The code below illustrates what I am attempting.
#!/usr/bin/php
<?php
include ('common.php');
//Pull info from PHP class right here
exec("mysql --host={$myhostnotyours} --user={$myusernotyours} -p{$mypassnotyours} {$thedatabase}");
What I expect or would like is
mysql>
However, the command just hangs and I am not presented with the MySQL shell.
I would like to know if there is a way to accomplish what I am trying.
Thanks!
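For what it's worth, exec() waits for the command to finish and captures its stdout, which is why the call appears to hang instead of handing you the shell. One approach that does yield a genuinely interactive shell is to replace the PHP process with mysql via pcntl_exec(). A minimal sketch, assuming the pcntl extension is available, the script runs from a terminal (CLI), and mysql lives at /usr/bin/mysql; the credential variables stand in for whatever the configuration class provides:
#!/usr/bin/php
<?php
include ('common.php');
// pull host/user/password/database from the PHP configuration class here

// pcntl_exec() replaces the current PHP process with mysql, so the
// resulting shell inherits the terminal and is fully interactive
pcntl_exec('/usr/bin/mysql', [
    "--host={$myhostnotyours}",
    "--user={$myusernotyours}",
    "--password={$mypassnotyours}",
    $thedatabase,
]);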

How can I use PHP to export a MySQL table?

I am trying to export just the rows of a MySQL table, without the table information, to an XML file. I read that the mysqldump command will get the job done, but I can't manage to get the correct syntax. Can someone post example code for the mysqldump command? Thank you.
$command="mysqldump --xml ";
Try the script on this page: http://www.chriswashington.net/tutorials/export-mysql-database-table-data-to-xml-file
By any chance, you are not trying to run that command inside mysql_query, are you? It won't work that way; mysqldump is a command-line utility.
To run it from PHP you would need to use the system() function, documentation: http://php.net/manual/en/function.system.php
If you are on a shared host with PHP in safe mode, or the system() function is explicitly disabled in php.ini, then you will not be able to do this.
In that case you would need to read the data from your table using a SELECT query, iterate over all the rows, and put them into an XML file using XMLWriter or DOMDocument, as sketched below.
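A hedged sketch of that fallback using XMLWriter; the credentials, table name, element names, and output path are all placeholders:
<?php
// export the rows of one table to XML without shell access
$db = new mysqli('localhost', 'user', 'pass', 'mydb');

$xml = new XMLWriter();
$xml->openURI('/tmp/my_table.xml');
$xml->startDocument('1.0', 'UTF-8');
$xml->startElement('rows');

$result = $db->query('SELECT * FROM my_table');
while ($row = $result->fetch_assoc()) {
    $xml->startElement('row');
    foreach ($row as $col => $val) {
        // one element per column, named after the column
        $xml->writeElement($col, (string) $val);
    }
    $xml->endElement(); // row
}

$xml->endElement(); // rows
$xml->endDocument();
$xml->flush();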

Import postgresql database dump

I have the following workflow, which I'd like to automate with a short PHP script.
Download a gzipped file (containing a DB dump) from a specific URL (potentially FTP).
Decompress the file to plain text.
Import the text into the local PostgreSQL instance with psql (using the command line).
Now I have 2 questions:
What is the best way to pass the gunzipped file to pg_query?
I get an error when PHP reaches this line:
COPY rf (datum, id_emailu_op, recency, frequency) FROM stdin;
2011-08-29 8484 3 1
Can the stdin be a problem?
Thank you all!
A pg_dump file is meant to be imported via psql. You can load the file contents in, and even decompress them with PHP, then open a pipe to psql and write the data out to that process (assuming you are on a Unix machine). When psql is executed this way, as far as it's concerned the data your PHP script writes is coming in via stdin.
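A minimal sketch of that pipe using proc_open(); the connection options, file paths, and log locations are placeholders:
<?php
// stream a gzipped dump straight into psql's stdin
$spec = [
    0 => ['pipe', 'r'],                      // psql's stdin: we write to it
    1 => ['file', '/tmp/restore.log', 'w'],  // capture psql's output
    2 => ['file', '/tmp/restore.err', 'w'],  // capture errors
];
$proc = proc_open('psql -U postgres -d mydb', $spec, $pipes);

if (is_resource($proc)) {
    $gz = gzopen('/tmp/dump.sql.gz', 'rb');
    while (!gzeof($gz)) {
        // decompress on the fly and feed psql in chunks
        fwrite($pipes[0], gzread($gz, 8192));
    }
    gzclose($gz);
    fclose($pipes[0]); // closing stdin signals EOF, letting psql finish
    proc_close($proc);
}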

Call URL with wget and return an ERRORLEVEL depending on URL's contents

A client has a Windows-based in-house server on which they edit the contents of a CMS. The data is synchronized nightly with the live web server. This is a workaround for a slow Internet connection.
There are two things to be synchronized: new files (already sorted) and a MySQL database. To do this, I am writing a script that exports the database into a dump file using mysqldump and uploads the dump.
The upload process is done using a 3rd party tool named ScriptFTP, an FTP automation tool.
I then need to run a PHP-based import script on the target server. Depending on this script's return value, the ScriptFTP operation goes on and some directories are renamed.
I need an external tool for this, as ScriptFTP only supports FTP calls. I was thinking about the Windows version of wget.
Within ScriptFTP, I can execute any batch or exe file, but I can only act on the errorlevel resulting from the call, not on the stdout output. This means I need to return errorlevel 1 if the PHP import operation goes wrong and errorlevel 0 if it goes well. Additionally, I obviously need to return a positive errorlevel if the connection to the import script could not be made at all.
I have total control over the importing PHP script, and can decide what it does on error: Output an error message, return a header, whatever.
How would you go about running wget (or any other tool to kick off the server side import) and returning a certain error level depending on what the PHP script returns?
My best bet right now is building a batch file that executes the wget command, stores the result in a file, and returns errorlevel 0 or 1 depending on the file's contents. But I don't really know how to match a file's contents in batch programming.
You can do the following in PowerShell:
# call GNU wget.exe explicitly: in PowerShell 3+, plain `wget` is an
# alias for Invoke-WebRequest and does not accept these flags
$a = (wget.exe --quiet -O - www.google.com) -join "`n"
# CompareTo() returns 0 when the page body exactly equals the magic
# string and non-zero otherwise, which maps directly onto the errorlevel
$rc = $a.CompareTo("Your magic string")
exit $rc
