I want to export data from MySQL to an output file quickly. As it turns out, the INTO OUTFILE syntax seems miles ahead of any kind of processing I can do in PHP performance-wise. However, this approach seems to be riddled with problems:
The output file can only be created in /tmp or /var/lib/mysql/ (mysqld user needs write permissions)
The output file owner and group will be set to the mysqld user
The tmp dir is pretty much a dumpster fire because of settings like systemd's PrivateTmp (the file mysqld writes to /tmp may not even be visible to other processes).
How would I manage this in a way that isn't a nightmare in terms of managing the user accounts / file permissions?
I need to access the output file from my PHP script, and I would also like to output this file to the application directory if possible. Of course, if there is another way to export my query results efficiently, I would like to know of it.
Currently I am thinking of the following approaches:
Add the mysqld user to the "www-data" group so it can access application files and write to the application dir; other www-data users will hopefully be able to access the output files.
I could not get the access rights working for the mysql user. Having scripts add the user to the www-data group, or other such measures, would also increase the application deployment overhead.
I decided to go with the program-piping method using the Symfony Process component.
mysql -u <username> -p<password> <database> -e "<query>" | sed 's/\t/","/g;s/^/"/;s/$/"/;' > /output/path/here.csv
Note that the CSV formatting might break if you have values that contain reserved characters like \, ", \n etc. in your columns. You will also need to escape these characters (" to \" for example) and possibly do something about MySQL outputting null values as the string NULL.
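For reference, here is a minimal sketch of how that pipeline might be launched through the Symfony Process component. The query, paths and credentials are placeholders, and Process::fromShellCommandline() assumes Symfony Process 4.2 or newer:

<?php
// Sketch only: build the mysql | sed pipeline and run it via Symfony Process.
// Assumes symfony/process is installed via Composer; all names are placeholders.
require __DIR__ . '/vendor/autoload.php';

use Symfony\Component\Process\Process;

$query   = 'SELECT id, name, created_at FROM my_table';
$outFile = __DIR__ . '/var/export/my_table.csv';

$cmd = 'mysql -u exportuser -psecret my_database -e ' . escapeshellarg($query)
     . ' | sed \'s/\t/","/g;s/^/"/;s/$/"/;\''
     . ' > ' . escapeshellarg($outFile);

// fromShellCommandline() keeps the pipe and redirection intact.
$process = Process::fromShellCommandline($cmd);
$process->setTimeout(300);     // exports can take a while
$process->run();

if (!$process->isSuccessful()) {
    throw new RuntimeException($process->getErrorOutput());
}

The heavy lifting stays in the mysql client and sed; the PHP side only supervises the process and checks its exit status.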
Related
I want to take regular backups of some tables in my mysql database using <insert favorite PHP framework here> / plain php / my second favorite language. I want it to be automated so that the backup can be restored later on in case something goes wrong.
I tried executing a query and saving the results to a file. Ended up with code that looks somewhat like this.
$sql = 'SELECT * FROM my_table ORDER BY id DESC';
$result = mysqli_query( $connect, $sql );
if( mysqli_num_rows( $result ) > 0 ){
    $output = fopen('/tmp/dumpfile.csv', 'w+');
    /* loop through recordset and add that to the file */
    while( $row = mysqli_fetch_assoc( $result ) ) {
        fputcsv( $output, $row, ',', '"');
    }
    fclose( $output );
}
I set up a cron job on my local machine to hit the web page with this code. I also tried writing a cron job on the server to run the script from the CLI. But it's causing all sorts of problems. These include:
Sometimes the data is not consistent
The file appears to be truncated
The output cannot be imported into another database
Sometimes the script times out
I have also heard about mysqldump. I tried to run it with exec but it produces an error.
How can I solve this?
CSV and SELECT INTO OUTFILE
http://dev.mysql.com/doc/refman/5.7/en/select-into.html
SELECT ... INTO OUTFILE writes the selected rows to a file. Column and
line terminators can be specified to produce a specific output format.
Here is a complete example:
SELECT * FROM my_table INTO OUTFILE '/tmp/my_table.csv'
  FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
  LINES TERMINATED BY '\n';
The file is saved on the server, and the chosen path needs to be writable by the mysqld user. Though this query can be executed through PHP and a web request, it is best executed through the mysql console.
The data that's exported in this manner can be imported into another database using LOAD DATA INFILE.
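As an illustration (connection details, table and file names are made up), the importing side can mirror the export's field and line terminators. Note that LOAD DATA INFILE needs the FILE privilege and a path the server is allowed to read (secure_file_priv), while LOAD DATA LOCAL INFILE reads the file from the client instead:

<?php
// Sketch only: import the CSV produced by SELECT ... INTO OUTFILE into another
// database. Host, credentials, table and file names are placeholders.
$link = mysqli_connect('localhost', 'import_user', 'secret', 'other_db');
if (!$link) {
    die('Connect failed: ' . mysqli_connect_error());
}

// The terminators must match the ones used in the export above.
$sql = "LOAD DATA INFILE '/tmp/my_table.csv'
        INTO TABLE my_table
        FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '\"'
        LINES TERMINATED BY '\\n'";

if (!mysqli_query($link, $sql)) {
    die('Import failed: ' . mysqli_error($link));
}
mysqli_close($link);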
While this method is superior to iterating through a result set and saving it to a file row by row, it's not as good as using ...
mysqldump
mysqldump is superior to SELECT INTO OUTFILE in many ways; producing CSV is just one of the many things that this command can do.
The mysqldump client utility performs logical backups, producing a set
of SQL statements that can be executed to reproduce the original
database object definitions and table data. It dumps one or more MySQL
databases for backup or transfer to another SQL server. The mysqldump
command can also generate output in CSV, other delimited text, or XML
format.
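As a hedged sketch of the CSV side (names and paths are placeholders): mysqldump's --tab option makes the server write one .sql schema file and one delimited .txt data file per table, so the target directory has the same writability and secure_file_priv constraints as SELECT ... INTO OUTFILE:

<?php
// Sketch only: ask mysqldump for delimited text instead of SQL statements.
// The credentials file and database/table names are placeholders.
$cmd = '/usr/bin/mysqldump --defaults-extra-file=/etc/app/mysql-backup.cnf'
     . ' --tab=/var/lib/mysql-files'
     . " --fields-terminated-by=',' --fields-optionally-enclosed-by='\"'"
     . ' my_database my_table';

exec($cmd . ' 2>&1', $output, $rc);
if ($rc !== 0) {
    error_log('CSV dump failed: ' . implode("\n", $output));
}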
Ideally mysqldump should be invoked from your shell. It is possible to use exec in PHP to run it, but since producing the dump might take a long time depending on the amount of data, and PHP scripts usually have a 30-second execution limit, you would need to run it as a background process.
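For example, a sketch of kicking a long dump off as a background process from PHP so the request returns immediately (paths are placeholders, and --defaults-extra-file keeps the password out of the process list):

<?php
// Sketch only: start mysqldump in the background and return right away.
// The option file uses the same [client] format as ~/.my.cnf.
$dumpFile = '/var/backups/app_db-' . date('Y-m-d') . '.sql.gz';

$cmd = '( /usr/bin/mysqldump --defaults-extra-file=/etc/app/mysql-backup.cnf'
     . ' --single-transaction --quick app_db'
     . ' | gzip > ' . escapeshellarg($dumpFile) . ' )'
     . ' >> /var/log/app_db-dump.log 2>&1 &';

// Redirecting all output and backgrounding the subshell lets exec() return
// immediately instead of blocking until the dump finishes.
exec($cmd);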
mysqldump isn't without its fair share of problems.
It is not intended as a fast or scalable solution for backing up
substantial amounts of data. With large data sizes, even if the backup
step takes a reasonable time, restoring the data can be very slow
because replaying the SQL statements involves disk I/O for insertion,
index creation, and so on.
For a classic example, see this question: Server crash on MySQL backup using python, where one mysqldump seems to start before the earlier one has finished, rendering the website completely unresponsive.
MySQL replication
Replication enables data from one MySQL database server (the master)
to be copied to one or more MySQL database servers (the slaves).
Replication is asynchronous by default; slaves do not need to be
connected permanently to receive updates from the master. Depending on
the configuration, you can replicate all databases, selected
databases, or even selected tables within a database.
Thus replication operates differently from SELECT INTO OUTFILE or mysqldump. It's ideal for keeping the data in the local copy almost up to date (I would have said perfectly in sync, but there is something called slave lag). On the other hand, if you use a scheduled task to run mysqldump once every 24 hours, imagine what can happen if the server crashes after 23 hours.
Each time you run mysqldump you are producing a large amount of data; keep doing it regularly and you will find your hard disk filled up or your file storage bills hitting the roof. With replication, only the changes are passed on to the server (via the so-called binlog).
XtraBackup
An alternative to replication is to use Percona XtraBackup.
Percona XtraBackup is an open-source hot backup utility for MySQL-based servers that doesn't lock your database during the backup.
Though made by Percona, it's compatible with MySQL and MariaDB. It can do incremental backups, the lack of which is the biggest limitation of mysqldump.
I suggest taking the database backup with a command-line utility driven by a script file instead of a PHP script.
Create a .my.cnf file to store the configuration
Create a file named .my.cnf in the home directory of the user the cron job runs as, holding the default DB username, password and host; mysqldump and the other client programs will pick the credentials up from it automatically.
[client]
user = <db_user_name>
password = <db_password>
host = <db_host>
Create a shell script called backup.sh
#!/bin/sh
#
# script to take a backup every day
# change to your backup directory
cd /path_of_your_directory
# dump the application database (credentials come from ~/.my.cnf)
mysqldump <your_database_name> > tmp_db.sql
# compress it into a zip file
zip app_database-$(date +%Y-%m-%d).sql.zip tmp_db.sql
# remove the plain sql file
rm -f tmp_db.sql
Give the .sh file executable permission
chmod +x backup.sh
Set up a cron job (this example runs the backup daily at 2 AM)
0 2 * * * sh /<script_path>/backup.sh >/dev/null 2>&1
That's all. Good luck!
I'm running a cron job that executes mysqldump via a PHP script; the dump requires the RELOAD privilege. Using the MySQL admin account doesn't feel right, but neither does creating a user with admin privileges.
My main concern is the security aspect, I'm loading the db attributes (username, password, etc.) in a protected array "property" of the class I'm using.
I'm wondering which approach makes more sense or if there's another way to achieve the same results.
Overview:
LAMP Server: CENTOS 5.8, Apache 2.2.3, MySQL 5.0.95, PHP 5.3.3
Cron job outline:
Dump raw stats data from two InnoDB tables in the website db; they have a foreign key relationship.
Load the data into tables in the stats db.
Get the last value of the auto-incrementing primary key that was transferred.
Use that primary key value in a query that deletes the copied data from the website db.
Process the transferred data in the stats db to populate the reports tables.
When processing completes, delete the raw stats data from the stats db.
The website database is configured as a master with binary logging, and the replicated server will be set up once the stats data is no longer stored and processed in the website database (replicating the website database was the impetus for moving the stats to their own database).
All files accessed during the cron job are located outside the DocumentRoot directory.
The nitty gritty:
The mysqldump performed in the first step requires the RELOAD privilege, here's the command:
<?php
$SQL1 = "--no-create-info --routines --triggers --master-data ";
$SQL1 .= "--single-transaction --quick --add-locks --default-character-set=utf8 ";
$SQL1 .= "--compress --tables stats_event stats_event_attributes";
$OUTPUT_FILENAME = "/var/stats/daily/daily-stats-18.tar.gz";
$cmd1 = "/usr/bin/mysqldump -u website_user -pXXXXXX website_db $SQL1 | gzip -9 > $OUTPUT_FILENAME";
exec( $cmd1 );
?>
The error message:
mysqldump: Couldn't execute 'FLUSH TABLES': Access denied; you need the RELOAD privilege for this operation (1227)
Works fine if I use the mysql admin credentials.
The bottom line is that you need a user with certain privileges to run that mysqldump command. While it may seem silly to create a new user just for this one cron job, it's the most straightforward and simple approach you can take that at least gives the outward appearance of lolsecurity.
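For illustration, a dedicated backup user could be granted only roughly what that mysqldump invocation needs: SELECT and LOCK TABLES on the dumped database, plus RELOAD (for the FLUSH TABLES issued by --master-data) and REPLICATION CLIENT globally. The exact privilege set can vary by MySQL version, so treat this as a hedged, one-off admin sketch with made-up names:

<?php
// Sketch only: run once with an account that has GRANT OPTION to create a
// least-privilege backup user. Names and the privilege list are assumptions.
$admin = mysqli_connect('localhost', 'admin_user', 'admin_pw');

$statements = [
    "CREATE USER 'stats_backup'@'localhost' IDENTIFIED BY 'long-random-password'",
    // SELECT + LOCK TABLES cover the dump itself
    "GRANT SELECT, LOCK TABLES ON website_db.* TO 'stats_backup'@'localhost'",
    // RELOAD for FLUSH TABLES (--master-data), REPLICATION CLIENT for reading
    // the binary log coordinates
    "GRANT RELOAD, REPLICATION CLIENT ON *.* TO 'stats_backup'@'localhost'",
];

foreach ($statements as $sql) {
    if (!mysqli_query($admin, $sql)) {
        die(mysqli_error($admin) . "\n");
    }
}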
Given that this is a stopgap measure until you can get replication up and running, there's no real harm being done here. Doing this by replication is totally the way to go, and the stopgap measure seems sane.
Also, when it comes time to get replication going, xtrabackup is your friend. It includes binary log naming and position information with the snapshot it takes, which makes setting up new slaves a breeze.
I just ran across this same error (probably on the same site you were working on :) ), even when running as the MySQL root user. I managed to get around it by not specifying --skip-add-locks, e.g. this worked:
/usr/bin/mysqldump -u USERNAME -pPW DATABASE_NAME --skip-lock-tables --single-transaction --flush-logs --hex-blob
This is more a general question than a language-specific one. I have to implement a program which automatically processes CSV files (read the file, write to the database, move the file). That part isn't the problem at all.
The problem is: I have a directory structure like the following one and have to check regularly (every 5 minutes or so) whether there are any new files in it which need to be processed...
-+ basedir
--+ AT (ISO country abbreviation ...)
--+ DE
---+ ID1234 (directory for user)
---+ ID2345
---+ ID4523
---+ ...
Do you have any idea how to go through all these directories in a very performant manner? I don't think it's a good idea to just loop over all directories and scan them every time.
Files get uploaded via FTP and I've full control over the server.
Watching the log on your FTP server is a good idea, especially if you have a lot of subdirectories to scan. A tail avoids the overhead of a polling solution and will tell you precisely where to look for files. But that's something that would be achieved more easily using the shell than PHP, I think.
I have vsftpd on one server, which generates logs that include lines like this:
Fri Feb 24 05:37:43 2012 [pid 86561] [bob] OK UPLOAD: Client "10.2.3.4", "/path/to/file.txt", 6036 bytes, 32.77Kbyte/sec
To trigger actions based on this, I could use a shell script like the following:
#!/bin/sh
tail -F /var/log/vsftpd.log | while read junk junk junk junk junk junk junk user status command junk sourceip file junk; do
    if [ "$command" = "UPLOAD:" -a "$status" = "OK" ]; then
        if echo "$file" | grep -q '/path/to/.*\.txt'; then
            # do some triggered action, like:
            sql="INSERT INTO log VALUES ('$user', '$sourceip', '$file')"
            if mysql -uusername -ppasswd -Ddbname -e"$sql"; then
                filename="`echo \"$file\" | sed -r 's/\"(.*)\",$/\\1/'`"
                mv "$filename" /path/to/donefiles/
            fi
        fi
    fi
done
This could be started using your OS's normal startup facilities, or launched by cron using an @reboot special.
Add error handling to taste.
You can set up logging for FTP and parse the log for new events.
Or try something like inotify, fschange, audit, ...
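If the PECL inotify extension is available, a polling-free watcher could look roughly like this. This is a sketch under that assumption: inotify watches are per-directory, so each user directory gets its own watch, directories created after startup would need watches added too, and process_csv() is just a stand-in for your existing import/move logic:

<?php
// Sketch only: requires the PECL inotify extension (pecl install inotify).
// Watches each user directory under basedir/<country>/ and reports files
// that have finished being written (IN_CLOSE_WRITE) or were moved in.
$baseDir = '/srv/ftp/basedir';   // placeholder path

$fd = inotify_init();
$watches = [];                   // watch descriptor => directory path

foreach (glob($baseDir . '/*/ID*', GLOB_ONLYDIR) as $userDir) {
    $wd = inotify_add_watch($fd, $userDir, IN_CLOSE_WRITE | IN_MOVED_TO);
    $watches[$wd] = $userDir;
}

while (true) {
    // inotify_read() blocks until at least one event is available
    foreach (inotify_read($fd) as $event) {
        $path = $watches[$event['wd']] . '/' . $event['name'];
        if (substr($path, -4) === '.csv') {
            process_csv($path);   // hypothetical: your import/move routine
        }
    }
}

This only works when the FTP server and the watcher run on the same machine; otherwise the log-tailing approach above is the better fit.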
I am working on a PHP website and it regularly gets infected by malware. I've gone through all the security steps, but they failed. I do know how it infects my code each time: the following appears at the start of my PHP index file:
<script>.....</script><?
Can anybody please help me with how I can remove this starting block of code from every index file in my server's folders? I will use a cron job for this.
I already went through the regex questions on removing JavaScript malware but did not find what I want.
You should change the FTP password for your website, and also make sure that there are no programs running in the background that open TCP connections on your server, enabling some remote dude to change your site files. If you are on Linux, check the running processes and kill/delete everything that is suspicious.
You can also make all server files read-only as root...
Anyhow, a trojan/malware/unauthorized FTP access is to blame, not JavaScript.
Also, this is more a SuperUser question...
Clients regularly call me to disinfect their non-backed-up, PHP-malware-infected sites, on host servers they have no control over.
If I can get shell access, here is a script I wrote to run:
( set -x; pwd; date; time grep -rl zend_framework --include=*.php --exclude=*\"* --exclude=*\^* --exclude=*\%* . |perl -lne 'print quotemeta' |xargs -rt -P3 -n4 sed -i.$(date +%Y%m%d.%H%M%S).bak 's/<?php $zend_framework=.*?>//g'; date ; ls -atrFl ) 2>&1 | tee -a ./$(date +%Y%m%d.%H%M%S).$$.log
It may take a while but ONLY modifies PHP files containing the trojan's signature <?php $zend_framework=
It makes a backup of the infected .php versions to .bak so that, when re-scanned, those will be skipped.
If I cannot get shell access, e.g. FTP only, then I create a short cleaner.php file containing basically that code for PHP to exec, but the webserver often times out the script execution before it gets through all subdirectories.
WORKAROUND for your problem:
I put this in a crontab / at job to run, e.g., every 12 hours, if such direct access to process scheduling on the server is possible. Otherwise there are more convoluted approaches depending on what is permitted, e.g. calling the cleaner PHP from the outside once in a while, but making it start with different folders each time via sort -R (because after 60 seconds or so it will get terminated by the web server anyway).
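To make that concrete, a stripped-down cleaner.php along those lines might look like the following. It is a sketch only: it assumes the same $zend_framework signature that the shell one-liner targets, and the random directory order plus the self-imposed time limit are the workarounds described above:

<?php
// Sketch only: a web-invoked cleaner for FTP-only access, mirroring the shell
// one-liner above. It walks the docroot in random directory order and stops
// before the web server's execution limit so repeated runs cover the rest.
$start     = time();
$timeLimit = 50;                                   // stay under a ~60 s server timeout
$signature = '/<\?php \$zend_framework=.*?\?>/s';
$needle    = '<?php ' . '$zend_framework=';        // split so this script never matches itself

$clean = function ($path) use ($signature, $needle) {
    $code = file_get_contents($path);
    if (strpos($code, $needle) === false) {
        return;                                    // not infected, leave untouched
    }
    copy($path, $path . '.' . date('Ymd.His') . '.bak');   // back up the infected version
    file_put_contents($path, preg_replace($signature, '', $code));
    echo "cleaned: $path\n";
};

// top-level files first, then subdirectories in random order
foreach (glob(__DIR__ . '/*.php') as $file) {
    $clean($file);
}

$dirs = glob(__DIR__ . '/*', GLOB_ONLYDIR);
shuffle($dirs);                                    // start somewhere different each run

foreach ($dirs as $dir) {
    $it = new RecursiveIteratorIterator(
        new RecursiveDirectoryIterator($dir, FilesystemIterator::SKIP_DOTS)
    );
    foreach ($it as $file) {
        if ((time() - $start) > $timeLimit) {
            exit("Time limit reached, resume on the next run\n");
        }
        if ($file->getExtension() === 'php') {
            $clean($file->getPathname());
        }
    }
}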
Change Database Username Password
Change FTP password
Change WordPress Hash Key.
Download theme + plugins to your computer and scan with an UPDATED antivirus, especially NOD32.
Don't look for the pattern that tells you it is malware; just patch all your software, close unused ports, and follow what people have already told you here instead of trying to clean the code with regexes or signatures...
I want to make a script that would run something in screen as the root user. This has to be done through PHP's system() function, therefore I need to find a way to sudo to root and pass the password, all using PHP.
If you really need to sudo from PHP (not recommended), it's best to only allow specific commands and not require password for them.
For example, if PHP is running as the apache user, and you need to run /usr/bin/myapp, you could add the following to /etc/sudoers (or wherever sudoers is):
apache ALL = (root) NOPASSWD:NOEXEC: /usr/bin/myapp
This means that user apache can run /usr/bin/myapp as root without password, but the app can't execute anything else.
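With that rule in place, the PHP side never needs to handle a password; a brief sketch:

<?php
// Sketch only: invoke the whitelisted command; no password prompt is expected
// because of the NOPASSWD tag, and -n makes sudo fail instead of prompting.
$output = [];
$exitCode = 1;
exec('sudo -n /usr/bin/myapp 2>&1', $output, $exitCode);

if ($exitCode !== 0) {
    error_log('myapp failed: ' . implode("\n", $output));
}

The -n flag means that if the sudoers entry is ever removed, the call fails outright instead of hanging on a password prompt.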
I'm sure there must be a better way to do whatever it is you're trying to accomplish than whatever mechanism you're trying to create.
If you simply want to write messages from a php script to a single screen session somewhere, try this:
In php
Open a file with append-write access:
$handle = fopen("/var/log/from_php", "ab");
Write to your file:
fwrite($handle, "Sold another unit to " . $customer . "\n");
In your screen session
tail -F /var/log/from_php
If you can't just run tail in a screen session, you can use the write(1) utility to write messages to different terminals. See write(1) and mesg(1) for details on this mechanism. (I don't like it as much as the logfile approach, because that is durable and can be searched later. But I don't know exactly what you're trying to accomplish, so this is another option that might work better than tail -F on a log file.)