Is there a way in PHP to execute a command with arguments, but without involving a shell?
The problem is that both exec() and shell_exec() run the supplied string in a shell, and therefore any user-supplied data must be escaped.
Is there a way to do it like:
some_exec(["command", "argument", "argument"])
as one would do in, for example, Python or Java.
I would very much like to be able to execute commands with arguments without having to escape any of them.
I'm working on a UI for a router, and the WiFi key is set through a command,
and right now I have to escape the string, which limits the number of characters the key can contain. If I were not to escape the key, one could do injections without any problems.
Clarification
The reason this is actually an annoying problem in this case is that I need to execute the following command:
wireless.#wifi-iface[0].key='<the password>'
The problem is that since the password is being set inside a string, a password such as , $il|\|t would become \$ile\|\\\|t after escaping, and the trouble now is that the escaped (incorrect) password is itself a valid one. So the password that gets stored is not the same as the one the user entered.
And even though we enter the password inside a string, we can still do an injection like this: ' ; reboot ; '
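One way to get exactly this behaviour: since PHP 7.4, proc_open() accepts the command as an array, in which case the program is executed directly with no shell in between, so the arguments reach it verbatim and never need escaping. A minimal sketch, assuming PHP 7.4+ and using /bin/echo as a stand-in for the real router command:

```php
<?php
// Sketch, assuming PHP 7.4+: passing proc_open() the command as an ARRAY
// executes the program directly (no shell), so arguments are passed
// byte-for-byte and shell metacharacters have no effect.
$key = ', $il|\|t ; reboot ;';   // hostile input, taken literally

$pipes = [];
$proc  = proc_open(
    ['/bin/echo', $key],         // argv-style array: no shell, no injection
    [1 => ['pipe', 'w']],        // capture stdout
    $pipes
);
$out = stream_get_contents($pipes[1]);
fclose($pipes[1]);
proc_close($proc);

echo $out;   // the key comes back byte-for-byte; "reboot" is never executed
```

On older PHP versions, pcntl_exec() (called in a forked child) also takes an argument array; there is no array form of exec() or shell_exec().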
I have some trouble understanding your need to avoid escaping arguments. I tried to call echo with an escaped string, and echo displays the original string:
$cmd = 'echo';
$passwd = escapeshellarg('&#;`|*?~<>^()[]{}$');
echo 'ARG : ' . $passwd . PHP_EOL;
exec($cmd . ' ' . $passwd, $output);
echo 'exec : ' . $output[0] . PHP_EOL;
echo 'passthru : ';
passthru($cmd . ' ' . $passwd);
echo 'shell_exec : ' . shell_exec($cmd . ' ' . $passwd);
echo 'system : ';
system($cmd . ' ' . $passwd);
Here is the output:
ARG : '&#;`|*?~<>^()[]{}$'
exec : &#;`|*?~<>^()[]{}$
passthru : &#;`|*?~<>^()[]{}$
shell_exec : &#;`|*?~<>^()[]{}$
system : &#;`|*?~<>^()[]{}$
Could you give an example of the command you want to execute?
Related
I have a command from which I am executing a PHP function and writing the output to a log file; after it succeeds, I want to run a second command that sends an email about the status of the first one.
e.g.
$nextCommand = 'php _protected/yii xyz-integration/job-status-email ' . $processId . ' >'.realpath(Yii::$app->basePath) . '/../../uploads/second_log.log 2>&1 & echo $!';
$command = 'php _protected/yii xyz-integration ' . $processId . ' >'.realpath(Yii::$app->basePath) . '/../../uploads/first_log.log 2>&1 & echo $! && '.$nextCommand;
exec($command, $output);
By doing this I am able to write the output of php _protected/yii xyz-integration $processId to realpath(Yii::$app->basePath) . '/../../uploads/first_log.log.
But after running this process I want to send an email about the status of the first command. Right now I am getting the status as In-Progress, which means $nextCommand is not being executed after the first one finishes (in my case I want a status of success or failed, which is updated in the database by the first command). If I remove the & echo $!, then it won't log the output/errors to the log file, which is necessary.
The code segment 2>&1 & echo $! && should be 2>&1 && echo $! && (note the missing & character after &1).
The reason is that with only a single & at the end, the command runs in the background and returns immediately, so your second command is started while the first one has not finished yet.
Here is a code change that I made to solve the issue.
$nextCommand = 'php _protected/yii xyz-integration/job-status-email ' . $processId . ' >'.realpath(Yii::$app->basePath) . '/../../uploads/second_log.log 2>&1 & echo $!';
$command = 'php _protected/yii xyz-integration ' . $processId . ' >'.realpath(Yii::$app->basePath) . '/../../uploads/first_log.log 2>&1 ';
exec($command .' ; '. $nextCommand);
For readability I assigned the commands to variables, and by joining them with ';' I executed both commands one after the other.
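The same sequencing can also be done from PHP itself, which makes the exit status of the first command available before deciding whether to launch the second. A sketch, with echo standing in for the two yii commands:

```php
<?php
// Sketch: let PHP sequence the commands instead of chaining them in one
// shell string. exec() blocks until the command finishes and reports its
// exit status, so the second command only starts after the first has
// completed successfully. 'echo' stands in for the two yii commands.
exec('echo first 2>&1', $out1, $status1);   // runs to completion
if ($status1 === 0) {                       // the equivalent of the shell's &&
    exec('echo done 2>&1', $out2, $status2);
}
echo $out2[0];
```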
I am trying to get an automatic backup of our MySQL database via a cron job that runs daily.
We have:
$database_user = 'VALUE';
$database_pass = 'VALUE';
$database_server = 'localhost';
// Name of the database to backup
$database_target = 'VALUE';
// Get Database dump
$sql_backup_file = $temp_dir . '/sql_backup.sql';
$backup_command = "mysqldump -u" . $database_user . " -p" . $database_pass . " -h " . $database_server . " " . $database_target . " > " . $sql_backup_file;
system($backup_command);
// End Database dump
Problem is we get a message back from the Cron Daemon with:
Usage: mysqldump [OPTIONS] database [tables]
OR mysqldump [OPTIONS] --databases [OPTIONS] DB1 [DB2 DB3...]
OR mysqldump [OPTIONS] --all-databases [OPTIONS]
For more options, use mysqldump --help
sh: -h: command not found
So it looks like it has something to do with the -h
Anyone have any thoughts on how to fix?
First, I recommend you upgrade to MySQL 5.6+ so that you can keep your database passwords more secure. You would then follow the instructions from this Stack Overflow answer to set up the more secure MySQL login method for command-line scripts.
You should probably write a bash script instead of using PHP. Here's a full backup script, very simple. db_name is the name of your database, and /path/to/backup_folder/ is obviously where you want to store backups. The --login-path=local switch will look in the home directory of whoever runs the script for a login file (it must be readable by the current user and accessible by no one else).
#!/bin/bash
#$NOW will provide you with a timestamp in the filename for the backup
NOW=$(date +"%Y%m%d-%H%M%S")
DB=/path/to/backup_folder/"$NOW"_db_name.sql.gz
mysqldump --login-path=local db_name | gzip > "$DB"
#You could change permissions on the created file if you want
chmod 640 "$DB"
I save that file as db_backup.sh inside of the /usr/local/bin/ folder and make sure it is readable/executable by the user who is going to be doing the db backups. Now I can run # db_backup.sh from anywhere on the system and it will work.
To make it a cron job, I put a file called 'db_backup' (the name doesn't really matter) inside my /etc/cron.d/ folder that looks like this (user_name is the user who is supposed to run the backup script):
SHELL=/bin/bash
PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin
MAILTO=root
HOME=/
# "user_name" will try to run "db_backup.sh" every day at 3AM
0 3 * * * user_name db_backup.sh
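For completeness, the login path that --login-path=local refers to can be created once, interactively, with mysql_config_editor (shipped with MySQL 5.6+); the user name below is a placeholder:

```shell
# Prompts for the password and stores it obfuscated in ~/.mylogin.cnf,
# readable only by the current user.
mysql_config_editor set --login-path=local \
    --host=localhost --user=backup_user --password

# Verify what was stored (the password is shown as *****):
mysql_config_editor print --login-path=local
```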
I think you have some strange character in your password string, maybe a "`", and that's why you are getting both errors.
The first comes from mysqldump because no database was specified (due to the broken string), and the second from the shell saying that it cannot find the -h command (the next token after the password).
Try escaping $backup_command before the system() call (http://www.php.net/escapeshellcmd):
system(escapeshellcmd($backup_command));
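One caveat: escapeshellcmd() escapes the entire string, including the > redirection, which would break the command. Escaping each untrusted value individually with escapeshellarg() avoids that. A sketch, with placeholder values for the variables from the question:

```php
<?php
// Sketch: escape each untrusted value individually with escapeshellarg()
// rather than passing the whole command line (redirection included) through
// escapeshellcmd(), which would also escape the ">" and break the redirect.
$database_user   = 'backup_user';      // placeholder values
$database_pass   = 'p@ss`word';        // contains a backtick on purpose
$database_server = 'localhost';
$database_target = 'mydb';
$sql_backup_file = '/tmp/sql_backup.sql';

$backup_command = 'mysqldump'
    . ' -u' . escapeshellarg($database_user)
    . ' -p' . escapeshellarg($database_pass)
    . ' -h ' . escapeshellarg($database_server)
    . ' '   . escapeshellarg($database_target)
    . ' > ' . escapeshellarg($sql_backup_file);

echo $backup_command;   // the backtick is now safely single-quoted
```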
You should not use -p option anyway, as this can be read out by anybody on the system via ps aux.
It is much better to specify your credentials for mysqldump in an option file:
[client]
host = localhost
user = root
password = "password here"
Then you call mysqldump like this:
mysqldump --defaults-file=/etc/my-option-file.cnf ...other-args
This way, the password is not exposed to other users on your system. It might also fix your problem with unescaped complex passwords.
I would also recommend looking at this project to get some more ideas:
mysqldump-secure
Do not keep a space between -p and $database_pass. You need to write the password immediately after -p, without a space.
$backup_command = "mysqldump -u" . $database_user . " -p" . $database_pass . " -h " .
$database_server . " " . $database_target . " > " . $sql_backup_file;
Hope this works for you.
You need to supply the database name. If you want a backup of all databases, add --all-databases (and drop the single database name), and also remove the spacing between -p and the password:
"mysqldump -u" . $database_user . " -p" . $database_pass . " -h " .
$database_server . " --all-databases > " .
$sql_backup_file;
Try this, adding a space between -u and the username:
$backup_command = "mysqldump -u " . $database_user . " -p" . $database_pass . " -h " . $database_server . " " . $database_target . " > " . $sql_backup_file;
^
If the database server is located at localhost, remove the host param:
$backup_command = "mysqldump -u " . $database_user . " -p" . $database_pass . " " . $database_target . " > " . $sql_backup_file;
Hi, I am writing to a remote MSSQL 2005 server from a PHP application, and have a situation where mssql_num_rows() errors out with the "mssql_num_rows() expects parameter 1 to be resource, boolean given" message... but I can't figure out why.
$writeitem = "INSERT INTO RebateSubmissionProducts VALUES ('" . $buyproduct . "'," . $quantity . ",CAST('" . $itemUUID . "' as UNIQUEIDENTIFIER),CAST('" . $eligible . "' as UNIQUEIDENTIFIER),CAST('" . $prodID . "' as UNIQUEIDENTIFIER),CAST('" . $UUID . "' as UNIQUEIDENTIFIER),NULL)";
$itemresult = mssql_query($writeitem);
if (!mssql_num_rows($itemresult)) {
    echo 'Problem writing to RebateSubmissionProducts';
} else {
    echo 'Success writing to RebateSubmissionProducts';
}
mssql_free_result($itemresult);
The upshot is that I get the error message, but the insert works fine.
BTW all the input is run through HTMLPurifier so don't slag me too hard about that. The hosting company can't set up PDO_DBLIB so I can't use PDO/bound params.... I also don't have access to the MS server for creating a stored procedure.
Any ideas why php thinks that $itemresult is a boolean? (both mssql_num_rows and mssql_free_result issue the same error message)
As andrewsi pointed out, mssql_num_rows() responds differently depending on the query. In my case I was running an INSERT, and so when this ran:
$itemresult = mssql_query($writeitem);
$itemresult was boolean TRUE because the insert succeeded, so mssql_num_rows() and mssql_free_result() both issued warnings, since there was no result set.
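A sketch of handling both result shapes. Note that the mssql_* extension this relies on was removed in PHP 7, so this applies only to legacy PHP 5 installations; $writeitem is the INSERT string from the question:

```php
<?php
// Sketch for legacy PHP 5 / ext-mssql code: mssql_query() returns a result
// resource for SELECT-style queries, but a plain boolean for INSERT/UPDATE/
// DELETE, so check the type before calling result-set functions.
$itemresult = mssql_query($writeitem);

if ($itemresult === false) {
    echo 'Problem writing to RebateSubmissionProducts';
} elseif ($itemresult === true) {
    // Write query succeeded; there is no result set to count or free.
    echo 'Success writing to RebateSubmissionProducts';
} else {
    // SELECT-style query: a resource carrying rows.
    echo mssql_num_rows($itemresult) . ' rows returned';
    mssql_free_result($itemresult);
}
```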
I'm trying to dump some of my tables if their prefix matches a given substring, using PHP. Using PHP's system() did not bring any result, in the sense that the dump file was not created. I thought I would use exec() to achieve my result, and I tried the following:
exec('E:/xampp/mysql/bin/mysqldump '. $dbname .' -h '. $this->host .' -u ' .$this->user . ' $(E:/xampp/mysql/bin/mysql -u '. $this->user . ' -p ' . $dbname .' -Bse "show tables like \'wp_dev%\'")> mydb.sql 2>&1', $output);
but the sub-query that should filter out the matching tables returns the following error:
mysqldump: unknown option '-s'
It seems like I'm missing something in the syntax.
Use it like this; it will take a dump of only the listed tables.
exec('E:/xampp/mysql/bin/mysqldump -h '. $this->host .' -u ' .$this->user . ' -p'. $this->password .' '. $dbname .' table1 table2 > /path_to_file/dump_file.sql');
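The original attempt likely fails because exec() on Windows runs the command through cmd.exe, which does not understand the bash-style $(...) command substitution. An alternative, sketched here with placeholder credentials, is to fetch the matching table names with mysqli first and then hand them to mysqldump:

```php
<?php
// Sketch: resolve the matching table names in PHP instead of relying on
// $(...) command substitution, which cmd.exe (used by exec() on Windows)
// does not understand. Host, user, password, and paths are placeholders.
$mysqli = new mysqli('localhost', 'root', 'secret', 'mydb');

$tables = [];
$res = $mysqli->query("SHOW TABLES LIKE 'wp\\_dev%'"); // \_ = literal underscore
while ($row = $res->fetch_row()) {
    $tables[] = escapeshellarg($row[0]);
}

if ($tables !== []) {
    $cmd = 'E:/xampp/mysql/bin/mysqldump -h localhost -u root -psecret mydb '
         . implode(' ', $tables)
         . ' > mydb.sql 2>&1';
    exec($cmd, $output, $status);
}
```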
I have a LIVE version of a MySQL database with 5 tables and a TEST version.
I am continually using phpMyAdmin to make a copy of each table in the LIVE version to the TEST version.
Does anyone have the mysql query statement to make a complete copy of a database? The query string would need to account for structure, data, auto increment values, and any other things associated with the tables that need to be copied.
Thanks.
OK, after a lot of research, googling, and reading through everyone's comments here, I produced the following script, which I now run from the browser address bar. I tested it and it does exactly what I needed it to do. Thanks for everyone's help.
<?php
function duplicateTables($sourceDB = NULL, $targetDB = NULL) {
    $link = mysql_connect('{server}', '{username}', '{password}') or die(mysql_error()); // connect to database
    $result = mysql_query('SHOW TABLES FROM ' . $sourceDB) or die(mysql_error());
    while ($row = mysql_fetch_row($result)) {
        mysql_query('DROP TABLE IF EXISTS `' . $targetDB . '`.`' . $row[0] . '`') or die(mysql_error());
        mysql_query('CREATE TABLE `' . $targetDB . '`.`' . $row[0] . '` LIKE `' . $sourceDB . '`.`' . $row[0] . '`') or die(mysql_error());
        mysql_query('INSERT INTO `' . $targetDB . '`.`' . $row[0] . '` SELECT * FROM `' . $sourceDB . '`.`' . $row[0] . '`') or die(mysql_error());
        mysql_query('OPTIMIZE TABLE `' . $targetDB . '`.`' . $row[0] . '`') or die(mysql_error());
    }
    mysql_free_result($result);
    mysql_close($link);
} // end duplicateTables()

duplicateTables('liveDB', 'testDB');
?>
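A note for anyone reusing this today: the mysql_* functions were removed in PHP 7, so the same loop would need porting to mysqli. A minimal, untested sketch of that port (connection details are placeholders):

```php
<?php
// Sketch: the script above ported to mysqli, since the mysql_* extension
// was removed in PHP 7. Connection details are placeholders.
function duplicateTables(mysqli $link, $sourceDB, $targetDB) {
    $result = $link->query('SHOW TABLES FROM `' . $sourceDB . '`');
    while ($row = $result->fetch_row()) {
        $src = '`' . $sourceDB . '`.`' . $row[0] . '`';
        $dst = '`' . $targetDB . '`.`' . $row[0] . '`';
        $link->query('DROP TABLE IF EXISTS ' . $dst);
        $link->query('CREATE TABLE ' . $dst . ' LIKE ' . $src);
        $link->query('INSERT INTO ' . $dst . ' SELECT * FROM ' . $src);
        $link->query('OPTIMIZE TABLE ' . $dst);
    }
    $result->free();
}

$link = new mysqli('{server}', '{username}', '{password}');
duplicateTables($link, 'liveDB', 'testDB');
$link->close();
```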
Depending on your access to the server, I suggest using the straight mysql and mysqldump commands. That's all phpMyAdmin is doing under the hood.
Reference material for mysqldump:
http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html
There is a PHP class for that; I haven't tested it yet.
From its description:
This class can be used to backup a MySQL database.
It queries a database and generates a list of SQL statements that can be used later to restore the database **tables structure** and their contents.
I guess this is what you need.
Hi, here is a simple bash script you can use to back up a whole database.
######### SNIP BEGIN ##########
## Copy from here #############
#!/bin/bash
# to use the script do following:
# sh backup.sh DBNAME | sh
# where DBNAME is database name from alma016
# ex Backuping mydb data:
# sh backup.sh mydb hostname username pass| sh
echo "#sh backup.sh mydb hostname username pass| sh"
DB=$1
host=$2
user=$3
pass=$4
NOW=$(date +"%m-%d-%Y")
FILE="$DB.backup.$NOW.gz"
# rest of script
#dump command:
cmd="mysqldump -h $host -u$user -p$pass $DB | gzip -9 > $FILE"
echo $cmd
############ END SNIP ###########
EDIT
If you would like to clone the backed-up database, just edit the dump to change the database name, and then (the script above produces a plain gzip file, not a tar archive):
gunzip < yourdb.backup.DATE.gz | mysql -uusername -ppass
Cheers, Arman.
Well, in script form, you could try using the
CREATE TABLE ... LIKE syntax, iterating through a list of tables, which you can get from SHOW TABLES.
The only problem is that LIKE does not copy foreign keys natively (indexes are copied, but foreign key definitions are not), so you would have to list and recreate those as well. Then a few INSERT ... SELECT calls to get the data in.
If your schema never changes, only the data. Then create a script that replicates the table structure and then just do the INSERT ... SELECT business in a transaction.
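For that data-only refresh, a sketch in SQL. The table name is a placeholder, and the DELETE/INSERT pair sits inside a transaction precisely because MySQL DDL statements such as CREATE TABLE would cause an implicit commit:

```sql
-- Sketch: refresh one table's data inside a transaction, assuming the
-- schema in testDB already matches liveDB. "orders" is a placeholder name.
START TRANSACTION;
DELETE FROM testDB.orders;
INSERT INTO testDB.orders SELECT * FROM liveDB.orders;
COMMIT;
```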
Failing that, mysqldump, as the others say, is pretty easy to get working from a script. I have a daily cron job that dumps all manner of databases from my datacenter servers, connects via FTPS to my location, and sends all the dumps across. It can be done quite effectively. Obviously you have to make sure such facilities are locked down, but again, that's not overly hard.
As per code request
The code is proprietary, but I'll show you the critical section you need. It comes from the middle of a foreach loop, hence the continue statements and the $c-prefixed variables (I use that prefix to indicate current-loop variables). The echo commands could be whatever you want; this is a cron script, so echoing the current status was appropriate. The flush() lines are helpful when you run the script from a browser: the output is sent up to that point, so the results fill in as the script runs rather than all turning up at the end. The ftp_fput() line reflects my situation of uploading the dump somewhere, reading directly from the pipe; you could instead open another process and pipe the output into a mysql process to replicate the database, provided suitable amendments are made.
$cDumpCmd = $mysqlDumpPath . ' -h' . $dbServer . ' -u' . escapeshellarg($cDBUser) . ' -p' . escapeshellarg($cDBPassword) . ' ' . $cDatabase . (!empty($dumpCommandOptions) ? ' ' . $dumpCommandOptions : '');
$cPipeDesc = array(0 => array('pipe', 'r'),
                   1 => array('pipe', 'w'),
                   2 => array('pipe', 'w'));
$cPipes = array();
$cStartTime = microtime(true);
$cDumpProc = proc_open($cDumpCmd, $cPipeDesc, $cPipes, '/tmp', array());
if (!is_resource($cDumpProc)) {
    echo "failed.\n";
    continue;
} else {
    echo "success.\n";
}

echo "DB: " . $cDatabase . " - Uploading Database...";
flush();

$cUploadResult = ftp_fput($ftpConn, $dbFileName, $cPipes[1], FTP_BINARY);
$cStopTime = microtime(true);

if ($cUploadResult) {
    echo "success (" . round($cStopTime - $cStartTime, 3) . " seconds).\n";
    $databaseCount++;
} else {
    echo "failed.\n";
    continue;
}

$cErrorOutput = stream_get_contents($cPipes[2]);
foreach ($cPipes as $cFHandle) {
    fclose($cFHandle);
}
$cDumpStatus = proc_close($cDumpProc);
if ($cDumpStatus != 0) {
    echo "DB: " . $cDatabase . " - Dump process caused an error:\n";
    echo $cErrorOutput . "\n";
    continue;
}
flush();
If you're using Linux or Mac, here is a single line to clone a database:
mysqldump -uUSER -pPASSWORD -hsample.host --single-transaction --quick test | mysql -uUSER -pPASSWORD -hqa.sample.host --database=test
The 'advantage' here is that it will lock the database while it's making a copy.
That means you end up with a consistent copy.
It also means your production database will be tied up for the duration of the copy which generally isn't a good thing.
Without locks or transactions, if something is writing to the database while you're making a copy, you could end up with orphaned data in your copy.
To get a good copy without impacting production, you should create a slave on another server.
The slave is updated in real time.
You can run the same command on the slave without impacting production.