I am trying to figure out why my curl cron job is not executing correctly. When I run it in the browser it runs just fine, but if I run it from the command line I get the following output. I have replaced the actual URL, but it is something like example.com/file.php?pass=password&t=d
root#low [/home/user]# sudo -u user curl URL
[1] 13959
root#low [/home/user]# Starting backups for 0 accounts!
In the browser, the output runs through all 9, 10, or however many backups I have. Am I just missing a flag in my curl request?
You are using the & sign on the command line, so the shell puts the command in the background at that point (that is the [1] 13959 in your output): the parameter t gets lost and your script does not get all of its input data. It's probably the same with your cron job, since it is also executed through a shell.
Try quoting the URL so the shell does not interpret the &:
curl 'http://example.com/file.php?pass=password&t=d'
Check out http://linuxcommand.org/lts0080.php and search for "Putting a program in the background" for more on what & does in the shell.
curl is typically located at /usr/bin/curl. You may need to use the absolute path in your cron job, in case /usr/bin is not in cron's default PATH.
This would typically be the case if the exact same command works from the command line but not from cron.
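Putting the two answers together, the crontab entry might look like this (a sketch: the 2 a.m. schedule and the -s flag are assumptions, and the URL is the placeholder from the question):
# Quote the URL so the shell does not treat & as "run in background",
# and use curl's full path since cron's PATH is minimal.
0 2 * * * /usr/bin/curl -s 'http://example.com/file.php?pass=password&t=d' > /dev/null 2>&1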
Related
I have 2 websites, hosted on 2 different servers. They are somewhat interlinked. Sometimes I do something on Website-1 and then need to run a script on Website-2, e.g. I edit something on Website-1 and then want to run a script on Website-2 so it updates accordingly on its server.
Until now I have been using the following code on Website-1:
$file = file_get_contents('Website-2/update.php');
But the problem with this is that my Website-1 script stops and waits for the file to return some data, and I don't want to do anything with that data; I just want to run the script.
Is there a better way to do this, or a way to tell PHP to move on to the next line of code?
If you want to call the second site without making your user wait for a response, I would recommend using a message queue:
A request on Site 1 puts a message on the queue.
A cron job on Site 2 checks the queue and runs the update when a message exists.
Common queue apps to look at:
https://aws.amazon.com/sqs/?nc2=h_m1
https://beanstalkd.github.io/
https://www.iron.io/mq
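If you'd rather not adopt one of those services, here is a minimal sketch of the same pattern using a shared database table as the queue (the update_queue table, DSN, and credentials are all assumptions):
<?php
// Site 1: enqueue an update request instead of calling update.php directly.
$pdo = new PDO('mysql:host=db.example.com;dbname=shared', 'user', 'pass');
$pdo->prepare('INSERT INTO update_queue (payload) VALUES (?)')
    ->execute([json_encode(['action' => 'refresh'])]);
And on Site 2, a worker run from cron (e.g. every minute) drains the queue:
<?php
// Site 2: process and remove any pending update requests.
$pdo = new PDO('mysql:host=db.example.com;dbname=shared', 'user', 'pass');
foreach ($pdo->query('SELECT id, payload FROM update_queue ORDER BY id') as $row) {
    $job = json_decode($row['payload'], true);
    // ... run the logic that currently lives in update.php ...
    $pdo->prepare('DELETE FROM update_queue WHERE id = ?')->execute([$row['id']]);
}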
What you're trying to achieve is called a webhook, and it should be implemented with proper authentication, so that not just anybody can execute your scripts at any time and overload your server.
On server 2 you need to execute your script asynchronously via workers, threads, message queues or similar.
You can also issue the asynchronous request from server 1. There are many ways to achieve this; here are some links with more on the topic:
Async curl request in PHP
https://segment.com/blog/how-to-make-async-requests-in-php/
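One of the simplest fire-and-forget techniques from those links is to open a socket, send the HTTP request, and hang up without reading the response; a minimal sketch (the host and path are assumptions based on the question):
<?php
// Send the request for update.php and close immediately,
// so Website-1 never waits for Website-2's response.
$fp = fsockopen('website-2.example.com', 80, $errno, $errstr, 5);
if ($fp) {
    fwrite($fp, "GET /update.php HTTP/1.1\r\n"
              . "Host: website-2.example.com\r\n"
              . "Connection: Close\r\n\r\n");
    fclose($fp); // do not read the response
}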
Call your remote server as normal. But in the PHP script you normally call, take all the functionality and put it in a third script. Then, from the old script, call the new one with (on Linux):
exec('php -f "{path to new script}.php" ' . $args . ' > /dev/null &');
The & at the end makes this a background, or non-blocking, call. Because you call it from the remote server, you don't have to change anything on the calling server. php -f runs a PHP file. > /dev/null sends the output from that file to the garbage.
On Windows you can use COM and WScript.Shell to do the same thing:
$WshShell = new \COM('WScript.Shell');
$oExec = $WshShell->Run('cmd /C php {path to new script}.php', 0, false);
You may want to use escapeshellarg() on the filename and any arguments supplied.
So the flow will look like this:
1. Server1 calls Server2.
2. The script that was called on Server2 runs exec and kicks off a background job on Server2, then exits.
3. Server1 continues as normal.
4. Server2 continues the background process.
So, using your example, instead of calling:
file_get_contents('Website-2/update.php');
You will call
file_get_contents('Website-2/update_kickstart.php');
In update_kickstart.php, put this code:
<?php
exec('php -f "{path}update.php" > /dev/null &');
This will run update.php as a separate background (non-blocking) call. Because it's non-blocking, update_kickstart.php will finish and return to Server1, which can go about its business, while update.php runs on Server2 independently.
Simple...
One last note: file_get_contents is a poor choice here. I would use SSH, probably via phpseclib 2.0, to connect to Server2 and run the exec command directly with a user that has access only to that file (chroot it or something similar). As it is, anyone can call that file and run it. Behind an SSH login it's protected, and chrooted, that "special" user can only run that one file.
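A rough sketch of that phpseclib 2.0 approach (the hostname, username, and key path are assumptions; install via composer require phpseclib/phpseclib:~2.0):
<?php
require 'vendor/autoload.php';

use phpseclib\Net\SSH2;
use phpseclib\Crypt\RSA;

// Log in as a locked-down user that can only run update.php.
$key = new RSA();
$key->loadKey(file_get_contents('/path/to/limited_user_key'));
$ssh = new SSH2('server2.example.com');
if (!$ssh->login('updatebot', $key)) {
    exit('SSH login failed');
}
// Kick off update.php in the background and return immediately.
$ssh->exec('nohup php -f /path/to/update.php > /dev/null 2>&1 &');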
$output = shell_exec('echo "php '.$realFile.'" | at '.$targTime.' '.$targDate.' 2>&1');
print $output;
Can someone please help me figure out why the above line isn't doing what it's supposed to do? The idea is for it to create an at job that will execute a PHP script. If I switch to the user apache (which will ideally control the at function when the PHP file is complete), I can run
echo "php $realFile.php" | at 00:00 05/30/17
and it does EXACTLY what I want. The problem is that the snippet above from my PHP file will not create the at job correctly. When I run at -c job# on both jobs, the one made from my file is about a third the length, missing the user info and everything: it basically starts at PATH= and goes down, and doesn't include HOSTNAME=, SHELL=, SSH_CLIENT=, SSH_TTY=, or USER=. I assume it needs most of this info to run correctly. The end output (below) is always the same, though; it just doesn't have any of the top part for some reason. Let me know if you need more info. I didn't want to paste all of my code here, as it contains job-specific information.
${SHELL:-/bin/sh} << 'marcinDELIMITER0e4bb3e8'
php "$realFile".php
marcinDELIMITER0e4bb3e8
It doesn't seem to be a permission issue, because I can su to apache and run the exact command needed. The folder the files are in is also owned by apache. I've also resorted to giving each file I try to run 777 or 755 permissions through chmod, so I don't think that's the issue.
I figured out a couple of ways around it a while back. The way I'm using right now is an ssh2 connection to my own server as root, creating the job that way. It's a really bad workaround, as you have to enter the password manually each time. The main issue is that apache doesn't have the correct permissions to do everything needed for the at job, so someone figuring that out would be awesome. Another option I found on a random web page would be to use sudo through the PHP script, which is basically the same minus having to reconnect to my own server. Any other options would be appreciated.
Reading the manual and logs would be a good place to start. In particular:
The value of the SHELL environment variable at the time of at invocation will determine which shell is used to execute the at job commands. If SHELL is unset when at is invoked, the user’s login shell will be used; otherwise, if SHELL is set when at is invoked, it must contain the path of a shell interpreter executable that will be used to run the commands at the specified time.
Other things to check are that the user is included in at.allow, that SELinux is disabled, and that the webserver is not running chrooted.
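Based on that manual excerpt, one thing worth trying is pinning SHELL explicitly when invoking at from PHP, so the job is not built from Apache's stripped-down environment. A hedged sketch, reusing $realFile, $targTime, and $targDate from the question's snippet (whether this restores the missing environment on your system is an assumption):
<?php
// Give at a known shell and capture stderr from the whole pipeline.
$cmd = 'echo ' . escapeshellarg('php ' . $realFile)
     . ' | SHELL=/bin/bash at ' . escapeshellarg($targTime)
     . ' ' . escapeshellarg($targDate) . ' 2>&1';
$output = shell_exec($cmd);
print $output;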
I need your help here.
I wrote a Perl script for a PHP application, and it needs to run every 5 minutes.
The script calls a PHP program, which fetches data from a MySQL DB, generates an Excel report, and mails those reports to specific users.
Everything seems fine when I run the script manually with the command perl reports.pl.
But when I set up this Perl script in a crontab, nothing works and the reports are not generated.
Details: the Perl script's path is /opt/app/deweb/web/EDI/Microsoft/reports.pl, and it calls the PHP program /opt/app/deweb/web/EDI/Microsoft/reports.php.
Content of the script:
#!/usr/local/bin/perl
use Net::FTP;
use File::Copy;
use POSIX;
$errorreport = `php /opt/app/deweb/web/EDI/Microsoft/reports.php`; # run the PHP program and capture its output
print "$errorreport\n";
exit;
It works perfectly when run manually with perl reports.pl, but there are no results when it is set in cron:
*/5 7-19 * * * /usr/local/bin/perl /opt/app/deweb/web/EDI/Microsoft/reports.pl
Please note that this crontab is under a superuser account named webserv, and my login has access to edit under this account.
I'm editing the crontab using: sudo -u webserv crontab -e
I would check the following:
Does it run using sudo -u webserv perl reports.pl? If not, fix the problem for the webserv user (permissions or whatever) and it should work via cron too.
Does which perl under your login give you /usr/local/bin/perl? If not, change the path to Perl in the crontab to whatever which perl returned.
I found myself in the same situation. After trying to find out the reason, I am almost sure why this happens: cron does not have the same environment variables as you do when running the script, so you must be careful about paths. Try, for example, running your script as /perl-path/perl /path-to-perl-script/script.pl from outside the script's parent directory, and I am almost sure your program will not find some files. And since you call a PHP script from the Perl script, you can hit the same path problem with your PHP script too.
So the solution is to use absolute paths, not relative ones.
Also, in your Perl script don't use a bare php but the full path to PHP, for example:
$errorreport = `/usr/bin/php /opt/app/deweb/web/EDI/Microsoft/reports.php`;
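Another option is to declare PATH once at the top of the crontab, so cron resolves perl and php the way a login shell would; a sketch (the PATH value is an assumption about this system):
# At the top of the webserv crontab: give cron a sane PATH.
PATH=/usr/local/bin:/usr/bin:/bin
*/5 7-19 * * * perl /opt/app/deweb/web/EDI/Microsoft/reports.pl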
So I want to execute a bash command from PHP on my web server, which I can do using shell_exec. However, one of the commands I want to execute is curl. I use it to send a .wav file to another server and record its response, but when invoked from PHP, curl doesn't work.
I reduced the error to the following small example. I have a script named php_script.php which contains:
<?php
$ver = shell_exec("curl -F file=@uploads/2013-7-24-17-31-43-29097-flash.wav http://otherserver");
echo $ver;
The curious thing is that when I run this PHP script from the command line using php php_script.php, the result I get is:
Status: 500 Internal Server Error
Content-type: text/html
However, if I run curl -F file=@uploads/2013-7-24-17-31-43-29097-flash.wav http://otherserver directly, I get the response I was expecting:
verdict = authentic
(Edit: I should probably mention that if I put some bash code that does not involve curl inside the shell_exec argument, it executes fine. For example, changing the line to $ver = shell_exec("echo hello > world"); puts the word "hello" into the file "world", provided it exists and is writable.)
Something is blocking the execution of curl when it is invoked from PHP. I thought this might be PHP's running in safe mode, but I found no indication of this in php.ini. (Is there a way to test this to make 100% sure?) What's blocking curl and, more importantly, how can I bypass or disable this block?
(And yes, I realize PHP has a curl library. However, I prefer to use commands I can run from the command line as well, for debugging purposes.)
cheers,
Alan
The reason is administrative privileges: when you run the command directly, you are running it as root, and thus the command gets executed. But when you run the command through PHP, it runs as an ordinary user, which by default does not have the privileges to run such shell_exec commands.
You would have to change the shell_exec settings through cPanel or the Apache config file. But giving users shell_exec access is not recommended, as it helps attackers compromise the server, so proper care should be taken.
It would be more appropriate to use the curl library provided in PHP.
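For reference, the same upload through PHP's curl extension would look roughly like this (the URL and file path are copied from the question; CURLFile requires PHP 5.5+):
<?php
// Equivalent of: curl -F file=@uploads/2013-7-24-17-31-43-29097-flash.wav http://otherserver
$ch = curl_init('http://otherserver');
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, [
    'file' => new CURLFile('uploads/2013-7-24-17-31-43-29097-flash.wav'),
]);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // capture the body instead of printing it
$ver = curl_exec($ch);
curl_close($ch);
echo $ver;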
I want to initiate one PHP page as a background process from another PHP page.
Use popen():
$command = 'php somefile.php > /dev/null &'; // the & backgrounds the job so pclose() returns immediately
pclose(popen($command, 'r'));
This launches somefile.php as a background process.
This is a technique I used to get around restrictions applied by my webhost (who limited cronjobs to 15 minutes of execution time, so my backup scripts would always timeout).
exec( 'php somefile.php > /dev/null &' );
The breakdown of this line is:
exec() - Runs the specified command, as if from the Linux command line.
php somefile.php - Invokes PHP to open, and run, somefile.php. This is the same behaviour as what would happen if that file was accessed through a web browser.
> ("redirect") - Sends the output of the preceding command to a specified target. In this instance, it redirects the content which would normally be read by the web browser accessing the file.
/dev/null - A blackhole. No, not kidding. It is a place where you send output if you just want it to disappear.
& - Appending this character to the end of a Linux command means "Do not wait; send this to the background and continue."
So, in summary, the provided code will execute a PHP script, return no output, and not wait for it to finish before continuing onto the next line.
(And, as always, if any of these assumptions on my part are in error, I would love to be corrected by more knowledgeable members of the community.)
You have to make sure that the background process is not terminated when the processing of the page finishes. If you are on a Linux system, you could try using the nohup command:
$command = 'nohup php somefile.php > /dev/null 2>&1 &'; // redirect output so nohup.out is not created
pclose(popen($command, 'r'));
If it still gets terminated, you could try the "daemon" command.