Is there a way to call a script from a cron job only and make it so no other ips can run that script?
I have a script that sends notifications to phones. It is supposed to be called by a cron job once a day, but sometimes something else triggers it and everyone gets notified when they shouldn't. I would like to limit it so it can be called from my server only.
In other words, I want to make it impossible to call from a browser, a spider, etc.
Thanks
I'd suggest that you don't make this file available publicly.
But to answer your question directly: one way to do this is to add the following check:
if (php_sapi_name() === 'cli') {...}
More info: https://secure.php.net/manual/en/function.php-sapi-name.php
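A minimal sketch of that guard at the top of the notification script (http_response_code() assumes PHP 5.4+; on older versions, send a header() instead; the message text is just an example):
if (php_sapi_name() !== 'cli') {
    // not invoked from the command line (i.e. not from cron); refuse to run
    http_response_code(403);
    exit('This script may only be run from the command line.');
}
// ...notification logic continues below...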
Your script seems to be available via public URL. Move it somewhere else. For example, if the script is within /www/site.com/public/script.php, and /www/site.com/public is the public directory of the Web server, move it to some /www/site.com/cron/script.php. Make sure that the Web server is not configured to fetch files from the cron directory.
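The crontab entry would then point at the non-public path, something like this (the paths and PHP binary location are assumptions):
# runs once a day at midnight, outside the web root
0 0 * * * /usr/bin/php /www/site.com/cron/script.php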
As others have written, there may be other setups you could use (permissions and file locations); however, I've never been one to tell someone to re-engineer their whole process to meet a single need. That feels too much like the tail wagging the dog. So, if you wish to have a script that can only be executed by your cron job and not mistakenly by another process, I would offer you this solution.
The php_sapi_name() solution has been noted to behave inconsistently and is specific to the system configuration (for example, try echoing its output when calling your script from the command line and again from cron; the two may differ). That can lead to tracking down other bugs. The best solution I have seen and used is to pass a command-line argument to your script and check for the existence of that argument.
Your cron job would look something like:
* * * * * php /path/to/script --cron
Then, inside your script you would perform an if check, that when not satisfied, stops the script.
if( !isset($argv[1]) || $argv[1] != '--cron' )
{
//NO CRON ARGUMENT SUPPLIED, SOMETHING ELSE MUST BE CALLING THIS
exit;
}
Take note that in PHP's CLI, $argv[0] is the script name itself, which is why the check above looks at $argv[1]. The isset() guard also covers web requests, where $argv may not be populated at all.
Hope this helps, let me know if you have any questions, I'm happy to respond or edit if needed.
Related
I know this question has been asked before, but none of the answers are working for me.
I'm trying to run a simple PHP script every night at midnight. I created a file called "autoDelete.php" that contains just this code:
<?php
include 'my-database-connection.php';
mysql_query("DELETE FROM meetings WHERE indexDate < NOW()");
?>
I know this script is working because if I navigate to it in a browser, it does what it should.
I then set up the Cron job (via GoDaddy cPanel) to run every minute, with a command to run the script using this:
* * * * * /usr/bin/php -q /home/username/public_html/autoDelete.php
However, this is not working. I suspect this has something to do with whatever precedes the "/home" in the command.
Some things to check:
1) cron jobs' default "working directory" is the home directory of the user ID they're running under. That'd most likely be /home/username in this case. That means if you have any relative-pathed include/require commands, they're going to be relative to /home/username, NOT /home/username/public_html. Make SURE that all necessary files are accessible.
2) You're simply assuming the query call succeeded. That's exactly the wrong thing to do. Calls to external resources (DBs in particular) have exactly ONE way to succeed, and a near-infinite number of ways to fail. ALWAYS check for failure, and treat success as a pleasant surprise.
Combine these two (failing to include your connection script, and failing to check for failure) and you end up with what you've got: "nothing" happening. At least try something like
mysql_query(...) or die(mysql_error());
The error message will become your script's output, and get emailed to the controlling account's mailbox.
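Putting both points together, a minimal sketch of the cron script with an absolute include path (the exact path is an assumption) and failure checking, keeping the legacy mysql_* API only because the question uses it:
<?php
// absolute path so the include works regardless of cron's working directory
include '/home/username/public_html/my-database-connection.php';

// treat failure as the norm; the error text becomes the cron job's output
mysql_query("DELETE FROM meetings WHERE indexDate < NOW()")
    or die(mysql_error());
?>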
I've had issues in the past with running PHP scripts in a cron job when trying to invoke the PHP binary directly. My solution was to use wget in the cron job since the script was servable by Apache anyway (0 0 * * * wget url/of/script.php). Apache already has the right PHP environment set up so might as well just ask it to handle the job.
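If you go that route, it may be worth silencing wget so cron doesn't mail you the downloaded page on every run; a sketch (the URL is a placeholder):
# -q suppresses wget's progress output; -O /dev/null discards the page body
0 0 * * * wget -q -O /dev/null http://example.com/script.php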
I'd use the following code:
$SERVER_PATH = dirname(__FILE__);
shell_exec($PHP_LOCATION.' '.$SERVER_PATH."/script.php?k1=v1&k2=v2 > /dev/null 2>/dev/null &");
Where:
$PHP_LOCATION should contain the path to PHP,
$SERVER_PATH is the current working directory (fortunately, the script to run is in the same directory),
> /dev/null 2>/dev/null & is added to make the call asynchronous (taken from the Asynchronous shell exec in PHP question)
This code has two problems:
As far as I remember, ?k1=v1&k2=v2 works for web calls only, so in this particular case the parameters will not be passed to the script.
I don't really know how to initialize the $PHP_LOCATION variable so that it is flexible and works on most hosts.
I conducted some research regarding both problems:
To solve 1, it was suggested to use -- 'parameters_string', but it is also recommended to modify the script to parse the parameter string, which looks a bit clumsy. Is there a better solution?
To solve 2, I found the PHP_BINARY constant, but it requires PHP 5.4+ (I'm using 5.3). The original point was to run the same PHP version as the calling script, so for my PHP 5.3 setup, is there a solution?
EDIT 0
Let me explain why I am sticking to this weird (for PHP) approach:
Those PHP scripts should be separate from each other:
one of those will analyze the data and
the second will generate PNG graphs as a final result.
These scripts aren't intended to run simultaneously; the second can run on its own schedule, it just must not run until its data is ready (which is produced by the first script). So no data should be passed back from the second script (child) to the first (parent).
EDIT 1
As seen from most of the comments, the discussion has mainly gone in the direction of forking. However, I'd like to stress points 1 and 2 asked in the original question. I have reasons to solve the task in the way I pointed out, and I have tried to state them all. If some of my points look weird, please post a comment and I will clarify or change the main question.
Thank you in advance!
How to get the executable
Assuming you're using Linux, you can use:
function getBinaryRunner($binary)
{
    // ask the shell where the binary lives, e.g. "which php" => "/usr/bin/php"
    return trim(shell_exec('which '.$binary));
}
For instance, the same approach may be used to check whether a needed tool is installed:
function checkIfCommandExists($command)
{
    // "which" prints nothing when the command is missing
    $result = shell_exec('which '.$command);
    return !empty($result);
}
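Usage might look like this (a sketch assuming a Linux host where which is available):
$php = getBinaryRunner('php');   // e.g. "/usr/bin/php"
if (checkIfCommandExists('wget')) {
    // safe to shell out to wget from here on
}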
Some points:
Yes, it will work only for Linux
You should be careful with user input if it is allowed to be passed to shell commands: escapeshellarg() and company
Indeed, PHP should normally not be used for things like this; when it comes to asynchronous requests, it is better to either implement forking or run commands from external workers.
How to pass parameters
When you call shell_exec() with a filesystem path, you are accessing the file directly, so all the "GET" parameters simply become part of the file name; it is no longer a "URI", since there is no web server to process it. So you have two options:
Invoke the call by accessing your web server. It will look like:
//Yes, you will use wget or, better, curl to make a web request from the CLI
shell_exec('wget http://your.web-server.domain/script.php?foo=bar');
Downside: if you access your web server via public DNS, you incur a network round trip and all the web-server processing overhead. Benefit: you will not have to change anything in your script, and there is no distinction between CLI and non-CLI calls.
Use the $_SERVER array in your script and pass the parameters the CLI way:
shell_exec('/usr/bin/php /path/to/script.php foo bar');
//inside your script.php you will see:
//$_SERVER['argv'][0] is "script.php"
//$_SERVER['argv'][1] is "foo"
//$_SERVER['argv'][2] is "bar"
Yes, this requires modifying the script and, probably, some logic to map "regular" web requests and CLI ones onto each other (a rough sketch follows below). I would even suggest separating the CLI-related code into a different script bundle so as not to mix up that logic.
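One way such a mapping could look (an assumption, not part of the original answer): treat CLI arguments of the form key=value like a query string.
if (php_sapi_name() === 'cli') {
    // turn "script.php foo=bar baz=1" into the same array $_GET would hold
    parse_str(implode('&', array_slice($_SERVER['argv'], 1)), $params);
} else {
    $params = $_GET;
}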
More about "asynchronous run"
When you do php script.php & you just run it in background mode. That, however, still keeps a parent-child relationship for your process. That means that if the parent process dies, its children will also be removed; to be precise, SIGHUP will be sent. To avoid this, use the nohup command. It emulates "detaching" the process, making its run reliable and independent of whatever happens to the parent process.
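A sketch combining both ideas (the paths and arguments are placeholders):
// nohup detaches the child from this process; the redirects and trailing
// "&" make the call return immediately instead of blocking
shell_exec('nohup /usr/bin/php /path/to/script.php foo bar > /dev/null 2>&1 &');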
I have attempted to create a cron job in my CakePHP 2.x application. But all of the resources I have read online seem to be either doing it completely different to one another with little consistency or explain it in very complex terminology.
Basically I have created the following file MyShell.php in /app/Console/Command
<?php
class MyShell extends Shell {
public function sendEmail() {
App::uses('CakeEmail', 'Network/Email');
$email = new CakeEmail();
$email->from('cameron@driz.co.uk');
$email->to('cameron@driz.co.uk');
$email->subject('Test Email from Cron');
$result = $email->send('Hello from Cron');
}
}
?>
And I want to say run this code at midnight every night.
What do I do next? The next part really confuses me. I have read in the Book at http://book.cakephp.org/2.0/en/console-and-shells/cron-jobs.html that I should run some code in the terminal to make it run at a certain time, etc. And it seems I can set these up rather easily using my hosting provider.
But I'm rather confused about the Console directory. What should go in what folder in here: https://github.com/cakephp/cakephp/tree/master/app/Console
/Console/Command
/Console/Command/Tasks
/Console/Templates
I also noticed that many of the files are .php (e.g. my Shell file is also .php), but according to documentation I've read about cron jobs, shouldn't the executed files be .sh?
Can anyone shed more light on this?
And what would the code be to call that command?
e.g. I presume this is incorrect: 0 0 * * * cd /domains/driz.co.uk/html/App && cake/Console MyShell sendEmail
Thanks
No. There is no way to do it just in PHP. But that doesn't matter, because crons are easy to set up.
In that article you linked to, you still have to set up a cron - the difference is just that you set up a single cron, that runs all your other crons - as opposed to setting up one cron per job. So, either way, you have to learn to create a cron.
The instructions depend on your server's operating system and also what host you're with. Some hosts will have a way to set up cron jobs through a GUI interface like cPanel or something, without you having to touch the terminal.
It's usually pretty easy to find instructions online for how to set up cron jobs with your host or server OS, but if you're having trouble, update your question with your host's name, and your server OS and version.
Also ---------------------------------
Often in cron jobs you'll be running a shell script (.sh). But don't worry about that for this case; yours will end in .php.
Re: the directory structure:
/Console/Command is where your new file should go.
If you're doing a lot of shell stuff, you may want to abstract common code out into the /Console/Command/Task folder. Read more about that here. This probably won't be needed in your case.
/Console/Command/Templates is where you can put custom templates for the Cake bake console - don't worry about that for now.
If I've only got a couple of cron jobs to run, then I create just one file called CronJobsShell.php, and put them all in there.
Really, you should read Cake's documentation on shells from start to end. It will give you a nice picture of how it all hangs together.
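For reference, a crontab entry for the shell in the question might look like this (the path is the asker's; the my_shell name follows Cake 2.x's underscored naming convention and is an assumption):
0 0 * * * cd /domains/driz.co.uk/html/app && Console/cake my_shell sendEmail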
This can be done very easily with the following steps:
1) Create a shell, let's say HelloShell.php, in Console/Command:
<?php
class HelloShell extends AppShell
{
public function main()
{
//Your functionality here...
}
}
?>
This shell can be called by Console/cake hello
2) Run the command crontab -e. This will open the default editor (or the editor you have selected). Since we want our shell to run at midnight, write:
0 0 * * * /PATH TO APP/Console/cake hello
For a better understanding, refer to https://www.youtube.com/watch?v=ljgvo2jM234
Thanks!
I am coding a PHP-scripted web page that is intended to accept the filename of a JFFS2 image which was previously uploaded to the server. The script is to then re-flash a partition on the server with the image, and output the results. I had been using this:
$tmp = shell_exec("update_flash -v " . $filename . " 4 2>&1");
echo '<h3>' . $tmp . '</h3>';
echo verifyResults($tmp);
(The verifyResults function will return some HTML that indicates to the user whether the update command completed successfully. I.e., in the case that the update completes successfully, display a button to restart the device, etc.)
The problem with this is that the update command takes several minutes to complete, and the PHP script blocks until the shell command is complete before it returns any of the output. This typically means that the update command will continue running, while the user will see an HTTP 504 error (at worst) or wait for the page to load for several minutes.
I was thinking about doing something like this instead:
shell_exec("rm /tmp/output.txt");
shell_exec("update_flash -v " . $filename . " 4 2>&1 >> /tmp/output.txt &");
echo '<div id="output"></div>';
echo '<div id="results"></div>';
This would theoretically put the command in the background and append all output to /tmp/output.txt.
And then, in a Javascript function, I would periodically request getOutput.php, which would simply print the contents of /tmp/output.txt and stick it into the "output" div. Once the command is completely done, another Javascript function would process the output and display a result in the "results" div.
But the problem I see here is that getOutput.php will eventually become inaccessible during the process of updating the device's flash memory, because it's on the partition that is targeted for the update. So that could leave me in the same position as before, albeit without the 504 or a seemingly eternally-loading page.
I could move getOutput.php to another partition in the device, but then I think I would still have to do some funky stuff with the webserver configuration to be able to access it there (a symlink to it from the webroot would, like any other file, eventually be overwritten during the re-flash).
Is there any other way of displaying the output of the command as it runs, or should I just make do with the solution I have?
Edit 1: I'm currently testing some solutions. I'll update my question with results later.
Edit 2: It seems that the filesystem does not get overwritten as I had originally thought. Instead, the system seems to mount the existing filesystem in read-only mode, so I can still access getOutput.php even after the filesystem is re-flashed.
The second solution I described in my question does seem to work in addition with using popen (as mentioned in an answer below) instead of shell_exec. The page loads, and via Ajax I can display the contents of output.txt.
However, it seems that output.txt does not reflect the output from the re-flash command in real time-- it seems to display nothing until the update command returns from execution. I will need to do further testing to see what's going on here.
Edit 3: Never mind, it looks like the file is current as I access it. I was just hitting a delay while the kernel did some JFFS2-related tasks triggered by my use of the partition on which the source JFFS2 image is stored. I don't know why, but this apparently causes all PHP scripts to block until it's done.
To work around that, I'm going to put the update command invocation in a separate script and request it via Ajax-- that way, the user will at least receive some prepackaged feedback while technically still waiting on the system.
Look at popen: http://it.php.net/manual/en/function.popen.php
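A sketch of how popen() lets the page stream the command's output as it is produced, instead of blocking until it finishes ($filename is from the question; escapeshellarg() is added as a precaution, and output buffering may also require ob_flush() depending on your setup):
$handle = popen('update_flash -v ' . escapeshellarg($filename) . ' 4 2>&1', 'r');
while (!feof($handle)) {
    echo fgets($handle); // send each line to the client as it arrives
    flush();
}
pclose($handle);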
Interesting scenario.
My first thought was to do something regarding proc_* and $_SESSION, but I'm not sure if that will work or not. Give it a try, but if not...
If you're worried about the file being flashed during the process, you could always stand up a MySQL database in the secondary process and write to that. The database can live on another partition, and you can address it by local IP; the system will take care of the routing.
Edit
When I mentioned proc_* with sessions, I meant something similar to this where $descriptorspec would become:
$_SESSION = array(
1 => array("pipe", "w"),
);
However, I kind of doubt that will work. The process would end up writing to the $_SESSION in memory, which no longer exists once the first script is killed.
Edit 2
ACTUALLY, on that note, you could install memcache and have your secondary process write directly to memory, which can then be re-read by your web-interfaced process.
If you wipe the DocRoot there is no resource/script that can respond to requests from the user during this time. Therefore you have to send updates to the user in the same request that does the wipe. This requires you to start the shell process and immediately return to PHP. This can be accomplished with pcntl_fork() and pcntl_exec(). Your PHP script should now continuously send the output of the shell script to the client. If the shell script appends to a file in /tmp, you could fpassthru() that file and clear it until the shell script ends.
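A rough sketch of that idea, assuming the pcntl extension is available in your PHP build (it is often compiled only for the CLI SAPI, so treat this as illustrative; the log path and command are from the question):
$pid = pcntl_fork();
if ($pid === 0) {
    // child: replace this process with a shell running the flash tool,
    // appending all of its output to the log file
    pcntl_exec('/bin/sh', array('-c',
        'update_flash -v ' . escapeshellarg($filename) . ' 4 >> /tmp/output.txt 2>&1'));
    exit(1); // only reached if exec itself fails
}
// parent: stream the log to the client while the child is still running
while (pcntl_waitpid($pid, $status, WNOHANG) === 0) {
    if (is_readable('/tmp/output.txt')) {
        readfile('/tmp/output.txt');              // pass the log through
        file_put_contents('/tmp/output.txt', ''); // clear what was sent
        flush();
    }
    sleep(1);
}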
Regarding your However:
My guess is you are trying to use the file as a stream. I haven't done any production tests, but I believe that the file will only be written back to disk on fclose().
If you are writing to the file continually in script #2, those writes are actually going directly into memory until the file is closed.
Again - I cannot verify this, but if you want to test it, try re-opening and closing the file for every write. This will confirm or deny my theory and you can modify your approach accordingly.
I've got a script that calls two functions, A and B, from the same class. A creates an Amazon virtual server and B destroys one, both via shell_exec()'s of Amazon's command line tools. The script, doActions.php, pulls actions from a queue. If the action is "create" it creates an instance; when the action is "destroy" it kills one.
The script works fine to execute both A and B when I execute it from the command line: php script.php.
When I put it on a cron, it runs but only successfully runs the B function. It destroys instances but won't create them.
The point of failure is clearly function A. It chokes at the first and most important shell_exec, returning and echoing nothing.
echo $string = shell_exec('/home/user/public_html/domain.com/private/ec2-api-tools/bin/ec2-run-instances ami-23b6534a -k gsg-keypair -z us-east-1a');
Unless you know something specific about the way Amazon's command line tools work, please suggest to me reasons why a shell_exec might work in one case and not the other.
Another shell_exec in the same place behaves as expected:
echo $string = shell_exec ('echo overflow');
My guess is that it has something to do with permissions. But when I have it run shell_exec('whoami'), it returns "root", and when I su and run the command, it works fine. I'm having a hard time thinking of creative ways to troubleshoot why my PHP script won't work in cron when it does from the command line. Can you suggest some?
When something runs from the command line but refuses to do so within cron, it's often an environment issue (path or some other environment variable that's needed by the code you're running).
For a start you should modify the script to output the current environment (shell_exec('env')?) at the very top and examine the output from the command line and cron.
Hopefully, there will be something obvious such as AMAZON_EC2_VITAL_VAR but, if not, you should move the cron environment towards your command line one, one variable at a time, until it starts working.
A quick test to ascertain this. From your command line, do:
env >/tmp/pax_env.sh
Then run your PHP script from a shell script which first executes:
. /tmp/pax_env.sh
so that the environments are identical.
And keep in mind that su on its own doesn't give you the same environment as you'd get from logging in directly as a specific user (su - does, I think). You may want to check the behaviour for when you log in as root directly.
Re your comment:
Yes, I do believe you've got it. I'm likely going to mark your answer as correct but need you to suffer through a few addendums about your clever solution. First of all, what's the best way to execute the pax_env.sh script? Does shell_exec() work?
Never let it be said I didn't work for my money :-) No. The shell_exec will almost certainly be running a sub-shell so the variables would be set in that sub-shell but would not affect the PHP parent process.
My advice, if you wanted all those variables set, would be to create a shell-script consisting of all the commands in /tmp/pax_env.sh (probably prefixing each with export) followed by the command you currently have running in cron, something along the lines of:
export PATH=.:/usr/bin
export PS1=Urk:
export PS2=MoreUrk:
/home/user/pax/scriptB.php
Then run that script from cron rather than /home/user/pax/scriptB.php directly. That will ensure the environment is set up before your PHP code is called.
Astute readers will have noticed the phrase "if you wanted all those variables set" above. I don't personally think it's a good idea to dump all your command line variables into the shell script for the cron job. I'd prefer to actually find out which ones are needed and only include those. That lessens the pollution your cron job has to run under. For example, it's unlikely that the PS1/PS2 prompt variables will be required for your PHP script.
If it works, you can set all the environment variables - I just prefer the absolute minimum so I don't have to worry too much when things change.
A way of finding out what's needed is to comment out one export at a time until your script breaks again. Then you know that variable is needed. Once it works with the maximum amount of export statements commented out, you can just delete those commented export statements altogether and what remains, however improbable, must be okay (with apologies to Sir Arthur Conan Doyle).