I'm trying to set up a cron job to update all of our clients. They each have their own db and directory in our web root. An individual call uses this script:
<?php
include_once 'BASEPATH'.$_REQUEST['client'].'PATHTOPHPLIB';
//Call some functions here
//backup db
$filename = 'db_backup_' . date('G_a_m_d_y') . '.sql';
$result = exec('mysqldump ' . Config::read('db.basename') . ' --password=' . Config::read('db.password') . ' --user=' . Config::read('db.user') . ' --single-transaction >BACKUPDIRECTORYHERE' . $filename, $output);
if ($output == '') {
    /* no output is good */
} else {
    logit('Could not backup db');
    logit($output);
}
?>
I need to call this same script multiple times, each with a unique include based on a client variable being passed in. We originally had a separate cron job for each client, but this is no longer a possibility. What is the best way to call this script? I'm looking at creating a new PHP script that holds an array of our clients and loops through it, running this script for each one, but I can't just include it because the libraries have overlapping functions. I'm not considering cURL because these scripts are not in the web root.
First off, a quick advert for the Symfony Console component. There are others, but I've been using Symfony for a while and gravitate towards it. Hopefully you are PSR-0/Composer-able in your project. Even if you aren't, this could give you an excuse to do something self-contained.
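For reference, here is a minimal sketch of what a console command could look like with that component (a sketch only: it assumes symfony/console is installed via Composer, and the class, command, and argument names are made up for illustration). The base script further down sticks with plain PHP.
<?php
// backup_console.php - illustrative Symfony Console sketch (not the base script below).
require __DIR__ . '/vendor/autoload.php';

use Symfony\Component\Console\Application;
use Symfony\Component\Console\Command\Command;
use Symfony\Component\Console\Input\InputArgument;
use Symfony\Component\Console\Input\InputInterface;
use Symfony\Component\Console\Output\OutputInterface;

class BackupCommand extends Command
{
    protected function configure()
    {
        $this->setName('client:backup')
             ->setDescription('Back up a single client database')
             ->addArgument('client', InputArgument::REQUIRED, 'The client directory name');
    }

    protected function execute(InputInterface $input, OutputInterface $output): int
    {
        $client = $input->getArgument('client');
        // include the client's library and run the backup here
        $output->writeln('Backing up client: ' . $client);

        return 0;
    }
}

$application = new Application();
$application->add(new BackupCommand());
$application->run();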
You absolutely don't want these sorts of scripts under the webroot. There is no value in having them run through Apache, and the memory and runtime limits imposed there are different from those in a command-line PHP context.
Base script:
<?php
if (PHP_SAPI != "cli") {
echo "Error: This should only be run from the command line environment!";
exit;
}
// Script name is always passed, so $argc with 1 arg == 2
if ($argc !== 2) {
echo "Usage: $argv[0] {client}\n";
exit;
}
// Setup your constants?
DEFINE('BASEPATH', '....');
DEFINE('PATHTOPHPLIB', '...');
require_once 'BASEPATH' . $argv[1] . 'PATHTOPHPLIB';
//Call some functions here
//backup db
$filename = 'db_backup_' . date('G_a_m_d_y') . '.sql';
$result = exec('mysqldump ' . Config::read('db.basename') . ' --password=' . Config::read('db.password') . ' --user=' . Config::read('db.user') . ' --single-transaction >BACKUPDIRECTORYHERE' . $filename, $output);
if (empty($output)) {
    /* no output is good; exec() fills $output with an array of output lines */
} else {
    logit('Could not backup db');
    logit($output);
}
Calling script (run from cron):
<?php
// Bootstrap your master DB
// Query the list of clients
define('BASE_SCRIPT', 'fullpath_to_base_script_here');

foreach ($clients as $client) {
    exec('/path/to/php ' . BASE_SCRIPT . ' ' . escapeshellarg($client));
}
If you want to keep things decoupled inside the caller script, you could pass the path to the backup-processing script rather than hard-wiring it; if so, use the same $argc/$argv technique to read the parameter.
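For instance, a variant of the calling script along those lines might look like this (a sketch only; $clients is still assumed to come from your master DB query as above):
<?php
if ($argc !== 2) {
    echo "Usage: $argv[0] {path_to_base_script}\n";
    exit;
}
$baseScript = $argv[1];

// Bootstrap your master DB and query the list of clients as before...
foreach ($clients as $client) {
    exec('/path/to/php ' . escapeshellarg($baseScript) . ' ' . escapeshellarg($client));
}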
Related
I want a PHP CLI* script to run, do a task, sleep for two seconds, and then run again. Currently, this looks like this:
#!/usr/bin/env php
<?php
require __DIR__ . '/config/app.php';
$w = new Worker;
if ($w->running) {
    exit;
} elseif ($job = $w->next()) {
    $w->run($job);
    sleep(2);
    exec(__FILE__);
    exit;
} else {
    exit;
}
However, it occurs to me that the new run starts before the old run completes. I am mostly a web developer, so am unfamiliar with this level (I’m at home at a somewhat higher level of abstraction), but I think this becomes what’s known as a fork bomb. How can I do this safely?
I’ve read the PHP manual for pcntl_exec(), but I’m not confident that I’m understanding it correctly.
* It’s done as PHP so most of the actual functionality can be in a library which can also be called from a web interface.
You could simply put a loop around your worker and execute it while it still has jobs to do.
#!/usr/bin/env php
<?php
require __DIR__ . '/config/app.php';
$w = new Worker;
if ($w->running) {
    exit;
}

while ($job = $w->next()) {
    $w->run($job);
    sleep(2); // Not sure if you really need this?
}
I have a PHP script which is typically run as part of a bigger web application.
The script essentially makes some changes to a database and reports back to the web user on the status/outcome.
I have an opening section in my PHP:
require $_SERVER['DOCUMENT_ROOT'].'/security.php';
// Only level <=1 users should be able to access this page:
if ( $_SESSION['MySecurityLevel'] > 1 ) {
    echo '<script type="text/javascript" language="JavaScript">window.location = \'/index.php\'</script>';
    exit();
}
So, basically, if the authenticated web user's security level is not higher than 1, then they are just redirected to the web app's index.
The script works fine like this via web browsers.
Now to my issue...
I want to also cron-job this script - but I don't know how to bypass the security check if ran from the CLI.
If I simply run it from the CLI/cron with 'php -f /path/to/report.php' and wrap the security check in an "if ( php_sapi_name() != 'cli' )", it spews out errors due to the multiple $_SERVER[] vars used in the script (there may be other complications, but this was the first error encountered).
If I run it using CURL, then the php_sapi_name() check won't work as it's just being served by Apache.
Please can anyone offer some assistance?
Thank you! :)
If you invoke the script through the CLI, some of the $_SERVER variables will be defined, but their values may not be what you expect: for instance, $_SERVER['DOCUMENT_ROOT'] will be empty, so your require will look for a file called 'security.php' in the filesystem root. Other arrays such as $_SESSION will not be populated, as the CLI has no comparable concept.
You could get around these issues by manually defining the variables (see "Set $_SERVER variable when calling PHP from command line?"), but a cleaner approach is to extract the code that makes the database changes into a separate file that does not depend on any SAPI-specific variables being defined.
For instance your PHP script (let's call it index.php) could be modified like this:
require $_SERVER['DOCUMENT_ROOT'].'/security.php';
require $_SERVER['DOCUMENT_ROOT'].'/db_changes.php';

// Only level <=1 users should be able to access this page:
if ( $_SESSION['MySecurityLevel'] > 1 ) {
    echo '<script type="text/javascript" language="JavaScript">window.location = \'/index.php\'</script>';
    exit();
} else {
    do_db_changes();
}
Then in the SAPI-agnostic db_changes.php you would have:
<?php
function do_db_changes() {
    // Do the DB changes here...
}
?>
And finally you would have a file, outside the web root, which you can invoke from cron (say cron.php):
<?php
require("/absolute/path/to/db_changes.php");
do_db_changes();
?>
This way you can continue using index.php for the web application and invoke cron.php from cron to achieve your desired results.
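For completeness, the cron entry itself might look something like this (the paths and schedule are illustrative, and output is redirected to a log file):
0 2 * * * /usr/bin/php /absolute/path/to/cron.php >> /var/log/cron_php.log 2>&1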
I have the following question: how can I run a php script only once? Before people start to reply that this is indeed a similar or duplicate question, please continue reading...
The situation is as follows, I'm currently writing my own MVC Framework and I've come up with a module based system so I can easily add new functionality to my framework. In order to do so, I created a /ROOT/modules directory in which one could add the new modules.
So as you can imagine, the script needs to read the directory, read all the PHP files, parse them, and only then can it execute the new functionality. However, it has to do this for every browser request, which makes this task about O(nAmountOfRequests * nAmountOfModules), which is rather big on websites with a large number of user requests every second.
Then I figured: what if I introduce a session variable like $_SESSION['modulesLoaded'] and simply check whether it's set? This would reduce the load to O(nUniqueAmountOfRequests * nAmountOfModules), but this is still a large Big O if the only thing I want to do is read the directory once.
What I have now is the following:
/** Load the modules */
require_once(ROOT . DIRECTORY_SEPARATOR . 'modules' . DIRECTORY_SEPARATOR . 'module_bootloader.php');
Which exists of the following code:
<?php
//TODO: Make sure that the foreach only executes once for all the requests instead of every request.
if (!array_key_exists('modulesLoaded', $_SESSION)) {
    foreach (glob('*.php') as $module) {
        require_once($module);
    }
    $_SESSION['modulesLoaded'] = '1';
}
So now the question: is there a solution, like a superglobal variable, that I can access and that exists across all requests, so that instead of the previous Big Os I can get a Big O that consists only of nAmountOfModules? Or is there another way of easily reading the module files only once?
Something like:
if (isFirstRequest) {
    foreach (glob('*.php') as $module) {
        require_once($module);
    }
}
In its most basic form, if you want to run it once and only once (per installation, not per user), have your intensive script change something in the server state (add a file, change a file, change a record in a database), then check against that every time a request to run it is issued.
If you find a match, it would mean the script was already run, and you can continue with the process without having to run it again.
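A rough sketch of the database-record variant (the table and column names are made up for illustration):
<?php
// Check a one-row state table before doing the expensive work (illustrative schema).
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

$done = $pdo->query("SELECT value FROM app_state WHERE name = 'modules_scanned'")->fetchColumn();
if ($done) {
    return; // the intensive part already ran on this installation
}

// ... run the intensive part (scan and register the modules) here ...

$pdo->exec("INSERT INTO app_state (name, value) VALUES ('modules_scanned', 1)");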
When the script is called, create a lock file; at the end of the script, delete it. That way it only runs once at a time, and since the file is no longer needed, it vanishes into nirvana.
This naturally works the other way round, too:
<?php
$checkfile = __DIR__ . '/.checkfile';
clearstatcache(false, $checkfile);
if (is_file($checkfile)) {
    return; // script did run already
}
touch($checkfile);
// run the rest of your script.
Just cache the array to a file and, when you upload new modules, delete the file. The script will then rebuild the cache on the next request and you're all set again.
// Path to the cache file (example path)
$cache = __DIR__ . '/modules.cache';

// If the $cache file does not exist or unserialize fails, rebuild it and save it
if (!is_file($cache) or (($cached = unserialize(file_get_contents($cache))) === false)) {
    // rebuild your array here into $cached
    $cached = call_user_func(function () {
        // rebuild your array here and return it
    });

    // store the $cached data into the $cache file (serialized, since it is an array)
    file_put_contents($cache, serialize($cached), LOCK_EX);
}
// Now you have $cached file that holds your $cached data
// Keep using the $cached variable now as it should hold your data
This should do it.
PS: I'm currently rewriting my own framework and do the same thing to store such data. You could also use an SQLite DB to store all the data your framework needs, but make sure to test performance and see if it fits your needs. With proper indexes, SQLite is fast.
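As a sketch of that SQLite alternative (the file name, table, and cache key are made up for illustration):
<?php
// Open (or create) a small SQLite cache database via PDO (illustrative names).
$db = new PDO('sqlite:' . __DIR__ . '/framework_cache.sqlite');
$db->exec('CREATE TABLE IF NOT EXISTS cache (name TEXT PRIMARY KEY, value BLOB)');

$stmt = $db->prepare('SELECT value FROM cache WHERE name = ?');
$stmt->execute(array('modules'));
$value = $stmt->fetchColumn();

if ($value === false) {
    // Cache miss: rebuild the module list and store it.
    $modules = glob(__DIR__ . '/modules/*.php');
    $ins = $db->prepare('INSERT INTO cache (name, value) VALUES (?, ?)');
    $ins->execute(array('modules', serialize($modules)));
} else {
    $modules = unserialize($value);
}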
I have a script that inserts into the database, e.g., 20,000 users with email addresses, in batches of 1000
(so two tables, emailParent and emailChild); there are 1000 rows in emailChild for every row in emailParent.
I want to run a script that sends these emails which basically says
//check_for_pending_parent_rows() returns the id of the first pending row found, or 0
while($parentId = check_for_pending_parent_row()){//loop over children of parent row}
Now because this is talking to the sendgrid servers this can take some time.
So I want to be able to hit a page and have that page launch a background process which sends the emails to sendgrid.
I thought I could use exec(), but then I realized I am using CodeIgniter, which means the entry point MUST be index.php; hence, I don't think exec() will work.
How can I launch a background process that uses CodeIgniter?
This is not really an answer, just something that is too long to post as a comment.
@Frank Farmer: 70 lines seems a bit excessive; this example from a simple test does it in pretty much half that. What is the difference?
<?php
//---------------------------
//define required constants
//---------------------------
define('ROOT', dirname(__FILE__) . '/');
define('APPLICATION', ROOT . 'application/');
define('APPINDEX', ROOT . 'index.php');
//---------------------------
//check if required paths are valid
//---------------------------
$global_array = array(
    "ROOT" => ROOT,
    "APPLICATION" => APPLICATION,
    "APPINDEX" => APPINDEX);

foreach ($global_array as $global_name => $dir_check):
    if (!file_exists($dir_check)) {
        echo "Cannot Find " . $global_name . " File / Directory: " . $dir_check;
        exit;
    }
endforeach;
//---------------------------
//load in code igniter
//---------------------------
//Capture CodeIgniter output, discard and load system into $ci variable
ob_start();
include (APPINDEX);
$ci = &get_instance();
ob_end_clean();
//do stuff here
Use exec() to run a vanilla CLI PHP script that calls the page via cURL.
See http://php.net/manual/en/book.curl.php for info on cURL
This is what I have had to do with some of my CodeIgniter applications.
(Also make sure you set the timeout to 0.)
Done this way, you are still able to debug it in the browser.
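A rough sketch of that approach (the URL, file name, and controller route are illustrative):
<?php
// background_sender.php - a plain CLI script that calls the CodeIgniter page via cURL.
$ch = curl_init('http://localhost/index.php/email/send_pending');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_TIMEOUT, 0); // no cURL time limit
curl_exec($ch);
curl_close($ch);
From the web application, it would then be launched in the background with something like:
exec('php /path/to/background_sender.php > /dev/null 2>&1 &');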
Petah suggested cURL, but since 2.0 CodeIgniter permits calls to your controllers through the CLI:
This should be easier than cURL.
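For example, a controller method could be exposed for CLI use roughly like this (the controller, method, and paths are illustrative):
<?php
// application/controllers/email.php (illustrative)
class Email extends CI_Controller {

    public function send_pending()
    {
        // Refuse to run this method through the web server.
        if (!$this->input->is_cli_request()) {
            show_error('This can only be run from the command line.');
        }
        // ... loop over the pending parent rows and send via SendGrid here ...
    }
}
It would then be invoked from cron as: php /path/to/index.php email send_pending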
I am converting a PDF with PDF2SWF and indexing it with XPDF, via exec(); the only problem is that this requires a very high execution time.
Is it possible to run it as a background process and then launch a script when it is done converting?
In general, PHP does not implement threads.
But there is a Zend Framework class which may be suitable for you:
http://framework.zend.com/manual/en/zendx.console.process.unix.overview.html
ZendX_Console_Process_Unix allows developers to spawn an object as a new process, and so do multiple tasks in parallel on console environments. Through its specific nature, it only works on *nix based systems like Linux, Solaris, Mac/OSX and such. Additionally, the shmop_*, pcntl_* and posix_* modules are required for this component to run. If one of the requirements is not met, it will throw an exception after instantiating the component.
A suitable example:
class MyProcess extends ZendX_Console_Process_Unix
{
    protected function _run()
    {
        // doing pdf and flash stuff
    }
}

$process1 = new MyProcess();
$process1->start();

while ($process1->isRunning()) {
    sleep(1);
}

echo 'Process completed';
Try using popen() instead of exec().
This hack will work on any standard PHP installation, even on Windows; no additional libraries are required. You can't really control all aspects of the processes you spawn this way, but sometimes this is enough:
$p1 = popen("/bin/bash ./some_shell_script.sh argument_1", "r");
$p2 = popen("/bin/bash ./some_other_shell_script.sh argument_2", "r");
$p3 = popen("/bin/bash ./yet_other_shell_script.sh argument_3", "r");
The three spawned shell scripts will run simultaneously, and as long as you don't do a pclose($p1) (or $p2 or $p3) or try to read from any of these pipes, they will not block your PHP execution.
When you're done with your other stuff (the one that you are doing with your PHP script) you can call pclose() on the pipes, and that will pause your script execution until the process you are pclosing finishes. Then your script can do something else.
Note that your PHP will not conclude or die() until those scripts have finished. Reaching the end of the script or calling die() will make it wait.
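To make that concrete, a small sketch of the pattern (same hypothetical scripts as above; reading the pipes is optional and is shown only to capture the scripts' output):
<?php
$p1 = popen("/bin/bash ./some_shell_script.sh argument_1", "r");
$p2 = popen("/bin/bash ./some_other_shell_script.sh argument_2", "r");

// ... do the rest of your PHP work here while both scripts run ...

$output1 = stream_get_contents($p1); // blocks until script 1 stops writing
$output2 = stream_get_contents($p2);

pclose($p1); // waits for script 1 to finish, if it has not already
pclose($p2);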
If you are running it from the command line, you can fork a PHP process using pcntl_fork().
There are also daemon classes that would do the same trick:
http://pear.php.net/package/System_Daemon
$pid = pcntl_fork();
if ($pid == -1) {
    die('could not fork');
} else if ($pid) {
    // We are the parent, exit
    exit();
} else {
    // We are the child, do something interesting then call the script at the end.
}