Batch File / Cron Job - php

I have 80 PHP scripts that I want to run. I want to start Script1, wait until it finishes, then start Script2, and so on.
Now, if I add the following:
start C:\xampp\php\php.exe -f C:\xampp\Script1.php
start C:\xampp\php\php.exe -f C:\xampp\Script2.php
start C:\xampp\php\php.exe -f C:\xampp\Script3.php
When I execute my batch file, it runs all the scripts at the same time.
Is there any command I could use in my .bat file to tell the system to wait until one script is done before executing the next?
Any way to set a time interval between scripts?
I could run all 80 scripts from the crontab. However, I'd rather configure one batch file to handle all the scripts and then assign that file to the crontab.
Thanks

start fires off the tasks asynchronously, thus they will all run at the same time. Use the /wait flag to start the task and wait for it to complete.
e.g.
start /wait C:\xampp\php\php.exe -f C:\xampp\Script1.php
start /wait C:\xampp\php\php.exe -f C:\xampp\Script2.php
start /wait C:\xampp\php\php.exe -f C:\xampp\Script3.php
Alternatively, as Marc B states, removing the start call should give you sequential execution as well.

As Joseph Sibler suggests, you could craft a single PHP file that calls them sequentially. You could even generate that PHP file using something like xargs (available on Windows with mingw32, Cygwin, and probably other sources).
Something like this ought to do it:
del single.php
dir /b Script*.php | xargs cat >> single.php
start c:\xampp\php\php.exe -f single.php

Related

Tailing named pipe in Docker to write to stdout

My Dockerfile:
FROM php:7.0-fpm
# Install dependencies, etc
RUN mkfifo /tmp/stdout \
    && chmod 777 /tmp/stdout
ADD docker-entrypoint.sh /usr/local/bin/docker-entrypoint.sh
ENTRYPOINT ["/usr/local/bin/docker-entrypoint.sh"]
As you can see, I'm creating a named pipe at /tmp/stdout. And my docker-entrypoint.sh:
#!/usr/bin/env bash
# Some run-time configuration stuff...
exec "./my-app" "$#" | tail -f /tmp/stdout
My PHP application (an executable named my-app) writes its application logs to /tmp/stdout. I want those logs to then be captured by Docker so that I can do docker logs <container_id> and see the logs that the application wrote to /tmp/stdout. I am attempting to do this by running the my-app command and then tailing /tmp/stdout, which will then output the logs to stdout.
What I'm seeing happen is that when I run my application, it hangs when it writes the first log message. I believe this happens because there is nothing "reading" from the named pipe, and writing to a named pipe blocks until something reads from it. This is confirmed if I do docker exec -it <container_id> bash, and then do tail -f /tmp/stdout myself inside the container. Once I do that, the container immediately exits because the application has written its logs to the named pipe.
For reasons that I won't bloat this post with, it's not possible for my-app itself to write logs to stdout. It has to use /tmp/stdout.
Can anybody tell me why this isn't working, and what I need to change? I expect I have to change the exec call in docker-entrypoint.sh, but I'm not sure how. Thank you!
What I'm seeing happen is that when I run my application, it hangs when it writes the first log message. I believe this happens because there is nothing "reading" from the named pipe, and writing to a named pipe blocks until something reads from it.
This is correct, see fifo(7). But with your example code
exec "./my-app" "$#" | tail -f /tmp/stdout
this should actually work since the pipe will start ./my-app and tail simultaneously so that there is something reading from /tmp/stdout.
But one problem here is that tail -f will never terminate by itself, and so neither will your docker-entrypoint.sh/container. You could fix this with:
tail --pid=$$ -f /tmp/stdout &
exec "./my-app" "$#"
tail --pid will terminate as soon as the process with the given PID terminates, where $$ is the PID of the bash process (and, through exec, later the PID of ./my-app).
For reasons that I won't bloat this post with, it's not possible for my-app itself to write logs to stdout. It has to use /tmp/stdout.
Does this mean it has to write to a filesystem path or is the path /tmp/stdout hardcoded?
If you can use any path you can use /dev/stdout / /proc/self/fd/1 / /proc/$$/fd/1 as logging path to let your application write to stdout.
If /tmp/stdout is hardcoded try symlinking it to /dev/stdout:
ln -s /dev/stdout /tmp/stdout
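If the fifo approach is dropped entirely, the symlink can also be created in the entrypoint just before the application starts. A minimal sketch, assuming nothing else still expects /tmp/stdout to be a fifo:
#!/usr/bin/env bash
# replace the fifo (if any) with a symlink so writes to /tmp/stdout land on the container's stdout
rm -f /tmp/stdout
ln -sf /dev/stdout /tmp/stdout
exec "./my-app" "$@"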

How to run a PHP script file in the background forever

My issue seems to have been asked before, but hold on, this one is a bit different.
I have 2 php files, I run the following commands:
nohup php file_1.php >/dev/null 2>&1 &
nohup php file_2.php >/dev/null 2>&1 &
This basically creates 2 PHP processes.
My problem is that sometimes one of the files, or even both of them, is killed by the server for an unknown reason and I have to re-enter the commands again. I tried 'forever' but it doesn't help.
If the server is rebooted I will have to enter those 2 commands again. I thought about a cron job, but I'm not sure whether it would launch the scripts twice, which would create more confusion.
My question is: how do I automatically restart the files if one or both of them get killed? What is the best way to achieve this, checking specifically that file_1.php or file_2.php is indeed running?
There are a couple of ways you can do this. As @Chris Haas mentioned in the comments, supervisord can do this for you, or you can run a watchdog shell script from cron at regular intervals that checks to see if your script is running and, if not, starts it. Here's one I use.
#!/bin/bash
FIND_PROC=`ps -ef |grep "php queue_agent.php --env prod" | awk '{if ($8 !~ /grep/) print $2}'`
# if FIND_PROC is empty, the process has died; restart it
if [ -z "${FIND_PROC}" ]; then
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
echo queue_agent.php failed at `date`
cd ${DIR}
nohup nice -n 10 php queue_agent.php --env prod -v > ../../sandbox/logs/queue_agent.log &
fi
exit 0
I think the first step is to figure out why these two PHP scripts shut down. To help with that, you can use this PHP function:
void register_shutdown_function(callable $callback [, mixed $parameter]);
which registers a callback to be executed after script execution finishes or exit() is called. So you can log some info when the PHP files shut down, like this:
function myLogFunction() {
//log some info here
}
register_shutdown_function('myLogFunction');
Instead of putting the standard output and error output into /dev/null, you can put them into a log file (maybe we can get some helpful info from the output). So instead of using:
nohup php file_1.php >/dev/null 2>&1 &
nohup php file_2.php >/dev/null 2>&1 &
try:
nohup php file_1.php >>yourLog.log 2>&1 &
nohup php file_2.php >>yourLog.log 2>&1 &
If you want to auto-run these two PHP files when the server boots, try editing /etc/rc.local (which runs automatically when the OS starts up). Add your PHP CLI command lines to this file.
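A minimal sketch of what that file could look like (the paths are placeholders for wherever the scripts actually live):
#!/bin/sh
# start both workers in the background at boot; adjust paths and logging as needed
nohup php /path/to/file_1.php >>yourLog.log 2>&1 &
nohup php /path/to/file_2.php >>yourLog.log 2>&1 &
exit 0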
If you can't figure out why the PHP processes get shut down, try supervisord as @Chris Haas mentioned.

Linux: kill whole pipe on exit of script

tail -n 1 -f /tmp/remoteinput | php ./myscript.php conf.conf
I run the above command to have myscript accept redirected input. This bit is working.
The issue I am facing is that when myscript.php finishes execution it exits; however, the pipe is still left open. I think tail is still lingering.
What I want to achieve is for the whole pipe to be killed when myscript.php exits.
Use tail without the -f parameter. -f makes tail keep listening, so the command never dies and the pipe is never closed.
-f, --follow[={name|descriptor}]
output appended data as the file grows; -f, --follow, and --follow=descriptor are equivalent
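For example, the original pipeline with the -f flag removed; tail prints the last line and exits, so the pipeline ends as soon as myscript.php does:
tail -n 1 /tmp/remoteinput | php ./myscript.php conf.conf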

Parallelizing PHP processes with a Bash Script?

I want to launch ~10 php processes from a bash script. When one of them finishes, I'd like the bash script to launch another php process, and continue on indefinitely, always having ~10 php processes running.
What is the simplest way to do this?
The PHP file launched will be the same every time, but the PHP process will know to pull new values from the database, so it's processing new data each time. The file I need to launch and all its classes are already written in PHP.
Seems like a good fit for supervisord. The following configuration will make sure that 10 processes are always running, and deals with log rotation, which is also handy. All output, including stderr, will be written to /var/log/worker.log. With "autorestart=true", supervisord will replace a child process as soon as it exits.
[program:worker]
command=php /path/to/worker.php
process_name=%(program_name)s_%(process_num)d
stdout_logfile=/var/log/%(program_name)s.log
redirect_stderr=true
stdout_capture_maxbytes=512MB
stdout_logfile_backups=3
numprocs=10
numprocs_start=0
autostart=true
autorestart=true
Once you have the supervisor config in place (usually /etc/supervisord/conf.d), you can use supervisorctl as a convenient way to start and stop the process group.
$ supervisorctl start worker
...
$ supervisorctl stop worker
...
$ supervisorctl status
worker:worker_0 RUNNING pid 8985, uptime 0:09:24
worker:worker_1 RUNNING pid 10157, uptime 0:08:52
...
worker:worker_9 RUNNING pid 12459, uptime 0:08:31
You could use GNU Parallel, piping the list of jobs to manage into parallel.
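A minimal sketch of that idea, assuming worker.php pulls its own work from the database (the path and the total job count of 1000 are placeholders; the job number appended as an argument is simply ignored by the worker):
# run 1000 worker invocations, keeping at most 10 php processes running at once
seq 1 1000 | parallel -j 10 php /path/to/worker.php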
You could use something like this. Use one file to launch all 10 (only run this once), and the bottom of each file can relaunch itself when it finishes.
/**
* Asynchronously execute/include a PHP file. Does not record the output of the file anywhere.
*
 * @param string $filename file to execute, relative to calling script (or root?)
 * @param string $options (optional) arguments to pass to file via the command line
*/
function asyncInclude($filename, $options = '') {
exec("/path/to/php -f {$filename} {$options} >> /dev/null &");
}
jcomeau@intrepid:/tmp$ cat test.sh
#!/bin/sh
set -m # monitor mode
task="php-cgi /tmp/sleep.php"
function do_task {
$task >/dev/null &
echo -n spawned $! ' ' >&2
}
trap do_task SIGCHLD
for i in $(seq 1 10); do
do_task
done
while true; do
wait
done
jcomeau@intrepid:/tmp$ cat sleep.php
<?php sleep(3); ?>

How to check if a php script is still running

I have a PHP script that listens on a queue. Theoretically, it's never supposed to die. Is there something to check if it's still running? Something like Ruby's God ( http://god.rubyforge.org/ ) for PHP?
God is language agnostic but it would be nice to have a solution that works on windows as well.
I had the same issue - wanting to check if a script is running. So I came up with this and I run it as a cron job. It grabs the running processes as an array and cycles through each line, checking for the file name. It seems to work fine. Replace #user# with your script's user.
exec("ps -U #user# -u #user# u", $output, $result);
foreach ($output as $line) if (strpos($line, "test.php") !== false) echo "found";
In linux run ps as follows:
ps -C php -f
You could then do in a php script:
$output = shell_exec('ps -C php -f');
if (strpos($output, "php my_script.php")===false) {
shell_exec('php my_script.php > /dev/null 2>&1 &');
}
The above code lists all running PHP processes in full, then checks whether "my_script.php" is in the list of running processes; if not, it starts the process without waiting for it to terminate, and carries on with what it was doing.
Just append a second command after the script. When/if it stops, the second command is invoked. Eg.:
php daemon.php 2>&1 | mail -s "Daemon stopped" you@example.org
Edit:
Technically, this invokes the mailer right away, but only completes the command when the php script ends. Doing this captures the output of the php-script and includes in the mail body, which can be useful for debugging what caused the script to halt.
Simple bash script
#!/bin/bash
while true; do
    if ! pidof -x script.php > /dev/null; then
        php script.php &
    fi
    sleep 1   # brief pause so the loop does not spin at 100% CPU
done
Not for windows, but...
I've got a couple of long-running PHP scripts, each wrapped by a shell script. You can optionally return a value from the PHP script that the shell script checks in order to exit, restart immediately, or sleep for a few seconds and then restart.
Here's a simple one that just keeps running the PHP script till it's manually stopped.
#!/bin/bash
clear
date
php -f cli-SCRIPT.php
echo "wait a little while ..."; sleep 10
exec $0
The "exec $0" restarts the script, without creating a sub-process that will have to unravel later (and take up resources in the meantime). This bash script wraps a mail-sender, so it's not a problem if it exits and pauses for a moment.
Here is what I did to combat a similar issue. This helps in the event anyone else has a parameterized php script that you want cron to execute frequently, but only want one execution to run at any time. Add this to the top of your php script, or create a common method.
$runningScripts = shell_exec('ps -ef |grep '.strtolower($parameter).' |grep '.dirname(__FILE__).' |grep '.basename(__FILE__).' |grep -v grep |wc -l');
if($runningScripts > 1){
die();
}
You can write in your crontab something like this:
0 3 * * * /usr/bin/php -f /home/test/test.php my_special_cron
Your test.php file should look like this:
<?php
php_sapi_name() == 'cli' || exit;
if (isset($argv[1])) {
substr_count(shell_exec('ps -ax'), $argv[1]) < 3 || exit;
}
// your code here
That way you will have only one active instance of the cron job with my_special_cron as the process key. So you can add more jobs within the same PHP file, for example:
test.php system_send_emails sendEmails
test.php system_create_orders orderExport
Inspired by Justin Levene's answer and improved upon it, since ps -C doesn't work on a Mac, which I needed in my case. You can use this in a PHP script (maybe just before you need the daemon alive); tested on both Mac OS X 10.11.4 and Ubuntu 14.04:
$daemonPath = "FULL_PATH_TO_DAEMON";
$runningPhpProcessesOfDaemon = (int) shell_exec("ps aux | grep -c '[p]hp ".$daemonPath."'");
if ($runningPhpProcessesOfDaemon === 0) {
shell_exec('php ' . $daemonPath . ' > /dev/null 2>&1 &');
}
Small but useful detail: why grep -c '[p]hp ...' instead of grep -c 'php ...'?
Because the grep command line itself contains the search pattern, grep -c 'php ...' would count the grep process as a match. Writing the first letter as the character class [p] means the grep command line no longer matches the pattern it is searching for.
One possible solution is to have it listen on a port using the socket functions. You can check that the socket is still listening with a simple script. Even a monitoring service like pingdom could monitor its status. If it dies, the socket is no longer listening.
Plenty of solutions. Good luck.
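A rough sketch of such a check, assuming the listener binds TCP port 9000 on localhost and netcat is available (the port, host and alert command are all placeholders):
#!/bin/bash
# alert if nothing is accepting connections on the listener's port
if ! nc -z localhost 9000; then
    echo "queue listener is not accepting connections" | mail -s "queue listener down" you@example.org
fi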
If you have your hands on the script, you can just have it write a timestamp to the database every so often, and then let a cron job check whether that value is recent.
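A rough sketch of the cron side of that idea, assuming the script updates a heartbeat table, the mysql client is available, and credentials come from ~/.my.cnf (the table, database and five-minute threshold are all placeholders):
#!/bin/bash
# consider the worker dead if its heartbeat is older than 5 minutes
last=$(mysql -N -e "SELECT UNIX_TIMESTAMP(last_seen) FROM heartbeat WHERE worker='file_1'" mydb)
now=$(date +%s)
if [ $(( now - last )) -gt 300 ]; then
    echo "file_1.php heartbeat is stale" | mail -s "worker down" you@example.org
fi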
troelskn wrote:
Just append a second command after the script. When/if it stops, the second command is invoked. Eg.:
php daemon.php | mail -s "Daemon stopped" you@example.org
This will call mail each time a line is printed in daemon.php (which should be never, but still.)
Instead, use the double ampersand operator to separate the commands, i.e.
php daemon.php && mail -s "Daemon stopped" you@example.org
If you're having trouble checking for the PHP script directly, you can make a trivial wrapper and check for that. I'm not sufficiently familiar with Windows scripting to show how it's done there, but in Bash it'd look like...
wrapper_for_test_php.sh
#!/bin/bash
php test.php
Then you'd just check for the wrapper like you'd check for any other bash script: pidof -x wrapper_for_test_php.sh
I have used cmder for Windows, and based on this script I came up with the one below, which I later managed to deploy on Linux.
#!/bin/bash
clear
date
while true
do
php -f processEmails.php
echo "wait a little while for 5 secobds...";
sleep 5
done
