On an Apache server I want to run a PHP script as a cron job. It should start another PHP file in the background and exit right after starting it, without waiting for the script to complete, as that script will take around 60 minutes to finish. How can this be done?
You should know that there are no threads in PHP.
But you can execute programs and detach them easily if you're running on a Unix/Linux system.
$command = "/usr/bin/php '/path/to/your/php/to/execute.php'";
exec("{$command} > /dev/null 2>&1 & echo -n \$!");
This may do the job. Let's explain it a bit:
exec($command);
Executes /usr/bin/php '/path/to/your/php/to/execute.php': your script is launched, but Apache will wait for the execution to end before running the next line of code.
> /dev/null
will redirect standard output (i.e. your echo, print, etc.) to a virtual file (all output written to it is lost).
2>&1
will redirect error output to standard output, writing it to the same virtual, non-existent file. This avoids having logs end up in your apache2/error.log, for example.
&
is the most important thing in your case: it will detach the execution of $command, so exec() will immediately release your PHP code to continue.
echo -n \$!
will give the PID of your detached execution as the response: it will be returned by exec(), letting you work with the process (for instance, store the PID in a database and kill the process after some time to avoid zombies).
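Putting it all together, a minimal sketch (the script path and PID file location are hypothetical):

$command = "/usr/bin/php '/path/to/your/php/to/execute.php'";
// exec() returns the last line of output, here the PID printed by `echo -n $!`
$pid = exec("{$command} > /dev/null 2>&1 & echo -n \$!");
// store the PID somewhere so a watchdog can kill the job if it runs too long
file_put_contents('/tmp/background_job.pid', $pid);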
You need to use the "&" symbol to run the program as a background process.
$ php -f file.php &
That will run the command in the background.
You may write a shell script:
#!/bin/bash
php -f file.php &
And run this script from crontab.
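For example, a crontab entry (the schedule and script path here are just illustrations) could be:

# launch the background job at the top of every hour
0 * * * * /bin/bash /path/to/script.sh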
This may not be the best solution to your specific problem. But for the record, there are threads in PHP.
https://github.com/krakjoe/pthreads
I'm assuming you know how to use threads. This is very young code that I wrote myself, but if you have experience with threads, mutexes and the like, you should be able to solve your problem using this extension.
This is clearly a shameless plug of my own project, and if the user doesn't have the access required to install extensions then it won't help him, but many people find Stack Overflow, and it will no doubt solve other problems...
Related
I have a PHP script that leads up to running an expect script by passing it arguments.
$output = shell_exec("expect login_script.tcl '$user' '$host' '$port' '$password'");
Using shell_exec doesn't work, as the script gets run in the background or 'within' the PHP script. I need it to run in the foreground, allowing user interactivity. Is there an elegant way to do this? It is already getting messy by having to use different scripting languages. I tried wrapping the two scripts with a shell script that called the PHP script, assigned its output to a variable (which was a command) and then ran sh on that. However, I had the same problem again: the scripts ran in the background, and any user interactivity caused a halt/freeze. It's OK in this situation if PHP 'quits' out when calling shell_exec, i.e. PHP stops and expect runs as if you had called it directly (the same as if I had just copied the output command and pasted it into the terminal).
Update
I am having much more luck with the following command in php:
shell_exec("gnome-terminal -e 'bash -c \"expect ~/commands/login_script.tcl; exec bash\"' &");
However, can this be improved so that the shell does not close immediately after the secondary script (login_script) finishes?
Further Update
From reading the answers, I think I need to clarify things, as people seem to be assuming a 'more complicated' issue.
The two processes do not need to communicate with each other. I should probably not have put the $output = shell_exec in the example, but just shell_exec on its own, as I believe this has led to the confusion.
The PHP script only needs to initiate the expect script with some CLI parameters, e.g. my-script 'param1' 'param2', and can be thought of as completely 'asynchronous'. This is much like the behaviour of launcher programs like 'launchy' or 'synapse': they can launch other programs but need not affect them, nor do they wait for the secondary program to quit/finish.
I made the mistake of saying shell_exec 'doesn't work' for me. What I should have said was that I have so far not succeeded with it: shell_exec("gnome-terminal -e 'bash -c \"expect ~/commands/login_script.tcl; exec bash\"' &"); is 'working', but I am still trying to find the right quote combination to allow passing arguments to the expect script.
Task management is an interesting but difficult job.
Because your user can navigate away during a task (which can lead to an unexpected result, such as a frozen session or incomplete work by the process), you need to execute it in the background. If you need interaction between your user and your process, you'll need to create a way for them to communicate.
The easiest way (I think) is to use a file shared between your user session and the task; a minimal sketch follows the checklist below.
If you have a lot of simultaneous users and a lot of communication between users and processes, you can mount a partition in memory to optimize the read/write operations.
In your fstab, add a line like:
tmpfs /memory tmpfs defaults,uid=www-data,gid=www-data,size=128M 0 0
Or, in a script, you could do :
#!/bin/sh
# Create a 64 MB ext2 filesystem on the /dev/ram1 RAM disk and mount it at /memory.
mkfs -t ext2 -q /dev/ram1 65536
[ ! -d /memory ] && mkdir -p /memory
mount /dev/ram1 /memory
chmod -R 777 /memory
You'll need to take care of a lot of things:
file access (to avoid concurrency issues between your web app and your processes)
time (to avoid zombies or useless long-running scripts)
security (such operations must be carefully designed)
resource management (to avoid 10,000 processes running simultaneously)
...
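As mentioned above, a minimal sketch of the shared-file approach (the file name and JSON format are hypothetical, and assume the /memory mount described earlier):

$statusFile = '/memory/task_42.json'; // one status file per task

// In the background task: periodically write progress with an exclusive lock,
// so the web app never reads a half-written file.
file_put_contents($statusFile, json_encode([
    'progress' => 0.75,   // fraction of the work done so far
    'updated'  => time(), // heartbeat, lets the web app detect a stalled task
]), LOCK_EX);

// In the web app: read back the latest status.
$status = json_decode((string) file_get_contents($statusFile), true);
echo 'Task is ' . ($status['progress'] * 100) . "% complete\n";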
I think what you're looking for is the proc_open() function. It gives you access to the stdin/stdout streams of the background process. You can pass your own stdin/stdout streams to the new process in the $descriptorSpec parameter, which will let your background process talk to the user.
Your 'foreground' application will have to wait around until the background process has died. I haven't actually done this with PHP, but I'm guessing you'll have to watch the $pipes to see when they get closed; then you'll know the background process is finished, and you can delete the process resource and continue with whatever the foreground process needs to do.
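A minimal sketch of that idea, run from the PHP CLI (the script path is hypothetical), passing our own streams through so expect can interact with the user:

// Hand our stdin/stdout/stderr to the child so it runs "in the foreground".
$descriptorSpec = [
    0 => STDIN,  // user input goes straight to expect
    1 => STDOUT, // expect's output goes straight to the terminal
    2 => STDERR,
];

$process = proc_open('expect /path/to/login_script.tcl', $descriptorSpec, $pipes);

if (is_resource($process)) {
    // Block until the child exits; proc_close() returns its exit code.
    $exitCode = proc_close($process);
    echo "expect exited with code $exitCode\n";
}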
In the end, I managed to get it working by adding a third quotation mark type: the backtick ` (grave accent). This allowed me to pass arguments to the next script from the first script.
The command I needed in my PHP script was:
$command = `gnome-terminal -e 'bash -c "expect ~/commands/login_script.tcl \"$user\" \"$host\" \"$port\" \"$password\"; exec bash"' &`;
(In PHP, backticks are the execution operator, equivalent to shell_exec(), so this line itself launches the command; a following shell_exec($command) call is redundant.)
It took a while to get all the quotes right, as swapping the types of quotes around can make it stop working.
Here is a video demonstrating the end result
Use:
pcntl_exec("command", array("parameter1", "parameter2"));
For example, I have a script that starts the mysql command using the parameters of the current PHP project. It looks like:
pcntl_exec("/usr/bin/mysql", array(
"--user=".$params['user'],
"--password=".$params['password'],
"--host=".$params['host'],
$params['dbname']
));
This doesn't rely on gnome-terminal or anything; it replaces the current PHP process with the program you call. (Note that the pcntl functions are typically available only in CLI builds of PHP, not under Apache.)
You do need to know the full path of the command, which is a pain because it can vary by platform, but you can use the env command, available at /usr/bin/env on most systems, to find the command for you. The example above becomes:
pcntl_exec("/usr/bin/env", array(
"mysql",
"--user=".$params['user'],
"--password=".$params['password'],
"--host=".$params['host'],
$params['dbname']
));
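Note that pcntl_exec() never returns on success, because the current process is replaced. If you want the rest of your PHP script to keep running, one approach (a sketch, assuming the pcntl extension is loaded, which is usually the case only on the CLI) is to fork first:

$pid = pcntl_fork();
if ($pid === -1) {
    die("fork failed\n");
} elseif ($pid === 0) {
    // Child: is replaced by the mysql client (the arguments are placeholders).
    pcntl_exec('/usr/bin/env', array('mysql', '--user=me', 'mydb'));
    exit(1); // only reached if pcntl_exec() itself failed
}
// Parent: keeps running; optionally wait for the child to finish.
pcntl_waitpid($pid, $status);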
I have this set in my PHP script to make it supposedly run for as long as it needs to, parsing and running MySQL queries and fetching images for over 100,000 rows.
ignore_user_abort(true);
set_time_limit(0);
#begin logging output
error_reporting(E_ALL);
ini_set('memory_limit', '512M');
I run the command like this in shell:
nohup php myscript.php > output.txt
After running for about 8 to 10 hours, the script will still be listed, but execution just stops... no more output. It's not a zombie process, I checked top. It hasn't hit the memory limit either, and if it had, wouldn't it exit?
What is going on? It's a real pain to babysit this script and write custom code to nudge it along. I read up on Unix cleaning up zombies, but it's not a zombie. I know it's not PHP settings, and it's not running through a webserver, it's from the command line only, so what gives?
It looks like you haven't detached your process correctly. Currently, if your process's parent dies, your process will die too. If you place your process in the background (create a real daemon), you won't run into such trouble.
You can execute your PHP script this way to really detach it:
php myscript.php > output.txt 2>&1 &
For your information :
> output.txt
will redirect standard output (i.e. your echo, print, etc.) to the output.txt file
2>&1
will redirect error output to standard output, writing it to the same output.txt file
&
is the most important thing in your case: it will detach your process and create a real daemon.
Edit: if you're having trouble when disconnecting from your shell, the simplest fix is to put your command in a bash script, for example run.sh:
#!/bin/bash
php myscript.php > output.txt 2>&1 &
And you'll run your script this way :
bash run.sh &
In such a case, your shell will "think" your program has ended at the end of the shell script, not at the end of the PHP daemon.
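Alternatively, assuming the setsid utility from util-linux is available, you can start the script in a new session so it no longer depends on your shell at all:

setsid php myscript.php > output.txt 2>&1 < /dev/null &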
Long-running PHP scripts shouldn't die or hang without reason. I've had scripts run continuously for 6+ months. There must be something else going on inside your script body.
I know I should use a comment to answer this, but I don't have enough reputation to do so...
Maybe your process is consuming 100% of the CPU; I had this issue with a while loop that never called sleep() or usleep() at the end of the loop.
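In other words, a polling loop should always yield the CPU when there is nothing to do. A sketch, where fetch_next_job() and process_job() are hypothetical functions:

while (true) {
    $job = fetch_next_job();  // hypothetical: returns null when the queue is empty
    if ($job === null) {
        usleep(200000);       // sleep 200 ms instead of spinning at 100% CPU
        continue;
    }
    process_job($job);        // hypothetical worker function
}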
For a website, I need to be able to start and stop a daemon process. What I am currently doing is
exec("sudo /etc/init.d/daemonToStart start");
The daemon process is started, but Apache/PHP hangs. Doing a ps aux revealed that sudo itself had turned into a zombie process, effectively killing all further progress. Is this normal behavior when trying to start a daemon from PHP?
And yes, Apache has the rights to execute the /etc/init.d/daemonToStart command. I altered the /etc/sudoers file to allow it. No, I have not allowed Apache to execute any kind of command, just a limited few to allow the website to work.
Anyway, going back to my question: is there a way to let PHP start daemons without creating a zombie process? I ask because the reverse, stopping an already started daemon, works just fine.
Try appending > /dev/null 2>&1 & to the command.
So this:
exec("sudo /etc/init.d/daemonToStart > /dev/null 2>&1 &");
Just in case you want to know what it does/why:
> /dev/null - redirect STDOUT to /dev/null (blackhole it, in other words)
2>&1 - redirect STDERR to STDOUT (blackhole it as well)
& - detach the process and run it in the background
I had the same problem.
I agree with DaveRandom: you have to suppress all output (stdout and stderr). But there is no need to launch another process with a trailing '&': with it, the exec() function can no longer check the return code and reports OK even when there is an error...
And I prefer to store the output in a temporary file, instead of 'blackholing' it.
Working solution:
$temp = tempnam(sys_get_temp_dir(), 'php');
exec('sudo /etc/init.d/daemonToStart >'.$temp.' 2>&1');
Just read the file contents afterwards, and delete the temporary file:
$output = explode("\n", file_get_contents($temp));
@unlink($temp); // @ suppresses the warning if the file is already gone
I have never tried starting a daemon from PHP, but I have tried running other shell commands, with much trouble. Here are a few things I have tried, in the past:
As per DaveRandom's answer, append > /dev/null 2>&1 & to the end of your command. The 2>&1 part redirects errors to standard output; redirect to a file instead of /dev/null if you want to keep that output for debugging.
Make sure your webserver user's PATH contains all the binaries referenced inside your daemon script. You can check this by calling exec('echo $PATH; whoami;'). It will tell you which user PHP is running as, and its current PATH variable.
I have this in one PHP file:
echo shell_exec('nohup /usr/bin/php -f '.CRON_DIRECTORY.'testjob.php > /dev/null 2>&1 &');
and in testjob.php I have:
file_put_contents('test.txt',time()); exit;
And it all runs just dandy. However, if I look at the process list, testjob.php is not terminating after it runs.
(Having to post this as an answer instead of a comment, as Stack Overflow still won't let me post comments...)
Works for me. I made testjob.php exactly as described, and another file test.php with just the given line (except I removed CRON_DIRECTORY, because testjob.php was in the same directory for me).
To be sure I was measuring correctly, I added "sleep(5)" at the top of testjob.php, and in another window I have:
watch 'ps a |grep php'
running. This happens:
I run test.php
test.php exits immediately but testjob.php appears in my list
After 5 seconds it disappears.
I wondered if shell might matter, so I switched from bash to sh. Same result.
I also wondered if it might be because your outer script is long-running. So I put "sleep(10)" at the bottom of test.php. Same result (i.e. testjob.php finishes after 5 seconds, test.php finishes 5 seconds after that).
So, unhelpfully, your problem is somewhere other than the code you've posted.
Remove & from the end of your command. That symbol tells nohup to continue running in the background, and thus shell_exec keeps waiting for the task to complete... and waiting... and waiting... till the end of time ;)
I don't even understand why you would run this command with nohup.
echo shell_exec('/usr/bin/php -f '.CRON_DIRECTORY.'testjob.php > /dev/null 2>&1');
should be enough.
You're executing PHP and making that execution a background task. That means it will run in the background until it finishes. shell_exec will not kill that process or anything similar.
You might want to set an execution limit; PHP CLI defaults to unlimited. See also set_time_limit in the PHP Manual.
So if you wonder why the PHP process does not terminate, you need to debug the script. If that's too complicated and you cannot find out why the script runs so long, you might just want to terminate the process after some time, e.g. 1 minute.
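For example, a sketch assuming GNU coreutils' timeout is available on the box:

// send SIGTERM to testjob.php if it is still running after 60 seconds
echo shell_exec('timeout 60 /usr/bin/php -f '.CRON_DIRECTORY.'testjob.php > /dev/null 2>&1');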
I have a large PHP application, and I'm looking for a way to know which PHP script is running at a given moment. Something like running "top" on a Linux command line, but for PHP.
Are you trying to do so from within the PHP application, or outside of it? If you're inside the PHP code, entering debug_print_backtrace(); at that point will show you the 'tree' of PHP files that were included to get you at that point.
If you're outside the PHP script, you can only see the one process that called the original PHP script (index.php or whatnot), unless the application spawns parallel threads as part of its execution.
If you're looking for this information at the system level, e.g. all PHP files running under any Apache child process, or even any PHP files in use by other apps, there is the lsof program (list open files), which by default spits out ALL open files on the system (executables, sockets, FIFOs, .so's, etc.). You can grep the output for '.php' and get a pretty complete picture of what's in use at that moment.
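For instance (escaping the dot so grep matches it literally):

lsof | grep '\.php'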
This old post shows a way you can wrap your calls to php scripts and get a PID for each process.
Does PHP have threading?
$cmd = 'nohup nice -n 10 /usr/bin/php -c /path/to/php.ini -f /path/to/php/file.php action=generate var1_id=23 var2_id=35 gen_id=535 > /path/to/log/file.log & echo $!';
$pid = shell_exec($cmd);
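The returned $pid can then be used for cleanup later, for instance (a sketch; the integer cast guards against shell injection):

// stop the background job once it is no longer needed
shell_exec('kill ' . (int) trim($pid));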