I have a PHP script running on Debian that calls the ping command and redirects the output to a file using exec():
exec('ping -w 5 -c 5 xxx.xxx.xxx.xxx > /var/f/ping/xxx.xxx.xxx.xxx_1436538580.txt &');
The PHP script then has a while loop that scans the /var/f/ping/ folder and checks to see if the ping has finished writing to it. I tried checking the output using:
exec('lsof | grep /var/f/ping/xxx.xxx.xxx.xxx_1436538580.txt');
to see if the file was still open, but it takes lsof about 10-15 seconds to return its results, which is too slow for what we need. Ideally it should be able to check this within 2 or 3 seconds.
Is there a faster/better way to test if the ping has completed?
Using grep with lsof is probably the slowest way, as lsof will scan everything. You can narrow the scope that lsof uses down to a single directory by doing:
lsof +D /var/f/ping
or similar.
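For the PHP side, a rough sketch of the polling loop using the narrowed call might look like this (the path and file name are the ones from the question; it assumes ping has already opened the file when the loop starts):
$file = '/var/f/ping/xxx.xxx.xxx.xxx_1436538580.txt';
do {
    sleep(1);
    $open = array();
    // +D restricts lsof to this one directory, so it returns quickly
    exec('lsof +D /var/f/ping', $open);
    $stillOpen = (bool) preg_grep('/' . preg_quote(basename($file), '/') . '/', $open);
} while ($stillOpen);
// The ping has finished writing; the file is safe to read now.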
There's a good and easy-to-read overview of lsof usage here:
http://www.thegeekstuff.com/2012/08/lsof-command-examples/
Alternatively, you could experiment with:
http://php.net/manual/en/function.fam-monitor-file.php
and see if that meets your requirements better.
You need a deferred-queue pattern for this kind of task. Run the pings in the background via cron and keep a table or file with job statuses.
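One way to expose the job status as a file, sticking with the shell approach from the question, is to have the subshell drop a sentinel marker when ping exits; a sketch (the file names are illustrative):
exec('(ping -w 5 -c 5 xxx.xxx.xxx.xxx > /var/f/ping/result.txt; touch /var/f/ping/result.txt.done) > /dev/null 2>&1 &');
// Now the polling loop only needs a cheap file_exists() check instead of lsof:
while (!file_exists('/var/f/ping/result.txt.done')) {
    sleep(1);
    clearstatcache(); // avoid stale stat-cache results between checks
}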
I have a PHP script that leads up to running another expect script by passing it arguments.
$output = shell_exec("expect login_script.tcl '$user' '$host' '$port' '$password'");
Using shell_exec doesn't work, as the script gets run in the background or 'within' the PHP script. I need it to run in the foreground, allowing user interactivity. Is there an elegant way to do this? It is already getting messy by having to use different scripting languages. I tried wrapping the two scripts with a shell script that called the PHP script, assigned its output (which was a command) to a variable, and then ran sh on that. However, I hit the same problem again: the scripts run in the background, and any user interactivity causes a halt/freeze. It's OK in this situation if PHP 'quits' when calling shell_exec, i.e. PHP stops and expect gets run as if you had called it yourself (the same as if I had just copied the output command and pasted it into the terminal).
Update
I am having much more luck with the following command in php:
shell_exec("gnome-terminal -e 'bash -c \"expect ~/commands/login_script.tcl; exec bash\"' &");
However, can this be improved so that the shell does not close immediately after the secondary script (login_script) finishes?
Further Update
From reading the answers, I think I need to clarify things, as it looks like people are assuming a 'more complicated' issue.
The two processes do not need to communicate with each other. I should probably not have put the $output = shell_exec in the example, just shell_exec on its own, as I believe this has led to the confusion.
The PHP script only needs to initiate the expect script with some CLI parameters, e.g. my-script 'param1' 'param2', and can be thought of as completely 'asynchronous'. This is much like the behaviour of launcher programs such as 'launchy' or 'synapse': they can launch other programs but need not affect them, nor do they wait for the secondary program to quit/finish.
I made the mistake of saying 'shell_exec' doesn't work for me. What I should have said was 'I have so far not succeeded with shell_exec', but shell_exec("gnome-terminal -e 'bash -c \"expect ~/commands/login_script.tcl; exec bash\"' &"); is 'working', and I am still trying to find the right quote combination to allow passing arguments to the expect script.
Task management is an interesting but difficult job.
Because your user can navigate away during a task (leading to unexpected results, such as session freezes or incomplete work from the process), you need to execute it in the background. If your user and your process need to interact, you'll need to create a way to communicate.
The easiest way (I think) is to use a file shared between your user session and the task; a sketch follows the list of caveats below.
If you have a lot of users simultaneously and a lot of communication between users and processes, you can mount a partition in memory to optimize the read/write operations.
In your fstab, a line like:
tmpfs /memory tmpfs defaults,uid=www-data,gid=www-data,size=128M 0 0
Or, in a script, you could do:
#!/bin/sh
# Create an ext2 filesystem on a 64 MB RAM disk (65536 1K blocks)
mkfs -t ext2 -q /dev/ram1 65536
# Mount it at /memory, creating the mount point if needed
[ ! -d /memory ] && mkdir -p /memory
mount /dev/ram1 /memory
chmod -R 777 /memory
You'll need to take care of a lot of things:
file access (to avoid concurrency issues between your web app and your processes)
time (to avoid zombies or useless long-running scripts)
security (such operations must be carefully designed)
resource management (to avoid 10,000 processes running simultaneously)
...
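As a hedged sketch of the shared-file idea, using flock() to address the file-access point above (the /memory mount comes from the example; the file name is made up):
// Writer side (the background task) updates its status atomically:
$fh = fopen('/memory/job_42.status', 'c');
if (flock($fh, LOCK_EX)) {
    ftruncate($fh, 0);
    fwrite($fh, json_encode(array('state' => 'running', 'progress' => 40)));
    fflush($fh);
    flock($fh, LOCK_UN);
}
fclose($fh);

// Reader side (the web app) takes a shared lock before reading:
$fh = fopen('/memory/job_42.status', 'r');
if (flock($fh, LOCK_SH)) {
    $status = json_decode(stream_get_contents($fh), true);
    flock($fh, LOCK_UN);
}
fclose($fh);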
I think what you're looking for is the proc_open() function. It gives you access to the stdin/stdout streams of the background process. You can pass your own stdin/stdout streams to the new process in the $descriptorSpec parameter, which will let your background process talk to the user.
Your 'foreground' application will have to wait around until the background process has died. I haven't actually done this with PHP, but I'm guessing you'll have to watch the $pipes to see when they get closed; then you'll know the background process is finished, and you can delete the process resource and continue on with whatever the foreground process needs to do.
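An untested sketch of that idea, assuming the CLI SAPI (where the STDIN/STDOUT/STDERR stream constants exist) and the login_script.tcl from the question:
$descriptorSpec = array(
    0 => STDIN,   // hand the user's own terminal streams to the child,
    1 => STDOUT,  // so the expect script is fully interactive
    2 => STDERR,
);
$process = proc_open('expect login_script.tcl', $descriptorSpec, $pipes);
if (is_resource($process)) {
    // proc_close() blocks until the child exits, so PHP effectively
    // waits in the foreground while the user interacts with expect
    $exitCode = proc_close($process);
}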
In the end, I managed to get it working by adding a third quotation mark type: ` (a backtick), which allowed me to pass arguments to the next script from the first script.
The command I needed in my PHP script was:
$command = `gnome-terminal -e 'bash -c "expect ~/commands/login_script.tcl \"$user\" \"$host\" \"$port\" \"$password\"; exec bash"' &`;
shell_exec($command);
It took a while to get all the quotes right, as swapping the types of quotes around can stop it from working.
Here is a video demonstrating the end result
Use:
pcntl_exec("command", array("parameter1", "parameter2"));
For example, I have a script that starts the mysql command using parameters from the current PHP project; it looks like:
pcntl_exec("/usr/bin/mysql", array(
"--user=".$params['user'],
"--password=".$params['password'],
"--host=".$params['host'],
$params['dbname']
));
This doesn't rely on gnome-terminal or anything; it replaces the PHP process with the program you call.
You do need to know the full path of the command, which is a pain because it can vary by platform, but you can use the env command, available at /usr/bin/env on most systems, to find the command for you. The example above becomes:
pcntl_exec("/usr/bin/env", array(
"mysql",
"--user=".$params['user'],
"--password=".$params['password'],
"--host=".$params['host'],
$params['dbname']
));
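Since pcntl_exec() never returns on success, the calling PHP script is gone once the program starts. If you need PHP to carry on afterwards, one option is to fork first and exec in the child; a sketch, assuming the pcntl extension and the CLI SAPI:
$pid = pcntl_fork();
if ($pid === -1) {
    exit("fork failed\n");
} elseif ($pid === 0) {
    // Child: replace this process with the external program
    pcntl_exec("/usr/bin/env", array("mysql", "--user=" . $params['user'], $params['dbname']));
    exit(1); // reached only if pcntl_exec() fails
}
// Parent: continue, and reap the child when it finishes
pcntl_waitpid($pid, $status);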
Currently I have a parser.php which loads an XML file and inserts new data from the XML file into a MySQL database. How would I go about re-running this PHP file every 30 seconds so my MySQL table always has fresh data? I think I could use short-polling to do this, but I'm guessing this is not the most efficient of options.
Thanks in advance
This is a non-PHP solution which will require you to have shell (SSH) access in order to run the script, however you can also run it through PHP with exec() if you want to. Shared hosting environments might present a challenge for this approach but as long as you can execute scripts under your user credentials you should have no problems running it.
First you will need to create a bash script with the following content and save it (I'll use the name parser.sh for the purpose of this example). You can then adjust the timeout in the sleep 30 line if you want to.
#!/bin/sh
# Run the parser, then wait 30 seconds, forever
while true
do
    php parser.php
    sleep 30
done
In order to run the script you'll need to give it execute permissions.
chmod +x parser.sh
Now you can use the nohup command with the ampersand (&) to ensure that the script keeps running in the background even when a termination signal is sent after, let's say, closing the shell (SSH). The ampersand is important!
nohup ./parser.sh &
Now you can use top or ps aux | grep parser to ensure that the script is running. As I said before, you can also use PHP's exec() to start the process, but the shell is still the preferred and most reliable way to do this.
If you want to stop the background process which executes your script, you'll simply have to kill it. Just use ps aux | grep parser to find out the PID of the parser process (it's in the second column from the left) and use it with the kill command.
kill 4183
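If you do start it from PHP instead of the shell, the equivalent exec() call could look like this (the path is illustrative); redirecting all output is what lets exec() return immediately:
exec('nohup /path/to/parser.sh > /dev/null 2>&1 &');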
You could use a cron job, but cron jobs run at most once per minute.
Another way is to make a "daemon".
Very basic example:
<?php
while (true) {
    // Execute the import, then wait 30 seconds before the next pass
    run_parser(); // hypothetical: the XML-import logic from parser.php
    sleep(30);
}
Then you need to execute this in your terminal:
$ php script.php &
This link should help.
Greetings!
On an Apache server, I want to run a PHP script as a cron job which starts another PHP file in the background and exits just after starting it, without waiting for that script to complete, as the script will take around 60 minutes to finish. How can this be done?
You should know that there are no threads in PHP.
But you can execute programs and detach them easily if you're running on a Unix/Linux system.
$command = "/usr/bin/php '/path/to/your/php/to/execute.php'";
exec("{$command} > /dev/null 2>&1 & echo -n \$!");
May do the job. Let's explain a bit:
exec($command);
Executes /usr/bin/php '/path/to/your/php/to/execute.php': your script is launched, but Apache waits for the end of the execution before running the next line of code.
> /dev/null
will redirect standard output (i.e. your echo, print, etc.) to a virtual file (all output written to it is lost).
2>&1
will redirect error output to standard output, writing into the same virtual, non-existent file. This avoids having logs in your apache2/error.log, for example.
&
is the most important thing in your case: it detaches the execution of $command, so exec() will immediately release your PHP code execution.
echo -n \$!
will give the PID of your detached execution as the response: it will be returned by exec() and lets you work with it (for example, store this PID in a database and kill the process after some time to avoid zombies).
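A short sketch of what you can do with that PID afterwards (Linux-specific, since it relies on /proc):
$pid = (int) exec("{$command} > /dev/null 2>&1 & echo -n \$!");
// Later (e.g. from a cleanup job): /proc/<pid> tells you if it is still alive
$stillRunning = file_exists("/proc/{$pid}");
if ($stillRunning) {
    exec("kill {$pid}"); // terminate it once it has run too long
}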
You need to use the "&" symbol to run the program as a background process.
$ php -f file.php &
That will run this command in the background.
You can also write a shell script:
#!/bin/bash
php -f file.php &
And run this script from crontab.
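For example, a crontab entry launching the wrapper script at the top of every hour might look like this (adjust the schedule and path to your needs):
# m h dom mon dow  command
0 * * * * /path/to/script.sh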
This may not be the best solution to your specific problem. But for the record, there are threads in PHP.
https://github.com/krakjoe/pthreads
I'm assuming you know how to use threads. This is very young code that I wrote myself, but if you have experience with threads, mutexes, and the like, you should be able to solve your problem using this extension.
This is clearly a shameless plug of my own project, and if the user doesn't have the access required to install extensions, it won't help them; but many people find Stack Overflow, and it will no doubt solve other problems...
I'm trying to make an online judge written in PHP. My code currently looks like:
exec("gcc /var/www/qwerty.c -o /var/www/binary",$output,$returnval);
print_r($output);
exec("cat /var/www/qwe.txt | /var/www/binary",$output,$returnval);
print_r($output);
However, I want each process spawned by exec to have at most 1 second to run. I'm not sure how to do this.
set_time_limit() isn't working
I would use the proc_ functions as suggested by @Adam Wright, but a quick alternative on Linux-like environments is to prefix your command with the GNU timeout command:
// If it took more than 1 second, $returnval will got the exit code 124
exec('timeout 1 gcc /var/www/qwerty.c -o /var/www/binary', $output, $returnval);
You can probably use ulimit for that:
exec(" ( ulimit -t 1 ; gcc ... | /var/www/binary) ");
That of course only works if the process uses active CPU time, not if it waits endlessly for I/O.
This can be achieved using the proc_ family of functions. Launch your process using proc_open(). Repeatedly poll using proc_get_status() (with a small sleep between polls) until either the process has finished or 1 second has passed. If 1 second passes without proc_get_status() indicating the process has finished, use proc_terminate() (or proc_close()) to kill it off and take appropriate action.
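A minimal sketch of that polling loop, using the file paths from the question (/tmp/judge_out.txt is made up, to collect output in a file rather than a pipe that could fill up and block the child):
$spec = array(
    0 => array('file', '/var/www/qwe.txt', 'r'),   // feed stdin from the test input
    1 => array('file', '/tmp/judge_out.txt', 'w'), // collect stdout in a file
);
$proc = proc_open('/var/www/binary', $spec, $pipes);
$start = microtime(true);
while (true) {
    $status = proc_get_status($proc);
    if (!$status['running']) {
        break; // finished within the time limit
    }
    if (microtime(true) - $start >= 1.0) {
        proc_terminate($proc); // over the 1-second budget: kill it
        break;
    }
    usleep(10000); // poll every 10 ms
}
proc_close($proc);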
I'm not saying that spawning external processes in a PHP script is a good idea, though.
I have a large PHP application and I'm looking for a way to know which PHP script is running at a given moment. Something like when you run "top" on a Linux command line but for PHP.
Are you trying to do so from within the PHP application, or outside of it? If you're inside the PHP code, entering debug_print_backtrace(); at that point will show you the 'tree' of PHP files that were included to get you at that point.
If you're outside the PHP script, you can only see the one process that called the original PHP script (index.php or whatnot), unless the application spawns parallel threads as part of its execution.
If you're looking for this information at the system level, e.g. all php files running under any Apache child process, or even any PHP files in use by other apps, there is the lsof program (list open files), which will spit out by default ALL open files on the system (executables, sockets, fifos, .so's, etc...). You can grep the output for '.php' and get a pretty complete picture of what's in use at that moment.
This old post shows a way you can wrap your calls to php scripts and get a PID for each process.
Does PHP have threading?
$cmd = 'nohup nice -n 10 /usr/bin/php -c /path/to/php.ini -f /path/to/php/file.php action=generate var1_id=23 var2_id=35 gen_id=535 > /path/to/log/file.log & echo $!';
$pid = shell_exec($cmd);