How to identify two identical nohup commands? - php

I have two websites using the Laravel framework. I need to use a command to process queues. For that I use nohup.
However, I need to run two identical nohup commands so each runs in the background for one of the two websites. The problem is that sometimes I need to stop only one command for maintenance. How do I identify which nohup command belongs to a particular website? Can I use a name identifier in the nohup command?

You could use PHP's -E CLI argument, which lets you specify code to execute "after" the rest of the script completes, and this extra code can be just a PHP comment, allowing you to embed ID information:
sudo nohup php -E '//job #x' artisan etc...
^^^^^^^^---raw php code, no <?..?> required
Since it's just a comment, it won't actually DO anything, and you can format the comment to contain whatever ID information you want.
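To later find or stop a worker by that marker, you can match the comment in the process list. A minimal sketch from PHP; the marker text is a made-up example:
<?php
// Hypothetical marker: whatever comment you embedded via -E.
$marker = 'job #siteA';
// pgrep -f matches against the full command line, comment included.
$pid = trim((string) shell_exec('pgrep -f ' . escapeshellarg($marker)));
if ($pid !== '') {
    exec('kill ' . (int) $pid); // stops only this site's worker
}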

With $! you can get the PID of the last background process. Save it to a file, e.g. echo $! > processA.pid, and then use that PID to stop the desired process.
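For example, launched from PHP rather than by hand; the PID file path and the Laravel queue:work command are assumptions for illustration:
<?php
// '$!' and the redirects are expanded by the shell, not by PHP, hence
// the single-quoted string. exec() returns immediately because the job
// is backgrounded and its output is redirected.
exec('nohup php artisan queue:work > /dev/null 2>&1 & echo $! > /var/run/siteA.pid');

// Later, stop just that worker:
$pid = (int) trim(file_get_contents('/var/run/siteA.pid'));
exec('kill ' . $pid);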

Related

php run another script in foreground

I have a PHP script that leads up to running another expect script, passing it arguments.
$output = shell_exec("expect login_script.tcl '$user' '$host' '$port' '$password'");
Using shell_exec doesn't work, as the script gets run in the background or 'within' the PHP script. I need it to run in the foreground, allowing user interactivity. Is there an elegant way to do this? It is already getting messy by having to use different scripting languages. I tried wrapping the two scripts with a shell script that called the PHP script, assigned its output (which was a command) to a variable, and then ran sh on that. However, I have the same problem again: the scripts run in the background, and any user interactivity causes a halt/freeze. It's OK in this situation if PHP quits when calling shell_exec, i.e. PHP stops and expect runs as if you had called it directly (the same as if I had just copied the output command and pasted it into the terminal).
Update
I am having much more luck with the following command in php:
shell_exec("gnome-terminal -e 'bash -c \"expect ~/commands/login_script.tcl; exec bash\"' &");
However, can this be improved in order to not close the shell immediately after the secondary script (login_script) is finished?
Further Update
From reading the answers I think I need to clarify things as it looks like people are assuming a 'more complicated' issue.
The two processes do not need to communicate with each other. I should probably not have put $output = shell_exec(...) in the example, just shell_exec on its own, as I believe this has led to the confusion.
The PHP script needs only to initiate the expect script with some CLI parameters, e.g. my-script 'param1' 'param2', and can be thought of as completely 'asynchronous'. This is much like the behaviour of launcher programs like 'launchy' or 'synapse': they can launch other programs but need not affect them, nor do they wait for the secondary program to quit/finish.
I made the mistake of saying shell_exec 'doesn't work' for me. What I should have said was that I have so far not succeeded with shell_exec, but shell_exec("gnome-terminal -e 'bash -c \"expect ~/commands/login_script.tcl; exec bash\"' &"); is 'working'; I am still trying to find the right quote combination to allow passing arguments to the expect script.
Task management is an interesting but difficult job.
Because your user can navigate away during a task (which leads to unexpected results, such as session freezes or incomplete work from the process), you need to execute it in the background. If you need interaction between your user and your process, you'll need to create a way for them to communicate.
The easiest way (I think) is to use a file shared between the user session and the task.
If you have a lot of simultaneous users and a lot of communication between users and processes, you can mount a partition in memory to optimize the read/write operations.
In your fstab, a line like:
tmpfs /memory tmpfs defaults,uid=www-data,gid=www-data,size=128M 0 0
Or, in a script, you could do:
#!/bin/sh
mkfs -t ext2 -q /dev/ram1 65536
[ ! -d /memory ] && mkdir -p /memory
mount /dev/ram1 /memory
chmod -R 777 /memory
You'll need to take care of a lot of things:
file access (to avoid concurrency between your webapp and your processes)
time (to avoid zombies or useless long-running scripts)
security (such operations must be carefully designed)
resource management (so that 10,000 processes don't run simultaneously)
...
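On the first point, here is a minimal sketch of the shared-file idea, assuming the /memory mount from above; the file name and JSON layout are made up:
<?php
// Process side: report progress through a file on the tmpfs mount.
// LOCK_EX prevents the webapp from reading a half-written file.
file_put_contents(
    '/memory/task_42.status',
    json_encode(['state' => 'running', 'progress' => 50]),
    LOCK_EX
);

// Webapp side: poll the same file to tell the user where the task is.
$status = json_decode(file_get_contents('/memory/task_42.status'), true);
echo "Task is {$status['progress']}% done\n";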
I think what you're looking for is the proc_open() function. It gives you access to the stdin/stdout streams of the background process. You can pass your own stdin/stdout streams to the new process in the $descriptorspec parameter, which will let your background process talk to the user.
Your 'foreground' application will have to wait around until the background process has died. I haven't actually done this with PHP, but I'm guessing you'll have to watch the $pipes to see when they get closed; then you'll know the background process is finished, and you can delete the process resource and continue with whatever the foreground process needs to do.
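Here is a minimal sketch of that idea. The expect invocation comes from the question; handing the child our own standard streams assumes this runs under the PHP CLI, where the STDIN/STDOUT/STDERR constants exist:
<?php
// Give the child our own standard streams so it can talk to the user.
$descriptorSpec = [
    0 => STDIN,   // child reads what the user types
    1 => STDOUT,  // child writes straight to the terminal
    2 => STDERR,
];
$process = proc_open('expect login_script.tcl', $descriptorSpec, $pipes);
if (is_resource($process)) {
    // proc_close() blocks until the child exits and returns its exit code.
    $exitCode = proc_close($process);
}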
In the end, I managed to get it working by adding a third quotation mark type: ` (the backtick, or grave accent), which allowed me to pass arguments to the next script from the first script.
The line I needed in my PHP script was:
`gnome-terminal -e 'bash -c "expect ~/commands/login_script.tcl \"$user\" \"$host\" \"$port\" \"$password\"; exec bash"' &`;
Note that in PHP the backtick operator itself executes its contents (it is identical to calling shell_exec()), so this line already runs the command; wrapping it in a further shell_exec() call would try to execute the command's output instead.
It took a while to get all the quotes right, as swapping the types of quotes around can easily break it.
Here is a video demonstrating the end result
Use:
pcntl_exec("command", array("parameter1", "parameter2"));
For example, I have a script that starts the mysql command using the parameters of the current PHP project; it looks like:
pcntl_exec("/usr/bin/mysql", array(
"--user=".$params['user'],
"--password=".$params['password'],
"--host=".$params['host'],
$params['dbname']
));
This doesn't rely on gnome-terminal or anything else; it replaces PHP with the program you call.
You do need to know the full path of the command, which is a pain because it can vary by platform, but you can use the env command (available at /usr/bin/env on most systems) to find it for you. The example above becomes:
pcntl_exec("/usr/bin/env", array(
"mysql",
"--user=".$params['user'],
"--password=".$params['password'],
"--host=".$params['host'],
$params['dbname']
));

How to autoload an xml file every 30 seconds?

Currently I have a parser.php which loads an xml file and inserts new data from the xml file into a mysql database. How would I go about refreshing this php file every 30 seconds so my mysql table always has fresh data? I think I could use short-polling to do this, but I'm guessing this is not the most efficient of options.
Thanks in advance
This is a non-PHP solution which requires you to have shell (SSH) access in order to run the script; however, you can also run it through PHP with exec() if you want to. Shared hosting environments might present a challenge for this approach, but as long as you can execute scripts under your user credentials you should have no problems running it.
First you will need to create a bash script with the following content and save it (I'll use the name parser.sh for the purpose of this example). You can then adjust the timeout in the sleep 30 line if you want to.
#!/bin/sh
while true
do
    php parser.php
    sleep 30
done
In order to run the script you'll need to give it execute permissions.
chmod +x parser.sh
Now you can use the nohup command with the ampersand (&) to ensure that the script keeps running in the background even when a termination signal is sent after, let's say, closing the shell (SSH). The ampersand is important!
nohup ./parser.sh &
Now you can use top or ps aux | grep parser to ensure that the script is running. As I said before, you can also use PHP's exec() to start the process, but the shell is still the preferred and most reliable way to do this.
If you want to stop the background process which executes your script, you'll simply have to kill it. Use ps aux | grep parser to find the PID of the parser process (it's in the second column from the left) and pass it to the kill command.
kill 4183
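If you do go through PHP instead, the exec() route mentioned above could look like this; the paths are placeholders:
<?php
// Start the loop detached; '$!' is expanded by the shell, and the
// redirects let exec() return immediately instead of blocking.
$pid = (int) exec('nohup /path/to/parser.sh > /dev/null 2>&1 & echo $!');
// Keep the PID so a later script (or the kill command) can stop the loop.
file_put_contents('/tmp/parser.pid', $pid);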
You could use a cron job, but cron jobs run at most once per minute.
Another way is to make a "daemon".
Very basic example:
<?php
while (true) {
    include 'parser.php'; // run the import once
    sleep(30);            // then wait 30 seconds
}
?>
Then you need to execute this in your terminal:
$ php script.php &
This link should help.
Greetings!

Multi threading in PHP

On an Apache server I want to run a PHP script as a cron job which starts another PHP file in the background and exits just after starting it, not waiting for that script to complete, as it will take around 60 minutes to finish. How can this be done?
You should know that there are no threads in PHP.
But you can execute programs and detach them easily if you're running on a Unix/Linux system.
$command = "/usr/bin/php '/path/to/your/php/to/execute.php'";
exec("{$command} > /dev/null 2>&1 & echo -n \$!");
This may do the job. Let's explain a bit:
exec($command);
executes /usr/bin/php '/path/to/your/php/to/execute.php': your script is launched, but Apache will wait for the end of the execution before running the next code.
> /dev/null
will redirect the standard output (i.e. your echo, print, etc.) to a virtual file (all output written to it is lost).
2>&1
will redirect the error output to the standard output, writing to the same virtual file. This avoids having logs in your apache2/error.log, for example.
&
is the most important thing in your case: it detaches the execution of $command, so exec() immediately releases the rest of your PHP code to continue.
echo -n \$!
gives the PID of the detached execution as the response: it is returned by exec() and lets you work with it (for example, store the PID in a database and kill the process after some time to avoid zombies).
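Putting that together, the returned PID can be captured and kept around for exactly that kind of cleanup. A minimal sketch; the PID file location is a made-up example:
<?php
$command = "/usr/bin/php '/path/to/your/php/to/execute.php'";
// exec() returns the last line the command prints: here, the child's PID.
$pid = (int) exec("{$command} > /dev/null 2>&1 & echo -n \$!");
// Hypothetical bookkeeping so a later request can stop or reap the job:
file_put_contents('/tmp/longjob.pid', $pid);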
You need to use "&" symbol to run program as background proccess.
$ php -f file.php &
Thats will run this command in background.
You may wright sh script
#!/bin/bash
php -f file.php &
And run this script from crontab.
This may not be the best solution to your specific problem. But for the record, there are threads in PHP.
https://github.com/krakjoe/pthreads
I'm assuming you know how to use threads; this is very young code that I wrote myself, but if you have experience with threads, mutexes and the like, you should be able to solve your problem using this extension.
This is clearly a shameless plug of my own project, and if the user doesn't have the access required to install extensions it won't help him, but many people find Stack Overflow, and it will no doubt solve other problems...
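For flavor, the extension's basic pattern is roughly the following; a hedged sketch that assumes the pthreads extension is installed, not a drop-in for the cron scenario above:
<?php
// Requires the pthreads extension (https://github.com/krakjoe/pthreads).
class Job extends Thread {
    public function run() {
        // long-running work goes here
        echo "working in a separate thread\n";
    }
}

$job = new Job();
$job->start(); // spawn the thread
$job->join();  // wait for it to finish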

Run shell commands with PHP?

Occasionally my media server goes down, and I'm wondering if it's possible to start it remotely using PHP: check the port and, if it's not running, invoke cron (or some other way) to run a shell command. Is this possible? This is not a strong area for me. Here's the process I use with PuTTY.
login to shell
cd to source/red5/dist
screen
./red5.sh
CTRL-A then D to detach
logout
The simplest thing is to write a shell script, and then log in to the remote console via PHP.
shell_exec: executes a shell command and returns the output as a string.
exec: just executes an external program.
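As a sketch of the whole check-and-restart idea: probe the port, and relaunch inside a detached screen session if nothing answers. Port 1935 (RTMP's usual default) and the install path are assumptions; adjust both:
<?php
// Probe the port red5 should be listening on (5-second timeout).
$conn = @fsockopen('127.0.0.1', 1935, $errno, $errstr, 5);
if ($conn === false) {
    // Server is down: relaunch it in a detached screen session,
    // mirroring the manual screen + ./red5.sh routine from the question.
    shell_exec('cd /home/user/source/red5/dist && screen -dmS red5 ./red5.sh');
} else {
    fclose($conn); // already running, nothing to do
}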
A simple way to achieve what you want is to run this in screen:
while /bin/true ; do ./red5.sh ; done
If you can write a shell script that does what you need, then PHP has exec(), system() and passthru() for you.
PHP actually has a special operator for executing shell commands, the backtick:
`cd source/red5/dist && ./red5.sh`
Each backtick invocation spawns its own shell, so a lone cd has no lasting effect; chain the commands you need into one invocation, as above. (But I don't know much about shell, so I can't implement the whole thing for you.)
If you need more control over the execution (I don't know whether you need it here), use proc_open.
You can use a cron job and put all the commands in a .sh file, then run it like this:
59 11 * * 1,2,3,4,5 root /path/to/file.sh
Something like this; it will be safe.
There is more than one good answer here, but you should opt for executing the init script for red5 instead of the .sh or .bat. There are pre-made init scripts here: http://code.google.com/p/bigbluebutton/downloads/detail?name=red5&can=2&q= and here: http://www.videowhisper.com/forum.php?ftid=48&t=init-file-red5-linux-installations-red5-linux-init.d-chkconfig

spawn an entirely separate process in linux via bash

I need to have a script (bash, perl or php, any will do) execute another command and then exit, while the other command keeps running and exits on its own. I could schedule it via the at command, but I was curious whether there's an easier way.
#!/bin/sh
your_cmd &
echo "started your_cmd, now exiting!"
Similar constructs exist for perl and php, but in sh/bash it's very easy to run another command in the background and proceed.
edit
A very good source for generic process manipulation is the set of start scripts under /etc/init.d. They do all sorts of neat tricks, such as keeping track of PIDs, implementing basic start/stop/restart commands, etc.
To run a command in the background, you can append an '&' to the command.
If you need the program to last past your login session, you can use nohup.
See this similar stackoverflow discussion: how to run a command in the background ...
The usual way to run a command and have it keep running when you log out is to use nohup(1). nohup prevents the given command from receiving the HUP signal when the shell exits. You also need to run in the background with the ampersand (&) command suffix.
$ nohup some_command arg1 arg2 &
&?
#!/usr/bin/bash
# command1.sh: execute command2.sh and exit
command2.sh &
I'm not entirely sure if this is what you are looking for, but you can background a process executed in a shell by appending the ampersand (&) symbol as the last character of the command.
So if you have a script, a.sh,
and a.sh needs to spawn a separate process, like, say, executing the script b.sh, you'd write:
b.sh &
Since you mentioned Perl:
fork || exec "ls";
...where "ls" is anything at all. Repeat for as many commands as you need to fire off.
Most answers are correct in showing:
mycmd &
camh's answer goes further to keep it alive with nohup.
Going further with advanced topics...
mycmd1 &
mycmd2 &
mycmd3 &
wait
"wait" will block processing until the backgrounded tasks are all completed. This can be useful if response times are significant such as for off-system data collection. It helps if you an be sure they will complete.
How do I subsequently foreground a process?
If it is your intent to foreground a process on a subsequent logon, look into screen or tmux.
screen -dmS MVS ./mvs
or, as a Minecraft example:
screen -dm java -Xmx4096M -Xms1024M -jar server.jar nogui
You can then re-attach to the terminal upon subsequent login.
screen -r
The login that launches these need not be interactive; you can use ssh remotely (plink, Ansible, etc.) to spawn these in a "drive-by" manner.
