We have several PHP scripts wrapped together with other non-PHP commands in a single shell script, like this:
build_stuff.sh
mv things stuff
cp files folders
composer install
php bin/console some:command
php bin/console some:other:command
This shell script is then called in an "Execute shell" build step.
sh ./build_stuff.sh
Is there any way to abort the build as a "failure" as soon as there are PHP errors/warnings?
So that the next command wouldn't be executed.
And still keep all commands in one script.
...
I found the Log Parser Plugin, but I would like to abort when the errors occur, not continue and parse the logs afterwards.
I thought about maybe catching the PHP exit codes, as described here:
Retrieve exit status from php script inside of shell script
But then you wouldn't be able to instantly see the text output of the PHP scripts, would you?
Combining them with double ampersands will do the job. In the example below, cmd2 will be executed only if cmd1 succeeds (returns a zero exit status).
cmd1 && cmd2
It will work with Composer, but your own commands might need modification so that they return proper exit codes.
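As a minimal sketch, reusing the commands from the question, the whole script could be chained with && so that the first non-zero exit status stops the run and the Jenkins "Execute shell" step marks the build as failed:
#!/bin/sh
# build_stuff.sh - each step runs only if the previous one succeeded
mv things stuff &&
cp files folders &&
composer install &&
php bin/console some:command &&
php bin/console some:other:command
Keep in mind that PHP does not return a non-zero exit status for warnings by default; the console commands themselves have to exit with a non-zero code (or turn warnings into errors) for the chain to stop, which is what "proper exit codes" refers to above.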
Related
I'm trying to write a cron job which launches multiple processes that I want to run in parallel.
I'm using a foreach loop to call each command, but the command line waits for the output. I don't want it to wait.
Was wondering if anyone has ever used a library for this?
Add an ampersand after the command:
$ php task.php &
It will run that instance of PHP in the background and continue.
If you read the manual on passthru you'll notice it tells you how to avoid this...
If a program is started with this function, in order for it to continue running in the background, the output of the program must be redirected to a file or another output stream. Failing to do so will cause PHP to hang until the execution of the program ends.
So you can rely on UNIX file descriptors to redirect the output, for example to /dev/null if you don't care about it, or to some file if you do want to save it. This avoids PHP waiting on the command to finish.
passthru("somecommand > /some/path/to/file");
So I was wondering if there's a better way to do this in 2013.
Older answers suggest something like this:
shell_exec('nohup php script.php some_argument another_argument > /dev/null 2>&1 &');
This seems to work, but it looks weird. And the thing is that I don't have much control over the script I execute. For example, I would like the possibility to terminate it from inside another request. The only solution I found is to place a file somewhere, like "exit_now.txt", then inside that script check whether that file exists and fire exit() if it does.
You can send a signal to the invoked process (with the kill command from the shell) and install a signal handler in your program to react however you wish.
Documentation for installing signal handler
Example with pcntl_fork() and pcntl_signal()
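A minimal sketch of the shell side, with illustrative paths; the PHP side would install its handler with pcntl_signal(), as in the linked example:
# start the worker in the background and remember its pid
nohup php script.php > /dev/null 2>&1 &
echo $! > /tmp/worker.pid
# later, from another shell or request, ask it to stop (the handler decides how to react to SIGTERM)
kill -TERM "$(cat /tmp/worker.pid)"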
I have a problem with PHP passthru() blocking when it is supposed to start a daemon.
I have a Node.js daemon with a bash script wrapper around it. That bash script uses a bit of process substitution because the Node.js server can't log directly to syslog. The bash script contains a command like this:
forever -l app.log app.js
But because I want it to log to syslog, I use:
forever -l >(logger) app.js
The logger process substitution creates a file descriptor like /dev/fd/63, whose path is passed to the forever command as the logfile to use.
This works great when I start the daemon using the bash script directly, but when the bash script is executed using PHP passthru() or exec(), those calls block. If I use a regular logfile instead of the process substitution, then both passthru() and exec() work just fine, starting the daemon in the background.
I have created a complete working example (using a simple PHP daemon instead of Node.js) on Github's Gist: https://gist.github.com/1977896 (needs PHP 5.3.6+)
Why does the passthru() call block on the process substitution? And is there anything I can do to work around it?
passthru() will block in PHP even if you start a daemon; it's unfortunate. I've heard some people have had luck rewriting it with nohup:
exec('/path/to/cmd');
then becomes:
exec('nohup /path/to/cmd &');
Personally, what I've had the most luck with is exec()'ing a wget call to another script (or the same script) that actually runs the blocking exec. This frees the calling process from being blocked by handing the work off to another HTTP process not associated with the live user. With the appropriate flags, wget will return immediately, not waiting for a response:
exec('wget --quiet --tries=1 -O - --timeout=1 --no-cache http://localhost/path/to/cmd');
The HTTP handler will eventually time out, which is fine and should leave the daemon running. If you need output (hence the passthru() call you're making), just run the script with output redirected to a file and then poll that file for changes in your live process.
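A minimal sketch of that last idea, with illustrative paths; the daemon's output lands in a regular file, which the live process can then read, or which you can tail from a shell:
# launch detached, capturing output in a plain file instead of relying on passthru()
nohup /path/to/cmd > /tmp/daemon.log 2>&1 &
# inspect the captured output later without blocking the original request
tail -f /tmp/daemon.log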
I know there have been similar questions, but they don't solve my problem...
After checking out the folders from the repo (which works fine), a method is called from jQuery to execute the following in PHP:
exec ('svn cleanup '.$checkout_dir);
session_write_close(); //Some suggestion that was supposed to help but doesn't
exec ('svn commit -m "SAVE DITAMAP" '.$file);
These would output the following:
svn cleanup USER_WORKSPACE/0A8288
svn commit -m "SAVE DITAMAP" USER_WORKSPACE/0A8288/map.ditamap
1) The first line, exec('svn cleanup' ...), executes fine.
2) As soon as I call svn commit, my server hangs and everything goes to hell.
The apache error logs show this error:
[notice] Child 3424: Waiting 240 more seconds for 4 worker threads to finish.
I'm not using the php_svn module because I couldn't get it to compile on Windows.
Does anyone know what is going on here? I can execute the exact same command from the terminal window and it executes just fine.
Since I cannot find any documentation on a jQuery exec(), I assume this is calling PHP's exec(). I copied this from the documentation page:
When calling exec() from within an Apache PHP script, make sure to take care of stdout, stderr and stdin (as in the example below). If you forget this and your shell command produces output, the sh and Apache daemons may never return (they will normally time out after a few minutes). From the calling web page the script may seem to not return any data.
If you want to start a PHP process that continues to run independently from Apache (with a different parent pid), use nohup. Example:
exec('nohup php process.php > process.out 2> process.err < /dev/null &');
hope it helps
Okay, I've found the problem.
It actually didn't have anything to do with the exec running in the background, especially because a one-file commit doesn't take a lot of time.
The problem was that the commit was expecting a --username and --password that never showed up, which just caused Apache to hang.
To solve this, I changed the svnserve.conf in the folder where I installed SVN to allow non-authenticated users write access.
I don't think you'd normally want to do this, but my site already authenticates the username and password at login.
Alternatively, you could pass the missing credentials to the svn commit command itself.
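A minimal sketch of that alternative, with placeholder credentials; svn accepts --username and --password as global options, and --non-interactive stops it from prompting (and hanging) when run from Apache:
svn commit -m "SAVE DITAMAP" --username someuser --password somepass --non-interactive USER_WORKSPACE/0A8288/map.ditamap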
I need to have a script (bash or perl or php, any will do) execute another command and then exit, while the other command keeps running and exits on its own. I could schedule it via the at command, but was curious whether there was an easier way.
#!/bin/sh
your_cmd &
echo "started your_cmd, now exiting!"
Similar constructs exist for Perl and PHP, but in sh/bash it's very easy to run another command in the background and proceed.
edit
A very good source for generic process manipulation is the set of start scripts under /etc/init.d. They do all sorts of neat tricks, such as keeping track of PIDs and executing basic start/stop/restart commands, as sketched below.
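A minimal sketch of that pattern, with illustrative names and paths, reusing your_cmd from the answer above:
#!/bin/sh
# tiny init.d-style wrapper: track the pid so stop/restart can find the process later
case "$1" in
  start)
    your_cmd &
    echo $! > /var/run/your_cmd.pid
    ;;
  stop)
    kill "$(cat /var/run/your_cmd.pid)"
    ;;
esac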
To run a command in the background, you can append an '&' to the command.
If you need the program to last past your login session, you can use nohup.
See this similar stackoverflow discussion: how to run a command in the background ...
The usual way to run a command and have it keep running when you log out is to use nohup(1). nohup prevents the given command from receiving the HUP signal when the shell exits. You also need to run in the background with the ampersand (&) command suffix.
$ nohup some_command arg1 arg2 &
#!/usr/bin/bash
# command1.sh: execute command2.sh and exit
command2.sh &
I'm not entirely sure if this is what you are looking for, but you can background a process executed in a shell by appending the ampersand (&) symbol as the last character of the command.
So if you have a script, a.sh, and a.sh needs to spawn a separate process, say, to execute the script b.sh, you'd run:
b.sh &
Since you mentioned Perl:
fork || exec "ls";
...where "ls" is anything at all. Repeat for as many commands as you need to fire off.
Most answers are correct in showing:
mycmd &
camh's answer goes further to keep it alive with nohup.
Going further with advanced topics...
mycmd1 &
mycmd2 &
mycmd3 &
wait
"wait" will block processing until the backgrounded tasks are all completed. This can be useful if response times are significant such as for off-system data collection. It helps if you an be sure they will complete.
How do I subsequently foreground a process?
If it is your intent to foreground a process on a subsequent logon, look into screen or tmux.
screen -dmS MVS ./mvs
or, for a Minecraft example:
screen -dm java -Xmx4096M -Xms1024M -jar server.jar nogui
You can then re-attach to the terminal upon subsequent login.
screen -r
The login that launches these need not be interactive; you can use ssh remotely (plink, Ansible, etc.) to spawn these in a "drive-by" manner.
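A minimal sketch of that remote, non-interactive launch, with an illustrative user and host, reusing the screen invocation above:
# start the detached screen session on the remote box and return immediately
ssh user@gamehost 'screen -dmS MVS ./mvs'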