I'll try to explain my problem as a timeline:
I'm running several external scripts from PHP and returning their exit codes to the browser with another AJAX call. A single call should start or stop a service on that machine. That works fine on this development machine:
OS: Raspbian
Web server: nginx 1.2.1
PHP: 5.4.3.6
However, I exported the code to a larger machine with much more power, and everything seemed to work fine except one thing:
A single call causes php-fpm to freeze and never come back. On closer examination I found that the call created a zombie process I could not terminate (not even with sudo).
OS: Ubuntu
Web server: nginx 1.6.2
PHP: 5.5.9
The only solution seemed to be stopping the php-fpm process and restarting it. Then everything works fine again, until I call that script again.
The calling PHP line:
exec("sudo ".$script, $output, $return_var);
(All variables are plain strings with no special characters.)
The start script:
#!/bin/sh
service radicale start 2>&1
The service does start, by the way, but every time the web server freezes and I have to restart PHP manually, which is not acceptable (even for a web server). And this happens only for that single script, only for that service (radicale), and only with that one command (start).
Searching on Google led me to reports of a conflict between the PHP functions exec() and session_start().
Links:
https://bugs.php.net/bug.php?id=44942
https://bugs.php.net/bug.php?id=44994
Their conclusion was that the bug can be worked around with a construct like this:
...
session_write_close();
exec("sudo ".$script, $output, $return_var);
session_start();
...
But in my opinion that is no fix, just a helpless workaround, because you lose the ability to tell the user that his action fully succeeded; instead you let him believe an error occurred. Even more confusing is the fact that it runs fine on the Raspberry Pi A, but not on a 64-bit machine with a much larger CPU and 8 GB of RAM.
So is there a real solution anywhere, or is this workaround the only way to solve the problem? I've read an article about PHP having some problems with exec()/shell_exec() and recognizing the return value. How can that be lost? Does anyone have a guess?
Thanks for reading my long, awful English; I'm not a native speaker and wasn't the most attentive student in my lessons.
It is likely that the new machine simply is not set up the way the Raspberry Pi was.
You need to do a few things in your shell before this will work on your larger machine:
1) Allow PHP to use sudo:
sudo usermod -G sudo -a your-php-user
Note that to get the username for your-php-user, you can just run a script containing:
<?php echo get_current_user(); ?> or, alternatively:
<?php echo exec('whoami'); ?>
2) Allow that user to use sudo without a password:
sudo visudo - this command opens /etc/sudoers with a failsafe to keep you from botching anything.
Add this line to the very end:
your-php-user ALL=(ALL) NOPASSWD: /path/to/your/script,/path/to/other/script
You can put as many scripts there, separated by commas, as you need.
Now, your script should work just fine.
Again, please note that you need to change your-php-user to whatever your PHP user actually is.
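To sanity-check the setup from PHP, here is a minimal sketch; the script path below is a placeholder for whichever script you whitelisted in sudoers:
<?php
// Hypothetical path: use one of the scripts listed in /etc/sudoers.
$script = '/path/to/your/script';
// 'sudo -n' never prompts for a password; it fails immediately instead,
// so a non-zero exit code means the NOPASSWD rule is not in effect yet.
exec('sudo -n ' . escapeshellarg($script), $output, $return_var);
echo $return_var === 0
    ? "passwordless sudo works\n"
    : "sudo still wants a password (exit code $return_var)\n";
?>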
Hope this helps!
This is not a real solution, but it's a better solution than none.
Calling a bash script with
<?php
...
exec("sudo ".$script, $output, $return_var);
...
?>
ends, in this special case only, in a zombie process. Because php-fpm waits for a result, it holds the connection, never giving up or timing out, while the rest of its threads live on. So every other request to the PHP server stays queued and will never be processed. That may be okay for some long-living worker threads, but my request was done in a few milliseconds.
I did not find the cause of this. As far as my debugging went, it wasn't the fault of the triggered Radicale process, which returned a clean 0 every time. It seemed that the PHP process simply never got a final line back from it, and so it waits and waits.
With no time left, I changed the malfunctioning script from
#!/bin/sh
service radicale start 2>&1
to
#!/bin/sh
service radicale start > /dev/null 2>&1 &
...thereby sending every returned line to nirvana and detaching all subprocesses. Since then the server has not hung itself up and works as desired. But the feeling that this may be a major bug in PHP stays in the back of my head, with the hope that someday someone may defeat it.
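For completeness, the same detachment can also be done on the PHP side instead of inside the wrapper; a minimal sketch (the wrapper path is a placeholder):
<?php
// Hypothetical wrapper path; same script as shown above.
$script = '/usr/local/bin/start-radicale.sh';
// Redirect all output and background the command so exec() returns
// immediately instead of php-fpm blocking on a pipe that never closes.
exec('sudo ' . escapeshellarg($script) . ' > /dev/null 2>&1 &');
// Caveat: with the trailing '&' the command's real exit code is lost;
// you only learn whether the shell managed to launch it.
?>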
I have a development/testing setup on a Windows box and need to test calling a background process. I am using http://www.somacon.com/p395.php as a list of options for running a background service.
Here is the sample code I am trying to run:
$string = "PsExec.exe -d cmd /c \"mspaint\"";
echo $string;
exec($string, $data);
This works when I type it into the command line.
I haven't attempted many execs on Windows, but it would be nice to be able to test this locally before moving to a Linux box.
Right now I'm thinking it has something to do with the fact that psexec opens a new window, but I don't know how to fix that.
No Apache or PHP error logs are generated; the page just never stops loading. This also seems to override PHP's max execution time.
EDIT:
This is not a fully correct answer. The command psexec \\machine cmd.exe /C 1 & dir won't hang, because psexec first returns saying that the command 1 doesn't exist on the remote machine, and then dir is evaluated on the local machine. I was tricked by cmd's operator order: the & operator is invoked in the local cmd.exe process, not the remote one. But the logic still applies if you quote the command: psexec \\machine cmd.exe /C "1 & dir".
Original Answer:
There is something strange going on when invoking psexec from PHP on Windows. It hangs no matter whether you use shell_exec(), exec(), popen() or proc_open().
Honestly, I don't know what's going on. You could also download PsExplorer in order to trace your process's command line, arguments, etc. You'll find this tool very useful.
I'm using XAMPP on Windows, and after certain tests I found this:
First of all, create a test file (i.e. test.php) with the following content and place it on your web server so you can access it:
<?php
echo "<pre>".shell_exec("psexec \\\\machine <options> cmd.exe /C dir")."</pre>";
?>
Note that you could use GET arguments in order to create a more flexible example. To verify that this works before testing it over a web page, run php <path-to-the-file>\test.php from the command line. That works perfectly. But it won't work in a web browser.
If you kill the process, you'll get the first line: El volumen de la unidad C no tiene etiqueta. (Spanish for "The volume in drive C has no label."). The next line in my language contains an accented character, so I thought it could be an encoding (or code page) issue. So instead of cmd.exe /C dir I tested cmd.exe /C chcp 850 && dir. That, to my surprise, works.
Then I realized that no matter what command you put first, it will also work, for instance cmd.exe /C 123 & dir (note the single &, as I'm not evaluating the first command's output).
Honestly, I don't know what is going on with psexec and PHP via the web browser. More surprisingly, commands like cmd.exe /C copy cmd.exe cmd.exe.deleteme and cmd.exe /C mkdir deletethis do work!
So you could give cmd.exe /C 1 & \"mspaint\" a try, as your problem and mine seem similar. For single commands this could be a workaround, but I'm developing an unattended framework for installing software, and all my files are .cmd, .exe or .bat files.
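A minimal sketch of that prefix trick, assuming psexec.exe is on the PATH and MACHINE is a placeholder host name:
<?php
// The quoted "chcp 850 && dir" runs entirely on the remote machine;
// without the leading chcp the call hangs after the first output line.
$out = shell_exec('psexec \\\\MACHINE cmd.exe /C "chcp 850 && dir"');
echo "<pre>" . htmlspecialchars((string) $out) . "</pre>";
?>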
Let me know if you find something interesting :)
I have a solution!
This topic is old but still relevant. I wanted to use PsExec 2.2 with PHP 7.2.0 RC3 to execute a script remotely. I needed the output of the script, so I was using PsExec.exe -accepteula \\HOST -u xxx -p xxx with >> LOGFILE 2>&1.
Because I didn't want to wait for the script to finish, I used exec() like this: exec('start /p cmd /c PsExec.exe -accepteula \\HOST -u xxx -p xxx >> LOGFILE 2>&1').
My PHP script continues working and PsExec does its job too.
Everything worked fine, except that my LOGFILE always contained only the first line of the STDOUT output.
I tried many of the solutions mentioned here, but none of them worked for me. Executing the command directly in CMD, PsExec returns all lines of STDOUT, but inside PHP it does not.
In the PHP docs for exec() I found comments where some people used PsExec to run a program without waiting (http://php.net/manual/de/function.exec.php#86438). As a last resort I tried the following: exec('PsExec.exe -accepteula -i -d cmd /c PsExec.exe -accepteula \\HOST -u xxx -p xxx >> LOGFILE 2>&1')... And it works!
PsExec now returns everything into my LOGFILE. I hope this helps you and saves you some time.
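From PHP the nested call looks like the sketch below; HOST, the credentials, the remote command and the log path are all placeholders:
<?php
// Outer PsExec (-i -d) detaches immediately; the inner PsExec runs the
// remote command and its full STDOUT lands in the log file.
exec('PsExec.exe -accepteula -i -d cmd /c '
   . 'PsExec.exe -accepteula \\\\HOST -u xxx -p xxx mycommand.bat '
   . '>> C:\logs\psexec.log 2>&1');
?>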
My current solution was to use WScript.Shell:
$string = "cmd /c \"cd {$full_base}{$newSource} && ant release > compile.txt\"";
$Shell = new COM("WScript.Shell");
$exec = $Shell->Run($string, 0, false);
Not sure if this was definitively answered or not, but I was having the same problem with PsExec hanging, though in my case it hung on some machines and not on others. It turned out to be the EULA acceptance. On machines where I had run it from a command line with the same username/password as my PHP script, it ran without hanging.
Further research showed that I could specify the EULA acceptance by including it in the command executed from PHP:
This DID hang: psexec \\MACHINENAME ...
This did NOT: psexec /accepteula \\MACHINENAME ...
OK, here are the things. I also ran into a similar situation, and hopefully, since I'm commenting this late, you have figured out a way too. But for those who are still stuck, I suggest a few tips.
1) The code that has been posted is perfectly fine; no problem with it. If your page hangs indefinitely, try adding the -i -d parameters before the source exe (see the documentation HERE). You can also add the -accepteula (EULA) flag to the command to let the page load instead of waiting for the command to finish (this solves the infinite-wait problem).
2) For me these things didn't work, but that doesn't mean you shouldn't try them; they are the first steps, and if your code starts working, you're done. Otherwise, create another administrative account, then type services.msc into the Start menu. Search for the wampapache or xampp service (excluding the mysqld one) and click on that service. Go to the Log On tab and select This account. Select the previously created admin account and type its password if you set one; otherwise leave the textbox empty. Enter the account name as \\accname in the name field. If you don't know the account name, click Browse -> Advanced -> Find Now and you will see the account names. Select it, apply the settings, restart the WAMP app, and you are good to go :)
Also make sure to double-check the string you feed into your exec command. It seems trivial, but it had me stumped for a while. In many cases PHP collapses the double backslash in your machine address, which throws everything off. To get around this, create your string like:
$string = 'PSExec /accepteula \\\\MACHINENAME ...'
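To illustrate, both quoting styles below collapse the four source backslashes into the two that the shell actually sees:
<?php
// Four literal backslashes in the source become two in the string,
// which is what psexec needs for \\MACHINENAME.
$single = 'PSExec /accepteula \\\\MACHINENAME ...';
$double = "PSExec /accepteula \\\\MACHINENAME ...";
var_dump($single === $double); // bool(true)
echo $single, PHP_EOL;         // PSExec /accepteula \\MACHINENAME ...
?>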
I want to run wget as follows:
shell_exec('wget "'http://somedomain.com/somefile.mp4'"');
sleep(20);
continue my code...
What I want is for PHP to wait for the shell_exec wget download to finish before continuing with the rest of the code; I don't want to wait a set number of seconds.
How do I do this? Because as I run shell_exec wget, the file starts downloading in the background and PHP continues.
Does your URL contain the & character? If so, your wget might be going into the background and shell_exec() might be returning right away.
For example, if $url is "http://www.example.com/?foo=1&bar=2", you would need to make sure that it is single-quoted when passed on a command line:
shell_exec("wget '$url'");
Otherwise the shell would misinterpret the &.
Escaping command line parameters is a good idea in general. The most comprehensive way to do this is with escapeshellarg():
shell_exec("wget ".escapeshellarg($url));
shell_exec() does wait for the command to finish, so you don't need the sleep() call at all:
<?php
shell_exec("sleep 10");
?>
# time php c.php
10.14s real 0.05s user 0.07s system
I think your problem is likely the quotes on this line:
shell_exec('wget "'http://somedomain.com/somefile.mp4'"');
it should be
shell_exec("wget 'http://somedomain.com/somefile.mp4'");
I have this in one PHP file:
echo shell_exec('nohup /usr/bin/php -f '.CRON_DIRECTORY.'testjob.php > /dev/null 2>&1 &');
and in testjob.php I have:
file_put_contents('test.txt',time()); exit;
And it all runs just dandy. However, when I look at the process list, testjob.php does not terminate after it runs.
(Having to post this as an answer instead of a comment, as Stack Overflow still won't let me post comments...)
Works for me. I made testjob.php exactly as described, and another file, test.php, with just the given line (except I removed CRON_DIRECTORY, because testjob.php was in the same directory for me).
To be sure I was measuring correctly, I added "sleep(5)" at the top of testjob.php, and in another window I have:
watch 'ps a |grep php'
running. This happens:
1. I run test.php.
2. test.php exits immediately, but testjob.php appears in my list.
3. After 5 seconds it disappears.
I wondered if shell might matter, so I switched from bash to sh. Same result.
I also wondered if it might be because your outer script is long-running. So I put "sleep(10)" at the bottom of test.php. Same result (i.e. testjob.php finishes after 5 seconds, test.php finishes 5 seconds after that).
So, unhelpfully, your problem is somewhere other than the code you've posted.
Remove the & from the end of your command. That symbol tells nohup to continue running in the background, so shell_exec keeps waiting for the task to complete... and waiting... and waiting... until the end of time ;)
I don't even understand why you would run this command with nohup.
echo shell_exec('/usr/bin/php -f '.CRON_DIRECTORY.'testjob.php > /dev/null 2>&1');
should be enough.
You're executing PHP and making that execution a background task. That means it will run in the background until it is finished; shell_exec will not kill that process or anything similar.
You might want to set an execution limit; PHP CLI defaults to unlimited. See also set_time_limit in the PHP Manual.
So if you wonder why the PHP process does not terminate, you need to debug the script. If that's too complicated and you're unable to find out why the script runs that long, you might just want to terminate the process after some time, e.g. one minute.
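For example, a sketch of capping the script's own runtime inside testjob.php (60 seconds is an arbitrary choice):
<?php
// PHP CLI defaults to no time limit; cap it so a stuck script dies.
set_time_limit(60);
file_put_contents('test.txt', time());
?>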
I need a script (bash, perl or php, any will do) that executes another command and then exits, while the other command keeps running and exits on its own. I could schedule it via the at command, but I was curious whether there is an easier way.
#!/bin/sh
your_cmd &
echo "started your_cmd, now exiting!"
Similar constructs exist for perl and php, but in sh/bash it's very easy to run another command in the background and proceed.
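In PHP, an equivalent sketch (your_cmd is a placeholder) relies on redirecting output and appending &:
<?php
// Redirecting all output is essential: without it PHP waits for the
// command's stdout pipe to close, even with the trailing '&'.
exec('your_cmd > /dev/null 2>&1 &');
echo "started your_cmd, now exiting!\n";
?>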
edit
A very good source for generic process manipulation is the collection of start scripts under /etc/init.d. They do all sorts of neat tricks, such as keeping track of PIDs, implementing basic start/stop/restart commands, etc.
To run a command in the background, you can append an '&' to the command.
If you need the program to last past your login session, you can use nohup.
See this similar stackoverflow discussion: how to run a command in the background ...
The usual way to run a command and have it keep running when you log out is to use nohup(1). nohup prevents the given command from receiving the HUP signal when the shell exits. You also need to run it in the background with the ampersand (&) command suffix.
$ nohup some_command arg1 arg2 &
Use &:
#!/usr/bin/bash
# command1.sh: execute command2.sh and exit
command2.sh &
I'm not entirely sure if this is what you are looking for, but you can background a process executed in a shell by appending the ampersand (&) as the last character of the command.
So if you have a script, a.sh, and a.sh needs to spawn a separate process, such as executing the script b.sh, you'd do:
b.sh &
Since you mentioned Perl:
fork || exec "ls";
...where "ls" is anything at all. Repeat for as many commands as you need to fire off.
Most answers are correct in showing:
mycmd &
camh's answer goes further, keeping the command alive after logout with nohup.
Going further with advanced topics...
mycmd1 &
mycmd2 &
mycmd3 &
wait
"wait" will block processing until the backgrounded tasks are all completed. This can be useful if response times are significant such as for off-system data collection. It helps if you an be sure they will complete.
How do I subsequently foreground a process?
If your intent is to foreground a process on a subsequent logon, look into screen or tmux.
screen -dmS MVS ./mvs
or, a Minecraft example:
screen -dm java -Xmx4096M -Xms1024M -jar server.jar nogui
You can then re-attach to the terminal upon subsequent login.
screen -r
The login that launches these need not be interactive; you can use ssh remotely (plink, Ansible, etc.) to spawn them in a "drive-by" manner.