I know there have been similar questions, but they don't solve my problem...
After checking out the folders from the repo (which works fine), a method is called from jQuery to execute the following in PHP:
exec ('svn cleanup '.$checkout_dir);
session_write_close(); //Some suggestion that was supposed to help but doesn't
exec ('svn commit -m "SAVE DITAMAP" '.$file);
These expand to the following commands:
svn cleanup USER_WORKSPACE/0A8288
svn commit -m "SAVE DITAMAP" USER_WORKSPACE/0A8288/map.ditamap
1) The first line (exec ('svn cleanup')...) executes fine.
2) As soon as I call svn commit, my server hangs and everything goes to hell.
The apache error logs show this error:
[notice] Child 3424: Waiting 240 more seconds for 4 worker threads to finish.
I'm not using the php_svn module because I couldn't get it to compile on windows.
Does anyone know what is going on here? I can execute the exact same command from a terminal window and it runs just fine.
Since I cannot find any documentation on a jQuery exec(), I assume this is calling PHP's exec(). I copied this from the documentation page:
When calling exec() from within an Apache PHP script, make sure to take care of stdout, stderr and stdin (as in the example below). If you forget this and your shell command produces output, the sh and Apache daemons may never return (they will normally time out after a few minutes). From the calling web page the script may seem to not return any data.
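Applied to the commands from the question, that advice would look roughly like this (a sketch only, keeping the original variables; 2>&1 merges stderr into the captured output so nothing is left sitting on an open pipe):
<?php
// Sketch: capture stdout and stderr so Apache is not left waiting on open pipes.
exec('svn cleanup ' . $checkout_dir . ' 2>&1', $out, $ret);
exec('svn commit -m "SAVE DITAMAP" ' . $file . ' 2>&1', $out, $ret);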
If you want to start a PHP process that continues to run independently from Apache (with a different parent pid), use nohup. Example:
exec('nohup php process.php > process.out 2> process.err < /dev/null &');
hope it helps
Okay, I've found the problem.
It actually didn't have anything to do with exec running in the background, especially because a one-file commit doesn't take a lot of time.
The problem was that the commit was expecting a --username and --password that never showed up, which just caused Apache to hang.
To solve this, I edited the svnserve.conf in the folder where I installed SVN to allow non-authenticated users write access.
I don't think you'd normally want to do this, but my site already authenticates the user name and pass upon logging in.
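For reference, the kind of change involved looks roughly like this in svnserve.conf (a sketch; the directive names are standard, but the rest of the file depends on how the repository was set up):
[general]
# Allow unauthenticated clients to commit (acceptable here only because the
# web application already authenticates users itself).
anon-access = write
auth-access = write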
Alternatively, you could pass the credentials explicitly on the commit command.
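A minimal sketch of that alternative, assuming hypothetical $svn_user and $svn_pass variables taken from your own login handling (not tested against the setup in the question):
<?php
// Sketch: supply credentials and forbid prompting so the commit cannot hang
// waiting for input. $svn_user and $svn_pass are hypothetical placeholders.
exec('svn commit -m "SAVE DITAMAP" --non-interactive'
   . ' --username ' . escapeshellarg($svn_user)
   . ' --password ' . escapeshellarg($svn_pass)
   . ' ' . $file . ' 2>&1', $out, $ret);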
I'll try to explain my problem as a timeline:
I've been running several external scripts from PHP and returning the exit code back through the AJAX call.
A single call should start or stop a service on that machine. That works fine on the development machine:
OS: Raspbian
Webserver: nginx 1.2.1
PHP: 5.4.3.6
However, I've moved the code to a larger machine with much more power, and everything seemed to work fine except for one thing:
A single call causes php-fpm to freeze and never come back. On closer examination I found out that the call created a zombie process I cannot terminate (even with sudo).
OS: Ubuntu
Webserver: nginx 1.6.2
PHP: 5.5.9
The only solution seemed to be stopping the php-fpm process and restarting it. Then everything works fine again, until I call that script again.
The calling PHP line:
exec("sudo ".$script, $output, $return_var);
(All variables are normal strings with no special characters.)
The start script:
#!/bin/sh
service radicale start 2>&1
The service, by the way, does start, but every time the web server freezes and I have to restart PHP manually, which is not acceptable for a web server. And this happens only for that single script, and only for that service (Radicale) with that one command (start).
Searching Google led me to the idea that there is a conflict between the PHP functions exec() and session_start().
Links:
https://bugs.php.net/bug.php?id=44942
https://bugs.php.net/bug.php?id=44994
Their conclusion was that the bug could be worked around with a construct like this:
...
session_write_close();
exec("sudo ".$script, $output, $return_var);
session_start();
...
But in my opinion that is not a fix, more a helpless workaround, because you lose the ability to let the user know that his action actually succeeded, and instead let him believe an error has occurred. Even more confusing is the fact that it works fully on the Raspberry Pi A, but not on a 64-bit machine with a much larger CPU and 8 GB of RAM.
So is there a real solution anywhere, or is this workaround the only way to solve the problem? I've read an article about PHP having some problems with exec()/shell_exec() and recognising the return value. How can that get lost? Does anyone have a guess?
Thanks for reading this long, awful English; I'm not a native speaker and wasn't a very attentive student in my lessons.
It is likely that the new machine simply is not set up the way the Raspberry Pi was.
You need to do a few things in your shell before this will work on your larger machine:
1). Allow php to use sudo.
sudo usermod -G sudo -a your-php-user
Note that to get the username for your-php-user, you can just run a script that says:
<?php echo get_current_user(); ?>
or alternatively:
<?php echo exec('whoami'); ?>
2). Allow that user to use sudo without a password
sudo visudo - this command will open /etc/sudoers with a failsafe to keep you from botching anything.
Add this line to the very end:
your-php-user ALL=(ALL) NOPASSWD: /path/to/your/script,/path/to/other/script
You can put as many scripts there, separated by commas, as you need.
Now, your script should work just fine.
AGAIN, please note that you need to change your-php-user to whatever your php user is.
Hope this helps!
This is not a real solution, but it's a better solution than none.
Calling a bash script with
<?php
...
exec("sudo ".$script, $output, $return_var);
...
?>
ends, only in this special case, in a zombie process. As php-fpm waits for a result, it holds the line, never giving up or timing out for as long as its thread lives. So every other request to the PHP server stays in the queue and will never be processed. That may be okay for some long-living worker threads, but my request was done in a few milliseconds.
I did not find the cause of this. As far as my debugging went, it was not the triggered Radicale process's fault, since it always returned a clean 0. It seemed that the PHP process just couldn't get a return line from it, and so it waits and waits.
With no time left, I changed the malfunctioning script from
#!/bin/sh
service radicale start 2>&1
to
#!/bin/sh
service radicale start > /dev/null 2>&1 &
... so every returned line is sent to nirvana and all child processes are detached. For now the server does not hang itself up and works as desired. But the feeling that this may be a major bug in PHP still stays in the back of my head, with the hope that someday someone may defeat it.
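For anyone who prefers to keep the start script untouched, the same redirection can also be applied on the PHP side; a minimal sketch using the variables from the question:
<?php
// Sketch: discard all output and background the command so php-fpm never
// blocks on the child's pipes. Note that $output and $return_var would no
// longer carry meaningful values once the command is backgrounded.
exec("sudo " . $script . " > /dev/null 2>&1 &");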
One of my shell commands is not executing, even though other similar lines work. I am running on a Linux machine using an Ubuntu 12.04-based OS. I have tried using exec() as well; it still doesn't work.
I actually had this working at some point, when I ran into the hanging issue (waiting for command output), which is why I'm redirecting output to /dev/null. So somewhere in development something changed. We did create a Debian package to install with, and I had run that install package, so I thought maybe a file's permissions got changed when it was overwritten; I added read/write/execute for all users/groups/owners, but that didn't work either.
The code is here:
if (isset($_POST['activateXML']))
{
    if (videoConsistencyCheck())
    {
        `cp {$fileXML} /apps/video/xml.xml`;
        `sudo /apps/video/vsss restart >/dev/null 2>&1 &`;
        systemUnvalidate();
        header('Location: index.php?app='.$_GET['app']);
        die();
    }
}
I know that the first line in the if statement gets executed. The line of code works fine in the actual terminal, so that isn't the problem either. I did lots of Googling and all I could find was an unanswered question; any advice would be helpful.
EDIT: What appeared to be not working was in fact calling the command as intended, but in the bash script I was calling, the start-stop-daemon was not working.
EDIT 2: I made a test PHP file and ran the code from the terminal, and fixed the start-stop-daemon error by adding sudo to the commands, but it still doesn't work in my code. I am calling this code when a submit button is pressed.
Use the additional parameters, especially $output:
exec($command,$output);
var_dump($output);
to determine what might be wrong with your command. If that doesn't help, please show us the code where you use your exec() calls.
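A slightly fuller version of that diagnostic, also capturing stderr and the exit code (a sketch; $command stands for whatever you are actually running):
<?php
// Sketch: merge stderr into the captured output and inspect the exit code.
exec($command . ' 2>&1', $output, $return_var);
echo 'exit code: ' . $return_var . PHP_EOL;
var_dump($output);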
The issue lay with a call to a binary file in the vsss script that could only be run as root. We did not want to allow just anyone access to that binary file. The solution we came up with involves calling chmod +s on the vsss script, which sets the setuid/setgid bits so it runs with its owner's and group's permissions. We then added the PHP user, which was www-data, to the sudoers file using the NOPASSWD parameter. In my PHP code I then used the line:
exec('sudo /apps/video/vsss restart >/dev/null 2>&1 &');
The shell_exec()/backticks would not work with this method.
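For completeness, the sudoers entry described above would be along these lines, added via visudo (a sketch based on the path and user named in this answer):
www-data ALL=(ALL) NOPASSWD: /apps/video/vsss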
I work with PHP 5.4 and IIS 7.5.
If I execute a simple command, it works:
<?php
exec("dir", $r);
print_r($r);
?>
But if I open an .exe file, it doesn't work: the page keeps loading until the PHP timeout and Notepad doesn't open:
<?php
exec("notepad.exe", $r);
print_r($r);
?>
And if I execute the Notepad PHP script on the command line, it works:
php -f <file>
I think the problem is with IIS, but I don't know what it is. Thanks!
UPDATE
I tried another test case and it doesn't work; the page finishes loading but the task isn't deleted:
<?php
$r = exec("SCHTASKS.exe /Delete /TN TaskTest /F");
print_r($r);
?>
The IIS_IUSRS group has permission to execute schtasks.
SOLUTION
Notepad doesn't open because it is an interactive program.
For Task Scheduler, give the IUSR account read and write permissions on the tasks folder (C:\Windows\System32\Tasks).
What makes you think it isn't working?
Be aware that Windows services cannot normally interact with the desktop, so it may be the case that Notepad is starting, just not anywhere you can see it - and as PHP will wait for it to terminate, and nobody can see it to terminate it, it'll time out, as you're seeing.
It may also be the case that the user that the web server is running as does not have execute permissions on the folder that notepad is in (assuming it had the relevant path).
The problem is that you are instructing exec to gather and return the output of the spawned process and the process must terminate for this to happen. Since Notepad does not terminate immediately PHP is stuck waiting forever (you can test this by running any non-interactive process instead, for example net.exe).
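A quick way to see the difference is to swap in a command that exits on its own; a sketch (net.exe simply prints its usage text and terminates, so exec() returns immediately):
<?php
// Sketch: a non-interactive process terminates, so exec() can collect its
// output and return; swap in notepad.exe and the call blocks until timeout.
exec('net.exe 2>&1', $out, $code);
print_r($out);
?>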
Takeaway: exec and friends are not meant to launch interactive processes.
In any case, exec will spawn a command interpreter which in turn will spawn Notepad. However, due to security features introduced in recent Windows versions, and depending on the user that IIS is running as, these processes will not create visible windows on your current desktop so there will be nothing for you to see. You will be able to verify that they were spawned using Task Manager or another equivalent program.
My project calls for 3 PHP scripts that are run with if-else conditions. The first script is loaded on the index page of the site to check if a condition is set, and if it is, it calls the second script. The second script checks whether other conditions are set, and it finally calls the last script if everything is good.
Now I could do this by just including the scripts in the if statements, but since the final result is a resource-hogging MySQL dump, I need it to run independently of the original trigger page.
Also, those scripts should continue doing their thing once triggered, regardless of the user's actions on the index page.
One last thing: it should be able to run on both Windows and *nix.
How would you do this?
Does the following code make any sense?
if ($blah != $blah_size) {
    shell_exec('php first-script.php > /dev/null 2>/dev/null &');
}
// If the size matches, die
else {
    die();
}
Thanks a million in advance.
UPDATE: just in case someone else is going through the same deal.
There seems to be a bug in PHP when running scripts as CGI, but calling the command-line binary works under Apache with all the versions I've tested.
See the bug https://bugs.php.net/bug.php?id=11430
So instead I call the script like this:
exec("php-cli mybigfile.php > /dev/null 2>/dev/null &");
Or you could call it through the shell. It works on *nix systems, but my local Windows box is hopeless, so if anyone runs it on Windows and it works, please update this.
I would not do this by shell exec because you'd have no control over how many of these resource-hogging processes would be running at any one time. Thus, a user could go click-click-click-click and essentially halt your machine.
Instead, I'd build a work queue. Instead of running the dump directly, the script would submit a record to some sort of FIFO queue (could be a database table or a text file in a dir somewhere) and then immediately return. Next you'd have a cron script that runs at regular intervals and checks the queue to see if there's any work to do. If so, it picks the oldest thing, and runs it. This way, you're assured that you're only ever running one dump at a time.
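A minimal sketch of that idea using a directory of job files and a cron-driven worker (the queue path and the dumped script name are hypothetical; a database table would work just as well):
<?php
// enqueue.php - included by the trigger page: record the job and return at once.
@mkdir('/tmp/dump-queue', 0700, true);
file_put_contents('/tmp/dump-queue/' . uniqid('job_', true), date('c'));

<?php
// worker.php - run from cron (e.g. every minute): process at most one job per run.
$jobs = glob('/tmp/dump-queue/job_*');
if ($jobs) {
    sort($jobs);                                      // oldest first (uniqid() is time-based)
    unlink(array_shift($jobs));                       // claim the oldest job
    exec('php first-script.php > /dev/null 2>&1');    // the resource-heavy dump
}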
The easiest way I can think of is to do
exec("screen -d -m php long-running-script.php");
and then it will return immediately and run in the background. screen will allow you to connect to it and see what's happening.
You can also do what you're doing with 'nohup php long-running-script.php', or by writing a simple C app that does daemonize() and then execs your script.
I am trying to run a php script on my remote Virtual Private Server through the command line. The process I follow is:
Log into the server using PuTTY
At the command prompt, type: php myScript.php
The script runs just fine. BUT THE PROBLEM is that the script stops running as soon as I close the PuTTY console window.
I need the script to keep on running endlessly. How can I do that? I am running Debian on the server.
Thanks in advance.
I believe that Ben has the correct answer, namely to use the nohup command. nohup stands for "no hangup" and means that your program should ignore the hangup signal generated when your PuTTY session is disconnected, either because you logged out or because you were timed out.
You need to be aware that the output of your command will be appended to a file in the current directory named nohup.out (or $HOME/nohup.out if permissions prevent you from creating nohup.out in the current directory). If your program generates a lot of output then this file can get very large; alternatively, you can use shell redirection to send the output of the script to another file.
nohup php myscript.php >myscript.output 2>&1 &
This command will run your script and send all output (both standard and error) to the file myscript.output which will be created anew each time you run the program.
The final & causes the script to run in the background so you can do other things whilst it is running or logout.
An easy way is to run it through nohup:
nohup php myScript.php &
If you run the php command in a screen session and detach the screen, it won't terminate when you close your console.
Screen is a terminal multiplexer that allows you to manage many processes through one physical terminal. Each process gets its own virtual window, and you can bounce between virtual windows interacting with each process. The processes managed by screen continue to run when their window is not active.
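A typical session looks something like this (the session name is arbitrary):
screen -S myscript        # start a named screen session
php myScript.php          # run the script inside it
# press Ctrl-A then d to detach; the script keeps running
screen -r myscript        # reattach later to check on it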