The webserver is configured with one php-fpm pool per site.
I am trying to make my PHP "kind of" asynchronous by calling, from PHP:
exec('bash -c "exec php index.php" > /dev/null 2>&1 &')
I am successfully using the PHP APC cache, but the php-fpm pool's cache is obviously not shared with the cache of the CLI command executed above. Is there any way to execute a PHP script with exec() IN the php-fpm pool from which the calling script is served?
An option like
exec('php-fpm --pool=domain.tld index.php')
I could not find any information about how Apache talks to a pool process in a php-fpm configuration, to get an idea of how it's done.
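For reference, Apache and nginx never shell out to the pool; they speak the FastCGI protocol to the socket the pool listens on, and php-fpm workers (which share the pool's APC/opcode cache) serve the request. A command-line FastCGI client can do the same, which would run a script inside the pool rather than as a fresh CLI process. A minimal sketch, assuming the cgi-fcgi tool from the fcgi package and a pool listening on a unix socket (both paths below are placeholders for your setup):

```shell
# Hypothetical paths -- substitute your pool's socket and document root.
SCRIPT_FILENAME=/var/www/domain.tld/index.php \
REQUEST_METHOD=GET \
cgi-fcgi -bind -connect /var/run/php-fpm/domain.tld.sock
```

This needs a live pool to run against; if your pool listens on TCP instead of a unix socket, pass host:port (e.g. 127.0.0.1:9000) after -connect.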
Related
So I am running a Lumen application which spins up processes.
These processes are meant to run in the background as daemons. We send a start command to the API in PHP, which spins up the process using shell_exec. This used to be absolutely flawless, but something strange has happened in later versions of PHP/Apache2: the process is forked and remains completely dependent on either Apache or php-fpm, depending on which MPM module/webserver you're using.
This can be tested quite easily with an index.php containing:
shell_exec("screen -dmS testing bash -c 'while true; do echo test && sleep 1 ; done'")
Then restart php-fpm or Apache.
Is there any way of stopping this behaviour? I have tried > /dev/null etc. and & at the end, without success. I really need the process not to be dependent on php-fpm or Apache2.
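One hedged workaround, sketched below: launch the worker through setsid so it gets its own session and process group, out of reach of the signal sent to the web server's process group on restart. The marker-file worker here is a stand-in for the real daemon:

```shell
# Detach the spawned worker into its own session with setsid.
# The bash one-liner below is a placeholder for the real daemon command.
rm -f /tmp/detach_demo.ok
setsid bash -c 'sleep 1; echo alive > /tmp/detach_demo.ok' \
  < /dev/null > /dev/null 2>&1 &
# give the detached child time to do its work
sleep 2
```

From PHP that would be shell_exec('setsid myworker < /dev/null > /dev/null 2>&1 &'). One caveat: on systemd systems, restarting the apache2 or php-fpm unit kills every process in the unit's control group regardless of session (which matches the "later versions" behaviour described above); there, launching via systemd-run, which places the child in its own unit, is the more reliable escape.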
I've written a PHP CLI script that executes on a Continuous Integration environment. One of the things it does is it runs Protractor tests.
My plan was to get PHP 5.4's built-in web server to run in the background:
php -S localhost:9000 -t foo/ bar.php &
And then run protractor tests that will use localhost:9000:
protractor ./test/protractor.config.js
However, PHP's built-in web server doesn't run as a background service. I can't seem to find anything that will allow me to do this with PHP.
Can this be done? If so, how?
If this is absolutely impossible, I'm open to alternative solutions.
You could do it the same way you would run any application in the background.
nohup php -S localhost:9000 -t foo/ bar.php > phpd.log 2>&1 &
Here, nohup keeps the server alive after you close the terminal (it makes the process ignore SIGHUP), the trailing & runs it in the background, and the redirects capture stdout (>) and stderr (2>&1) in phpd.log.
Also, here's a way to stop a built-in PHP server running in the background.
This is useful when you need to run tests at some stage of CI:
# Run in background as Devon advised
nohup php -S localhost:9000 -t foo/ bar.php > phpd.log 2>&1 &
# Get last background process PID
PHP_SERVER_PID=$!
# running tests and everything...
protractor ./test/protractor.config.js
# Send SIGQUIT to php built-in server running in background to stop it
kill -3 $PHP_SERVER_PID
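A variant of the same flow, offered as a sketch: register the kill in an EXIT trap so the server is cleaned up even when the test suite aborts part-way. Here sleep 300 stands in for `php -S localhost:9000 -t foo/ bar.php` purely to keep the snippet self-contained:

```shell
# Stand-in for: nohup php -S localhost:9000 -t foo/ bar.php > phpd.log 2>&1 &
sleep 300 > phpd.log 2>&1 &
PHP_SERVER_PID=$!
# Kill the background server when this script exits, even if the
# test suite below fails part-way through.
trap 'kill "$PHP_SERVER_PID" 2>/dev/null' EXIT

# ... run the tests here: protractor ./test/protractor.config.js ...
```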
You can use &> to redirect both stdout and stderr to /dev/null (nowhere).
nohup php -S 0.0.0.0:9000 -t foo/ bar.php &> /dev/null &
I have a working shell script using killall to kill all instances of a program like below:
killall abc
Now I have written a PHP webpage that executes this script using the shell_exec function:
shell_exec('sh ./myscript.sh');
The problem is that my PHP code works correctly on the command line with "php myscript.php", but it does not work from a browser. I know that the user on the command line is "root" while under PHP it is "apache" (I checked with 'whoami').
The Linux distribution is CentOS 6, which uses SELinux. I have changed the SELinux status to permissive.
Things I've checked:
PHP safe_mode is off
shell_exec() is not present in disable_functions in php.ini
Is there a way to run scripts that use the kill command from PHP?
Thank you for your help.
You either have to run Apache as root (insecure); or, much safer, run the processes you are trying to kill as 'apache'; or configure your sudoers file to grant apache the right to run the killall command:
# vim /etc/sudoers
apache localhost=(ALL) NOPASSWD:/usr/bin/killall
and then change myscript.sh to run sudo killall abc
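Two CentOS-specific pitfalls, stated as assumptions about a default install: sudo ships with Defaults requiretty, which makes it refuse to run from a process with no terminal attached (such as an apache worker), so that default usually has to be relaxed for the apache user as well; and calling sudo with -n makes the script fail fast instead of hanging on a password prompt if the rule is missing:

```
# /etc/sudoers (edit with visudo)
Defaults:apache !requiretty
apache localhost=(ALL) NOPASSWD: /usr/bin/killall

# myscript.sh
sudo -n /usr/bin/killall abc
```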
I'm running an Apache server on CentOS and would like to be able to restart the webserver from a protected page using the following:
PHP:
<?php
ignore_user_abort(true);
shell_exec('sh sh/restart.sh');
?>
restart.sh:
service httpd restart
My question is if the web server shuts down and the PHP stops executing will the sh script continue running to bring the web server back online?
You should be fine, since Apache doesn't shut down until after the command is issued. But if you really want to be safe, use nohup:
shell_exec('nohup sh sh/restart.sh');
If your PHP runs as an Apache module, then once you kill httpd your script will be terminated instantly. So you need to delegate the restart to e.g. a command-line script (called using exec() or shell_exec()).
You might be able to add an & at the end of the command. This will fork the process and run it in the background, so it will not depend on Apache still running.
shell_exec('sh sh/restart.sh &');
If this works, you should not need ignore_user_abort().
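The survival claim is easy to check outside Apache. In the sketch below, a parent shell launches a nohup'd background child and exits immediately; the child keeps running and completes its work, which is the same mechanism that lets restart.sh outlive the PHP worker that launched it:

```shell
# Parent-exit survival demo; the inner command stands in for restart.sh.
rm -f /tmp/restart_demo.ok
bash -c 'nohup bash -c "sleep 1; echo ok > /tmp/restart_demo.ok" > /dev/null 2>&1 &'
# The parent bash -c above has already exited; give the orphaned child time.
sleep 2
cat /tmp/restart_demo.ok   # prints "ok"
```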
I'm parallelizing a PHP script using Supervisor. When my script gets a certain response from the database, it executes a command to stop all the processes under control by the Supervisord daemon using supervisorctl. Here is the command:
// gracefully stop all processes in supervisor's group called processes
$cmd = 'sudo /usr/bin/supervisorctl stop processes:*';
exec( $cmd, $outputs );
The problem is that this command seems to have no effect when triggered from within a PHP script under Supervisor's control. Why?
If I start this group of processes running within Supervisor, and then trigger another instance of the script directly from the command line, it works and all the Supervisor processes are stopped.
What is going on? Can daemonized PHP scripts not exec() shell commands?
I checked Supervisor's log files and found the error message "sorry, you must have a tty to run sudo." From what I can figure, this happens because Supervisor has daemonized my PHP scripts: sudo's default configuration refuses to run from a process that has no terminal attached.
The solution was to run Supervisor as the current user, which it does by default unless you start it with the sudo command as I had been doing ( sudo /usr/bin/supervisord ). I was doing that because my scripts didn't have all the access they needed to run ( I guess I was lazy long ago when I set up my process ).
After fixing this, I could start Supervisor without sudo ( /usr/bin/supervisord ), and I no longer needed to execute supervisorctl ( /usr/bin/supervisorctl ) with sudo to control it, thus solving the root problem of not being able to invoke sudo from a daemonized process.
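If the child scripts do need extra privileges, an alternative to running the whole daemon as root is to set the user per program in Supervisor's configuration. A fragment, where the program name, command, and user are placeholders for illustration:

```
; /etc/supervisord.conf fragment (names below are assumptions)
[program:worker]
command=/usr/bin/php /var/www/app/worker.php
user=deploy        ; run this program as an unprivileged app user
autorestart=true
```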