Using nohup or setsid in PHP as FastCGI - php

I'm trying to run a potentially long background process in response to an incoming AJAX request, but using nohup or setsid via shell_exec causes the server to fork bomb. We use suexec and FastCGI, and when it bombs it brings the entire server to a crawl.
shell_exec("nohup /home/me/myscript.php");
The script doesn't do anything lengthy right now; it just outputs to a non-existent file (which never happens, because it blows up first).
Thanks!

I've always read the warnings at http://php.net/manual/en/intro.pcntl.php (although you're using nohup, I know) as saying that forking from webserver processes is not a safe way to go. If a background process needs starting, I'll create a daemon / always-running job process which can receive such requests (and has nothing to do with the webserver), and which forks/nohups at will.
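A minimal sketch of that handoff (all paths and job fields here are hypothetical): the AJAX handler only drops a small job file, and a separate always-running worker, started outside the webserver, picks it up and forks/nohups at will.

```php
<?php
// --- in the AJAX handler (runs under FastCGI; never forks) ---
$jobDir = '/tmp/myapp-jobs';
if (!is_dir($jobDir)) {
    mkdir($jobDir, 0700, true);
}
$job = ['task' => 'myscript', 'args' => ['foo' => 'bar'], 'queued_at' => time()];
// Write atomically: create under a temporary name, then rename into place,
// so the worker never sees a half-written job file.
$tmp = tempnam($jobDir, 'tmp');
file_put_contents($tmp, json_encode($job));
rename($tmp, $jobDir . '/' . uniqid('job', true) . '.json');

// --- in the worker daemon (started by cron/supervisord, NOT by the webserver) ---
foreach (glob($jobDir . '/job*.json') as $file) {
    $job = json_decode(file_get_contents($file), true);
    unlink($file); // claim the job
    // ...run the long task here; the webserver is never involved...
}
```

The key point is that the only process that ever forks is the worker, which was never spawned by Apache/FastCGI in the first place.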

Related

Best practise to execute long running PHP scripts

We need to distribute software which contains a PHP script that will run for some minutes. I am therefore searching for a best-practice way to do this in 2017.
It has to be invoked by an HTTP request. No HTTP request should be left waiting for several minutes, so the script has to keep running AFTER the visitor has received his HTTP response.
It has to run periodically (every night) by default, like a cron job. Note: since the software is distributed to clients, there is no way for us to add a cron job manually (we have no access to our clients' servers). Everything should be accomplished within PHP code.
(Please note that I read existing blog posts and Stackoverflow questions myself but I could not find a satisfying answer)
Maybe anyone knows how frameworks like Symfony and Laravel or webshops like Magento accomplish such tasks? Still I want to know how to do it by myself in plain PHP without using frameworks or libraries.
Many solutions exist:
using exec (rather insecure) to trigger a background job (recommended in the comments; I would probably prefer symfony/process, but it is insecure nevertheless).
using cron to trigger a symfony process every so often; this doesn't go over HTTP, so it is far more secure.
using php-fpm, which lets you send a response without stopping the process, via fastcgi_finish_request.
using a queue system (SQS, RabbitMQ, Kafka and so on).
using a cron manager in PHP.
using a daemon plus something like supervisord to make sure it runs continuously.
The best solutions are definitely queues and cron, then PHP-FPM; the rest is just rubbish.
There is absolutely no way for you to run on someone else's server without doing something that will, at some point, stop working.
Sidenote: you said you did not want libraries in order to know how to do it yourself, I added links to libraries as reading them may give you a deeper knowledge of the technology, these libraries are really high quality.
Magento only runs its cron jobs if you set up a regular cron job for Magento. It has a cron.sh that runs every minute and executes jobs in Magento's queue.
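For illustration, the usual Magento crontab entry looks roughly like this (the path is a placeholder; check your Magento version's documentation for the exact form):

```
*/1 * * * * /bin/sh /path/to/magento/cron.sh
```

So even Magento's "built-in" scheduler ultimately relies on a real system cron job to drive it.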
Any solution to execute long-running tasks via HTTP involves web-server configuration.
Finally, I think there are two ways to start a long-running process in PHP via an HTTP request (without making the user wait a long time):
Using FPM and sending the response to the user with fastcgi_finish_request(). After the response is sent you can do whatever you want, for example long-running tasks. (Here you don't have to start a new process; just continue in PHP.)
fastcgi_finish_request();
longrunningtask();
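A slightly fuller sketch of option 1 (longrunningtask() is a placeholder; fastcgi_finish_request() only exists when PHP runs under FPM, hence the guard):

```php
<?php
ignore_user_abort(true); // keep going even if the client disconnects
set_time_limit(0);       // no time limit for the background part

echo json_encode(['status' => 'accepted']);

// Flush the full response to the client; only available under PHP-FPM.
if (function_exists('fastcgi_finish_request')) {
    fastcgi_finish_request();
}

// From here on, the user already has their response.
longrunningtask();

function longrunningtask()
{
    // placeholder for the actual work
}
```

ignore_user_abort() and set_time_limit(0) matter here: without them the background part can be killed by a client disconnect or by max_execution_time.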
Create a new process using one of the following functions. Redirect STDOUT and STDERR to /dev/null and send the process to the background (you have to do both). If you still need output from the new process, have it write to a log file.
exec('php longrunningtask.php >/dev/null 2>/dev/null &');
shell_exec('php longrunningtask.php >/dev/null 2>/dev/null &');
system('php longrunningtask.php >/dev/null 2>/dev/null &');
passthru('php longrunningtask.php >/dev/null 2>/dev/null &');
Or use proc_open() like symfony/process does.
Notes:
symfony/process: the docs note that you can use FPM and fastcgi_finish_request() for long-running tasks.
Security: I can see no built-in security risk; you just have to do things right and everything is fine (you could use password protection, validate possible inputs to your commands, etc.).
For the cron-job part of the question there is no answer. I don't think it's possible.
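For completeness, a proc_open() variant of option 2 (a sketch; longrunningtask.php is a placeholder). All three standard streams point at /dev/null, and the trailing "&" detaches the child inside the shell, so proc_close() returns immediately instead of waiting for the task to finish:

```php
<?php
$descriptors = [
    0 => ['file', '/dev/null', 'r'],   // stdin
    1 => ['file', '/dev/null', 'w'],   // stdout
    2 => ['file', '/dev/null', 'w'],   // stderr
];
$process = proc_open('php longrunningtask.php &', $descriptors, $pipes);
$started = is_resource($process);
if ($started) {
    proc_close($process); // returns at once; the detached task keeps running
}
```

This is essentially what the exec()/shell_exec() one-liners above do, but with explicit control over the child's file descriptors.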

How do I get the progress of an rsync run by a php exec command

I have a shell script (rsync.sh) with an rsync command in it:
rsync -rtlv --password-file=/path/to/password/file/file.txt --bwlimit=5000 /local/root/path/$1 username@remoteserver::remote/root/path/$2
I then run that from PHP (rsync.php) with an exec command:
exec('/path/to/shell/script/rsync.sh local/specific/path/ remote/specific/path/', $progress, $errors);
This all works fine. I get the progress once it's finished and I parse it. So far I've only been testing on a few smaller files. However, once this is put into production I expect it to run on several large files that will take over an hour to finish, and I would like to be able to view the progress as it happens. If I put the --progress flag in there, I'm not sure exactly how I'll get the progress back through the exec. Any ideas?
I think I'll have to make the exec asynchronous and somehow post the progress to a database where it can be collected and displayed.
You can't. The PHP exec() function does not return until the command completes.
This is really not a very good task for PHP: web servers generally do not cope well with PHP scripts (and, hence, web requests) that take hours to complete. That being said, you will be able to get closer to the intended behavior using the proc_open() family of functions, which allows you to start a process that runs in parallel with your PHP script.
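A sketch of that approach: stream the command's output line by line with proc_open() instead of waiting for exec() to return everything at the end. In production the command would be the rsync call from the question with --progress added; here a harmless stand-in command is used so the sketch is runnable anywhere:

```php
<?php
// In production, something like:
//   $cmd = 'rsync -rtlv --progress --bwlimit=5000 src/ user@host::module/dst/';
$cmd = "printf 'building file list\\nfile1\\nfile2\\n'"; // stand-in for this sketch

$descriptors = [
    1 => ['pipe', 'w'],  // stdout
    2 => ['pipe', 'w'],  // stderr
];
$lines = [];
$process = proc_open($cmd, $descriptors, $pipes);
if (is_resource($process)) {
    while (($line = fgets($pipes[1])) !== false) {
        // Each output line arrives here while the command is still running;
        // parse it and write the progress to a database, file, or socket.
        $lines[] = rtrim($line, "\n");
    }
    fclose($pipes[1]);
    fclose($pipes[2]);
    proc_close($process);
}
```

One caveat: rsync's --progress updates are separated by carriage returns rather than newlines while a file is transferring, so for per-file percentages you would read with stream_get_line($pipes[1], 4096, "\r") instead of fgets().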

PHP simple daemon without using pcntl_fork

Part of my web application is a background script that polls from a beanstalkd server and process data.
This script needs to run continuously (like a daemon). If it crashes, it needs to be started again. It also can't be started twice (more precisely run twice).
As I want to ease the deployment and development process, I want to avoid using pcntl_fork. It's not available on Windows, it necessitates recompiling PHP on Mac, sometimes on Linux too...
Can I do this simply using a bash script to launch the PHP script in background?
# verify that the script is not already running
...
/usr/bin/php myScript.php &
If I execute this script via crontab every hour or so, my process should run continuously and be restarted at most one hour after it crashes?
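One way to fill in the "verify that the script is not already running" step is flock(1): it holds an exclusive lock on a lock file for the lifetime of the PHP process, so a second invocation (e.g. from the hourly cron) exits at once instead of starting a duplicate. Paths here are placeholders.

```shell
#!/bin/sh
LOCKFILE=/tmp/myscript.lock

# -n: fail immediately if the lock is already held by a running instance
flock -n "$LOCKFILE" /usr/bin/php myScript.php &
```

The lock is released automatically when the PHP process exits (even on a crash), which is the main advantage over hand-rolled pidfile checks against ps.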
Assuming blindly that you control the server(s) on which your scripts run, Supervisor is probably a good solution for you.
It's a process control daemon, written in Python. You can configure it to start your PHP script and keep it running. The PHP script itself doesn't need to do anything special. No forking, no manual process control, nothing.
On the other hand, you've also expressed concern about pcntl_fork not being available on Windows. If you're really running this thing on Windows, Supervisor isn't going to work out for you, as it isn't Windows friendly. Keep in mind that Windows isn't really friendly to Unix-style daemonization either, as it would want to control the daemon as a Service. While that's possible, it's not exactly an easy or elegant solution.
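For reference, a minimal supervisord program section for such a worker might look like this (the program name, paths, and log locations are placeholders):

```
[program:beanstalk-worker]
command=/usr/bin/php /path/to/myScript.php
autostart=true
autorestart=true
startretries=3
stdout_logfile=/var/log/beanstalk-worker.out.log
stderr_logfile=/var/log/beanstalk-worker.err.log
```

autorestart=true covers the "restart if it crashes" requirement, and Supervisor runs exactly one instance per program section by default, which covers "can't be started twice".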

Persistent PHP socket server

I'm planning the development of a server written in PHP that can service socket requests. I use a free host (Heliohost) for testing, and it has cPanel. So far the only thing I've been able to think of to keep a PHP script always running is a cron job that runs a bash script which checks ps to see if the PHP script is already running and, if it isn't, starts it.
Is there a better way? Perhaps a way for a PHP thread to be started on an HTTP request and continue to run in Apache after the request has been serviced?
You will almost certainly not have success running persistent processes from Apache. It is designed to prevent that scenario (though if you can get to the fork(2) system call, it is probably doable). I wouldn't recommend trying it, though.
What would make more sense is if you use a hosting provider that gives you the ability to write your own crontab(5) specifications and run the PHP interpreter directly. Then you could just add a line to your crontab(5) like:
@reboot /path/to/php /path/to/script.php
Your script should probably perform the usual daemonization tasks so that cron(8) isn't stuck waiting for your process to exit.
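If you'd rather not implement full daemonization in the PHP script itself, redirecting output and backgrounding in the crontab(5) entry gets most of the way there (paths are placeholders):

```
@reboot /path/to/php /path/to/script.php >/dev/null 2>&1 &
```

With the redirection in place, cron has nothing to capture and doesn't wait on the process.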

Constantly Running Gearman Worker

I have a process I'd like to be able to run in the background by starting up a Gearman Client any time.
I've found success by opening up two SSH connections to my server, and in one starting the worker and in the other then running the client. This produces the desired output.
The problem is that I'd like to have a worker constantly running in the background so I can just call up a client whenever I need the process done. But as soon as I close the terminal in which the worker PHP file is running, a call to the client does not work; the worker seems to die.
Is there a way to have the worker run constantly in the background, so calling a new client will work without having to start up a new worker?
Thanks!
If you want a program to keep running even after its parent is dead (i.e. you've closed your terminal), you must invoke it with nohup:
nohup your-command &
Quoting the relevant Wikipedia page I linked to:
nohup is a POSIX command to ignore the HUP (hangup) signal, enabling the command to keep running after the user who issues the command has logged out. The HUP (hangup) signal is by convention the way a terminal warns dependent processes of logout.
For another (possibly more interesting) solution, see the following article: Dæmonize Your PHP.
It points to Supervisord, which makes sure a process is still running, relaunching it if necessary.
Is there a way to have the worker run constantly in the background, so calling a new client will work without having to start up a new worker?
Supervisor!
The 2009 PHP Advent Calendar has a quick article on using Supervisor (and other tricks) to create constantly-running PHP scripts without having to deal with the daemonization process in PHP itself.
