Start a background symfony/process from symfony/console - php

I'm attempting to create a CLI tool (symfony/console) that manages long-running child processes (symfony/process) which consume a message queue. I have two commands, Consume and Listen. Consume is a wrapper for Listen so it can be run in the background. Listen is a long-running dump of the MQ that shows additional messages as they're added to the queue.
Problem: when I invoke the CLI command for Listen from within Consume, it starts the process and gives me a PID, but then the child process immediately dies. I need to figure out how to get Consume to spin off multiple Listen processes that actually stay running.
In case it's relevant, the OS(es) that this will run on are SLES 12 and Ubuntu 14.04 using PHP 5.5.
Some Code (relevant snippets)
Listen
// Runs as: php mycli.php mq:listen
// Keeps running until you ctrl+C
// On the commandline, I can run this as
// nohup php mycli.php mq:listen 2>&1 > /dev/null &
// and it works fine as a long running process.
protected function execute(InputInterface $input, OutputInterface $output)
{
    $mq = Mq::connect();
    while (true) {
        // read the queue
        $mq->getMessage();
    }
}
Consume
// Runs as: php mycli.php mq:consume --listen
// Goal: Run the mq:listen command in the background
protected function execute(InputInterface $input, OutputInterface $output)
{
    if ($input->getOption('listen')) {
        $process = new Process('php mycli.php mq:listen');
        $process->start();
        $pid = $process->getPid();
        $output->writeln("Worker started with PID: $pid");
    }
}

A task like this would normally be delegated to a process supervisor such as Supervisor. It is quite dangerous to leave processes orphaned like this: if your message queue client loses its connection while the processes keep running, they effectively turn into zombies.
You need to keep the mq:consume command running for the duration of each subprocess. This is achievable as indicated in the last three examples at: http://symfony.com/blog/new-in-symfony-2-2-process-component-enhancements
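For example, a minimal sketch along those lines (the worker count and polling interval here are arbitrary choices, not from the original post): Consume starts the Listen workers, then stays alive until every one of them has exited.

protected function execute(InputInterface $input, OutputInterface $output)
{
    $workers = [];
    for ($i = 0; $i < 3; $i++) {
        $process = new Process('php mycli.php mq:listen');
        $process->start();
        $output->writeln('Worker started with PID: ' . $process->getPid());
        $workers[] = $process;
    }

    // If execute() returns while children are still running, they get
    // killed or orphaned, so block here until all workers have exited.
    while (count($workers) > 0) {
        foreach ($workers as $i => $process) {
            if (!$process->isRunning()) {
                unset($workers[$i]);
            }
        }
        sleep(1);
    }
}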

Related

How to launch a Symfony Command asynchronously during a request using the Symfony Process component?

I'm trying to execute a Symfony Command using the Symfony Process component so that it runs asynchronously when an API request comes in.
When I do so, I get Exit Code: 127 (Command not found), but when I run the command manually from my console it works like a charm.
This is the call:
public function asyncTriggerExportWitnesses(): bool
{
    $process = new Process(['php /var/www/bin/console app:excel:witness']);
    $process->setTimeout(600);
    $process->setOptions(['create_new_console' => true]);
    $process->start();
    $this->logInfo('pid witness export: ' . $process->getPid());

    if (!$process->isSuccessful()) {
        $this->logError('async witness export failed: ' . $process->getErrorOutput());
        throw new ProcessFailedException($process);
    }

    return true;
}
And this is the error I get:
The command "'php /var/www/bin/console app:excel:witness'" failed.
Exit Code: 127 (Command not found)
Working directory: /var/www/public
Output:================
Error Output:================
sh: exec: line 1: php /var/www/bin/console app:excel:witness: not found
What is wrong with my usage of the Process component?
Calling it like this doesn't work either:
$process = new Process(['/usr/local/bin/php', '/var/www/bin/console', 'app:excel:witness']);
this results in the following error:
The command "'/usr/local/bin/php' '/var/www/bin/console' 'app:excel:witness'" failed.
Exit Code: ()
Working directory: /var/www/public
Output:
================
Error Output:
================
First, note that the Process component is not meant to run asynchronously after the parent process dies, so triggering async jobs to run during an API request is not a good use case.
These two comments in the docs about running things asynchronously are very pertinent:
If a Response is sent before a child process had a chance to complete, the server process will be killed (depending on your OS). It means that your task will be stopped right away. Running an asynchronous process is not the same as running a process that survives its parent process.
If you want your process to survive the request/response cycle, you can take advantage of the kernel.terminate event, and run your command synchronously inside this event. Be aware that kernel.terminate is called only if you use PHP-FPM.
Beware also that if you do that, the said PHP-FPM process will not be available to serve any new request until the subprocess is finished. This means you can quickly block your FPM pool if you’re not careful enough. That is why it’s generally way better not to do any fancy things even after the request is sent, but to use a job queue instead.
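For completeness, here is a sketch of that kernel.terminate approach (the subscriber class name is hypothetical, and the caveats quoted above still apply): the command runs synchronously, but only after the response has been sent.

use Symfony\Component\EventDispatcher\EventSubscriberInterface;
use Symfony\Component\HttpKernel\Event\TerminateEvent;
use Symfony\Component\HttpKernel\KernelEvents;
use Symfony\Component\Process\Process;

class ExportAfterResponseSubscriber implements EventSubscriberInterface
{
    public static function getSubscribedEvents(): array
    {
        return [KernelEvents::TERMINATE => 'onTerminate'];
    }

    public function onTerminate(TerminateEvent $event): void
    {
        // The response is already sent; this blocks the FPM worker, not the client.
        $process = new Process(['php', '/var/www/bin/console', 'app:excel:witness']);
        $process->run();
    }
}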
If you want to run jobs asynchronously, just store the job "somewhere" (e.g. a database, Redis, a text file, etc.), and have a decoupled consumer go through the "pending jobs" and execute whatever you need without triggering the job within an API request.
The above is very easy to implement, but you could also just use Symfony Messenger, which will do it for you. Dispatch messages on your API request; consume messages with your job queue consumer.
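As an illustration, a minimal Messenger sketch (ExportWitnessesMessage and its handler are hypothetical names; the attribute syntax assumes Symfony 5.4+):

use Symfony\Component\Messenger\Attribute\AsMessageHandler;
use Symfony\Component\Messenger\MessageBusInterface;

// In the controller: dispatch a message describing the job, then return immediately
public function asyncTriggerExportWitnesses(MessageBusInterface $bus): bool
{
    $bus->dispatch(new ExportWitnessesMessage());
    return true;
}

// Executed later by a separate worker process,
// e.g. started with: bin/console messenger:consume async
#[AsMessageHandler]
class ExportWitnessesHandler
{
    public function __invoke(ExportWitnessesMessage $message): void
    {
        // run the actual Excel export here, outside the request/response cycle
    }
}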
All this being said, your use of Process is also failing because you are mixing sync and async methods.
Your second attempt at calling the command at least succeeds in finding the executable, but it fails because you call isSuccessful() before the job is done.
If you use start() (instead of run()), you cannot simply call isSuccessful() directly, because the job is not finished yet.
Here is how you would execute an async job (although again, this would very rarely be useful during an API request):
use Symfony\Component\Console\Command\Command;
use Symfony\Component\Console\Input\InputInterface;
use Symfony\Component\Console\Output\OutputInterface;
use Symfony\Component\Process\PhpExecutableFinder;
use Symfony\Component\Process\Process;

class ProcessCommand extends Command
{
    protected static $defaultName = 'process_bg';

    protected function execute(InputInterface $input, OutputInterface $output)
    {
        $phpBinaryFinder = new PhpExecutableFinder();
        $pr = new Process([$phpBinaryFinder->find(), 'bin/console', 'bg']);
        $pr->setWorkingDirectory(__DIR__ . '/../..');
        $pr->start();

        // start() returns immediately; wait until the child process exits
        while ($pr->isRunning()) {
            $output->write('.');
        }
        $output->writeln('');

        if (!$pr->isSuccessful()) {
            $output->writeln('Error!!!');

            return self::FAILURE;
        }
        $output->writeln('Job finished');

        return self::SUCCESS;
    }
}
I like to use exec().
You'd need to add a couple of bits to the end of your command:
Redirect the output (e.g. > /dev/null 2>&1) so it has somewhere to go. This matters because PHP otherwise waits for the output to be returned, and exec() never comes back.
Put '&' on the end to make the command run in the background.
Then it's a good idea to return a 202 (Accepted) rather than a 200, because we don't yet know whether the command was successful; it hasn't completed.
public function runMyCommandInTheBackground(string $projectDir): Response
{
    // redirect output and background the process so exec() returns immediately
    exec("{$projectDir}/bin/console your:command:name > /dev/null 2>&1 &");

    return new Response('', Response::HTTP_ACCEPTED);
}

Running a single process in the background at all times

How can I run a process in the background at all times?
I want to create a process that manages some work queues based on the info from a database.
I'm currently doing it using a cron job, where I run the cron job every minute, and have 30 calls with a sleep(2) interval. While this is working OK, I've noticed from time to time that there is a race condition.
Is it possible to just run the same process all the time? I would still have the cron job attempt to start periodically, but it would just shut down if it sees itself running.
Or is this a bad idea? Is there any possibility of a memory leak or other issues occurring?
Some years ago, before I knew about MQ systems, Node.js, and the like, I used code like this, added to cron to run every minute:
<?php
// define the path to the lock file, e.g.: /home/user1/bin/cronjob1.lock
define('LOCK_FILE', __DIR__ . "/" . basename(__FILE__) . '.lock');

// check whether the previous process is still running
function isLocked()
{
    // lock file exists, but is the recorded process still running?
    if (is_file(LOCK_FILE)) {
        $pid = trim(file_get_contents(LOCK_FILE)); // read process id from .lock file
        $pids = explode("\n", trim(`ps -e | awk '{print $1}'`)); // currently running process ids
        if (in_array($pid, $pids)) { // $pid exists in process ids
            return true; // it's ok, process is running
        }
    }
    // write a .lock file with our own process id in it
    file_put_contents(LOCK_FILE, getmypid() . "\n");
    return false; // previous process was not running
}

// if a previous run of this script is still alive, exit
if (isLocked()) die("Already running.\n");

// from this point we run our new process
set_time_limit(0);
while (true) {
    // some ops
    sleep(1);
}

// cleanup before finishing (never reached while the loop above is infinite)
unlink(LOCK_FILE);
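Note that the check-then-write in isLocked() is racy: two cron-started processes can both pass the is_file() check before either writes the lock file. A sketch of a sturdier variant using flock(), which the OS releases automatically when the process dies:

<?php
define('LOCK_FILE', __DIR__ . "/" . basename(__FILE__) . '.lock');

$fp = fopen(LOCK_FILE, 'c'); // create if missing, don't truncate
if (!flock($fp, LOCK_EX | LOCK_NB)) {
    die("Already running.\n"); // another instance holds the lock
}

set_time_limit(0);
while (true) {
    // some ops
    sleep(1);
}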
You could use something called forever, which requires Node.js.
Once you have Node installed, install forever with:
$ [sudo] npm install forever -g
To run a script forever:
forever start app.js
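forever is built for Node scripts, but its -c flag lets it supervise other commands; assuming the flag is available in your version, a PHP worker could be kept alive the same way:

forever start -c php worker.php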

Queueing shells with CakePHP

I'm using CakePHP 2.3.8 with the CakePHP Queue Plugin.
I'm inserting data into a queue to be processed by a shell. I can process the queue (which runs a shell), and it runs properly. However, when I process the queue, the page hangs until the queue is done processing. The queue can take a while to process because it makes external requests, and on failures I have retries and waiting periods. Basically, I need to find a way to process the queue in the background so the user doesn't have to wait for it.
This is my code right now
App::uses('Queue', 'Queue.Lib');

// queue up the items!
foreach ($queue_items as $item) {
    $task_id = Queue::add("ExternalSource " . $item, 'shell');
    $this->ExternalSource->id = $item;
    $this->ExternalSource->saveField('queue_task_id', $task_id['QueueTask']['id']);
}

// now process the queue
Queue::process();
echo "Done!";
I've also tried calling the shell directly but the same thing happens. I have to wait until it's done being processed.
Is there any way to call the shell so that the user doesn't have to wait for it to finish being processed? I'd prefer not to do it with a cronjob checking frequently.
Edit
I've also tried using exec(), but it doesn't seem to be working:
$exec = exec('/Applications/XAMPP/xamppfiles/htdocs/app/Console/cake InitiateQueueProcess');
var_dump($exec);
The dump shows string '' (length=0). When I change the call to exec('pwd'); it shows the directory. I can use that exact path to call the shell from the terminal, so I know it's correct. I also changed the permissions, but still nothing happens; the shell doesn't run.
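A side note on why a call like this can hang or silently fail: exec() returns only the last line of output, and PHP waits for the command to finish unless its output is redirected and the process is backgrounded. A sketch using the path from the question (the log location is an arbitrary choice):

exec('/Applications/XAMPP/xamppfiles/htdocs/app/Console/cake InitiateQueueProcess > /tmp/queue.log 2>&1 &');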

Beanstalk/Pheanstalk not working in background

I am having trouble setting up Pheanstalk on an Ubuntu server.
I am relatively new to programming. I have done all the steps:
- installed beanstalkd with sudo apt-get install beanstalkd
- got pheanstalk from https://github.com/pda/pheanstalk/
And here is my code:
require_once('pheanstalk_init.php');

$pheanstalk = new Pheanstalk_Pheanstalk('127.0.0.1');

$pheanstalk
    ->useTube('testtube')
    ->put(exec("cat ../uploads/databases/app_data/Filename.sql | sqlite3 ../uploads/databases/Filename.sqlite"));

$job = $pheanstalk
    ->watch('testtube')
    ->ignore('default')
    ->reserve();

echo $job->getData();
$pheanstalk->delete($job);
The problem is that it takes 4-5 minutes to run this code, and for some reason the exec command does not run in the background.
Any ideas on what I am doing wrong?
Many thanks in advance!
This is an old question, and you have probably solved it already... anyway, here is a short answer and a more detailed explanation:
TL;DR: short answer: the problem is that you run your (CPU/disk-IO) intensive code inside the put() call, when it needs to run in the worker, after $job->getData().
Explanation:
Beanstalk acts as a message queue and allows you to decouple information producers from consumers. The idea is that you post just a description of the job you want done to a queue, and generating and posting that description is very quick. The actual resource-consuming processing (CPU/RAM/disk IO/network) happens in the consumer(s) (workers) when there are available ones; meanwhile the producer is free to do other stuff. If there are no workers available, the jobs simply pile up in the queue.
So in your case, for example, separate your producer and consumer into chief.php and worker.php:
chief.php
<?php
require_once('pheanstalk_init.php');

$pheanstalk = new Pheanstalk_Pheanstalk('127.0.0.1');

// post only the job description; the expensive work happens later, in the worker
$pheanstalk
    ->useTube('testtube')
    ->put("cat ../uploads/databases/app_data/Filename.sql | sqlite3 ../uploads/databases/Filename.sqlite");
?>
worker.php:
<?php
require_once('pheanstalk_init.php');

$pheanstalk = new Pheanstalk_Pheanstalk('127.0.0.1');

while (true) {
    // blocks until a job is available on 'testtube'
    $job = $pheanstalk->watch('testtube')->ignore('default')->reserve();
    $cmd = $job->getData();
    exec($cmd);                // do the actual (slow) work here
    $pheanstalk->delete($job); // remove the finished job from the queue
}
?>
So, when you run a worker, it will connect to the beanstalkd server and wait for a job to be posted on the 'testtube' queue/tube. Then it will do the described job, remove it from the queue, and loop, ready to process another job when one arrives. You can run several workers in parallel too, if you want.
NB: The beanstalkd server does NOT run your workers; it only distributes jobs from producers to consumers. So you need to manage running worker processes yourself. I personally prefer to run workers as runit services, so that when there is any problem with them, they get restarted by runit and their output is logged. You can of course use any process monitor tool you like: runit, systemd, monit, etc.
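For example, a minimal runit run script for such a worker might look like this (the service directory and worker path are hypothetical):

#!/bin/sh
# /etc/sv/mq-worker/run -- runit restarts the worker whenever it exits
exec php /var/www/worker.php 2>&1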
It is also a good idea to run beanstalkd itself under your favourite process monitor, just in case it crashes, which is unlikely but can happen (it happened to me a year ago).

How can I write a cron job using a symfony task

I'm in the process of adding automation to my Symfony application, which is written as a Symfony plugin. In this plugin I need to add a cron job; right now I'm adding it manually. Can I open the crontab from Symfony and write something to it? If so, I could simply run a task and add the cron job from the CLI.
Any suggestions on how to get this done?
I'm assuming you know how to do it from the command line. From a Symfony task you can call system commands using PHP's standard passthru() function.
The following two PHP functions might help:
function mysystem($cmd)
{
    passthru($cmd, $val);
    if ($val != 0) {
        mydie("Command $cmd exited with nonzero exit status $val\n");
    }
}

// PHP's die() exits with a happy status, which is useless in scripts
function mydie($s)
{
    fwrite(STDERR, $s);
    exit(1);
}
From a shell script you would do something similar to the following:
crontab -l | grep -v "the job you are adding" > oldcrontab  # remove any duplicate jobs
echo "cron job you are adding" >> oldcrontab                # add the new job
crontab oldcrontab                                          # reload the crontab
Just use the same strategy, but from PHP using the above functions. That should work.
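Putting the pieces together, a sketch of that strategy in PHP using the helpers above (the job string and paths are hypothetical):

$job = '*/5 * * * * /usr/bin/php /path/to/task.php';

// `|| true` because grep exits nonzero when nothing matches,
// which would otherwise trip mysystem()'s error check
mysystem('crontab -l | grep -v "task.php" > /tmp/oldcrontab || true');
mysystem("echo '$job' >> /tmp/oldcrontab"); // add the new job
mysystem('crontab /tmp/oldcrontab');        // reload the crontab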
