I have the following (simple) lock code for a Laravel 5.3 command:
private $hash = null;

public final function handle() {
    try {
        $this->hash = md5(serialize([ static::class, $this->arguments(), $this->options() ]));
        $this->info("Generated signature ".$this->hash, "v");
        if (Redis::exists($this->hash)) {
            $this->hash = null;
            throw new \Exception("Method ".$this->signature." is already running");
        }
        Redis::set($this->hash, true);
        $this->info("Running method", "vv");
        $this->runMutuallyExclusiveCommand(); // Actual command is not important here
        $this->cleanup();
    } catch (\Exception $e) {
        $this->error($e->getMessage());
    }
}

public function cleanup() {
    if (is_string($this->hash)) {
        Redis::del($this->hash);
    }
}
This works fine if the command is allowed to go through its execution cycle normally (including when a PHP exception is thrown). However, the problem arises when the command is interrupted by other means (e.g. Ctrl-C, or when the terminal window is closed). In that case the cleanup code is not run and the command is still considered to be "executing", so I have to manually remove the entry from the cache in order to restart it. I have tried running the cleanup code in a __destruct function, but that does not seem to be called either.
My question is: is there a way to have some code run when a command is terminated, regardless of how it was terminated?
The short answer is no. When you kill the running process, either with Ctrl-C or by just closing the terminal, you terminate it. You would need an interrupt handler in your shell that links to your cleanup code, but that is way out of scope.
There are other options, however. Cron jobs can run at regular intervals to perform cleanup tasks and other helpful things. You could also create a start-up routine that runs before your current code; when it executes, it could do the cleanup for you and then call your current routine. I believe your best bet is a cron job that runs at given intervals, looks for cache entries that are no longer appropriate, and cleans them up. Here is a decent site to get you started with cron jobs: https://www.linux.com/learn/scheduling-magic-intro-cron-linux
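For illustration, the periodic cleanup could look roughly like the sketch below. It assumes two things that are not in the original code: the lock keys are written with a common prefix such as lock:, and the stored value is the start timestamp instead of true, so the job can tell how old each lock is.

use Illuminate\Console\Command;
use Illuminate\Support\Facades\Redis;

class CleanStaleLocks extends Command
{
    protected $signature = 'locks:clean {--max-age=3600}';
    protected $description = 'Remove lock entries older than --max-age seconds';

    public function handle()
    {
        $maxAge = (int) $this->option('max-age');

        // assumes locks are stored as lock:<hash> => <start timestamp>
        foreach (Redis::keys('lock:*') as $key) {
            if (time() - (int) Redis::get($key) > $maxAge) {
                Redis::del($key);
                $this->info("Removed stale lock {$key}");
            }
        }
    }
}

You would then point a cron entry (or the Laravel scheduler) at that command so it runs every few minutes.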
Related
I want to run some of Composer's vendor scripts from my PHP script.
Calling each command takes a long time although the command itself finishes quickly. I assume creating a new shell via shell_exec() takes some time.
I wanted to call the PHP scripts directly via the require keyword, but changing the global $argv to contain the parameters for a script does not carry over to the required script. Is $argv implicitly immutable across script files, or do I have another error in my way of thinking?
Here is some sample code (should be executed via CLI, not tested):
<?php

namespace Foo;

class Bar
{
    public static function call_cs_fixer()
    {
        // overwrite the global $argv so the required script sees "our" arguments
        $GLOBALS['argv'] = [
            '/path/to/vendor/bin/php-cs-fixer',
            'fix',
            '--config',
            '/path/to/.php-cs-fixer.php',
            '/path/to/project',
        ];

        return require $GLOBALS['argv'][0];
    }
}

echo \Foo\Bar::call_cs_fixer();
Is $argv implicitly immutable across script-files or do I have another error in my way of thinking?
That depends on how the script/utility you are trying to invoke works, which means you can't expect it to behave reliably. I would refrain from this approach unless you know the script actually reads $argv this way. Since you don't know it - otherwise you would not ask the question that way - throw that idea into the bin in this case.
I assume creating a new shell by shell_exec() takes some time.
That may be true (we can't look into your system configuration), but on a Linux system it is very likely not the case.
In practice, using a new shell sub-process to invoke the tooling is the much, much better way to do things here. This is also how composer(1) invokes scripts (see Scripts) - unless they are bound as (static) methods - and it is always the case for the composer exec command.
The reason is that you can control not only the command-line arguments much better, but also the working directory and the environment parameters (a.k.a. environment variables, or environment for short); compare proc_open(php). The standard streams are available as well.
As you're running in the context of composer, and if you have access to its sources (e.g. you bind a composer script or hook in your composer.json configuration), you can use the process components that ship with composer itself (it's all PHP); there is quite some utility in there.
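For example (a rough sketch, assuming this code runs where composer's own classes are autoloadable, e.g. inside a composer script or plugin), composer's ProcessExecutor utility can run the fixer for you:

use Composer\Util\ProcessExecutor;

$executor = new ProcessExecutor();
$output = null;

// returns the exit code of the command; $output receives what it printed
$exitCode = $executor->execute(
    '/path/to/vendor/bin/php-cs-fixer fix --config /path/to/.php-cs-fixer.php /path/to/project',
    $output
);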
If you just want to start lightly, I found the passthru(php) function a good fit for quickly getting started:
// the command you'd like to execute
$command = '/path/to/vendor/bin/php-cs-fixer';
$args = [
    'fix',
    '--config',
    '/path/to/.php-cs-fixer.php',
    '/path/to/project',
];

// build the command-line
$commandLine = sprintf(
    '%s %s',
    $command,
    implode(' ', array_map('escapeshellarg', $args))
);

// execute
$result = passthru($commandLine, $exitStatus);

// be verbose and give some debug info
fprintf(
    STDERR,
    "debug: command %s exited with status %d\n",
    $commandLine,
    $exitStatus
);

// throw on exit status != 0, a convention only but you often want this
if (false === $result || $exitStatus !== 0) {
    throw new \RuntimeException(sprintf(
        'command "%s" exited with non-zero status %d (result=%s).',
        addcslashes($commandLine, "\0..\37\"\\\177..\377"),
        $exitStatus,
        var_export($result, true)
    ), (int) $exitStatus);
}
I have a problem with terminating processes started from a queue job.
I use the yii2-queue extension to run some long-running system commands that have a total execution time limit controlled by the getTtr method of RetryableJobInterface. The command may take anywhere from minutes to hours to fully complete, but I need to kill it after it hits the 60-minute mark.
<?php

use Symfony\Component\Process\Process;
use yii\base\BaseObject;
use yii\queue\RetryableJobInterface;

class TailJob extends BaseObject implements RetryableJobInterface
{
    public function getTtr()
    {
        return 10;
    }

    public function execute($queue)
    {
        $process = new Process('tail -f /var/log/dpkg.log');
        $process->setTimeout(60);
        $process->run();
    }

    public function canRetry($attempt, $error)
    {
        return false;
    }
}
Now, the problem that I face is that even when queue/listen kills the job, the tail command (it's just an example; in production I need to run a different command) keeps running in the background. Is there any way I can force the system to kill the tail command when the job is killed?
Your script needs to keep checking if the timeout was reached; e.g.
while ($process->isRunning()) {
    $process->checkTimeout(); // throws once the timeout set via setTimeout() is exceeded
    usleep(200000);
}
Read more about "Process Timeout" here:
https://symfony.com/doc/current/components/process.html
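Putting that together with your job, a minimal sketch (assuming the same Symfony Process version as in your code) of the job's execute() could look like this; the important part is that the process is started non-blocking with start() so the loop can poll it, and stop() is called so the tail child does not survive the job:

use Symfony\Component\Process\Exception\ProcessTimedOutException;
use Symfony\Component\Process\Process;

public function execute($queue)
{
    $process = new Process('tail -f /var/log/dpkg.log');
    $process->setTimeout(3600); // 60 minutes
    $process->start();          // non-blocking, unlike run()

    try {
        while ($process->isRunning()) {
            $process->checkTimeout(); // throws once the timeout is exceeded
            usleep(200000);
        }
    } catch (ProcessTimedOutException $e) {
        $process->stop(10); // SIGTERM, then SIGKILL after 10 seconds
        throw $e;
    }
}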
Run the command with a timeout
$process = new Process('timeout 3600 tail -f /var/log/dpkg.log');
This will limit the process to a maximum of 60 minutes (3600 seconds).
If your script kills it first, that's fine; if it doesn't, the process will die when the timeout is reached.
https://linux.die.net/man/1/timeout
I just discovered that there is a ConsoleEvents::TERMINATE event in Symfony.
I want to use it to execute some additional process after the command execution (without delaying the command itself).
But the thing is that I want to execute that process when one specific command finishes, not for all commands (because I think ConsoleEvents::TERMINATE is fired for every command).
I really don't know how to do that.
Regards.
You can access the instance of the command from the ConsoleTerminateEvent.
It's almost a copy-paste from the documentation of the Console component. In a full Symfony application, registering the listener looks a little different, but you should get the idea.
use Symfony\Component\Console\ConsoleEvents;
use Symfony\Component\Console\Event\ConsoleTerminateEvent;

$dispatcher->addListener(
    ConsoleEvents::TERMINATE,
    function (ConsoleTerminateEvent $event) {
        $command = $event->getCommand();

        // if it's not the command you want
        if (!$command instanceof YourDesiredCommand) {
            return;
        }

        // put your logic here
    }
);
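In a full-stack Symfony application the same check can live in an event subscriber instead of a closure; a rough sketch (YourDesiredCommand stands for your own command class) that you then register as a service would be:

use Symfony\Component\Console\ConsoleEvents;
use Symfony\Component\Console\Event\ConsoleTerminateEvent;
use Symfony\Component\EventDispatcher\EventSubscriberInterface;

class SpecificCommandTerminateSubscriber implements EventSubscriberInterface
{
    public static function getSubscribedEvents()
    {
        return [ConsoleEvents::TERMINATE => 'onTerminate'];
    }

    public function onTerminate(ConsoleTerminateEvent $event)
    {
        if (!$event->getCommand() instanceof YourDesiredCommand) {
            return; // not the command we care about
        }

        // put your logic here
    }
}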
On a project I am working on, we use Symfony2 console commands to run image converting (using LaTeX and some Imagick). Due to the nature of the project, not all conditions may be met during the console command run, so the execution will fail, to be restarted later by a cron job, but only if the attempt count is not higher than a predefined limit.
We already have logging in our project; we use the Monolog logger. What I basically want is to somehow duplicate everything that goes to the main log file into another log file, created specifically for that console command execution and only if the attempt limit is reached.
So, if we run the command once and it fails - that's OK and nothing should be created.
But if we run the command for the 10th time, which is the attempt limit, I want to have a separate log file named, say, '/logs/failed_commands//fail.log'. That log file should only have messages for the last failed attempt, but not for all the previous ones.
How do I do that? Do I need some combination of a special logger handler (like FingersCrossed) and proper exception handling? Should I rather create an additional instance of the logger (and if so, how can I pass it over to dependent services)?
This is a simplified and cleaned-up piece of the command that runs image converting. The attempt limit is checked within the $this->checkProcessLimit() method:
public function execute(InputInterface $input, OutputInterface $output)
{
    try {
        set_time_limit(0); // lift any time restrictions

        $this->checkingDirectories();
        $this->checkProcessLimit();
        $this->isBuildRunning();
        $this->checkingFiles();

        try {
            $this->startPdfBuilding();
        } catch (InternalProjectException $e) {
            throw PdfBuildingException::failedStartBuilding($this->pressSheet->getId());
        }
    } catch (PdfBuildingException $e) {
        $this->printError($output, $e);

        return;
    }

    try {
        $this->logger->info('Building Image.');

        $this->instantiatePdfBuilder();
        $buildingResult = $this->pdfBuilder->outputPdf();

        $this->generatePreview($buildingResult);
        $this->movePDFs($buildingResult);

        $this->unlinkLockFile();
        $output->writeln('<info>Image successfully built</info>');
    } catch (LaTeXException $e) {
        $this->unlinkLockFile();
        $this->abortPdfBuilding($e->getMessage());
        $this->printError($output, $e);

        return;
    }
}
UPD: It seems that for dumping a bunch of log entries I need the BufferHandler bundled with the Monolog logger. But I still need to figure out a way to set it up so that it dumps only when the error limit (not the error level) is reached.
UPD2: I've managed to make it work, but I don't like the solution.
Since in Symfony2 you have to define loggers in config.yml and rebuild the cache for any configuration change, I had to resort to dynamically adding a handler to the logger. But the logger itself is type-hinted as the Psr\Log\LoggerInterface interface, which has no means of adding handlers. The solution I ended up with checks whether the injected logger is an instance of Monolog\Logger and then manually adds a BufferHandler to it in the Symfony2 console command's initialize() method.
Then, when it comes to the point where I check the attempt limit, I close the buffer handler and delete the actual log file if the limit is not yet reached (since BufferHandler has no means of removing/closing itself without flushing all of its contents). If the limit is reached, I just let the log file stay.
This way it works, but it always writes the log, and I have to remove the log files if the condition (attempt limit reached) is not met.
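For illustration, the initialize() part of that workaround looks roughly like this (the class name, the logDir property and the exact path are assumptions, not the actual project code; it also assumes the Monolog logger is already injected into $this->logger, as in the execute() above):

use Monolog\Handler\BufferHandler;
use Monolog\Handler\StreamHandler;
use Monolog\Logger;
use Symfony\Component\Console\Command\Command;
use Symfony\Component\Console\Input\InputInterface;
use Symfony\Component\Console\Output\OutputInterface;

class BuildImageCommand extends Command
{
    /** @var BufferHandler|null */
    private $bufferHandler;

    protected function initialize(InputInterface $input, OutputInterface $output)
    {
        // Psr\Log\LoggerInterface has no pushHandler(), so check for the concrete Monolog logger
        if ($this->logger instanceof Logger) {
            $failLog = $this->logDir.'/failed_commands/'.$this->getName().'/fail.log';
            $this->bufferHandler = new BufferHandler(new StreamHandler($failLog));
            $this->logger->pushHandler($this->bufferHandler);
        }
    }
}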
I think you must create a custom handler.
With Monolog, you can log to a database (see for example https://github.com/Seldaek/monolog/blob/master/doc/04-extending.md).
Thus, it's easy to know how many times an error was raised in the last x days
(something like: "select count(*) from monolog where channel='...' and time > ...").
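A minimal sketch of such a handler, close to what the linked extending guide shows (the monolog table and its columns are made up for the example), would be:

use Monolog\Handler\AbstractProcessingHandler;
use Monolog\Logger;

class PDOHandler extends AbstractProcessingHandler
{
    private $pdo;
    private $statement;

    public function __construct(\PDO $pdo, $level = Logger::DEBUG, $bubble = true)
    {
        $this->pdo = $pdo;
        parent::__construct($level, $bubble);
    }

    protected function write(array $record)
    {
        if (!$this->statement) {
            $this->statement = $this->pdo->prepare(
                'INSERT INTO monolog (channel, level, message, time) VALUES (:channel, :level, :message, :time)'
            );
        }

        $this->statement->execute([
            'channel' => $record['channel'],
            'level'   => $record['level'],
            'message' => $record['formatted'],
            'time'    => $record['datetime']->format('U'),
        ]);
    }
}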
I am making a PHP CLI application in which I need a key event listener.
Let's say this is my code:
Task to do:
for ($i = 0; $i <= $n; $i++) {
    $message->send($user[$i]);
}
Once I'm done with sending messages, I have to keep the connection alive with the following code in order to receive delivery receipts.
I use $command = fopen("php://stdin", "r"); to receive user commands.
while (true) {
    $connection->refresh();
}
The connection is automatically kept alive during any activity, but when idle I have to keep the above loop running.
How can I listen for a key press that breaks out of this loop and executes some function?
PHP was not designed to handle this kind of problem. The magic word here would be threads.
The cleanest way I can imagine is taking a look at the pthreads extension. With it, you can use threads like in other languages:
class AsyncOperation extends Thread
{
    public function __construct($arg)
    {
        $this->arg = $arg;
    }

    public function run()
    {
        if ($this->arg) {
            printf("Hello %s\n", $this->arg);
        }
    }
}

$thread = new AsyncOperation("World");
if ($thread->start()) {
    $thread->join();
}
The other way would be to run such tasks via a queue in a different script. There are some queue servers out there, but it can also be done simply by calling your queue script through shell_exec() and the php executable. On Linux you need something like ...script.php > /dev/null 2>/dev/null &, on Windows start /B php ..., so that the calling script does not wait for the queue script to finish.
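A quick sketch of that second idea (worker.php is a made-up name for your queue script): start it in the background and return immediately.

// Linux/macOS: redirect output and background the process so shell_exec() returns right away
shell_exec('php /path/to/worker.php > /dev/null 2>&1 &');

// Windows: start /B launches the script without opening a new window or blocking
// shell_exec('start /B php C:\path\to\worker.php');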