In the handle() method of a custom Laravel command, can you call the same command again? Something like this, described in rough pseudo-code:
public function handle() {
    // code..
    // code..
    $this->importantValue = $this->option('value'); // value is 'hello'
    if ($something) {
        // a call to the same command is made, but with different arguments or options
        // the command does its stuff and ends successfully
        $this->call('myself', [
            'value' => 'ahoy'
        ]);
        // I expect control to return to the original command here
    }
    var_dump($this->importantValue); // this equals 'ahoy'
}
Why is this? What does the newly called command have in common with the original command from within which it was called?
EDIT: The newly called command would not meet the condition, so it would not call itself again (forever). The original command seems to pick up from where it left off (before calling itself the first and only time), yet it seems to have inherited the "child's" variables.
I do think that calling Artisan::call() instead of $this->call() might avoid that problem (note that avoiding is not the same as solving)...
#t-maxx: I'm getting the exact same issue and I'm not sure that #ben understands.
I have a command that is recursive, based on an argument, depth. The depth argument is set on a protected property as one of the first steps in handle(). Then, if depth is greater than zero, it calls itself (via $this->call()), passing $this->depth - 1. I watch each successive call and it just goes down and down and down, never plateauing or bouncing back up as the recursion would allow and as one would expect.
So...while I'm not 100% sure what's going on, I'm thinking of getting the depth option once, but passing it around as a variable (versus a property on the object), as in the sketch below. This is ugly, I think, but it may be the only solution until this is recognized and resolved. On the other hand, it could be that we're both doing the wrong thing.
Calling Artisan::call() for me leads to other issues that I'd rather avoid. The command I'm working with writes to a file and I don't want a bunch of separate commands competing for the same file.
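A minimal sketch of that workaround: read the depth into a local variable so the recursive $this->call() cannot overwrite state stored on the command object (the command name 'myself' and the depth argument are assumed for illustration):

public function handle()
{
    // Read the argument once into a local variable instead of a property.
    $depth = (int) $this->argument('depth');

    if ($depth > 0) {
        // Recurse with a decremented depth; the inner call may replace the
        // command's own input/properties, but the local $depth is untouched.
        $this->call('myself', ['depth' => $depth - 1]);
    }

    // Still the value this invocation started with.
    $this->info("Finished at depth {$depth}");
}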
Yes, you can programmatically execute commands using Artisan::call:
Artisan::call('myself', [
'value' => 'ahoy'
]);
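For what it's worth, Artisan::call() returns the command's exit code, and the console output captured from the call is available afterwards via Artisan::output(); a minimal sketch:

use Illuminate\Support\Facades\Artisan;

$exitCode = Artisan::call('myself', [
    'value' => 'ahoy'
]);
$output = Artisan::output(); // output produced by the called command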
I'm building an application in CakePHP 3.8 which uses Console Commands to execute several processes.
These processes are quite resource intensive so I've written them with Commands because they would easily time-out if executed in a browser.
There are 5 different scripts that do different tasks: src/Command/Stage1Command.php,
... src/Command/Stage5Command.php.
The scripts are being executed in order (Stage 1 ... Stage 5) manually, i.e. src/Command/Stage1Command.php is executed with:
$ php bin/cake.php stage1
All 5 commands accept one parameter - an ID - and then perform some work. This has been set up as follows (the code in buildOptionParser() exists in each command):
class Stage1Command extends Command
{
protected function buildOptionParser(ConsoleOptionParser $parser)
{
$parser->addArgument('filter_id', [
'help' => 'Filter ID must be passed as an argument',
'required' => true
]);
return $parser;
}
}
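For context, inside each stage command the ID would presumably be read from the parsed arguments roughly like this (a sketch, not the actual code from the question):

public function execute(Arguments $args, ConsoleIo $io)
{
    // The positional argument declared in buildOptionParser()
    $filterId = $args->getArgument('filter_id');

    // ... perform the resource-intensive work for this filter ID
}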
So I can execute "Stage 1" as follows, assuming 428 is the ID I want to pass.
$ php bin/cake.php stage1 428
Instead of executing these manually, I want to achieve the following:
1. Create a new Command which loops through a set of Filter IDs and then calls each of the 5 commands, passing the ID.
2. Update a table to show the outcome (success, error) of each command.
For (1) I have created src/Command/RunAllCommand.php, then used a loop over my table of Filters to get the IDs, and then executed the 5 commands, passing each ID. The script looks like this:
namespace App\Command;
use Cake\ORM\TableRegistry;
// ...
class RunAllCommand extends Command
{
public function execute(Arguments $args, ConsoleIo $io)
{
$FiltersTable = TableRegistry::getTableLocator()->get('Filters');
$all_filters = $FiltersTable->find()->toArray();
foreach ($all_filters as $k => $filter) {
$io->out($filter['id']);
// execute Stage1Command.php
$command = new Stage1Command(['filter_id' => $filter['id']]);
$this->executeCommand($command);
// ...
// execute Stage5Command.php
$command5 = new Stage5Command(['filter_id' => $filter['id']]);
$this->executeCommand($command5);
}
}
}
This doesn't work. It gives an error:
Filter ID must be passed as an argument
I can tell that the commands are being called because these are my own error messages from buildOptionParser().
This makes no sense because the line $io->out($filter['id']) in RunAllCommand.php is showing that the filter IDs are being read from my database. How do you pass an argument in this way? I'm following the docs on Calling Other Commands (https://book.cakephp.org/3/en/console-and-shells/commands.html#calling-other-commands).
I also don't understand how to achieve (2). In each of the commands I've added code like the following where an error occurs, which stops execution of the rest of that command. For example, if this gets executed in Stage1Command it should abort and move on to Stage2Command:
// e.g. this code can be anywhere in execute() in any of the 5 commands where an error occurs.
$io->error('error message');
$this->abort();
If $this->abort() gets called anywhere I need to log this into another table in my database. Do I need to add code before $this->abort() to write this to a database, or is there some other way, e.g. try...catch in RunAllCommand?
Background information: The idea with this is that RunAllCommand.php would be executed via Cron. This means that the processes carried out by each Stage would occur at regular intervals without requiring manual execution of any of the scripts - or passing IDs manually as command parameters.
The arguments sent to the "main" command are not automatically passed on to the "sub" commands that you invoke with executeCommand(). The reason is that they might very well be incompatible; the "main" command has no way of knowing which arguments should or shouldn't be passed. The last thing you want is a sub command doing something you haven't asked it to do just because of an argument that the main command makes use of.
So you need to pass the arguments that you want your sub commands to receive yourself, as the second argument of \Cake\Console\BaseCommand::executeCommand(), not via the command constructor; the constructor doesn't take any arguments at all (unless you've overridden the base constructor).
$this->executeCommand($stage1, [$filter['id']]);
Note that the arguments array is not associative; the values are passed as single value entries, just like PHP would receive them in the $argv variable, i.e.:
['positional argument value', '--named', 'named option value']
With regards to errors, executeCommand() returns the exit code of the command. Calling $this->abort() in your sub command will trigger an exception, which is caught in executeCommand() and has its code returned just like a normal exit code from your sub command's execute() method.
So if you just need to log a failure, then you could simply evaluate the return code, like:
$result = $this->executeCommand($stage1, [$filter['id']]);
// assuming your sub commands do always return a code, and do not
// rely on `null` (ie no return value) being treated as success too
if ($result !== static::CODE_SUCCESS) {
$this->log('Stage 1 failed');
}
If you need additional information to be logged, then you could of course log inside your sub commands where that information is available, or maybe store error info in the command and expose a method to read it, or throw an exception with error details that your main command could catch and evaluate. However, throwing an exception would not be overly nice when running the commands standalone, so you'll have to figure out what the best option is in your case.
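Putting those pieces together, a minimal sketch of how the loop in RunAllCommand could look under the above assumptions (class names taken from the question; the log messages are illustrative):

foreach ($all_filters as $filter) {
    $io->out($filter['id']);

    $stages = [
        new Stage1Command(),
        // ...
        new Stage5Command(),
    ];

    foreach ($stages as $i => $stage) {
        // Pass the filter ID as a positional argument, not via the constructor.
        $result = $this->executeCommand($stage, [(string)$filter['id']], $io);

        if ($result !== static::CODE_SUCCESS) {
            $this->log(sprintf('Stage %d failed for filter %s', $i + 1, $filter['id']));
            // e.g. also update the outcome table here
        }
    }
}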
I have an artisan command that fires a job called PasswordResetJob which iterates as it calls a method forcePasswordReset in a repository class OrgRepository, the method updates a user's table. The whole process works fine.
Now I'm trying to write a Laravel test to mock the OrgRepository class and assert that the forcePasswordReset method is called at least once, which should be the case based on the conditions I provided to the test. In the test, I call the artisan command to fire the job (I'm using the sync queue for testing); this works fine, as the job gets called and the user's table gets updated, which I can see from my database directly.
However, the test fails with the error: Mockery\Exception\InvalidCountException : Method forcePasswordReset() from Mockery_2_Repositories_OrgRepository should be called
at least 1 times but called 0 times.
The artisan call within the test is:
Artisan::call('shisiah:implement-org-password-reset');
I have tried to make the artisan call before, as well as after this mock initialization, but I still get the same errors. Here is the mock initialization within the test
$this->spy(OrgRepository::class, function ($mock) {
$mock->shouldHaveReceived('forcePasswordReset');
});
What am I missing? I have gone through the documentation and searched through Google for hours. Please let me know if you need any additional information to help. I'm using Laravel version 6.0
EDIT:
I pass the OrgRepository class into the handle method of the job class, like this:
public function handle(OrgRepository $repository)
{
//get orgs
$orgs = Org::where('status', true)->get();
foreach ($orgs as $org){
$repository->forcePasswordReset($org);
}
}
The problem is that you are initializing your spy after your job has already run, which means during the job it will use the real class instead of the spy.
You have to do something like this in your test:
$spy = $this->spy(OrgRepository::class);
// run your job
$spy->shouldHaveReceived('forcePasswordReset');
We tell Laravel to use the spy instead of the repository, run the job, and then assert that the method was called.
Jeffrey Way explains it pretty well in this screencast.
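In other words, a minimal sketch of the test (the command name is taken from the question; the test method name is made up):

public function test_force_password_reset_is_called()
{
    // Register the spy before running the command so the container
    // injects it into the job's handle() method instead of the real repository.
    $spy = $this->spy(OrgRepository::class);

    Artisan::call('shisiah:implement-org-password-reset');

    // Now assert the interaction happened.
    $spy->shouldHaveReceived('forcePasswordReset');
}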
Problem / What I've tried:
Getting the currently used controller and action in Laravel 5 is easy (but not as easy as it should be); however, I'm stuck on getting the currently executed artisan console command.
To fetch the controller name I do this:
$route = Route::getRoutes()->match(Request::capture());
$listAction = explode('\\', $route->getActionName());
$rawAction = end($listAction);
// controller name and action in a simple array
$controllerAndAction = explode('#', $rawAction);
But when calling from a console action, it always returns the default index controller's name ("IndexController" or so in Laravel). Does anybody know how to do this?
By the way, I've also worked through Request::capture() but this still gives no info about the command.
The simplest way is to just to look at the arguments specified on the command line:
if (array_get(request()->server(), 'argv.1') === 'cache:clear') {
// do things
}
Yes, you can use $_SERVER directly, but I like to use the helper functions or the Facades, as those will give you the current data.
I go from the assumption that - during unit tests - the superglobals might not always reflect the currently tested request.
By the way: obviously you can also do array_get(request()->server('argv'), '1') or something alike. (request()->server('argv.1') doesn't work at this point.) Or use \Request::server(). Depends on what you like most.
As per the Symfony\Component\Console\Command\Command class, the method to return the name of the command (eg. my:command) is:
$this->getName();
You should use it from within an Artisan command extending Illuminate\Console\Command (default on Artisan commands).
Remember that it will return only the command name and not the available parameters (eg. for the command signature my:command {--with-params=} it will only return my:command).
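For example, a trivial sketch inside any Artisan command's handle() method:

public function handle()
{
    // Prints e.g. "my:command" while that command is running
    $this->info('Running command: ' . $this->getName());
}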
Reflection might be of help? Try this:
$var = new \ReflectionClass($this);
dd($var);
I have configured Laravel 5 to use a custom logging configuration (default is way too simple). I've added monolog's IntrospectionProcessor to log the file name and line number of the log call.
The problem is that all lines get the same file and line number:
[2015-06-29 17:31:46] local.DEBUG (/home/vagrant/project/vendor/laravel/framework/src/Illuminate/Log/Writer.php#201): Loading view... [192.168.10.1 - GET /loans/create]
Is there a way to config the IntrospectionProcessor to print the actual lines and not the facade ones?
If I do Log::getMonolog()->info('Hello'); it works and prints the correct file and line number... but I don't know how safe it is to avoid calling the Writer::writeLog function, because it fires a log event (is it safe not to fire that event?).
(Only tried in Laravel 4.2!)
When pushing the IntrospectionProcessor to Monolog it is possible to give a skipClassesPartials array as the second parameter in the IntrospectionProcessor constructor. With this array it is possible to skip the Laravel Illuminate classes, so the logger logs the class that actually called the log method.
$log->pushProcessor(new IntrospectionProcessor(Logger::DEBUG, array('Illuminate\\')));
also see: https://github.com/Seldaek/monolog/blob/master/src/Monolog/Processor/IntrospectionProcessor.php
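If you're wiring this up in Laravel 5's custom logging setup, a rough sketch would be something like the following in bootstrap/app.php (assuming configureMonologUsing() is available in your 5.x version):

$app->configureMonologUsing(function (Monolog\Logger $monolog) {
    // Skip Laravel's Illuminate\ classes so the real caller is reported
    $monolog->pushProcessor(new Monolog\Processor\IntrospectionProcessor(
        Monolog\Logger::DEBUG,
        array('Illuminate\\')
    ));
});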
I know this is an old question but I thought I'd give a quick update because it's pretty easy to get this done now.
I haven't tried this with Laravel, but my own logging mechanism is within a LoggingService wrapper class. As such, the introspection was only giving details about the service rather than the caller.
After reading Matt Topolski's answer, I had a look at IntrospectionProcessor.php. The constructor looks like this:
__construct($level = Logger::DEBUG, array $skipClassesPartials = array(), $skipStackFramesCount = 0)
All I had to do was add the processor like this:
$log->pushProcessor(new IntrospectionProcessor(Logger::DEBUG, array(), 1));
This is actually the expected functionality unless you're having the handler process the logs directly (check out the comments at the top of IntrospectionProcessor.php). My guess is you have a wrapper function around the logger and you're calling it from Writer.php -- BUT
If you look at the code for IntrospectionProcessor.php you'll see a bit of code on lines 81 to 87 that decides how to format that stack trace, and it still has access to the stack. If you bump the $i values for $trace[$i - 1] / $trace[$i] up one (aka $trace[$i]/$trace[$i + 1] respectively) you can 'climb' the stack back to where you want.
It's important to note that the 'class' and 'function' parts of the trace need to be one level of the stack higher than the 'file' and 'line.'
On a personal note, I'd like to see functionality to include a stack offset when throwing the log in. I know which function I want to blame if an error shoots out when I write error_log('ut oh'), but I might (will) forget that by the time the 'ut oh' comes.
In PHP I wrote my own debug function which takes two arguments: the text and a message level. (I could also use the PHP functions for triggering errors.) During development I sometimes debug like this:
debug($xmlobject->asXML(),MY_CONSTANT);
Now I want to know whether this costs performance when not debugging, because the arguments are evaluated regardless of whether they are used inside the function. And how do I do this properly, so that the expression is only evaluated when I need it?
Thanks for your help,
Robert
If you write the following portion of code:
debug($xmlobject->asXML(),MY_CONSTANT);
Then, no matter what the debug() function does, $xmlobject->asXML() will be called and executed.
If you do not want that expression to be evaluated, you must not call it; I see two possible solutions:
Remove the useless-in-production calls to the debug() function, not leaving any debugging code in your source files,
Or make sure they are only executed when needed.
In the second case, a possibility would be to define a constant to configure whether or not you are in debug-mode, and, then, only call debug() when needed :
if (DEBUG) {
debug($xmlobject->asXML(),MY_CONSTANT);
}
Of course, this makes writing debugging code a bit harder... and there is a bit of a performance impact (but far smaller than executing the actual code for nothing).
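Another option (a sketch; the debug_lazy() name is made up) is to pass a closure, so the expensive expression is only evaluated when debugging is actually enabled:

function debug_lazy(callable $makeMessage, $level)
{
    if (!DEBUG) {
        return; // the closure is never invoked, so asXML() never runs
    }
    debug($makeMessage(), $level);
}

debug_lazy(function () use ($xmlobject) {
    return $xmlobject->asXML();
}, MY_CONSTANT);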
The arguments are passed by value, so the method ->asXML() is always executed.