I'm using Laravel 5.4 and I want to call queue:work from a build script. The problem is that I want to determine which queue to use based on an environment variable I'm setting on the server. Am I safe to use a command like the one I wrote below?
What happens if the worker stops working? Can I restart the worker gracefully?
Any tips or suggestions are welcome!
<?php

namespace App\Console\Commands;

use Illuminate\Console\Command;
use Illuminate\Support\Facades\Artisan;

class SetupQueueWorker extends Command
{
    protected $signature = 'queue:setup-worker';

    const QUEUE_TO_USE_ENV_KEY = 'USE_QUEUE';

    public function handle()
    {
        // php artisan queue:work --tries=2 --queue=so_test --sleep=5
        $queueToUse = env(self::QUEUE_TO_USE_ENV_KEY);

        if (is_null($queueToUse)) {
            throw new \RuntimeException('Environment variable to determine which queue should be used is not set. Env key: '.self::QUEUE_TO_USE_ENV_KEY);
        }

        $exit = Artisan::call('queue:work', [
            '--tries' => 2, '--queue' => $queueToUse, '--sleep' => 5,
        ]);

        throw new \RuntimeException('Queue worker stopped listening.');
    }
}
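One caveat worth hedging on: once the configuration is cached with php artisan config:cache, env() calls outside the config files return null for values that only live in .env, so the command above would always throw. A minimal sketch that avoids this, assuming a hypothetical queue.use_queue config key (not part of the original code):

// config/queue.php — hypothetical extra key so the value survives config caching
'use_queue' => env('USE_QUEUE', 'default'),

// In handle(), read through config() instead of env()
$queueToUse = config('queue.use_queue');

As for restarting: php artisan queue:restart asks every running worker to exit gracefully once its current job finishes, and a process monitor such as Supervisor (or the build script) then starts the worker again.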
I have a Laravel project that has been deployed with Forge and had OPCache enabled using Forge. I noticed last week that when I pushed some changes, the changes that were in the views and in the controllers were present on the server, but custom artisan commands that I run don't recognize updates.
Put another way, updates to the blades are showing on the screen. Updates that I have added to the controllers are changing the way information is passed to the blade files, but I have a custom artisan command that runs a series of methods in a trait. The actual file on the server shows the new method that I pushed, but when I run the artisan command in the CLI, it says that the method cannot be found.
I have stopped, restarted, and reloaded OPCache countless times. I have restarted Nginx. I have disabled OPCache and restarted PHP. It is still saying that the method is not found. Does anyone have any ideas?
<?php

namespace App\Console\Commands;

use Illuminate\Console\Command;
use App\Traits\FTPImportsTrait;

class CheckFTPImports extends Command
{
    use FTPImportsTrait;

    protected $signature = 'checkForImports';

    protected $description = 'Check for imports...';

    /**
     * Create a new command instance.
     *
     * @return void
     */
    public function __construct()
    {
        parent::__construct();
    }

    /**
     * Execute the console command.
     *
     * @return mixed
     */
    public function handle()
    {
        $this->checkBankImports();
    }
}
-----------
<?php

namespace App\Traits;

trait FTPImportsTrait
{
    public function checkBankImports()
    {
        dd('YOU ARE NOT CRAZY');
    }
}
$ php artisan checkForImports
method checkBankImports does not exist.
UPDATE:
It has to be some sort of configuration issue on the server. I just deployed the project to a fresh DO droplet and the command works as expected.
It only happened in the production environment for me.
Running:
php artisan clear-compiled
deleted the cached version and solved my issue.
Thanks a ton to @num8er.
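For anyone hitting the same symptom, here is a hedged checklist of the caches that can serve stale code on a Laravel 5.x box (the PHP-FPM service name is only an example and varies per server):

php artisan clear-compiled        # removes the cached compiled.php / services.php in bootstrap/cache
composer dump-autoload            # rebuilds the autoloader for new or renamed classes
php artisan config:clear && php artisan cache:clear
sudo service php7.0-fpm restart   # resets OPCache for web requests; adjust to your PHP version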
I want to achieve task scheduling in my Laravel 5.8 project. For that, I have created a custom artisan command, artisan send:credentials, which sends emails to specific users based on their status.
sendUserCredentials.php
<?php

namespace App\Console\Commands;

use Illuminate\Console\Command;
use App\Mail\credentialsEmail;
use App\Models\userModel;
use Mail;

class sendUserCredentials extends Command
{
    protected $signature = 'send:credentials';

    protected $description = 'Credentials send Successfully!';

    public function __construct()
    {
        parent::__construct();
    }

    public function handle()
    {
        $users = userModel::select(["email","username","role","id"])->where("credentials","NO")->get();

        foreach ($users as $key => $user) {
            Mail::to($user->email)->send(new credentialsEmail($user));
            userModel::where("id",$user->id)->update(["credentials"=>"SEND"]);
        }
    }
}
I registered this command in kernel.php so that I can run it using the Laravel task scheduler.
kernel.php
<?php

namespace App\Console;

use Illuminate\Console\Scheduling\Schedule;
use Illuminate\Foundation\Console\Kernel as ConsoleKernel;

class Kernel extends ConsoleKernel
{
    protected $commands = [
        Commands\sendUserCredentials::class,
    ];

    protected function schedule(Schedule $schedule)
    {
        $schedule->command('send:credentials')
                 ->everyMinute();
    }

    protected function commands()
    {
        $this->load(__DIR__.'/Commands');

        require base_path('routes/console.php');
    }
}
So on my local server everything works like a charm when I run php artisan schedule:run,
but on the shared server, when I run the scheduler using the cron entry * * * * * /path/to/project/artisan schedule:run >> /dev/null 2>&1, it gives me an error like this:
local.ERROR: The Process class relies on proc_open, which is not available on your PHP installation. {"exception":"[object] (Symfony\\Component\\Process\\Exception\\LogicException(code: 0): The Process class relies on proc_open, which is not available on your PHP installation. at /path/to/vendor/vendor/symfony/process/Process.php:143)
BUT when I run the artisan command directly with the cron entry * * * * * /path/to/project/artisan send:credentials >> /dev/null 2>&1, there is no error and the emails are sent successfully!
I am using Laravel 5.8 and deployed my website on Namecheap shared hosting. The following command helped me execute the cron job properly:
*/5 * * * * /usr/local/bin/php /home/YOUR_USER/public_html/artisan schedule:run >> /home/YOUR_USER/public_html/cronjobs.txt
As Namecheap allows a minimum interval of 5 minutes, the command above executes every 5 minutes and the output is written to a text file.
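One hedged follow-up: schedule:run only evaluates which tasks are due at the moment cron invokes it, so with a 5-minute cron an everyMinute() task effectively runs every 5 minutes anyway. Matching the frequency in kernel.php keeps that explicit:

// app/Console/Kernel.php — align the scheduled frequency with the 5-minute cron
$schedule->command('send:credentials')->everyFiveMinutes();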
The Error
The Process class relies on proc_open, which is not available on your PHP installation.
is caused by the Flare error reporting service being enabled in debug mode. To solve this, follow the steps below.
Create the file config/flare.php with the content below:
'reporting' => [
    'anonymize_ips' => true,
    'collect_git_information' => false,
    'report_queries' => true,
    'maximum_number_of_collected_queries' => 200,
    'report_query_bindings' => true,
    'report_view_data' => true,
],
Then clear the bootstrap cache with the commands below:
php artisan cache:clear && php artisan config:clear
Most probably the issue will be solved. Otherwise, check this link.
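If proc_open simply cannot be enabled on the host, another hedged workaround is to avoid shelling out from the scheduler at all: $schedule->command() launches a child process (which needs proc_open), while a scheduled closure runs inside the schedule:run process itself. A minimal sketch using the same send:credentials command:

// app/Console/Kernel.php — hedged sketch: run the work in-process so no child process is spawned
// (the Artisan facade import goes at the top of the file)
use Illuminate\Support\Facades\Artisan;

protected function schedule(Schedule $schedule)
{
    $schedule->call(function () {
        Artisan::call('send:credentials'); // executes inside the schedule:run process
    })->everyFiveMinutes();
}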
I have an application running many different kinds of jobs that take from several seconds up to an hour. This requires me to have two separate queue connections, because retry_after is bound to a connection instead of a queue (even though both connections use Redis, which is annoying but not the issue right now). One connection has a retry_after of 600 seconds, the other 3600 seconds (one hour). My jobs implement ShouldQueue and use the traits Queueable, SerializesModels, Dispatchable, and InteractsWithQueue.
Now for the problem. I created a TestJob that sleeps for 900 seconds to make sure we pass the retry_after limit. When I try to dispatch a job on a specific connection using:
dispatch(new Somejob)->onConnection('redis-long-run')
or as I used to do it previously in the constructor of the job (which always used to work):
public function __construct() {
    $this->onConnection('redis-long-run');
}
The job gets picked up by the queue worker and runs for 600 seconds, after which the worker picks the job up again, notices it has already run once, and fails it. 300 seconds later, the original job processes successfully. If my workers allow for more than one try, the duplicate jobs run in parallel for 300 seconds.
In my test job I'm also printing out $this->connection which does show the correct connection being used so my guess is the broadcaster is just ignoring it completely.
I'm using Laravel 5.8.35 and PHP 7.3 in a Docker environment. Supervisor handles my workers.
Edit: I've confirmed the behavior persists after upgrading to Laravel v6.5.1
Steps To Reproduce:
Set your queue driver to Redis.
Create two different Redis connections in queue.php, one named redis with a retry_after of 600, the other named redis-long-run with a retry_after of 3600. In my case they also have different queues, though I'm not sure this is required for this test.
'connections' => [

    'redis' => [
        'driver' => 'redis',
        'connection' => 'default',
        'queue' => env('REDIS_QUEUE', 'default'),
        'retry_after' => 600,
        'block_for' => null,
    ],

    'redis-long-run' => [
        'driver' => 'redis',
        'connection' => 'default',
        'queue' => 'long_run',
        'retry_after' => 3600,
        'block_for' => null,
    ],
]
Create a little command to dispatch our test job three times
<?php

namespace App\Console\Commands;

use App\Jobs\TestFifteen;
use Illuminate\Console\Command;

class TestCommand extends Command
{
    /**
     * The name and signature of the console command.
     *
     * @var string
     */
    protected $signature = 'test:fifteen';

    /**
     * Create a new command instance.
     */
    public function __construct()
    {
        parent::__construct();
    }

    /**
     * Execute the console command.
     *
     * @return void
     */
    public function handle()
    {
        for ($i = 1; $i <= 3; $i++) {
            dispatch(new TestFifteen($i))->onConnection('redis-long-run');
        }
    }
}
Create the test job
<?php

namespace App\Jobs;

use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Events\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;
use function sleep;
use function var_dump;

class TestFifteen implements ShouldQueue
{
    use Queueable, SerializesModels, Dispatchable, InteractsWithQueue;

    private $testNumber;

    public function __construct($testNumber)
    {
        $this->onConnection('redis-long-run');
        $this->testNumber = $testNumber;
    }

    public function handle()
    {
        var_dump("Started test job {$this->testNumber} on connection {$this->connection}");
        sleep(900);
        var_dump("Finished test job {$this->testNumber} on connection {$this->connection}");
    }
}
Run your queue workers. I use Supervisor with the following config for these workers.
[program:laravel-queue-default]
process_name=%(program_name)s_%(process_num)02d
command=php /var/www/artisan queue:work redis --queue=default --tries=3 --timeout=600
numprocs=8
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
[program:laravel-queue-long-run]
process_name=%(program_name)s_%(process_num)02d
command=php /var/www/artisan queue:work redis --queue=long_run --tries=1 --timeout=3540
numprocs=8
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
Execute the artisan command php artisan test:fifteen
So am I doing something wrong or is the applied connection really not respected?
Also, what's the design philosophy behind not being able to decide on a per-job or per-queue basis what the retry_after should be, and thus being able to use Redis as my actual queue driver? Why can't I pick Redis as my queue handler and decide that queue-1 retries after 60 seconds and queue-2 retries after 120 seconds? It feels so unnatural having to set up two connections for this when they use exactly the same Redis instance and everything.
Anyway, here's hoping someone can shed some light on this issue. Thank you in advance.
From my understanding your connection is always redis, so you should specify the queue instead when dispatching:
dispatch(new TestFifteen($i))->onQueue('long_run');
The connection points at a driver configuration (redis, sync, SQS, etc.), while queues are the named stacks of jobs within that connection.
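If the 3600-second retry_after really is needed, a hedged addition to that: the worker also has to be started on the redis-long-run connection, because retry_after is read from whatever connection the worker was booted with, and the Supervisor config above starts both workers on redis, whose retry_after is 600. A sketch of both sides agreeing:

// Dispatch onto the long-run connection and its queue
dispatch(new TestFifteen($i))->onConnection('redis-long-run')->onQueue('long_run');

# Worker started on the matching connection name
php artisan queue:work redis-long-run --queue=long_run --tries=1 --timeout=3540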
I have a cloned Laravel application, but when I try to generate an APP_KEY via php artisan key:generate it gives me an error:
In EncryptionServiceProvider.php line 42:
No application encryption key has been specified.
Which is obvious because that is exactly what I'm trying to create. Does anybody know how to debug this command?
Update: kind of fixed it with this post: laravel 4: key not being generated with artisan
If I fill in APP_KEY in my .env file, php artisan key:generate works. But a newly created app via laravel new with a deleted APP_KEY can run php artisan key:generate without issue for some reason.
For some reason php artisan key:generate thinks it needs an APP_KEY when it doesn't. It won't run any other commands either; they all error with "No application encryption key has been specified."
php artisan key:generate needs an existing key to work. Fill the APP_KEY with 32 characters and rerun the command to make it work.
Edit: A newly created app via laravel new with a deleted APP_KEY can run php artisan key:generate without issue for some reason.
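A hedged illustration of the 32-character workaround above (the placeholder value is arbitrary and only exists to get past the encrypter check; key:generate then overwrites it with a real key):

# .env — temporary 32-character placeholder
APP_KEY=abcdefghijklmnopqrstuvwxyz123456

php artisan key:generate
php artisan config:clear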
Edit a year later:
The real problem lies in two added service providers. Their boot() methods are badly written, which causes the problem. I'm still not exactly sure why it doesn't work, but I'll try to figure it out for anybody who may have the same problem later.
The two files in question
<?php

namespace App\Providers;

use Illuminate\Pagination\LengthAwarePaginator;
use Illuminate\Support\ServiceProvider;
use Illuminate\Contracts\Routing\ResponseFactory;

class ResponseServiceProvider extends ServiceProvider
{
    public function boot(ResponseFactory $factory){
        parent::boot();
        $factory->macro('api', function ($data=null, $code=null, $message=null) use ($factory) {
            $customFormat = [
                'status' => 'ok',
                'code' => $code ? $code : 200,
                'message' => $message ? $message : null,
                'data' => $data
            ];
            if ($data instanceof LengthAwarePaginator){
                $paginationData = $data->toArray();
                $pagination = isset($paginationData['current_page']) ? [
                    "total" => $paginationData['total'],
                    "per_page" => (int) $paginationData['per_page'],
                    "current_page" => $paginationData['current_page'],
                    "last_page" => $paginationData['last_page'],
                    "next_page_url" => $paginationData['next_page_url'],
                    "prev_page_url" => $paginationData['prev_page_url'],
                    "from" => $paginationData['from'],
                    "to" => $paginationData['to']
                ] : null;
                if ($pagination){
                    $customFormat['pagination'] = $pagination;
                    $customFormat['data'] = $paginationData['data'];
                }
            }
            return $factory->make($customFormat);
        });
    }

    public function register(){
        //
    }
}
<?php

namespace App\Providers;

use App\Http\Controllers\Auth\SocialTokenGrant;
use Laravel\Passport\Bridge\RefreshTokenRepository;
use Laravel\Passport\Bridge\UserRepository;
use Laravel\Passport\Passport;
use Laravel\Passport\PassportServiceProvider;
use League\OAuth2\Server\AuthorizationServer;

/**
 * Class CustomQueueServiceProvider
 *
 * @package App\Providers
 */
class SocialGrantProvider extends PassportServiceProvider{

    /**
     * Bootstrap any application services.
     *
     * @return void
     */
    public function boot(){
        parent::boot();
        app(AuthorizationServer::class)->enableGrantType($this->makeSocialRequestGrant(), Passport::tokensExpireIn());
    }

    /**
     * Register the service provider.
     *
     * @return void
     */
    public function register(){
    }

    /**
     * Create and configure a SocialTokenGrant based on Password grant instance.
     *
     * @return SocialTokenGrant
     */
    protected function makeSocialRequestGrant(){
        $grant = new SocialTokenGrant(
            $this->app->make(UserRepository::class),
            $this->app->make(RefreshTokenRepository::class)
        );

        $grant->setRefreshTokenTTL(Passport::refreshTokensExpireIn());

        return $grant;
    }
}
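Not a confirmed diagnosis, but since both providers resolve services during boot(), one hedged way to keep them from blocking console commands such as key:generate is to skip that work while no application key is configured yet (the guard itself is an assumption, not part of the original code):

// App\Providers\SocialGrantProvider — sketch of a guarded boot()
public function boot(){
    if (empty(config('app.key'))) {
        return; // let key:generate and other setup commands run first
    }

    parent::boot();
    app(AuthorizationServer::class)->enableGrantType($this->makeSocialRequestGrant(), Passport::tokensExpireIn());
}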
php artisan key:generate is a command that creates an APP_KEY value in your .env file.
When you run the composer create-project laravel/laravel command it generates an APP_KEY in the .env file, but when you check out a branch with git or clone an existing project, the .env file is not included, so you have to run php artisan key:generate to create a new APP_KEY.
You changed your question. In that case, you can try this:
php artisan key:generate
php artisan config:cache
If you don't have a vendor folder, then:
1) Install composer dependencies
composer install
2) An application key APP_KEY needs to be generated with the command
php artisan key:generate
3) Open the project in a code editor, rename .env.example to .env, and set the DB name, username, and password for your environment.
4) Run php artisan config:cache to apply the changes.
Check your .env file. Does it exist?
I've taken over a Laravel 5.2 project where handle() was being called successfully with the sync queue driver.
I need a driver that supports dispatch(..)->delay(..) and have attempted to configure both database and beanstalkd, with numerous variations, unsuccessfully - handle() is no longer getting called.
Current setup
I am using Forge for server management and have set up a daemon, which is automatically kept running by Supervisor, for this command:
php /home/forge/my.domain.com/envoyer/current/artisan queue:listen --sleep=3 --tries=3
I've also tried queue:work, naming 'database'/'beanstalkd', with and without specifying --sleep, --tries, and --daemon.
I have an active beanstalkd worker running on forge.
I have set the default driver to beanstalkd in config/queue.php and QUEUE_DRIVER=beanstalkd in my .env from within Envoyer, which has worked fine for other environment variables.
After build deployment Envoyer runs the following commands successfully:
php artisan config:clear
php artisan migrate
php artisan cache:clear
php artisan queue:restart
Debug information
The log my queue:listen daemon produces within .forge says it processed a job!
[2017-07-04 08:59:13] Processed: App\Jobs\GenerateRailwayReport
Where that job class is defined like this:
class GenerateRailwayReport extends Job implements ShouldQueue
{
    use InteractsWithQueue, SerializesModels;

    protected $task_id;

    public function __construct($task_id)
    {
        $this->task_id = $task_id;

        clock("GenerateRailwayReport constructed"); // Logs fine
    }

    public function handle()
    {
        clock("Handling Generation of railway report"); // Never logs

        // Bunch of stuff all commented out during my testing
    }

    public function failed(Exception $e)
    {
        clock("Task failed with exception:"); // Never logs
        clock($e);
    }
}
My beanstalkd worker log within .forge has no output in it.
Nothing in my failed_jobs table.
Really, really appreciate any help at this point!