Laravel Redis Jobs are not Being Queued - PHP

I am using Laravel with Phpredis and I've created a webhook that adds a job to the queue. I've followed the docs for the configuration, but my jobs are not being queued.
.env
QUEUE_CONNECTION=redis
config/database.php
'client' => env('REDIS_CLIENT', 'phpredis'),
config/queue.php
...
'connections' => [
    ...
    'redis' => [
        'driver' => 'redis',
        'connection' => 'default',
        'queue' => env('REDIS_QUEUE', 'default'),
        'retry_after' => 90,
        'block_for' => null,
    ],
    ...
],
...
I am using Windows with XAMPP, and redis-server.exe is running. This is what I get when a job is added to the queue:
[9672] 03 Nov 21:44:03 - Accepted 127.0.0.1:52945
[9672] 03 Nov 21:44:03 - Client closed connection
This is how I'm adding jobs to the queue:
ProcessPhotos::dispatch($settings, $data, $id);
And this is how I'm trying to run the queued jobs:
php artisan queue:work
or
php artisan queue:listen
I run one of the commands above and nothing happens, and I also don't get any errors. It's as if the queue is empty (I've also checked the queue length and got 0; see the sketch below).
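For reference, the queue length can be checked either through the Queue facade or with a raw LLEN on the list Laravel pushes jobs onto (a minimal sketch, assuming the default connection and queue name):
use Illuminate\Support\Facades\Queue;
use Illuminate\Support\Facades\Redis;
// Number of pending jobs on the "default" queue, as Laravel sees it
$pending = Queue::size('default');
// The same information straight from Redis: pending jobs sit in a list named "queues:<queue>"
$length = Redis::connection()->llen('queues:default');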
I've also tried setting a key in Redis directly and that worked. Does anybody know what's happening? I'm thinking of moving to the database driver if I can't get this solved ...

I've fixed the issue!
It turned out to be a problem with the Redis server itself: reinstalling the Redis extension alone didn't help, but changing the server version did.
I reinstalled the Redis extension from here and switched to this server version. The rest of the settings are the same as in my post above.

Related

Laravel Job throwing Symfony\Component\Process\Exception\ProcessTimedOutException

I have a web application that runs a job to convert videos into HLS using the aminyazdanpanah/php-ffmpeg-video-streaming package. However, after about 2 minutes the job fails and throws this error:
Symfony\Component\Process\Exception\ProcessTimedOutException: The process '/usr/bin/ffmpeg -y -i...'
exceeded the timeout of 300 seconds. in /var/www/vendor/symfony/process/Process.php:1206
The Laravel job has its timeout set to 7200 seconds.
My supervisor setup also specifies a timeout of 7200s:
[program:app_worker]
process_name=%(program_name)s_%(process_num)02d
command=php /var/www/artisan queue:work --tries=1 --timeout=7200 --memory=2000
autostart=true
autorestart=true
I have also set PHP's max_execution_time to 7200 in the ini file.
In the job's handle() function I also call set_time_limit(7200); to set the time limit (see the sketch below).
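For context, the timeout on the job itself is typically declared like this (a minimal sketch only; ConvertVideoToHls is a made-up name standing in for the real job class):
use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
class ConvertVideoToHls implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable;
    // Worker-level timeout for this job, in seconds
    public $timeout = 7200;
    public function handle()
    {
        set_time_limit(7200); // PHP-level limit, as mentioned above
        // ... HLS conversion with php-ffmpeg-video-streaming goes here ...
    }
}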
I have restarted the queue worker and cleared my cache but that doesn't seem to solve the issue.
It seems Symfony just ignores the timeout specification from Laravel.
I noticed that it failed after about 2 minutes because in my config/queue.php file the Redis retry_after value was set to 90:
'redis' => [
    'driver' => 'redis',
    'connection' => 'default',
    'queue' => env('REDIS_QUEUE', 'default'),
    'retry_after' => 90,
    'block_for' => null,
    'after_commit' => false,
],
I increased that to 3600, which stopped the job from failing after 2 minutes, but it still kept failing after 300 seconds.
I later traced that timeout down to aminyazdanpanah/php-ffmpeg-video-streaming's FFMpeg::create().
By default, the function sets a timeout of 300 seconds, so I had to pass a config array to the function to increase the timeout:
use Streaming\FFMpeg;
$ffmpeg = FFMpeg::create([
    'timeout' => 3600,
]);
And this solved the timeout issue.

Timed out job hangs for 15 or 30 minutes and then runs

Horizon Version: 3.7.2 / 3.4.7
Laravel Version: 6.17.0
PHP Version: 7.4.4
Redis Driver & Version: predis 1.1.1 / phpredis 5.2.1
Database Driver & Version: -
We are having strange errors with our Horizon. Basically this is what happens:
- A job is queued and starts processing.
- After 90 seconds (our timeout config value) it times out.
- After 120 seconds (our retry_after value) the job is retried.
- The retried job succeeds.
- After 15 or 30 minutes, the original job (the one that timed out) finishes, actually running the job again.
This seems to happen with any kind of job. For example, if a mailable is queued, the user gets the email first, and then 15 or 30 minutes later gets the same email again.
Here are our config files:
config/database.php:
'redis' => [
    'client' => env('REDIS_CLIENT', 'predis'),
    'default' => [
        'host' => env('REDIS_HOST', '127.0.0.1'),
        'password' => env('REDIS_PASSWORD', null),
        'port' => env('REDIS_PORT', 6379),
        'database' => 0,
    ],
],
config/queue.php:
'redis' => [
    'driver' => 'redis',
    'connection' => 'default',
    'queue' => env('DEFAULT_QUEUE_NAME', 'default'),
    'retry_after' => 120, // 2 minutes
    'block_for' => null,
],
config/horizon.php:
'environments' => [
    'production' => [
        'supervisor-1' => [
            'connection' => env('HORIZON_CONNECTION', 'redis'),
            'queue' => [env('DEFAULT_QUEUE_NAME', 'default')],
            'balance' => 'simple',
            'processes' => 10,
            'tries' => 3,
            'timeout' => 90,
        ],
    ],
],
Here is how it looks in the Horizon dashboard (screenshots not included): when the initial job times out, it stays that way in Recent Jobs while the retries are working. After almost half an hour the entry changes. The tags are the same; I just blacked out the names.
Here are the logs we are seeing (times are in UTC):
[2020-04-22 11:24:59][88] Processing: App\Mail\ReservationInformation
[2020-04-22 11:29:00][88] Failed: App\Mail\ReservationInformation
[2020-04-22 11:29:00][88] Processing: App\Mail\ReservationInformation
[2020-04-22 11:56:21][88] Processed: App\Mail\ReservationInformation
Note: with Predis we also see some logs like Error while reading line from the server. [tcp://REDIS_HOST:6379], but with PHPRedis there were none.
We tried a lot of different combinations to narrow the problem down, and it happened with every one of them, so we think it must be something in Horizon.
We tried:
- Predis with Redis 5 and Redis 3
- Predis with different read_write_timeout values
- PHPRedis with Redis 5 and Redis 3
- A server with THP disabled (THP was enabled on one server, so we repeated all combinations on one where it was disabled)
- Upgrading from Laravel 6.11 and Horizon 3.4.7 to Laravel 6.14 and Horizon 3.7.2
There is only one instance of Horizon running, and no other queue is handled by this Horizon instance.
Any information or tips to try are welcome!
For us this turned out to be a configuration issue in our own systems. We were using OpenShift and Docker, and we adjusted these kernel TCP keepalive values in our containers/systems:
net.ipv4.tcp_keepalive_intvl
net.ipv4.tcp_keepalive_probes
net.ipv4.tcp_keepalive_time
and for now everything works normally.
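The exact values aren't given above; purely as an illustration, the keepalive settings can be tightened with sysctl along these lines (the numbers here are hypothetical, not the ones used by the poster):
sysctl -w net.ipv4.tcp_keepalive_time=120
sysctl -w net.ipv4.tcp_keepalive_intvl=30
sysctl -w net.ipv4.tcp_keepalive_probes=3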

Problem with Laravel queued Jobs: strange behavior that differs between dev and production

I see strange behavior when a job runs: on the dev server (Windows 7, PHP 7.2.10) everything works fine, but on the production server (Linux CentOS, PHP 7.0.10) it throws an exception:
Illuminate\Queue\MaxAttemptsExceededException: A queued job has been attempted too many times. The job may have previously timed out.
config/queue.php
'database' => [
    'driver' => 'database',
    'table' => 'jobs',
    'queue' => 'default',
    'retry_after' => 90,
],
This happens after a job is queued: it starts working and after about 30 seconds it fails, and the exception ends up in the failed_jobs table.
I thought it could depend on the PHP max_execution_time directive, but when I run
php -r "echo ini_get('max_execution_time') . PHP_EOL;"
it shows zero (no timeout, which is correct).
The job is queued this way:
dispatch((new Syncronize($file))->onQueue('sync'));
The Syncronize job has no timeout (and only 1 try) and simply calls two artisan commands, which work perfectly on both the prod and the dev server when called from the shell.
https://pastebin.com/mnaHWq71
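The actual job code is in the pastebin link above; purely as an illustration, a job of that shape would look roughly like this (the two artisan command names here are made up):
use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Support\Facades\Artisan;
class Syncronize implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable;
    public $tries = 1;
    protected $file;
    public function __construct($file)
    {
        $this->file = $file;
    }
    public function handle()
    {
        // Hypothetical command names - the real ones are in the pastebin above
        Artisan::call('sync:import', ['file' => $this->file]);
        Artisan::call('sync:process');
    }
}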
To start the workers on the dev server I use
php artisan queue:work --queue=sync,newsletter,default
On the prod server I use this:
https://pastebin.com/h7uv5gca
Any idea what the cause could be?
Found the problem: it was in my service script, /etc/init.d/myservice:
cd /var/www/html/
case "$1" in
    start)
        php artisan queue:work --queue=sync,newsletter,default &
        echo $! > /var/run/myservice.pid
        echo "server daemon started"
        ;;
I didn't check whether the process was already running, so the script was launching the worker twice. I saw two processes in ps aux, and it seems that this was the cause.
Adding this check solved it:
if [ -e /var/run/myservice.pid ]; then
    echo "Service is running. Call stop first."
    exit 1
fi
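Put together, the start case of the service script now looks something like this (the same commands as above, just with the guard in front):
start)
    if [ -e /var/run/myservice.pid ]; then
        echo "Service is running. Call stop first."
        exit 1
    fi
    php artisan queue:work --queue=sync,newsletter,default &
    echo $! > /var/run/myservice.pid
    echo "server daemon started"
    ;;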

spatie/laravel-backup: "mysqldump" isn't recognized when I run it through the Artisan class

I'm using spatie/laravel-backup on a WAMP localhost.
It works fine when I type this manually in the Windows cmd:
php artisan backup:run
But when I try to run the backup using the laravel Artisan class:
Artisan::call('backup:run');
It throws an error:
'mysqldump' not recognized ...
In the Laravel mysql config I've also specified the path to the dump binary:
'mysql' => [
    'driver' => 'mysql',
    // ...
    'dump' => [
        'dump_binary_path' => 'E:/wamp/wamp64/bin/mysql/mysql5.7.9/bin',
    ],
],
How can I fix that?
EDIT
It's probably just a Windows support "bug" (found out thanks to Loek's answer), as the author says. So can I safely run a backup from a controller, without the command, the same way the command itself does? Maybe with something like:
use Spatie\Backup\Tasks\Backup\BackupJobFactory;
BackupJobFactory::createFromArray(config('laravel-backup'))->run();
I believe it's the forward slashes. Try this:
'mysql' => [
    'driver' => 'mysql',
    // ...
    'dump' => [
        'dump_binary_path' => 'E:\\wamp\\wamp64\\bin\\mysql\\mysql5.7.9\\bin',
    ],
],
EDIT
Support for Windows is wonky at best, with multiple "This package doesn't support Windows" comments from the creators on GitHub issues. This one is the latest: https://github.com/spatie/laravel-backup/issues/311
It could also be a permissions problem: executing from the command line probably happens as a different user than executing from the web server, so Windows is denying access to mysqldump.
2nd edit
As long as you make sure the controller only gets called when it needs to be, I don't see why this wouldn't work!
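For completeness, wiring the snippet from the question into a controller action could look roughly like this (BackupController and its return value are made up; the BackupJobFactory call is the one from the question above, which is roughly what the backup:run command does internally):
use Spatie\Backup\Tasks\Backup\BackupJobFactory;
class BackupController extends Controller
{
    public function run()
    {
        // Build the backup job from the package config and run it
        BackupJobFactory::createFromArray(config('laravel-backup'))->run();
        return 'Backup finished';
    }
}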

Laravel Redis configuration

I am currently creating an app with Laravel and Redis. Almost everything is working fine. I extended the Authentication as explained in the documentation, users can subscribe, login, logout ... I can create content and everything is stored in Redis.
But I have one issue: I can't run commands like "php artisan route:list"; I get the error message "[InvalidArgumentException] Database [redis] not configured.".
The question is: is there anything special to do to make Artisan commands work when you set Redis as your database? (The basic configuration explained in the documentation has been done, and almost everything else is working fine.)
Config:
In config/database.php I have:
return [
    ...
    'default' => 'redis',
    ...
    'redis' => [
        'cluster' => false,
        //'connection' => 'default',
        'default' => [
            'host' => '127.0.0.1',
            'port' => 6379,
            'database' => 7,
        ],
    ],
    ...
PS: You get the same error when you try to access /password/email (the password reset URL):
InvalidArgumentException in DatabaseManager.php line 246:
Database [redis] not configured.
As Robert says in the comments, it looks like this error occurs because Laravel does not support Redis as a database driver for the default database connection.
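In other words, 'default' in config/database.php needs to point at a relational connection (mysql, sqlite, ...), while Redis keeps its own 'redis' block and is used through the Redis facade (or the cache/session/queue drivers). A minimal sketch of that split, assuming a MySQL connection exists:
// config/database.php
'default' => 'mysql', // relational connection used by Eloquent, migrations, artisan, ...
'redis' => [
    'cluster' => false,
    'default' => [
        'host' => '127.0.0.1',
        'port' => 6379,
        'database' => 7,
    ],
],
// elsewhere in the app, Redis data is read and written via the facade:
use Illuminate\Support\Facades\Redis;
Redis::set('content:1', 'stored in Redis');
$value = Redis::get('content:1');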
