Laravel Mail Queue Not Working - php

I am using the queue function of Laravel to send email, but I think it is not working: it slows down the page while sending large emails, and no data is being saved in the jobs table. I am using the following code:
Mail::to('test@gmail.com')->queue(new Test($mailContent, $subject));
The configuration in my queue.php file is:
'database' => [
    'driver' => 'database',
    'table' => 'jobs',
    'queue' => 'default',
    'retry_after' => 90,
],
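For the queue to actually write rows into the jobs table instead of sending the mail inline, the queue connection has to be the database one and a worker has to be running. A minimal sketch of the usual setup (the env key name depends on the Laravel version; the commands below are the standard ones, not taken from the original post):

# .env
QUEUE_CONNECTION=database
# (older Laravel versions use QUEUE_DRIVER=database instead)

# create the jobs table and start a worker
php artisan queue:table
php artisan migrate
php artisan queue:work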

Related

Laravel custom failed_jobs table for specific connection

I'm stuck on one issue and haven't found a way around it.
Basically, in my API layer I need to decouple the queues onto 2 different databases in order to keep the backups safe and independent.
For queueing jobs there is no issue; I resolved it by creating 2 different connections in the queue.php configuration file. But I can't find a way to customize the failed_jobs table... it seems only one is expected, without any particular configuration.
'connections' => [
    'database_custom' => [
        'connection' => 'mysql_custom',
        'driver' => 'database',
        'table' => env('QUEUE_TABLE', 'co_jobs'),
        'queue' => 'default',
        'retry_after' => 90,
        // -- add here, potentially, configuration for a custom failed jobs table????
    ],
    'database' => [
        'driver' => 'database',
        'table' => env('QUEUE_TABLE', 'jobs'),
        'queue' => 'default',
        'retry_after' => 90,
    ],
],
/*
|--------------------------------------------------------------------------
| Failed Queue Jobs
|--------------------------------------------------------------------------
|
| These options configure the behavior of failed queue job logging so you
| can control which database and table are used to store the jobs that
| have failed. You may change them to any database / table you wish.
|
*/
'failed' => [
    'database' => env('DB_CONNECTION', 'mysql'),
    'table' => env('QUEUE_FAILED_TABLE', 'failed_jobs'),
],
Has anyone ever experienced the same problem?
Thank you in advance for your help.
Marco
I've tried a lot of possibilities, without any feasible outcome.
To decouple the failed jobs table for two different databases, you can create a new database connection and specify the table name for the failed jobs.
Here's how you can modify the code to achieve this:
'connections' => [
    'database_custom' => [
        'connection' => 'mysql_custom',
        'driver' => 'database',
        'table' => env('QUEUE_TABLE', 'co_jobs'),
        'queue' => 'default',
        'retry_after' => 90,
    ],
    'database' => [
        'driver' => 'database',
        'table' => env('QUEUE_TABLE', 'jobs'),
        'queue' => 'default',
        'retry_after' => 90,
    ],
],
'failed' => [
    'database_custom' => [
        'connection' => 'mysql_custom',
        'table' => env('QUEUE_FAILED_TABLE', 'co_failed_jobs'),
    ],
    'database' => [
        'connection' => 'mysql',
        'table' => env('QUEUE_FAILED_TABLE', 'failed_jobs'),
    ],
],
With this setup you have two separate failed_jobs tables in two different databases, so you can maintain the backups safely and independently.
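If it helps, here is a minimal migration sketch for the custom failed-jobs table. The mysql_custom connection and the co_failed_jobs table name are taken from the config above; the columns are an assumption based on Laravel's default failed_jobs schema (older versions do not have the uuid column):

use Illuminate\Database\Migrations\Migration;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\Schema;

class CreateCoFailedJobsTable extends Migration
{
    // Run this migration against the custom connection
    protected $connection = 'mysql_custom';

    public function up()
    {
        Schema::connection('mysql_custom')->create('co_failed_jobs', function (Blueprint $table) {
            $table->id();
            $table->string('uuid')->unique();
            $table->text('connection');
            $table->text('queue');
            $table->longText('payload');
            $table->longText('exception');
            $table->timestamp('failed_at')->useCurrent();
        });
    }

    public function down()
    {
        Schema::connection('mysql_custom')->dropIfExists('co_failed_jobs');
    }
}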

Queues do not start in Laravel 8

php artisan queue:work doesn't work.
ErrorException: Trying to access array offset on value of type null
vendor/laravel/framework/src/Illuminate/Queue/QueueManager.php:156
protected function resolve($name)
{
    $config = $this->getConfig($name);

    return $this->getConnector($config['driver'])
                ->connect($config)
                ->setConnectionName($name);
}
config/queue.php
<?php

return [
    'default' => env('QUEUE_CONNECTION', 'sync'),

    'connections' => [
        'sync' => [
            'driver' => 'sync',
        ],
        'database' => [
            'driver' => 'database',
            'table' => 'jobs',
            'queue' => 'default',
            'retry_after' => 90,
        ],
        'redis' => [
            'driver' => 'redis',
            'connection' => 'default',
            'queue' => env('REDIS_QUEUE', 'default'),
            'retry_after' => 90,
            'block_for' => null,
        ],
    ],
];
As already mentioned above, this error can be caused by misconfiguration, so try:
check your .env file for the proper QUEUE_CONNECTION.
add the connection name to your work command, e.g. php artisan queue:work database --queue=queue_name to run on the database connection.
In my case, however, I ran into this error when using Supervisor on a production server. The Supervisor process failed with that error, but only for a newly created configuration. I had a configuration that was already running perfectly, so I ended up copying the working configuration and editing it, and that worked. I suspect I had copied incorrectly formatted text, because I pre-created the configurations locally in a text file.
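For reference, a minimal Supervisor program sketch for such a worker; the program name, paths and process count here are assumptions, not taken from the original setup:

[program:laravel-worker]
command=php /var/www/your-app/artisan queue:work database --sleep=3 --tries=3
autostart=true
autorestart=true
numprocs=2
user=www-data
redirect_stderr=true
stdout_logfile=/var/www/your-app/storage/logs/worker.log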

Can we have multiple job tables, each for a specific queue, in Laravel?

I know that we can have multiple queues sharing a single database table. But what I am trying to do is to have every queue use its own separate jobs table in the database. If we can do this, please show me how.
I already tried putting multiple database entries in config/queue.php under connections, as shown in the code.
return [
    'connections' => [
        'database' => [
            'driver' => 'database',
            'table' => 'jobs',
            'queue' => 'run_script',
            'retry_after' => 90,
        ],
        'database' => [
            'driver' => 'database',
            'table' => 'que',
            'queue' => 'notify',
            'retry_after' => 90,
        ],
    ],
];
All I am getting is that both queues (notify and run_script) are being dispatched to the que table. :(
I want the notify queue to go to the que table, whereas the run_script queue goes to the jobs table.
Thank you in advance...
This is an old post, but I believe I can solve your problem: http://laravel.at.jeffsbox.eu/laravel-5-queues-multiple-queues
You can make your queue config something like this:
'connections' => [
    'table1' => [
        'driver' => 'database',
        'table' => 'TABLE1',
        'queue' => 'table1',
        'retry_after' => 90,
    ],
    'table2' => [
        'driver' => 'database',
        'table' => 'table2',
        'queue' => 'normal',
        'retry_after' => 90,
    ],
    'table3' => [
        'driver' => 'database',
        'table' => 'TABLE3',
        'queue' => 'table3',
        'retry_after' => 90,
    ],
],
And then, when dispatching a job, you just do:
$job = (new SomeJob())->onConnection('table1');
dispatch($job);
onConnection('connection_name') lets you pick the connection (and, in your case, the table) you want; onQueue('queue_name') then picks a queue within that connection.
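A note on why the original config only ever used the que table: both entries in the question share the same 'database' array key, so PHP silently keeps only the second one. With unique connection keys as above, you then run one worker per connection, each reading its own table, roughly like this (connection names taken from the config above, everything else assumed):

php artisan queue:work table1
php artisan queue:work table2
php artisan queue:work table3

Without an explicit --queue option each worker falls back to the default queue configured on its connection.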

Laravel + Redis Cache via SSL?

I am trying to connect to Redis with predis 1.1 and SSL, using the information at https://github.com/nrk/predis, where the following example configuration is used:
// Named array of connection parameters:
$client = new Predis\Client([
    'scheme' => 'tls',
    'ssl' => ['cafile' => 'private.pem', 'verify_peer' => true],
]);
My Laravel configuration looks like this:
'redis' => [
    'client' => 'predis',
    'cluster' => env('REDIS_CLUSTER', false),
    'default' => [
        'host' => env('REDIS_HOST', 'localhost'),
        'password' => env('REDIS_PASSWORD', null),
        'port' => env('REDIS_PORT', 6379),
        'database' => 0,
    ],
    'options' => [
        'cluster' => 'redis',
        'parameters' => ['password' => env('REDIS_PASSWORD', null)],
        'scheme' => 'tls',
    ],
],
Unfortunately I am getting the following error:
ConnectionException in AbstractConnection.php line 155:
Error while reading line from the server. [tcp://MY_REDIS_SERVER_URL:6380]
Suggestions are appreciated :)
I was able to get it to work!
You need to move 'scheme' from 'options' to 'default':
My working config:
'redis' => [
    'client' => 'predis',
    'cluster' => env('REDIS_CLUSTER', false),
    'default' => [
        'scheme' => 'tls',
        'host' => env('REDIS_HOST', 'localhost'),
        'password' => env('REDIS_PASSWORD', null),
        'port' => env('REDIS_PORT', 6379),
        'database' => 0,
    ],
    'options' => [
        'parameters' => ['password' => env('REDIS_PASSWORD', null)],
    ],
],
Note: I had also removed the 'cluster' option from 'options', but I don't suspect this to be the make-or-break with this problem.
In my final-final config, I changed it to: 'scheme' => env('REDIS_SCHEME', 'tcp'), and then defined REDIS_SCHEME=tls in my env file instead.
Tested with AWS ElastiCache with TLS enabled.
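For reference, the matching .env entries would look roughly like this (the host and password values are placeholders, not from the original post):

REDIS_SCHEME=tls
REDIS_HOST=your-elasticache-endpoint.cache.amazonaws.com
REDIS_PASSWORD=your-auth-token
REDIS_PORT=6379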
Edit:
The above config only works with single-node redis. If you happen to enable clustering and TLS then you'll need a different config entirely.
'redis' => [
    'client' => 'predis',
    'cluster' => env('REDIS_CLUSTER', false),

    // Note! For single redis nodes, the default is defined here.
    // Keeping it here for clusters will actually prevent the cluster config
    // from being used; it'll assume single node only.
    //'default' => [
    //    ...
    //],

    // #pro-tip: you can use the cluster config even for single instances!
    'clusters' => [
        'default' => [
            [
                'scheme' => env('REDIS_SCHEME', 'tcp'),
                'host' => env('REDIS_HOST', 'localhost'),
                'password' => env('REDIS_PASSWORD', null),
                'port' => env('REDIS_PORT', 6379),
                'database' => env('REDIS_DATABASE', 0),
            ],
        ],
        'options' => [ // Clustering-specific options
            'cluster' => 'redis', // Tells the Redis client lib to follow redirects (from the cluster)
        ],
    ],

    'options' => [
        'parameters' => [ // Parameters provide defaults for the connection factory
            'password' => env('REDIS_PASSWORD', null), // Redirects need the password for the other nodes
            'scheme' => env('REDIS_SCHEME', 'tcp'),    // Redirects must also match the scheme
        ],
    ],
],
Explaining the above:
'client' => 'predis': This specifies which PHP Redis client library to use (predis).
'cluster' => 'redis': This tells Predis to assume server-side clustering, which just means "follow redirects" (e.g. -MOVED responses). When running as a cluster, a node responds with -MOVED to tell you which node to ask for a specific key.
If you don't have this enabled with a Redis cluster, Laravel will throw a -MOVED exception 1/n times, n being the number of nodes in the cluster (it gets lucky and asks the right node every once in a while).
'clusters' => [...]: Specifies a list of nodes, but setting just a 'default' and pointing it at the AWS 'Configuration Endpoint' lets it find any/all other nodes dynamically (recommended for ElastiCache, because you don't know when nodes are comin' or goin').
'options': For Laravel, these can be specified at the top level, the cluster level, and the node level (they get combined in Illuminate before being passed off to Predis).
'parameters': These override the default connection settings/assumptions that Predis uses for new connections. Since we set them explicitly for the 'default' connection, they aren't used there. But for a cluster setup they are critical: a master node may send back a redirect (-MOVED), and unless the parameters carry the password and scheme, Predis assumes its defaults and the new connection to the other node fails.
Thank you CenterOrbit!!
I can confirm the first solution does allow Laravel to connect to a Redis server over TLS. Tested with Redis 3.2.6 on AWS ElastiCache with TLS, configured as single node and single shard.
I can also confirm the second solution does allow Laravel to connect to a Redis Cluster over TLS. Tested with Redis 3.2.6 on AWS ElastiCache with TLS, configured with "Cluster Mode Enabled", 1 shard, 1 replica per shard.
I was receiving the following error when I first tried to implement the cluster solution:
Error: Unsupported operand types
I missed the additional set of array brackets when I moved the "default" settings into the "clusters" array.
INCORRECT
'clusters' => [
    'default' => [
        'scheme' ...
    ]
]
CORRECT
'clusters' => [
    'default' => [
        [
            'scheme' ...
        ]
    ]
]
I hope this saves someone else a bit of troubleshooting time.
The accepted solution by CenterOrbit worked for me; as I was using AWS, I had to add tls:// in my .env:
Laravel
tls://username:password@URL:PORT?database=0
Try it. It will work.

Laravel Iron Queue::push doesn't seem asynchronous

I have a form which allows the user to input some text and upload an image (the image is then resized and sent to TinyPNG.com for optimisation).
Upon clicking the submit button the form sends the data via a jQuery AJAX request. I'd like to show the user a message in the AJAX success callback once the data posting is complete, but without waiting for the image manipulation processes. To do this, I created a Laravel queue with Iron, with the code below:
\Queue::push('RenameClassImage',[$_POST['temp_img_id'], $class_id,$final_path,$_POST['crop_w'],$_POST['crop_h'],$_POST['crop_x'],$_POST['crop_y']]);
Overall everything works fine, except the AJAX success function only triggers AFTER the entire image manipulation process is complete (which takes a really long time).
Below is my queue config file. If you'd like me to include any other code please let me know. Thanks in advance
<?php

return [

    /*
    |--------------------------------------------------------------------------
    | Default Queue Driver
    |--------------------------------------------------------------------------
    |
    | The Laravel queue API supports a variety of back-ends via an unified
    | API, giving you convenient access to each back-end using the same
    | syntax for each one. Here you may set the default queue driver.
    |
    | Supported: "null", "sync", "database", "beanstalkd",
    |            "sqs", "iron", "redis"
    |
    */

    'connections' => [

        'sync' => [
            'driver' => 'sync',
        ],

        'database' => [
            'driver' => 'database',
            'table' => 'jobs',
            'queue' => 'default',
            'expire' => 60,
        ],

        'beanstalkd' => [
            'driver' => 'beanstalkd',
            'host' => 'localhost',
            'queue' => 'default',
            'ttr' => 60,
        ],

        'sqs' => [
            'driver' => 'sqs',
            'key' => 'your-public-key',
            'secret' => 'your-secret-key',
            'queue' => 'your-queue-url',
            'region' => 'us-east-1',
        ],

        'iron' => [
            'driver' => env('QUEUE_DRIVER'),
            'host' => env('QUEUE_HOST'),
            'token' => env('QUEUE_TOKEN'),
            'project' => env('QUEUE_PROJECT'),
            'queue' => env('QUEUE_NAME'),
            'encrypt' => true,
        ],

        'redis' => [
            'driver' => 'redis',
            'connection' => 'default',
            'queue' => 'default',
            'expire' => 60,
        ],

    ],

    /*
    |--------------------------------------------------------------------------
    | Failed Queue Jobs
    |--------------------------------------------------------------------------
    |
    | These options configure the behavior of failed queue job logging so you
    | can control which database and table are used to store the jobs that
    | have failed. You may change them to any database / table you wish.
    |
    */

    'failed' => [
        'database' => 'mysql', 'table' => 'failed_jobs',
    ],

];
In your .env file you have to set the queue driver:
QUEUE_DRIVER=iron
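Since the iron connection in the config above reads everything else from env(), the remaining .env entries would look roughly like this (the values are placeholders, not from the original post):

QUEUE_DRIVER=iron
QUEUE_HOST=your-ironmq-host
QUEUE_TOKEN=your-iron-token
QUEUE_PROJECT=your-project-id
QUEUE_NAME=your-queue-name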
