php artisan queue:work doesn't work.
ErrorException: Trying to access array offset on value of type null
vendor/laravel/framework/src/Illuminate/Queue/QueueManager.php:156
protected function resolve($name)
{
    $config = $this->getConfig($name);

    return $this->getConnector($config['driver'])
                ->connect($config)
                ->setConnectionName($name);
}
config/queue.php
<?php

return [

    'default' => env('QUEUE_CONNECTION', 'sync'),

    'connections' => [

        'sync' => [
            'driver' => 'sync',
        ],

        'database' => [
            'driver' => 'database',
            'table' => 'jobs',
            'queue' => 'default',
            'retry_after' => 90,
        ],

        'redis' => [
            'driver' => 'redis',
            'connection' => 'default',
            'queue' => env('REDIS_QUEUE', 'default'),
            'retry_after' => 90,
            'block_for' => null,
        ],

    ],
];
As already mentioned above, this error is usually caused by a misconfiguration, so try the following (a quick sanity check is sketched after this list):
Check your .env file for the proper QUEUE_CONNECTION value.
Add the connection name to your work command, e.g. php artisan queue:work database --queue=queue_name to run on the database connection.
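If you want to see what the worker will actually resolve, here is a minimal check (a sketch, assuming you run it in php artisan tinker). When the name from QUEUE_CONNECTION has no matching entry under 'connections' in config/queue.php, getConfig() returns null and $config['driver'] raises exactly the ErrorException shown above:

// Hypothetical sanity check, e.g. inside `php artisan tinker`:
$name = config('queue.default');               // the value of QUEUE_CONNECTION
$config = config("queue.connections.{$name}"); // null if the name is not defined
dd($name, $config);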
In my case, however, I ran into this error when using Supervisor on a production server. The Supervisor process failed with that error, but only for a newly created configuration. I had a configuration that was already running perfectly, so I ended up copying the working configuration and editing it, and that worked. I suspect I was copying incorrectly formatted text, because I had pre-created the configurations locally in a text file.
Related
I'm stuck on one issue and haven't found a way around it!
Basically, in my API layer I need to decouple the queues onto 2 different databases in order to keep the backups safe and independent.
For queueing jobs there is no issue; I solved it by creating 2 different connection types in the queue.php configuration file. But I can't find a way to customize the failed_jobs table: it seems only one is supported, without any particular configuration options.
'connections' => [

    'database_custom' => [
        'connection' => 'mysql_custom',
        'driver' => 'database',
        'table' => env('QUEUE_TABLE', 'co_jobs'),
        'queue' => 'default',
        'retry_after' => 90,
        // -- add potential configuration for a custom failed_jobs table here????
    ],

    'database' => [
        'driver' => 'database',
        'table' => env('QUEUE_TABLE', 'jobs'),
        'queue' => 'default',
        'retry_after' => 90,
    ],

],
/*
|--------------------------------------------------------------------------
| Failed Queue Jobs
|--------------------------------------------------------------------------
|
| These options configure the behavior of failed queue job logging so you
| can control which database and table are used to store the jobs that
| have failed. You may change them to any database / table you wish.
|
*/

'failed' => [
    'database' => env('DB_CONNECTION', 'mysql'),
    'table' => env('QUEUE_FAILED_TABLE', 'failed_jobs'),
],
Has anyone ever experienced the same problem?
Thank you in advance for your help.
Marco
I've tried a lot of possibilities, without any success.
To decouple the failed jobs table for two different databases, you can create a new database connection and specify the table name for the failed jobs.
Here's how you can modify the code to achieve this:
'connections' => [

    'database_custom' => [
        'connection' => 'mysql_custom',
        'driver' => 'database',
        'table' => env('QUEUE_TABLE', 'co_jobs'),
        'queue' => 'default',
        'retry_after' => 90,
    ],

    'database' => [
        'driver' => 'database',
        'table' => env('QUEUE_TABLE', 'jobs'),
        'queue' => 'default',
        'retry_after' => 90,
    ],

],

'failed' => [

    'database_custom' => [
        'connection' => 'mysql_custom',
        'table' => env('QUEUE_FAILED_TABLE', 'co_failed_jobs'),
    ],

    'database' => [
        'connection' => 'mysql',
        'table' => env('QUEUE_FAILED_TABLE', 'failed_jobs'),
    ],

],
With this setup, you have two separate failed_jobs tables in two different databases, so you can maintain the backups safely and independently.
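For completeness, a minimal sketch of pushing work onto the custom connection and processing it with a dedicated worker (SomeJob and $payload are placeholders, not names from the thread):

// Dispatch onto the custom connection so the job lands in the co_jobs table:
SomeJob::dispatch($payload)->onConnection('database_custom');

// ...and process it with a dedicated worker:
// php artisan queue:work database_custom --queue=default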
I'm a bit confused about how to run a job only once: when I set the "tries" parameter to 1 and the job fails, it executes one more time. If I set the tries parameter to 3, the job runs 4 times. And finally, if I set it to 0, the job runs indefinitely. Below are my settings in config/horizon.php:
'production' => [
    'default' => [
        'connection' => 'redis',
        'queue' => [
            'default',
            'notifications',
            'dom',
        ],
        'balance' => 'auto',
        'maxProcesses' => env('MAX_PROCESSES', 45),
        'timeout' => 60,
        'tries' => 1,
    ],
],
And below are my settings in config/queue.php:
'redis' => [
    'driver' => 'redis',
    'connection' => 'default',
    'queue' => 'default',
    'retry_after' => 90,
],
And another question: which setting triggers the "has been attempted too many times or run too long" error?
Just set a $tries = 1 attribute on the Job, and when you catch a possible error, call $this->fail();
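A minimal sketch of what that looks like (ProcessReport is a hypothetical job name and the try block is a placeholder for your own work):

<?php

namespace App\Jobs;

use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;
use Throwable;

class ProcessReport implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    // Run the job at most once; a job-level $tries takes precedence over the worker/Horizon setting.
    public $tries = 1;

    public function handle()
    {
        try {
            // ... the actual work ...
        } catch (Throwable $e) {
            // Mark the job as failed immediately instead of letting it be retried.
            $this->fail($e);
        }
    }
}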
I am using the queue function of Laravel to send email, but I think it is not working, because it slows down the page when sending large emails and no data is being saved in the jobs table. I am using the following code:
Mail::to('test@gmail.com')->queue(new Test($mailContent, $subject));
The configuration in the queue.php file is:
'database' => [
    'driver' => 'database',
    'table' => 'jobs',
    'queue' => 'default',
    'retry_after' => 90,
],
I have memcached selected as my cache driver. However, I ran into a weird issue.
When I do:
Cache::put('name','John',15);
In the very next line, if I run
var_dump(Cache::get('name'))
it shows me:
bool(false)
I can't understand what's going wrong here. I have memcached running on port 11211 on my localhost, which I can telnet to.
Also, phpinfo() shows that the php-memcached extension is installed.
My config/cache.php file reads:
'default' => env('CACHE_DRIVER', 'memcached'),

'stores' => [

    'apc' => [
        'driver' => 'apc',
    ],

    'array' => [
        'driver' => 'array',
    ],

    'database' => [
        'driver' => 'database',
        'table' => env('CACHE_DATABASE_TABLE', 'cache'),
        'connection' => env('CACHE_DATABASE_CONNECTION', null),
    ],

    'file' => [
        'driver' => 'file',
        'path' => storage_path('framework/cache'),
    ],

    'memcached' => [
        'driver' => 'memcached',
        'servers' => [
            [
                'host' => env('MEMCACHED_HOST', '127.0.0.1'),
                'port' => env('MEMCACHED_PORT', 11211),
                'weight' => 100,
            ],
        ],
    ],

    'redis' => [
        'driver' => 'redis',
        'connection' => env('CACHE_REDIS_CONNECTION', 'default'),
    ],

],

'prefix' => env('CACHE_PREFIX', 'laravel'),
Please help.
You have a typo. The method to set a value in the cache is put(), but you used get() twice. Try this:
Cache::put('name','John',15);
Finally, after an entire day of Googling, I found the solution.
It seems I had to add the following line to bootstrap/app.php:
$app->configure('cache');
Also, please note that if you are running your application inside a VM/Docker container, you need to provide the host IP.
I spent 3 hours figuring out why my code was not working and couldn't get the data from the cache, and I finally figured out why:
Cache::put('name','John',15);
When you do that, you put the value in the cache for only 15 seconds (or 15 minutes, depending on your Laravel version).
You also have to check that you have permission on the storage folder:
sudo chown -R www-data:www-data storage
You can verify by inserting the data into the cache manually; you will see that the data was indeed cached, but its TTL has already expired.
Good luck
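A minimal sketch that sidesteps the ambiguity by passing an explicit expiry (assuming a reasonably recent Laravel, 5.5+, where the now() helper exists; on 5.8+ an integer TTL means seconds, on older versions minutes):

use Illuminate\Support\Facades\Cache;

// Be explicit about the TTL instead of relying on how the integer is interpreted:
Cache::put('name', 'John', now()->addMinutes(15)); // expires in 15 minutes
// or cache the value without an expiry:
Cache::forever('name', 'John');

var_dump(Cache::get('name')); // string(4) "John"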
I am trying to connect to Redis with Predis 1.1 and SSL, using the information at https://github.com/nrk/predis, where the example uses the following configuration:
// Named array of connection parameters:
$client = new Predis\Client([
    'scheme' => 'tls',
    'ssl' => ['cafile' => 'private.pem', 'verify_peer' => true],
]);
My Laravel configuration looks like this:
'redis' => [

    'client' => 'predis',
    'cluster' => env('REDIS_CLUSTER', false),

    'default' => [
        'host' => env('REDIS_HOST', 'localhost'),
        'password' => env('REDIS_PASSWORD', null),
        'port' => env('REDIS_PORT', 6379),
        'database' => 0,
    ],

    'options' => [
        'cluster' => 'redis',
        'parameters' => ['password' => env('REDIS_PASSWORD', null)],
        'scheme' => 'tls',
    ],

],
Unfortunately I am getting the following error:
ConnectionException in AbstractConnection.php line 155:
Error while reading line from the server. [tcp://MY_REDIS_SERVER_URL:6380]
Suggestions are appreciated :)
I was able to get it to work!
You need to move 'scheme' from 'options' to 'default':
My working config:
'redis' => [

    'client' => 'predis',
    'cluster' => env('REDIS_CLUSTER', false),

    'default' => [
        'scheme' => 'tls',
        'host' => env('REDIS_HOST', 'localhost'),
        'password' => env('REDIS_PASSWORD', null),
        'port' => env('REDIS_PORT', 6379),
        'database' => 0,
    ],

    'options' => [
        'parameters' => ['password' => env('REDIS_PASSWORD', null)],
    ],

],
Note: I had also removed the 'cluster' option from 'options', but I don't suspect this to be the make-or-break with this problem.
In my final-final config, I changed it to: 'scheme' => env('REDIS_SCHEME', 'tcp'), and then defined REDIS_SCHEME=tls in my env file instead.
Tested with AWS ElastiCache with TLS enabled.
Edit:
The above config only works with single-node redis. If you happen to enable clustering and TLS then you'll need a different config entirely.
'redis' => [

    'client' => 'predis',
    'cluster' => env('REDIS_CLUSTER', false),

    // Note! For single redis nodes, the default is defined here.
    // Keeping it here for clusters will actually prevent the cluster config
    // from being used; it'll assume single node only.
    //'default' => [
    //    ...
    //],

    // #pro-tip: you can use the cluster config even for single instances!
    'clusters' => [
        'default' => [
            [
                'scheme' => env('REDIS_SCHEME', 'tcp'),
                'host' => env('REDIS_HOST', 'localhost'),
                'password' => env('REDIS_PASSWORD', null),
                'port' => env('REDIS_PORT', 6379),
                'database' => env('REDIS_DATABASE', 0),
            ],
        ],
        'options' => [ // Clustering-specific options
            'cluster' => 'redis', // This tells the Redis client lib to follow redirects (from the cluster)
        ],
    ],

    'options' => [
        'parameters' => [ // Parameters provide defaults for the connection factory
            'password' => env('REDIS_PASSWORD', null), // Redirects need the password for the other nodes
            'scheme' => env('REDIS_SCHEME', 'tcp'),    // Redirects also must match the scheme
        ],
    ],
],
Explaining the above:
'client' => 'predis': This specifies the PHP Library Redis driver to use (predis).
'cluster' => 'redis': This tells Predis to assume server-side clustering. Which just means "follow redirects" (e.g. -MOVED responses). When running with a cluster, a node will respond with a -MOVED to the node that you must ask for a specific key.
If you don't have this enabled with Redis Clusters, Laravel will throw a -MOVED exception 1/n times, n being the number of nodes in the Redis cluster (it'll get lucky and ask the right node every once in a while).
'clusters' => [...] : Specifies a list of nodes, but setting just a 'default' and pointing it to the AWS 'Configuration endpoint' will let it find any/all other nodes dynamically (recommended for Elasticache, because you don't know when nodes are comin' or goin').
'options': For Laravel, these can be specified at the top level, the cluster level, and the node level (they get combined in Illuminate before being passed off to Predis).
'parameters': These 'override' the default connection settings/assumptions that Predis uses for new connections. Since we set them explicitly for the 'default' connection, these aren't used. But for a cluster setup, they are critical. A 'master' node may send back a redirect (-MOVED) and unless the parameters are set for password and scheme it'll assume defaults, and that new connection to the new node will fail.
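Once a config like the above is in place, a quick smoke test (just a sketch; the key name is arbitrary and you could run it from php artisan tinker) confirms that the TLS connection and the cluster redirects work:

use Illuminate\Support\Facades\Redis;

// Write and read back a throwaway key over the configured connection:
Redis::connection()->set('healthcheck', 'ok');
var_dump(Redis::connection()->get('healthcheck')); // string(2) "ok"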
Thank you CenterOrbit!!
I can confirm the first solution does allow Laravel to connect to a Redis server over TLS. Tested with Redis 3.2.6 on AWS ElastiCache with TLS, configured as single node and single shard.
I can also confirm the second solution does allow Laravel to connect to a Redis Cluster over TLS. Tested with Redis 3.2.6 on AWS ElastiCache with TLS, configured with "Cluster Mode Enabled", 1 shard, 1 replica per shard.
I was receiving the following error when I first tried to implement the cluster solution:
Error: Unsupported operand types
I missed the additional set of array brackets when I moved the "default" settings into the "clusters" array.
INCORRECT:

'clusters' => [
    'default' => [
        'scheme' ...
    ]
]

CORRECT:

'clusters' => [
    'default' => [
        [
            'scheme' ...
        ]
    ]
]
I hope this saves someone else a bit of troubleshooting time.
The accepted solution by CenterOrbit worked for me. As I was using AWS, I had to add tls:// in my Laravel .env:
tls://username:password@URL:PORT?database=0
Try it. It will work.