I am using https://laravel.com/docs/6.x/horizon to manage the queue on my application (using redis backend).
The jobs on the queue are ephemeral: each queued job is supposed to make an API call (to a third party) and write the response to the database.
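For context, each job is roughly this shape (a minimal sketch; the class name, endpoint, and table are placeholders rather than my actual code):

<?php

namespace App\Jobs;

use GuzzleHttp\Client;
use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;
use Illuminate\Support\Facades\DB;

class FetchThirdPartyData implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    public function handle()
    {
        // Call the third-party API (the endpoint is a placeholder).
        $response = (new Client())->get('https://api.example.com/resource');

        // Write the response to the database (table/column names are placeholders).
        DB::table('api_responses')->insert([
            'payload'    => (string) $response->getBody(),
            'created_at' => now(),
        ]);
    }
}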
Today I noticed that lots of jobs are stuck or processing very slowly.
I am running 10 queue workers with sufficient memory on the host node. This is my config/horizon.php:
return [
'domain' => null,
'path' => 'horizon',
'use' => 'default',
'prefix' => env('HORIZON_PREFIX', 'horizon:'),
'middleware' => ['web', 'basic.api.credential.auth'],
'waits' => [
'redis:default' => 60,
],
'trim' => [
'recent' => 60,
'completed' => 60,
'recent_failed' => 40320,
'failed' => 40320,
'monitored' => 40320,
],
'fast_termination' => false,
'memory_limit' => 512,
'environments' => [
'production' => [
'queue-1' => [
'connection' => 'redis',
'queue' => ['default'],
'balance' => 'simple',
'processes' => 10,
'tries' => 2,
'timeout' => 1800, // 30 Minutes
'memory' => 512,
],
],
// ...
],
];
I checked the remote API server to see if the bottleneck is there, but it's responding quite fast (less than 1 second per API call). Checking our own server, I can't see any load on it, and the overall memory/CPU utilisation is quite low.
Laravel v6.18.37
PHP v7.3.21
Horizon v3.7.2
Any ideas what's causing such a huge slowdown? How do I debug this to find out what's going on?
I even tried restarting the server and it did not help. I am not seeing any timeouts / errors in the logs either.
The problem turned out to be in the database: a few tables that we use for lookups were not properly optimised. After adding indexes, the speed improved tremendously.
Sorry, false alarm.
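For reference, the fix was along these lines (a sketch; the table and column names here stand in for our actual lookup tables):

<?php

use Illuminate\Database\Migrations\Migration;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\Schema;

class AddIndexesToLookupTables extends Migration
{
    public function up()
    {
        Schema::table('lookup_values', function (Blueprint $table) {
            // Index the column(s) the queued jobs filter on during lookups.
            $table->index('external_id');
        });
    }

    public function down()
    {
        Schema::table('lookup_values', function (Blueprint $table) {
            $table->dropIndex(['external_id']);
        });
    }
}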
I'm using MongoDB 5.0.9 (per db.version()) with PHP 7.4.30.
I'm working with a queue built on MongoDB, and the architecture allows for scaling, so there can be several workers. My question: how can I protect a record from concurrent queries when two workers try to execute findOneAndUpdate at the same time? Or does findOneAndUpdate provide this guarantee by default?
$document = $this->collection->findOneAndUpdate(
[
'$and' => [
[
'job_type' => 'metrics'
],
$conditionDelivered,
[
'id.pub_key_digest' => [
'$exists' => true
]
],
[
'timestamp' => [
'$gt' => $timestampOffset
]
],
],
],
[
'$set' => $setSection,
],
[
'sort' => [
'timestamp' => 1,
],
'writeConcern' => new WriteConcern(WriteConcern::MAJORITY),
'returnDocument' => FindOneAndUpdate::RETURN_DOCUMENT_AFTER
]
);
$filter
{"$and":[{"job_type":"metrics"},{"alert_delivered_at":{"$exists":false},"timescale_delivered_at":{"$exists":false}},{"id.pub_key_digest":{"$exists":true}},{"timestamp":{"$gt":1291852800000000000}}]}
$update
{"$set":{"alert_consumer_id":"consumer_62b2d4e4520339.76461865","alert_delivered_at":{"$date":{"$numberLong":"1655887076643"}},"timescale_delivered_at":{"$date":{"$numberLong":"1655887076643"}}}}
and $options
{"sort":{"timestamp":1},"writeConcern":{"w":"majority"},"returnDocument":2}
Could you please explain the best way to find and update data with a guarantee that each record will be processed by only one worker (some kind of lock), while the other workers work on different records?
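For context, my understanding is that findOneAndUpdate matches and updates a single document atomically, so the filter above (record not yet delivered) combined with the $set (mark it delivered and record the consumer id) already behaves like a claim. A minimal sketch of the pattern I mean (the database/collection names and worker id are assumptions):

<?php

use MongoDB\Client;
use MongoDB\Operation\FindOneAndUpdate;

$collection = (new Client())->selectCollection('mydb', 'jobs'); // assumed names
$workerId   = 'consumer_' . uniqid();                           // hypothetical worker identifier

// Each worker issues the same atomic call; MongoDB matches and updates one
// document in a single step, so only one worker can claim any given record.
$claimed = $collection->findOneAndUpdate(
    ['job_type' => 'metrics', 'alert_delivered_at' => ['$exists' => false]],
    ['$set' => [
        'alert_consumer_id'  => $workerId,
        'alert_delivered_at' => new MongoDB\BSON\UTCDateTime(),
    ]],
    ['returnDocument' => FindOneAndUpdate::RETURN_DOCUMENT_AFTER]
);

if ($claimed !== null) {
    // This worker owns $claimed; other workers will receive different records.
}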
So I'm using beyondcode/laravel-websockets to set up a WS server, and I want to work with multiple apps, so I did this in config/websockets.php:
'apps' => [
[
'id' => env('A_APP_ID'),
'name' => env('A_APP_NAME'),
'key' => env('A_APP_KEY'),
'secret' => env('A_APP_SECRET'),
'path' => env('A_APP_PATH'),
'capacity' => null,
'enable_client_messages' => false,
'enable_statistics' => true,
],
[
'id' => env('B_APP_ID'),
'name' => env('B_APP_NAME'),
'key' => env('B_APP_KEY'),
'secret' => env('B_APP_SECRET'),
'path' => env('B_APP_PATH'),
'capacity' => null,
'enable_client_messages' => false,
'enable_statistics' => true,
],
],
However, I want to implement custom handlers for each app, and I've been trying this in routes/web.php:
WebSocketsRouter::webSocket('app/{appKey}/bapp', \App\WebSockets\BAppWebSocketHandler::class);
// Also tried this:
WebSocketsRouter::webSocket('app/{appKey}', \App\WebSockets\AAppWebSocketHandler::class);
// and created `AAppWebSocketHandler`, which does nothing but call the parent (WebSocketHandler) methods
The problem is that it always uses one handler for all apps, despite the difference in routes.
Any ideas?
Thanks!
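For reference, the pass-through handler mentioned above is essentially this (a sketch; the base-class namespace assumes laravel-websockets 1.x and may differ in other versions):

<?php

namespace App\WebSockets;

use BeyondCode\LaravelWebSockets\WebSockets\WebSocketHandler;
use Ratchet\ConnectionInterface;

class AAppWebSocketHandler extends WebSocketHandler
{
    public function onOpen(ConnectionInterface $connection)
    {
        // No custom logic yet; just delegates to the package's default handler.
        parent::onOpen($connection);
    }
}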
You don't have to define routes when defining multiple apps in the config. Instead, configure Echo to work with each app's separate key and secret. If you want to use a custom handler with your own logic, then remove the apps from the config. Also note that you won't get any channel or Pusher client library support, and you will have to implement your own authentication as well.
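On the Laravel side, the usual counterpart is one Pusher-style connection per app in config/broadcasting.php, along these lines (a sketch; the host and port assume a locally running websockets server):

'connections' => [
    'pusher_a' => [
        'driver' => 'pusher',
        'key' => env('A_APP_KEY'),
        'secret' => env('A_APP_SECRET'),
        'app_id' => env('A_APP_ID'),
        'options' => [
            'host' => '127.0.0.1', // assumed address of the websockets server
            'port' => 6001,        // assumed default laravel-websockets port
            'scheme' => 'http',
        ],
    ],
    'pusher_b' => [
        'driver' => 'pusher',
        'key' => env('B_APP_KEY'),
        'secret' => env('B_APP_SECRET'),
        'app_id' => env('B_APP_ID'),
        'options' => [
            'host' => '127.0.0.1',
            'port' => 6001,
            'scheme' => 'http',
        ],
    ],
],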
I have two memcached servers. I set up the following config, and my expectation was that my PHP app could use the live memcached server even if one of the two servers went down. But it did not work: I got a "No Memcached servers added" error when I executed Memcached's get() method.
'memcached' => [
'driver' => 'memcached',
'options' => [
Memcached::OPT_CONNECT_TIMEOUT => 10,
Memcached::OPT_DISTRIBUTION => Memcached::DISTRIBUTION_CONSISTENT,
Memcached::OPT_SERVER_FAILURE_LIMIT => 2,
Memcached::OPT_REMOVE_FAILED_SERVERS => true,
Memcached::OPT_RETRY_TIMEOUT => 1,
],
'servers' => [
[
'host' => 'xxx.0.0.1', 'port' => 11211, 'weight' => 100,
],
[
'host' => 'xxx.0.0.2', 'port' => 11211, 'weight' => 100,
],
],
],
I'm using the latest versions of the memcached server and client:
memcached 1.4.25-2ubuntu1
php-memcached version 3.0.0b1
libmemcached version 1.0.18
Do you have any ideas?
Edit 1
The "No Memcached servers added" error comes from here:
https://github.com/illuminate/cache/blob/master/MemcachedConnector.php
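For context, the relevant part of that connector (paraphrased from memory for the Laravel 5.x-era code, not the exact source) is roughly:

// $servers is the 'servers' array from the cache config above.
$memcached = new Memcached;

foreach ($servers as $server) {
    $memcached->addServer($server['host'], $server['port'], $server['weight']);
}

// After the servers are added, getVersion() is used as a health check, and the
// "No Memcached servers added." exception is thrown when that check fails.
if (! is_array($memcached->getVersion())) {
    throw new RuntimeException('No Memcached servers added.');
}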
Edit 2
I found that the Memcached::XXX options are integer values, so the option values were not being passed to the server. I fixed this, but the result did not change:
'options' => array('10', '1', '2', true, '1')
Edit 3
Laravel's example cache settings are mentioned here:
https://github.com/laravel/laravel/blob/master/config/cache.php#L60
Edit 4
I tried it using plain PHP without Laravel, and found that getVersion() returned null when one of the two servers is dead.
<?php
$m = new Memcached();
$m->addServer('127.0.0.1', 11211);
$m->addServer('127.0.0.1', 11212);
$status = $m->getVersion();
if ($status == null){
echo "null";
} else {
echo "not null";
}
* When both of the two servers are alive:
// not null
// $status = array('127.0.0.1:11211' => '1.4.25', '127.0.0.1:11212' => '1.4.14')
* When one of the two servers is dead:
// null
// $status = null
I was struggling with this for a couple of days and finally found a solution: https://packagist.org/packages/fingo/laravel-cache-fallback
It is quite easy to install:
composer require fingo/laravel-cache-fallback
Add the provider Fingo\LaravelCacheFallback\CacheFallbackServiceProvider::class to config/app.php.
The default fallback order is: redis, memcached, database, cookie, file, array. If you need to change the fallback order, publish the vendor config: php artisan vendor:publish --provider="Fingo\LaravelCacheFallback\CacheFallbackServiceProvider"
I'm using Laravel 5.2 with PHP 7; the requirements for the library are php: ~5.6|~7.0 and illuminate/cache: ~5.1.
LE: I've implemented a different way to fail over from one Memcached server to the other, and if both of them are down it will fall back to the 'file' driver.
config/cache.php:
'memcached' => [
'driver' => 'memcached',
// To use the options we need to install the PHP driver: sudo yum install php-memcached
'options' => [
Memcached::OPT_CONNECT_TIMEOUT => 3,
Memcached::OPT_DISTRIBUTION => Memcached::DISTRIBUTION_CONSISTENT,
Memcached::OPT_SERVER_FAILURE_LIMIT => 2,
Memcached::OPT_REMOVE_FAILED_SERVERS => true,
Memcached::OPT_RETRY_TIMEOUT => 1,
],
'servers' => [
[
'host' => 'xxx.xxx.xxx.xxx',
'port' => 11211,
'weight' => 90,
],
]
],
'memcached249' => [
'driver' => 'memcached',
// To use the options we need to install the PHP driver: sudo yum install php-memcached
'options' => [
Memcached::OPT_CONNECT_TIMEOUT => 3,
Memcached::OPT_DISTRIBUTION => Memcached::DISTRIBUTION_CONSISTENT,
Memcached::OPT_SERVER_FAILURE_LIMIT => 2,
Memcached::OPT_REMOVE_FAILED_SERVERS => true,
Memcached::OPT_RETRY_TIMEOUT => 1,
],
'servers' => [
[
'host' => 'xxx.xxx.xxx.xxx',
'port' => 11211,
'weight' => 100,
],
]
],
config/cache_fallback.php:
return [
'fallback_order' => [
'memcached',
'memcached249',
'file',
]
];
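With the fallback provider registered, the calling code stays unchanged; a minimal usage sketch:

use Illuminate\Support\Facades\Cache;

// Reads and writes go to the first healthy store in 'fallback_order'
// (memcached, then memcached249, then file).
$stats = ['visits' => 123];
Cache::put('homepage:stats', $stats, 10);
$stats = Cache::get('homepage:stats');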
The only problem is that if the first memcached server is down, it slows down the entire system (but at least it's not breaking the main server). Hope this helps :)
I'm working with the Queue Component for Yii2, referenced from https://libraries.io/github/cybercog/yii2-queue
I want to know how I can post data to 2-3 different queue URLs for different features.
Do I have to configure 2-3 queue components for that? Please let me know how I can provide multiple URLs in the configuration below:
'components' => [
'queue' => [
'class' => 'UrbanIndo\Yii2\Queue\SqsQueue',
'module' => 'task',
'url' => 'https://sqs.ap-southeast-1.amazonaws.com/123456789012/queue',
'config' => [
'key' => 'AKIA1234567890123456',
'secret' => '1234567890123456789012345678901234567890',
'region' => 'ap-southeast-1',
],
]]
Or, how can I set the URL when posting data to the queue?
Yii::$app->queue->post(new Job(['route' => ['sync' => 'emailSync'], 'data' => $options]));
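What I have in mind is something along these lines (a sketch; the component names and queue URLs are assumptions, and I don't know whether this is the right approach): register the same SqsQueue class under several component IDs, each with its own URL, and post to a specific component.

'components' => [
    'emailQueue' => [
        'class' => 'UrbanIndo\Yii2\Queue\SqsQueue',
        'module' => 'task',
        'url' => 'https://sqs.ap-southeast-1.amazonaws.com/123456789012/email-queue',
        'config' => [
            'key' => 'AKIA1234567890123456',
            'secret' => '1234567890123456789012345678901234567890',
            'region' => 'ap-southeast-1',
        ],
    ],
    'smsQueue' => [
        'class' => 'UrbanIndo\Yii2\Queue\SqsQueue',
        'module' => 'task',
        'url' => 'https://sqs.ap-southeast-1.amazonaws.com/123456789012/sms-queue',
        'config' => [
            'key' => 'AKIA1234567890123456',
            'secret' => '1234567890123456789012345678901234567890',
            'region' => 'ap-southeast-1',
        ],
    ],
],

// Posting would then target a specific component:
Yii::$app->emailQueue->post(new Job(['route' => ['sync' => 'emailSync'], 'data' => $options]));
Yii::$app->smsQueue->post(new Job(['route' => ['sync' => 'smsSync'], 'data' => $options]));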
I'm trying to use an external cache engine, memcached, to power my CakePHP app.
I have an AWS EC2 instance running the app, and also an AWS ElastiCache cluster with one memcached node. The memcache and memcached PHP modules are installed and enabled.
The configuration in the app.php file is as follows:
'Cache' => [
'default' => [
'className' => 'File',
'path' => CACHE,
],
'elastic' => [
'className' => 'Cake\Cache\Engine\MemcachedEngine',
'compress' => false,
'duration' => '+2 minutes',
'groups' => [],
'host' => 'yyy.euw1.cache.amazonaws.com:11211',
'username' => null,
'password' => null,
'persistent' => false,
'prefix' => 'cake_',
'probability' => 100,
'serialize' => 'php',
'servers' => ['yyy.euw1.cache.amazonaws.com:11211'],
'options' => [],
'lock' => true
    ],
],
To decide whether or not to query the database, this condition is used:
if (($car = Cache::read('car', 'elastic')) === false) {
$car = $this->Cars->get();
Cache::write('car', $car, 'elastic');
}
Unfortunately, after a long page load I get this error:
"elastic cache was unable to write to DebugKit\Cache\Engine\DebugEngine cache"
Does anyone know the origin of this error? Can someone guide me through the configuration of memcached for CakePHP using an external cache engine?
Thank you in advance!
Thank you for your reply.
This issue is now closed. We had to allow IP access between the EC2 instance and the ElastiCache cluster.