In my Symfony project, there is a queue message handler, and I have an error that randomly appears during the execution:
[2022-10-12T07:31:40.060119+00:00] console.CRITICAL: Error thrown while running command "messenger:consume async --limit=10". Message: "Library error: a socket error occurred" {"exception":"[object] (Symfony\\Component\\Messenger\\Exception\\TransportException(code: 0): Library error: a socket error occurred at /var/www/app/vendor/symfony/amqp-messenger/Transport/AmqpReceiver.php:62)
[previous exception] [object] (AMQPException(code: 0): Library error: a socket error occurred at /var/www/app/vendor/symfony/amqp-messenger/Transport/Connection.php:439)","command":"messenger:consume async --limit=10","message":"Library error: a socket error occurred"} []
The handler executes HTTP requests that can last several seconds, and processing a single message can take more than a minute if the APIs are slow. The strange thing is that the problem disappears for hours and then randomly appears again. The more messages are sent to the queue, the easier it is to trigger the exception.
config/packages/messenger.yaml
framework:
    messenger:
        transports:
            # https://symfony.com/doc/current/messenger.html#transport-configuration
            async:
                dsn: "%env(MESSENGER_TRANSPORT_DSN)%"
                options:
                    exchange:
                        name: async_exchange
                    queues:
                        async: ~
                    heartbeat: 45
                    write_timeout: 90
                    read_timeout: 90
                retry_strategy:
                    max_retries: 0
        routing:
            # Route your messages to the transports
            'App\Message\MessageUpdateRequest': async
src/MessageHandler/MessageUpdateRequestHandler.php
<?php

declare(strict_types=1);

namespace App\MessageHandler;

use App\Message\MessageUpdateRequest;
use Symfony\Component\Messenger\Handler\MessageHandlerInterface;

class MessageUpdateRequestHandler implements MessageHandlerInterface
{
    public function __invoke(MessageUpdateRequest $message)
    {
        // Logic executing API requests...
        return 0;
    }
}
Environment
Symfony Messenger: 5.4.17
PHP: 8.1
RabbitMQ: 3.11.5
Things that I tried
upgrading Symfony Messenger to 5.4.17, using the fix available here;
adding the heartbeat, write_timeout and read_timeout options shown above to the messenger.yaml file.
Related issues/links
https://github.com/php-amqp/php-amqp/issues/258
https://github.com/symfony/symfony/issues/32357
https://github.com/symfony/symfony/pull/47831
How can I fix this issue?
For a socket error in Symfony Messenger, I always suggest a step-wise approach, checking whether you have missed anything; it fixes this type of error almost every time. Please follow these guidelines:
Verify that the RabbitMQ service is active and accessible.
Verify that the hostname, port, username, and password listed in the messenger.yaml file are accurate.
Increase the heartbeat, write_timeout, and read_timeout settings in the messenger.yaml file (a sketch of keeping workers short-lived follows this list).
Verify your use case to determine whether the max_retries number in messenger.yaml is appropriate.
Look for any network problems that could be causing the socket error.
Make sure your PHP version is compatible with RabbitMQ and Symfony Messenger.
Verify that the server's resources (CPU, memory, and disk) are not exhausted.
Look for any relevant error messages in the PHP error log.
Determine whether there is a problem with the logic in the MessageUpdateRequestHandler class.
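On the timeout point: if a handler blocks on slow HTTP calls for longer than the heartbeat interval, the broker can drop the AMQP connection, and the worker only notices on its next socket read. A hedged sketch of keeping workers short-lived with messenger:consume's built-in limits (the exact values are assumptions; tune them to your workload):

# Restart the worker regularly so it never reuses a stale AMQP socket;
# a process manager such as supervisor should relaunch it on exit.
php bin/console messenger:consume async --limit=10 --time-limit=300 --memory-limit=128M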
Tip: Still stuck? Try reproducing the error with a smaller message set in a controlled environment to isolate a root cause we may be missing.
Hope it helps. Happy debugging and good luck.
Related
I'm using a worker with the Symfony 4 messenger component.
This worker:
receives a message (from RabbitMQ),
launches ffmpeg,
applies a treatment to a video,
and saves something in a database.
To configure this worker in Symfony I've done this (the middleware are important):
# config/packages/framework.yaml
framework:
    messenger:
        buses:
            command_bus:
                middleware:
                    # each time a message is handled, the Doctrine connection
                    # is "pinged" and reconnected if it's closed. Useful
                    # if your workers run for a long time and the database
                    # connection is sometimes lost
                    - doctrine_ping_connection
                    # After handling, the Doctrine connection is closed,
                    # which can free up database connections in a worker,
                    # instead of keeping them open forever
                    - doctrine_close_connection
        transports:
            ffmpeg:
                dsn: '%env(CLOUDAMQP_URL)%'
                options:
                    auto_setup: false
                    exchange:
                        name: amq.topic
                        type: topic
                    queues:
                        ffmpeg: ~
        routing:
            # Route your messages to the transports, for now all are AMQP messages
            'App\Api\Message\AMQPvideoFFMPEG': ffmpeg

## Handle multiple buses? https://symfony.com/doc/current/messenger/multiple_buses.html
## When queries and commands should be distinguished
Then, in order to understand what may cause this issue, I tried to debug the messenger to see whether the middleware are correctly configured:
root@b9eec429cb54:/var/www/html# php bin/console debug:messenger
Messenger
=========
command_bus
-----------
The following messages can be dispatched:
------------------------------------------------------
App\Api\Message\AMQPvideoFFMPEG
handled by App\Api\Message\Handler\FFMPEGHandler
------------------------------------------------------
Everything seems OK, right?
So how is it possible to see this:
[2019-08-23 10:25:26] messenger.ERROR: Retrying App\Api\Message\AMQPvideoFFMPEG - retry #1. {"message":"[object] (App\Api\Message\AMQPvideoFFMPEG: {})","class":"App\Api\Message\AMQPvideoFFMPEG","retryCount":1,"error":"[object] (Doctrine\DBAL\Exception\ConnectionException(code: 0): An exception occurred in driver: SQLSTATE[HY000] [2002] Connection timed out at /var/www/html/vendor/doctrine/dbal/lib/Doctrine/DBAL/Driver/AbstractMySQLDriver.php:93, Doctrine\DBAL\Driver\PDOException(code: 2002): SQLSTATE[HY000] [2002] Connection timed out at /var/www/html/vendor/doctrine/dbal/lib/Doctrine/DBAL/Driver/PDOConnection.php:31, PDOException(code: 2002): SQLSTATE[HY000] [2002] Connection timed out at /var/www/html/vendor/doctrine/dbal/lib/Doctrine/DBAL/Driver/PDOConnection.php:27)"} []
I'm completely lost. Have I missed something?
This happens sometimes, but it works most of the time. I suppose this bug happens when my worker has lost the connection to the DB, especially if the ffmpeg treatment lasts 7 minutes or more, but that should be avoided by the ping and close-connection middleware. So I don't clearly understand what the problem is here.
After reading the code of my middleware, and especially this block:
https://github.com/symfony/symfony/blob/4.4/src/Symfony/Bridge/Doctrine/Messenger/DoctrinePingConnectionMiddleware.php
class DoctrinePingConnectionMiddleware extends AbstractDoctrineMiddleware
{
    protected function handleForManager(EntityManagerInterface $entityManager, Envelope $envelope, StackInterface $stack): Envelope
    {
        $connection = $entityManager->getConnection();

        if (!$connection->ping()) {
            $connection->close();
            $connection->connect();
        }

        if (!$entityManager->isOpen()) {
            $this->managerRegistry->resetManager($this->entityManagerName);
        }

        return $stack->next()->handle($envelope, $stack);
    }
}
We can see that my handler is called right after the connection is opened.
This behaviour is supposed to work, and I assume it does, but ffmpeg can run for a long time on the same RabbitMQ message. So the last step of my handler, which inserts something into the database, can produce a "MySQL server has gone away" or "connection timed out" error.
That's why I took this snippet and put it into a method, without the handler call, keeping only the code related to the Doctrine connection. Then I call it just before any insert into my DB, like this:
public function __invoke(AMQPvideoFFMPEG $message)
{
    // reset connection if not found
    $this->processService->testConnection();

    $process = $this->processService->find($message->getProcess());
    $this->renderService->updateQueue($process->getQueue(), "processing");
    // some other stuff
}
Where testConnection() method is
/**
* Reconnect if connection is aborted for some reason
*/
public function testConnection()
{
$connection = $this->entityManager->getConnection();
if (!$connection->ping()) {
$connection->close();
$connection->connect();
}
}
But I experienced another issue after that:
Resetting a non-lazy manager service is not supported. Set the
"doctrine.orm.default_entity_manager" service as lazy and require
"symfony/proxy-manager-bridge" in your composer.json file instead.
After installing "symfony/proxy-manager-bridge", the error was gone.
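For reference, a sketch of that install step (assuming Composer, as in a standard Symfony project):

composer require symfony/proxy-manager-bridge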
So far, no connection timeout has been experienced. Wait and see.
Simply disconnect before any insert operation:
public function handle(…)
{
    // your time-consuming business logic

    // disconnect if needed
    if (!$this->entityManager->getConnection()->ping()) {
        $this->entityManager->getConnection()->close();
    }

    // save your work
    $this->entityManager->flush();
}
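This works because Doctrine DBAL connects lazily: after closing a dead connection, the next query issued by flush() opens a fresh one instead of reusing the timed-out socket.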
After adding GELF logging according to this how-to https://medium.com/@vaidaslungis/setup-graylog-in-laravel-5-6-logging-d2276bcb9cfa, the php artisan config:cache command isn't working anymore.
The error message is:
In ConfigCacheCommand.php line 68:
Your configuration files are not serializable.
In config.php line 382:
Call to undefined method Gelf\Publisher::__set_state()
Is it still possible to cache the config? If so, what needs to be changed?
If an error occurs while logging, the exception is stored in a class variable of the IgnoreErrorTransportWrapper ($lastError). That exception is not serializable, so serializing the logger fails.
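If you still want config caching, one way around it, sketched below under the assumption that you are on Laravel 5.6's logging API (the channel name and factory class are hypothetical), is to keep live objects such as Gelf\Publisher out of the config files and reference a factory class by name, since plain class names serialize fine:

// config/logging.php — reference a factory by class name instead of
// instantiating the publisher here; class names survive config:cache.
'graylog' => [
    'driver' => 'custom',
    'via' => \App\Logging\CreateGelfLogger::class,
],

// app/Logging/CreateGelfLogger.php (hypothetical factory class)
namespace App\Logging;

use Gelf\Publisher;
use Gelf\Transport\UdpTransport;
use Monolog\Handler\GelfHandler;
use Monolog\Logger;

class CreateGelfLogger
{
    public function __invoke(array $config): Logger
    {
        // Build the publisher at runtime, not at config load time
        $transport = new UdpTransport('graylog.example.com', 12201);

        $logger = new Logger('graylog');
        $logger->pushHandler(new GelfHandler(new Publisher($transport)));

        return $logger;
    }
}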
I've encountered an HTTP 504 Gateway Timeout error several times when calling a GET API written in PHP.
Here is my server and AWS environment:
An EC2 instance with Amazon Linux running PHP code (5.4.40) on an Apache server (2.4.12) to serve API calls from clients.
An AWS Elastic Load Balancer to distribute traffic to one of my instances. (For now I only have one instance; I set up the ELB for the future, in case I need more instances to handle traffic.)
An AWS RDS database (MySQL 5.6.21) for saving data.
From some articles about 504 gateway timeout, I've already tried to modify these settings:
# ELB
idle timeout => 300
# php.ini
max_execution_time => 301
max_input_time => 301
# httpd conf
MaxKeepAliveRequests => 100
KeepAliveTimeout => 30
But none of them helped; I still get a 504 Gateway Timeout sometimes.
My PHP script is not long: it just gets data from 3 tables in the MySQL database (AWS RDS) and returns it to the client, with no file uploads or big file generation, so I don't think execution time is the problem.
The strange thing is that the 504 Gateway Timeout does not always happen; most of the time everything is normal and it only happens SOMETIMES. I still don't understand when the 504 error will occur, which is really strange. If anyone can give me suggestions on how to resolve this problem, it would be a big favor to me.
=== New Update ===
I've just found a problem in my PHP code; I think it's a namespace/autoloading problem.
I have 2 PHP files in the same folder, which means 2 classes with the same namespace.
files:
My/Namespace
- Class1.php
- Class2.php
Class and namespace:
Class1
// Class1
namespace My\Namespace;

class Class1 {
    public static function getInstance() {
        //return...
    }
}
Class2
// Class2
namespace My\Namespace;

class Class2 {
    public static function getInstance() {
        //return...
    }

    public function getClass1Instance() {
        $class1 = Class1::getInstance();
        return $class1;
    }
}
In Class2.php I try to call Class1's static function, but I didn't add a "use" statement, so I added the following line to Class2.php:
use My\Namespace\Class1;
Problem solved! But I'm still not really sure why I should add the "use" statement to Class2.php. Class1 and Class2 are both in the same namespace; should I add a "use" statement even though they are in the same namespace?
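For what it's worth, PHP resolves unqualified class names against the current namespace, so a use statement should not normally be needed between classes of the same namespace; when adding one "fixes" things, the autoloader mapping is usually the real culprit. A minimal sketch (with a placeholder namespace, since Namespace itself is a reserved word in PHP 5):

// My/Demo/Class2.php — no "use" needed: the unqualified name Class1
// already resolves to My\Demo\Class1 in this file.
namespace My\Demo;

class Class2 {
    public function getClass1Instance() {
        return Class1::getInstance(); // resolved against the current namespace
    }
}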
P.S. I found this namespace problem because, when the 504 gateway error happened, I tried to call the API many times in a short period, and the PHP error message showed up telling me:
"Class1 is not found in Class2.php"
but sometimes the PHP error message showed:
"Cannot call a overloaded function in Class2.php, getClass1Instance()"
I hope I've provided enough information about this question, and thanks to everyone who left a comment or answered my question, m(_ _)m
I suggest you take a look at the ELB's Health Check.
The Health Check is a source of seemingly random 504 errors when it is not properly configured. When the ELB decides your server is not 'healthy', it answers 504 to the end user, and that 504 error is not logged anywhere in your PHP environment because it was generated in the ELB.
See http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/ts-elb-healthcheck.html
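To make the health check cheap to pass, point it at a dedicated endpoint rather than a full API route. A minimal sketch (the file name is hypothetical; set the ELB health check target, e.g. HTTP:80/health.php, to match):

<?php
// health.php — answers the ELB health check quickly with a 200.
// Keep it fast: don't touch the database unless you want DB outages
// to pull the instance out of rotation.
http_response_code(200);
header('Content-Type: text/plain');
echo 'OK';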
I'm migrating a Symfony project from version 2.0 to 2.4.
I've correctly configured all the parameters and services.
But a problem occurred with the JMS vendor; this is the error shown:
Fatal error: Uncaught exception
'Symfony\Component\DependencyInjection\Exception\ServiceNotFoundException'
with message 'You have requested a non-existent service
"payment.encryption_service".' in
C:\wamp\www\symfony\app\bootstrap.php.cache on line 2027
This message is blocking me. Do you have any idea?
I had the same problem. I don't know exactly what happened, but my problems went away after I manually cleared the cache using
rm -rf app/cache/prod/*
Since your setup is on Windows, try manually deleting the entries at app\cache\prod using Windows Explorer. I couldn't use app/console cache:clear --env=prod since the console would crash after showing that error message.
Another thing to check is every services.yml in a YAML parser, for example http://yaml-online-parser.appspot.com.
"You have requested a non-existent service"
could mean that Symfony can't parse the .yml files correctly.
This means the mentioned parameter is missing from your app/config/parameters.yml, or whichever similar file you are using to store your parameters. Set this parameter to a value and it should work.
E.g. I had the same error: "You have requested a non-existent parameter "domain"".
I then added the following line to the parameters.yml file:
domain: example.com
That did the trick.
I am using Predis, subscribed to a channel and listening. It throws the following error (below) and dies after exactly 60 seconds. It's surely not my web server's error or its timeout.
There is a similar issue being discussed here; I could not get much out of it.
I tried setting connection_timeout in the Predis conf file to 0, but it doesn't help much.
Also, if I keep using the worker (sending data to it, which it processes), it doesn't give any error. So it's likely a timeout somewhere, specifically in the connection.
Here is the code snippet that is likely producing the error, because when data is given to the worker it runs this code and moves forward, producing no error after that.
$pubsub = $redis->pubSub();
$pubsub->subscribe($channel1);

foreach ($pubsub as $message) {
    // doing stuff here and unsubscribing from the channel
}
Trace
PHP Fatal error: Uncaught exception 'Predis\Network\ConnectionException' with message 'Error while reading line from the server' in Predis/Network/ConnectionBase.php:159 Stack trace:
#0 library/vendor/predis/lib/Predis/Network/StreamConnection.php(195): Predis\Network\ConnectionBase->onConnectionError('Error while rea...')
#1 library/vendor/predis/lib/Predis/PubSub/PubSubContext.php(259): Predis\Network\StreamConnection->read()
#2 library/vendor/predis/lib/Predis/PubSub/PubSubContext.php(206): Predis\PubSub\PubSubContext->getValue()
#3 pdf/file.php(16): Predis\PubSub\PubSubContext->current()
#4 {main} thrown in Predis/Network/ConnectionBase.php on line 159
I checked the timeout in redis.conf too; it's also disabled.
Just set the read_write_timeout connection parameter to 0 or -1 to fix this, e.g.:
$redis = new Predis\Client('tcp://10.0.0.1:6379?read_write_timeout=0');
Setting connection parameters is documented in the README. The author of Predis noted the relevance of the read_write_timeout parameter to this error in an issue on GitHub, in which he notes that:
If you are using Predis in a daemon-like script you should set read_write_timeout to -1 if you want to completely disable the timeout (this value works with older and newer versions of Predis). Also, remember that you must disable the default timeout of Redis by setting timeout = 0 in redis.conf or Redis will drop the connection of idle clients after 300 seconds of inactivity.
I had a similar problem; a better solution is not to set the timeout to 0, but to reconnect with exponential backoff, with an upper and a lower limit (see the sketch below).
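A minimal sketch of that idea, assuming the older Predis API from the trace above (the delay bounds are arbitrary):

$delay = 1;      // lower limit, in seconds
$maxDelay = 60;  // upper limit, in seconds

while (true) {
    try {
        $pubsub = $redis->pubSub();
        $pubsub->subscribe($channel1);
        foreach ($pubsub as $message) {
            // handle the message, unsubscribe when done...
        }
        $delay = 1; // a clean run resets the backoff
    } catch (Predis\Network\ConnectionException $e) {
        sleep($delay);
        $delay = min($delay * 2, $maxDelay); // back off exponentially
    }
}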
Changing the connection_timeout config parameter to 0 will also solve the issue.
I found the resolution to the problem. There is a limit on the number of ports an application server can use to connect to a particular application on another machine, and these ports were getting exhausted.
We increased the limit and the problem was resolved.
How did we find out about this problem?
In PHP, we were getting a "Cannot assign requested address" error while creating a socket (error code 99).
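If the exhausted ports were local ephemeral ports, one hedged way to raise the limit on Linux (an assumption about this setup, not something stated above) is to widen the ephemeral port range:

# Allow more simultaneous outgoing connections from this host
sysctl -w net.ipv4.ip_local_port_range="1024 65535"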
In /etc/redis/redis.conf, set:
timeout 0
I'm using Heroku and solved this problem by switching from the Heroku Redis add-on to the Redis Enterprise add-on, and then using:
use Predis\Client as PredisClient;
to avoid a collision with GuzzleHttp\Client. You can omit the
as PredisClient
alias if you are not using GuzzleHttp.
And then the connection:
$redisClient = new PredisClient(array(
    'host'     => parse_url(env('REDIS_URL'), PHP_URL_HOST),
    'port'     => parse_url(env('REDIS_URL'), PHP_URL_PORT),
    'password' => parse_url(env('REDIS_URL'), PHP_URL_PASS),
));
(You can find your 'REDIS_URL' automatically prefilled in Heroku config vars).