I am using Predis, subscribed to a channel and listening. It throws the following error (below) and dies after exactly 60 seconds. It is surely not my web server's error or its timeout.
There is a similar issue being discussed here, but I could not get much out of it.
I tried setting connection_timeout in the Predis conf file to 0, but it doesn't help much.
Also, if I keep the worker busy (sending data to it, which it processes), it doesn't give any error. So it's likely a timeout somewhere, and specifically in the connection.
Here is the code snippet that is likely producing the error: when data is given to the worker, it runs this code and moves on, and no error is produced after that.
$pubsub = $redis->pubSub();
$pubsub->subscribe($channel1);

foreach ($pubsub as $message) {
    // doing stuff here and unsubscribing from the channel
}
Trace
PHP Fatal error: Uncaught exception 'Predis\Network\ConnectionException' with message 'Error while reading line from the server' in Predis/Network/ConnectionBase.php:159 Stack trace:
#0 library/vendor/predis/lib/Predis/Network/StreamConnection.php(195): Predis\Network\ConnectionBase->onConnectionError('Error while rea...')
#1 library/vendor/predis/lib/Predis/PubSub/PubSubContext.php(259): Predis\Network\StreamConnection->read()
#2 library/vendor/predis/lib/Predis/PubSub/PubSubContext.php(206): Predis\PubSub\PubSubContext->getValue()
#3 pdf/file.php(16): Predis\PubSub\PubSubContext->current()
#4 {main} thrown in Predis/Network/ConnectionBase.php on line 159
I checked the redis.conf timeout too; it's also disabled.
Just set the read_write_timeout connection parameter to 0 or -1 to fix this, e.g.:
$redis = new Predis\Client('tcp://10.0.0.1:6379?read_write_timeout=0');
Setting connection parameters is documented in the README. The author of Predis noted the relevance of the read_write_timeout parameter to this error in an issue on GitHub, in which he notes that:
If you are using Predis in a daemon-like script you should set read_write_timeout to -1 if you want to completely disable the timeout (this value works with older and newer versions of Predis). Also, remember that you must disable the default timeout of Redis by setting timeout = 0 in redis.conf or Redis will drop the connection of idle clients after 300 seconds of inactivity.
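For reference, the same parameter can also be passed as an array of connection parameters instead of a URI string; a minimal sketch (the host and port are placeholders):

$redis = new Predis\Client(array(
    'scheme' => 'tcp',
    'host'   => '10.0.0.1',
    'port'   => 6379,
    'read_write_timeout' => -1, // disable the socket read/write timeout entirely
));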
I had a similar problem; a better solution is not setting the timeout to 0, but using exponential backoff with an upper and a lower limit, as sketched below.
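A minimal sketch of that idea, built around the question's subscribe loop (my own illustration, not code from this answer; the 1-second lower and 60-second upper limits are placeholder values):

$delay    = 1;  // lower limit, in seconds
$maxDelay = 60; // upper limit, in seconds

while (true) {
    try {
        $pubsub = $redis->pubSub();
        $pubsub->subscribe($channel1);
        foreach ($pubsub as $message) {
            // doing stuff here and unsubscribing from the channel
        }
        $delay = 1; // reset the backoff after a clean session
    } catch (Predis\Network\ConnectionException $e) {
        // the connection dropped: drop the dead socket, wait, then back off exponentially
        $redis->disconnect();
        sleep($delay);
        $delay = min($delay * 2, $maxDelay);
    }
}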
Changing the config parameter connection_timeout to 0 will also solve the issue.
I got to the root of the problem. There is a limit to the number of ports an application server can use for connections to a particular application on another machine, and these ports were getting exhausted.
We increased the limit and the problem was resolved.
How did we find out about this problem?
In PHP, we were getting a "Cannot assign requested address" error while creating a socket (error code 99).
In /etc/redis/redis.conf, set:
timeout 0
I'm using Heroku and solved this problem by switching from the Heroku Redis add-on to the Redis Enterprise add-on, and then:
use Predis\Client as PredisClient;
This alias avoids a collision with GuzzleHttp\Client. You can drop the as PredisClient part if you are not using GuzzleHttp.
And then the connection:
$redisClient = new PredisClient(array(
    'host'     => parse_url(env('REDIS_URL'), PHP_URL_HOST),
    'port'     => parse_url(env('REDIS_URL'), PHP_URL_PORT),
    'password' => parse_url(env('REDIS_URL'), PHP_URL_PASS),
));
(You can find your 'REDIS_URL' automatically prefilled in Heroku config vars).
In my Symfony project, there is a queue message handler, and I have an error that randomly appears during execution:
[2022-10-12T07:31:40.060119+00:00] console.CRITICAL: Error thrown while running command "messenger:consume async --limit=10". Message: "Library error: a socket error occurred" {"exception":"[object] (Symfony\\Component\\Messenger\\Exception
TransportException(code: 0): Library error: a socket error occurred at /var/www/app/vendor/symfony/amqp-messenger/Transport/AmqpReceiver.php:62)
[previous exception] [object] (AMQPException(code: 0): Library error: a socket error occurred at /var/www/app/vendor/symfony/amqp-messenger/Transport/Connection.php:439)","command":"messenger:consume async --limit=10","message":"Library error: a socket error occurred"} []
The handler executes HTTP requests that can last several seconds, and the whole processing of a single message can take more than a minute if the APIs are slow. The strange thing is that the problem disappears for hours but then randomly appears again. The more messages are sent to the queue, the easier it is to see the exception.
config\packages\messenger.yaml
framework:
    messenger:
        transports:
            # https://symfony.com/doc/current/messenger.html#transport-configuration
            async:
                dsn: "%env(MESSENGER_TRANSPORT_DSN)%"
                options:
                    exchange:
                        name: async_exchange
                    queues:
                        async: ~
                    heartbeat: 45
                    write_timeout: 90
                    read_timeout: 90
                retry_strategy:
                    max_retries: 0
        routing:
            # Route your messages to the transports
            'App\Message\MessageUpdateRequest': async
App\MessageHandler\MessageUpdateRequestHandler.php
<?php

declare(strict_types=1);

namespace App\MessageHandler;

use App\Message\MessageUpdateRequest;
use Symfony\Component\Messenger\Handler\MessageHandlerInterface;

class MessageUpdateRequestHandler implements MessageHandlerInterface
{
    public function __invoke(MessageUpdateRequest $message)
    {
        // Logic executing API requests...
        return 0;
    }
}
Environment
Symfony Messenger: 5.4.17
PHP: 8.1
RabbitMQ: 3.11.5
Things that I tried
upgrading Symfony Messenger to 5.4.17, using the fix available here;
adding the following options: heartbeat, write_timeout and read_timeout in the messenger.yaml file.
Related issues/links
https://github.com/php-amqp/php-amqp/issues/258
https://github.com/symfony/symfony/issues/32357
https://github.com/symfony/symfony/pull/47831
How can I fix this issue?
For a socket error in Symfony Messenger, I always suggest following a step-wise approach and checking whether anything has been missed; that fixes this type of error almost every time. Please follow these guidelines:
Verify that the RabbitMQ service is active and accessible.
Verify that the hostname, port, username, and password listed in the messenger.yaml file are accurate (a typical DSN is sketched after this list).
In the messenger.yaml file, increase the heartbeat, write timeout, and read timeout settings.
Verify your use case to determine whether the max retries number in messenger.yaml is appropriate.
Look for any network problems that could be causing the socket error.
Make sure your PHP version is compatible with RabbitMQ and Symfony Messenger.
Verify that the server's resources (CPU, Memory, and Disk) are not used up.
Look for any relevant error messages in the PHP error log.
Determine whether there is a problem with the MessageUpdateRequestHandler class's logic.
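For reference, a typical AMQP DSN carrying those credentials looks like this (all values here are placeholders, following the format from the Symfony docs):

MESSENGER_TRANSPORT_DSN=amqp://guest:guest@localhost:5672/%2f/messages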
Tip: still stuck? Try to reproduce the error with a smaller message set in a controlled environment to isolate the root cause we may be missing.
Hope it helps. Happy debugging and good luck.
I am a user, not a developer. The developer is not available.
This is the Google API library used in Google Shopping Products submission scripts.
The scripts worked successfully, every 20 minutes, for 2 years + the first 5 hours of yesterday.
Then the following error:
[18-Apr-2020 06:20:03 Europe/London] PHP Fatal error: Uncaught GuzzleHttp\Exception\RequestException: cURL error 2: easy handle already used in multi handle (see https://curl.haxx.se/libcurl/c/libcurl-errors.html) in ../vendor/guzzlehttp/guzzle/src/Handler/CurlFactory.php:162
Stack trace:
#0 ../vendor/guzzlehttp/guzzle/src/Handler/CurlFactory.php(129): GuzzleHttp\Handler\CurlFactory::createRejection(Object(GuzzleHttp\Handler\EasyHandle), Array)
#1 ../vendor/guzzlehttp/guzzle/src/Handler/CurlFactory.php(89): GuzzleHttp\Handler\CurlFactory::finishError(Object(GuzzleHttp\Handler\CurlHandler), Object(GuzzleHttp\Handler\EasyHandle), Object(GuzzleHttp\Handler\CurlFactory))
#2 ../vendor/guzzlehttp/guzzle/src/Handler/CurlHandler.php(43): GuzzleHttp\Handler\CurlFactory::finish(Object(GuzzleHttp\Handler\CurlHandler), Object(GuzzleHttp\Handler\EasyHandle), Object(GuzzleHttp\Handler\CurlFactory))
#3 ../vendor/guzzlehttp/guzzle/src/Handl in ../vendor/guzzlehttp/guzzle/src/Handler/CurlFactory.php on line 162
The only server change at around the time the scripts stopped working was a security patch applied to the physical host and a server reboot.
PHP v7.3.16
I believe the Google library in use is v2.0
I can follow instructions although will probably not understand them!
TIA
Just in case anyone reading this is using Laravel: we suddenly started having the same problem a few days ago, tried installing different cURL versions and setting cURL options, and nothing worked. I fixed it by changing the vendor/guzzlehttp/guzzle/src/Handler/CurlFactory.php file. Look for the lines that say:
if (count($this->handles) >= $this->maxHandles) {
    curl_close($resource);
} else {
    ...
}
Comment this all out, and instead of the if/else just write:
curl_close($resource);
In other words, no matter what the handle count is, you always close the cURL connection. This worked instantly for us!
Hope it helps :)
We solved this problem together with Stripe engineers yesterday (that's not to say your problem is Stripe-related; it isn't, but the problem and solution should be the same).
(These findings are not 100% confirmed, but this appears to be the pattern.) It occurs when making 2+ requests via cURL, and seems to have started with one of the most recent versions of cURL, or at least of some other software (which may have updated automatically or been updated by your hosting provider).
The solution we were provided is to disable persistent connections in cURL. There are different ways of how you could do that, depending on your implementation. But for inspiration, this is how we did it with Stripe:
$curl = new \Stripe\HttpClient\CurlClient();
$curl->setEnablePersistentConnections(false);
\Stripe\ApiRequestor::setHttpClient($curl);
I imagine it would be something similar to this for your libraries. And for those looking to solve this for Stripe, here it is :)
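For Guzzle itself, a hedged sketch of the same idea (my own, not Stripe's code) would be to pass cURL options that disable connection reuse when constructing the client:

use GuzzleHttp\Client;

// Sketch: force a fresh connection per request so no easy handle is reused.
$client = new Client([
    'curl' => [
        CURLOPT_FRESH_CONNECT => true, // always open a new connection
        CURLOPT_FORBID_REUSE  => true, // close it once the request is done
    ],
]);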
Notice: this solution will theoretically have an impact on latency; we have, however, not experienced that in practice yet. But now it's mentioned :)
I have reverted to curl 7.69.1 and all is well again.
For now, I have excluded curl + libcurl from yum updates so they will not be upgraded.
Thanks for your help and advice, and apologies if my style has been incorrect.
public_html/vendor/guzzlehttp/guzzle/src/Handler/Proxy.php
Please comment out these lines in this function:
public static function wrapSync(
    callable $default,
    callable $sync
) {
    // return function (RequestInterface $request, array $options) use ($default, $sync) {
    //     return empty($options[RequestOptions::SYNCHRONOUS])
    //         ? $default($request, $options)
    //         : $sync($request, $options);
    // };
}
I am not using Guzzle, but I had the same problem with another library.
php 7.4.6
curl 7.19.7
CentOS release 6.10 (Final)
package "mercadopago/dx-php": "2.0.0"
On my dev server and another server I tested, it works fine.
I am not 100% sure, but I think it is a bug in the cURL library that doesn't allow reusing the same cURL connection for more than one request (again, I am not sure about it).
I solved it with a hotfix on mercadopago/dx-php: I edited ./vendor/mercadopago/dx-php/src/MercadoPago/RestClient.php,
replacing line 150
from
$connect = $this->getHttpRequest();
to
$connect = new Http\CurlRequest();
In other words, force a new connection for the next request. In your code, look where the connection is reused and try to create a new connection instead.
I know, it sucks because:
- it is a hotfix on a third-party library
- it cannot reuse the same connection
but it worked. Hope it can help you.
I am running a website with 2 servers for the website code (in PHP) and 1 server as a load balancer. All 3 are also running Couchbase instances as part of one single cluster.
In the PHP code, I use Couchbase buckets as follows:
$cluster = new \CouchbaseCluster('http://127.0.0.1:8091');
$greyloftWebbucket = $cluster->openBucket('some_bucket');
$query = \CouchbaseViewQuery::from('abcd', 'pqrs');
This arrangement works fine when all Couchbase instances are running. When any one of them is closed and I try to access buckets, I randomly get the following error:
[2015-07-17 13:46:08] production.ERROR: exception 'CouchbaseException' with message 'Generic network failure. Enable detailed error codes (via LCB_CNTL_DETAILED_ERRCODES, or via `detailed_errcodes` in the connection string) and/or enable logging to get more information' in [CouchbaseNative]/CouchbaseBucket.class.php:282
Stack trace:
#0 [CouchbaseNative]/CouchbaseBucket.class.php(282): _CouchbaseBucket->http_request(1, 1, '/_design/abcd...', NULL, 1)
#1 [CouchbaseNative]/CouchbaseBucket.class.php(341): CouchbaseBucket->_view(Object(_CouchbaseDefaultViewQuery))
#2 /var/www/greyloft-laravel/app/couchbasemodel.php(25): CouchbaseBucket->query(Object(_CouchbaseDefaultViewQuery))
#3 /var/www/greyloft-laravel/app/Http/Controllers/Listing.php(42): App\couchbasemodel::listings()
#4 [internal function]: App\Http\Controllers\Listing->index()
That is, one time the page loads correctly and shows the bucket content, and the next time it shows the above error. It doesn't matter whether I access the load balancer or any of the servers directly.
Also, autofailover is enabled with replication set to 1 in the Couchbase cluster. On all 3 servers, I have set LCB_LOGLEVEL=5.
What is happening? Is it a problem in the Couchbase PHP SDK, or something else? I would appreciate any help at all.
Update:
As per mnunberg's suggestion, I'm using the new connection string:
$cluster = new \CouchbaseCluster('http://127.0.0.1:8091?detailed_errcodes=1');
With this, the error message has changed. It still pops up randomly (around half the time):
CouchbaseException in CouchbaseBucket.class.php line 74: The remote host refused the connection. Is the service up?
Autofailover is taking place. In the Couchbase console log:
Failed over 'ns_1#<ip_address>': ok
Node ('ns_1#<ip_address>') was automatically failovered.
It seems to me that the SDK is still trying to read from the failed-over node. Why is this happening? Is there any possible solution?
My development team is having trouble accessing a remote MongoDB database from their local development environments.
The remote Ubuntu development server is running the newest v2.4.3 of MongoDB and PHP 5.3 with the mongo-php-driver v1.3.7 built for PHP 5.3. mongodb.conf is nearly empty except for basic path setup. There are currently no shards or replica sets.
All team members are using OSX 10.8, PHP 5.3, and have the mongo-php-driver v1.3.7 built for PHP 5.3. Some team members use XAMPP, others are using the built-in OSX AMP stack. We test on all major desktop browsers.
Whenever a page needs to grab data from Mongo, we start by calling this connection function:
public static function connect($server, $db)
{
    $connection = new MongoClient(
        "mongodb://{$server}:27017",
        array(
            "connectTimeoutMS" => 20000,
            "socketTimeoutMS"  => 20000
        )
    );
    return $connection->$db;
}
However, nearly 30% of page loads are experiencing the following error:
Failed to connect to: www.development-server.com:27017: send_package: error reading from socket: Timed out waiting for header data
It seems that a large portion of those errors occur when refreshing a page, rather than navigating to a new page, but that's more of a guess than a fact. I've checked everyone's php.ini file and confirmed that default_socket_timeout = 60 is set.
The development server also hosts a copy of the site, but has never thrown the error, presumably since it's only calling localhost to get there. When I installed MongoDB locally, the errors also went away.
This really appears to be a timeout issue, but I cannot find any further settings, parameters, or configurations to adjust the expiry period. Are there any?
The response from #hernan_arg got me thinking about another possibility. Instead of relying on a one-and-only connection attempt to succeed (which seems to take forever), is it acceptable to stick the connection in a loop until it succeeds?
public static function connect($server, $db)
{
    $connection = null;
    try {
        $connection = new MongoClient("mongodb://{$server}");
    } catch (MongoConnectionException $e) {
        // a failed attempt returns quickly, so just try again
        return self::connect($server, $db);
    }
    return $connection->$db;
}
Logging indicates that when the connection does fail, it fails quickly, and the loop establishes a new connection in a much more timely manner than the infinite timeout does. Supposing the database becomes unreachable, I'm assuming I can rely on the PHP execution timeout to eventually kill the process.
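If the unbounded recursion is a concern, a capped retry loop does the same job. This is my own sketch (the connectWithRetry name and the 5-attempt cap are illustrative, not from the original code):

public static function connectWithRetry($server, $db, $maxAttempts = 5)
{
    for ($attempt = 1; $attempt <= $maxAttempts; $attempt++) {
        try {
            $connection = new MongoClient("mongodb://{$server}");
            return $connection->$db;
        } catch (MongoConnectionException $e) {
            // failed attempts return quickly, so looping is cheap;
            // give up once the cap is reached instead of recursing forever
        }
    }
    throw new MongoConnectionException("No connection after {$maxAttempts} attempts");
}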
Try connecting without the port in the connection string, or set:
array(
    "connectTimeoutMS" => -1,
    "socketTimeoutMS"  => -1
)
(infinite timeout)
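In context, that would look something like this (a sketch based on the question's connect() helper):

$connection = new MongoClient(
    "mongodb://{$server}", // no port specified
    array(
        "connectTimeoutMS" => -1,
        "socketTimeoutMS"  => -1
    )
);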
The 1.4.1 release of the driver addresses some stability issues over unstable networks.
Assuming you are talking to a replica set, the driver will discard servers that are being unreasonably slow; rather than reattempting to connect to them, the driver will now blacklist them for a few seconds without throwing these exceptions upon connection (assuming we can connect to at least one server).
My problem is that the memcache logic inside my CakePHP application is not working on my local system, ever since I set it up here by taking code from my teammates' existing setup. I have checked that the memcached service is running on my system, and phpinfo() shows the memcache section enabled.
But things like this are not working -
$this->Memcache->set($key,$value);
CakePHP uses its wrapper for Memcache, v. 0.3.
If I debug like this -
echo "<pre>";
echo "checking";
error_reporting(-1);
$this->Memcache->set($key,$countryNetworkWiseReportData,3600);
echo "finished";
exit;
I get -
checkingfinished
Strict standards: Non-static method Cache::write() should not be called statically, assuming $this from incompatible context in D:\cake1.2\cake\libs\configure.php on line 690
... and similar strict standard notifications for Cache::getInstance() etc.
But note that the notifications appear after "checkingfinished", so I am not sure whether they are actually relevant.
I tried the command -
telnet 127.0.0.1:11211
Which gives -
Connecting To 127.0.0.1:11211...Could not open connection to the host, on port 23: Connect failed
Also tried -
telnet localhost:11211 (in case firewall issues prevent connection to 127.0.0.1)
But got the same error.
I also tried this script called memcache.php. I put $arr= "127.0.0.1:11211"; in the code and I got this result on my system -
What do I interpret from this data? I am getting a Warning in the Start time section - Warning: date(): It is not safe to rely on the system's timezone settings. You are *required* to use the date.timezone setting or the ....
So far, it appears to me that the connection to the default port 11211 is for some reason not allowed from my CakePHP project code. But then again, how is memcache.php able to connect to the memcache server and display that data?
Further Details
Thanks to Tudor Constantin, doing telnet 127.0.0.1 11211 does not throw any error anymore; it shows an empty screen... which I guess is fine.
I checked further in the code; we have model functions with logic like this -
if (!$this->memcacheCommon()) {
    $this->log('Error in memcache', LOG_DEBUG);
    die('Error in memcache');
}
// Like on my teammates' systems, on my system too it passes the above condition.

$memcachedata = $this->Memcache->get($key);
// Does not return data on my system, because $this->Memcache->set($key, $data) never sets the data correctly.

if ($memcachedata !== false) {
    // got data from memcache: return that data
} else {
    // get the data from the database instead
    $this->Memcache->set($key, $data); // does not set the data in my setup - set() returns false
}
So, I went inside the set() function of the CakeMemcache class, and there is this line at the end:
return @$this->_Memcache_cluster_new->set($key, $var, 0, time() + $expires);
This returns false, and I don't know what to debug from here.
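At this point it can help to take CakePHP out of the picture and test the extension directly. A minimal sketch (my own, not from the project; the host, port, and key are placeholders):

<?php
// Talk to memcached directly with the pecl/memcache extension,
// bypassing the CakeMemcache wrapper entirely.
$mc = new Memcache();
if (!$mc->connect('127.0.0.1', 11211)) {
    die("connect failed\n");
}
var_dump($mc->set('test_key', 'test_value', 0, 60)); // bool(true) if set works
var_dump($mc->get('test_key'));                      // "test_value" if get works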
One more thing that is confusing me: memcacheCommon() inside app_model.php has these lines -
$this->Memcache = new CakeMemcache();
$this->Memcache->_connect();
And inside CakeMemcache()->_connect(), there are these lines –
$this->_Memcache_standalone =& new Memcache();
$this->_Memcache_cluster_new =& new Memcache();
I am not sure what exactly they do.
Will appreciate any pointers... thanks
More Details
I have somehow lost the earlier memcache.php file, using which I got the graphical memcache usage display above (I had posted the pic of the output). I later downloaded memcache-3.0.6 from http://pecl.php.net/package/memcache/3.0.6 and tried running the example.php and memcache.php files that are present inside the archive.
Firstly, running example.php gives me the following error -
Notice: memcache_connect(): Server localhost (tcp 11211) failed with: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.
Warning: memcache_connect(): Can't connect to localhost:11211, A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.
Connection to memcached failed
I checked that memcached service is running (in windows services list).
Also telnet localhost 11211 does not give any error and shows an empty command window.
Secondly, running memcache.php and entering the credentials gives me the following on clicking any of the tabs (Refresh Data, View Host Stats, Variables) -
Notice: Use of undefined constant values - assumed 'values' in path\to\file\memcache.php on line 57
Cant connect to:v:a
Line 57 was -
function get_host_port_from_server($server){
    $values = explode(':', $server);
    if (($values[0] == 'unix') && (!is_numeric($values[1]))) {
        return array($server, 0);
    }
    else {
        return values; //line 57
    }
}
How come that was values? I downloaded it straight from http://pecl.php.net/package/memcache/3.0.6. After I changed it to return $values; I got:
Cant connect to:mymemcache-server1:11211
I don't exactly remember how I was earlier able to get that memcache server graph (I posted the pic above).
Thirdly, on command line, after connecting via telnet, command stats gives the following -
STAT pid 1584
STAT uptime 2856
STAT time 1315981346
STAT version 1.2.1
STAT pointer_size 32
STAT curr_items 0
STAT total_items 0
STAT bytes 0
STAT curr_connections 1
STAT total_connections 3
STAT connection_structures 2
STAT cmd_get 0
STAT cmd_set 0
STAT get_hits 0
STAT get_misses 0
STAT bytes_read 7
STAT bytes_written 0
STAT limit_maxbytes 67108864
END
As far as I can see, you are on a Windows machine, so the telnet command is:
telnet 127.0.0.1 11211
Note the space instead of the colon. This might help you debug the memcache connection.
Yay! I found the solution!! It started working after I changed localhost to 127.0.0.1 in the /app/config/fcore.php file (possibly because localhost resolved to an address memcached wasn't listening on, such as the IPv6 ::1, though I haven't verified this).
Made the following changes -
# Memcache server constants
define('MEMCACHE_SERVER', 'localhost:11211');
define('MEMCACHE_SERVER_CLUSTER', 'localhost:11211');
define('MEMCACHE_SERVER_CLUSTER_NEW', 'localhost:11211');
to
# Memcache server constants
define('MEMCACHE_SERVER', '127.0.0.1:11211');
define('MEMCACHE_SERVER_CLUSTER', '127.0.0.1:11211');
define('MEMCACHE_SERVER_CLUSTER_NEW', '127.0.0.1:11211');
Oh, I suffered so much over this and thought I would never fix it.
Luckily, our team set up a new repository for the project with 127.0.0.1 instead of localhost; when I took a checkout, memcache started working, and only later did I realize this was the reason.