Couchbase PHP SDK gives generic network error

I am running a website with 2 servers for the website code (in PHP) and 1 server as a load balancer. All 3 are also running Couchbase instances, as part of one single cluster.
In the PHP code, I use Couchbase buckets as follows:
$cluster = new \CouchbaseCluster('http://127.0.0.1:8091');
$greyloftWebbucket = $cluster->openBucket('some_bucket');
$query = \CouchbaseViewQuery::from('abcd', 'pqrs');
This arrangement works fine when all Couchbase instances are running. When any one of them is shut down and I try to access buckets, I randomly get the following error:
[2015-07-17 13:46:08] production.ERROR: exception 'CouchbaseException' with message 'Generic network failure. Enable detailed error codes (via LCB_CNTL_DETAILED_ERRCODES, or via `detailed_errcodes` in the connection string) and/or enable logging to get more information' in [CouchbaseNative]/CouchbaseBucket.class.php:282
Stack trace:
#0 [CouchbaseNative]/CouchbaseBucket.class.php(282): _CouchbaseBucket->http_request(1, 1, '/_design/abcd...', NULL, 1)
#1 [CouchbaseNative]/CouchbaseBucket.class.php(341): CouchbaseBucket->_view(Object(_CouchbaseDefaultViewQuery))
#2 /var/www/greyloft-laravel/app/couchbasemodel.php(25): CouchbaseBucket->query(Object(_CouchbaseDefaultViewQuery))
#3 /var/www/greyloft-laravel/app/Http/Controllers/Listing.php(42): App\couchbasemodel::listings()
#4 [internal function]: App\Http\Controllers\Listing->index()
That is, sometimes the page loads correctly and shows the bucket content, and other times it shows the above error. It doesn't matter whether I access the load balancer or any of the servers directly.
Also, autofailover is enabled with replication set to 1 in the Couchbase cluster. On all 3 servers, I have set LCB_LOGLEVEL=5.
What is happening? Is it a problem in the Couchbase PHP SDK or something else? I would appreciate any help at all.
Update:
As per mnunberg's suggestion, I'm using the new connection string:
$cluster = new \CouchbaseCluster('http://127.0.0.1:8091?detailed_errcodes=1');
With this, the error message is now the following. It still pops up randomly (around half the time):
CouchbaseException in CouchbaseBucket.class.php line 74: The remote host refused the connection. Is the service up?
The autofailover is taking place. In the Couchbase console log:
Failed over 'ns_1#<ip_address>': ok
Node ('ns_1#<ip_address>') was automatically failovered.
It seems to me that the SDK is still trying to read from the failed-over node. Why is this happening? Any possible solution?
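For reference, a minimal sketch of connecting against all three nodes rather than only the local one, assuming the 2.x PHP SDK accepts a comma-separated list of bootstrap hosts in the connection string (node1/node2/node3 are placeholder hostnames):
// Sketch only: bootstrap against every node so the client can still fetch
// the cluster map when one node is down or has been failed over.
$cluster = new \CouchbaseCluster('couchbase://node1,node2,node3?detailed_errcodes=1');
$bucket = $cluster->openBucket('some_bucket');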

Related

PHP Fatal error: Uncaught GuzzleHttp\Exception\RequestException: cURL error 2: easy handle already used in multi handle

I am a user, not a developer. The developer is not available.
This is the Google API library used in Google Shopping Products submission scripts.
The scripts worked successfully, every 20 minutes, for 2 years plus the first 5 hours of yesterday.
Then the following error:
[18-Apr-2020 06:20:03 Europe/London] PHP Fatal error: Uncaught GuzzleHttp\Exception\RequestException: cURL error 2: easy handle already used in multi handle (see https://curl.haxx.se/libcurl/c/libcurl-errors.html) in ../vendor/guzzlehttp/guzzle/src/Handler/CurlFactory.php:162
Stack trace:
#0 ../vendor/guzzlehttp/guzzle/src/Handler/CurlFactory.php(129): GuzzleHttp\Handler\CurlFactory::createRejection(Object(GuzzleHttp\Handler\EasyHandle), Array)
#1 ../vendor/guzzlehttp/guzzle/src/Handler/CurlFactory.php(89): GuzzleHttp\Handler\CurlFactory::finishError(Object(GuzzleHttp\Handler\CurlHandler), Object(GuzzleHttp\Handler\EasyHandle), Object(GuzzleHttp\Handler\CurlFactory))
#2 ../vendor/guzzlehttp/guzzle/src/Handler/CurlHandler.php(43): GuzzleHttp\Handler\CurlFactory::finish(Object(GuzzleHttp\Handler\CurlHandler), Object(GuzzleHttp\Handler\EasyHandle), Object(GuzzleHttp\Handler\CurlFactory))
#3 ../vendor/guzzlehttp/guzzle/src/Handl in ../vendor/guzzlehttp/guzzle/src/Handler/CurlFactory.php on line 162
The only server change at around the time the scripts stopped working was a security patch applied to the physical host and a server reboot.
PHP v7.3.16
I believe the Google library in use is v2.0
I can follow instructions although will probably not understand them!
TIA
Just in case anyone reading this is using Laravel: we suddenly started having the same problem a few days ago, tried installing different cURL versions and setting cURL options, and nothing worked. I fixed it by changing the vendor/guzzlehttp/guzzle/src/Handler/CurlFactory.php file. Look for the lines that say:
if (count($this->handles) >= $this->maxHandles) {
    curl_close($resource);
} else {
    ...
}
Comment this all out, and instead of the if/else just write
curl_close($resource);
In other words, no matter what the handle count is, you always close the cURL connection. This worked instantly for us!
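For context, here is a sketch of roughly what the edited method looks like after the change, assuming the Guzzle 6 layout of CurlFactory::release() (the surrounding lines may differ slightly between Guzzle versions):
public function release(EasyHandle $easy)
{
    $resource = $easy->handle;
    unset($easy->handle);

    // Patched: always close the handle instead of caching it for reuse,
    // regardless of how many handles are currently held.
    curl_close($resource);
}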
Hope it helps :)
We solved this problem together with Stripe engineers yesterday (that's not to say your problem is Stripe-related, it isn't, but the problem/solution should be the same)
(These findings are not 100% confirmed, but this appears to be the pattern.) It's caused when making 2+ requests via cURL, and appears to happen since one of the most recent versions of cURL or of some other software (which may have updated automatically or been updated by your hosting provider).
The solution we were provided is to disable persistent connections in cURL. There are different ways you could do that, depending on your implementation. But for inspiration, this is how we did it with Stripe:
$curl = new \Stripe\HttpClient\CurlClient();
$curl->setEnablePersistentConnections(false);
\Stripe\ApiRequestor::setHttpClient($curl);
I imagine it would be something similar to this for your libraries. And for those looking to solve this for Stripe, here it is :)
Notice: This solution will theoretically have an impact on latency; we have, however, not experienced this in practice yet. But now it's mentioned :)
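If your library exposes the raw handle, here is a hedged sketch of the same idea using plain curl options (CURLOPT_FRESH_CONNECT and CURLOPT_FORBID_REUSE both exist in PHP's curl extension; whether you can set them depends on how your library wraps curl):
// Sketch: force a new connection and forbid keeping it alive afterwards.
$ch = curl_init('https://example.com/endpoint'); // placeholder URL
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_FRESH_CONNECT, true); // do not reuse a cached connection
curl_setopt($ch, CURLOPT_FORBID_REUSE, true);  // close the connection when done
$body = curl_exec($ch);
curl_close($ch);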
I have reverted to curl 7.69.1 and all is well again.
For now, I have excluded curl and libcurl from yum so they will not be updated.
Thanks for your help and advice and apologies if my style has been incorrect.
public_html/vendor/guzzlehttp/guzzle/src/Handler/Proxy.php
Please comment out these lines in this function:
public static function wrapSync(
callable $default,
callable $sync
) {
// return function (RequestInterface $request, array $options) use ($default, $sync) {
// return empty($options[RequestOptions::SYNCHRONOUS])
// ? $default($request, $options)
// : $sync($request, $options);
// };
}
I am not using Guzzle, but I had the same problem with another library.
php 7.4.6
curl 7.19.7
CentOS release 6.10 (Final)
package "mercadopago/dx-php": "2.0.0"
On my dev server and another server I tested, it works fine.
I am not 100% sure, but I think it is a bug in the curl library that doesn't allow the same curl connection to be reused for more than one request (again, I am not sure about it).
I solved it with a hotfix on mercadopago/dx-php. I edited ./vendor/mercadopago/dx-php/src/MercadoPago/RestClient.php,
replacing line 150
from
$connect = $this->getHttpRequest();
to
$connect = new Http\CurlRequest();
In other words, force it to use a new connection for the next request. In your code, look at where the connection is reused and try to create a new connection instead.
I know, it sucks because:
- it is a hotfix on a third-party library
- it cannot reuse the same connection
but it worked. Hope it can help you.
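As a generic illustration of the same workaround outside mercadopago (the helper name is hypothetical), the pattern is simply to stop caching the connection object and build a fresh one per request:
// Hypothetical helper: open a new curl handle for every request and close it
// immediately, instead of reusing one shared handle across requests.
function fetchOnce(string $url): string
{
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    $body = curl_exec($ch);
    curl_close($ch); // the handle is never reused, so it cannot end up "already used in multi handle"
    return (string) $body;
}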

Problems using SAPI with PHP through COM, on IIS

I am attempting to get Text To Speech and Speech Recognition to work in PHP using Microsoft's SAPI through COM objects.
In the past, I already used this code to get TTS to work (Apache 2.1, PHP 5.5, on Windows 2003 Server)
// Instantiate the object
$VoiceObj = new COM("SAPI.SpVoice") or die("Unable to instantiate SAPI");
// Get the available voices
$VoicesToken=$VoiceObj->GetVoices();
$NumberofVoices=$VoicesToken->Count;
for ($i = 0; $i < $NumberofVoices; $i++)
{
    $VoiceToken = $VoicesToken->Item($i);
    $VoiceName[$i] = $VoiceToken->GetDescription();
}
// Get and print the id of the specified voice
$SelectedVoiceToken=$VoicesToken->Item(0);
$SelectedVoiceTokenid=$SelectedVoiceToken->id;
// Set the Voice
$VoiceObj->Voice=$SelectedVoiceToken;
$VoiceName=$VoiceObj->Voice->GetDescription();
$VoiceFile = new COM("SAPI.SpFileStream");
$VoiceFile->Open('./test.wav', 3, false);
// Speak to file
$VoiceObj->AudioOutputStream = $VoiceFile;
$VoiceObj->Speak("What an unbelievable test", 0);
$VoiceFile->Close();
On my new setup (IIS 7.5, PHP 7.0, Windows Server 2008R2) the same code fails at
$VoiceObj->Speak("What an unbelievable test", 0);
Fatal error: Uncaught com_exception: <b>Source:</b> Unknown<br/><b>Description:</b> Unknown in \\web\tts.php:30 Stack trace: #0 \\web\tts.php(30): com->Speak('this is a marve...', 0) #1 {main}
With so little detail (where can I retrieve more?), I can't figure out what the problem may be.
Write permissions checked.
PHP 7.0 replaced with 5.5; still not working.
The same code tested with a Win32 app works flawlessly.
Any hints?
Two years later, I casually happen to find an answer to this problem.
PHP COM objects can't perform file operations on network paths, even with all the required permissions.
Once the development folders were moved onto the same machine where IIS runs, a bunch of previously broken tests started working. They all had one thing in common: they were using COM interfaces to save files or do something similar.
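A minimal sketch of the fix, with hypothetical paths, showing the SAPI file operation pointed at a local disk on the IIS machine instead of a network share:
$VoiceFile = new COM("SAPI.SpFileStream");
// Broken in this setup: writing through COM to a network (UNC) path
// $VoiceFile->Open('\\\\fileserver\\web\\test.wav', 3, false);
// Works once the target lives on the IIS machine itself (path is hypothetical):
$VoiceFile->Open('C:\\inetpub\\wwwroot\\tts\\test.wav', 3, false);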

curl - 35: error:02001018:system library:fopen:Too many open files [duplicate]

I am trying to execute the following code with Guzzle 5:
$client = new GuzzleClient(['defaults/headers/User-Agent' => static::$userAgentString]);
$request = $client->createRequest(static::$serviceRequestMethod, $url, $options); // Create signing request.
$signature = new Signature\Signature($this->accessKey, $this->secretKey);
$options = array_merge_recursive($options, ['query' => ['Signature' => $signature->signString($hash)]]);
$request = $client->createRequest(static::$serviceRequestMethod, $url, $options); // Create real request.
$response = $client->send($request);
When I call this enough times in a long-running CLI process, I get the following error, traced back to the line $response = $client->send($request);
cURL error 35: error:02001018:system library:fopen:Too many open files
After that hits, every other web page and command on the server breaks down with the same "too many open files" error.
Here is the stack trace:
#0 /home/vagrant/code/example.com/vendor/guzzlehttp/guzzle/src/RequestFsm.php(104): GuzzleHttp\Exception\RequestException::wrapException(Object(GuzzleHttp\Message\Request), Object(GuzzleHttp\Ring\Exception\ConnectException))
#1 /home/vagrant/code/example.com/vendor/guzzlehttp/guzzle/src/RequestFsm.php(132): GuzzleHttp\RequestFsm->__invoke(Object(GuzzleHttp\Transaction))
#2 /home/vagrant/code/example.com/vendor/react/promise/src/FulfilledPromise.php(25): GuzzleHttp\RequestFsm->GuzzleHttp\{closure}(Array)
#3 /home/vagrant/code/example.com/vendor/guzzlehttp/ringphp/src/Future/CompletedFutureValue.php(55): React\Promise\FulfilledPromise->then(Object(Closure), NULL, NULL)
#4 /home/vagrant/code/example.com/vendor/guzzlehttp/guzzle/src/Message/FutureResponse.php(43): GuzzleHttp\Ring\Future\CompletedFutureValue->then(Object(Closure), NULL, NULL)
#5 /home/vagrant/code/example.com/vendor/guzzlehttp/guzzle/src/RequestFsm.php(135): GuzzleHttp\Message\FutureResponse::proxy(Object(GuzzleHttp\Ring\Future\CompletedFutureArray), Object(Closure))
#6 /home/vagrant/code/example.com/vendor/guzzlehttp/guzzle/src/Client.php(165): GuzzleHttp\RequestFsm->__invoke(Object(GuzzleHttp\Transaction))
#7 /home/vagrant/code/example.com/app/library/amazon/src/AWS.php(540): GuzzleHttp\Client->send(Object(GuzzleHttp\Message\Request))
I'm not aware of any need to explicitly close a resource after sending a request through Guzzle. Am I missing something here or could this be a bug in Guzzle?
This is not an issue with Guzzle or MailGun so much as with your particular implementation of the libraries. You are actually hitting the limits of the underlying operating system (libcurl, openssl, and fopen) by having so many long-running (open) requests.
According to the libcurl error documentation, error 35 indicates that there was an error with the SSL/TLS handshake.
According to various Google references, error 02001018 indicates that OpenSSL was unable to access (or rather read) the certificate file.
You are able to use ulimit to view and modify the limits of various system-wide resources.
You are also able to use lsof to view the open files.
To resolve your issue:
(if able) Increase the system resource allowances - be sure to research the implications that this change can have.
Refactor your code so that you do not hit the operating environment's limits. It may be possible to use asynchronous communications for some of the requests, a different library, or perhaps "dropping down" and implementing your own.
Find some way to implement some type of rate limiting (I have listed this separately from #2, but they could go hand in hand); a small sketch follows below.
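A minimal sketch of the rate-limiting idea, assuming a hypothetical sendSignedRequest() wrapper around the Guzzle calls from the question; the batch size and pause are placeholders to tune against your ulimit:
// Throttle the long-running CLI loop so descriptors from earlier requests
// are released before the next batch is opened.
$batchSize = 50;   // requests per batch (placeholder)
$pause = 5;        // seconds to wait between batches (placeholder)
foreach (array_chunk($urls, $batchSize) as $batch) {
    foreach ($batch as $url) {
        $response = sendSignedRequest($url); // hypothetical wrapper around the Guzzle client
        // ... process $response ...
    }
    sleep($pause); // give the OS time to reclaim closed sockets and files
}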

Predis is giving 'Error while reading line from server'

I am using Predis, subscribed to a channel and listening. It throws the following error (below) and dies after exactly 60 seconds. It's surely not my web server's error or its timeout.
There is a similar issue being discussed here; I could not get much out of it.
I tried setting connection_timeout in the Predis conf file to 0, but that doesn't help much.
Also, if I keep using the worker (sending data to it, which it processes), it doesn't give any error. So it's likely a timeout somewhere, and specifically in the connection.
Here is the code snippet which is likely producing the error, because if data is given to the worker it runs this code and moves forward, producing no error after that.
$pubsub = $redis->pubSub();
$pubsub->subscribe($channel1);
foreach ($pubsub as $message) { //doing stuff here and unsubscribing from channel
}
Trace
PHP Fatal error: Uncaught exception 'Predis\Network\ConnectionException' with message 'Error while reading line from the server' in Predis/Network/ConnectionBase.php:159 Stack trace:
#0 library/vendor/predis/lib/Predis/Network/StreamConnection.php(195): Predis\Network\ConnectionBase->onConnectionError('Error while rea...')
#1 library/vendor/predis/lib/Predis/PubSub/PubSubContext.php(259): Predis\Network\StreamConnection->read()
#2 library/vendor/predis/lib/Predis/PubSub/PubSubContext.php(206): Predis\PubSub\PubSubContext->getValue()
#3 pdf/file.php(16): Predis\PubSub\PubSubContext->current()
#4 {main} thrown in Predis/Network/ConnectionBase.php on line 159
I checked the redis.conf timeout too; it's also disabled.
Just set the read_write_timeout connection parameter to 0 or -1 to fix this, e.g.:
$redis = new Predis\Client('tcp://10.0.0.1:6379'."?read_write_timeout=0");
Setting connection parameters is documented in the README. The author of Predis noted the relevance of the read_write_timeout parameter to this error in an issue on GitHub, in which he notes that:
If you are using Predis in a daemon-like script you should set read_write_timeout to -1 if you want to completely disable the timeout (this value works with older and newer versions of Predis). Also, remember that you must disable the default timeout of Redis by setting timeout = 0 in redis.conf or Redis will drop the connection of idle clients after 300 seconds of inactivity.
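The same setting can also be passed as a parameters array instead of a query string; a small sketch using the host and port from the example above:
$redis = new Predis\Client(array(
    'scheme' => 'tcp',
    'host'   => '10.0.0.1',
    'port'   => 6379,
    // 0 (or -1, depending on the Predis version) disables the read/write timeout
    'read_write_timeout' => 0,
));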
I had a similar problem; a better solution is not setting the timeout to 0 but using an exponential backoff with an upper and a lower limit.
Changing the config parameter connection_timeout to 0 will also solve the issue.
I got to the resolution of the problem. There is a limit on the number of ports that an application server can use to connect to a particular application on another machine. These ports were getting exhausted.
We increased the limit and the problem got resolved.
How did we find out about this problem?
In PHP, we were getting a "Cannot assign requested address" error while creating a socket (error code 99).
In /etc/redis/redis.conf, set
timeout = 0
I'm using Heroku and solved this problem by switching from the Heroku Redis addon to the Redis Enterprise addon, and then:
use Predis\Client as PredisClient;
This solves the name collision with GuzzleHttp\Client. You can leave out the
as PredisClient
part if you are not using GuzzleHttp.
And then the connection:
$redisClient = new PredisClient(array(
    'host'     => parse_url(env('REDIS_URL'), PHP_URL_HOST),
    'port'     => parse_url(env('REDIS_URL'), PHP_URL_PORT),
    'password' => parse_url(env('REDIS_URL'), PHP_URL_PASS),
));
(You can find your 'REDIS_URL' automatically prefilled in Heroku config vars).

(403) Access Not Configured when adding event to calendar

Today I started to get this error upon adding new events to the calendar:
Fatal error: Uncaught exception 'apiServiceException' with message
'Error calling POST https://www.googleapis.com/calendar/v3/
calendars/[cal-id]#group.calendar.google.com/events?alt=json&key=[dev-key]:
(403) Access Not Configured' in /[...]/src/io/apiREST.php:86
Stack trace: #0 /[...]/src/io/apiREST.php(56): apiREST::decodeHttpResponse(Object(apiHttpRequest))
#1 /[...]/src/service/apiServiceResource.php(148): apiREST::execute(Object(apiServiceRequest)) #2 /[...]/src/contrib/apiCalendarService.php(472):
apiServiceResource->__call('insert', Array) #3 /[...]/index.php(160): EventsServiceResource->insert('[cal-id-part]...',
Object(Ev in /[...]/src/io/apiREST.php on line 86
It worked perfectly till now, and I didn't change anything in the code.
I had a similar problem accessing Google Analytics data with PHP. I fixed it by making sure the Analytics service was turned on for my project in my API console: https://code.google.com/apis/console/.
You may have to turn on the Calendar service. See the link below for further explanation, from when someone had an issue with the translation service:
Translation api has stopped working
Don't know if this will help someone.
I had the same error and I had tried everything. Then I removed "developer_key" from config.php and it worked. Please note I was using a Service Account: https://developers.google.com/accounts/docs/OAuth2ServiceAccount
Well, they released a new version and that is apparently what caused the errors. I've got the newest version now and it works great once again.
Don't know if this will help someone, either.
It may also help to go to the respective calendar settings page in your Google Account, edit something trivial and explicitly save again (or re-save once more to set it back to the old settings). Sometimes the Google servers are a bit hesitant to accept new settings, especially the "Sharing" settings. Check which of your service accounts have the permission levels "can edit" or "is owner".
