Elasticsearch PHP client throwing exception "No alive nodes found in your cluster" - php

I am trying to do a scan and scroll operation on an index, as shown in the example:
$client = ClientBuilder::create()->setHosts([MYESHOST])->build();

$params = [
    "search_type" => "scan",    // use search_type=scan
    "scroll" => "30s",          // how long between scroll requests. should be small!
    "size" => 50,               // how many results *per shard* you want back
    "index" => "my_index",
    "body" => [
        "query" => [
            "match_all" => []
        ]
    ]
];

$docs = $client->search($params);   // Execute the search
$scroll_id = $docs['_scroll_id'];   // The response will contain no results, just a _scroll_id

// Now we loop until the scroll "cursors" are exhausted
while (true) {

    // Execute a Scroll request
    $response = $client->scroll([
        "scroll_id" => $scroll_id,  // ...using our previously obtained _scroll_id
        "scroll" => "30s"           // and the same timeout window
    ]);

    // Check to see if we got any search hits from the scroll
    if (count($response['hits']['hits']) > 0) {
        // If yes, Do Work Here

        // Get new scroll_id
        // Must always refresh your _scroll_id! It can change sometimes
        $scroll_id = $response['_scroll_id'];
    } else {
        // No results, scroll cursor is empty. You've exported all the data
        break;
    }
}
The first $client->search($params) API call executes fine and I am able to get back the scroll id. But the $client->scroll() API call fails and I am getting the exception: "Elasticsearch\Common\Exceptions\NoNodesAvailableException No alive nodes found in your cluster"
I am using Elasticsearch 1.7.1 and PHP 5.6.11
Please help

I found the PHP driver for Elasticsearch to be riddled with issues, so my solution was to just implement the RESTful API with curl via PHP. Everything worked much quicker and debugging was much easier.
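For illustration, a minimal sketch of that approach (the host, port, and index name are assumptions, reusing my_index from the question; new stdClass() makes match_all encode as {} rather than []):
$ch = curl_init('http://localhost:9200/my_index/_search');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_HTTPHEADER, ['Content-Type: application/json']);
curl_setopt($ch, CURLOPT_POSTFIELDS, json_encode([
    'size'  => 50,
    'query' => ['match_all' => new stdClass()] // {} rather than []
]));
$response = json_decode(curl_exec($ch), true); // decoded ES response
curl_close($ch);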

I would guess the example is not up to date with the version you're using (the link you've provided is to 2.0, and you are saying you use 1.7.1). Just add inside the loop:
try {
    $response = $client->scroll([
        "scroll_id" => $scroll_id,  // ...using our previously obtained _scroll_id
        "scroll" => "30s"           // and the same timeout window
    ]);
} catch (Elasticsearch\Common\Exceptions\NoNodesAvailableException $e) {
    break;
}

Check if your server is running with the following command:
service elasticsearch status
I had the same problem and solved it.
I had added script.disable_dynamic: true to elasticsearch.yml, as explained in the DigitalOcean tutorial https://www.digitalocean.com/community/tutorials/how-to-install-and-configure-elasticsearch-on-ubuntu-14-04, so the Elasticsearch server was not starting.
I removed the following line from elasticsearch.yml:
script.disable_dynamic: true

Restart the Elasticsearch service and set the network host to the local address "127.0.0.1".

I would recommend using the PHP curl library directly for Elasticsearch queries.
I find it easier to use than any other Elasticsearch client library; you can simulate any query using curl on the command line, and you can find many examples, documentation, and discussions on the internet.
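For example, a match_all search can be simulated from the shell first and then ported to PHP (host and index name are assumptions):
curl -XPOST 'http://localhost:9200/my_index/_search?pretty' -d '{"query":{"match_all":{}}}'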

Maybe you should try to telnet from your machine:
telnet [your_es_host] [your_es_port]
to check whether you can reach it.
If not, please try to open that port or disable your machine's firewall.

That error basically means the client can't find your cluster, likely due to a misconfiguration on either the client's side or the server's side.

I had the same problem with scroll: it was working with certain indexes but not with others. It must have been a bug in the driver, as it went away after I updated the elasticsearch/elasticsearch package from 2.1.3 to 2.2.0.

Uncomment the network.host line in elasticsearch.yml:
network.host: 198....
and set it to 127.0.0.1, like this:
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 127.0.0.1
#
# Set a custom port for HTTP:
#
# http.port: 9200
#
I use Elasticsearch 2.2 in Magento 2 under an LXC container.

I set up an Elasticsearch server in Docker as described in the doc: https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html
But it used a different network (networks: - esnet) and could not talk to the application network. After removing the networks setting, it works well.
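For illustration, a rough sketch of the docker-compose.yml change (service names and image tag are placeholders, not taken from the original setup):
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.11.1
    # networks:        <- removed, so the container joins the default
    #   - esnet           Compose network shared with the app service
  app:
    image: my-php-app:latest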


Try this:
Stop your Elasticsearch service if it's already running.
Go to your Elasticsearch directory via the terminal and run:
> ./bin/elasticsearch
This worked for me.

Related

"no alive nodes found in cluster" while indexing docs

I have a "legacy" php application that we just migrated to run on Google Cloud (Kubernetes Engine). Along with it I also have a ElasticSearch installation (Elastic Cloud on Kubernetes) running. After a few incidents with Kubernetes killing my Elastic Search when we're trying to deploy other services we have come to the conclusion that we should probably not run ES on Kubernetes, at least if are to manage it ourselves. This due to a apparent lack of knowledge for doing it in a robust way.
So our idea is now to move to managed Elastic Cloud instead which was really simple to deploy and start using. However... now that I try to load ES with the data needed for our php application if fails mid-process with the error message no alive nodes found in cluster. Sometimes it happens after less than 1000 "documents" and other times I manage to get 5000+ of them indexed before failure.
This is how I initialize the es client:
$clientBuilder = ClientBuilder::create();
$clientBuilder->setElasticCloudId(ELASTIC_CLOUD_ID);
$clientBuilder->setBasicAuthentication('elastic',ELASTICSEARCH_PW);
$clientBuilder->setRetries(10);
$this->esClient = $clientBuilder->build();
ELASTIC_CLOUD_ID & ELASTICSEARCH_PW are set via environment vars.
The request looks something like:
$params = [
    'index' => $index,
    'type' => '_doc',
    'body' => $body,
    'client' => [
        'timeout' => 15,
        'connect_timeout' => 30,
        'curl' => [CURLOPT_HTTPHEADER => ['Content-type: application/json']]
    ]
];
The body and which index depend on how far we get with the "ingestion", but generally it's pretty standard stuff.
All this works without any real problems when running against our own installation of Elastic in our own GKE cluster.
What I've tried so far is to add the retries and timeouts, but none of that seems to make much of a difference.
We're running:
php 7.4
ElasticSearch 7.11
Elastic Search client 7.12 (php via composer)

If you use WAMP64, this error will occur; you have to use XAMPP instead.
Try the following command in the command prompt. If it runs, the problem is with your configuration:
curl -u elastic:<password> https://<endpoint>:<port>
(Example for Elastic Cloud)
curl -u elastic:<password> example.es.us-central1.gcp.cloud.es.io:9234

Blackfire profiling not working

So I followed these instructions for my Vagrant box and everything seemed to go fine; I mean, it's running. It has been configured with its server id and server token.
I then installed the PHP Probe, as per the instructions on the same page, and restarted apache2 when it was done. I then did composer require blackfire/php-sdk and finally in my code I did:
$probe = $blackfire->createProbe();
// some PHP code you want to profile
$blackfire->endProbe($probe);
dd('End here.'); // Laravels die and dump function.
So as far as I know I did everything right. Then, in my console I did:
vagrant@scotchbox:/var/www$ php artisan fetch_eve_online_region_type_history_information
[Blackfire\Exception\ApiException]
401: while calling GET https://blackfire.io/api/v1/collab-tokens [context: NULL] [headers: array (
0 => 'Authorization: Basic xxxxxx=',
1 => 'X-Blackfire-User-Agent: Blackfire PHP SDK/1.0',
)]
// where xxxx is some kind of authentication token that looks different from what I gave as my server id and token.
Uh... OK, so the docs state that if something goes wrong I should check the logs:
vagrant@scotchbox:/var/www$ cat /var/log/blackfire/agent.log
vagrant@scotchbox:/var/www$
There's nothing in the logs...
What am I doing wrong?

Not a real solution, but rather a workaround until we hear more about how to actually solve it.
I added the client credentials manually, directly in the code, and it solved the issue for me:
$config = new \Blackfire\ClientConfiguration();
$config->setClientId('...your _client_ id...');
$config->setClientToken('...your _client_ token...');
$blackfire = new \Blackfire\Client($config);
The string that I saw in the error was Authorization: Basic Og==, and Og== is just the base64-encoded string ":", which hints that the automatic username/password (or id/token in this case?) lookup failed, so authorization is impossible. That's why providing the details manually works around it.
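You can confirm the decoding in PHP:
var_dump(base64_decode('Og==')); // string(1) ":" - i.e. empty id and empty token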
A little bit late, but maybe someone will need it in the future.
Adding the HOME environment variable to Apache's vhost file, so that Blackfire finds ~/.blackfire.ini, solves it:
<VirtualHost hostname:80>
    ...
    # I'm running macOS; on Linux it would be /home/me
    SetEnv HOME /Users/me
    ...
</VirtualHost>
Assuming that your probe configuration is correct (server_id & server_token) and that you can profile from the browser: to use the PHP SDK (PHPUnit integration with Blackfire), you have to configure the client side:
apt-get install blackfire-agent
Then run blackfire config; you will be prompted for BLACKFIRE_CLIENT_ID and BLACKFIRE_CLIENT_TOKEN.
You can also log in to the api/v1/collab-tokens endpoint to test your client credentials (username => BLACKFIRE_CLIENT_ID, password => BLACKFIRE_CLIENT_TOKEN).
The config file location for the client is /root/.blackfire.ini.
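For reference, a sketch of what that .blackfire.ini contains (values are placeholders; the file is normally written for you by blackfire config):
[blackfire]
client-id=YOUR_CLIENT_ID
client-token=YOUR_CLIENT_TOKEN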

PHP SOAP awfully slow: Response only after reaching fastcgi_read_timeout

The constructor of the SOAP client in my WSDL web service is awfully slow; I get the response only after the fastcgi_read_timeout value in my Nginx config is reached. It seems as if the remote server is not closing the connection. I also need to set it to a minimum of 15 seconds, otherwise I get no response at all.
I already read similar posts here on SO, especially this one,
PHP SoapClient constructor very slow, and its linked threads, but I still cannot find the actual cause of the problem.
This is the part which takes 15+ seconds:
$client = new SoapClient("https://apps.correios.com.br/SigepMasterJPA/AtendeClienteService/AtendeCliente?wsdl");
It seems it is only slow when called from my PHP script, because the file opens instantly when accessed from one of the following locations:
wget from my server which is running the script
SoapUI or Postman (but I don't know if they cached it before)
opening the URL in a browser
Ports 80 and 443 are open in the firewall. Following the suggestion from another thread, I found two workarounds (sketched below):
Loading the WSDL from a local file => fast
Enabling the WSDL cache and using the remote URL => fast
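Roughly, those two workarounds look like this in PHP (the local file path is hypothetical; the soap.wsdl_cache_* settings are PHP's standard ini options):

// Workaround 1: load the WSDL from a local copy of the file
$client = new SoapClient(__DIR__ . '/AtendeCliente.wsdl');

// Workaround 2: keep the remote URL but enable the WSDL cache,
// so the slow fetch happens at most once per TTL
ini_set('soap.wsdl_cache_enabled', '1');
ini_set('soap.wsdl_cache_ttl', '86400'); // one day
$client = new SoapClient("https://apps.correios.com.br/SigepMasterJPA/AtendeClienteService/AtendeCliente?wsdl");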
But still I'd like to know why it doesn't work with the original URL.
It seems as if the web service does not close the connection; in other words, I get the response only after reaching the timeout set in my server config. I tried setting keepalive_timeout 15; in my Nginx config, but it does not work.
Is there any SOAP/PHP parameter which forces the server to close the connection?
I was able to reproduce the problem, and found a solution (it works, though maybe it's not the best) in the accepted answer of a question linked from the question you referenced:
PHP: SoapClient constructor is very slow (takes 3 minutes)
As per the answer, you can adjust the HTTP headers using the stream_context option.
$client = new SoapClient("https://apps.correios.com.br/SigepMasterJPA/AtendeClienteService/AtendeCliente?wsdl", array(
    'stream_context' => stream_context_create(
        array('http' =>
            array(
                'protocol_version' => '1.0',
                'header' => 'Connection: Close'
            )
        )
    )
));
More information on the stream_context option is documented at http://php.net/manual/en/soapclient.soapclient.php
I tested this using PHP 5.6.11-1ubuntu3.1 (cli)

force behat to use in memory db with laravel 4.2 during curl requests

I'm trying to get some behavior tests integrated into my API test suite. I'm leveraging Laravel 4.2 and already have a nice suite of unit tests. The problem I'm running into is persistent data after these test suites run, as well as correctly populated seed data.
I've tried to link SQLite into my bootstrap process, following a few examples I've seen in various places online, but all this really does is set up my DB when I call Behat from the CLI. During the application run (most specifically, any curl requests out to the API), Laravel still ties my DB to my local configuration, which uses a MySQL db.
Here's an example snippet of a test suite. The Adding a new track scenario is one where I'd like my API to use SQLite when the request and payload are made to the API.
Feature: Tracks

  Scenario: Finding all the tracks
    When I request "GET /api/v1/tracks"
    Then I get a "200" response

  Scenario: Finding an invalid track
    When I request "GET /api/v1/tracks/1"
    Then I get a "404" response

  Scenario: Adding a new track
    Given I have the payload:
      """
      {"title": "behat add", "description": "add description", "short_description": "add short description"}
      """
    When I request "POST /api/v1/tracks"
    Then I get a "201" response
Here is a snippet of my bootstrap/start.php file. What I am trying to accomplish is for my Behat scenario (i.e. Adding a new track) request to hit the testing config, so I can manage it with an SQLite db.
$env = $app->detectEnvironment(array(
    'local' => array('*.local'),
    'production' => array('api'),
    'staging' => array('api-staging'),
    'virtualmachine' => array('api-vm'),
    'testing' => array('*.testing'),
));

Laravel does not know about Behat. Create a special environment for it, with its own database.
Here is what I have in my start.php:
if (getenv('APP_ENV') && getenv('APP_ENV') != '')
{
    $env = $app->detectEnvironment(function()
    {
        return getenv('APP_ENV');
    });
}
else
{
    $env = $app->detectEnvironment(array(
        'local' => array('*.local', 'homestead'),
        /* ... */
    ));
}
APP_ENV is set in your Apache/Nginx VirtualHost config. For apache:
SetEnv APP_ENV acceptance
Create a special local test URL for Behat to use and put that in the VirtualHost entry.
I recommend using an SQLite file-based database and deleting the file before each Feature or Scenario; I found it to be much quicker than MySQL. I wanted to use SQLite's in-memory mode, but I could not find a way to persist data between requests with the in-memory database.
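For illustration, a sketch of the per-environment database config this implies, using Laravel 4's environment config folders (the acceptance folder matches the APP_ENV above; the file name is a placeholder):

// app/config/acceptance/database.php
return array(
    'default' => 'sqlite',
    'connections' => array(
        'sqlite' => array(
            'driver'   => 'sqlite',
            'database' => storage_path('acceptance.sqlite'), // delete before each Feature/Scenario
            'prefix'   => '',
        ),
    ),
);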

'Fake' an API response for testing

We are trying to recreate some of the responses from a live API locally for testing purposes.
We are trying to build a very basic PHP replica that responds to the Ajax requests with JSON. The code below is what I have it returning right now as a string, and the other end throws an error:
"Uncaught TypeError: Cannot read property 'instanceId' of undefined".
code:
$var = "{'request':{'instanceId':'1234546','usage':'1'}}";
echo($var);
We have tested it and it works with the live API, so it is something I am doing wrong when trying to return the dummy JSON data. As far as I am aware this is not a valid JSON response; is there a way to easily 'fake' the response with something like I have above?

That isn't valid JSON. Reverse your quotes: single on the outside, double on the inside.
You may also need to return a correct Content-Type header.
$var = '{"request":{"instanceId":"1234546","usage":"1"}}';
header("Content-Type: application/json");
echo($var);
Or better yet:
$obj = array("request" => array("instanceId" => "1234546", "usage" => "1"));
header("Content-Type: application/json");
echo(json_encode($obj));
To keep your tests local you can use Jaqen, a very light server built for testing scripts that depend on an API.
You tell it what to respond with directly in the request.
For example:
# Console request to a Jaqen instance listening at localhost:9000
$ curl 'http://localhost:9000/echo.json?request\[instanceId\]=1234546&request\[usage\]=1' -i
=> HTTP/1.1 200 OK
=> Content-Type: application/json
...
=> {"request":{"instanceId":"1234546","usage":"1"}}
There are several ways to tell it what to do, and it also serves static files to allow loading test pages and assets; check out the documentation for more.
It's built on Node.js, so it's really easy to install and use locally.
# To install it (once you have node.js):
$ npm install -g jaqen
# To run it just type 'jaqen' on the console:
$ jaqen
=> Jaqen v1.0.0 is listening on port 9000.
=> Press Ctl+C to stop the server.
That's it!
