I'm setting up a Vue.js app and multiple Laravel applications that must communicate with each other. Communication between the Laravel applications is done through cURL requests. For now, everything runs on my dev machine (latest MacBook Pro, macOS Mojave) with MAMP Pro (PHP 7.3).
The problem is that when I make simultaneous queries, I get:
CURL error 28 communication timeout ... with 0 bytes received.
It is of course not a timeout problem (I tried with 2 minutes on all applications and all timeouts, with the same result). Since I'm working with APIs, there is no PHP session (so no session file lock).
It seems that the cURL connection is closed, but I don't know why (I don't close it myself, and it doesn't hit any of the timeouts: connect, read, or global).
More visually:
vuejs --ajax1--> Laravel A --cUrl--> Laravel B --cUrl--> Laravel C
vuejs --ajax2--> Laravel A --cUrl--> Laravel B --cUrl--> Laravel C
vuejs <--500-- Laravel A --X-- Laravel B <---- Laravel C
vuejs <--500-- Laravel A --X-- Laravel B <---- Laravel C
ajax1 and ajax2 are sent at the same time.
It works if ajax1 and ajax2 are NOT sent at the same time.
What I know:
Communication is cut between Laravel A and Laravel B, but Laravel B executes the code and returns a response (which never arrives because, I think, the connection is closed?). Both requests reach Laravel C, and Laravel C also runs.
What I tried:
Apache and nginx
Disable firewall
Increase all timeouts (PHP and cURL)
Increase the PHP memory limit
The cURL CURLOPT_FORBID_REUSE and CURLOPT_FRESH_CONNECT options (see the sketch after this list)
Change the local domain name and TLD of the Laravel applications
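For reference, here is a minimal sketch of how I applied the cURL options above; the URL and the timeout values are placeholders, not my exact configuration:
$ch = curl_init('https://laravel-b.test/api/endpoint'); // placeholder URL for Laravel B
curl_setopt_array($ch, [
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_CONNECTTIMEOUT => 120,  // generous connect timeout
    CURLOPT_TIMEOUT        => 120,  // generous total timeout
    CURLOPT_FORBID_REUSE   => true, // do not keep the connection for reuse
    CURLOPT_FRESH_CONNECT  => true, // force a brand new connection
]);
$body = curl_exec($ch);
if ($body === false) {
    error_log(curl_error($ch)); // this is where error 28 with 0 bytes received shows up
}
curl_close($ch);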
What I'm wondering:
Is there a maximum number of requests I can make at the same time with cURL on the same machine? (I did, of course, set CURLOPT_MAXCONNECTS to 20, without success.)
Is there a php.ini setting that I missed?
Is it possible that the problem comes from the fact that all these applications run on the same machine? If so, why?
Do both servers (nginx and Apache) limit connections from the same IP? (Since all applications are on the same machine, they all have the same IP.)
I just bumped into the same problem, where the main backend acts as a proxy to fetch data from another backend through cURL (Guzzle).
But I only get the timeout when doing more than 5 simultaneous requests.
All applications are live, and I've tried setting up a new VPS server to avoid having all backends on the same machine.
Setting CURLOPT_MAXCONNECTS also didn't work.
This is my fake proxy code (using Laravel):
// assumes "use GuzzleHttp\Client;" at the top of the controller
$uri = $request->get('url');
$api_url = env('URL');
$token = env('TOKEN');

// Guzzle client pointed at the upstream API
$client = new Client([
    'base_uri' => $api_url,
    'headers' => [
        'Accept' => 'application/json',
        'Content-Type' => 'application/json',
        'Authorization' => 'Bearer ' . $token,
    ],
]);

// Forward the request and return the raw body
$apiResponse = $client->request('GET', $uri, [
    'http_errors' => false,
]);

return $apiResponse->getBody()->getContents();
I haven't found a solution yet. Hopefully someone can help us solve this problem.
Related
I have a "legacy" PHP application that we just migrated to run on Google Cloud (Kubernetes Engine). Along with it I also have an Elasticsearch installation (Elastic Cloud on Kubernetes) running. After a few incidents where Kubernetes killed my Elasticsearch while we were trying to deploy other services, we came to the conclusion that we should probably not run ES on Kubernetes, at least if we are to manage it ourselves, due to an apparent lack of knowledge for doing it in a robust way.
So our idea is now to move to managed Elastic Cloud instead, which was really simple to deploy and start using. However... now that I try to load ES with the data needed for our PHP application, it fails mid-process with the error message "no alive nodes found in cluster". Sometimes it happens after fewer than 1000 "documents", and other times I manage to get 5000+ of them indexed before failure.
This is how I initialize the es client:
$clientBuilder = ClientBuilder::create();
$clientBuilder->setElasticCloudId(ELASTIC_CLOUD_ID);
$clientBuilder->setBasicAuthentication('elastic',ELASTICSEARCH_PW);
$clientBuilder->setRetries(10);
$this->esClient = $clientBuilder->build();
ELASTIC_CLOUD_ID & ELASTICSEARCH_PW are set via environment vars.
The request looks something like:
$params = [
    'index' => $index,
    'type' => '_doc',
    'body' => $body,
    'client' => [
        'timeout' => 15,
        'connect_timeout' => 30,
        'curl' => [CURLOPT_HTTPHEADER => ['Content-Type: application/json']],
    ],
];
The body and the target index depend on how far we get with the "ingestion", but it is generally pretty standard stuff.
All this works without any real problems when running against our own Elasticsearch installation in our own GKE cluster.
What I've tried so far is to add the retries and timeouts shown above, but none of that seems to make much of a difference.
We're running:
PHP 7.4
Elasticsearch 7.11
Elasticsearch PHP client 7.12 (via Composer)
If you use WAMP64, this error will occur; you have to use XAMPP instead.
Try the following command in the command prompt. If it runs, the problem is with your PHP/client configuration.
curl -u elastic:<password> https://<endpoint>:<port>
(Example for Elastic Cloud)
curl -u elastic:<password> example.es.us-central1.gcp.cloud.es.io:9234
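If the command line check succeeds but your application still fails, a quick way to rule out the client library is to hit the endpoint with plain PHP cURL. This is only a sketch; the endpoint, port, and password are the same placeholders as above:
$ch = curl_init('https://<endpoint>:<port>'); // placeholder endpoint and port
curl_setopt_array($ch, [
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_USERPWD        => 'elastic:<password>', // basic auth, as in the curl command
    CURLOPT_CONNECTTIMEOUT => 10,
]);
$body = curl_exec($ch);
echo $body === false ? curl_error($ch) : $body; // on success you should see the cluster info JSON
curl_close($ch);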
The constructor of the SOAP client for my WSDL web service is awfully slow. I get the response only after the fastcgi_read_timeout value in my Nginx config is reached; it seems as if the remote server is not closing the connection. I also need to set it to a minimum of 15 seconds, otherwise I get no response at all.
I already read similar posts here on SO, especially this one:
PHP SoapClient constructor very slow, and its linked threads, but I still cannot find the actual cause of the problem.
This is the part which takes 15+ seconds:
$client = new SoapClient("https://apps.correios.com.br/SigepMasterJPA/AtendeClienteService/AtendeCliente?wsdl");
It seems to be slow only when called from my PHP script, because the file opens instantly when accessed from any of the following:
wget from the server that is running the script
SoapUI or Postman (but I don't know whether they cached it beforehand)
opening the URL in a browser
Ports 80 and 443 are open in the firewall. Following the suggestion from another thread, I found two workarounds (sketched below):
Loading the WSDL from a local file => fast
Enabling the WSDL cache and using the remote URL => fast
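Roughly, the two workarounds look like this; the local path is illustrative, and the cache settings are the standard soap.wsdl_cache_* ini directives:
// Workaround 1: load the WSDL from a local copy (the path is illustrative)
$client = new SoapClient('/var/www/wsdl/AtendeCliente.wsdl');

// Workaround 2: keep the remote URL but enable the WSDL cache
ini_set('soap.wsdl_cache_enabled', '1');
ini_set('soap.wsdl_cache_ttl', '86400'); // cache the WSDL for one day
$client = new SoapClient('https://apps.correios.com.br/SigepMasterJPA/AtendeClienteService/AtendeCliente?wsdl');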
But still I'd like to know why it doesn't work with the original URL.
It seems as if the web service does not close the connection; in other words, I get the response only after reaching the timeout set in my server config. I tried setting keepalive_timeout 15; in my Nginx config, but it does not work.
Is there any SOAP/PHP parameter which forces the server to close the connection?
I was able to reproduce the problem and found the solution to the issue (it works, though it may not be the best) in the accepted answer of a question linked from the question you referenced:
PHP: SoapClient constructor is very slow (takes 3 minutes)
As per the answer, you can adjust the HTTP headers using the stream_context option.
$client = new SoapClient(
    "https://apps.correios.com.br/SigepMasterJPA/AtendeClienteService/AtendeCliente?wsdl",
    array(
        'stream_context' => stream_context_create(
            array(
                'http' => array(
                    'protocol_version' => '1.0',
                    'header' => 'Connection: Close'
                )
            )
        )
    )
);
More information on the stream_context option is documented at http://php.net/manual/en/soapclient.soapclient.php
I tested this using PHP 5.6.11-1ubuntu3.1 (cli)
I am trying to do a scan and scroll operation on an index as shown in the example :
$client = ClientBuilder::create()->setHosts([MYESHOST])->build();
$params = [
"search_type" => "scan", // use search_type=scan
"scroll" => "30s", // how long between scroll requests. should be small!
"size" => 50, // how many results *per shard* you want back
"index" => "my_index",
"body" => [
"query" => [
"match_all" => []
]
]
];
$docs = $client->search($params); // Execute the search
$scroll_id = $docs['_scroll_id']; // The response will contain no results, just a _scroll_id
// Now we loop until the scroll "cursors" are exhausted
while (true) {
// Execute a Scroll request
$response = $client->scroll([
"scroll_id" => $scroll_id, //...using our previously obtained _scroll_id
"scroll" => "30s" // and the same timeout window
]
);
// Check to see if we got any search hits from the scroll
if (count($response['hits']['hits']) > 0) {
// If yes, Do Work Here
// Get new scroll_id
// Must always refresh your _scroll_id! It can change sometimes
$scroll_id = $response['_scroll_id'];
} else {
// No results, scroll cursor is empty. You've exported all the data
break;
}
}
The first $client->search($params) API call executes fine and I am able to get back the scroll id. But the $client->scroll() API fails and I am getting the exception: "Elasticsearch\Common\Exceptions\NoNodesAvailableException: No alive nodes found in your cluster".
I am using Elasticsearch 1.7.1 and PHP 5.6.11.
Please help.
I found the PHP driver for Elasticsearch to be riddled with issues. The solution I went with was to just call the RESTful API with cURL from PHP; everything worked much quicker and debugging was much easier.
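To illustrate that approach, a search against the REST API with plain PHP cURL could look something like this (the host and index are placeholders, not part of the original setup):
$payload = json_encode(['query' => ['match_all' => new stdClass()]]);

$ch = curl_init('http://localhost:9200/my_index/_search'); // placeholder host and index
curl_setopt_array($ch, [
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_POSTFIELDS     => $payload,
    CURLOPT_HTTPHEADER     => ['Content-Type: application/json'],
]);
$result = json_decode(curl_exec($ch), true); // decoded hits end up in $result['hits']['hits']
curl_close($ch);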
I would guess the example is not up to date with the version you're using (the link you've provided is for 2.0, and you are saying you use 1.7.1). Just add inside the loop:
try {
    $response = $client->scroll([
        "scroll_id" => $scroll_id, // ...using our previously obtained _scroll_id
        "scroll" => "30s"          // and the same timeout window
    ]);
} catch (Elasticsearch\Common\Exceptions\NoNodesAvailableException $e) {
    break;
}
Check whether your server is running with the following command.
service elasticsearch status
I had the same problem and solved it.
I had added script.disable_dynamic: true to elasticsearch.yml as explained in the DigitalOcean tutorial https://www.digitalocean.com/community/tutorials/how-to-install-and-configure-elasticsearch-on-ubuntu-14-04
Because of that, the Elasticsearch server was not starting.
I removed the following line from elasticsearch.yml:
script.disable_dynamic: true
Then I restarted the Elasticsearch service and set the network host to local ("127.0.0.1").
I would recommend using the PHP cURL library directly for Elasticsearch queries.
I find it easier to use than any other Elasticsearch client library; you can simulate any query using CLI curl, and you can find many examples, documentation, and discussions on the internet.
Maybe you should try to telnet from your machine:
telnet [your_es_host] [your_es_port]
to check whether you can access it.
If not, please try to open that port or disable your machine's firewall.
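If telnet is not available, the same reachability check can be sketched in PHP; the host and port below are placeholders:
$errno = 0;
$errstr = '';
$socket = @fsockopen('your_es_host', 9200, $errno, $errstr, 5); // 5 second timeout
if ($socket === false) {
    echo "Cannot reach Elasticsearch: $errstr ($errno)\n"; // port closed or firewalled
} else {
    echo "Port is reachable\n";
    fclose($socket);
}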
That error basically means it can't find your cluster, likely due to misconfiguration on either the client's side or the server's side.
I have had the same problem with scroll, and it was working with certain indexes but not with others. It must have been a bug in the driver, as it went away after I updated the elasticsearch/elasticsearch package from 2.1.3 to 2.2.0.
Uncomment network.host in elasticsearch.yml:
network.host: 198....
And set it to:
127.0.0.1
Like this:
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 127.0.0.1
#
# Set a custom port for HTTP:
#
# http.port: 9200
#
I use Elasticsearch 2.2 in Magento 2 under an LXC container.
I set up the Elasticsearch server in Docker as described in the docs: https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html
But it used a different network (networks: - esnet) and could not talk to the application's network. After removing the networks setting, it works well.
Try:
Stop your Elasticsearch service if it's already running.
Go to your Elasticsearch directory in a terminal and run:
> ./bin/elasticsearch
This worked for me.
I don't understand why Guzzle requests are really slow on Laravel Forge and Laravel Homestead. I did not change the default server configuration on Forge or Homestead.
Every simple request like this one ...
$client = new GuzzleHttp\Client();
$response = $client->get('path-to-my-api');
... takes about 150ms (on Homestead and Forge). This happens on every request (same network or internet). I read some posts about Guzzle and it seems to be very fast for other users, but not for me.
Versions:
curl 7.35.0 (x86_64-pc-linux-gnu) libcurl/7.35.0 OpenSSL/1.0.1f zlib/1.2.8 libidn/1.28 librtmp/2.3
PHP Version 5.6.0
Guzzle 5.1.0
Something really weird is that when I do this (asynchronous) ...
$req = $client->createRequest('GET', 'path-to-my-api', ['future' => true]);
$client->send($req)->then(function ($response) {
});
... it takes about 10ms. That's great, but I don't understand why. And I don't want to perform asynchronous requests.
Maybe my time measurement is buggy, but I think it's OK; I use PHP Debug Bar like this:
// .....
// synch
Debugbar::startMeasure('synch','SYNCH Request');
$response = $client->get('path-to-my-api');
Debugbar::stopMeasure('synch');
// asynch
Debugbar::startMeasure('asynch','ASYNCH Request');
$req = $client->createRequest('GET', 'path-to-my-api', ['future' => true]);
$client->send($req)->then(function ($response) {
Debugbar::stopMeasure('asynch');
});
I know it's not easy to answer this question (because it's vague), but I have no clue for now :(. I can edit it if you want. Thanks a lot.
Guzzle itself cannot be slow; it's just a library. Your synchronous requests are probably taking longer because your API is taking long to respond, and your asynchronous requests seem faster because they don't block until a response is received.
Try calling the API directly in your browser or using cURL in your terminal; you'll probably find the latency is there.
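To see where the time goes, you could also time the same endpoint with plain PHP cURL and compare it to the Guzzle call; this is just a sketch, and 'path-to-my-api' is the placeholder from the question:
$start = microtime(true);

$ch = curl_init('http://path-to-my-api'); // placeholder URL
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_exec($ch);
curl_close($ch);

printf("raw cURL took %.1f ms\n", (microtime(true) - $start) * 1000);
If the raw call also takes around 150ms, the time is being spent in the API itself or in name resolution, not in Guzzle.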
I'm trying to get some behavior tests integrated into my API test suite. I'm using Laravel 4.2 and already have a nice suite of unit tests. The problem I'm running into is persistent data after these test suites run, as well as correctly populated seed data.
I've tried to wire SQLite into my bootstrap process following a few examples I've seen in various places online, but all this really does is set up my DB when I call Behat from the CLI. During the application run (specifically any cURL requests out to the API), Laravel still ties my DB to my local configuration, which uses a MySQL database.
Here's an example snippet of a test suite. The "Adding a new track" scenario is one where I'd like my API to use SQLite when the request and payload are sent to it.
Feature: Tracks
Scenario: Finding all the tracks
When I request "GET /api/v1/tracks"
Then I get a "200" response
Scenario: Finding an invalid track
When I request "GET /api/v1/tracks/1"
Then I get a "404" response
Scenario: Adding a new track
Given I have the payload:
"""
{"title": "behat add", "description": "add description", "short_description": "add short description"}
"""
When I request "POST /api/v1/tracks"
Then I get a "201" response
Here is a snippet of my bootstrap/start.php file. What I am trying to accomplish is for my Behat scenario (i.e. "Adding a new track") request to hit the testing config so I can manage it with an SQLite database.
$env = $app->detectEnvironment(array(
'local' => array('*.local'),
'production' => array('api'),
'staging' => array('api-staging'),
'virtualmachine' => array('api-vm'),
'testing' => array('*.testing'),
));
Laravel does not know about Behat. Create a special environment for it, with its own database.
Here is what I have in my start.php:
if (getenv('APP_ENV') && getenv('APP_ENV') != '')
{
$env = $app->detectEnvironment(function()
{
return getenv('APP_ENV');
});
}
else
{
$env = $app->detectEnvironment(array(
'local' => array('*.local', 'homestead'),
/* ... */
));
}
APP_ENV is set in your Apache/Nginx VirtualHost config. For apache:
SetEnv APP_ENV acceptance
Create a special local test URL for Behat to use and put that in the VirtualHost entry.
I recommend using an SQLite file-based database. Delete the file before each feature or scenario; I found it to be much quicker than MySQL. I wanted to use SQLite's in-memory mode, but I could not find a way to persist data between requests with an in-memory database.
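A minimal sketch of that idea, assuming Behat 3 and a Laravel acceptance environment whose database connection points at an SQLite file; the file path and the artisan call are assumptions about the setup, not something from the original question:
class FeatureContext implements Behat\Behat\Context\Context
{
    /** @BeforeScenario */
    public function resetDatabase()
    {
        // Assumed location of the SQLite file used by the acceptance environment
        $db = __DIR__ . '/../../app/storage/acceptance.sqlite';

        if (file_exists($db)) {
            unlink($db); // drop the previous scenario's data
        }
        touch($db); // fresh, empty database file

        // Rebuild the schema and seed data for the acceptance environment
        exec('php artisan migrate --seed --env=acceptance');
    }
}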