Guzzle / Laravel cURL error 6: Could not resolve host: api.coingecko.com [duplicate] - php

OK, so I am a little stuck with this issue. I have a foreach loop (usually 50 results) that queries an API using Guzzle via Laravel's Http facade, and I am getting really inconsistent results.
I monitor the inserts in the database as they come in; sometimes the process seems slow, and other times it fails with the following error after x number of returned results.
cURL error 6: Could not resolve host: api.coingecko.com
The following is the actual code I'm using to fetch the results.
foreach ($json_result as $account) {
    var_dump($account['name']);
    $name = $account['name'];
    $coingecko_id = $account['id'];
    $identifier = strtoupper($account['symbol']);
    $response_2 = Http::get('https://api.coingecko.com/api/v3/coins/'.urlencode($coingecko_id).'?localization=false');
    if ($response_2->successful()) {
        $json_result_extra_details = $response_2->json();
        if (isset($json_result_extra_details['description']['en'])) {
            $description = $json_result_extra_details['description']['en'];
        }
        if (isset($json_result_extra_details['links']['twitter_screen_name'])) {
            $twitter_screen_name = $json_result_extra_details['links']['twitter_screen_name'];
        }
    } else {
        // Throw an exception if a client or server error occurred...
        $response_2->throw();
    }
    $crypto_account = CryptoAccount::updateOrCreate(
        [
            'identifier' => $identifier
        ],
        [
            'name' => $name,
            'identifier' => $identifier,
            'type' => "cryptocurrency",
            'coingecko_id' => $coingecko_id,
            'description' => $description,
        ]
    );
    //sleep(1);
}
Now, I know I am within the API rate limit of 100 calls a minute, so I don't think that is the issue. I am wondering if this is a server/API issue, which I don't really have any control over, or if it is related to my code and how Guzzle is implemented.
When I do single queries I don't seem to have a problem; the issue only appears inside the foreach loop.
Any advice would be great. Thanks
EDIT
OK, to update the question: I am now wondering if this is Guzzle/Laravel related. I changed the code to point at the Twitter API instead, and I am getting the same error after 80 synchronous requests.

I think it's better to use asynchronous requests directly with Guzzle.
$client = new \GuzzleHttp\Client();
$request = new \GuzzleHttp\Psr7\Request('GET', 'https://api.coingecko.com/api/v3/coins?localization=false');

for ($i = 0; $i < 50; $i++) {
    $promise = $client->sendAsync($request)
        ->then(function ($response) {
            echo 'I completed! ' . $response->getBody();
        });
    $promise->wait();
}
More information on async requests: see the Guzzle docs at https://docs.guzzlephp.org/en/stable/quickstart.html#async-requests
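For true concurrency, rather than waiting on each promise inside the loop as above, Guzzle's request pool can keep a fixed number of requests in flight at once. A minimal sketch; the URL and the concurrency limit of 5 are placeholders, and you'd want to keep the limit low enough to respect the API's rate limit:

$client = new \GuzzleHttp\Client();

// Lazily generate the 50 requests.
$requests = function () {
    for ($i = 0; $i < 50; $i++) {
        yield new \GuzzleHttp\Psr7\Request('GET', 'https://api.coingecko.com/api/v3/coins?localization=false');
    }
};

$pool = new \GuzzleHttp\Pool($client, $requests(), [
    'concurrency' => 5,
    'fulfilled' => function ($response, $index) {
        echo "Request $index completed: " . $response->getStatusCode() . "\n";
    },
    'rejected' => function ($reason, $index) {
        echo "Request $index failed: " . $reason->getMessage() . "\n";
    },
]);

// Initiate the transfers and block until all of them are complete.
$pool->promise()->wait();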

I had a similar problem to yours.
I was doing HTTP requests in a loop, and the first 80 requests were okay.
But the 81st started throwing this "Could not resolve host" exception.
It was very strange to me, because the domain resolves perfectly fine on my machine.
So I started digging into the code.
I ended up finding that Laravel's Http facade keeps creating a new client.
And I guess this eventually triggers the DNS resolver's rate limit?
So I have the following workaround:
// Not working:
// this way Laravel keeps getting a new HTTP client from Guzzle.
foreach ($rows as $row) {
    $response = Http::post(/* ... */);
}

// Workaround:
$client = new \GuzzleHttp\Client();
foreach ($rows as $row) {
    $response = $client->post(/* ... */);
    // don't forget to use $response->getBody();
}
I believe it's because $client caches the DNS resolution result, which reduces the calls to the DNS resolver and avoids triggering the rate limit?
I'm not sure whether that explanation is right, but the workaround works for me.
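As a side note for anyone who wants to stay on the Http facade: Laravel also ships a retry helper that can paper over transient failures like this one. A minimal sketch, assuming Laravel 8+ where Http::retry() is available:

use Illuminate\Support\Facades\Http;

// Retry up to 3 times with 100 ms between attempts, so a transient
// "Could not resolve host" doesn't abort the whole import loop.
$response = Http::retry(3, 100)
    ->get('https://api.coingecko.com/api/v3/coins/bitcoin?localization=false');

Note that this only masks the symptom; reusing a single client as shown above avoids it.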

Related

cPanel Parked Domains Not returning array

A password was changed and cPanel broke. I fixed the password and it's still broken! I have to iterate over parked domains, and I've verified via PuTTY that the user/password combination is correct.
<?php
include_once('cpanel_api_xml.php');

$domain = 'example.com';
$pass = '';//etc
$user = '';//etc

$xmlapi = new xmlapi('127.0.0.1');
$xmlapi->password_auth($user, $pass);
$domains_parked = $xmlapi->listparkeddomains($user);

foreach ($domains_parked as $k1 => $v1)
{
    if ($v1->domain == $domain) {
        $return = true;
        break;
    }
}
?>
That code generates the following error:
Invalid argument supplied for foreach()
Apparently $domains_parked is not even set! I've spent time looking at the function being called, so, without dumping all 86 KB here, this is the cleaned-up version of $xmlapi->listparkeddomains:
<?php
public function listparkeddomains($username, $domain = null)
{
    $args = array();
    if (!isset($username))
    {
        error_log("listparkeddomains requires that a user is passed to it");
        return false;
    }
    if (isset($domain))
    {
        $args['regex'] = $domain;
        return $this->api2_query($username, 'Park', 'listparkeddomains', $args);
    }
    return $this->api2_query($username, 'Park', 'listparkeddomains');
}
?>
I don't know what they're doing with that optional second parameter. I've called this function both with and without it, and tested the reaction with a simple mail().
Next I tried calling the API in a more direct fashion:
$xmlapi->api2_query($username, 'Park', 'listparkeddomains')
That also does not work. Okay, let's try some really raw output testing:
echo "1:\n";
print_r($xmlapi);
echo "2:\n";
print_r($xmlapi->api2_query($user, 'Park', 'listparkeddomains'));
echo "3:\n";
$domains_parked = $xmlapi->listparkeddomains($user);
print_r($domains_parked);
die();
That outputs the following:
1:
xmlapi Object
(
    [debug:xmlapi:private] =>
    [host:xmlapi:private] => 127.0.0.1
    [port:xmlapi:private] => 4099
    [protocol:xmlapi:private] => https
    [output:xmlapi:private] => simplexml
    [auth_type:xmlapi:private] => pass
    [auth:xmlapi:private] => <pass>
    [user:xmlapi:private] => <user>
    [http_client:xmlapi:private] => curl
)
2:
3:
I have never encountered such fragile code though I have no choice but to use it. Some help please?
So cPanel version 74 killed off the whole XML API, and it doesn't friggin' tell you with any error messages. I cannot honestly say that cPanel provides a stable platform to build anything reliable upon. You can either intentionally gimp your server so it doesn't update automatically (and potentially miss out on security updates), or completely rewrite the code every X iterations of time... again... and again... and again.
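For anyone landing here after the XML API removal: newer cPanel releases expose the same data through UAPI. A rough sketch shelling out to the uapi CLI; the module and function names (DomainInfo::list_domains) and the JSON layout are taken from my reading of the UAPI docs, so treat them as assumptions and verify against your cPanel version:

<?php
// Assumes root shell access on the server itself.
$user = 'example_user';
$out = shell_exec('uapi --output=json --user=' . escapeshellarg($user) . ' DomainInfo list_domains');
$data = json_decode($out, true);

// list_domains should return parked domains among other domain types.
$parked = isset($data['result']['data']['parked_domains'])
    ? $data['result']['data']['parked_domains']
    : array();
$isParked = in_array('example.com', $parked, true);
var_dump($isParked);
?>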

php - Detect bad request

I have two JSON sources, and one of them replies 400 Bad Request (depending on the load on the servers).
So I want my PHP code to check the answer from both servers and select the working one:
<?php
$server1 = 'server1.lan';
$server2 = 'server2.lan';

/*
Here: code to check and select the working server
*/

$json = file_get_contents('https://'.$workingServer.'/v1/data?source='.$_GET['source']);
$data = json_decode($json);
if (count($data->data)) {
    // Cycle through the array
    foreach ($data->data as $idx => $data) {
        echo "<p>$data->name</p>\n";
    }
}
?>
Thanks!
Below is an idea of what you may want to implement. Your goal is to take that idea and implement it in your own way, with proper error handling and without the code duplication:
$json = file_get_contents('https://server1.lan/v1/data');
if ($json === false)
{
    $json = file_get_contents('https://server2.lan/v1/data');
    if ($json === false)
    {
        die('Both servers are unavailable');
    }
}
file_get_contents returns boolean false on failure, so if the first server is unavailable, call the second. If it is also unavailable, exit the script, or do whatever error handling you prefer.
You may want to create an array of possible server names, and use a function that iterates over all of them until it finds a working one, and returns the contents, or throws an exception on failure.
I would also suggest that you use curl, which gives you an option to see the error codes of the request, customize the request itself, and so on.
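A minimal sketch combining both suggestions: an array of candidate servers walked in order with curl, returning the first body that comes back with HTTP 200. The server names, the path, and the 10-second timeout are placeholders:

<?php
function fetchFromFirstWorkingServer(array $servers, $path)
{
    foreach ($servers as $server) {
        $ch = curl_init('https://' . $server . $path);
        curl_setopt_array($ch, array(
            CURLOPT_RETURNTRANSFER => true,
            CURLOPT_TIMEOUT => 10,
        ));
        $body = curl_exec($ch);
        $status = curl_getinfo($ch, CURLINFO_HTTP_CODE);
        curl_close($ch);

        // Skip servers that are down or answering 400 Bad Request.
        if ($body !== false && $status === 200) {
            return $body;
        }
    }
    throw new RuntimeException('No server returned a usable response');
}

$json = fetchFromFirstWorkingServer(
    array('server1.lan', 'server2.lan'),
    '/v1/data?source=' . urlencode($_GET['source'])
);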
Check $http_response_header after making the file_get_contents call.
$json = file_get_contents('https://'.$server1.'/v1/data?source='.$_GET['source']);
if (strpos($http_response_header[0], "400") !== false)
{
    $json = file_get_contents('https://'.$server2.'/v1/data?source='.$_GET['source']);
}
See examples at http://php.net/manual/en/reserved.variables.httpresponseheader.php

Does Doctrine's flush method work asynchronously?

I have a functional test that creates several records and then makes some request calls. The test sometimes passes and sometimes doesn't; it's really weird. When I use var_dump it sometimes gives me the number of records I was requiring, and other times it gives me fewer than that.
This is the code:
foreach (range(0, 80) as $number)
{
    $citaDetalle = new CitasDetalle();
    $citaDetalle->setCodigo('FF#')
        ->setCitaGenerator($generator)
        ->setUidCreate($user)
        ->setFechaCita(DateExtension::nextLaborDay((new \DateTime())->modify("+5 Day"), false, false))
        ->setCitaTurno($turno)
        ->setCitaPlace($place)
    ;
    $em->persist($citaDetalle);
}

foreach (range(0, 20) as $number)
{
    $citaDetalle = new CitasDetalle();
    $citaDetalle->setCodigo('FF#')
        ->setCitaGenerator($generator)
        ->setUidCreate($user)
        ->setFechaCita(DateExtension::nextLaborDay((new \DateTime())->modify("+5 Day"), false, false))
        ->setCitaTurno($turno2)
        ->setCitaPlace($place)
    ;
    $em->persist($citaDetalle);
}

$em->flush();

$crawler = $this->client->request('GET', '/c/g/citas/new');
$this->assertEquals(200, $this->client->getResponse()->getStatusCode(),
    "Unexpected HTTP status code for GET /c/g/citas/new");

$form = $crawler->selectButton('Generar Cita')->form([
    'core_gestion_bundle_citas_detalle_type[citaGenerator]' =>
        $crawler->filter('#core_gestion_bundle_citas_detalle_type_citaGenerator option:contains("Generator Test")')->attr('value')
]);
$this->client->submit($form);
$this->client->followRedirect();

$lastDate = $em->getRepository('CoreGestionBundle:CitasDetalle')
    ->obtenerUltimaCita()[0]->getFechaCita();
$compareDate = DateExtension::nextLaborDay((new \DateTime())->modify("+6 Day"));
$this->assertEquals($compareDate->format('Y-m-d'), $lastDate->format('Y-m-d'));
This is not a proper way to test things. Why would you create records in your db over and over again? It's silly, as with DataFixtures you can achieve the same thing but only have to do it once (and, more importantly, you don't litter your test code).
Remember also that your db should be cleared and restored for every test (or, if you're able to do this, run the "write" tests on the db inside a transaction and discard the changes in the tearDown() function, as sketched below).
Answer to your question
No, Doctrine will not do things in an asynchronous way; your problem must be somewhere else.
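For the transaction-per-test idea mentioned above, here is a minimal sketch for a Symfony WebTestCase. Note this only cleans up reliably when the test and the kernel under test share a single database connection, so treat it as a starting point rather than a drop-in solution:

use Symfony\Bundle\FrameworkBundle\Test\WebTestCase;

class CitasDetalleTest extends WebTestCase
{
    private $client;
    private $em;

    protected function setUp()
    {
        $this->client = static::createClient();
        $this->em = $this->client->getContainer()->get('doctrine')->getManager();
        // Wrap everything the test writes in one transaction...
        $this->em->getConnection()->beginTransaction();
    }

    protected function tearDown()
    {
        // ...and discard it afterwards, leaving the database untouched.
        $this->em->getConnection()->rollBack();
        $this->em->close();
        parent::tearDown();
    }
}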

ActiveMQ and PHP Stomp: synchronous producer sample

I'm trying to get this principle working:
- a producer that sends one message (1) and waits for an ack which contains some result (the JSON result of an operation, actually)
- a consumer that checks all pending messages every 5 seconds, handles all of them in one batch, acknowledges all of them in one batch, then waits another 5 seconds (infinite loop).
Here are the 30 lines of my stompproducer.php:
<?php
function msg($txt)
{
    echo date('H:i:s > ').$txt."\n";
}

$queue = '/aaaa';
$msg = 'bar';

if (count($argv) < 3) {
    echo $argv[0]." [msg] [nb to send]\n";
    exit(1);
}

$msg = (string)$argv[1];
$to_send = intval($argv[2]);

try {
    $stomp = new Stomp('tcp://localhost:61613');
    while ($to_send--) { // post-decrement, so exactly $to_send messages get sent
        msg("Sending...");
        $result = $stomp->send(
            $queue,
            $msg." ".date("Y-m-d H:i:s"),
            array('receipt' => 'message-123')
        );
        echo 'result='.var_export($result, true)."\n";
        msg("Done.");
    }
} catch (StompException $e) {
    die('Connection failed: ' . $e->getMessage());
}
Here are the 30 lines of my stompconsumer.php:
<?php
$queue = '/aaaa';
$_waitTimer = 5000000; // microseconds
$_timeLastAsk = microtime(true);

function msg($txt)
{
    echo date('H:i:s > ').$txt."\n";
}

try {
    $stomp = new Stomp('tcp://localhost:61613');
    $stomp->subscribe($queue, array('activemq.prefetchSize' => 40));
    $stomp->setReadTimeout(0, 10000);
    while (true) {
        $frames_read = array();
        while ($stomp->hasFrame()) {
            $frame = $stomp->readFrame();
            if ($frame != null) {
                array_push($frames_read, $frame);
            }
            if (count($frames_read) == 40) {
                break;
            }
        }
        msg("Number of frames read: ".count($frames_read));
        msg("Pause...");
        // the elapsed time is in seconds, so convert it to microseconds
        $e = $_waitTimer - (microtime(true) - $_timeLastAsk) * 1000000;
        if ($e > 0) {
            usleep((int) $e);
        }
        if (count($frames_read) > 0) {
            msg("Ack now...");
            foreach ($frames_read as $frame) {
                $stomp->ack($frame);
            }
        }
        $_timeLastAsk = microtime(true);
    }
} catch (StompException $e) {
    die('Connection failed: ' . $e->getMessage());
}
I can't manage to build a synchronous producer, i.e. a producer that waits for the consumer's ack. If you run the samples above, you'll see that the producer instantly sends all its messages and then quits, with $stomp->send() returning true ("ok") every time.
I still haven't found good examples, nor good documentation with a simple blocking sample.
What should I do to make my producer block until the consumer sends its ack?
NB: I've read all the documentation here and the stomp php questions on Stack Overflow here and here.
First thing to pop to my mind: take a look at this ActiveMQ page on redelivery and DLQ handling:
http://activemq.apache.org/message-redelivery-and-dlq-handling.html
Another workaround I can think of is:
On the producer side:
1. Change your producer to send persistent messages.
On the consumer side, use a timer:
1. Read messages/frames until the queue is empty or a max cap is reached.
2. Create a cURL request and empty the packed list of messages.
3. Sleep your server for 5 secs.
You definitely need to test this further, but it should work. Once the process wakes up, you should be able to read all the queued messages.
Things to consider:
- persistent messages will need an expiration time
- You'll need ACKs on your consumer side to make sure the status of already-handled messages gets updated. Use ack=client so you can ACK all received messages in one go
- It's easier if you don't have to wait for your cURL request to respond.
- Out of the box, sending an ACK from the consumer back to the producer is not supported.
Best of luck
From the question, it sounds like you are looking for a request/response messaging pattern. This is something you must implement yourself, as the STOMP ack you reference only acks the message to the message broker on behalf of the consumer; the producer has no knowledge of this. Request/response involves setting a reply-to address on the outbound message and then waiting to receive a response on that address before sending the next message. There are a great many articles out there that document this sort of thing, such as this one.
Or, if you only need to know that the broker has received the message from the client and persisted it, then you can use STOMP's built-in receipt mechanism to have the broker send you a receipt indicating that it has processed your sent message. This, however, does not guarantee that a consumer has processed the message yet.
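A hedged sketch of that request/response pattern with the PHP Stomp extension, using an ActiveMQ temporary queue as the reply-to address (the queue names and the 10-second timeout are placeholders):

<?php
$stomp = new Stomp('tcp://localhost:61613');

// Listen on a temp queue that the consumer will answer to.
$replyTo = '/temp-queue/results';
$stomp->subscribe($replyTo);
$stomp->setReadTimeout(10);

// The consumer reads the reply-to header and sends its result there.
$stomp->send('/aaaa', 'bar', array('reply-to' => $replyTo));

$frame = $stomp->readFrame(); // blocks until the reply arrives or the timeout hits
if ($frame !== false) {
    echo 'Consumer replied: ' . $frame->body . "\n";
}

On the consumer side, after processing a message, it would do something like $stomp->send($frame->headers['reply-to'], $resultJson); to complete the round trip.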
I just remembered: you can try the reactphp/stomp library.
It's an event-driven library that might help you. Especially take a look at the core functionality addPeriodicTimer:
https://github.com/reactphp/stomp
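The event-loop half of that suggestion would look roughly like this; it uses react/event-loop's addPeriodicTimer, with the actual STOMP draining left as a stub:

$loop = React\EventLoop\Factory::create();

// Every 5 seconds, drain whatever frames are pending and ack them in one go.
$loop->addPeriodicTimer(5.0, function () {
    // Your existing readFrame()/ack() batch goes here.
    echo date('H:i:s > ') . "polling pending frames...\n";
});

$loop->run();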
Cheers

Prevent timeout during large request in PHP

I'm making a large request to the Brightcove servers to make a batch change of metadata in my videos. It seems like it only made it through 1000 iterations and then stopped; can anyone help in adjusting this code to prevent a timeout from happening? It needs to make about 7000-8000 iterations.
<?php
include 'echove.php';

$e = new Echove(
    'xxxxx',
    'xxxxx'
);

// Read Video IDs
# Define our parameters
$params = array(
    'fields' => 'id,referenceId'
);

# Make our API call
$videos = $e->findAll('video', $params);
//print_r($videos);

foreach ($videos as $video) {
    //print_r($video);
    $ref_id = $video->referenceId;
    $vid_id = $video->id;
    switch ($ref_id) {
        case "":
            $metaData = array(
                'id' => $vid_id,
                'referenceId' => $vid_id
            );
            # Update a video with the new meta data
            $e->update('video', $metaData);
            echo "$vid_id updated successfully!<br />";
            break;
        default:
            echo "$ref_id was not updated. <br />";
            break;
    }
}
?>
Thanks!
Try the set_time_limit() function. Calling set_time_limit(0) will remove any time limit for the execution of the script.
Also use ignore_user_abort() to bypass a browser abort; the script will keep running even if you close the browser (use with caution).
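Concretely, placing those two calls at the top of the batch script, before the loop, is all that's needed:

<?php
set_time_limit(0);       // remove the execution time limit for this script
ignore_user_abort(true); // keep running even if the browser disconnects

include 'echove.php';
// ... rest of the batch script as above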
Try sending a 'Status: 102 Processing' header every now and then to prevent the browser from timing out (your best bet is about 15 to 30 seconds in between). After the request has been processed, you may send the final response.
The browser shouldn't time out any more this way.
