PHP Google Maps Geolocation Request Failing Half Of The Time - php

I retrieved this function for making requests to the Google Maps Geolocation API using PHP:
Make a request for The Google Maps Geolocation API
The issue I am having is that the request fails around 50% of the time, and it takes forever to actually time out. (Example: I'll make the request and it will fail after about 20 seconds. I'll refresh the page, making the same request again without changing any information, and it works.) I am trying to write this function into a class, but the service has been far too unreliable.
The Error:
Warning: file_get_contents(http://maps.google.com/maps/api/geocode/json?sensor=false&address=+chicago+IL): failed to open stream: Connection timed out in filename.php on line 2132
Is there any reason why?
Can I force this to timeout sooner?
Note:
An .htaccess rule is currently blocking all IPs but mine, but changing this doesn't seem to help.
function getGeoPosition($address){
    $url = "http://maps.google.com/maps/api/geocode/json?sensor=false" .
           "&address=" . urlencode($address);
    $json = file_get_contents($url);
    $data = json_decode($json, TRUE);
    if ($data['status'] == "OK") {
        return $data['results'];
    }
}
print_r(getGeoPosition('chicago,il'));
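To address the "can I force this to timeout sooner?" part: the sketch below passes a stream context with a short timeout to file_get_contents, so a hung request gives up after a few seconds instead of waiting for the default socket timeout. The getGeoPositionWithTimeout name, the 5-second value and the FALSE check are illustrative assumptions, not part of the original code.
function getGeoPositionWithTimeout($address){
    $url = "http://maps.google.com/maps/api/geocode/json?sensor=false" .
           "&address=" . urlencode($address);
    // give up after 5 seconds instead of waiting for the default socket timeout
    $context = stream_context_create(array(
        'http' => array('timeout' => 5)
    ));
    $json = file_get_contents($url, false, $context);
    if ($json === FALSE) {
        return NULL; // request failed or timed out
    }
    $data = json_decode($json, TRUE);
    if ($data['status'] == "OK") {
        return $data['results'];
    }
}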

Related

Send multiple http request at same time in php

I am trying to get the page meta tags and description from given URLs.
I have a URL array that I loop through, sending a cURL GET request for each page's meta; this takes a lot of time to process.
Is there any way to process all URLs simultaneously?
I mean, send requests to all URLs at the same time and then receive each response as soon as its request is completed.
For this purpose I have used
curl_multi_init()
but it's not working as expected. I have used this example:
Simultaneous HTTP requests in PHP with cURL
I have also used the GuzzleHttp example:
Concurrent HTTP requests without opening too many connections
My code:
$urlData = [
    'http://youtube.com',
    'http://dailymotion.com',
    'http://php.net'
];

foreach ($urlData as $url) {
    $promises[] = $this->client->requestAsync('GET', $url);
}

Promise\all($promises)->then(function (array $responses) {
    foreach ($responses as $response) {
        $htmlData = $response->getBody();
        dump($htmlData);
    }
})->wait();
But I got this error:
Call to undefined function GuzzleHttp\Promise\Promise\all()
I am using Guzzle 6 and Promises 1.3
I need a solution, whether in cURL or in Guzzle, to send simultaneous requests to save time.
Check your use statements. You probably have a mistake there, because the correct name is GuzzleHttp\Promise\all(). Maybe you forgot the use GuzzleHttp\Promise as Promise; statement.
Otherwise the code is correct and should work. Also check that you have the cURL extension enabled in PHP, so Guzzle will use it as the backend. It's probably there already, but worth checking ;)
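For reference, a minimal self-contained sketch of the corrected version; the Client construction, the autoload path, and the echo in place of dump() are assumptions made to keep it runnable outside the asker's class:
require 'vendor/autoload.php';

use GuzzleHttp\Client;
use GuzzleHttp\Promise;

$client = new Client();

$urlData = [
    'http://youtube.com',
    'http://dailymotion.com',
    'http://php.net'
];

$promises = [];
foreach ($urlData as $url) {
    $promises[] = $client->requestAsync('GET', $url);
}

// With "use GuzzleHttp\Promise;" above, Promise\all() resolves to GuzzleHttp\Promise\all()
Promise\all($promises)->then(function (array $responses) {
    foreach ($responses as $response) {
        $htmlData = (string) $response->getBody();
        echo strlen($htmlData) . " bytes received\n";
    }
})->wait();
If some of the URLs may fail, Promise\settle($promises) is the gentler option: it waits for every request without rejecting the whole batch when one of them errors out.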

Google Drive API file_get_contents and referrer

I am trying to list files from a Google Drive folder.
If I use jQuery I can successfully get my results:
var url = "https://www.googleapis.com/drive/v3/files?q='" + FOLDER_ID + "'+in+parents&key=" + API_KEY;
$.ajax({
    url: url,
    dataType: "jsonp"
}).done(function(response) {
    // I get my results successfully
});
However, I would like to get these results with PHP, but when I run this:
$url = 'https://www.googleapis.com/drive/v3/files?q='.$FOLDER_ID.'+in+parents&key='.$API_KEY;
$content = file_get_contents($url);
$response = json_decode($content, true);
echo json_encode($response);
exit;
I get an error:
file_get_contents(...): failed to open stream: HTTP request failed! HTTP/1.0 403 Forbidden
If I run this in browser:
https://www.googleapis.com/drive/v3/files?q={FOLDER_ID}+in+parents&key={API_KEY}
I get:
The request did not specify any referer. Please ensure that the client is sending referer or use the API Console to remove the referer restrictions.
I have set up referrers for my website and localhost in the Google Developers Console.
Can someone explain to me what the difference is between the jQuery call and the PHP call, and why the PHP call fails?
It's either the headers or the cookies.
When you conduct the request using jQuery, the user's user agent, IP and extra headers are sent to Google, as well as the user's cookies (which allow the user to stay logged in). When you do it using PHP this data is missing, because you, the server, become the one who sends the request, not the user or the user's browser.
It might be that Google blocks requests with invalid user agents as a first line of defense, or that you need to be logged in.
Try conducting the same jQuery AJAX request while you're logged out. If it doesn't work, you know your problem.
Otherwise, you need to alter the headers. Take a look at this: PHP file_get_contents() and setting request headers. Of course, you'll need to do some trial and error to figure out which missing header allows the request to go through.
Regarding the referrer: jQuery works because the referrer header is set to the page you're currently on. When you go to the URL directly there's no referrer header, and PHP requests made using file_get_contents have no referrer because it doesn't make much sense for them to have one.
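If the 403 is indeed the referrer restriction, a minimal sketch along these lines sends a Referer header (note the single-r HTTP spelling) with the PHP request via a stream context. The example.com referrer, the User-Agent value and the placeholder folder/key variables are assumptions; the referrer you send must match one of the referrers allowed for the key in the API Console.
$FOLDER_ID = 'your-folder-id'; // placeholder
$API_KEY   = 'your-api-key';   // placeholder

$url = "https://www.googleapis.com/drive/v3/files?q='" . $FOLDER_ID . "'+in+parents&key=" . $API_KEY;

// Attach a Referer (and a browser-like User-Agent) to the file_get_contents() request
$context = stream_context_create(array(
    'http' => array(
        'method' => 'GET',
        'header' => "Referer: https://www.example.com/\r\n" .
                    "User-Agent: Mozilla/5.0\r\n"
    )
));

$content = file_get_contents($url, false, $context);
var_dump(json_decode($content, true));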

file_get_contents failing when it was working earlier

I am trying to access multiple JSON files provided by Steam for the market price of CS:GO items. The first file_get_contents works:
$inventory = file_get_contents('http://steamcommunity.com/profiles/' . $steamprofile['steamid'] . '/inventory/json/730/2');
but the second one onwards doesn't work:
$marketString = file_get_contents('http://steamcommunity.com/market/priceoverview/?currency=1&appid=730&market_hash_name=' . urlencode($json_a->{'rgDescriptions'}->$rgDescrId->{'market_hash_name'}));
I get the error on every item, for example:
Warning: file_get_contents(http://steamcommunity.com/market/priceoverview/?currency=1&appid=730&market_hash_name=Negev%20|%20Nuclear%20Waste%20(Minimal%20Wear)): failed to open stream: HTTP request failed! HTTP/1.0 429 Unknown in /home4/matt500b/public_html/themooliecommunity.com/CSGO/index.php on line 24
I can confirm that allow_url_fopen is on
Pasting the following URL into a browser shows that the URL works:
http://steamcommunity.com/market/priceoverview/?currency=1&appid=730&market_hash_name=Negev%20|%20Nuclear%20Waste%20(Minimal%20Wear)
Please note that this worked about an hour ago but is now throwing an error. Any suggestions?
You've got a response with status 429 Too Many Requests:
The user has sent too many requests in a given amount of time ("rate
limiting").
So the site may simply be blocking overly frequent requests to its API.
An HTTP 429 is a "too many requests" warning; it's not an error, just a note to tell you you've overdone it a little. You'll have to either wait a while or, if it's your own server, adjust its settings to allow more requests.
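One way to stay under the rate limit is to space the price lookups out, for example with a short pause between requests. A minimal sketch, where the one-second delay and the item list are illustrative assumptions rather than Steam's documented limits:
// placeholder list of market_hash_name values
$items = array('Negev | Nuclear Waste (Minimal Wear)');

foreach ($items as $item) {
    $url = 'http://steamcommunity.com/market/priceoverview/?currency=1&appid=730'
         . '&market_hash_name=' . urlencode($item);
    $marketString = file_get_contents($url);
    if ($marketString !== FALSE) {
        print_r(json_decode($marketString, TRUE));
    }
    sleep(1); // pause between requests to avoid HTTP 429
}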

file_get_contents(https://...#me): failed to open stream: HTTP request failed! HTTP/1.1 401 Unauthorized

I am working on a webpage where I have to get data from an API (with PHP). The authentication works fine and the user can log in. To save the access token I use the setcookie() function in PHP. However, after some time the data disappears and I get the following warning:
Warning: file_get_contents(https://...#me): failed to open stream: HTTP request failed! HTTP/1.1 401 Unauthorized in C:\wamp\www\main.php on line 40
These are the lines:
function getUser($access_token){
    $url = "https://jawbone.com/nudge/api/v.1.0/users/#me";
    $opts = array(
        'http' => array(
            'method' => "GET",
            'header' => "Authorization: Bearer {$access_token}\r\n"
        )
    );
    $context = stream_context_create($opts);
    $response = file_get_contents($url, false, $context);
    $user = json_decode($response, true);
    return $user['data'];
}
It's quite weird actually because it works when I delete the access token cookie and then log in (and authenticate) again... I simply do not understand why this is happening.
Setting the cookie (expires_in is 31536000):
if (!isset($_COOKIE['access_token'])) {
    setcookie('access_token', $data, time() + ($json['expires_in']));
}
Can you please tell me what I am doing wrong?
Check the expiry date of the cookie; even though the token may still be valid on your side, the cookie may expire earlier.
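One way to see which side expires first is to store the expected token expiry alongside it and compare before each API call. This is only a sketch; the second cookie name and the reauthenticate() helper are hypothetical, not part of the API:
if (!isset($_COOKIE['access_token'])) {
    $expiresAt = time() + $json['expires_in'];
    setcookie('access_token', $data, $expiresAt);
    // remember when the token itself should expire
    setcookie('access_token_expires', $expiresAt, $expiresAt);
}

// Before calling the API, check whether the token is missing or about to expire
if (!isset($_COOKIE['access_token'])
    || (isset($_COOKIE['access_token_expires']) && $_COOKIE['access_token_expires'] - time() < 60)) {
    reauthenticate(); // hypothetical helper that sends the user through the OAuth flow again
}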
Maybe break it up a bit. Start with the simplest task and make it work. So, let's say you need an API call: make that call work. When it works, move on to the next step.
If the API call needs a secret token, I can imagine you don't want to just hand it to any user to store as a cookie.
The next step would be authentication. If you're new not only to PHP but to web applications in general, I would start with something less challenging.
But let's assume you'll master it quite soon. The next step (there is never a last step) could be a security decision: does the user need to make the API call, or can you make it for the user server-side once the user is authorized?

Amazon CloudSearch throws HTTP 403 on document upload

I am trying to integrate Amazon CloudSearch into SilverStripe. What I want to do is, when pages are published, send a cURL request with the data about the page as a JSON string to the search domain.
I am using http://docs.aws.amazon.com/cloudsearch/latest/developerguide/uploading-data.html#uploading-data-api as a reference.
Every time I try to upload, it returns a 403. I have also allowed the IP address in the access policies for the search domain.
I am using this as a code reference: https://github.com/markwilson/AwsCloudSearchPhp
I think the problem is that AWS is not authenticating the request correctly. How do I authenticate it correctly?
If you are getting the following error
403 Forbidden, Request forbidden by administrative rules.
and if you are sure you have the appropriate rules in effect, I would check the API URL you are using. Make sure you are using the correct endpoint. If you are doing a batch upload, the API endpoint should look like the one below:
your-search-doc-endpoint/2013-01-01/documents/batch
Notice 2013-01-01: that is a required part of the URL and is the API version you will be using. You cannot do the following, even though it might seem to make sense:
your-search-doc-endpoint/documents/batch <- Won't work
To search, you would need to hit the following API:
your-search-endpoint/2013-01-01/search?your-search-params
After many searches and much trial and error, I was able to put together a small code block, from pieces of code from everywhere, to upload a "file" to AWS CloudSearch using cURL and PHP.
The single most important thing is to make sure that your data is prepared correctly to be sent in JSON format.
Note: for CloudSearch you're not uploading a file, you're posting a stream of JSON data. That is why many of us have trouble uploading the data.
So in my case I wanted to upload data to my search engine on CloudSearch. It seems simple, and it is, but example code for this is hard to find; most people tell you to go to the documentation, which usually has examples, but only for the AWS CLI. The PHP SDK is quite a learning curve: instead of keeping things simple you do 20 steps for 1 task, and on top of that you're required to pull in other libraries that are just wrappers around native PHP functions, so instead of becoming simpler it becomes complicated.
So back to how I did it: first I pull the data from my database as an array and serialize it to save it to a file.
$rows = $database_data;

foreach ($rows as $key => $row) {
    $data['type'] = 'add';
    $data['id'] = $row->id;
    $data['fields']['title'] = $row->title;
    $data['fields']['content'] = $row->content;
    $data2[] = $data;
}

// now save your data to a file and make sure
// to serialize() it
$fp = fopen($path_to_file, $mode);
flock($fp, LOCK_EX);
fwrite($fp, serialize($data2));
flock($fp, LOCK_UN);
fclose($fp);
Now that you have your data saved, we can play with it:
$aws_doc_endpoint = '{Your AWS CloudSearch Document Endpoint URL}';

// Let's read the data
$data = file_get_contents($path_to_file);

// Now let's unserialize() it and encode it in JSON format
$data = json_encode(unserialize($data));

// finally let's use cURL
$ch = curl_init($aws_doc_endpoint);
curl_setopt($ch, CURLOPT_CUSTOMREQUEST, "POST");
// set both headers in one call; a second CURLOPT_HTTPHEADER call would overwrite the first
curl_setopt($ch, CURLOPT_HTTPHEADER, array(
    'Content-Type: application/json',
    'Content-Length: ' . strlen($data)
));
curl_setopt($ch, CURLOPT_POSTFIELDS, $data);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, TRUE);
$response = curl_exec($ch);
curl_close($ch);

$response = json_decode($response);

if ($response->status == 'success')
{
    return TRUE;
}

return FALSE;
And like I said, there is nothing to it. Most answers I encountered were "use Guzzle, it's really easy". Well, yes it is, but for a simple task like this you don't need it.
Aside from that, if you still get an error, make sure to check the following:
Well-formatted JSON data.
Make sure you have access to the endpoint.
Well, I hope someone finds this code helpful.
To diagnose whether it's an access policy issue, have you tried a policy that allows all access to the upload? Something like the following opens it up to everything:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "",
            "Effect": "Allow",
            "Principal": {
                "AWS": "*"
            },
            "Action": "cloudsearch:*"
        }
    ]
}
I noticed that if you just go to the document upload endpoint in a browser (mine looks like "doc-YOURDOMAIN-RANDOMID.REGION.cloudsearch.amazonaws.com") you'll get the 403 "Request forbidden by administrative rules" error even with open access, so as @dminer said, you'll need to make sure you're posting to the correct full URL.
Have you considered using the PHP SDK? For example, http://docs.aws.amazon.com/aws-sdk-php/guide/latest/service-cloudsearchdomain.html. It should take care of making correct requests, in which case you could rule out transport errors.
This never worked for me. I used the CloudSearch command line to upload files, and PHP cURL to search files.
Try adding "cloudsearch:document" to CloudSearch's access policy under Actions
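For reference, a narrower version of the open policy shown above might look like the following. Treat it as a sketch rather than a verified policy; whether it is sufficient depends on the domain's other rules:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "",
            "Effect": "Allow",
            "Principal": {
                "AWS": "*"
            },
            "Action": [
                "cloudsearch:document",
                "cloudsearch:search"
            ]
        }
    ]
}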
