I previously had a Google geocoding script working to extract longitude and latitude using local addresses in a database.
In the last six months I've switched hosts, and apparently Google has implemented a new forward geocoder. Now the XML call just returns my "url not loading" error.
I've tried everything to get my code working; even sample code from other websites won't run on my server. What am I missing? Is there possibly a server-side setting that is blocking this from executing properly?
Attempt # 1:
$request_url = "http://maps.googleapis.com/maps/api/geocode/xml?new_forward_geocoder=true&address=1600+Amphitheatre+Parkway,+Mountain+View,+CA";
echo $request_url;
$xml = simplexml_load_file($request_url) or die("url not loading");
$status = $xml->status;
return $status;
This simply returns "url not loading". I have tried with and without the new_forward_geocoder parameter, and with and without https.
The $request_url string DOES return proper results if you simply copy and paste it into a browser.
I also tried the following, just to see if I could get a file to return at all. Attempt # 2:
$request_url = "http://maps.googleapis.com/maps/api/geocode/json?new_forward_geocoder=true&address=1600+Amphitheatre+Parkway,+Mountain+View,+CA";//&sensor=true
echo $request_url."<br>";
$tmp = file_get_contents($request_url);
echo $tmp;
Any idea what could be causing the connection failure?
I was never able to get this working with XML again; I'm almost positive the file_get_contents call was the culprit.
I've posted what I did get to work with JSON/cURL below, in case anyone has similar issues.
Ultimately, I think the problems I ran into had to do with an upgrade to our Apache version on the server, and some of the default settings related to file_get_contents and fopen becoming more restrictive. I haven't confirmed this, though.
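For anyone hitting the same symptom, it may be worth dumping the relevant settings before blaming the remote service. A minimal diagnostic sketch (these are just the usual suspects, not a confirmed diagnosis):
// file_get_contents() and simplexml_load_file() can only fetch remote URLs
// when URL wrappers are enabled; "1" means enabled
var_dump(ini_get('allow_url_fopen'));
// Confirm the cURL extension is available as a fallback transport
var_dump(extension_loaded('curl'));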
This code does work though:
class geocoder{
static private $url = "http://maps.google.com/maps/api/geocode/json?sensor=false&address=";
static public function getLocation($address){
$url = self::$url.$address;
$resp_json = self::curl_file_get_contents($url);
$resp = json_decode($resp_json, true);
//var_dump($resp);
if($resp['status']=='OK'){
//var_dump($resp['results'][0]['geometry']['location']);
//echo "<br>";
//var_dump($resp['results'][0]['geometry']['location_type']);
//echo "<br>";
//var_dump($resp['results'][0]['place_id']);
return array ($resp['results'][0]['geometry']['location'], $resp['results'][0]['geometry']['location_type'], $resp['results'][0]['place_id']);
}else{
return false;
}
}
static private function curl_file_get_contents($URL){
$c = curl_init();
curl_setopt($c, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($c, CURLOPT_URL, $URL);
$contents = curl_exec($c);
curl_close($c);
if ($contents) return $contents;
else return FALSE;
}
}
$Address = "1600 Amphitheatre Parkway, Mountain View, CA";
$Address = urlencode(trim($Address));
list ($loc, $type, $place_id) = geocoder::getLocation($Address);
//var_dump($loc);
$lat = $loc["lat"];
$lng = $loc["lng"];
echo "<br><br> Address: ".$Address;
echo "<br>Lat: ".$lat;
echo "<br>Lon: ".$lng;
echo "<br>Location: ".$type;
echo "<br>Place ID: ".$place_id;
Related
I use cURL with the Riot API. Everything works fine on my live server but not locally. The curl extension is enabled in WampServer and I don't get any error messages; it's just a blank page.
Here's my code, even though it may not be directly relevant.
<?php
$private_key = "XXX";
function summoner_name($summoner, $server, $private_key) {
$summoner_encoded = rawurlencode($summoner);
$summoner_lower = strtolower($summoner_encoded);
$curl_url = 'https://' . $server . '.api.pvp.net/api/lol/' . $server . '/v1.4/summoner/by-name/' . $summoner . '?api_key='.$private_key;
$curl = curl_init($curl_url);
curl_setopt($curl, CURLOPT_RETURNTRANSFER, 1);
$result = curl_exec($curl);
curl_close($curl);
return $result;
}
function summoner_info_array_name($summoner) {
$summoner_lower = mb_strtolower($summoner, 'UTF-8');
$summoner_nospaces = str_replace(' ', '', $summoner_lower);
return $summoner_nospaces;
}
$summoner = "Test";
$server = "euw";
$summoner_info = summoner_name($summoner, $server, $private_key);
$summoner_info_array = json_decode($summoner_info, true);
$summoner_info_array_name = summoner_info_array_name($summoner);
$summoner_id = $summoner_info_array[$summoner_info_array_name]['id'];
$summoner_name_display = $summoner_info_array[$summoner_info_array_name]['name'];
$summoner_icon = $summoner_info_array[$summoner_info_array_name]['profileIconId'];
echo '<img src="http://ddragon.leagueoflegends.com/cdn/6.9.1/img/profileicon/'.$summoner_icon.'.png" /><br/><hr>'.$summoner_name_display;
?>
And here's my phpinfo() output for the curl extension.
Thanks in advance!
So, first, as @MaksimVolkob pointed out, and as we discussed in the comments, the first step to resolving these errors is to see what the error message actually is. curl_error() will give you this information.
Specifically, you're getting an SSL/TLS error:
SSL certificate problem: unable to get local issuer certificate
If you don't care about security (I do not recommend this for production applications, ever), you can disable the SSL verification step that is failing:
curl_setopt($curl, CURLOPT_SSL_VERIFYPEER, false);
The better way is to fix your CA certificate information by setting CURLOPT_CAINFO. This blog post explains this pretty well.
Edit: As OP discovered, this question has more specifics about getting cURL to recognize the proper CA certificate.
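For reference, pointing cURL at a CA bundle looks roughly like this (a sketch; the bundle path is an assumption, substitute wherever your cacert.pem actually lives):
// Tell cURL which CA bundle to trust when verifying the peer certificate
// (path is hypothetical; use the curl project's cacert.pem or your distro's bundle)
curl_setopt($curl, CURLOPT_CAINFO, '/path/to/cacert.pem');
// Keep peer verification on
curl_setopt($curl, CURLOPT_SSL_VERIFYPEER, true);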
You can always call the curl_getinfo() and curl_error() functions to check for problems with the latest cURL query.
Like this:
$result = curl_exec($curl);
if ($result === false) {
echo "Something is wrong here!\n".var_export(curl_error($curl), true)
. "\nQuery:".var_export(curl_getinfo($curl), true); exit();
}
(I'm scraping this stuff with the permission of the website in question, by the way).
It's a pretty simple web scraper that was working fine when I was loading all the links by hand. But when I tried to load them in via JSON and variables (so I can do lots of scraping with one script and make the process more modular by just adding more links to the JSON), it runs in an infinite loop.
(Page has been loading for about 15 minutes now)
Here is my JSON. Only one store is in there for testing purposes, but there are going to be about 15 more.
[
{
"store":"Incu Men",
"cat":"Accessories",
"general_cat":"Accessories",
"spec_cat":"accessories",
"url":"http://www.incuclothing.com/shop-men/accessories/",
"baseurl":"http://www.incuclothing.com",
"next_select":"a.next",
"prod_name_select":".infobox .fn",
"label_name_select":".infobox .brand",
"desc_select":".infobox .description",
"price_select":"#price",
"mainImg_select":"",
"more_imgs":".product-images",
"product_url":".hproduct .photo-link"
}
]
Here is the PHP scraper code:
<?php
//Set infinite time limit
set_time_limit (0);
// Include simple html dom
include('simple_html_dom.php');
// Defining the basic cURL function
function curl($url) {
$ch = curl_init(); // Initialising cURL
curl_setopt($ch, CURLOPT_URL, $url); // Setting cURL's URL option with the $url variable passed into the function
curl_setopt($ch, CURLOPT_RETURNTRANSFER, TRUE); // Setting cURL's option to return the webpage data
$data = curl_exec($ch); // Executing the cURL request and assigning the returned data to the $data variable
curl_close($ch); // Closing cURL
return $data; // Returning the data from the function
}
function getLinks($catURL, $prodURL, $baseURL, $next_select) {
$urls = array();
while($catURL) {
echo "Indexing: $url" . PHP_EOL;
$html = str_get_html(curl($catURL));
foreach ($html->find($prodURL) as $el) {
$urls[] = $baseURL . $el->href;
}
$next = $html->find($next_select, 0);
$url = $next ? $baseURL . $next->href : null;
echo "Results: $next" . PHP_EOL;
}
return $urls;
}
$string = file_get_contents("jsonWorkers/incuMens.json");
$json_array = json_decode($string,true);
foreach ($json_array as $value){
$baseURL = $value['baseurl'];
$catURL = $value['url'];
$store = $value['store'];
$general_cat = $value['general_cat'];
$spec_cat = $value['spec_cat'];
$next_select = $value['next_select'];
$prod_name = $value['prod_name_select'];
$label_name = $value['label_name_select'];
$description = $value['desc_select'];
$price = $value['price_select'];
$prodURL = $value['product_url'];
if (!is_null($value['mainImg_select'])){
$mainImg = $value['mainImg_select'];
}
$more_imgs = $value['more_imgs'];
$allLinks = getLinks($catURL, $prodURL, $baseURL, $next_select);
}
?>
Any ideas why the script runs infinitely without returning, stopping, or printing anything to the screen? I'm just gonna let it run until it stops. When I was doing this by hand it would only take a minute or so, sometimes less, so I'm sure it's a problem with my variables/JSON, but I can't for the life of me see where the issues lie.
Can anyone take a quick look and point me in the right direction?
There is a problem with your while($catURL) loop: $catURL is never updated inside it. Your pagination code assigns the next page to $url ($url = $next ? $baseURL . $next->href : null;), but the loop condition tests $catURL, which never changes, so the loop never terminates. (The echo "Indexing: $url" line also reads $url before it is ever set.)
Moreover, you can force PHP to display output in your browser while the script is still running with the flush() command.
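For clarity, a sketch of the corrected loop (same logic as your getLinks(), but the pagination result is assigned back to the variable the while() condition actually tests):
function getLinks($catURL, $prodURL, $baseURL, $next_select) {
$urls = array();
while ($catURL) {
echo "Indexing: $catURL" . PHP_EOL;
$html = str_get_html(curl($catURL));
foreach ($html->find($prodURL) as $el) {
$urls[] = $baseURL . $el->href;
}
// Assign the next page back to $catURL so the loop can terminate
$next = $html->find($next_select, 0);
$catURL = $next ? $baseURL . $next->href : null;
}
return $urls;
}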
I am using cURL via PHP to test service connections, and I'm getting some inconsistent results. When I run the test via PHP & cURL this is my result:
{"response":"\n\n\n\n \n \n
When I put that same URL in my browser I get this:
{"response":"\n<!DOCTYPE html PUBLIC \"-//W3C//DTD XHTML 1.0 Transitional//EN\" \"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd\">\n<html xmlns=\"http://www.w3.org/1999/xhtml\">\n<head>\n <link href=\"/images/global/global.css\...and so on
The response in my browser is cut short, but you get the idea.
With my PHP, I read in a JSON file, parse out the URL I need, and then use cURL to send a GET request. Here is the code that I am using to test the service via PHP:
<?php
include ("serviceURLs.php");
class callService {
function testService($url){
$ch = curl_init($url);
curl_exec($ch);
$info = curl_getinfo($ch);
if ($info['http_code'] == 200){
echo("Test has passed </br>");
}else{
echo("Test Failed.</br> ");
}
var_dump($info);
curl_close($ch);
}
function readFile(){
$myFile = "./service/catalog-adaptation.json";
$fr = fopen($myFile, 'r');
$fileData = fread($fr, filesize($myFile));
$json_a = json_decode($fileData, TRUE);
$prodServer = $json_a['serverRoots']['%SERVER_ROOT']['PROD'];
$demoServer = $json_a['serverRoots']['%SERVER_ROOT']['DEMO'];
$testServer = $json_a['serverRoots']['%SERVER_ROOT']['TEST'];
$testUrls = $json_a['commands'];
foreach($testUrls as $tURL){
$mURL = $tURL['URL'];
if(stripos($mURL, "%")===0){
$testTestService = str_replace("%SERVER_ROOT", $testServer, $mURL);
$testDemoService = str_replace("%SERVER_ROOT", $demoServer, $mURL);
$testProdService = str_replace("%SERVER_ROOT", $prodServer, $mURL);
echo ("Production test: ");
$this->testService($testProdService);
echo ("Demo test: ");
$this->testService($testDemoService);
echo ("Test test: ");
$this->testService($testTestService);
}
}
}
}
$newServiceTest = new callService;
$newServiceTest->readFile();
?>
Can anyone tell me why I am getting different results and how I can fix my code so I get consistent results?
You need to set the option below so that curl_exec() returns the transfer as a string instead of outputting it directly:
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
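Applied to your testService() method, that looks roughly like this (a sketch; the captured body is simply ignored here because you only check the HTTP status code):
function testService($url){
$ch = curl_init($url);
// Capture the response body instead of echoing it to the page
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
$response = curl_exec($ch);
$info = curl_getinfo($ch);
if ($info['http_code'] == 200){
echo("Test has passed <br/>");
}else{
echo("Test Failed.<br/> ");
}
var_dump($info);
curl_close($ch);
}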
I am trying to use the currency exchange rate feeds of the European Central Bank (ECB):
http://www.ecb.int/stats/eurofxref/eurofxref-daily.xml
They have provided documentation on how to parse the XML, but none of the options works for me. I checked that allow_url_fopen=On is set.
http://www.ecb.int/stats/exchange/eurofxref/html/index.en.html
For instance, I used the example below, but it doesn't echo anything and it seems the $XML object is always empty.
<?php
//This is a PHP(5) script example on how eurofxref-daily.xml can be parsed
//Read eurofxref-daily.xml file in memory
//For the next command you will need the config option allow_url_fopen=On (default)
$XML=simplexml_load_file("http://www.ecb.europa.eu/stats/eurofxref/eurofxref-daily.xml");
//the file is updated daily between 2.15 p.m. and 3.00 p.m. CET
foreach($XML->Cube->Cube->Cube as $rate){
//Output the value of 1EUR for a currency code
echo '1€='.$rate["rate"].' '.$rate["currency"].'<br/>';
//--------------------------------------------------
//Here you can add your code for inserting
//$rate["rate"] and $rate["currency"] into your database
//--------------------------------------------------
}
?>
Update:
As I am behind a proxy in my test environment, I tried this, but I still can't read the XML:
function curl($url){
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
$data = curl_exec($ch);
curl_close($ch);
return $data;
}
$data = curl("http://www.ecb.int/stats/eurofxref/eurofxref-daily.xml");
$XML = simplexml_load_string($data);
var_dump($XML); // returns boolean false
Please help me. Thanks!
I didn't find any relevant settings in php.ini. Check with phpinfo() whether you have SimpleXML support and cURL support enabled. (You should have them both, especially SimpleXML: since you're using it and it returns false rather than complaining about a missing function, the extension itself is clearly loaded.)
Proxy might be an issue here. See this and this answer. Using cURL could be an answer to your problem.
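If the proxy turns out to be the blocker, cURL can be pointed at it explicitly. A minimal sketch (the proxy host and port are placeholders for your environment's values):
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, "http://www.ecb.europa.eu/stats/eurofxref/eurofxref-daily.xml");
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
// Hypothetical proxy settings; replace with your real proxy host:port
curl_setopt($ch, CURLOPT_PROXY, "proxy.example.com");
curl_setopt($ch, CURLOPT_PROXYPORT, 8080);
$data = curl_exec($ch);
curl_close($ch);
// Parse the fetched string, not a filename
$XML = simplexml_load_string($data);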
Here's one alternative found here.
$url = file_get_contents('http://www.ecb.europa.eu/stats/eurofxref/eurofxref-daily.xml');
$xml = new SimpleXMLElement($url) ;
//file_put_contents - same as fopen, write, and close in one call
//need to output with asXML() - SimpleXML returns an object based upon the raw xml
file_put_contents(dirname(__FILE__)."/loc.xml", $xml->asXML());
foreach($xml->Cube->Cube->Cube as $rate){
echo '1€='.$rate["rate"].' '.$rate["currency"].'<br/>';
}
This solution works for me:
$data = [];
$url = "http://www.ecb.europa.eu/stats/eurofxref/eurofxref-hist-90d.xml";
$xmlRaw = file_get_contents($url);
$doc = new DOMDocument();
$doc->preserveWhiteSpace = FALSE;
$doc->loadXML($xmlRaw);
$node1 = $doc->getElementsByTagName('Cube')->item(0);
foreach ($node1->childNodes as $node2) {
$value = [];
foreach ($node2->childNodes as $node3) {
$value['date'] = $node2->getAttribute('time');
$value['currency'] = $node3->getAttribute('currency');
$value['rate'] = $node3->getAttribute('rate');
$data[] = $value;
unset($value);
}
}
echo "<pre"> . print_r($data) . "</pre>";
Hey all, I have seen several questions on this topic here, but none of them have solved my problem. I have a script on my site which I want to use to generate several different types of emails to my users. I wanted a way to create template files for the different emails which accept $_POST variables to fill in relevant information, and to simply make a POST request to these templates and get back the response to use as the body of the email.
I am attempting to write a function which accepts the location of the template file (either relative or absolute would work, but I would prefer relative, honestly) and an array of parameters that I would like to send to the template via POST. So far I have had no luck. Here is my code so far:
private function post_request($url, $data) {
$output = array();
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_HEADER, false);
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, $data);
$result = curl_exec($ch);
if ($result) {
$output['status'] = "ok";
$output['content'] = $result;
} else {
$output['status'] = "failure";
// curl_error() must be called before curl_close(), or it returns nothing
$output['error'] = curl_error($ch);
}
curl_close($ch);
return $output;
}
I have been getting the error "couldn't connect to host" from cURL, but after writing my URL to an error log I have verified that copying and pasting the URL into Firefox shows the page correctly.
Any ideas? I am not married to the idea of using curl, so if there is a better option I would be more than happy to use it instead. Thanks for the help all!
You should be able to use file_get_contents() for this, so long as your host has not prevented it from accessing remote locations (and the $url script is not looking exclusively for POST data).
private function post_request($url, $data) {
$output = array();
$url_with_data = '';
foreach ( $data as $k=>$v ){ // Loop through data and create the request string
// Encode each key and value individually so the & and = separators survive
$url_with_data .= '&' . urlencode( $k ) . '=' . urlencode( $v );
}
// Remove the first ampersand
$url_with_data = substr( $url_with_data, 1 );
// Request file
// Format will be http://url.com?var1=data&var2=data&var3=data
$result = file_get_contents( $url . '?' . $url_with_data );
if ($result) {
$output['status'] = "ok";
$output['content'] = $result;
} else {
$output['status'] = "failure";
$output['error'] = 'Could not open remote file';
}
return $output;
}
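If the template really does read from $_POST, file_get_contents() can still send a POST body through a stream context. A sketch using the standard http stream options:
private function post_request($url, $data) {
// Build a POST context; http_build_query() handles the encoding
$options = array(
'http' => array(
'method' => 'POST',
'header' => "Content-Type: application/x-www-form-urlencoded\r\n",
'content' => http_build_query( $data ),
),
);
$context = stream_context_create( $options );
$result = file_get_contents( $url, false, $context );
// ... populate and return $output as above
}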
Another option: You say that both files reside on the same server. If that is the case, you could simply require() the template builder.
private function post_request($url, $data) {
$output = array();
@require_once('./path/to/template_builder.php');
if ($result) {
$output['status'] = "ok";
$output['content'] = $result;
} else {
$output['status'] = "failure";
$output['error'] = 'Could not open remote file';
}
return $output;
}
Then in template_builder.php:
<?php
unset( $result );
if ( is_array( $data ) ){
// Parse $data ...
$result = $email_template;
}
As it turns out, the issue ended up being a server configuration error. The server was timing out while attempting to contact the file because it was hitting the wrong DNS server. Fixing that solved my problem!
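For anyone else chasing a similar "couldn't connect to host" error, a quick way to check whether PHP itself can resolve the host (a hedged diagnostic sketch; example.com stands in for your actual service host):
// gethostbyname() returns the input string unchanged when resolution fails
var_dump(gethostbyname('example.com'));
// cURL also exposes the name-lookup time, which balloons when DNS is broken
$ch = curl_init('http://example.com/');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_exec($ch);
var_dump(curl_getinfo($ch, CURLINFO_NAMELOOKUP_TIME));
curl_close($ch);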