I have been able to parse actual .json files, but I can't seem to parse this link:
http://forecast.weather.gov/MapClick.php?lat=36.321903791028205&lon=-96.80576767853478&FcstType=json
I suspect it's because the link itself is not a .json file but a URL that returns JSON-formatted data, and I am having trouble parsing it, even when I start with...
<?php
$url = "http://forecast.weather.gov/MapClick.php?lat=36.321903791028205&lon=-96.80576767853478&FcstType=json";
$json = file_get_contents($url);
$json_a = json_decode($json,true);
// <---------- Current Conditions ----------> //
//Display Location
$location_full = $json_a['location']['areaDescription'];
?>
And then on the page where I want to display this information, I have:
<?php
require 'req/weatherinfo.php';
?>
<!DOCTYPE html>
<html>
<head>
<title>PawneeTV Weather</title>
</head>
<body>
<p><?php echo $location_full; ?></p>
</body>
</html>
Any ideas why it's generating a blank page? I have cleared the errors, and now it just doesn't display anything. I've done this many times with a .json file source: it works with http://api.wunderground.com/api/43279e1c0b065c2e/forecast/q/OK/Pawnee.json, but it will not work with a link that ends with =json instead of .json.
You cannot use a plain file_get_contents() call in this case: the server appears to reject requests that do not send a browser-like User-Agent header, and file_get_contents() sends none by default.
This code is working:
<?php
$url = "http://forecast.weather.gov/MapClick.php?lat=36.321903791028205&lon=-96.80576767853478&FcstType=json";
// create a cURL handle
$ch = curl_init();
// set the URL
curl_setopt($ch, CURLOPT_URL, $url);
// return the transfer as a string instead of printing it
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
// send a browser-like User-Agent header
curl_setopt($ch, CURLOPT_USERAGENT, 'Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.13) Gecko/20080311 Firefox/2.0.0.13');
// $output contains the response body
$output = curl_exec($ch);
// close the handle to free system resources
curl_close($ch);
$json_a = json_decode($output, true);
// <---------- Current Conditions ----------> //
// Display location
$location_full = $json_a['location']['areaDescription'];
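One hardening step worth adding (my addition, not part of the original answer): json_decode() returns null on malformed input, so check json_last_error() before indexing into the result. The sample payload below is a made-up stand-in for the shape of the MapClick.php response.

```php
<?php
// Hypothetical sample mirroring the shape of the forecast response.
$output = '{"location":{"areaDescription":"Pawnee, OK"}}';

$json_a = json_decode($output, true);
if ($json_a === null && json_last_error() !== JSON_ERROR_NONE) {
    die('JSON decode failed: ' . json_last_error_msg());
}

// Fall back to a default if the key path is missing.
$location_full = isset($json_a['location']['areaDescription'])
    ? $json_a['location']['areaDescription']
    : 'unknown';
echo $location_full;
```

With the real cURL $output in place of the sample string, a decode failure then produces an explicit message instead of a blank page.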
Related
I have solved the problem of downloading the source code of a Google search results page here. Here is the code:
<!DOCTYPE html>
<html>
<body>
<!-- this program saves source code of a website to an external file -->
<!-- the string there for the fake user agent can be found here: http://useragentstring.com/index.php -->
<?php
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, 'https://www.google.com/search?q=blue+car');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_USERAGENT, 'Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:88.0) Gecko/20100101 Firefox/88.0');
$html = curl_exec($ch);
if(empty($html)) {
echo "<pre>cURL request failed:\n".curl_error($ch)."</pre>";
} else {
$myfile = fopen("file.txt", "w") or die("Unable to open file!");
fwrite($myfile, $html);
fclose($myfile);
}
?>
</body>
</html>
Now I wish to have 100 results instead of only 10. Changing my Google search settings has no influence on the code written above. The number-of-results setting is stored somewhere else and is not part of the query string when searching on Google...
Use the num query parameter to specify the number of results returned (&num=xx).
So for your case, please change
curl_setopt($ch, CURLOPT_URL, 'https://www.google.com/search?q=blue+car');
to
curl_setopt($ch, CURLOPT_URL, 'https://www.google.com/search?q=blue+car&num=100');
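If you end up adding more parameters, building the URL from an array with http_build_query() keeps the encoding straight. A quick sketch:

```php
<?php
// Build the search URL from an array instead of concatenating by hand.
// http_build_query() URL-encodes each value (spaces become '+').
$params = array('q' => 'blue car', 'num' => 100);
$url = 'https://www.google.com/search?' . http_build_query($params);
echo $url; // https://www.google.com/search?q=blue+car&num=100
```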
I am making a home automation project with Arduino, and I am using Teleduino to remotely control an LED as a test. I want to take the contents of this link and display them in a PHP page.
<!DOCTYPE html>
<html>
<body>
<?php
include 'simple_html_dom.php';
echo file_get_html('http://us01.proxy.teleduino.org/api/1.0/2560.php?k=202A57E66167ADBDC55A931D3144BE37&r=definePinMode&pin=7&mode=1');
?>
</body>
</html>
The problem is that the function does not return anything.
Is something wrong with my code?
Is there any other function I can use to send a request to a page and get that page in return?
file_get_contents() would normally work here, but the server appears to be protecting its data from scraping, so cURL with a browser User-Agent is a better solution:
<?php
// echo file_get_contents('http://us01.proxy.teleduino.org/api/1.0/2560.php?k=202A57E66167ADBDC55A931D3144BE37&r=definePinMode&pin=7&mode=1');
// create a cURL handle
$ch = curl_init();
// set the URL
curl_setopt($ch, CURLOPT_URL, "http://us01.proxy.teleduino.org/api/1.0/2560.php?k=202A57E66167ADBDC55A931D3144BE37&r=definePinMode&pin=7&mode=1");
// return the transfer as a string
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
// send a browser-like User-Agent header
curl_setopt($ch, CURLOPT_USERAGENT, 'Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.13) Gecko/20080311 Firefox/2.0.0.13');
// $output contains the response body
$output = curl_exec($ch);
echo $output;
// close the handle to free system resources
curl_close($ch);
?>
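One addition I would make here: curl_exec() returns false on transport errors when CURLOPT_RETURNTRANSFER is set, so it's worth checking before echoing. In this sketch the host is deliberately unresolvable (.invalid is reserved by RFC 2606) just to demonstrate the failure path.

```php
<?php
$ch = curl_init();
// .invalid is reserved, so this DNS lookup is guaranteed to fail.
curl_setopt($ch, CURLOPT_URL, 'http://example.invalid/');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_TIMEOUT, 5);
$output = curl_exec($ch);
if ($output === false) {
    // curl_error() describes what went wrong (DNS, timeout, etc.)
    echo 'cURL error: ' . curl_error($ch);
}
curl_close($ch);
```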
I'm using PHP, cURL, and simple_html_dom to get snow data from snowbird.com. The problem is I can't seem to actually find the data I need. I am able to find the parent div and its name, but I can't find the actual snow-info div. Here is my code; below it I'll paste a small part of the output.
<?php
require('simple_html_dom.php');
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, "http://www.snowbird.com/mountain-report/");
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_BINARYTRANSFER, true);
curl_setopt($ch,CURLOPT_USERAGENT,'Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.13) Gecko/20080311 Firefox/2.0.0.13');
$content = curl_exec($ch);
curl_close($ch);
$html = new simple_html_dom();
$html->load($content);
$ret = $html->find('.horizSnowChartText');
$ret = serialize($ret);
$ret3 = new simple_html_dom();
$ret3->load($ret);
$es = $ret3->find('text');
$ret2 = $ret3->find('.total-inches');
print_r($ret2);
//print_r($es);
?>
And here is a picture of the output. You can see it skips the actual snow data and goes right to the inches mark (").
Do note that the HTML markup you're getting has multiple instances of .total-inches (multiple nodes with that class). If you want to fetch one of them explicitly, you can point to it directly using the second argument of ->find().
Example:
$ret2 = $html->find('.total-inches', 3);
// ^
If you want to check them all out, a simple foreach should suffice:
foreach($html->find('.current-conditions .snowfall-total .total-inches') as $in) {
echo $in , "\n";
}
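As an aside, the serialize()/re-load round trip in the question is unnecessary; PHP's built-in DOM extension can do the same class lookup directly. A sketch against a made-up fragment (the real snowbird markup will differ):

```php
<?php
// Made-up stand-in for the snowbird markup; the real page will differ.
$fragment = '<div class="snowfall-total">Total: <span class="total-inches">42</span>"</div>';

$doc = new DOMDocument();
@$doc->loadHTML($fragment); // suppress warnings about the partial document

// XPath equivalent of a .total-inches class selector.
$xpath = new DOMXPath($doc);
$nodes = $xpath->query('//*[contains(concat(" ", normalize-space(@class), " "), " total-inches ")]');
echo $nodes->item(0)->textContent; // 42
```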
I'm struggling to get an array of LS cities... file_get_contents() returns an empty dropdown on their roadblock page that requires you to select a city. Since it's empty, I thought the list was coming from an AJAX request, but looking at the page I don't see any AJAX requests. Then I tried cURL, thinking that simulating a browser might help, but the code below had no effect.
$ch = curl_init("http://www.URL.com/");
curl_setopt($ch, CURLOPT_VERBOSE, true);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
curl_setopt($ch, CURLOPT_USERAGENT, 'Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; .NET CLR 1.0.3705; .NET CLR 1.1.4322)');
$result=curl_exec($ch);
var_dump($result);
Does anyone have any ideas on how I can get a solid list of available areas?
I have found out how they populate the list of cities and created some sample code below you can use.
The list of cities is stored as a JSON string in one of their javascript files, and the list is actually populated from a different javascript file. The names of the files appear to be somewhat random, but the root name remains the same.
An example of the JS file with the city JSON is hXXp://a3.ak.lscdn.net/deals/system/javascripts/bingy-81bf24c3431bcffd317457ce1n434ca9.js The script that populates the list is hXXp://a2.ak.lscdn.net/deals/system/javascripts/confirm_city-81bf24c3431bcffd317457ce1n434ca9.js but for us this is inconsequential.
We need to load their home page with a new curl session, look for the unique javascript URL that is the bingy script and fetch that with curl. Then we need to find the JSON and decode it to PHP so we can use it.
Here is the script I came up with that works for me:
<?php
error_reporting(E_ALL); ini_set('display_errors', 1); // debugging
// set up new curl session with options
$ch = curl_init('http://livingsocial.com');
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_USERAGENT, 'Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.13) Gecko/20101206 Ubuntu/10.10 (maverick) Firefox/3.6.13');
$res = curl_exec($ch); // fetch home page
// regex string to find the bingy javascript file
$matchStr = '/src="(https?:\/\/.*?(?:javascripts)\/bingy-?[^\.]*\.js)"/i';
if (!preg_match($matchStr, $res, $bingyMatch)) {
die('Failed to extract URL of javascript file!');
}
// this js file is now our new url
$url = $bingyMatch[1];
curl_setopt($ch, CURLOPT_URL, $url);
$res = curl_exec($ch); // fetch bingy js
$pos = strpos($res, 'fte_cities'); // search for the fte_cities variable where the list is stored
if ($pos === false) {
die('Failed to locate cities JSON in javascript file!');
}
// find the beginning of the json string, and the end of the line
$startPos = strpos($res, '{', $pos + 1);
$endPos = strpos($res, "\n", $pos + 1);
$json = trim(substr($res, $startPos, $endPos - $startPos)); // snip out the json
if (substr($json, -1) == ';') $json = substr($json, 0, -1); // remove trailing semicolon if present
$places = json_decode($json, true); // decode json to php array
if ($places == null) {
die('Failed to decode JSON string of cities!');
}
// array is structured where each country is a key, and the value is an array of cities
foreach($places as $country => $cities) {
echo "Country: $country<br />\n";
foreach($cities as $city) {
echo ' '
."{$city['name']} - {$city['id']}<br />\n";
}
echo "<br />\n";
}
Some important notes:
If they decide to change the javascript file names, this will fail to work.
If they rename the variable name that holds the cities, this will fail to work.
If they modify the JSON to span multiple lines, this will not work (unlikely, because it would use extra bandwidth).
If they change the structure of the json object, this will not work.
In any case, depending on the change, it may be trivial to get this working again, but it is a potential point of failure. Such changes are also somewhat unlikely, since they would require modifying a number of files and then re-testing.
Hope that helps!
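To make the regex step concrete, here it is run against a hypothetical script tag shaped like the one on the home page (the URL is made up to match the pattern):

```php
<?php
$matchStr = '/src="(https?:\/\/.*?(?:javascripts)\/bingy-?[^\.]*\.js)"/i';

// Hypothetical script tag of the kind the home page contains.
$res = '<script src="http://a3.ak.lscdn.net/deals/system/javascripts/bingy-81bf24c3.js"></script>';

if (preg_match($matchStr, $res, $bingyMatch)) {
    // Capture group 1 holds the full URL of the bingy javascript file.
    echo $bingyMatch[1];
}
```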
Perhaps a bit late, but you don't need to couple to our JavaScript to obtain the cities list. We have an API for that:
https://sites.google.com/a/hungrymachine.com/livingsocial-api/home/cities
header('Content-Type: image/jpeg');
$imageURL = $_POST['url'];
$image = @imagecreatefromstring(@file_get_contents($imageURL));
if (is_resource($image) === true)
    imagejpeg($image, 'NameYouWantGoesHere.jpg');
else
    echo "This image ain't quite cuttin it.";
This is the code I have to convert a url that I receive from an html form into an image. However, whenever I try to display it or take it off the server to look at it, it 'cannot be read' or is 'corrupted'. So for some reason it is converted to an image, recognized as a proper resource, but is not proper image at that point. Any ideas?
imagecreatefromstring() is actually the right function here: file_get_contents() returns the raw binary data, which is exactly what it expects. A more likely culprit is the output step. imagejpeg($image, 'NameYouWantGoesHere.jpg') writes the JPEG to a file on the server and sends nothing to the browser, so the Content-Type: image/jpeg header is answered with an empty body, which browsers report as a broken image. Call imagejpeg($image) with no filename to stream it to the browser, or drop the header if you only want to save the file.
Alternatively, imagecreatefromjpeg() accepts a filename or URL directly, so you could try $image = @imagecreatefromjpeg($imageURL); (assuming the original is a JPEG) and see if you like the results better.
You can use cURL to fetch the remote file:
$ch = curl_init();
// set the url to fetch
curl_setopt($ch, CURLOPT_URL, 'http://www.google.com/img.jpg');
// don't give me the headers just the content
curl_setopt($ch, CURLOPT_HEADER, 0);
// return the value instead of printing the response to browser
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
// use a user agent to mimic a browser
curl_setopt($ch, CURLOPT_USERAGENT, 'Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.7.5) Gecko/20041107 Firefox/1.0');
$content = curl_exec($ch);
// remember to always close the session and free all resources
curl_close($ch);
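Before saving $content, it's worth validating that the bytes really are an image; imagecreatefromstring() returns false otherwise. In this sketch a tiny JPEG is generated in memory with GD to stand in for the downloaded bytes, so it runs without a network call:

```php
<?php
// Generate a 1x1 JPEG in memory as a stand-in for cURL's $content.
$im = imagecreatetruecolor(1, 1);
ob_start();
imagejpeg($im);
$content = ob_get_clean();

// false means the bytes were not a recognizable image format.
$check = imagecreatefromstring($content);
if ($check !== false) {
    file_put_contents('remote.jpg', $content); // safe to save as-is
}
```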