I'm using the following code in an attempt to get a public LinkedIn company page into a variable, but it always returns LinkedIn's 404 "page not found". Any idea where I'm going wrong?
$html = get_web_page('https://www.linkedin.com/company/google/');
echo stripos( $html['content'], 'occludable-update' );
echo $html['content'];
function get_web_page( $url )
{
$user_agent='Mozilla/5.0 (Windows NT 6.1; rv:8.0) Gecko/20100101 Firefox/8.0';
$options = array(
CURLOPT_CUSTOMREQUEST =>"GET", //set request type post or get
CURLOPT_POST =>false, //set to GET
CURLOPT_USERAGENT => $user_agent, //set user agent
CURLOPT_COOKIEFILE =>"cookie.txt", //set cookie file
CURLOPT_COOKIEJAR =>"cookie.txt", //set cookie jar
CURLOPT_RETURNTRANSFER => true, // return web page
CURLOPT_HEADER => false, // don't return headers
CURLOPT_FOLLOWLOCATION => true, // follow redirects
CURLOPT_ENCODING => "", // handle all encodings
CURLOPT_AUTOREFERER => true, // set referer on redirect
CURLOPT_CONNECTTIMEOUT => 120, // timeout on connect
CURLOPT_TIMEOUT => 120, // timeout on response
CURLOPT_MAXREDIRS => 10, // stop after 10 redirects
);
$ch = curl_init( $url );
curl_setopt_array( $ch, $options );
$content = curl_exec( $ch );
$err = curl_errno( $ch );
$errmsg = curl_error( $ch );
$header = curl_getinfo( $ch );
curl_close( $ch );
$header['errno'] = $err;
$header['errmsg'] = $errmsg;
$header['content'] = $content;
return $header;
}
They must have some kind of scraping protection in place. If you fetch the page with curl from the CLI you can see that it just returns a bit of JavaScript code:
$ curl https://www.linkedin.com/company/google/
<html><head>
<script type="text/javascript">
window.onload = function() {
// Parse the tracking code from cookies.
var trk = "bf";
var trkInfo = "bf";
var cookies = document.cookie.split("; ");
for (var i = 0; i < cookies.length; ++i) {
if ((cookies[i].indexOf("trkCode=") == 0) && (cookies[i].length > 8)) {
trk = cookies[i].substring(8);
}
else if ((cookies[i].indexOf("trkInfo=") == 0) && (cookies[i].length > 8)) {
trkInfo = cookies[i].substring(8);
}
}
if (window.location.protocol == "http:") {
// If "sl" cookie is set, redirect to https.
for (var i = 0; i < cookies.length; ++i) {
if ((cookies[i].indexOf("sl=") == 0) && (cookies[i].length > 3)) {
window.location.href = "https:" + window.location.href.substring(window.location.protocol.length);
return;
}
}
}
// Get the new domain. For international domains such as
// fr.linkedin.com, we convert it to www.linkedin.com
var domain = "www.linkedin.com";
if (domain != location.host) {
var subdomainIndex = location.host.indexOf(".linkedin");
if (subdomainIndex != -1) {
domain = "www" + location.host.substring(subdomainIndex);
}
}
window.location.href = "https://" + domain + "/authwall?trk=" + trk + "&trkInfo=" + trkInfo +
"&originalReferer=" + document.referrer.substr(0, 200) +
"&sessionRedirect=" + encodeURIComponent(window.location.href);
}
</script>
</head></html>
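If you want to detect this from PHP rather than from the CLI, a minimal check against the output of the get_web_page() function above could look like this (a sketch only; it tests for the "authwall" marker that appears in the redirect URL shown above, plus the non-200 status you are already seeing):
$html = get_web_page('https://www.linkedin.com/company/google/');

// LinkedIn answers with a 404 and/or this stub page that bounces the browser
// to /authwall, so test for both before trying to parse anything.
if ($html['http_code'] != 200 || stripos($html['content'], 'authwall') !== false) {
    echo 'Blocked by LinkedIn (HTTP ' . $html['http_code'] . '); the company page cannot be fetched anonymously.';
} else {
    echo stripos($html['content'], 'occludable-update');
}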
Currently I can fetch products up to Shopify's API limit of 250 products per call and echo them out. I have done some research and found that I need to paginate the request based on the overall product count of the store [5000 products / 250 products per page = 20 pages].
I want to get all products in Shopify, so I tried the code below, but I cannot get all the products: the result is always 'error.....'. What is the problem?
$pages = ceil($products_cnt->count / 250); // Count products / 250
for ($i = 0; $i < $pages; $i++) {
    $api_url = 'https://apikey:password@store.myshopify.com';
    $get_url = $api_url . '/admin/products.json?limit=250&page=' . ($i + 1);
    $products_content = @file_get_contents($get_url);
    if (!empty($products_all)) {
        print_r('ok');
    } else {
        print_r('error.....');
    }
    $products_json = json_decode($products_content, true);
    $products = $products_json['products'];
}
I guess you have a problem with the Shopify API rate limit, but to be sure of this you need to check the response from the Shopify API. For the HTTP request it is better to use cURL or an HTTP client package, for example Guzzle.
Instead of @file_get_contents($get_url), try this code:
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $get_url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 30);
$products_content = curl_exec($ch);
if (curl_errno($ch)) {
    print_r('Curl error. ' . curl_error($ch));
}
$status_code = curl_getinfo($ch, CURLINFO_HTTP_CODE);
if (in_array($status_code, [200, 201])) {
    print_r('ok');
} else {
    print_r(
        'Shopify API error. ' .
        'HTTP Code: ' . $status_code . '; ' .
        'Error: ' . $products_content
    );
}
curl_close($ch);
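For completeness, the Guzzle alternative mentioned above could look roughly like the sketch below. It assumes guzzlehttp/guzzle is installed via Composer, that the store URL and API credentials are placeholders to be replaced with your own, and that $i is the zero-based page index from your loop:
require 'vendor/autoload.php'; // composer require guzzlehttp/guzzle

use GuzzleHttp\Client;
use GuzzleHttp\Exception\RequestException;

$i = 0; // zero-based page index, as in the loop above

$client = new Client([
    'base_uri' => 'https://store.myshopify.com', // placeholder store URL
    'auth'     => ['apikey', 'password'],        // HTTP Basic auth credentials
]);

try {
    $response = $client->get('/admin/products.json', [
        'query' => ['limit' => 250, 'page' => $i + 1],
    ]);
    $products_json = json_decode($response->getBody()->getContents(), true);
    $products = $products_json['products'];
    print_r('ok');
} catch (RequestException $e) {
    // Connection failures and non-2xx responses both end up here by default.
    print_r('Shopify API error. ' . $e->getMessage());
}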
The pagination method you are trying to use has been deprecated. Shopify introduced cursor-based pagination in API version 2019-07; to read more, head over to the Shopify docs for cursor-based pagination. It is better to use a PHP library that offers rate limiting and other conveniences, but a sample implementation using cURL would look something like the code below. Check the code comments for details.
<?php
// username and password for API
$username = "";
$password = "";
$nextPage = NULL;

$curl = curl_init();

// set result limit and Basic auth
curl_setopt_array(
    $curl,
    array(
        CURLOPT_URL => "https://store-name.myshopify.com/admin/api/2020-07/products.json?limit=50",
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_ENCODING => "",
        CURLOPT_MAXREDIRS => 10,
        CURLOPT_TIMEOUT => 0,
        CURLOPT_FOLLOWLOCATION => true,
        CURLOPT_HTTP_VERSION => CURL_HTTP_VERSION_1_1,
        CURLOPT_CUSTOMREQUEST => "GET",
        CURLOPT_USERPWD => $username . ":" . $password
    )
);

// callback function to parse headers and get the next page link
curl_setopt(
    $curl,
    CURLOPT_HEADERFUNCTION,
    function ($curl, $header) use (&$nextPage) {
        $len = strlen($header);
        $header = explode(':', $header, 2);
        if (count($header) < 2) { // ignore invalid headers
            return $len;
        }
        if (trim($header[0]) === "Link" && strpos($header[1], 'next') !== false) {
            $links = explode(',', $header[1], 2);
            $link = count($links) === 2 ? $links[1] : $links[0];
            if (preg_match('/<(.*?)>/', $link, $match) === 1) $nextPage = $match[1];
        }
        return $len;
    }
);

// First request
$response = curl_exec($curl);
if (curl_errno($curl)) {
    $error_msg = curl_error($curl);
    print_r($error_msg);
}
$parsedResponse = json_decode($response);
$result = $parsedResponse->products;

// generate new requests till a next page is available
while ($nextPage !== NULL) {
    curl_setopt($curl, CURLOPT_URL, $nextPage);
    $parsedResponse->products = [];
    $nextPage = NULL;
    $response = curl_exec($curl);
    $parsedResponse = json_decode($response);
    if (curl_errno($curl)) {
        $error_msg = curl_error($curl);
    } else {
        $result = array_merge($result, $parsedResponse->products);
        sleep(2);
    }
}

echo "Products Count: ";
echo count($result);
curl_close($curl);
The response header parsing function is by Geoffrey.
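For reference, the Link header that the callback parses looks something like the sample below (the page_info token here is made up; a real one is an opaque cursor from Shopify). You can sanity-check the parsing logic against such a string in isolation:
// Illustrative header line only - a real page_info value comes from Shopify.
$sample = 'Link: <https://store-name.myshopify.com/admin/api/2020-07/products.json?limit=50&page_info=abc123>; rel="next"';

$header = explode(':', $sample, 2);
if (trim($header[0]) === "Link" && strpos($header[1], 'next') !== false) {
    $links = explode(',', $header[1], 2);
    $link  = count($links) === 2 ? $links[1] : $links[0];
    if (preg_match('/<(.*?)>/', $link, $match) === 1) {
        echo $match[1]; // the URL to request for the next page
    }
}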
This question is marked as answered in this thread:
How to POST an XML file using cURL on php?
But that answer isn't really the correct answer in my opinion, since it only shows how to send XML code with cURL. I need to send an XML file.
Basically, I need this C# code to be converted to PHP:
public Guid? UploadXmlFile()
{
var fileUploadClient = new WebClient();
fileUploadClient.Headers.Add("Content-Type", "application/xml");
fileUploadClient.Headers.Add("Authorization", "api " + ApiKey);
var rawResponse = fileUploadClient.UploadFile(Url, FilePath);
var stringResponse = Encoding.ASCII.GetString(rawResponse);
var jsonResponse = JObject.Parse(stringResponse);
if (jsonResponse != null)
{
var importFileId = jsonResponse.GetValue("ImportId");
if (importFileId != null)
{
return new Guid(importFileId.ToString());
}
}
return null;
}
I have tried several approaches and this is my latest attempt.
The cURL call:
/**
* CDON API Call
*
*/
function cdon_api($way, $method, $postfields=false, $contenttype=false)
{
global $api_key;
$contenttype = (!$contenttype) ? 'application/x-www-form-urlencoded' : $contenttype;
$curlOpts = array(
CURLOPT_URL => 'https://admin.marketplace.cdon.com/api/'.$method,
CURLOPT_RETURNTRANSFER => TRUE,
CURLOPT_TIMEOUT => 60,
CURLOPT_HTTPHEADER => array('Authorization: api '.$api_key, 'Content-type: '.$contenttype, 'Accept: application/xml')
);
if ($way == 'post')
{
$curlOpts[CURLOPT_POST] = TRUE;
}
elseif ($way == 'put')
{
$curlOpts[CURLOPT_PUT] = TRUE;
}
if ($postfields !== false)
{
$curlOpts[CURLOPT_POSTFIELDS] = $postfields;
}
# make the call
$ch = curl_init();
curl_setopt_array($ch, $curlOpts);
$response = curl_exec($ch);
curl_close($ch);
return $response;
}
The File Export:
/**
* Export products
*
*/
function cdon_export()
{
global $api_key;
$upload_dir = wp_upload_dir();
$filepath = $upload_dir['basedir'] . '/cdon-feed.xml';
$response = cdon_api('post', 'importfile', array('uploaded_file' => '@/'.realpath($filepath).';type=text/xml'), 'multipart/form-data');
echo '<br>Response 1: <pre>'.print_r(json_decode($response), true).'</pre><br>';
$data = json_decode($response, true);
if (!empty($data['ImportId']))
{
$response = cdon_api('put', 'importfile?importFileId='.$data['ImportId'], false, 'text/xml');
echo 'Response 2: <pre>'.print_r(json_decode($response), true).'</pre><br>';
$data = json_decode($response, true);
}
}
But the output I get is this:
Response 1:
stdClass Object
(
[Message] => The request does not contain a valid media type.
)
I have experimented with different content types in the different places (application/xml, multipart/form-data and text/xml), but nothing works.
What do I need to do to make it work? How do I manage to send the XML file with cURL?
To me, it looks like the C# code just does the equivalent of:
function UploadXmlFile(string $url, string $ApiKey, string $filepath): ?string {
    $ch = curl_init($url);
    curl_setopt_array($ch, array(
        CURLOPT_POST => 1,
        CURLOPT_HTTPHEADER => array(
            "Content-Type: application/xml",
            "Authorization: api " . $ApiKey
        ),
        CURLOPT_POSTFIELDS => file_get_contents($filepath), // send the raw file contents as the body
        CURLOPT_RETURNTRANSFER => true
    ));
    $jsonResponse = json_decode(($response = curl_exec($ch)));
    curl_close($ch);
    return $jsonResponse->ImportId ?? NULL;
}
There is at least one difference, though: your PHP code adds the header 'Accept: application/xml', while your C# code does not.
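Calling it could then look something like this (the endpoint is the base URL plus 'importfile', as built by your cdon_api() function; the key and file path are the same values your plugin already uses):
$upload_dir = wp_upload_dir();
$importId = UploadXmlFile(
    'https://admin.marketplace.cdon.com/api/importfile',
    $api_key,
    $upload_dir['basedir'] . '/cdon-feed.xml'
);

if ($importId !== null) {
    echo 'ImportId: ' . $importId;
}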
I need to scrape an ASP website using cURL. My hosting does not allow me to turn off safe_mode or open_basedir, which is why CURLOPT_FOLLOWLOCATION cannot be activated (it throws the error "CURLOPT_FOLLOWLOCATION cannot be activated when an open_basedir is set").
I tried to implement some workarounds, but after several unlucky days I'm starting to get desperate. I am wondering how to change the code below to do manual redirection instead of using CURLOPT_FOLLOWLOCATION:
include_once __DIR__.'/simple_html_dom.php';
define('COOKIE_FILE', __DIR__.'/cookie.txt');
@unlink(COOKIE_FILE); //clear cookies before we start
define('CURL_LOG_FILE', __DIR__.'/request.txt');
@unlink(CURL_LOG_FILE); //clear curl log
class ASPBrowser {
public $exclude = array();
public $lastUrl = '';
public $dom = false;
/** Get simplehtmldom object from url
 * @param $url
 * @param $post
 * @return bool|simple_html_dom
 */
public function getDom($url, $post = false) {
$f = fopen(CURL_LOG_FILE, 'a+'); // curl session log file
if($this->lastUrl) $header[] = "Referer: {$this->lastUrl}";
$curlOptions = array(
CURLOPT_ENCODING => 'gzip,deflate',
CURLOPT_AUTOREFERER => 1,
CURLOPT_CONNECTTIMEOUT => 120, // timeout on connect
CURLOPT_TIMEOUT => 120, // timeout on response
CURLOPT_URL => $url,
CURLOPT_SSL_VERIFYPEER => false,
CURLOPT_SSL_VERIFYHOST => false,
CURLOPT_FOLLOWLOCATION => true,
CURLOPT_MAXREDIRS => 9,
CURLOPT_RETURNTRANSFER => 1,
CURLOPT_HEADER => 0,
CURLOPT_USERAGENT => "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.95 Safari/537.36",
CURLOPT_COOKIEFILE => COOKIE_FILE,
CURLOPT_COOKIEJAR => COOKIE_FILE,
CURLOPT_STDERR => $f, // log session
CURLOPT_VERBOSE => true,
);
if($post) { // add post options
$curlOptions[CURLOPT_POSTFIELDS] = $post;
$curlOptions[CURLOPT_POST] = true;
}
$curl = curl_init();
curl_setopt_array($curl, $curlOptions);
$data = curl_exec($curl);
$this->lastUrl = curl_getinfo($curl, CURLINFO_EFFECTIVE_URL); // get url we've been redirected to
curl_close($curl);
if($this->dom) {
$this->dom->clear();
$this->dom = false;
}
$dom = $this->dom = str_get_html($data);
fwrite($f, "{$post}\n\n");
fwrite($f, "-----------------------------------------------------------\n\n");
fclose($f);
return $dom;
}
function createASPPostParams($dom, array $params) {
$postData = $dom->find('input,select,textarea');
$postFields = array();
foreach($postData as $d) {
$name = $d->name;
if(trim($name) == '' || in_array($name, $this->exclude)) continue;
$value = isset($params[$name]) ? $params[$name] : $d->value;
$postFields[] = rawurlencode($name).'='.rawurlencode($value);
}
$postFields = implode('&', $postFields);
return $postFields;
}
function doPostRequest($url, array $params) {
$post = $this->createASPPostParams($this->dom, $params);
return $this->getDom($url, $post);
}
function doPostBack($url, $eventTarget, $eventArgument = '') {
return $this->doPostRequest($url, array(
'__EVENTTARGET' => $eventTarget,
'__EVENTARGUMENT' => $eventArgument
));
}
function doGetRequest($url) {
return $this->getDom($url);
}
}
(Credits: Andrey http://256cats.com/scraping-asp-websites-php-dopostback-ajax-emulation/)
You're probably looking for the CURLINFO_REDIRECT_URL info variable, as it returns the URL the transfer would otherwise have redirected to, had you allowed it. It was added in PHP 5.3.7.
Note that the exact 3xx response code also affects whether the HTTP request method is supposed to change when you follow a redirect. See the details in the HTTP spec, RFC 7231 section 6.4.
See also the libcurl docs for CURLINFO_REDIRECT_URL.
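Put together, a manual redirect loop built on CURLINFO_REDIRECT_URL could look roughly like the sketch below. It re-issues 301/302/303 as GET (the common browser behaviour; adjust per the RFC section cited above if you need something stricter) and caps the number of hops the same way CURLOPT_MAXREDIRS would:
function curl_exec_follow($ch, $maxRedirects = 10) {
    // FOLLOWLOCATION stays off, so the open_basedir restriction is not triggered.
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, false);

    for ($i = 0; $i <= $maxRedirects; $i++) {
        $data = curl_exec($ch);
        $code = curl_getinfo($ch, CURLINFO_HTTP_CODE);
        $next = curl_getinfo($ch, CURLINFO_REDIRECT_URL);

        // Not a redirect, or no Location to follow: we are done.
        if ($code < 300 || $code >= 400 || !$next) {
            return $data;
        }

        // 301/302/303 are normally re-requested as GET; 307/308 keep the original method.
        if (in_array($code, array(301, 302, 303))) {
            curl_setopt($ch, CURLOPT_HTTPGET, true);
        }
        curl_setopt($ch, CURLOPT_URL, $next);
    }
    return false; // gave up: too many redirects
}
In the ASPBrowser::getDom() method above you could then drop CURLOPT_FOLLOWLOCATION and CURLOPT_MAXREDIRS from the options array and call curl_exec_follow($curl) instead of curl_exec($curl).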
I have a variable containing multiple single quotes and want to extract a string from it.
My code is:
$image['src'] = addslashes($image['src']);
preg_match('~src=["|\'](.*?)["|\']~', $image['src'], $matches);
$image['src'] = $matches[1];
$image['src'] contains this string:
tooltip_html(this, '<div style="display: block; width: 262px"><img src="https://url.com/var/galerie/15773_262.jpg"/></div>');
I thought everything would be right, but $image['src'] ends up null. The addslashes call works fine and returns this:
tooltip_html(this, \'<div style="display: block; width: 262px"><img src="https://url.com/var/galerie/15773_262.jpg"/></div>\');
I don't see the problem here; did I miss something?
=====UPDATE======
The whole code:
<?php
error_reporting(E_ALL);
header("Content-Type: application/json", true);
define('SITE', 'https://akipa-autohandel.autrado.de/');
include_once('simple_html_dom.php');
/**
* Create CDATA-Method for XML Output
*/
class SimpleXMLExtended extends SimpleXMLElement {
public function addCData($cdata_text) {
$node = dom_import_simplexml($this);
$no = $node->ownerDocument;
$node->appendChild($no->createCDATASection($cdata_text));
}
}
/**
* Get a web file (HTML, XHTML, XML, image, etc.) from a URL. Return an
* array containing the HTTP server response header fields and content.
*/
function get_web_page( $url ) {
$user_agent='Mozilla/5.0 (Windows NT 6.1; rv:8.0) Gecko/20100101 Firefox/8.0';
$options = array(
CURLOPT_CUSTOMREQUEST =>"GET", //set request type post or get
CURLOPT_POST =>false, //set to GET
CURLOPT_USERAGENT => $user_agent, //set user agent
CURLOPT_COOKIEFILE =>"cookie.txt", //set cookie file
CURLOPT_COOKIEJAR =>"cookie.txt", //set cookie jar
CURLOPT_RETURNTRANSFER => true, // return web page
CURLOPT_HEADER => false, // don't return headers
CURLOPT_FOLLOWLOCATION => true, // follow redirects
CURLOPT_ENCODING => "", // handle all encodings
CURLOPT_AUTOREFERER => true, // set referer on redirect
CURLOPT_CONNECTTIMEOUT => 120, // timeout on connect
CURLOPT_TIMEOUT => 120, // timeout on response
CURLOPT_MAXREDIRS => 10, // stop after 10 redirects
);
$ch = curl_init( $url );
curl_setopt_array( $ch, $options );
$content = curl_exec( $ch );
$err = curl_errno( $ch );
$errmsg = curl_error( $ch );
$header = curl_getinfo( $ch );
if($content === FALSE) {
// when output is false it can't be used in str_get_html()
// output a proper error message in such cases
echo 'output error';
die(curl_error($ch));
}
curl_close( $ch );
$header['errno'] = $err;
$header['errmsg'] = $errmsg;
$header['content'] = $content;
return $header;
}
function renderPage( $uri ) {
$rendering = get_web_page( $uri );
if ( $rendering['errno'] != 0 )
echo 'bad url, timeout, redirect loop';
if ( $rendering['http_code'] != 200 )
echo 'no page, no permissions, no service';
$content = $rendering['content'];
if(!empty($content)) {
$parsing = str_get_html($content);
}
return $parsing;
}
/**
* Get all current car data of the selected autrado site
*/
function models() {
$paramURI = SITE . 'schnellsuche.php?suche_hersteller=14&suche_modell=&suche_from=form&suche_action=suche&itemsperpage=500';
$content = renderPage($paramURI);
foreach ($content->find('tr[class*=fahrzeugliste]') as $auto) {
$item['src'] = $auto->find('a[onmouseover]', 0)->onmouseover;
preg_match('~src=["\'](.*?)["\']~', $item['src'], $matches);
echo $matches[1];
}
}
if(isset($_POST['action']) && !empty($_POST['action'])) {
$action = $_POST['action'];
if((string) $action == 'test') {
$output = models();
json_encode($output);
}
}
?>
The content of $image['src'] is not what you wrote above. I've now run your script and the content is:
tooltip_html(this, '<div style="display: block; width: 262px"><img src="http://server12.autrado.de/autradogalerie_copy/var/galerie/127915_262.jpg" /></div>');
It will work if you add the following line before the preg_match:
$item['src']= html_entity_decode($item['src']);
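A quick way to see why this helps: the attribute value comes back from simple_html_dom with its inner double quotes still HTML-encoded, so the regex never sees a closing quote. The entity-encoded string below is an assumption about what the parser returns, used only to illustrate the effect:
// Roughly what the onmouseover attribute looks like before decoding (assumed):
$raw = 'tooltip_html(this, \'<div style=&quot;display: block; width: 262px&quot;><img src=&quot;https://url.com/var/galerie/15773_262.jpg&quot;/></div>\');';

preg_match('~src=["\'](.*?)["\']~', $raw, $m);
var_dump($m); // array(0) - no match, the quotes are still &quot; entities

$decoded = html_entity_decode($raw);
preg_match('~src=["\'](.*?)["\']~', $decoded, $m);
var_dump($m[1]); // "https://url.com/var/galerie/15773_262.jpg"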
I'm stuck on this. I'm trying to pull dynamically generated JSON data from a remote server. Here is just the URL to generate the JSON:
https://www.librarything.com/api_getdata.php?userid=jtodd1973&key=1962548278&max=1&responseType=json
I am able to access the data fine using jQuery/AJAX. Here's the code I'm using on jtodd.info/librarything.php:
<div id="booklist">
<table id="tbl_books" style="width:80%; border: thin solid black;">
<tr style="border: thin solid black; background-color: #666; color: #fff;">
<th style="width:40%;">Title</th>
<th style="width:30%;">Author</th>
<th style="width:10%;">Rating</th>
<th style="width:20%;">Reading dates</th>
</tr>
</table>
</div>
<script type="text/javascript">
$(document).ready(function () {
$.ajax({
type:'POST',
callback: 'callback',
crossDomain: true,
contentType: 'application/json; charset=utf-8',
dataType:'JSONP',
beforeSend: function setHeader(xhr){ xhr.setRequestHeader('accept', 'application/json'); },
url:'https://www.librarything.com/api_getdata.php?userid=jtodd1973&key=1962548278&booksort=title&showTags=1&showCollections=1&showDates=1&showRatings=1&max=1000',
success:function(data) {
x = 0;
var data1 = JSON.stringify(data);
var data2 = JSON.parse(data1);
$.each(data2.books, function(i,book){
var date1 = Number(1420027199);
var date2 = Number(book.entry_stamp);
if (date2 > date1) {
x = x + 1;
var testTitle = book.title;
var n = testTitle.indexOf(" (");
if(n > -1) {
var bookTitle = testTitle.substr(0, n);
} else {
var bookTitle = testTitle;
}
var bookAuthor = book.author_lf;
var bookRating = book.rating;
if(x % 2 == 0){
var rowColor = "#fff";
} else {
var rowColor = "#ccc";
}
$('#booklist table').append('<tr style="background-color:' + rowColor + ';">' +
'<td style="font-style: italic;">' + bookTitle +
'</td><td>' + bookAuthor +
'</td><td style="text-align: center;">' + bookRating +
'</td><td> ' +
'</td></tr>');
}
});
},
error:function() {
alert("Sorry, I can't get the feed");
}
});
});
</script>
However, I am not able to access the data using PHP & cURL. I'm getting no response from the server. More specifically, I get Error number 7 / HTTP code 0. Here's the code I am using on jtodd.info/librarything2.php:
<?php
$url = 'https://www.librarything.com/api_getdata.php?userid=jtodd1973&key=1962548278&max=1&responseType=json';
$result = get_web_page( $url );
if ( $result['errno'] != 0 )
echo "<p>Error number = " . $result['errno'] . "</p>";
if ( $result['http_code'] != 200 )
echo "<p>HTTP code = " . $result['http_code'] . "</p>";
$page = $result['content'];
echo "<pre>" . $page . "</pre>";
function get_web_page( $url ) {
if(!function_exists("curl_init")) die("cURL extension is not installed");
$ch = curl_init();
$options = array(
CURLOPT_URL => $url,
CURLOPT_RETURNTRANSFER => true, // return web page
CURLOPT_HEADER => true, // include response headers in the output
CURLOPT_FOLLOWLOCATION => true, // follow redirects
CURLOPT_ENCODING => "", // handle all encodings
CURLOPT_USERAGENT => "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.2 (KHTML, like Gecko) Chrome/22.0.1216.0 Safari/537.2", // who am i
CURLOPT_AUTOREFERER => true, // set referer on redirect
CURLOPT_CONNECTTIMEOUT => 120, // timeout on connect
CURLOPT_TIMEOUT => 120, // timeout on response
CURLOPT_MAXREDIRS => 10, // stop after 10 redirects
CURLOPT_SSL_VERIFYPEER => false // Disabled SSL Cert checks
);
curl_setopt_array( $ch, $options );
$content = curl_exec( $ch );
$err = curl_errno( $ch );
$errmsg = curl_error( $ch );
$header = curl_getinfo( $ch );
curl_close( $ch );
$header['errno'] = $err;
$header['errmsg'] = $errmsg;
$header['content'] = $content;
return $header;
}
?>
Thanks for any advice.
I've just tried your code - and this is the response I get:
HTTP/1.1 200 OK
Server: nginx
Date: Wed, 11 Feb 2015 20:53:34 GMT
Content-Type: application/json
Content-Length: 1102
Connection: keep-alive
Set-Cookie: cookie_from=deleted; expires=Thu, 01-Jan-1970 00:00:01 GMT; path=/
Set-Cookie: LTAnonSessionID=572521187; expires=Wed, 10-Feb-2016 20:53:34 GMT; path=/
lt-backend: 192.168.0.101:80
{"settings":{"amazonchoice":null,"show":{"showCovers":null,"showAuthors":null,"showTitles":null,"showRatings":null,"showDates":null,"showReviews":null,"showTags":null,"showCollections":null},"style":null,"title":null,"titleLink":null,"theuser":"jtodd1973","powered":"Powered by ","uniqueKey":null,"bookcount":1,"showWhat":null,"nullSetMsg":"No books found.","notEnoughImagesMsg":"Not enough books found.","domain":"www.librarything.com","textsnippets":{"by":"by","Tagged":"Tagged","readreview":"read review","stars":"stars"}},"books":{"112016118":{"book_id":"112016118","title":"As I lay dying : the corrected text","author_lf":"Faulkner, William","author_fl":"William Faulkner","author_code":"faulknerwilliam","ISBN":"067973225X","ISBN_cleaned":"067973225X","publicationdate":"1990","entry_stamp":"1409083726","entry_date":"Aug 26, 2014","copies":"1","rating":5,"language_main":"","language_secondary":"","language_original":"","hasreview":"0","dateacquired_stamp":"0","dateacquired_date":"Dec 31, 1969","cover":"https:\/\/images-na.ssl-images-amazon.com\/images\/P\/067973225X.01._SCLZZZZZZZ_.jpg"}}}
In other words, the problem is not in your PHP code - you need to look further (for example, find out whether your IP is blocked for some reason).
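cURL error 7 is CURLE_COULDNT_CONNECT with HTTP code 0, i.e. the TCP connection to www.librarything.com never succeeds from your host. To gather evidence from the server the script runs on, you could temporarily enable verbose logging, as in this sketch (the log file path is just an example):
// Quick connectivity probe from the same server (hypothetical log path).
$ch  = curl_init('https://www.librarything.com/api_getdata.php?userid=jtodd1973&key=1962548278&max=1&responseType=json');
$log = fopen(__DIR__ . '/curl_debug.txt', 'a');

curl_setopt_array($ch, array(
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_CONNECTTIMEOUT => 15,   // fail fast instead of hanging for 120s
    CURLOPT_VERBOSE        => true, // write connection/handshake details ...
    CURLOPT_STDERR         => $log, // ... to curl_debug.txt
));

$body = curl_exec($ch);
printf("errno=%d (%s), http_code=%d\n",
    curl_errno($ch), curl_error($ch),
    curl_getinfo($ch, CURLINFO_HTTP_CODE));
curl_close($ch);
fclose($log);
If the verbose log stops at the connect phase, the block is at the network level (DNS, firewall, or the remote side rejecting your server's IP), exactly as suggested above.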