2 JSON APIs, only 1 works with json_decode in PHP

I'm seriously getting gray hair over this.
I would like to echo the [ask] data from https://api.gdax.com/products/btc-usd/ticker/
But it returns null.
When I use another API with almost the same JSON, it works perfectly.
This example works
<?php
$url = "https://api.bitfinex.com/v1/ticker/btcusd";
$json = json_decode(file_get_contents($url), true);
$ask = $json["ask"];
echo $ask;
This example returns null
<?php
$url = "https://api.gdax.com/products/btc-usd/ticker/";
$json = json_decode(file_get_contents($url), true);
$ask = $json["ask"];
echo $ask;
Does anybody have a good explanation of what's wrong with the code that returns null?

The server behind the null result is blocking PHP's default user agent and returning an HTTP 400 error. You need to specify a user_agent value in your HTTP request.
e.g.
$ua = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/59.0.3071.115 Safari/537.36';
$options = array('http' => array('user_agent' => $ua));
$context = stream_context_create($options);
$url = "https://api.gdax.com/products/btc-usd/ticker/";
$json = json_decode(file_get_contents($url, false, $context), true);
$ask = $json["ask"];
echo $ask;
You can use any user_agent string you want in the $ua variable, as long as the target server allows it.
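If you are not sure whether the request or the decode step is failing, you can check both explicitly. A quick diagnostic sketch, reusing the $url and $context from above together with PHP's standard error helpers:
$raw = file_get_contents($url, false, $context);
if ($raw === false) {
    // the HTTP request itself failed (blocked agent, 4xx/5xx, DNS, ...)
    die('Request failed: ' . print_r(error_get_last(), true));
}
$json = json_decode($raw, true);
if ($json === null) {
    // a body came back, but it is not valid JSON
    die('Decode failed: ' . json_last_error_msg());
}
echo $json["ask"];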

You can't access this URL without sending the right headers. This happens sometimes when the host checks where the request comes from.
$ch = curl_init();
// cURL builds the request line and Host header itself
$header = array(
    'Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    'Accept-Language: en-US,en;q=0.8',
    'Cache-Control: max-age=0',
    'Connection: keep-alive',
    'User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/27.0.1453.116 Safari/537.36',
);
curl_setopt($ch, CURLOPT_URL, "https://api.gdax.com/products/btc-usd/ticker/");
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 0);
curl_setopt($ch, CURLOPT_HTTPHEADER, $header);
$result = curl_exec($ch);
Then you can call json_decode() on $result.
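For completeness, the decode step with a minimal error check might look like this (a sketch; it assumes the cURL call above succeeded and that the response carries the "ask" field from the question):
if ($result === false) {
    die('cURL error: ' . curl_error($ch));
}
curl_close($ch);
$json = json_decode($result, true);
echo $json["ask"];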

Related

Can't get data from API in PHP; the API has the data in JSON format

Here is a code example using both the PHP built-in function file_get_contents() and cURL.
$api_url = 'https://api.tg3ds.com/api/v1/scan_records?apikey=1sjQKWfPpdyxRBvfv2BuTl5JzexOIScCFN0t&limit=20&offset=0&sort=scanned_at&user_id=PGLY1096&unfold=true&filter=PGLY1096';
// Read JSON file
$json_data = file_get_contents($api_url);
// Decode JSON data into PHP array
$response_data = json_decode($json_data);
var_dump($response_data);
exit();
Using cURL:
// create & initialize a curl session
$curl = curl_init();
// set our url with curl_setopt()
curl_setopt($curl, CURLOPT_URL, "https://api.tg3ds.com/api/v1/scan_records?apikey=1sjQKWfPpdyxRBvfv2BuTl5JzexOIScCFN0t&limit=20&offset=0&sort=scanned_at&user_id=PGLY1096&unfold=true&filter=PGLY1096");
// return the transfer as a string, also with setopt()
curl_setopt($curl, CURLOPT_RETURNTRANSFER, 1);
// curl_exec() executes the started curl session
// $output contains the output string
$output = curl_exec($curl);
var_dump($output);
exit();
For this call you have to set a User-Agent, for example:
$context = stream_context_create(
    array(
        "http" => array(
            "header" => "User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.102 Safari/537.36"
        )
    )
);
$api_url = 'https://api.tg3ds.com/api/v1/scan_records?apikey=1sjQKWfPpdyxRBvfv2BuTl5JzexOIScCFN0t&limit=20&offset=0&sort=scanned_at&user_id=PGLY1096&unfold=true&filter=PGLY1096';
$json_data = file_get_contents($api_url, false, $context);

How to get title from URL in PHP from sites returning 403 Forbidden

I am trying to get the title of a few pages in PHP with this code. It works fine with almost every link, but fails with a few, for example 9gag.
function download_page($url)
{
    $agent = 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.102 Safari/537.36';
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
    curl_setopt($ch, CURLOPT_VERBOSE, true);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_USERAGENT, $agent);
    curl_setopt($ch, CURLOPT_URL, $url);
    $data = curl_exec($ch);
    return $data;
}

function get_title_tag($str)
{
    $pattern = '/<title[^>]*>(.*?)<\/title>/is';
    if (preg_match_all($pattern, $str, $out)) {
        return $out[1][0];
    }
    return false;
}
$url = "https://9gag.com/gag/avPBX3b";
$data = download_page($url);
echo $extracted_title = get_title_tag($data);
It echoes
Attention Required! | Cloudflare
which is Cloudflare's bot-verification page. But when I post this link on any social network, it is able to get the title and all the required metadata. How is that possible?
Edit:
Even if I use the opengraph.io API, I get:
"root":{
"error":{
"code": 2005
"message": "Got 403 error from server."
}
}
Just replace the agent string and it should work, from:
$agent = 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.102 Safari/537.36';
to:
$agent = 'facebookexternalhit/1.1 (+http://www.facebook.com/externalhit_uatext.php)';
It seems Cloudflare enables captcha verification when standard browser agent strings are present, so this easily bypasses it. I'm puzzled by the security logic here, but that is out of scope for this question.
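For example, the download_page() function from the question only needs its agent string swapped; a sketch with the agent as an optional parameter (the default shown is the Facebook crawler string from above):
function download_page($url, $agent = 'facebookexternalhit/1.1 (+http://www.facebook.com/externalhit_uatext.php)')
{
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_USERAGENT, $agent);
    curl_setopt($ch, CURLOPT_URL, $url);
    $data = curl_exec($ch);
    curl_close($ch);
    return $data;
}
echo get_title_tag(download_page("https://9gag.com/gag/avPBX3b"));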
You can make use of Facebook's Graph API.
https://graph.facebook.com/v7.0/?fields=og_object&id=https://9gag.com/gag/avPBX3b
JSON Output:
{
    "og_object": {
        "id": "994417753967326",
        "description": "More memes, funny videos and pics on 9GAG",
        "title": "32 Places People Have Mispronounced Their Entire Life",
        "type": "article",
        "updated_time": "2020-06-12T15:54:27+0000"
    },
    "id": "https://9gag.com/gag/avPBX3b"
}
You can read more about its usage here.
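A minimal PHP sketch of that call; note that current Graph API versions generally require an access token, which is omitted here just as it is in the URL above:
$target = urlencode("https://9gag.com/gag/avPBX3b");
$graph = "https://graph.facebook.com/v7.0/?fields=og_object&id=" . $target;
$data = json_decode(file_get_contents($graph), true);
// og_object carries the page's Open Graph metadata, as in the output above
echo $data["og_object"]["title"];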

Why would a PHP cURL request work on localhost but not on server (getting 403 forbidden)? [duplicate]

I am trying to make a site scraper. I made it on my local machine and it works fine there. When I execute the same script on my server, it shows a 403 Forbidden error.
I am using the PHP Simple HTML DOM Parser. The error I get on the server is this:
Warning: file_get_contents(http://example.com/viewProperty.html?id=7715888) [function.file-get-contents]: failed to open stream: HTTP request failed! HTTP/1.1 403 Forbidden in /home/scraping/simple_html_dom.php on line 40
The line of code triggering it is:
$url="http://www.example.com/viewProperty.html?id=".$id;
$html=file_get_html($url);
I have checked the php.ini on the server and allow_url_fopen is On. A possible solution could be to use cURL, but I need to know where I am going wrong.
I know it's quite an old thread, but I thought I'd share some ideas.
Most likely, if you don't get any content when accessing a webpage, the site doesn't want you to be able to get it. So how does it identify that a script, rather than a human, is trying to access the page? Generally, by the User-Agent header in the HTTP request sent to the server.
So to make the website think that the script is also a human, you must change the User-Agent header in the request. Most web servers will likely allow your request if you set the User-Agent header to a value used by a common web browser.
Common user agents used by browsers:
Chrome: Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.102 Safari/537.36
Firefox: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:75.0) Gecko/20100101 Firefox/75.0
etc.
$context = stream_context_create(
    array(
        "http" => array(
            "header" => "User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.102 Safari/537.36"
        )
    )
);
echo file_get_contents("https://www.google.com", false, $context);
This piece of code fakes the user agent and sends the request to https://www.google.com.
References:
stream_context_create
Cheers!
This is not a problem with your script, but with the resource you are requesting. The web server is returning the "forbidden" status code.
It could be that it blocks PHP scripts to prevent scraping, or your IP if you have made too many requests.
You should probably talk to the administrator of the remote server.
Add this after you include the simple_html_dom.php
ini_set('user_agent', 'My-Application/2.5');
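A minimal set-up sketch, assuming the URL from the question:
include 'simple_html_dom.php';
// applies to all stream-based requests, including the file_get_contents() call inside file_get_html()
ini_set('user_agent', 'My-Application/2.5');
$html = file_get_html('http://www.example.com/viewProperty.html?id=7715888');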
Alternatively, you can change it like this in the parser class, from line 35 onwards.
function curl_get_contents($url)
{
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_URL, $url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1);
    $data = curl_exec($ch);
    curl_close($ch);
    return $data;
}

function file_get_html()
{
    $dom = new simple_html_dom;
    $args = func_get_args();
    $dom->load(call_user_func_array('curl_get_contents', $args), true);
    return $dom;
}
Have you tried another site?
It seems the remote server has some type of blocking. It may be by user-agent; if that's the case, you can try using cURL to simulate a web browser's user-agent like this:
$url="http://www.example.com/viewProperty.html?id=".$id;
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch,CURLOPT_USERAGENT,'Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.13) Gecko/20080311 Firefox/2.0.0.13');
$html = curl_exec($ch);
curl_close($ch);
Write this in simple_html_dom.php; for me it worked:
function curl_get_contents($url)
{
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_URL, $url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($ch, CURLOPT_USERAGENT, 'Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.13) Gecko/20080311 Firefox/2.0.0.13');
    $data = curl_exec($ch);
    curl_close($ch);
    return $data;
}

function file_get_html($url, $use_include_path = false, $context = null, $offset = -1, $maxLen = -1, $lowercase = true, $forceTagsClosed = true, $target_charset = DEFAULT_TARGET_CHARSET, $stripRN = true, $defaultBRText = DEFAULT_BR_TEXT, $defaultSpanText = DEFAULT_SPAN_TEXT)
{
    $dom = new simple_html_dom;
    $args = func_get_args();
    $dom->load(call_user_func_array('curl_get_contents', $args), true);
    return $dom;
    //$dom = new simple_html_dom(null, $lowercase, $forceTagsClosed, $target_charset, $stripRN, $defaultBRText, $defaultSpanText);
}
I realize this is an old question, but...
I was just setting up my local sandbox on Linux with PHP 7 and ran across this. For scripts run from the terminal, PHP uses the CLI php.ini. I found that the "user_agent" option there was commented out. I uncommented it and added a Mozilla user agent, and now it works.
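That is, in the CLI php.ini (a sketch; the exact path varies by distribution, e.g. /etc/php/7.x/cli/php.ini):
; set a user agent for outgoing PHP streams (file_get_contents etc.)
user_agent = "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.102 Safari/537.36"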
Did you check the permissions on the file? I set 777 on my file (on localhost, obviously) and that fixed the problem.
You may also need some additional information in the context to make the website believe that the request comes from a human. What I did was visit the website in a browser and copy any extra information that was sent in the HTTP request.
$context = stream_context_create(
    array(
        "http" => array(
            "method" => "GET",
            "header" => "User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.102 Safari/537.36\r\n" .
                "accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3\r\n" .
                "accept-language: es-ES,es;q=0.9,en;q=0.8,it;q=0.7\r\n" .
                "accept-encoding: gzip, deflate, br\r\n"
        )
    )
);
In my case, the server was rejecting the HTTP/1.0 protocol via its .htaccess configuration. It seems file_get_contents uses HTTP/1.0 by default.
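If HTTP/1.0 really is what the server rejects, you can force HTTP/1.1 through the same stream-context mechanism; a sketch using the standard protocol_version option of the http context:
$context = stream_context_create(array(
    'http' => array(
        'protocol_version' => 1.1,
        // HTTP/1.1 keeps connections alive unless told otherwise, which can make the stream hang
        'header' => "Connection: close\r\n"
    )
));
$html = file_get_contents($url, false, $context);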
Use the code below to set a user agent. If you use file_get_contents:
$context = stream_context_create(
array(
"http" => array(
"header" => "User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.102 Safari/537.36"
)
));
If you use cURL:
curl_setopt($curl, CURLOPT_USERAGENT, 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.102 Safari/537.36');

file_get_contents not working with php file

Code:
$btc38="http://api.btc38.com/v1/depth.php?c=ltc&mk_type=btc";
$btc38_r=file_get_contents($btc38);
$btc38_a=json_decode($btc38_r,true);
I have used other websites' APIs and they worked; the only one that didn't work is the one above.
All the APIs that worked are not served by a PHP file like the one above (depth.php), so maybe that is the issue.
So my question: is there any other way to parse that link into a multidimensional array?
Edit: var_dump() is used just for debugging; my intention is to parse the link into an array.
Don't use var_dump(); it just prints the output. And set some user agent: without one, I got Forbidden back:
$url = "http://api.btc38.com/v1/depth.php?c=ltc&mk_type=btc";
$options = array(
'http'=>array(
'method'=>"GET",
'header'=>"Content-type: application/json\r\n" . // check function.stream-context-create on php.net
"User-Agent: Mozilla/5.0 (iPad; U; CPU OS 3_2 like Mac OS X; en-us) AppleWebKit/531.21.10 (KHTML, like Gecko) Version/4.0.4 Mobile/7B334b Safari/531.21.102011-10-16 20:23:10\r\n" // i.e. An iPad
)
);
$context = stream_context_create($options);
$file = file_get_contents($url, false, $context);
$btc38_a = json_decode($file, true);
var_dump($btc38_a);

Unable to get website content using file_get_contents in PHP

When I try to get the website content from the external URL fanpop.com using file_get_contents in PHP, I get empty data. I used the code below to get the contents:
$add_url = "http://www.fanpop.com/";
$add_domain = file_get_contents($add_url);
echo $add_domain;
But here I get an empty result for $add_domain. The same code works for other URLs, and I also tried sending the request with headers copied from the browser instead of a plain script request, but it still did not work.
Below is the same request, but in cURL:
error_reporting(-1);
ini_set('display_errors','On');
$url="http://www.fanpop.com/";
$ch = curl_init();
$header=array('GET /1575051 HTTP/1.1',
'Host: adfoc.us',
'Accept:text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
'Accept-Language:en-US,en;q=0.8',
'Cache-Control:max-age=0',
'Connection:keep-alive',
'Host:adfoc.us',
'User-Agent:Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/27.0.1453.116 Safari/537.36',
);
curl_setopt($ch,CURLOPT_URL,$url);
curl_setopt($ch,CURLOPT_RETURNTRANSFER,true);
curl_setopt($ch,CURLOPT_CONNECTTIMEOUT,0);
curl_setopt( $ch, CURLOPT_COOKIESESSION, true );
curl_setopt($ch,CURLOPT_COOKIEFILE,'cookies.txt');
curl_setopt($ch,CURLOPT_COOKIEJAR,'cookies.txt');
curl_setopt($ch,CURLOPT_HTTPHEADER,$header);
echo $result=curl_exec($ch);
curl_close($ch);
... but the above is also not working. Can anyone tell me what changes I have to make?
The problem with this particular site is that it only serves compressed content and throws a 404 error otherwise.
Easy fix:
$ch = curl_init('http://www.fanpop.com');
// an empty string makes cURL advertise all supported encodings and decompress the response automatically
curl_setopt($ch, CURLOPT_ENCODING, "");
curl_exec($ch);
You can also make this work for file_get_contents() but with a substantial amount of effort, as described in this article.
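The gist of the file_get_contents() approach is to request gzip explicitly and decompress the body yourself; a rough sketch, assuming the server actually returns gzip and PHP 5.4+ for gzdecode():
$context = stream_context_create(array(
    'http' => array(
        'header' => "Accept-Encoding: gzip\r\n"
    )
));
$raw = file_get_contents('http://www.fanpop.com/', false, $context);
// decompress the gzip-encoded response body
$html = gzdecode($raw);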
