Google SafeBrowsing Lookup API - PHP

I'm trying to request information about a domain without success; code:
<?php
echo file_get_contents('https://sb-ssl.google.com/safebrowsing/api/lookup?client=asasd&apikey=MYKEY&appver=1.5.2&pver=3.0&url=http%3A%2F%2Fwww.onet.pl%2F');
?>
Why isn't it working?

// Function for getting the data from a URL
function get_data($url)
{
    $ch = curl_init();
    $timeout = 5;
    curl_setopt($ch, CURLOPT_URL, $url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, $timeout);
    $data = curl_exec($ch);
    curl_close($ch);
    return $data;
}
Then get the content using the function call:
$returned_content = get_data('your url');
Fetching remote URLs with file_get_contents() is considered a security risk, and many servers have this feature disabled in PHP.
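If you do want to stay with file_get_contents(), a quick way to check whether remote URLs are allowed on your server is to look at the allow_url_fopen setting. A minimal sketch:
// Remote file_get_contents() only works when allow_url_fopen is on
if (!ini_get('allow_url_fopen')) {
    echo 'allow_url_fopen is disabled on this server - use the cURL function above instead.';
}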

Why isn't it working? Because the URL is wrong. For a quick manual check you can use the diagnostic page:
http://www.google.com/safebrowsing/diagnostic?site=http://example.com/

Just take a look at the documentation:
With URLs, you should use urlencode().
The fopen wrapper has to be enabled (same as for fopen()).
Maybe the URL is wrong; when I copy your URL and try to open it, I get a page-load failure.
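Putting those points together, a corrected request could look roughly like the sketch below. If I read the Lookup API documentation correctly, the API key goes in a parameter named key rather than apikey, but double-check that against the docs for your pver:
<?php
// Sketch of a Lookup API request (verify the parameter names against the official docs)
$apiKey = 'MYKEY';
$target = 'http://www.onet.pl/';
$lookup = 'https://sb-ssl.google.com/safebrowsing/api/lookup'
        . '?client=myapp'
        . '&key=' . urlencode($apiKey)
        . '&appver=1.5.2'
        . '&pver=3.0'
        . '&url=' . urlencode($target);
echo file_get_contents($lookup);
// As far as I recall, an empty 204 response means the URL is not on any of the lists
?>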

Related

How to call posts from PHP

I have a website that uses the WP Super Cache plugin. I need to recycle the cache once a day and then call 5 posts (URL addresses) so WP Super Cache puts these posts into the cache again (caching is quite time consuming, so I'd like to have everything pre-cached before users come so they don't have to wait).
On my hosting I can use a CRON job, but only for 1 call per hour, and I need to call 5 different URLs at once.
Is it possible to do that? Maybe create one HTML page with these 5 posts in iframe? Will something like that work?
Edit: Shell is not available, so I have to use PHP scripting.
The easiest way to do it in PHP is to use file_get_contents() (fopen() also works), if the HTTP stream wrapper is enabled on your server:
<?php
$postUrls = array(
    'http://my.site.here/post1',
    'http://my.site.here/post2',
    'http://my.site.here/post3',
    'http://my.site.here/post4',
    'http://my.site.here/post5',
);
foreach ($postUrls as $url) {
    // Get the post as a user would
    $text = file_get_contents($url);
    // Here you can check if the request was successful
    // For example, use strpos() or regex to find a piece of text you expect
    // to find in the post
    // Replace 'copyright bla, bla, bla' with a piece of text you display
    // in the footer of your site
    if (strpos($text, 'copyright bla, bla, bla') === FALSE) {
        echo('Retrieval of '.$url." failed.\n");
    }
}
If file_get_contents() fails to open the URLs on your server (some ISPs restrict this behaviour) you can try to use cURL:
function curl_get_contents($url)
{
    $ch = curl_init($url);
    curl_setopt_array($ch, array(
        CURLOPT_CONNECTTIMEOUT => 30,   // timeout in seconds
        CURLOPT_RETURNTRANSFER => TRUE, // tell cURL to return the page content instead of just TRUE/FALSE
    ));
    $text = curl_exec($ch);
    curl_close($ch);
    return $text;
}
Then use the function curl_get_contents() listed above instead of file_get_contents().
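For completeness, the loop from above then becomes:
foreach ($postUrls as $url) {
    $text = curl_get_contents($url);
    if (strpos($text, 'copyright bla, bla, bla') === FALSE) {
        echo('Retrieval of '.$url." failed.\n");
    }
}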
An example using PHP without building a cURL request.
Using PHP's shell_exec(), you can have an extremely light function like so:
$siteList = array("http://url1", "http://url2", "http://url3", "http://url4", "http://url5");
foreach ($siteList as $site) {
    $request = shell_exec('wget '.$site);
}
Now of course this is not the most concise answer and not always a good solution either; if you actually want anything from the response you will have to work with it in a different way than with cURL, but it's a low-impact option.
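If you go the shell_exec() route, it's worth escaping the URL and asking wget to print the page to stdout so that shell_exec() actually returns the response. A small variation on the snippet above, assuming wget is installed on the server:
foreach ($siteList as $site) {
    // -q: quiet, -O-: write the downloaded page to stdout so shell_exec() returns it
    $response = shell_exec('wget -qO- ' . escapeshellarg($site));
}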
Thanks to Arkascha's tip I created a PHP page that I call from CRON. This page contains a simple function using cURL:
function cache_it($Url) {
    if (!function_exists('curl_init')) {
        die('No cURL, sorry!');
    }
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_URL, $Url);
    curl_setopt($ch, CURLOPT_HEADER, 0);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_TIMEOUT, 50); // higher timeout needed for the cache to load
    curl_exec($ch); // don't need the output here, otherwise $output = curl_exec($ch);
    curl_close($ch);
}
cache_it('http://www.mywebsite.com/url1');
cache_it('http://www.mywebsite.com/url2');
cache_it('http://www.mywebsite.com/url3');
cache_it('http://www.mywebsite.com/url4');

Retrieve all languages used by a Github user

In my script I have a function that retrieves JSON information from the Github API, https://api.github.com/users/octocat/repos.
I want to have a different function that gets all the languages used by (in this case) octocat and then counts how many times each language is used.
I was thinking of this:
foreach($json['language'] as $RepoLanguage)
{
echo $RepoLanguage;
}
but that won't work, any suggestions/ideas?
I think the main reason is that you did not specify the User Agent as specified here: https://developer.github.com/v3/#user-agent-required
Did you check what result you have in the $json?
Here's a working example.
<?php
function get_content_from_github($url) {
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_URL, $url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 1);
    curl_setopt($ch, CURLOPT_USERAGENT, 'My User Agent');
    $content = curl_exec($ch);
    curl_close($ch);
    return $content;
}
$json = json_decode(get_content_from_github('https://api.github.com/users/octocat/repos'), true);
// Collect the language of every repository
$languages = array();
foreach ($json as $repo) {
    $languages[] = $repo['language'];
}
?>
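To get the per-language counts the question asks for, you can then tally the collected values, for example with array_count_values(); repositories without a detected language come back as null, so they are filtered out first:
// Count how often each language appears across the repositories
$counts = array_count_values(array_filter($languages));
arsort($counts); // most used languages first
foreach ($counts as $language => $count) {
    echo $language . ': ' . $count . "\n";
}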

cURL get_data($url) with absolute URL

I'm using this code to get data using cURL
$url='http://example.com/'; //URL to get content from..
print_r(get_data($url)); //Dumps the content
/* Gets the data from a URL */
function get_data($url)
{
    $ch = curl_init();
    $timeout = 5;
    curl_setopt($ch, CURLOPT_URL, $url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, $timeout);
    $data = curl_exec($ch);
    curl_close($ch);
    return $data;
}
However, this code returns data with relative URLs. How can I get rid of the relative URLs and print absolute ones instead? Maybe with preg_replace? But how?
Have a look at the HTML base tag. You should find it helpful if you want to let the browser do all the relative-to-absolute conversion:
$data = get_data($url);
// Note: ideally you should use DOM manipulation to inject the <base>
// tag inside the <head> section
$data = str_replace("<head>", "<head><base href=\"$url\">", $data);
echo $data;
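As the comment hints, a slightly more robust variant is to inject the <base> tag with PHP's DOM extension instead of str_replace(); roughly:
$doc = new DOMDocument();
libxml_use_internal_errors(true); // real-world HTML is rarely valid; silence the parser warnings
$doc->loadHTML($data);
libxml_clear_errors();
// Create <base href="..."> and put it at the top of <head>
$base = $doc->createElement('base');
$base->setAttribute('href', $url);
$head = $doc->getElementsByTagName('head')->item(0);
if ($head !== null) {
    $head->insertBefore($base, $head->firstChild);
}
echo $doc->saveHTML();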
I think you need to use an HTML parser like http://simplehtmldom.sourceforge.net/ and replace all links with the correct path.
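For illustration, here is the same idea with PHP's built-in DOM extension (simplehtmldom works along the same lines). This is a naive sketch: it only prefixes the base URL onto relative hrefs and does not resolve ../-style paths:
function make_links_absolute($html, $baseUrl)
{
    $doc = new DOMDocument();
    libxml_use_internal_errors(true);
    $doc->loadHTML($html);
    libxml_clear_errors();
    foreach ($doc->getElementsByTagName('a') as $link) {
        $href = $link->getAttribute('href');
        // Skip empty links, in-page anchors, and URLs that already have a scheme or are protocol-relative
        if ($href === '' || $href[0] === '#' || preg_match('#^([a-z][a-z0-9+.-]*:|//)#i', $href)) {
            continue;
        }
        $link->setAttribute('href', rtrim($baseUrl, '/') . '/' . ltrim($href, '/'));
    }
    return $doc->saveHTML();
}
echo make_links_absolute(get_data($url), $url);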

echo html tags in status

I am updating my Facebook status with feed updates from my site, and the status is followed by a link to the feed item.
I'm using
echo $_POST['msg']." #"."<a href='http://xxx.ch/comment.php?id=".$result."'>link</a>";
but the status update on Facebook looks like this:
msg #<a href='http://xxx.ch/comment.php?id=2>link</a>
I want only
msg # link
Facebook doesn't support HTML tags in messages. Just specify the URL and it will be shown as a URL.
Links should be posted in the link parameter; you can also set a custom name for it. Don't use message for this purpose.
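A rough sketch of what that could look like when posting through the Graph API with cURL; message, link and name are separate fields here, and $access_token is assumed to be a valid token you already have:
$params = array(
    'message'      => $_POST['msg'],
    'link'         => 'http://xxx.ch/comment.php?id=' . $result,
    'name'         => 'link', // custom name displayed for the link
    'access_token' => $access_token, // assumption: obtained earlier in your app
);
$ch = curl_init('https://graph.facebook.com/me/feed');
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, http_build_query($params));
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$response = curl_exec($ch);
curl_close($ch);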
I got it working using TinyURL:
function get_tiny_url($url) {
    $ch = curl_init();
    $timeout = 5;
    curl_setopt($ch, CURLOPT_URL, 'http://tinyurl.com/api-create.php?url='.$url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, $timeout);
    $data = curl_exec($ch);
    curl_close($ch);
    return $data;
}
$new_url = get_tiny_url('http://xxx.ch/comment.php?id='.$result);
echo $_POST['msg']." # ".$new_url;

Using file_get_contents to different URLs in a loop seems to break the rest of the code

I am currently trying to fetch some facebook data, which I then want to access in Javascript. Specifically, I am trying to access some characteristics of the user's friends.
So I am getting the user's friend list using file_get_contents on his Graph API URL.
This provides me with an array of friend ids.
As I need a characteristic from each friend, I am doing:
foreach ($dataarray as $friend) {
    $friendurl = "https://graph.facebook.com/".$friend->id."?access_token=".$token."";
    $fdata = json_decode(file_get_contents($friendurl));
    if ($fdata->gender == "male") {
        array_push($fulldata, $fdata->name);
    }
}
Having this piece of code seems to break the JavaScript code, as none of my alert instructions are run.
Also, inserting a break after the if, so that only one file_get_contents is done, seems to make the code runnable (but I obviously need to go through all of the friends).
How can I solve this?
I would use jQuery or xmlHttpRequest to do the HTTP GET, but somehow I always seem to get back a status code of 0, with an empty response.
Edit:
Here is the JS code:
<script type="text/javascript">
function initialize() {
    alert('Test1');
    <?php
    $fulldata = array();
    $data = $result->data;
    foreach ($data as $friend) {
        $friendurl = "https://graph.facebook.com/".$friend->id."?access_token=".$token."";
        //echo("alert(\"".$friendurl."\");");
        $fdata = json_decode(file_get_contents($friendurl));
        if ($fdata->hometown->name) {
            array_push($fulldata, $fdata->hometown->name);
        }
    }
    echo ("alert(\"".count($fulldata)."\")");
    ?>
}
</script>
I should've also added that this is being done on a page embedded into facebook using the canvas feature.
Try...
function curl($url) {
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_URL, $url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    $data = curl_exec($ch);
    curl_close($ch); // close before returning, otherwise this line is never reached
    return $data;
}
foreach ($dataarray as $friend) {
    $friendurl = "https://graph.facebook.com/".$friend->id."?access_token=".$token."";
    $fdata = json_decode(curl($friendurl));
    if ($fdata->gender == "male") {
        array_push($fulldata, $fdata->name);
    }
}
Maybe file_get_contents() is disabled but you don't get any notices/warnings.
Code from comment:
error_reporting(E_ALL); ini_set("display_errors", 1);
Note that you are doing a cross-domain AJAX call, which is prohibited for security reasons.
You can do the API call on the server and echo the data to the client-side JS, or you can build a PHP proxy that returns the result of the Graph API call (as the proxy is on your own server, they are in the same domain).
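A minimal sketch of such a proxy, assuming the access token is already stored in the session; the id parameter is restricted to digits so the script can't be abused to fetch arbitrary URLs:
<?php
// proxy.php - same-domain proxy for Graph API calls (sketch only)
session_start();
$id    = preg_replace('/\D/', '', $_GET['id']); // allow numeric ids only
$token = $_SESSION['fb_access_token'];          // assumption: stored at login time
$data = file_get_contents('https://graph.facebook.com/' . $id . '?access_token=' . urlencode($token));
header('Content-Type: application/json');
echo $data;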
