Are the names of Facebook profiles publicly accessible, i.e. would I NOT need to log into Facebook to access them?
I intend to store a large number of names as a small piece of a larger project. I feel as if scraping Facebook for names would be a relatively simple task using the Facebook Graph API, but I am a little confused.
I found a tutorial at http://jilltxt.net/?p=2810 which described an easy way of fetching any Facebook profile picture using one simple line:
https://graph.facebook.com/USER-ID/picture?type=large
This was very helpful because I can use a range of ID numbers and a small amount of PHP to gather large numbers of profile pictures, as seen on my test page here: http://www.joshiefishbein.com/fi/photobook.php
What I don't know is how to go from collecting pictures to collecting names with the same one-line approach. Is it possible? Is there another (better) way?
Here's the code I am working with. The range of IDs is just an example.
function gen_pix($min, $max, $quantity) {
    $numbers = range($min, $max);
    shuffle($numbers);
    $x_arr = array_slice($numbers, 0, $quantity);

    foreach ($x_arr as $key => $value) {
        $username = "https://graph.facebook.com/" . $value . "/";
        $json = json_decode(file_get_contents($username), true);
        echo $json["name"];
    }
}
$x = 337800042;
$y = 337800382;
$z = 1;
gen_pix($x,$y,$z);
I've gotten a little farther with this code: I can echo $username and I get the URL I am looking for (for example https://graph.facebook.com/337800382/), but I don't get anything after that; json_decode doesn't seem to be working.
In the same way you are pulling the profile picture, you can get the basic information of a user with their ID.
The Graph API documentation lists the fields that are always publicly accessible.
So you need to make a GET request to pull back the JSON, like so...
https://graph.facebook.com/{user-id}/
For example, https://graph.facebook.com/586207189/ pulls back my basic information. So your PHP would look like this:
$json = json_decode(file_get_contents("https://graph.facebook.com/$user_id/"), true);
echo $json["name"];
Update: based on the code above, it's worth adding an if to catch invalid Facebook IDs. Facebook IDs aren't necessarily sequential, so not every ID in the range will return a name or image.
Updated code:
<?php
function gen_pix($min, $max, $quantity) {
    // Pick $quantity random IDs from the given range
    $numbers = range($min, $max);
    shuffle($numbers);
    $x_arr = array_slice($numbers, 0, $quantity);

    foreach ($x_arr as $key => $value) {
        $username = "https://graph.facebook.com/" . $value . "/";
        // @ suppresses the warning file_get_contents emits when the ID is invalid
        $json = json_decode(@file_get_contents($username), true);

        if (!isset($json['name'])) {
            echo "Invalid ID<br />";
        } else {
            echo $json["name"] . '<br />';
        }
    }
}

$x = 337800042;
$y = 337800382;
$z = 50;
gen_pix($x, $y, $z);
?>
It's also worth noting that pulling that much data from the Graph API is going to take a while. Have a look at doing batch requests to speed things up a bit.
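Purely as a rough sketch of what a batch call looks like (not the exact code of this answer): the Graph API batch endpoint takes a POST to https://graph.facebook.com/ with a JSON-encoded batch parameter of up to 50 sub-requests, and it requires an access token, so $access_token below is a placeholder you would have to supply:
<?php
// Rough sketch of a Graph API batch request.
// $ids is an array of candidate Facebook IDs (max 50 per batch);
// $access_token must be a valid app access token.
function batch_names(array $ids, $access_token) {
    $batch = array();
    foreach ($ids as $id) {
        $batch[] = array('method' => 'GET', 'relative_url' => (string) $id);
    }

    $postdata = http_build_query(array(
        'access_token' => $access_token,
        'batch'        => json_encode($batch),
    ));

    $context = stream_context_create(array('http' => array(
        'method'  => 'POST',
        'header'  => "Content-Type: application/x-www-form-urlencoded\r\n",
        'content' => $postdata,
    )));

    // One HTTP round trip returns an array of per-request responses
    $responses = json_decode(file_get_contents('https://graph.facebook.com/', false, $context), true);

    foreach ($responses as $response) {
        $body = json_decode($response['body'], true);
        echo isset($body['name']) ? $body['name'] . '<br />' : 'Invalid ID<br />';
    }
}
?>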
Related
As part of an assignment I am trying to pull some statistics from the Riot API (JSON data for League of Legends). So far I have managed to find the summoner ID (user ID) based on the summoner name, and I have filtered out the IDs of that summoner's previous (20) games. However, now I can't figure out how to get the right values out of the JSON data. So this is where I'll show you my code, I guess:
$matchIDs is an array of 20 integers (game IDs)
for ($i = 1; $i <= 1; $i++)
{
    $this_match_data = get_match($matchIDs[$i], $server, $api);
    $processed_data = json_decode($this_match_data, true);
    var_dump($processed_data);
}
As shown above, my for loop is set to a single iteration, as I'm just focusing on figuring out one match before continuing with all 20. The example above follows the same pattern I used to get the match IDs and the summoner IDs; I'll add that code here for comparison:
for ($i = 0; $i <= 19; $i++)
{
    $temp = $data['matches'][$i]['matchId'];
    $matchIDs[$i] = json_decode($temp, true);
}
$data is the variable I get when I pull all the info from the JSON page; it's obtained the same way I get $this_match_data in the first code block.
function match_list($summoner_id, $server, $api)
{
    $summoner_enc = rawurlencode($summoner);
    $summoner_lower = strtolower($summoner_enc);
    $curl = curl_init('https://'.$server.'.api.pvp.net/api/lol/'.$server.'/v2.2/matchlist/by-summoner/'.$summoner_id.'?api_key='.$api);
    curl_setopt($curl, CURLOPT_RETURNTRANSFER, 1);
    $result = curl_exec($curl);
    curl_close($curl);
    return $result;
}
Now to the root of the problem. This is where I put the data I get from the site, so you can see what I am working with. Using the following code I can get the first value in that file, the match ID:
echo $processed_data['matchId'];
But I can't seem to lock down any other information from this .json file. I've tried things like ['region'] instead of ['matchId'] with no luck, as well as inserting index numbers like $processed_data[0], but nothing happens. This is exactly how I got the right info in the first examples, so I am really lost here.
OK, so I think I've figured it out myself. By adding the following to the code I can print out the JSON in a much more human-friendly way, and that should make it much easier to handle the data.
echo "<pre>";
var_dump($processed_data);
echo "</pre>";
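Once var_dump shows the structure, nested values are reached by chaining array keys, one per level. The key names below are only hypothetical examples of that pattern (substitute whatever your dump actually shows), not guaranteed fields of the Riot response:
// Hypothetical keys, purely to illustrate drilling into a nested decoded array.
$first_participant = $processed_data['participants'][0];        // element 0 of a nested list
echo $first_participant['championId'];                          // a scalar inside that element
echo $processed_data['teams'][0]['winner'] ? 'Win' : 'Loss';    // two levels of nesting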
I'm connecting to the trakt.tv API; I want to create a little app for myself that displays movie posters with ratings etc.
This is what I'm currently using to retrieve their .json file containing all the info I need.
$json = file_get_contents('http://api.trakt.tv/movies/trending.json/2998fbac88fd207cc762b1cfad8e34e6');
$movies = json_decode($json, true);
$movies = array_slice($movies, 0, 20);

foreach ($movies as $movie) {
    echo $movie['images']['fanart'];
}
Because the .json file is huge it loads pretty slowly. I only need a couple of attributes from the file, like the title, the rating and the poster link, and only the first 20 entries or so. How can I make sure to load only a part of the .json file so it loads faster?
Besides that, I'm not experienced with PHP in combination with JSON, so if my code is garbage and you have suggestions, I would love to hear them.
Unless the API provides a limit parameter or similar, I don't think you can limit the query on your side. On a quick look it doesn't seem to provide one. It also doesn't look like it really returns that much data (under 100 KB), so I guess the API is just slow.
Given the slow API, I'd cache the data you receive and only update it once per hour or so. You could save it to a file on your server using file_put_contents and record the time it was saved too. When you need the data, refresh it if the saved copy is over an hour old.
This quick sketch of an idea works:
function get_trending_movies() {
    if (!file_exists('trending-cache.php')) {
        return cache_trending_movies();
    }
    include('trending-cache.php'); // defines $movies
    if (time() - $movies['retrieved-timestamp'] > 60 * 60) { // 60 * 60 = 1 hour
        return cache_trending_movies();
    } else {
        unset($movies['retrieved-timestamp']);
        return $movies;
    }
}

function cache_trending_movies() {
    $json = file_get_contents('http://api.trakt.tv/movies/trending.json/2998fbac88fd207cc762b1cfad8e34e6');
    $movies = json_decode($json, true);
    $movies = array_slice($movies, 0, 20);

    $movies_with_date = $movies;
    $movies_with_date['retrieved-timestamp'] = time();
    file_put_contents('trending-cache.php', '<?php $movies = ' . var_export($movies_with_date, true) . ';');

    return $movies;
}
print_r(get_trending_movies());
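If cache size or parsing time ever matters, you could also strip each entry down to the fields you actually use before caching. The key names below are assumptions modelled on your own snippet ($movie['images']['fanart']) rather than the documented trakt.tv schema:
// Keep only the fields the page needs; adjust keys to what the API really returns.
function slim_movies(array $movies) {
    $slim = array();
    foreach ($movies as $movie) {
        $slim[] = array(
            'title'  => isset($movie['title']) ? $movie['title'] : null,
            'rating' => isset($movie['ratings']['percentage']) ? $movie['ratings']['percentage'] : null,
            'poster' => isset($movie['images']['poster']) ? $movie['images']['poster'] : null,
        );
    }
    return $slim;
}
Calling $movies = slim_movies($movies); inside cache_trending_movies(), just before the timestamp is added, keeps the cache file small.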
re: Home Site = http://mobiledetect.net/
re: this script = Mobile_Detect.php
Download script here: https://github.com/serbanghita/Mobile-Detect
This script works perfectly for detecting the different parameters of a user's device.
However, this is how I am currently detecting these parameters:
// each part of the IF statement is hard-coded = not the way to do this
if ($detect->isiOS()) {
    $usingOS = 'iOS';
}
if ($detect->isAndroidOS()) {
    $usingOS = 'Android';
}
echo 'Your OS is: ' . $usingOS;
My goal is to use a FOREACH to iterate through the various arrays in this script to determine a user's device parameters. I would need the "($detect->isXXXXOS())" call to be dynamic (based upon the KEY). The results would display the KEY, but the detection would be based upon the VALUE.
Also, since my web page uses a REQUIRE to access this script, the arrays in Mobile_Detect.php are "protected". I think this is also causing me problems (but I don't know for sure).
Any help is appreciated.
In a foreach loop you can call a dynamic method, like this:
$array = array('Android', 'Windows', 'Linux', 'Mac');

foreach ($array as $value) {
    $method = "is{$value}OS";
    if ($detect->$method()) {
        $os = $value;
        echo "Your OS is : {$os}";
    }
}
Rearrange the code however you want; I'm just giving you an example.
You can try to use something like this:
$OSList = $detect->getOperatingSystems(); // returns an array of operating system name => match params

foreach ($OSList as $os_name => $os_params /* unused */) {
    $method = 'is' . $os_name;
    if ($detect->$method()) {
        $usingOS = $os_name;
    }
}
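One caveat worth adding to both snippets (my note, not part of the original answers): if none of the is...OS() checks match, $usingOS is never set, so give it a default before the loop and echo it afterwards:
$usingOS = 'Unknown'; // fallback when no detector matches

$OSList = $detect->getOperatingSystems();
foreach ($OSList as $os_name => $os_params) {
    $method = 'is' . $os_name;
    if ($detect->$method()) {
        $usingOS = $os_name;
    }
}

echo 'Your OS is: ' . $usingOS;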
I'm working on a small and simple piece of code that basically does some tweet filtering. The problem is that I'm hitting the Twitter API request limit, and I would like to know if there is a workaround or if what I want to do just cannot be done.
First, I type a Twitter username to retrieve the IDs of the people this user follows.
$user_id = $_GET["username"];
$url_post = "http://api.twitter.com/1/friends/ids.json?cursor=-1&screen_name=" . urlencode($user_id);
$following = file_get_contents($url_post, true);
$json = json_decode($following);
$ids = $json->ids;
The Twitter API responds with a list of IDs.
Here comes the problem. The next step is to make a request to find out the username, profile picture and description for each one of those IDs.
$following = array();

foreach ($ids as $value) {
    $build_url = 'http://api.twitter.com/1/users/lookup.json?user_id=' . $value;
    $following[] = $build_url;
}

foreach ($following as $url) {
    $data_names = file_get_contents($url, true); // getting the file content
    $json_names = json_decode($data_names);

    foreach ($json_names as $tweet) {
        $name = $tweet->name;
        $description = $tweet->description;
        echo '<p>';
        echo $name . '<br>';
        echo $description;
        echo '</p>';
    }
}
If the user follows 50 people it works. But if he follows, say, 600, that would be 600 requests (for username, description and profile picture) to the Twitter API, which exceeds the limit.
Is there any way to work around this, or can it just not be done?
Thank you!
You can and should request the users/lookup API endpoint with 100 user IDs at a time, instead of doing one request per Twitter ID. Cf. https://dev.twitter.com/docs/api/1.1/get/users/lookup
You have to replace your foreach loop (foreach ($following as $url)) with a recursive function.
At the end of the function, check the number of hits remaining before calling it again (cf. this link to see how to find out the time remaining until you get rate limited).
If there are no hits left, sleep for 15 minutes before calling the function again; otherwise, do the call again.
There is plenty of information on how to do this; use Google and search existing Stack Overflow questions.
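A minimal sketch of the chunked lookup, keeping the question's plain file_get_contents style for brevity; note that API v1.1 actually requires OAuth-signed requests, so treat this purely as an illustration of the chunking idea:
// $ids comes from the friends/ids call above.
// users/lookup accepts up to 100 comma-separated IDs per request.
foreach (array_chunk($ids, 100) as $chunk) {
    $url = 'https://api.twitter.com/1.1/users/lookup.json?user_id=' . implode(',', $chunk);
    $json_names = json_decode(file_get_contents($url)); // one request covers up to 100 users

    foreach ($json_names as $user) {
        echo '<p>' . $user->name . '<br>' . $user->description . '</p>';
    }
    // If the remaining-hits check says you are about to be rate limited,
    // sleep() here until the window resets before fetching the next chunk.
}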
I am using instagram.class.php and I am trying to get the Instagram feed, but with multiple hashtags. My current code is:
<?php
require 'instagram.class.php';

$instagram = new Instagram('my app id');
$tag = 'winter';
$media = $instagram->getTagMedia($tag);

$limit = 10;
$size = '150';

foreach (array_slice($media->data, 0, $limit) as $data) {
    echo '<img src="' . $data->images->thumbnail->url . '" height="' . $size . '" width="' . $size . '">';
}
?>
So I want to get the pictures from #winter and also from #summer. Does anyone have some ideas? (I tried with an array, but it doesn't work.)
From what I can tell, it's currently not possible in a single request. To accomplish what I think you're looking for, you'd need to fire off two requests and then, depending on what you want, display either the intersection or the symmetric difference of the two result sets.
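A rough sketch of the two-request approach, reusing the getTagMedia() call from the question and simply concatenating the results (intersecting on media IDs would follow the same pattern); this is my own illustration, not a documented feature of instagram.class.php:
<?php
require 'instagram.class.php';

$instagram = new Instagram('my app id');
$tags  = array('winter', 'summer');
$limit = 10;
$size  = '150';

$all = array();
foreach ($tags as $tag) {
    $media = $instagram->getTagMedia($tag);   // one request per hashtag
    $all   = array_merge($all, $media->data); // union of both result sets
}

foreach (array_slice($all, 0, $limit) as $data) {
    echo '<img src="' . $data->images->thumbnail->url . '" height="' . $size . '" width="' . $size . '">';
}
?>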