I'm trying to check whether a YouTube video is online, not private, not deleted, etc. I need it for a video blog.
None of the code I've found and tried has worked. I hope you can help me.
I've registered for the new API v3.
My code is as follows, but I always get "400 Bad Request":
$theURL = "https://www.googleapis.com/youtube/v3/videos?part=status&id=". get_post_meta($post->ID,"wpzoom_post_embed_code", true) ."&key=my_api_key";
// echo $theURL;
//$theURL = "http://www.youtube.com/oembed?url=http://www.youtube.com/watch?v=". get_post_meta($post->ID,"wpzoom_post_embed_code", true) ."&format=json";
$headers = get_headers($theURL);
print_r ($headers);
if (substr($headers[0], 9, 3) !== "404") {
echo "online";
} else {
echo "offline";
}
Found a solution, but I haven't figured out how to display locked videos as offline. I search the cURL response string for 'totalResults": 0,' because I fetch one video ID at a time.
Here is my code. Maybe someone has a better idea to fix this little problem.
$theURL = "https://www.googleapis.com/youtube/v3/videos?part=status&id=" . get_post_meta($post->ID, "wpzoom_post_embed_code", true) . "&key=my_api_key";
$ch = curl_init();
curl_setopt($ch,CURLOPT_URL,$theURL);
curl_setopt($ch,CURLOPT_HEADER,1);
curl_setopt($ch,CURLOPT_RETURNTRANSFER,1);
curl_setopt($ch, CURLOPT_VERBOSE, 0);
$data = curl_exec($ch);
//var_dump($data);
if (strpos($data,'totalResults": 0,') !== false) {
echo '<span> <img src="link_for_button" alt="online"/> Offline</span><br />'; //echo "<span style='color:red;'>Video Offline</span>";
} else echo '<span><img src="link_for_button" alt="online"/> Online</span><br />'; //echo "<span style='color:green;'>Video Online</span>";
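One cleaner idea (a sketch, untested, assuming the same endpoint and a placeholder API key) would be to json_decode the response instead of string matching; that also lets me treat private videos as offline via status.privacyStatus:
$theURL = "https://www.googleapis.com/youtube/v3/videos?part=status&id=" . get_post_meta($post->ID, "wpzoom_post_embed_code", true) . "&key=my_api_key";
$ch = curl_init($theURL);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // no CURLOPT_HEADER, so the body is pure JSON
$json = json_decode(curl_exec($ch), true);
curl_close($ch);
// Deleted videos come back with an empty items array; private ones have privacyStatus "private"
$isOnline = !empty($json['items']) && $json['items'][0]['status']['privacyStatus'] !== 'private';
echo $isOnline ? 'Online' : 'Offline';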
First question! Woho!
I'm currently coding a dashboard for a Discord bot in PHP.
Recently, when trying to select a server, it would only display 3 malfunctioning items. So I checked the complete array using print_r(), and it turned out I was being rate-limited.
Now, I have no idea why that happened, as I'm the only person on the server. So what am I doing wrong? Here is the code for listing the selection page:
$all = DiscordAPI("/users/@me/guilds");
foreach ($all as $guild) {
    echo "<a href='/?server=" . $guild['id'] . "'><p>";
    if ($guild['icon']) {
        if (substr($guild['icon'], 0, 2) == "a_") {
            echo "<img class='server-icon' src='https://cdn.discordapp.com/icons/" . $guild['id'] . "/" . $guild['icon'] . ".gif?size=64'>";
        } else {
            echo "<img class='server-icon' src='https://cdn.discordapp.com/icons/" . $guild['id'] . "/" . $guild['icon'] . ".webp?size=64'>";
        }
    } else {
        // No icon set: build an acronym from the guild name instead
        $words = explode(" ", $guild['name']);
        $acronym = "";
        foreach ($words as $w) {
            $acronym .= $w[0];
        }
        echo "<img class='server-icon' src='https://cdn.statically.io/avatar/s=64/" . $acronym . "'>";
    }
    echo $guild['name'];
    echo "</p></a>";
}
And here is the code for the DiscordAPI() function:
function DiscordAPI($endpoint)
{
    if (!$_SESSION['token']) {
        die("No token provided");
    }
    $ch = curl_init("https://discord.com/api" . $endpoint);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_HEADER, 0);
    curl_setopt($ch, CURLOPT_HTTPHEADER, array(
        "Authorization: Bearer " . $_SESSION['token']
    ));
    $data = curl_exec($ch);
    curl_close($ch);
    return json_decode($data, true);
}
There are 2 API calls before this code to verify that there is a valid token.
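For debugging, here is a minimal sketch (an assumption, not my current code) that captures the HTTP status code so a rate-limit response is surfaced instead of being parsed as a guild list; Discord answers HTTP 429 with a retry_after value in the JSON body:
function DiscordAPI($endpoint)
{
    if (!$_SESSION['token']) {
        die("No token provided");
    }
    $ch = curl_init("https://discord.com/api" . $endpoint);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_HTTPHEADER, array(
        "Authorization: Bearer " . $_SESSION['token']
    ));
    $data = curl_exec($ch);
    $status = curl_getinfo($ch, CURLINFO_HTTP_CODE);
    curl_close($ch);
    $json = json_decode($data, true);
    // On HTTP 429 the body is not a guild list but a rate-limit notice
    if ($status == 429) {
        die("Rate limited; retry after " . $json['retry_after'] . " seconds");
    }
    return $json;
}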
Quick side note: I have autism, so I might understand things a bit differently. Don't be scared to ask if I worded something in a way that's hard to understand. Thank you for trying to understand me.
I am trying to get information from an XML REST API.
I can get everything except the images.
I can display all the info, but when it comes to the images I get a 401 error:
Failed to load resource: the server responded with a status of 401 ().
My username and password are correct. The only way to get the images to display is if I log in to this API in a different window; then all the images are displayed.
Am I doing something wrong? Here is my PHP code:
function CallAPI($url){
    $curl = curl_init();
    curl_setopt($curl, CURLOPT_HTTPAUTH, CURLAUTH_BASIC);
    curl_setopt($curl, CURLOPT_USERPWD, "usr:psw");
    curl_setopt($curl, CURLOPT_URL, $url);
    curl_setopt($curl, CURLOPT_RETURNTRANSFER, 1);
    $result = curl_exec($curl);
    curl_close($curl);
    return $result;
}
$a = "query SQL";
$asd = $database->query($a);
while($row = sqlsrv_fetch_array($asd)){
echo $url = $row['url']; // I can get this data display
$row['hotel_name'];
$a = CallAPI($url);
$axml = new SimpleXMLElement($a);
$img_link = $axml->item->images;
foreach ($img_link->image as $value) {
echo "<img src='".$value->sizes->size[1]->attributes('http://www.w3.org/1999/xlink')."' class='img-responsive' />";
}
So your problem must be that this link needs authentication as well. Try making the call for $value->sizes->size[1]->attributes('http://www.w3.org/1999/xlink') pass the user and password too.
So your code should look like this:
while ($row = sqlsrv_fetch_array($asd)) {
    $url = $row['url'];
    $row['hotel_name']; // No use of this one, maybe you use it on your end
    $a = CallAPI($url);
    $axml = new SimpleXMLElement($a);
    $img_link = $axml->item->images;
    foreach ($img_link->image as $value) {
        /* I am adding the link to a variable */
        $imgLinkFromValue = $value->sizes->size[1]->attributes('http://www.w3.org/1999/xlink');
        $img = CallAPI($imgLinkFromValue);
        /* Now if you still get a link, just pass the variable */
        echo "<img src='$img->**link or href or anything you get**' class='img-responsive' />";
    }
}
UPDATE
If you are getting text or symbols back, maybe you have to use base64_encode().
Try this:
$img = base64_encode(CallAPI($imgLinkFromValue));
echo '<img src="data:image/jpeg;base64,' . $img . '" class="img-responsive" />';
You can always change the MIME type inside src to png or gif, depending on what you need.
For more info about the base64_encode function you can use this link:
http://php.net/manual/en/function.base64-encode.php
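If you don't want to hard-code image/jpeg, one option (a sketch, assuming the fileinfo extension is enabled) is to detect the MIME type from the downloaded bytes before building the data URI:
$raw = CallAPI($imgLinkFromValue);
// Sniff the real MIME type (image/jpeg, image/png, image/gif, ...) from the bytes
$finfo = new finfo(FILEINFO_MIME_TYPE);
$mime = $finfo->buffer($raw);
echo '<img src="data:' . $mime . ';base64,' . base64_encode($raw) . '" class="img-responsive" />';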
I'm trying to pull the count of subscribers for a particular YouTube channel. I referred to some links on Stack Overflow as well as external sites, and came across links like this. Almost all of them suggested using the YouTube GData API and pulling the count from subscriberCount, but the following code
$data = file_get_contents("http://gdata.youtube.com/feeds/api/users/Tollywood/playlists");
$xml = simplexml_load_string($data);
print_r($xml);
returns no such subscriberCount. Is there any other way of getting the subscriber count, or am I doing something wrong?
The YouTube API v2.0 is deprecated. Here's how to do it with v3.0. OAuth is not needed.
1) Log in to a Google account and go to https://console.developers.google.com/. You may have to start a new project.
2) Navigate to APIs & auth and go to Public API Access -> Create a New Key.
3) Choose the option you need (I used 'browser applications'). This will give you an API key.
4) Navigate to your channel in YouTube and look at the URL. The channel ID is here: https://www.youtube.com/channel/YOUR_CHANNEL_ID
5) Use the API key and channel ID to get your result with this query: https://www.googleapis.com/youtube/v3/channels?part=statistics&id=YOUR_CHANNEL_ID&key=YOUR_API_KEY
Great success!
The documentation is actually pretty good, but there's a lot of it. Here are a couple of key links:
Channel information documentation: https://developers.google.com/youtube/v3/sample_requests
"Try it" page: https://developers.google.com/youtube/v3/docs/subscriptions/list#try-it
Try this ;)
<?php
$data = file_get_contents('http://gdata.youtube.com/feeds/api/users/Tollywood');
$xml = new SimpleXMLElement($data);
$stats_data = (array)$xml->children('yt', true)->statistics->attributes();
$stats_data = $stats_data['@attributes'];
/********* OR **********/
$data = file_get_contents('http://gdata.youtube.com/feeds/api/users/Tollywood?alt=json');
$data = json_decode($data, true);
$stats_data = $data['entry']['yt$statistics'];
/**********************************************************/
echo 'lastWebAccess = '.$stats_data['lastWebAccess'].'<br />';
echo 'subscriberCount = '.$stats_data['subscriberCount'].'<br />';
echo 'videoWatchCount = '.$stats_data['videoWatchCount'].'<br />';
echo 'viewCount = '.$stats_data['viewCount'].'<br />';
echo 'totalUploadViews = '.$stats_data['totalUploadViews'].'<br />';
?>
I could do it with regex for my page; not sure whether it works for you or not. Check the following code:
<?php
$channel = 'http://youtube.com/user/YOURUSERNAME/';
$t = file_get_contents($channel);
$pattern = '/yt-uix-tooltip" title="(.*)" tabindex/';
preg_match($pattern, $t, $matches, PREG_OFFSET_CAPTURE);
echo $matches[1][0];
<?php
//this code was written by Abdu ElRhoul
//If you have any questions please contact me at info@oklahomies.com
//My website is http://Oklahomies.com
set_time_limit(0);

function retrieveContent($url){
    $file = fopen($url, "rb");
    if (!$file)
        return "";
    $salida = "";
    while (feof($file) === false) {
        $line = fgets($file, 1024);
        $salida .= $line;
    }
    fclose($file);
    return $salida;
}

$content = retrieveContent("https://www.youtube.com/user/rhoula/about"); //replace rhoula with the channel name
$start = strpos($content, '<span class="about-stat"><b>');
$end = strpos($content, '</b>', $start + 1);
$output = substr($content, $start, $end - $start);
echo "Number of Subscribers = $output";
?>
<?php
echo get_subscriber("UCOshmVNmGce3iwozz55hpww");

function get_subscriber($channel, $use = "user") {
    $subs = 0;
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_URL, "https://www.youtube.com/" . $use . "/" . $channel . "/about?disable_polymer=1");
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($ch, CURLOPT_POST, 0);
    curl_setopt($ch, CURLOPT_REFERER, 'https://www.youtube.com/');
    curl_setopt($ch, CURLOPT_USERAGENT, 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:59.0) Gecko/20100101 Firefox/59.0');
    $result = curl_exec($ch);
    $R = curl_getinfo($ch);
    curl_close($ch);
    if ($R["http_code"] == 200) {
        $pattern = '/yt-uix-tooltip" title="(.*)" tabindex/';
        preg_match($pattern, $result, $matches, PREG_OFFSET_CAPTURE);
        $subs = intval(str_replace(',', '', $matches[1][0]));
    }
    // Fall back to /channel/ URLs if the /user/ page yielded nothing
    if ($subs == 0 && $use == "user") return get_subscriber($channel, "channel");
    return $subs;
}
I would like to create a PHP script that will go to another website (given a URL) and check the page source of that page for a certain string of data.
I actually have a way of doing it right now, but looking for an alternative way.
Right now I'm using the file_get_contents php function to read in the page source of the URL into a variable.
$link = "www.example.com";
$linkcontents = file_get_contents($link);
Then I use the strpos php function to search the page for the string I'm looking for:
$needle = "<div>find me</div>";
if (strpos($linkcontents, $needle) == false) {
    echo "String not found";
} else {
    echo "String found";
}
I have heard that cURL is good for handling things that have to do with URLs; I'm just not sure how I would use it to do what I'm doing with the file_get_contents and strpos functions combined, as above.
Or if there is another way to do it, I'm all ears :-)
Well, we construct a cURL function like this:
function Visit($irc_server){
    // Open the connection
    $user_agent = $_SERVER['HTTP_USER_AGENT'];
    $port = '80';
    $ch = curl_init(); // initialize curl handle
    curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, FALSE);
    curl_setopt($ch, CURLOPT_URL, $irc_server);
    curl_setopt($ch, CURLOPT_FAILONERROR, 1);
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($ch, CURLOPT_TIMEOUT, 50);
    curl_setopt($ch, CURLOPT_USERAGENT, $user_agent);
    curl_setopt($ch, CURLOPT_PORT, $port);
    $data = curl_exec($ch);
    $httpcode = curl_getinfo($ch, CURLINFO_HTTP_CODE);
    $curl_errno = curl_errno($ch);
    $curl_error = curl_error($ch);
    if ($curl_errno > 0) {
        $return = ("cURL Error ($curl_errno): $curl_error\n");
    } else {
        $return = $data;
    }
    curl_close($ch);
    /*if($httpcode >= 200 && $httpcode < 300){
        $return = 'OK';
    }else{
        $return = 'Nok';
    }*/
    return $return;
}
Another function to process our URL:
function tenta($url){
    // Now, create an instance of your class, define the behaviour
    // of the crawler (see class reference for more options and details)
    // and start the crawling process.
    $crawler = new MyCrawler();
    // URL to crawl
    $crawler->setURL($url);
    // Only receive content of files with content-type "text/html"
    $crawler->addContentTypeReceiveRule("#text/html#");
    // Ignore links to pictures, don't even request pictures
    $crawler->addURLFilterRule("#\.(jpg|jpeg|gif|png)$# i");
    // Store and send cookie data like a browser does
    $crawler->enableCookieHandling(true);
    // Set the traffic limit to 1 MB (in bytes;
    // for testing we don't want to "suck" the whole site)
    $crawler->setTrafficLimit(1000 * 1024);
    // That's enough; now here we go
    $crawler->go();
    // At the end, after the process is finished, we print a short
    // report (see method getProcessReport() for more information)
    $report = $crawler->getProcessReport();
    if (PHP_SAPI == "cli") $lb = "\n";
    else $lb = "<br />";
    /*
    echo "Summary:".$lb;
    echo "Links followed: ".$report->links_followed.$lb;
    echo "Documents received: ".$report->files_received.$lb;
    echo "Bytes received: ".$report->bytes_received." bytes".$lb;
    echo "Process runtime: ".$report->process_runtime." sec".$lb; */
}
We construct our class:
// It may take a while to crawl a site ...
set_time_limit(110000);
// Include the phpcrawl main class
include("libs/PHPCrawler.class.php");

// Extend the class and override the handleDocumentInfo() method
class MyCrawler extends PHPCrawler
{
    function handleDocumentInfo($DocInfo)
    {
        global $find;
        // Just detect linebreak for output ("\n" in CLI mode, otherwise "<br>").
        if (PHP_SAPI == "cli") $lb = "\n";
        else $lb = "<br />";
        // Print the URL and the HTTP status code
        echo "Page requested: " . $DocInfo->url . " (" . $DocInfo->http_status_code . ")" . $lb;
        //echo $img_url = '<img src="'.$DocInfo->url.'.jpg" width = "150" height = "150" />'.$lb;
        // We are looking for our keywords (e.g. kenya) on this domain
        foreach ($find as $matche) {
            $matchb = implode(',', $matche);
            //$matchb = $matche['word'];
            if (preg_match("/(" . $matchb . ")/i", Visit($DocInfo->url))) {
                echo "<a href=" . $DocInfo->url . " target=_blank>" . $DocInfo->url . "</a><b style='color:red;'>" . $matche['word'] . "</b>" . $lb;
            }
        }
        // Print the referring URL
        echo "Referer-page: " . $DocInfo->referer_url . $lb;
        // Print whether the content of the document was received or not
        if ($DocInfo->received == true)
            echo "Content received: " . $DocInfo->bytes_received . " bytes" . $lb;
        else
            echo "Content not received" . $lb;
        // Now you should do something with the content of the actual
        // received page or file ($DocInfo->source); we skip it in this example
        echo $lb;
        flush();
    }
}
Our variables, as arrays.
URLs we will be crawling:
$url = array(
    array("id" => 7, "name" => "soltechit", "url" => "soltechit.co.uk"),
    array("id" => 5, "name" => "CNN", "url" => "cnn.com", "description" => "A social utility that connects people, to keep up with friends, upload photos, share links")
);
Strings we are looking for:
$find = array(
    array("word" => "routers"),
    array("word" => "Moose"),
    array("word" => "worm"),
    array("word" => "kenya"),
    array("word" => "alshabaab"),
    array("word" => "ISIS"),
    array("word" => "security"),
    array("word" => "windows 10 release"),
    array("word" => "hacked")
);
Which we call like this:
foreach ($url as $urls) {
    $url = $urls['url'];
    echo '<h2>' . $urls['name'] . '</h2>';
    echo $urls['description'] . '<br>';
    echo tenta($url) . '<br>';
}
If file_get_contents works just fine for the task at hand, why change anything...? I say keep using it.
Note that you'll need to pass it a URL that starts with "http://", otherwise it'll try to open a local file called "www.example.com".
Also, it's good practice to do === false with strpos, since otherwise a match at position 0 will not be recognized (0 == false is true, but 0 === false is not).
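That said, if you do want the cURL equivalent of the file_get_contents + strpos combination from the question, a minimal sketch looks like this:
$ch = curl_init("http://www.example.com");
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // return the body instead of printing it
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true); // follow redirects like a browser would
$linkcontents = curl_exec($ch);
curl_close($ch);
$needle = "<div>find me</div>";
echo (strpos($linkcontents, $needle) === false) ? "String not found" : "String found";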
Something better, which I guess would be of help, goes like this:
<?php
// It may take a while to crawl a site ...
set_time_limit(10000);
// Include the phpcrawl main class
include("libs/PHPCrawler.class.php");

// Extend the class and override the handleDocumentInfo() method
class MyCrawler extends PHPCrawler
{
    function handleDocumentInfo($DocInfo)
    {
        // Just detect linebreak for output ("\n" in CLI mode, otherwise "<br>").
        if (PHP_SAPI == "cli") $lb = "\n";
        else $lb = "<br />";
        // Print the URL and the HTTP status code
        echo "Page requested: " . $DocInfo->url . " (" . $DocInfo->http_status_code . ")" . $lb;
        // Print the referring URL
        echo "Referer-page: " . $DocInfo->referer_url . $lb;
        // Print whether the content of the document was received or not
        if ($DocInfo->received == true)
            echo "Content received: " . $DocInfo->bytes_received . " bytes" . $lb;
        else
            echo "Content not received" . $lb;
        // Now you should do something with the content of the actual
        // received page or file ($DocInfo->source); we skip it in this example
        echo $lb;
        flush();
    }
}

// Now, create an instance of your class, define the behaviour
// of the crawler (see class reference for more options and details)
// and start the crawling process.
$crawler = new MyCrawler();
// URL to crawl
$crawler->setURL("www.php.net");
// Only receive content of files with content-type "text/html"
$crawler->addContentTypeReceiveRule("#text/html#");
// Ignore links to pictures, don't even request pictures
$crawler->addURLFilterRule("#\.(jpg|jpeg|gif|png)$# i");
// Store and send cookie data like a browser does
$crawler->enableCookieHandling(true);
// Set the traffic limit to 1 MB (in bytes;
// for testing we don't want to "suck" the whole site)
$crawler->setTrafficLimit(1000 * 1024);
// That's enough; now here we go
$crawler->go();

// At the end, after the process is finished, we print a short
// report (see method getProcessReport() for more information)
$report = $crawler->getProcessReport();
if (PHP_SAPI == "cli") $lb = "\n";
else $lb = "<br />";
echo "Summary:" . $lb;
echo "Links followed: " . $report->links_followed . $lb;
echo "Documents received: " . $report->files_received . $lb;
echo "Bytes received: " . $report->bytes_received . " bytes" . $lb;
echo "Process runtime: " . $report->process_runtime . " sec" . $lb;
?>
I wanted to ask for your help. I have an XML source (http://livefmhits.6te.net/nowplay.xml); it gives me the currently playing song, and I wanted to fetch the cover via Last.fm (artist.getinfo) and echo it. I tried as follows:
<?php
$xml = simplexml_load_file('http://livefmhits.6te.net/nowplay.xml');
$artist = urlencode($xml->TRACK["ARTIST"]);
$url = 'http://ws.audioscrobbler.com/2.0/?method=artist.getinfo&artist=' . $artist . '&api_key=b25b959554ed76058ac220b7b2e0a026';
$xml2 = @simplexml_load_file($url);
if ($xml2 === false)
{
    echo("Url failed"); // do whatever you want to do
}
else
{
    if ($xml2->track->album->image[3])
    {
        echo '<img src="';
        echo((string) $xml2->track->album->image[3]);
        echo '">';
    }
    else
    {
        echo "<img src='http://3.bp.blogspot.com/-SEsYAbASI68/VZ7xNuKy-GI/AAAAAAAAA3M/IWcGRDoXXms/s1600/capaindisponivel.png' />"; // do whatever you want to do
    }
}
I'm not able to extract the image, so the echo must be wrong; I'd like to pull the image marked "mega". Here is the complete link:
http://ws.audioscrobbler.com/2.0/?method=artist.getinfo&lang=ru&artist=COLDPLAY&api_key=ae9dc375e16f12528b329b25a3cca3ee. I also tried to follow one of your posts but could not get it working (Get large artist image from last.fm xml (api artist.getinfo)).
Thanks in advance for your help with this.
Here is how I'm doing it in json. It's pretty much the same in XML.
First, we define the API KEY:
define('YOUR_API_KEY', 'b25b959554ed76058ac220b7b2e0a026');
It's better to separate it from the code; that makes things easier if you need to reuse it somewhere else (e.g. in another function).
Then, we create the two functions we need to make the magic happen.
1) To query Last.fm's API and get its content, we will use cURL:
function _curl($url)
{
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 15);
    if (strtolower(parse_url($url, PHP_URL_SCHEME)) == 'https')
    {
        curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, 1);
        curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, 2); // 2 = verify the host name against the certificate
    }
    curl_setopt($ch, CURLOPT_URL, $url);
    $data = curl_exec($ch);
    curl_close($ch);
    return $data;
}
2) Last.fm offers many options. Personally, I find it easier to separate the main queries into functions. But as you simply target images, here is the function I'd use:
function lfm_img($artist)
{
    $url = "http://ws.audioscrobbler.com/2.0/?method=artist.getinfo&artist=$artist&api_key=" . YOUR_API_KEY . "&format=json";
    $json = _curl($url);
    $data = str_ireplace("#text", "text", $json);
    $list = json_decode($data);
    //If an error occurs...
    if ($list->error)
        return 'ERROR.' . $list->error;
    //That's where we get the photo. We try to get the biggest size first, then fall back to smaller sizes. Returns '0' if nothing is found.
    if ($list->artist->image[4])
        $img = $list->artist->image[4]->text;
    else if ($list->artist->image[3])
        $img = $list->artist->image[3]->text;
    else if ($list->artist->image[2])
        $img = $list->artist->image[2]->text;
    else if ($list->artist->image[1])
        $img = $list->artist->image[1]->text;
    else if ($list->artist->image[0])
        $img = $list->artist->image[0]->text;
    else
        $img = 0;
    return $img;
}
And finally, use them:
$artist_query = 'Nirvana';
$artist_image = lfm_img($artist_query);
//display image
echo '<img src="'. $artist_image .'" alt="'. $artist_query .'" />';
I think it's self-explanatory here. ;)
Hope it helped!