I was getting the YouTube title and description from the code below, but now it is not working. I am getting the following errors:
Warning: DOMDocument::load() [domdocument.load]: http:// wrapper is disabled in the server configuration by allow_url_fopen=0 in /home/colorsfo/public_html/zaroorat/admin/pages/addSongProcess.php on line 16
Warning: DOMDocument::load(http://gdata.youtube.com/feeds/api/videos/Y7G-tYRzwYY) [domdocument.load]: failed to open stream: no suitable wrapper could be found in /home/colorsfo/public_html/zaroorat/admin/pages/addSongProcess.php on line 16
Warning: DOMDocument::load() [domdocument.load]: I/O warning : failed to load external entity "http://gdata.youtube.com/feeds/api/videos/Y7G-tYRzwYY" in /home/colorsfo/public_html/zaroorat/admin/pages/addSongProcess.php on line 16
The following code is used to get the YouTube video data:
$url = "http://gdata.youtube.com/feeds/api/videos/".$embedCodeParts2[0];
$doc = new DOMDocument;
$doc->load($url);
$title = $doc->getElementsByTagName("title")->item(0)->nodeValue;
$videoDescription = $doc->getElementsByTagName("description")->item(0)->nodeValue;
It was working before (this code works fine on my local server, but not on the live server). Please guide me on how to fix this error.
Thanks for your time.
Your server's allow_url_fopen is disabled (so is mine). I feel your pain. Here's what I did.
Try using cURL, and return your data as JSON using YouTube's v2 API. You do that by appending the following to the end of your URL:
?v=2&alt=json
You didn't post how you're getting your YouTube ID, and that may be part of the issue (though your sample URL did work). So just in case, I'm also posting a simple function to retrieve the ID from the YouTube video URL.
// Extract the video ID from a standard watch URL. This assumes ?v=... is the
// first query parameter, as in http://www.youtube.com/watch?v=XXXXXXXXXXX.
function get_youtube_id($url) {
    $newurl = parse_url($url);
    return substr($newurl['query'], 2);
}
OK, now assuming you have your video id, you can run the following function for each field you wish to return.
// Grab JSON from YouTube and decode it into PHP arrays.
// The available fields are defined in the switch; an unrecognised (or omitted)
// $info value returns the entire decoded array.
// Example of the pretty-printed JSON returned by the feed:
// http://gdata.youtube.com/feeds/api/videos/dQw4w9WgXcQ?v=2&alt=json&prettyprint=true
function get_youtube_info($vid, $info) {
    $youtube = "http://gdata.youtube.com/feeds/api/videos/$vid?v=2&alt=json";
    $ch = curl_init($youtube);
    curl_setopt($ch, CURLOPT_HEADER, false);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    $output = curl_exec($ch);
    curl_close($ch);

    // Decode the JSON into associative arrays.
    $output = json_decode($output, true);

    // Add the ['feed'] level in if it exists.
    if (isset($output['feed'])) {
        $path = &$output['feed']['entry'];
    } else {
        $path = &$output['entry'];
    }

    // Switch on the requested field; no match returns the entire decoded array.
    switch ($info) {
        case 'title':
            $output = $path['title']['$t'];
            break;
        case 'description':
            $output = $path['media$group']['media$description']['$t'];
            break;
        case 'author':
            $output = $path['author'][0]['name'];
            break;
        case 'author_uri':
            $output = $path['author'][0]['uri'];
            break;
        case 'thumbnail_small':
            $output = $path['media$group']['media$thumbnail'][0]['url'];
            break;
        case 'thumbnail_medium':
            $output = $path['media$group']['media$thumbnail'][2]['url'];
            break;
        case 'thumbnail_large':
            $output = $path['media$group']['media$thumbnail'][3]['url'];
            break;
        default:
            // Unrecognised field: fall through and return the whole array.
            break;
    }
    return $output;
}
$url = "http://www.youtube.com/watch?v=oHg5SJYRHA0";
$id = get_youtube_id($url);
echo "<h3>" . get_youtube_info($id, 'title') . "</h3>"; //echoes the title
echo "<p><img style='float:left;margin-right: 5px;' src=" . get_youtube_info($id, 'thumbnail_small') . " />" . get_youtube_info($id, 'description') . "</p>"; //echoes the description
echo "<br style='clear:both;' /><pre>";
echo print_r(get_youtube_info($id));
echo "</pre>";
DOMDocument's load() function uses PHP's fopen wrappers to retrieve files.
It seems that on your webserver, allow_url_fopen is set to 0, which disables these wrappers.
Try adding the following line to the top of your script:
ini_set ('allow_url_fopen', 1);
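Note that on many hosts allow_url_fopen is a system-level setting that ini_set() cannot override at runtime, so it's worth checking whether the call actually took effect (a quick check using ini_get()):
// If this still reports "0" (or an empty string) after the ini_set() call above,
// the host overrides the setting and you will need the cURL approach below.
var_dump(ini_get('allow_url_fopen'));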
UPDATE: Try this:
<?php
$url = "http://gdata.youtube.com/feeds/api/videos/" . $embedCodeParts2[0];
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_HEADER, 0);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
$file = curl_exec($ch);
curl_close($ch);
$doc = new DOMDocument;
$doc->loadHTML($file);
$title = $doc->getElementsByTagName("title")->item(0)->nodeValue;
$videoDescription = $doc->getElementsByTagName("description")->item(0)->nodeValue;
I hope it is not too late. My solution was to edit /etc/resolv.conf on the Linux machine and replace the first line with the line below:
nameserver 8.8.8.8
Then save the file; no service restart is needed.
This might work on servers where something was accidentally disabled or misconfigured for security reasons.
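If you suspect DNS rather than allow_url_fopen, here is a quick check you can run from PHP (gethostbyname() returns the hostname unchanged when the lookup fails):
$ip = gethostbyname('gdata.youtube.com');
echo ($ip === 'gdata.youtube.com') ? "DNS lookup failed\n" : "Resolved to $ip\n";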
This question is a continuation of my previous question.
<?php
$remoteFile = 'http://cdn/bucket/my textfile.txt';
$ch = curl_init($remoteFile);
curl_setopt($ch, CURLOPT_NOBODY, true);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_HEADER, true);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true); //not necessary unless the file redirects (like the PHP example we're using here)
$data = curl_exec($ch);
print_r($data);
curl_close($ch);
if ($data === false) {
echo 'cURL failed';
exit;
}
$contentLength = 'unknown';
$status = 'unknown';
if (preg_match('/^HTTP\/1\.[01] (\d\d\d)/', $data, $matches)) {
$status = (int)$matches[1];
}
if (preg_match('/Content-Length: (\d+)/', $data, $matches)) {
$contentLength = (int)$matches[1];
}
echo 'HTTP Status: ' . $status . "\n";
echo 'Content-Length: ' . $contentLength;
?>
I am using the above code to get the file size (server side) from a CDN URL, but when the CDN URL contains a space, it throws the error below:
page not found 09/18/2014 - 16:54 http://cdn/bucket/my textfile.txt
Can I make a cURL call to a remote URL which contains a space?
To give a little more info on this: I have an interface where the user saves a file to the CDN (so the user can give whatever title they want, and it may contain spaces), and all the information is saved in a back-end DB. I have another interface where I retrieve the saved information and show it on my page along with the file size, which I get using the above code.
You have to encode the part of your URL that has the space in it; running urlencode() over the whole URL would also encode the :// and the slashes. Note that urlencode() turns spaces into +, which suits query strings, while rawurlencode() (second example) produces %20, which suits path segments.
echo 'http://cdn/bucket/' . urlencode('my textfile.txt');
Ref: urlencode
or you can use,
echo '<a href="http://example.com/department_list_script/',
rawurlencode('sales and marketing/Miami'), '">';
Ref: rawurlencode
Yes, you need to URL/URI encode.
In an encoded URL, spaces are encoded as %20, so your URL would be http://cdn/bucket/my%20textfile.txt and you could just use that URL directly.
Or, as this is PHP, you could encode it programmatically. Only encode the segment containing the space (rawurlencode() produces %20 and is suited to path segments); applying urlencode() to the whole URL would also mangle the :// and /.
ref: http://php.net/manual/en/function.rawurlencode.php
e.g.
$remoteFile = 'http://cdn/bucket/' . rawurlencode('my textfile.txt');
or
$ch = curl_init(str_replace(' ', '%20', $remoteFile));
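Putting it together with the HEAD-style request from the question (a minimal sketch; the CDN URL is the hypothetical one from the question, and only the space is percent-encoded):
$remoteFile = 'http://cdn/bucket/my textfile.txt';
$encoded = str_replace(' ', '%20', $remoteFile); // encode only the space, not the whole URL

$ch = curl_init($encoded);
curl_setopt($ch, CURLOPT_NOBODY, true);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_HEADER, true);
$headers = curl_exec($ch);
curl_close($ch);

if (preg_match('/Content-Length: (\d+)/', $headers, $m)) {
    echo 'Content-Length: ' . (int)$m[1];
}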
I am trying to get the song name, artist name, song length, bitrate, etc. from a remote .mp3 file such as http://shiro-desu.com/scr/11.mp3.
I have tried the getID3 script, but from what I understand it doesn't work for remote files, as I got this error: "Remote files are not supported - please copy the file locally first"
Also, this code:
<?php
$tag = id3_get_tag( "http://shiro-desu.com/scr/11.mp3" );
print_r($tag);
?>
did not work either.
"Fatal error: Call to undefined function id3_get_tag() in /home4/shiro/public_html/scr/index.php on line 2"
The error you get (Call to undefined function id3_get_tag()) means the ID3 extension is not enabled in your PHP configuration.
If you don't have the ID3 extension, check here for installation info.
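A quick way to confirm that diagnosis before going further (extension_loaded() is a standard PHP function; 'id3' is the name of the PECL extension that provides id3_get_tag()):
if (!extension_loaded('id3')) {
    die('The id3 extension is not installed/enabled on this server.');
}
$tag = id3_get_tag('http://shiro-desu.com/scr/11.mp3');
print_r($tag);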
Firstly, I didn't create this; I'm just making it easy to understand with a full example.
You can read more about it here, thanks to archive.org:
https://web.archive.org/web/20160106095540/http://designaeon.com/2012/07/read-mp3-tags-without-downloading-it/
To begin, download this library from here: http://getid3.sourceforge.net/
When you open the zip folder, you'll see 'getid3'. Save that folder into your working folder.
Next, create a folder called "temp" in the working folder that the following script will run from.
Basically, what it does is download the first and last 64 KB of the file, and then read the metadata from that partial copy.
I enjoy a simple example. I hope this helps.
<?php
require_once("getid3/getid3.php");

$url_media = "http://example.com/myfile.mp3";

$a = getfileinfo($url_media);
echo "<pre>";
echo $a['tags']['id3v2']['album'][0] . "\n";
echo $a['tags']['id3v2']['artist'][0] . "\n";
echo $a['tags']['id3v2']['title'][0] . "\n";
echo $a['tags']['id3v2']['year'][0] . "\n";
echo "\n-----------------\n";
//print_r($a['tags']['id3v2']['album']);
echo "-----------------\n";
//print_r($a);
echo "</pre>";

function getfileinfo($remoteFile)
{
    $url = $remoteFile;
    $uuid = uniqid("designaeon_", true);
    $file = "temp/" . $uuid . ".mp3";
    $size = 0;
    $ch = curl_init($remoteFile);

    //============================== Get size via a header-only request ==============================//
    $contentLength = 'unknown';
    $ch1 = curl_init($remoteFile);
    curl_setopt($ch1, CURLOPT_NOBODY, true);
    curl_setopt($ch1, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch1, CURLOPT_HEADER, true);
    curl_setopt($ch1, CURLOPT_FOLLOWLOCATION, true); // not necessary unless the file redirects
    $data = curl_exec($ch1);
    curl_close($ch1);
    if (preg_match('/Content-Length: (\d+)/', $data, $matches)) {
        $contentLength = (int)$matches[1];
        $size = $contentLength;
    }
    //=================================================================================================//

    if (!$fp = fopen($file, "wb")) {
        echo 'Error opening temp file for binary writing';
        return false;
    } else if (!$urlp = fopen($url, "r")) {
        echo 'Error opening URL for reading';
        return false;
    }

    try {
        $to_get = 65536;    // 64 KB
        $chunk_size = 4096; // Haven't bothered to tune this, maybe other values would work better??
        $got = 0;
        $data = null;

        // Grab the first 64 KB of the file.
        while (!feof($urlp) && $got < $to_get) {
            $data = $data . fgets($urlp, $chunk_size);
            $got += $chunk_size;
        }
        fwrite($fp, $data);

        // Grab the last 64 KB of the file, if we know how big it is.
        if ($size > 0) {
            curl_setopt($ch, CURLOPT_FILE, $fp);
            curl_setopt($ch, CURLOPT_HEADER, 0);
            curl_setopt($ch, CURLOPT_RESUME_FROM, $size - $to_get);
            curl_exec($ch);
        }

        // Now $fp should contain the first and last 64 KB of the file!
        fclose($fp);
        fclose($urlp);
    } catch (Exception $e) {
        fclose($fp);
        fclose($urlp);
        echo 'Error transferring file using fopen and cURL!';
        return false;
    }

    $getID3 = new getID3;
    $filename = $file;
    $ThisFileInfo = $getID3->analyze($filename);
    getid3_lib::CopyTagsToComments($ThisFileInfo);
    unlink($file);
    return $ThisFileInfo;
}
?>
I have a link in a variable, e.g.
$link = 'http://google.com';
and I try to get the content from this link with fopen().
E.g.:
$var = fopen("'".$link."'", "rb");
echo stream_get_contents($var);
but without success. The error is:
Warning: file_get_contents('http://google.com'): failed to open stream: No such file or directory in /var/www/...
If I use the URL directly,
$var = fopen('http://google.com', "rb");
echo stream_get_contents($var);
it works perfectly.
How do I fix this, or what method should I use when the link is in a variable?
Based on your posted code, the problem is the extra quotes you concatenate around $link: fopen() then looks for the literal URL 'http://google.com' (with the quotes included) and fails. Pass the variable directly; this worked for me:
<?php
$link = "http://www.google.com";
$var = fopen($link, "rb");
echo stream_get_contents($var);
?>
This always worked for me.
$url = 'http://google.com';
$ch = curl_init($url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$data = curl_exec($ch);
curl_close($ch);
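As a follow-up to the snippet above (a minimal check; curl_exec() returns false on failure when CURLOPT_RETURNTRANSFER is set):
if ($data === false) {
    echo 'cURL request failed';
} else {
    echo $data;
}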
I am trying to use the currency exchange rate feeds of the European Central Bank (ECB):
http://www.ecb.int/stats/eurofxref/eurofxref-daily.xml
They have provided documentation on how to parse the XML, but none of the options works for me. I checked that allow_url_fopen=On is set.
http://www.ecb.int/stats/exchange/eurofxref/html/index.en.html
For instance, I used the code below, but it doesn't echo anything and it seems the $XML object is always empty.
<?php
//This is a PHP (5) script example of how eurofxref-daily.xml can be parsed
//Read eurofxref-daily.xml file in memory
//For the next command you will need the config option allow_url_fopen=On (default)
$XML=simplexml_load_file("http://www.ecb.europa.eu/stats/eurofxref/eurofxref-daily.xml");
//the file is updated daily between 2.15 p.m. and 3.00 p.m. CET
foreach($XML->Cube->Cube->Cube as $rate){
//Output the value of 1EUR for a currency code
echo '1€='.$rate["rate"].' '.$rate["currency"].'<br/>';
//--------------------------------------------------
//Here you can add your code for inserting
//$rate["rate"] and $rate["currency"] into your database
//--------------------------------------------------
}
?>
Update:
As I am behind a proxy in my test environment, I tried this, but I still can't read the XML:
function curl($url) {
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_URL, $url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    curl_close($ch);
    return curl_exec($ch);
}
$address = urlencode($address);
$data = curl("http://www.ecb.int/stats/eurofxref/eurofxref-daily.xml");
$XML = simplexml_load_file($data);
var_dump($XML); // returns boolean false
Please help me. Thanks!
I didn't find any relevant settings in php.ini. Check with phpinfo() whether you have SimpleXML support and cURL support enabled. (You should have both, and especially SimpleXML, since you're using it and it returns false rather than complaining about a missing function.)
A proxy might be an issue here; see this and this answer. Using cURL could be an answer to your problem, but note two bugs in your update: you call curl_close() before curl_exec(), and simplexml_load_file() expects a filename or URL, so the fetched body should go through simplexml_load_string() instead.
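A minimal sketch of that cURL route (the proxy address is a hypothetical placeholder; drop the CURLOPT_PROXY line if you are not behind one):
$ch = curl_init('http://www.ecb.europa.eu/stats/eurofxref/eurofxref-daily.xml');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_PROXY, 'proxy.example.com:8080'); // hypothetical proxy host:port
$body = curl_exec($ch);
curl_close($ch);

$XML = simplexml_load_string($body); // _string, not _file, because $body is the response text
foreach ($XML->Cube->Cube->Cube as $rate) {
    echo '1€=' . $rate["rate"] . ' ' . $rate["currency"] . '<br/>';
}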
Here's one alternative, found here.
$url = file_get_contents('http://www.ecb.europa.eu/stats/eurofxref/eurofxref-daily.xml');
$xml = new SimpleXMLElement($url) ;
//file_put_contents is equivalent to fopen + fwrite + fclose
//we need to output asXML() - SimpleXML returns an object built from the raw XML
file_put_contents(dirname(__FILE__)."/loc.xml", $xml->asXML());
foreach($xml->Cube->Cube->Cube as $rate){
echo '1€='.$rate["rate"].' '.$rate["currency"].'<br/>';
}
This solution works for me:
$data = [];
$url = "http://www.ecb.europa.eu/stats/eurofxref/eurofxref-hist-90d.xml";
$xmlRaw = file_get_contents($url);
$doc = new DOMDocument();
$doc->preserveWhiteSpace = FALSE;
$doc->loadXML($xmlRaw);
$node1 = $doc->getElementsByTagName('Cube')->item(0);
foreach ($node1->childNodes as $node2) {
$value = [];
foreach ($node2->childNodes as $node3) {
$value['date'] = $node2->getAttribute('time');
$value['currency'] = $node3->getAttribute('currency');
$value['rate'] = $node3->getAttribute('rate');
$data[] = $value;
unset($value);
}
}
echo "<pre"> . print_r($data) . "</pre>";
How do I get the file size of a js file on another website? I am trying to create a monitor to check that a js file exists and that it is more than 0 bytes.
For example on bar.com I would have the following code:
$filename = 'http://www.foo.com/foo.js';
echo $filename . ': ' . filesize($filename) . ' bytes';
You can use an HTTP HEAD request.
<?php
$url = "http://www.neti.ee/img/neti-logo.gif";
$head = get_headers($url, 1);
echo $head['Content-Length'];
?>
Notice: this is not a real HEAD request, but a GET request that PHP parses for its Content-Length; unfortunately the PHP function name is quite misleading. This might be sufficient for small js files, but for bigger files use a real HTTP HEAD request with cURL, because then the server only sends the headers instead of the whole body.
For that case, use the code provided by Jakub.
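Alternatively (a sketch relying on the fact that get_headers() honours the default stream context), you can make get_headers() itself send a real HEAD request without switching to cURL:
stream_context_set_default(array('http' => array('method' => 'HEAD')));
$head = get_headers('http://www.neti.ee/img/neti-logo.gif', 1);
echo $head['Content-Length'];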
Just use cURL; here is a perfectly good example listed in the manual comments:
Ref: http://www.php.net/manual/en/function.filesize.php#92462
<?php
$remoteFile = 'http://us.php.net/get/php-5.2.10.tar.bz2/from/this/mirror';
$ch = curl_init($remoteFile);
curl_setopt($ch, CURLOPT_NOBODY, true);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_HEADER, true);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true); //not necessary unless the file redirects (like the PHP example we're using here)
$data = curl_exec($ch);
curl_close($ch);
if ($data === false) {
echo 'cURL failed';
exit;
}
$contentLength = 'unknown';
$status = 'unknown';
if (preg_match('/^HTTP\/1\.[01] (\d\d\d)/', $data, $matches)) {
$status = (int)$matches[1];
}
if (preg_match('/Content-Length: (\d+)/', $data, $matches)) {
$contentLength = (int)$matches[1];
}
echo 'HTTP Status: ' . $status . "\n";
echo 'Content-Length: ' . $contentLength;
?>
Result:
HTTP Status: 302
Content-Length: 8808759
Another solution: http://www.php.net/manual/en/function.filesize.php#90913
This is just a two-step process:
Crawl the js file and store it in a variable
Check if the length of the js file is greater than 0
That's it!
Here is how you can do it in PHP
<?php
$data = file_get_contents('http://www.foo.com/foo.js');
if (strlen($data) > 0):
    echo "yay";
else:
    echo "nay";
endif;
?>
Note: You can use an HTTP HEAD request as suggested by Uku, but if you also need the content of the js file (not just its size), you would have to crawl it again :(