I use this script to download and resize a remote image, but something goes wrong in the resize part. What is it?
<?php
$img[]='http://i.indiafm.com/stills/celebrities/sada/thumb1.jpg';
$img[]='http://i.indiafm.com/stills/celebrities/sada/thumb5.jpg';
foreach($img as $i){
save_image($i);
if(getimagesize(basename($i))){
echo '<h3 style="color: green;">Image ' . basename($i) . ' Downloaded OK</h3>';
}else{
echo '<h3 style="color: red;">Image ' . basename($i) . ' Download Failed</h3>';
}
}
function save_image($img,$fullpath='basename'){
if($fullpath=='basename'){
$fullpath = basename($img);
}
$ch = curl_init ($img);
curl_setopt($ch, CURLOPT_HEADER, 0);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_BINARYTRANSFER,1);
$rawdata=curl_exec($ch);
curl_close ($ch);
// now you make an image out of it
$im = imagecreatefromstring($rawdata);
$x=300;
$y=250;
// then you create a second image, with the desired size
// $x and $y are the desired dimensions
$im2 = imagecreatetruecolor($x,$y);
imagecopyresized($im2,$im,0,0,0,0,$x,$y,imagesx($im),imagesy($im));
imagecopyresampled($im2,$im,0,0,0,0,$x,$y,imagesx($im),imagesy($im));
// delete the original image to save resources
imagedestroy($im);
if(file_exists($fullpath)){
unlink($fullpath);
}
$fp = fopen($fullpath,'x');
fwrite($fp, $im2);
fclose($fp);
// remember to free resources
imagedestroy($im2);
}
?>
When I run it, PHP gives me the following error:
Warning: fwrite() expects parameter 2 to be string, resource given ... on line 53
fwrite() writes a string to a file. You want to use the GD function imagejpeg() to save the GD resource to a file. It works for me when I change
$fp = fopen($fullpath,'x');
fwrite($fp, $im2);
fclose($fp);
to
imagejpeg($im2, $fullpath);
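If the downloads aren't always JPEGs, here is a small optional sketch (my addition, assuming the file extension reflects the actual image format) that picks the matching GD output function:
// Pick the GD output function by file extension (assumes the extension is accurate)
$ext = strtolower(pathinfo($fullpath, PATHINFO_EXTENSION));
if ($ext === 'png') {
    imagepng($im2, $fullpath);
} elseif ($ext === 'gif') {
    imagegif($im2, $fullpath);
} else {
    imagejpeg($im2, $fullpath);
}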
On an unrelated note, if all you're doing with cURL is grabbing the files, you can simply use file_get_contents() instead of cURL, assuming PHP is configured to allow full URLs in fopen functions. (I believe it is by default.) See the Notes section on the file_get_contents() manual page for more info. The function is binary-safe, so it works with images in addition to text files. To use it, I just replaced all six lines of the cURL functions with this line:
$rawdata = file_get_contents($img);
Update:
In response to your question below, you can specify a new file name for them in the array keys like so:
<?php
$img['img1.jpg']='http://i.indiafm.com/stills/celebrities/sada/thumb1.jpg';
$img['img2.jpg']='http://i.indiafm.com/stills/celebrities/sada/thumb5.jpg';
foreach($img as $newname => $i){
save_image($i, $newname);
if(getimagesize(basename($newname))){
echo '<h3 style="color: green;">Image ' . $newname . ' Downloaded OK</h3>';
}else{
echo '<h3 style="color: red;">Image ' . $newname . ' Download Failed</h3>';
}
}
?>
I am trying to download images from a Slack channel with the following code, but I just get the HTML code of the page.
Am I doing something silly, or do they have a trick to make this difficult?
<?php
copy('https://files.slack.com/files-tmb/T1Q3K1TFB-F6TQMM6AG-baba978dff/image003_480.jpg', 'file.jpeg');
echo '<img src="file.jpeg">';
echo '<hr>';
//Get the file
$content = file_get_contents("https://files.slack.com/files-tmb/T1Q3K1TFB-F6TQMM6AG-baba978dff/image003_480.jpg");
//Store in the filesystem.
$fp = fopen("image.jpg", "w");
fwrite($fp, $content);
fclose($fp);
echo '<img src="image.jpg">';
echo '<hr>';
$url_to_image = 'https://files.slack.com/files-tmb/T1Q3K1TFB-F6TQMM6AG-baba978dff/image003_480.jpg';
$ch = curl_init($url_to_image);
$my_save_dir = 'images/';
$filename = basename($url_to_image);
$complete_save_loc = $my_save_dir . $filename;
$fp = fopen($complete_save_loc, 'wb');
curl_setopt($ch, CURLOPT_FILE, $fp);
curl_setopt($ch, CURLOPT_HEADER, 0);
curl_exec($ch);
curl_close($ch);
fclose($fp);
echo '<img src="'.$complete_save_loc.'">';
echo '<hr>';
echo '<hr><img src="https://files.slack.com/files-tmb/T1Q3K1TFB-F6TQMM6AG-baba978dff/image003_480.jpg">';
echo '<hr><img src="https://files.slack.com/files-tmb/T1Q3K1TFB-F6TQMM6AG-baba978dff/image003_360.jpg">';
echo '<hr><img src="https://conversazioniconmario.slack.com/files/U1Q37UCNL/F6TQMM6AG/image003.jpg">';
?>
wget gets fooled too:
wget -nd -r -P /myLocalPath/images -A jpeg,jpg,bmp,gif,png https://files.slack.com/files-tmb/T1Q3K1TFB-F6TQMM6AG-baba978dff/image003_480.jpg
You can also download the file by providing the right headers:
wget -d --header="Authorization: Bearer A_VALID_TOKEN" https://files.slack.com/files-pri/T1Q3K1TFB-F6TQMM6AG-baba978dff/download/image003.jpg
You can download images and other files from Slack, but not directly. You need to first mark the file as public to get its public URL.
This can be done with the API method files.sharedPublicURL. It will return a file object including the permalink_public property, which is the URL you can use to download it with your app.
After you have downloaded it, you can revoke the public URL again with files.revokePublicURL.
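For illustration, here is a rough sketch of that flow in PHP; the token is a placeholder, the file ID is taken from the URLs above, and the exact handling of permalink_public may need adjusting for your workspace:
<?php
// Sketch: make a Slack file public, download it, then revoke the public URL again.
$token  = 'A_VALID_TOKEN';  // placeholder token with the required scopes
$fileId = 'F6TQMM6AG';      // file ID taken from the URLs above

function slack_api($method, array $params, $token) {
    $ch = curl_init('https://slack.com/api/' . $method);
    curl_setopt($ch, CURLOPT_POST, true);
    curl_setopt($ch, CURLOPT_POSTFIELDS, http_build_query($params));
    curl_setopt($ch, CURLOPT_HTTPHEADER, array('Authorization: Bearer ' . $token));
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    $response = curl_exec($ch);
    curl_close($ch);
    return json_decode($response, true);
}

// 1. Mark the file as public and read its public permalink
$result = slack_api('files.sharedPublicURL', array('file' => $fileId), $token);
$publicUrl = $result['file']['permalink_public'];

// 2. Download it now that it is publicly reachable
file_put_contents('image003.jpg', file_get_contents($publicUrl));

// 3. Revoke the public URL again
slack_api('files.revokePublicURL', array('file' => $fileId), $token);
?>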
For people interested in a Python implementation, I suggest checking this repo. I have just created a pull request so that the script works with the latest Slack API. https://github.com/auino/slack-downloader
I have an image on another server, but when I fetch it with the file_get_contents() function it returns a
Not Found Error
file_put_contents(destination_path, file_get_contents(another_server_path));
Please help; is there another way to get this image?
Try this.
There is a problem with special characters in the URL: you have to percent-encode the special characters in the URL's basename before fetching it.
$imgfile = 'http://www.lagrolla.com.au/image/m fr 137 group.jpg';
$destinationPath = '/path/to/folder/';
$filename = basename($imgfile);
// Encode the basename (the spaces) so the HTTP request is valid
$imgfile = str_replace($filename, rawurlencode($filename), $imgfile);
copy($imgfile, $destinationPath . $filename);
Another way to copy a file from another server is to use cURL:
$ch = curl_init('http://www.lagrolla.com.au/image/data/m%20fr%20137%20group.jpg');
$destinationPath = '/path/to/folder/filenameWithNoSpaces.jpg';
$fp = fopen($destinationPath, 'wb');
curl_setopt($ch, CURLOPT_FILE, $fp);
curl_setopt($ch, CURLOPT_HEADER, 0);
curl_exec($ch);
curl_close($ch);
fclose($fp);
Note: It is bad practice to save images with spaces in the file name, so you should save the file under a proper name.
I am looking for a function that gets the metadata of a .mp3 file from a URL (NOT a local .mp3 file on my server).
Also, I don't want to install http://php.net/manual/en/id3.installation.php or anything similar to my server.
I am looking for a standalone function.
Right now I am using this function:
<?php
function getfileinfo($remoteFile)
{
$url=$remoteFile;
$uuid=uniqid("designaeon_", true);
$file="../temp/".$uuid.".mp3";
$size=0;
$ch = curl_init($remoteFile);
//==============================Get Size==========================//
$contentLength = 'unknown';
$ch1 = curl_init($remoteFile);
curl_setopt($ch1, CURLOPT_NOBODY, true);
curl_setopt($ch1, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch1, CURLOPT_HEADER, true);
curl_setopt($ch1, CURLOPT_FOLLOWLOCATION, true); //not necessary unless the file redirects (like the PHP example we're using here)
$data = curl_exec($ch1);
curl_close($ch1);
if (preg_match('/Content-Length: (\d+)/', $data, $matches)) {
$contentLength = (int)$matches[1];
$size=$contentLength;
}
//==============================Get Size==========================//
if (!$fp = fopen($file, "wb")) {
echo 'Error opening temp file for binary writing';
return false;
} else if (!$urlp = fopen($url, "r")) {
echo 'Error opening URL for reading';
return false;
}
try {
$to_get = 65536; // 64 KB
$chunk_size = 4096; // Haven't bothered to tune this, maybe other values would work better??
$got = 0; $data = null;
// Grab the first 64 KB of the file
while(!feof($urlp) && $got < $to_get) {
$data = $data . fgets($urlp, $chunk_size);
$got += $chunk_size;
}
fwrite($fp, $data);
// Grab the last 64 KB of the file, if we know how big it is.
if ($size > 0) {
curl_setopt($ch, CURLOPT_FILE, $fp);
curl_setopt($ch, CURLOPT_HEADER, 0);
curl_setopt($ch, CURLOPT_RESUME_FROM, $size - $to_get);
curl_exec($ch);
}
// Now $fp should be the first and last 64 KB of the file!!
#fclose($fp);
#fclose($urlp);
} catch (Exception $e) {
#fclose($fp);
#fclose($urlp);
echo 'Error transfering file using fopen and cURL !!';
return false;
}
$getID3 = new getID3;
$filename=$file;
$ThisFileInfo = $getID3->analyze($filename);
getid3_lib::CopyTagsToComments($ThisFileInfo);
unlink($file);
return $ThisFileInfo;
}
?>
This function downloads 64 KB from the URL of an .mp3 file, then returns an array with the metadata by using the getID3 library (which works on local .mp3 files only), and then deletes the 64 KB it downloaded.
The problem with this function is that it is far too slow by its nature (it downloads 64 KB per .mp3; imagine 1000 mp3 files).
To make my question clear: I need a fast standalone function that reads the metadata of a remote .mp3 file from a URL.
This function downloads 64 KB from the URL of an .mp3 file, then returns an array with the metadata by using the getID3 library (which works on local .mp3 files only), and then deletes the 64 KB it downloaded. The problem with this function is that it is far too slow by its nature (it downloads 64 KB per .mp3; imagine 1000 mp3 files).
Yeah, well what do you propose? How do you expect to get data if you don't get data? There is no way to have a generic remote HTTP server send you that ID3 data. Really, there is no magic. Think about it.
What you're doing now is already pretty solid, except that it doesn't handle all versions of ID3 and won't work for files with more than 64 KB of ID3 tags. What I would do to improve it is to use multi-cURL.
There are several PHP classes available that make this easier:
https://github.com/jmathai/php-multi-curl
$mc = EpiCurl::getInstance();
$results[] = $mc->addUrl(/* Your stream URL here */); // Run this in a loop, 10 at a time or so
foreach ($results as $result) {
// Do something with the data.
}
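If you prefer not to pull in a wrapper class, here is a minimal sketch using PHP's built-in curl_multi_* functions; it fetches only the first 64 KB of each file in parallel, assuming the remote servers honour HTTP Range requests (the URLs are placeholders):
<?php
$urls = array(
    'http://example.com/song1.mp3', // placeholder URLs
    'http://example.com/song2.mp3',
);

$mh = curl_multi_init();
$handles = array();
foreach ($urls as $url) {
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
    curl_setopt($ch, CURLOPT_RANGE, '0-65535'); // first 64 KB only
    curl_multi_add_handle($mh, $ch);
    $handles[$url] = $ch;
}

// Run all transfers at once
$running = null;
do {
    curl_multi_exec($mh, $running);
    curl_multi_select($mh);
} while ($running > 0);

// Collect the partial downloads and hand each one to getID3
foreach ($handles as $url => $ch) {
    $data = curl_multi_getcontent($ch);
    // ... write $data to a temp file and run $getID3->analyze() on it ...
    curl_multi_remove_handle($mh, $ch);
    curl_close($ch);
}
curl_multi_close($mh);
?>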
I am trying to get song name / artist name / song length / bitrate etc from a remote .mp3 file such as http://shiro-desu.com/scr/11.mp3 .
I have tried the getID3 script, but from what I understand it doesn't work for remote files, as I got this error: "Remote files are not supported - please copy the file locally first"
Also, this code:
<?php
$tag = id3_get_tag( "http://shiro-desu.com/scr/11.mp3" );
print_r($tag);
?>
did not work either.
"Fatal error: Call to undefined function id3_get_tag() in /home4/shiro/public_html/scr/index.php on line 2"
The error you get ("Call to undefined function id3_get_tag()") is a common one: it means the ID3 extension is not enabled in your PHP configuration. If you don't have the ID3 extension installed, check the installation page (http://php.net/manual/en/id3.installation.php) for instructions.
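As a quick sanity check (a minimal sketch, nothing specific to your setup), you can test whether the extension is loaded before calling it:
<?php
// Only call id3_get_tag() if the id3 extension is actually available
if (extension_loaded('id3')) {
    $tag = id3_get_tag("http://shiro-desu.com/scr/11.mp3"); // the extension may still require a local file path
    print_r($tag);
} else {
    echo 'The id3 extension is not installed or enabled.';
}
?>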
Firstly, I didn't create this; I've just made it easy to understand with a full example.
You can read more about it here, thanks to archive.org:
https://web.archive.org/web/20160106095540/http://designaeon.com/2012/07/read-mp3-tags-without-downloading-it/
To begin, download this library from here: http://getid3.sourceforge.net/
When you open the zip folder, you'll see 'getid3'. Save that folder into your working folder.
Next, create a folder called "temp" in the working folder that the following script will run from.
Basically, what it does is download the first 64k of the file, and then read the metadata from the file.
I enjoy a simple example. I hope this helps.
<?php
require_once("getid3/getid3.php");
$url_media = "http://example.com/myfile.mp3";
$a=getfileinfo($url_media);
echo"<pre>";
echo $a['tags']['id3v2']['album'][0] . "\n";
echo $a['tags']['id3v2']['artist'][0] . "\n";
echo $a['tags']['id3v2']['title'][0] . "\n";
echo $a['tags']['id3v2']['year'][0] . "\n";
echo "\n-----------------\n";
//print_r($a['tags']['id3v2']['album']);
echo "-----------------\n";
//print_r($a);
echo"</pre>";
function getfileinfo($remoteFile)
{
$url=$remoteFile;
$uuid=uniqid("designaeon_", true);
$file="temp/".$uuid.".mp3";
$size=0;
$ch = curl_init($remoteFile);
//==============================Get Size==========================//
$contentLength = 'unknown';
$ch1 = curl_init($remoteFile);
curl_setopt($ch1, CURLOPT_NOBODY, true);
curl_setopt($ch1, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch1, CURLOPT_HEADER, true);
curl_setopt($ch1, CURLOPT_FOLLOWLOCATION, true); //not necessary unless the file redirects (like the PHP example we're using here)
$data = curl_exec($ch1);
curl_close($ch1);
if (preg_match('/Content-Length: (\d+)/', $data, $matches)) {
$contentLength = (int)$matches[1];
$size=$contentLength;
}
//==============================Get Size==========================//
if (!$fp = fopen($file, "wb")) {
echo 'Error opening temp file for binary writing';
return false;
} else if (!$urlp = fopen($url, "r")) {
echo 'Error opening URL for reading';
return false;
}
try {
$to_get = 65536; // 64 KB
$chunk_size = 4096; // Haven't bothered to tune this, maybe other values would work better??
$got = 0; $data = null;
// Grab the first 64 KB of the file
while(!feof($urlp) && $got < $to_get) {
$data = $data . fgets($urlp, $chunk_size);
$got += $chunk_size;
}
fwrite($fp, $data);
// Grab the last 64 KB of the file, if we know how big it is.
if ($size > 0) {
curl_setopt($ch, CURLOPT_FILE, $fp);
curl_setopt($ch, CURLOPT_HEADER, 0);
curl_setopt($ch, CURLOPT_RESUME_FROM, $size - $to_get);
curl_exec($ch);
}
// Now $fp should be the first and last 64KB of the file!!
#fclose($fp);
#fclose($urlp);
}
catch (Exception $e) {
#fclose($fp);
#fclose($urlp);
echo 'Error transfering file using fopen and cURL !!';
return false;
}
$getID3 = new getID3;
$filename=$file;
$ThisFileInfo = $getID3->analyze($filename);
getid3_lib::CopyTagsToComments($ThisFileInfo);
unlink($file);
return $ThisFileInfo;
}
?>
How do I get the filesize of a js file on another website? I am trying to create a monitor to check that a js file exists and that it is more than 0 bytes.
For example, on bar.com I would have the following code:
$filename = 'http://www.foo.com/foo.js';
echo $filename . ': ' . filesize($filename) . ' bytes';
You can use an HTTP HEAD request.
<?php
$url = "http://www.neti.ee/img/neti-logo.gif";
$head = get_headers($url, 1);
echo $head['Content-Length'];
?>
Note: by default this is not a real HEAD request, but a GET request whose response PHP parses for the Content-Length header; the function name is a bit misleading. That might be sufficient for small js files, but for bigger files use a real HTTP HEAD request with cURL, so the server only sends the headers instead of the whole file.
For that case, use the code provided by Jakub.
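Alternatively, if you want to stay with get_headers(), you can force it to send a real HEAD request through the default stream context; a small sketch:
<?php
// Make get_headers() issue a real HEAD request instead of a GET
stream_context_set_default(array(
    'http' => array('method' => 'HEAD'),
));
$head = get_headers('http://www.neti.ee/img/neti-logo.gif', 1);
echo $head['Content-Length'];
?>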
Just use cURL; here is a perfectly good example:
Ref: http://www.php.net/manual/en/function.filesize.php#92462
<?php
$remoteFile = 'http://us.php.net/get/php-5.2.10.tar.bz2/from/this/mirror';
$ch = curl_init($remoteFile);
curl_setopt($ch, CURLOPT_NOBODY, true);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_HEADER, true);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true); //not necessary unless the file redirects (like the PHP example we're using here)
$data = curl_exec($ch);
curl_close($ch);
if ($data === false) {
echo 'cURL failed';
exit;
}
$contentLength = 'unknown';
$status = 'unknown';
if (preg_match('/^HTTP\/1\.[01] (\d\d\d)/', $data, $matches)) {
$status = (int)$matches[1];
}
if (preg_match('/Content-Length: (\d+)/', $data, $matches)) {
$contentLength = (int)$matches[1];
}
echo 'HTTP Status: ' . $status . "\n";
echo 'Content-Length: ' . $contentLength;
?>
Result:
HTTP Status: 302
Content-Length: 8808759
Another solution: http://www.php.net/manual/en/function.filesize.php#90913
This is just a two-step process:
Crawl the js file and store it in a variable.
Check if the length of the js file is greater than 0.
That's it!
Here is how you can do it in PHP
<?php
$data = file_get_contents('http://www.foo.com/foo.js');
if(strlen($data)>0):
echo "yay";
else:
echo "nay";
endif;
?>
Note: You can use an HTTP HEAD request as suggested by Uku, but if you also need the page content (when the js file does have content), you would have to crawl it again :(