I want to download multiple images from the following website:
www.bbc.co.uk
I want to do it using PHP cURL; can someone point me in the right direction?
It would be nice to download all the images in one shot, but even help downloading just one or a handful would be great!
Edit: it's probably a good idea to show what I have tried:
<?php
$image_url = "www.bbc.co.uk";
$ch = curl_init();
$timeout = 0;
curl_setopt ($ch, CURLOPT_URL, $image_url);
curl_setopt ($ch, CURLOPT_CONNECTTIMEOUT, $timeout);
// Getting binary data
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_BINARYTRANSFER, 1);
$image = curl_exec($ch);
curl_close($ch);
// output to browser
header("Content-type: image/jpeg");
print $image;
?>
For some reason it is not working. I should note that I am an absolute amateur at PHP, and at programming in general.
The above code you pasted isn't doing what you think it is.
$image = curl_exec($ch);
The $image variable doesn't contain any image, it actually contains the entire HTML of that webpage as a string.
If you replace
// output to browser
header("Content-type: image/jpeg");
print $image;
with:
var_dump($image);
You will see the HTML.
Something like this:
Try to find the actual champion image source and parse it accordingly
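If you want to script the parsing step, here is a rough sketch using DOMDocument to pull `src` attributes out of the fetched HTML and then download each image with cURL. The helper names (`extract_image_urls`, `download_image`) are made up for illustration, and the relative-URL handling is deliberately simplified:

```php
<?php
// Sketch: pull image URLs out of an HTML string, then fetch each with cURL.
// Helper names are illustrative; real-world relative-URL resolution needs
// more care than the simple base-prefixing done here.

function extract_image_urls(string $html, string $base): array
{
    $doc = new DOMDocument();
    // Suppress warnings caused by imperfect real-world HTML.
    @$doc->loadHTML($html);
    $urls = [];
    foreach ($doc->getElementsByTagName('img') as $img) {
        $src = $img->getAttribute('src');
        if ($src === '') {
            continue;
        }
        // Prefix the base URL for relative paths (simplified).
        if (!preg_match('#^https?://#i', $src)) {
            $src = rtrim($base, '/') . '/' . ltrim($src, '/');
        }
        $urls[] = $src;
    }
    return $urls;
}

function download_image(string $url, string $dest): bool
{
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
    $data = curl_exec($ch);
    curl_close($ch);
    return $data !== false && file_put_contents($dest, $data) !== false;
}
```

You would fetch the page HTML first (as in the code above), call `extract_image_urls($image, 'http://www.bbc.co.uk')`, and loop over the result calling `download_image()` for each URL.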
You just need the whole /Air/assets/images/champions folder right?
Nothing could be easier: use Firefox with a download add-on like "Download Them All", open the FTP folder (Eambo's link) where the champions' pictures are located, right-click, and select the plug-in.
It will list all the files; select them all, or just the ones you need, and start the download.
Also, if you own that game you can take a look in this path:
\League of Legends\rads\projects\lol_air_client\releases\<newest-version>\deploy\assets\images\champions
Since you will probably use that in your university, CHECK THIS and I hope you will find a solution.
Related
$curl = curl_init();
curl_setopt($curl, CURLOPT_URL, 'http://www.example.com/xxx.png');
curl_setopt($curl, CURLOPT_REFERER, 'http://www.example.com');
curl_setopt($curl, CURLOPT_RETURNTRANSFER, 1);
$result = curl_exec($curl);
header('Content-type:image/PNG');
echo $result;
curl_close($curl);
The function header() didn't work; it always displayed binary data.
Maybe it's because I put this code in the middle of an existing web page.
The page output some text before header() was called, so it didn't work.
I want to fetch an image by URL and display it directly, with no need to save the file to disk.
So how can I do this? Please help me!
—————————
I need to set a referer, so I used cURL.
Use the image directly as follows:
<img src='http://www.example.com/xxx.png'>
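If you really do need the referer, another option is to point the `<img>` tag at a small proxy script of your own, so that header() is the very first output that script produces. This is only a sketch: the script name `image_proxy.php` and the helper functions are made up, and a real proxy should restrict which URLs it is willing to fetch:

```php
<?php
// image_proxy.php (hypothetical name): fetch a remote image with a referer
// set and stream it to the browser. Because this script emits nothing
// before header(), the "output already sent" problem goes away.

// Minimal helper: guess a Content-Type from the URL's file extension.
function content_type_for(string $url): string
{
    $path = (string) parse_url($url, PHP_URL_PATH);
    $ext  = strtolower((string) pathinfo($path, PATHINFO_EXTENSION));
    $types = ['png' => 'image/png', 'jpg' => 'image/jpeg',
              'jpeg' => 'image/jpeg', 'gif' => 'image/gif'];
    return $types[$ext] ?? 'application/octet-stream';
}

function serve_image(string $url, string $referer): void
{
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_REFERER, $referer);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    $data = curl_exec($ch);
    curl_close($ch);

    header('Content-Type: ' . content_type_for($url));
    echo $data;
}

// Usage (in the existing page): <img src='image_proxy.php'>
// and in image_proxy.php itself:
// serve_image('http://www.example.com/xxx.png', 'http://www.example.com');
```

The existing page then never has to call header() at all, so it doesn't matter what it has already output.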
I want to use thumbnails on my website, which is essentially a directory of websites.
I've been thinking of saving URL thumbnails into a certain directory.
Example:
I'm going to use a free website-thumbnail service that gives me code to show the thumbnail image of any URL, as follows:
<img src='http://thumbnails_provider.com/code=MY_ID&url=ANY_SITE.COM'/>
This shows the thumbnail of ANY_SITE.COM.
I want to save the generated thumbnail image into a certain directory, my_site.com/thumbnails.
Why am I doing this?
My database table is like my_table {id,url,image}. I'm going to give each thumbnail a random name and store its new name in my_table together with its URL, so I can call it back any time. I know how to do that part, but I don't know how to save the image into a certain directory.
Any help is appreciated, thanks!
Using cURL should work for you:
$file = 'the URL';
$ch = curl_init($file);
curl_setopt($ch, CURLOPT_HEADER, 0);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_BINARYTRANSFER, 1);
$rawdata = curl_exec($ch);
curl_close($ch);
$fullpath = 'path to destination';
$fp = fopen($fullpath, 'wb'); // note: fopen() requires a mode argument
fwrite($fp, $rawdata);
fclose($fp);
You could use curl to fetch the remote image. You can save it by opening a file handle with fopen() and passing it via curl_setopt($handler, CURLOPT_FILE, $fp); (CURLOPT_FILE takes a stream resource, not a path). The id could be something simple like a hash of the original URL. Obviously you'd have to check that the directories exist before you save the file (using is_dir(), and creating them with mkdir() if they don't).
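Putting those pieces together, a sketch might look like the following. The directory layout is a placeholder, and the md5-of-URL naming is just one way to get a stable, unique file name (a random name stored in the database, as described in the question, works just as well):

```php
<?php
// Sketch: derive a local path from the thumbnail URL, make sure the
// directory exists, then let cURL write the response straight to the file
// with CURLOPT_FILE. Directory and service URL below are placeholders.

function thumbnail_path(string $dir, string $url): string
{
    // A hash of the source URL gives a unique, repeatable file name.
    return rtrim($dir, '/') . '/' . md5($url) . '.jpg';
}

function save_thumbnail(string $url, string $dir): string|false
{
    if (!is_dir($dir) && !mkdir($dir, 0755, true)) {
        return false;
    }
    $path = thumbnail_path($dir, $url);
    $fp = fopen($path, 'wb');
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_FILE, $fp); // write the body directly to the file
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
    $ok = curl_exec($ch);
    curl_close($ch);
    fclose($fp);
    return $ok ? $path : false;
}

// Usage:
// $path = save_thumbnail('http://thumbnails_provider.com/code=MY_ID&url=ANY_SITE.COM',
//                        __DIR__ . '/thumbnails');
```

The returned path (or just its basename) is what you would store in the `image` column of my_table.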
Is this possible using php + ffmpeg?
ffmpeg-php has the ability to:
Ability to grab frames from movie files and return them as images that
can be manipulated using PHP's built-in image functions. This is great
for automatically creating thumbnails for movie files.
I just don't want to download the whole file before doing so.
So let's say I want to grab a frame at 10% of the movie:
First, let's get the size of the remote file:
$ch = curl_init();
curl_setopt($ch, CURLOPT_HEADER, true);
curl_setopt($ch, CURLOPT_NOBODY, true);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
curl_setopt($ch, CURLOPT_URL, $url); //specify the url
curl_setopt($ch, CURLOPT_RETURNTRANSFER, TRUE);
$head = curl_exec($ch);
$size = curl_getinfo($ch,CURLINFO_CONTENT_LENGTH_DOWNLOAD);
Then it's quite easy to download only the first 10% of the .flv or .mov file using curl.
But will the frame-grab trick using ffmpeg-php still work, or will the truncated file be treated as corrupted?
Any other ideas?
Yes I believe this will work. For video files as long as you do have the start of the file, processing like this should be possible. (If you only had, for example, a chunk of the file from the middle, it probably wouldn't work.)
On the command line I downloaded the first part of an .FLV file with Curl, then grabbed frames using ffmpeg and it worked correctly. Doing the same in PHP should work as well.
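A sketch of the PHP side, building on the size check above: request only the first fraction of the file with CURLOPT_RANGE, write it to disk, and hand the partial file to ffmpeg. This assumes the server honors Range headers and that ffmpeg is on the PATH; file names and helper names are illustrative:

```php
<?php
// Sketch: download only the first fraction of a remote video via a Range
// request, then grab a frame from the partial file with ffmpeg. Assumes
// the server supports Range requests; paths are placeholders.

// Pure helper: byte-range string for the first $fraction of a $size-byte file.
function range_for_fraction(int $size, float $fraction): string
{
    $end = max(0, (int) floor($size * $fraction) - 1);
    return '0-' . $end;
}

function grab_partial(string $url, int $size, float $fraction, string $dest): bool
{
    $fp = fopen($dest, 'wb');
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RANGE, range_for_fraction($size, $fraction));
    curl_setopt($ch, CURLOPT_FILE, $fp);
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
    $ok = curl_exec($ch);
    curl_close($ch);
    fclose($fp);
    return (bool) $ok;
}

// Usage, with $size taken from the CURLINFO_CONTENT_LENGTH_DOWNLOAD check:
// grab_partial($url, $size, 0.10, '/tmp/partial.flv');
// shell_exec('ffmpeg -i /tmp/partial.flv -frames:v 1 /tmp/thumb.png');
```

As the answer notes, this works because container formats like FLV keep their headers at the start of the file, so a prefix of the file is still decodable up to the point where the data runs out.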
The following code transfers an image that is created on the fly from a server to a client site using cURL. It stopped working recently, and I have not been able to find out what the problem is:
// get_image.php
ob_start();
// create a new CURL resource
$ch = curl_init();
// set URL and other appropriate options
curl_setopt($ch, CURLOPT_URL, 'url/to/image.php');
curl_setopt($ch, CURLOPT_HEADER, false);
curl_setopt($ch, CURLOPT_BINARYTRANSFER, true);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
// set timeouts
set_time_limit(30);
curl_setopt($ch, CURLOPT_TIMEOUT, 30);
// open a stream for writing
$outFile = fopen($fileDestination, 'wb');
curl_setopt($ch, CURLOPT_FILE, $outFile);
// grab file from URL
curl_exec($ch);
fclose($outFile);
// close CURL resource, and free up system resources
curl_close($ch);
ob_end_clean();
//image.php
/*
* Create image based on client site ...
*/
$filePath = 'path/to/image.png';
$imageFile = file_get_contents($filePath);
header("content-type: image/png");
echo $imageFile;
unlink($filePath);
The file get_image.php lives on a client site and calls the file image.php, which lives on my server.
After running this code, the image on the client site is about 7 bytes larger than the original; these bytes seem to be line breaks. After debugging for several hours I found out that these bytes are added when I echo $imageFile. If the 7 bytes are removed from the resulting image by hand, the image displays correctly.
There are no errors nor exceptions thrown. The image created in the server is created with no issues. The only output in FF is "The image 'url/to/image.php' cannot be displayed, because it contains errors"
I am not sure what is causing this. Help is greatly appreciated.
Onema
UPDATE:
http://files.droplr.com/files/38059844/V5Jd.Screen%20shot%202011-01-12%20at%2012.17.53%20PM.png
http://files.droplr.com/files/38059844/QU4Z.Screen%20shot%202011-01-12%20at%2012.23.37%20PM.png
Some things to check.
That both files are stored without BOMs
That '<?php' are the first five characters and '?>' the last two in both files.
That when you remove the ob_start() and ob_end_clean() calls, no error messages appear.
If you move the unlink() to before the generation, you can see the generated file; check that it is valid.
You might want to start the practice of leaving the final ?> off the end of your files. It isn't necessary, and it can cause problems if there is whitespace or a newline after the PHP delimiter.
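To confirm that the extra bytes really are leading junk (whitespace emitted outside the PHP tags gets prepended to the image data, which is exactly the 7-byte symptom described), a small sketch that checks for the PNG signature; helper names are made up:

```php
<?php
// Sketch: detect (and optionally strip) stray bytes before the PNG
// signature in downloaded image data. Every valid PNG file starts with
// this fixed 8-byte signature.

const PNG_SIGNATURE = "\x89PNG\r\n\x1a\n";

// Returns the number of junk bytes before the PNG signature,
// or -1 if the signature is not found at all.
function png_junk_bytes(string $data): int
{
    $pos = strpos($data, PNG_SIGNATURE);
    return $pos === false ? -1 : $pos;
}

// Drops anything before the signature, leaving a clean PNG stream.
function strip_png_junk(string $data): string
{
    $pos = strpos($data, PNG_SIGNATURE);
    return $pos === false ? $data : substr($data, $pos);
}

// Usage against the saved file:
// $data = file_get_contents($fileDestination);
// if (png_junk_bytes($data) > 0) { /* whitespace was injected upstream */ }
```

Stripping is only a workaround, of course; the real fix is removing the stray whitespace (or BOM) from the PHP files, as described above.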
I want to download files from a website using PHP.
I want to create a PHP script that downloads the files without my having to go to their website; I just want to put the link into my script and have it download the file automatically.
I tried with cURL, but it doesn't work. The link is like this <a rel="nofollow" href="/download-15866-114621.srt"><b>Download</b></a>
the code :
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, 'http://subtitrari.regielive.ro/download-15866-114621.srt');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
$contents = curl_exec($ch);
echo $contents;
curl_close($ch);
I get "download failed!" as content, which means they probably have some sort of download protection. The best thing is probably to ask them what you should do (assuming you have their permission to download the file) or stop trying (assuming you don't).
Either way, try setting a referer header with CURLOPT_REFERER. Maybe they check that header to make sure no one is hotlinking the file.
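A sketch of that suggestion, with the options gathered into an array via curl_setopt_array() so they are easy to inspect; whether this actually gets past their check depends entirely on what the site looks at:

```php
<?php
// Sketch: retry the download with a referer header set, in case the site
// rejects requests that don't appear to come from its own pages.

function build_download_options(string $url, string $referer): array
{
    return [
        CURLOPT_URL            => $url,
        CURLOPT_REFERER        => $referer, // pretend we clicked the link on their page
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_FOLLOWLOCATION => true,
    ];
}

function download(string $url, string $referer): string|false
{
    $ch = curl_init();
    curl_setopt_array($ch, build_download_options($url, $referer));
    $contents = curl_exec($ch);
    curl_close($ch);
    return $contents;
}

// Usage:
// $srt = download('http://subtitrari.regielive.ro/download-15866-114621.srt',
//                 'http://subtitrari.regielive.ro/');
```

If the referer alone isn't enough, sites sometimes also check cookies or a user-agent string, which can be set the same way with CURLOPT_COOKIE and CURLOPT_USERAGENT.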