I have a class that allows me to download images using PHP cURL. The method looks like this:
function getImage($img, $path) {
    $fullpath = basename($img); // just the file name; the directory comes from $path
    $ch = curl_init($img);
    curl_setopt($ch, CURLOPT_HEADER, 0);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($ch, CURLOPT_BINARYTRANSFER, 1);
    $rawData = curl_exec($ch);
    curl_close($ch);
    // Remove any existing copy at the destination before writing the new one.
    if (file_exists($path . $fullpath)) {
        unlink($path . $fullpath);
    }
    $fp = fopen($path . $fullpath, "w+");
    fwrite($fp, $rawData);
    fclose($fp);
}
This works fine for most images, but in some cases I get broken images instead. I've checked the image paths on the website and they are correct. My question is: why is this happening, and how can I prevent images from being downloaded broken?
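A common cause of "broken" images is writing the response body to disk even when the transfer failed or the server returned an error page (404, redirect, timeout), so the saved file is not actually an image. Below is a minimal sketch of the same function with those checks added; this is an assumption about your case, not a confirmed diagnosis, and the timeout value is arbitrary.
function getImage($img, $path) {
    $ch = curl_init($img);
    curl_setopt($ch, CURLOPT_HEADER, 0);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1); // follow redirects to the real image location
    curl_setopt($ch, CURLOPT_TIMEOUT, 30);       // give up on stalled transfers instead of saving partial data
    $rawData  = curl_exec($ch);
    $httpCode = curl_getinfo($ch, CURLINFO_HTTP_CODE);
    $errNo    = curl_errno($ch);
    curl_close($ch);
    // Only write the file when the transfer actually succeeded.
    if ($errNo !== 0 || $httpCode !== 200 || $rawData === false || $rawData === '') {
        return false; // caller can log the URL and retry later
    }
    return file_put_contents($path . basename($img), $rawData) !== false;
}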
I would like to copy a PDF from a URL (API) to our server with PHP.
When I open the URL in the browser, the file starts downloading immediately, so this isn't a static PDF/URL. I think that's the problem.
I've tried different PHP functions, but with no luck:
file_put_contents, copy, fopen/fwrite.
Could you please advise?
For example, I've tried:
$url_label = "http://example.com/Public/downloadlabel.aspx?username=$username&password=$password&layout=label10x15&zipcode=$zipcode&shipment=$ShipmentID";
file_put_contents("label.pdf", fopen($url_label, 'r'));
The PDF-file is created in my folder, but this file is empty (0 bytes).
And with cURL I hoped to get past the forced download:
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $url_label);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_HEADER, true);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, false);
$data = curl_exec($ch);
curl_close($ch);
$destination = dirname(__FILE__) . '/file.pdf';
$file = fopen($destination, "w+");
fputs($file, $data);
fclose($file);
The PDF-file is created with 277 bytes, but is corrupted.
Any ideas?
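For what it's worth, CURLOPT_HEADER is set to true in the snippet above, which means the HTTP response headers are included in $data and end up inside the saved file; that alone would corrupt a PDF, and 277 bytes looks more like headers or an error page than a real document. Here is a sketch of the same request with headers excluded and the body streamed straight to disk; following redirects is a guess about how the API serves the file, not something confirmed.
$ch = curl_init($url_label);
$fp = fopen(dirname(__FILE__) . '/file.pdf', 'w');
curl_setopt($ch, CURLOPT_FILE, $fp);            // write the response body directly to the file
curl_setopt($ch, CURLOPT_HEADER, false);        // keep the HTTP headers out of the PDF
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true); // in case the download URL redirects first
if (curl_exec($ch) === false) {
    echo 'cURL error: ' . curl_error($ch);
}
$httpCode = curl_getinfo($ch, CURLINFO_HTTP_CODE);
curl_close($ch);
fclose($fp);
// If $httpCode is not 200, the saved 'PDF' is probably an error page from the API.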
I need to download a zipped .csv file from this website: http://www.phrfsocal.org/web-lookup-2/. The file is behind the Download Data link above the table on the right.
The gotcha is the link is created dynamically. So I need to extract it first.
That part seems to work fine. I get this link for the href.
https://b6.caspio.com/dp.asp?appSession=68982476236455965042483715808486764445346819370685922723164994812296661481433499615115137717633929851735433386281180144919150987&RecordID=&PageID=2&PrevPageID=&cpipage=&download=1
When I paste that link into a new browser tab, the browser downloads the zip file containing the csv that I am interested in.
However, when I use cURL to try to get the zip, it instead gets the HTML of the table below the link. I can't seem to figure out how to grab the .zip.
Below is my code. The first part finds the link and seems to be working;
the second part is where I'm having trouble.
PS I have permission from the owner of this page to download this data nightly using a Cron job.
thanks in advance,
Dave
$url = "http://www.phrfsocal.org/web-lookup-2/";
// url to the dynamic content doesn't seem to change.
$url = "https://b6.caspio.com/dp.asp?AppKey=0dc330000cbc1d03fd244fea82b4";
$header = get_web_page($url);
// Find the location of the Download Data link and extract the href
$strpos = strpos($header['content'], 'Download Data');
$link = substr($header['content'], $strpos, 300);
$link = explode(" ", $link);
$link = explode('"', $link[2]);
$url1 = $link[1];
print_r($url1);
print "<p>";
// Now Go get the zip file.
$zipFile = "temp/SoCalzipfile.zip"; // Local Zip File Path
$zipResource = fopen($zipFile, "w+");
// Get The Zip File From Server
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $url1);
curl_setopt($ch, CURLOPT_FAILONERROR, true);
curl_setopt($ch, CURLOPT_HEADER, 0);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
curl_setopt($ch, CURLOPT_AUTOREFERER, true);
curl_setopt($ch, CURLOPT_BINARYTRANSFER, true);
curl_setopt($ch, CURLOPT_TIMEOUT, 10);
curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, 0);
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, 0);
curl_setopt($ch, CURLOPT_FILE, $zipResource);
$page = curl_exec($ch);
if (!$page) {
    echo "Error :- " . curl_error($ch);
}
curl_close($ch);
echo "zip file recieved";
/* Open the Zip file */
$zip = new ZipArchive;
$extractPath = "temp";
if ($zip->open($zipFile) !== true) {
    echo "Error :- Unable to open the Zip File";
}
/* Extract Zip File */
$zip->extractTo($extractPath);
$zip->close();
The following code will download the zip file and unzip it into the given folder. Make sure the folder is writable; in this example, the temp folder needs write permission.
You also don't need to fetch the HTML version of the page to extract the link. I had a play around with the URLs, and you can get the zip file for each page using the cpipage parameter. Change the $page_num variable to grab the zip for the page you want.
$page_num = 1;
$url = 'https://b6.caspio.com/dp.asp?AppKey=0dc330000cbc1d03fd244fea82b4&RecordID=&PageID=2&PrevPageID=&cpipage=' .$page_num. '&download=1';
$zipFile = "temp/SoCalzipfile.zip"; // Local Zip File Path
$zipResource = fopen($zipFile, "w");
// Get The Zip File From Server
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_FAILONERROR, true);
curl_setopt($ch, CURLOPT_HEADER, 0);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
curl_setopt($ch, CURLOPT_AUTOREFERER, true);
curl_setopt($ch, CURLOPT_BINARYTRANSFER,true);
curl_setopt($ch, CURLOPT_TIMEOUT, 10);
curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, 0);
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, 0);
curl_setopt($ch, CURLOPT_FILE, $zipResource);
$page = curl_exec($ch);
if (!$page) {
    echo "Error :- " . curl_error($ch);
}
curl_close($ch);
fclose($zipResource); // flush and release the file before ZipArchive opens it

$zip = new ZipArchive;
$extractPath = "temp";
if ($zip->open($zipFile) !== true) {
    echo "Error :- Unable to open the Zip File";
}
/* Extract Zip File */
$zip->extractTo($extractPath);
$zip->close();
I am trying to get files from an RSS feed (XML), rename them, and store them locally on a Windows 8 machine. It runs perfectly in MAMP on OS X, but when I run the same code in WAMP the files are 0 bytes; only the file name is there, created by the fopen() call.
$content = $domain . $feed;
$file = file_get_contents($content);
$xml = simplexml_load_string($file);
for ($x = 0; $x < $max; $x++) {
    $link = $xml->channel->item[$x]->link;
    $i = explode("/", $link);
    set_time_limit(0);
    $fileName = 'videos/video-' . $i[7] . '.mp4';
    if (!file_exists($fileName)) {
        $fp = fopen($fileName, 'w+');
        $url = $link;
        $ch = curl_init(str_replace(" ", "%20", $url));
        curl_setopt($ch, CURLOPT_TIMEOUT, 50);
        curl_setopt($ch, CURLOPT_FILE, $fp);
        curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
        $data = curl_exec($ch);
        curl_close($ch);
    }
}
echo $file;
This is in a .php file called by AJAX; it is meant to download the videos, and then echo $file outputs XML that is further parsed by JS. The point is that the files are then local rather than on the internet if the connection goes down. This works perfectly on OS X in MAMP; it is on Windows with WAMP that it does not work. It has something to do with the cURL command and the directory, but I am not familiar enough with cURL to troubleshoot it.
Try adding the following cURL options:
curl_setopt($ch, CURLOPT_AUTOREFERER, 1);
curl_setopt($ch, CURLOPT_BINARYTRANSFER, 1);
curl_setopt($ch, CURLOPT_HEADER, 0);
curl_setopt($ch, CURLOPT_TIMEOUT, 0);
Also, instead of $ch = curl_init(str_replace(" ","%20",$url));, make sure the URL is fully percent-encoded; note that calling urlencode() on the whole URL would also encode the scheme and slashes, so only the path/file-name portion should be encoded (e.g. with rawurlencode()).
Add the following options to write the download result to a file:
curl_setopt($ch, CURLOPT_FILE, $fp);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
There were two things I was missing; it now works.
curl_setopt($ch,CURLOPT_SSL_VERIFYPEER, FALSE);
and I also needed to close fopen().
fclose($fp);
The files were on a server with a self-signed certificate, and apparently cURL doesn't accept that by default.
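Putting the two fixes together, a sketch of the download step inside the loop (same variables as in the question) would look like:
$fp = fopen($fileName, 'w+');
$ch = curl_init(str_replace(" ", "%20", $url));
curl_setopt($ch, CURLOPT_TIMEOUT, 50);
curl_setopt($ch, CURLOPT_FILE, $fp);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false); // the source server uses a self-signed certificate
curl_exec($ch);
curl_close($ch);
fclose($fp); // the missing fclose() is what left the files at 0 bytes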
Hey guys, I'm looking for help downloading 1000+ images from one of my vendors for an e-commerce site. They provided me with the proper URLs, which I set up in an array, but I can't get any of the PHP scripts I've found to actually download the images.
I have:
ArrayOf1000URLs[];
Loop through array
- save_image(URL)
Example function I found online:
function save_image($img, $fullpath) {
    $ch = curl_init($img);
    curl_setopt($ch, CURLOPT_HEADER, 0);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($ch, CURLOPT_BINARYTRANSFER, 1);
    $rawdata = curl_exec($ch);
    curl_close($ch);
    if (file_exists($fullpath)) {
        unlink($fullpath);
    }
    $fp = fopen($fullpath, 'x');
    fwrite($fp, $rawdata);
    fclose($fp);
}
foreach ($fileNames as $url) {
    set_time_limit(0);
    // Get the filename from the end of the URL.
    $imgName = explode("/", $url);
    // Used this to run multiple scripts.
    // Basically don't download files you have.
    if (!file_exists("./images/" . $imgName[-int-])) {
        $ch = curl_init($url);
        curl_setopt($ch, CURLOPT_HEADER, 0);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
        curl_setopt($ch, CURLOPT_BINARYTRANSFER, 1);
        $rawdata = curl_exec($ch);
        curl_close($ch);
        $fp = fopen("./images/" . $imgName[-int-], 'w');
        fwrite($fp, $rawdata);
        fclose($fp);
    }
}
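With 1000+ files it can also help to stream each response straight to disk with CURLOPT_FILE instead of buffering it in $rawdata, and to skip the write when the transfer fails. Here is a sketch along the lines of the loop above; basename($url) stands in for the $imgName[-int-] index from the question, so this is illustrative rather than a drop-in replacement.
foreach ($fileNames as $url) {
    set_time_limit(0);
    $target = "./images/" . basename($url);
    if (file_exists($target)) {
        continue; // already downloaded on a previous run
    }
    $fp = fopen($target, 'w');
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_FILE, $fp);            // stream the body to disk instead of holding it in memory
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
    $ok = curl_exec($ch);
    $httpCode = curl_getinfo($ch, CURLINFO_HTTP_CODE);
    curl_close($ch);
    fclose($fp);
    if (!$ok || $httpCode !== 200) {
        unlink($target); // don't keep a broken or partial file
        echo "Failed: $url\n";
    }
}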
Assuming you're on a *nix machine, you could just throw a system call in there and use wget.
foreach ($arrayOfImages as $url) {
    $cmd = "wget $url";
    system($cmd);
}
It's insecure as heck, but if this is a one-time personal thing, I say do it.
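If you do go the wget route, escaping the URL at least guards against stray shell metacharacters, and -P puts the files in a target directory. A sketch, assuming wget is on the PATH and ./images is writable:
foreach ($arrayOfImages as $url) {
    // escapeshellarg() keeps odd characters in the URL from breaking the command
    $cmd = "wget -P ./images " . escapeshellarg($url);
    system($cmd);
}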
I have to download the PDF files related to the data from the web source. I know the full path of each file. I've tried cURL, but it takes a long time and writes a 0-byte file.
$ch = curl_init ($url);
curl_setopt($ch, CURLOPT_HEADER, 0);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_BINARYTRANSFER,1);
$rawdata = curl_exec($ch);
curl_close ($ch);
if (file_exists($fullpath)) {
    unlink($fullpath);
}
$fp = fopen($fullpath,'x');
fwrite($fp, $rawdata);
fclose($fp);
$ch = curl_init("http://www.example.com/");
$fp = fopen("example_homepage.txt", "w");
curl_setopt($ch, CURLOPT_FILE, $fp);
curl_setopt($ch, CURLOPT_HEADER, 0);
curl_exec($ch);
curl_close($ch);
fclose($fp);
http://www.php.net/manual/en/curl.examples-basic.php
Or with this (if fopen wrappers are set up in your PHP conf):
$file = 'http://somehosted.com/file.pdf'; // URL to the file
$contents = file_get_contents($file); // read the remote file
touch('somelocal.pdf'); // create a local EMPTY copy
file_put_contents('somelocal.pdf', $contents); // put the fetched data into the newly created file
// done :)
And this one might fit you best: http://www.jonasjohn.de/snippets/php/curl-example.htm
It's hard to say without seeing what your code looks like and where you might be going wrong, but take a look at this and see if there's anything that stands out as something you might have overlooked:
http://davidwalsh.name/download-urls-content-php-curl