I need to save an image from a URL directly to my server. I've tried many methods, but none of them seem to work properly. file_put_contents($file_location, file_get_contents($image_url)); keeps giving me a "no file directory found" error. A simple fopen and fwrite keeps returning a corrupted image. The code below worked, but it keeps returning an HTML file instead of a JPG file.
function getimg($url) {
    $headers[] = 'Accept: image/gif, image/x-bitmap, image/jpeg, image/pjpeg';
    $headers[] = 'Connection: Keep-Alive';
    $headers[] = 'Content-type: application/x-www-form-urlencoded;charset=UTF-8';
    $user_agent = 'Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)';
    $process = curl_init($url);
    curl_setopt($process, CURLOPT_HTTPHEADER, $headers);
    curl_setopt($process, CURLOPT_HEADER, 0);
    curl_setopt($process, CURLOPT_USERAGENT, $user_agent);
    curl_setopt($process, CURLOPT_TIMEOUT, 30);
    curl_setopt($process, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($process, CURLOPT_FOLLOWLOCATION, 1);
    $return = curl_exec($process);
    curl_close($process);
    return $return;
}
$imgurl = 'http://some/url/to/image.jpg';
$imagename= basename($imgurl);
if(file_exists('./image/'.$imagename)){continue;}
$image = getimg($imgurl);
file_put_contents('image/'.$imagename,$image);
Am I missing something?
Thanks.
Your code works correctly; it downloads the image from the given URL.
Your issue is likely the path where the image is stored.
if(file_exists('./image/'.$imagename)){continue;}
$image = getimg($imgurl);
file_put_contents('image/'.$imagename,$image);
In the code above, check the path ./image/ and make sure the same path is used in the file_put_contents call.
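As a sketch of pinning the directory down (the placeholder values stand in for basename($imgurl) and getimg($imgurl) from the question):

```php
<?php
// Sketch: resolve the directory once and reuse the exact same path for
// both file_exists() and file_put_contents(). './image/' and 'image/'
// can resolve differently depending on the current working directory.
$imagename = 'example.jpg';  // placeholder for basename($imgurl)
$image     = 'binary-data';  // placeholder for getimg($imgurl)

$dir = __DIR__ . '/image';
if (!is_dir($dir)) {
    mkdir($dir, 0755, true); // create the directory if it is missing
}

$target = $dir . '/' . $imagename;
if (!file_exists($target)) {
    file_put_contents($target, $image);
}
```

Using __DIR__ makes the path independent of where the script is invoked from, which matters when the same code runs from the web server and from cron.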
This method works:
<?php
file_put_contents("/var/www/test/test.png", file_get_contents("http://www.google.com/intl/en_com/images/srpr/logo3w.png"));
?>
You need to enable allow_url_fopen and it's the simplest method. See http://php.net/manual/en/features.remote-files.php
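As a sketch, you could check the setting first and fail with a clear message instead of a silent empty download (the paths are the ones from the snippet above):

```php
<?php
// Sketch: fail loudly if allow_url_fopen is off, since file_get_contents()
// on an http:// URL silently depends on that ini setting.
if (!filter_var(ini_get('allow_url_fopen'), FILTER_VALIDATE_BOOLEAN)) {
    die('allow_url_fopen is disabled; enable it in php.ini or use cURL instead.');
}
file_put_contents(
    '/var/www/test/test.png',
    file_get_contents('http://www.google.com/intl/en_com/images/srpr/logo3w.png')
);
```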
Related
When I try this code on some other server it works properly, but when I run it on a server where SSL is "installed", I get an empty string from var_dump.
$feedUrl = 'https://api.pinnaclesports.com/v1/feed?sportid=29&leagueid=1980-1977-1957-1958-1983-2421-2417-2418-2419-1842-1843-2436-2438-2196-2432-2036-2037-1928-1817-2386-2592-2081';
// Set your credentials here, format = clientid:password from your account.
$credentials = base64_encode("password");
// Build the header, the content-type can also be application/json if needed
$header[] = 'Content-length: 0';
$header[] = 'Content-type: application/xml';
$header[] = 'Authorization: Basic ' . $credentials;
// Set up a CURL channel.
$httpChannel = curl_init();
// Prime the channel
curl_setopt($httpChannel, CURLOPT_URL, $feedUrl);
curl_setopt($httpChannel, CURLOPT_RETURNTRANSFER, true);
curl_setopt($httpChannel, CURLOPT_HTTPHEADER, $header);
curl_setopt($httpChannel, CURLOPT_USERAGENT, 'Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0)' );
// Unless you have all the CA certificates installed in your trusted root authority, this should be left as false.
curl_setopt($httpChannel, CURLOPT_SSL_VERIFYPEER, false);
// This fetches the initial feed result. Next we will fetch the update using the fdTime value and the last URL parameter
$initialFeed = curl_exec($httpChannel);
//var_dump($initialFeed);
I already have a script on this SSL server that downloads CSV files from another URL, and it works normally, so I think the problem is in my header. But then how does the same code work on other servers?
Try this approach, which basically says to do:
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, true);
curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, 2);
curl_setopt($ch, CURLOPT_CAINFO, getcwd() . "/CAcerts/BuiltinObjectToken-EquifaxSecureCA.crt");
Or try this
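A sketch of the verified variant, keeping CURLOPT_SSL_VERIFYPEER enabled; the CA bundle path is an assumption (on Debian/Ubuntu it is typically /etc/ssl/certs/ca-certificates.crt):

```php
<?php
// Sketch: verify the peer against a CA bundle instead of disabling checks.
$ch = curl_init('https://api.pinnaclesports.com/v1/feed');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, true);
curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, 2);
// Path is an example; point it at your system's CA bundle.
curl_setopt($ch, CURLOPT_CAINFO, '/etc/ssl/certs/ca-certificates.crt');
$feed = curl_exec($ch);
if ($feed === false) {
    // A false result here usually means certificate verification failed.
    echo curl_error($ch);
}
curl_close($ch);
```

Checking curl_error() when curl_exec() returns false is usually how an "empty" result like this gets diagnosed.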
I have been searching for an answer to this all day, but with no luck!
I want to download/copy an image from the web to a location on my server. The code below doesn't seem to throw any errors; the image just isn't saving to the required (or any) directory.
As you can see, I am using cURL to get the image, and the variable $contents is returning true (1), so I assume the script runs but I am missing something.
Many thanks in advance for your help. :-)
$dir = URL::base() . "/img/products/";
$imgSrc = "an image on the web";
$file = fopen($dir, "wb");
$headers[] = 'Accept: image/gif, image/x-bitmap, image/jpeg, image/pjpeg';
$headers[] = 'Connection: Keep-Alive';
$headers[] = 'Content-type: application/x-www-form-urlencoded;charset=UTF-8';
$user_agent = 'Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)';
$ch = curl_init($imgSrc);
curl_setopt($ch, CURLOPT_HTTPHEADER, $headers);
curl_setopt($ch, CURLOPT_HEADER, 0);
curl_setopt($ch, CURLOPT_USERAGENT, $user_agent);
curl_setopt($ch, CURLOPT_HEADER, 0);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_FILE, $file); // location to write to
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 60);
$contents = curl_exec($ch);
$curl_errno = curl_errno($ch);
$curl_error = curl_error($ch);
curl_close($ch);
fclose($file);
if ($curl_errno > 0)
{
Log::write("CURL", "cURL Error (".$curl_errno."): ".$curl_error);
}
else
{
Log::write("CURL", "Data received: " . $contents);
}
return;
PHP needs write access to the target file for cURL to store the contents. You can grant it in three ways:
If you have terminal access, use chmod to grant write access.
If you have cPanel access, use the directory explorer and grant write access by changing the file properties.
If you have FTP access, change the file's permission attributes to grant write access.
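A small sketch of checking this from PHP itself (the directory path is an example); note that chmod() from PHP only succeeds if the PHP process owns the directory, otherwise fall back to the terminal/cPanel/FTP routes above:

```php
<?php
// Sketch: verify the target directory is writable before downloading.
$dir = __DIR__ . '/img/products';

if (!is_dir($dir)) {
    mkdir($dir, 0755, true);     // create the directory tree
}
if (!is_writable($dir)) {
    @chmod($dir, 0755);          // works only if PHP owns the directory
}
var_dump(is_writable($dir));
```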
Don't use curl.
If all you need to do is download an image, go for "file_get_contents" instead.
It's dead easy:
$fileContents = file_get_contents("https://www.google.com/images/srpr/logo4w.png");
File::put('where/to/store/the/image.jpg', $fileContents);
function saveImageToFile($image_url, $output_filename)
{
    $ch = curl_init($image_url);
    curl_setopt($ch, CURLOPT_HEADER, 0);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($ch, CURLOPT_BINARYTRANSFER, 1);
    $raw = curl_exec($ch);
    curl_close($ch);
    if (file_exists($output_filename))
    {
        unlink($output_filename); // saves over existing files
    }
    $fp = fopen($output_filename, 'x');
    fwrite($fp, $raw);
    fclose($fp);
}
Your problem is quite simple, and I have no idea how everyone else missed it. It is Laravel-specific: your $dir variable holds an HTTP resource identifier, but what you need is a filesystem path.
For Laravel, change your URL::base() to path("public") to tell Laravel to stop using HTTP URIs and instead take the local path to the public folder (/your/laravel/setup/path/public/).
Code:
$dir = path("public") . "img/products/";
$imgSrc = "an image on the web";
$file = fopen($dir . basename($imgSrc), "wb");
$headers[] = 'Accept: image/gif, image/x-bitmap, image/jpeg, image/pjpeg';
$headers[] = 'Connection: Keep-Alive';
$headers[] = 'Content-type: application/x-www-form-urlencoded;charset=UTF-8';
$user_agent = 'Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)';
$ch = curl_init($imgSrc);
curl_setopt($ch, CURLOPT_HTTPHEADER, $headers);
curl_setopt($ch, CURLOPT_HEADER, 0);
curl_setopt($ch, CURLOPT_USERAGENT, $user_agent);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_FILE, $file); // location to write to
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 60);
$contents = curl_exec($ch);
$curl_errno = curl_errno($ch);
$curl_error = curl_error($ch);
curl_close($ch);
if ($curl_errno > 0)
{
    Log::write("CURL", "cURL Error (" . $curl_errno . "): " . $curl_error);
}
else
{
    Log::write("CURL", "Data received: " . $contents);
    fwrite($file, $contents);
    fclose($file);
}
return;
OK, finally got it all working and here is the code if anyone else ever tries to do the same sort of thing!
I was missing these parts:
$dir = $_SERVER['DOCUMENT_ROOT'] . "/img/products/";
and
fwrite($file,$contents);
So here is my final code... credit to Sébastien for pointing me in the right direction. Thanks.
if ($method == 'save')
{
    $productId = Input::get('pId');
    $removeProductImages = DB::table('product_ref_images')->where('product_id', '=', $productId)->delete();
    $imagesData = Input::get('imageRefs');
    $dir = $_SERVER['DOCUMENT_ROOT'] . "/img/products/";
    $sortOrder = 0;
    for ($i = 0; $i < count($imagesData); $i++) {
        $imgSrc = trim($imagesData[$i]['imgSrc']);
        $imgId = trim($imagesData[$i]['imgId']);
        $file = fopen($dir . basename($imgSrc), "wb");
        $headers[] = 'Accept: image/gif, image/x-bitmap, image/jpeg, image/pjpeg';
        $headers[] = 'Connection: Keep-Alive';
        $headers[] = 'Content-type: application/x-www-form-urlencoded;charset=UTF-8';
        $user_agent = 'Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)';
        $ch = curl_init($imgSrc);
        curl_setopt($ch, CURLOPT_HTTPHEADER, $headers);
        curl_setopt($ch, CURLOPT_HEADER, 0);
        curl_setopt($ch, CURLOPT_USERAGENT, $user_agent);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
        curl_setopt($ch, CURLOPT_FILE, $file);
        curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 60);
        $contents = curl_exec($ch);
        $curl_errno = curl_errno($ch);
        $curl_error = curl_error($ch);
        curl_close($ch);
        if ($curl_errno > 0)
        {
            Log::write("CURL", "cURL Error (" . $curl_errno . "): " . $curl_error);
            break;
        }
        else
        {
            fwrite($file, $contents);
            fclose($file);
            $imageIds = DB::table('product_ref_images')->order_by('image_id', 'desc')->first();
            if ($imageIds == null)
            {
                $imageIds = 0;
            }
            else
            {
                $imageIds = $imageIds->image_id;
            }
            $updateImages = DB::table('product_ref_images')
                ->insert(array(
                    'image_id' => $imageIds + 1,
                    'product_id' => $productId,
                    'flickr_image_id' => $imgId,
                    'sort_order' => $sortOrder++,
                    'local_storage_url' => $dir . basename($imgSrc),
                    'created_at' => date("Y-m-d H:i:s"),
                    'updated_at' => date("Y-m-d H:i:s")
                ));
        }
    }
    return Response::json('Complete');
}
Remove this line:
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
You're storing the response to a file, not to the return variable. Otherwise, you have to save it yourself (like you did in the other solution).
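A minimal sketch of that approach, with placeholder URL and destination, where cURL streams the body straight into the file handle:

```php
<?php
// Sketch: let cURL write the response body directly to a local file.
// $url and $dest are placeholders, not paths from the original post.
$url  = 'http://example.com/image.jpg';
$dest = __DIR__ . '/image.jpg';

$fp = fopen($dest, 'wb');
$ch = curl_init($url);
curl_setopt($ch, CURLOPT_FILE, $fp);          // write body to $fp
curl_setopt($ch, CURLOPT_HEADER, 0);          // keep headers out of the file
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1);
// No CURLOPT_RETURNTRANSFER here: it would override CURLOPT_FILE.
curl_exec($ch);
curl_close($ch);
fclose($fp);
```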
I am using cURL to change the HTTP referer for a site that only lets you see its content if the request comes from search engines:
I was able to do that!
But the problem is: the visitor's IP address that the site sees is not the visitor's real one. It's the IP address of the server running the script that changes the referer. Here is the code:
echo geturl('http://example.com', 'http://referring-site.com');
function geturl($url, $referer) {
$headers[] = 'Accept: image/gif, image/x-bitmap, image/jpeg, image/pjpeg,text/html,application/xhtml+xml';
$headers[] = 'Connection: Keep-Alive';
$headers[] = 'Content-type: application/x-www-form-urlencoded;charset=UTF-8';
$useragent = 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; .NET CLR 1.0.3705; .NET CLR 1.1.4322; Media Center PC 4.0)';
$process = curl_init($url);
curl_setopt($process, CURLOPT_HTTPHEADER, $headers);
curl_setopt($process, CURLOPT_HEADER, 0);
curl_setopt($process, CURLOPT_USERAGENT, $useragent);
curl_setopt($process, CURLOPT_REFERER, $referer);
curl_setopt($process, CURLOPT_TIMEOUT, 30);
curl_setopt($process, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($process, CURLOPT_FOLLOWLOCATION, 1);
$return = curl_exec($process);
curl_close($process);
return $return;
}
Let's say I use that code on
mysite.com
So example.com will see referring-site.com as the HTTP referer, but it thinks the visitor's IP is the same as mysite.com's!
How can I get it to see the real IP address of the visitor instead of the IP of the server the code runs on?
I tried to replace
return $return;
with
return "<?php
header( 'Location: http://example.com' ) ;
?>";
or
echo "<META HTTP-EQUIV='Refresh' Content='0; URL=http://example.com'>";
But it doesn't work!
What you are doing is proxying the request and lying about the referer. Since the request is coming from your server, it is under your control so that sort of underhand deception is possible.
There is no way for a website to induce a browser into making that sort of lie.
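If the goal is simply to let the target site know the original visitor's address, the conventional proxy approach is to add an X-Forwarded-For header; whether the target site trusts or even reads it is entirely up to them, so this is a sketch, not a guarantee:

```php
<?php
// Sketch: pass the real client IP along the way proxies conventionally do.
// The target server is free to ignore or distrust this header.
$headers = array(
    'X-Forwarded-For: ' . $_SERVER['REMOTE_ADDR'],
);
$ch = curl_init('http://example.com');
curl_setopt($ch, CURLOPT_HTTPHEADER, $headers);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
$body = curl_exec($ch);
curl_close($ch);
```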
I have a problem downloading files via PHP.
The funny thing is that I cannot trace the problem. The code works fine for some websites and not for others. It is a PHP loop that downloads backup files from websites (there is a sleep delay between requests).
Why can't I trace the problem?
Because when I run the code manually, it works (downloads the file), but when it is run by cron, sometimes it downloads the file and sometimes it does NOT (it only downloads 2 empty new lines).
The download uses cURL (I have also tried different code with fsockopen and fread).
Does anyone have an idea how I can solve this?
Headers are removed with CURL by setting the correct option.
function fetch_url($url, $cookiejar = '') {
    $c = curl_init();
    curl_setopt($c, CURLOPT_URL, $url);
    curl_setopt($c, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($c, CURLOPT_TIMEOUT, 20);
    if ($cookiejar != '') {
        curl_setopt($c, CURLOPT_COOKIEJAR, $cookiejar);
        curl_setopt($c, CURLOPT_COOKIEFILE, $cookiejar);
    }
    curl_setopt($c, CURLOPT_HEADER, false);
    curl_setopt($c, CURLOPT_SSL_VERIFYHOST, false);
    curl_setopt($c, CURLOPT_SSL_VERIFYPEER, false);
    curl_setopt($c, CURLOPT_FOLLOWLOCATION, true);
    curl_setopt($c, CURLOPT_AUTOREFERER, true);
    curl_setopt($c, CURLOPT_USERAGENT, 'Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.2.12) Gecko/20101026 Firefox/3.6.12');
    $con = curl_exec($c);
    curl_close($c);
    return $con;
}
echo fetch_url('http://www.example.com/zip.zip');
Try using curl_getinfo() (http://www.php.net/manual/en/function.curl-getinfo.php) to display information about the cURL request:
echo curl_errno($c);
print_r(curl_getinfo($c));
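Note that both calls have to run before curl_close(); in the fetch_url above, that means inside the function, not after it returns. A minimal sketch of wiring the diagnostics in:

```php
<?php
// Sketch: collect cURL diagnostics while the handle is still open.
$c = curl_init();
curl_setopt($c, CURLOPT_URL, 'http://www.example.com/zip.zip');
curl_setopt($c, CURLOPT_RETURNTRANSFER, 1);
$con   = curl_exec($c);
$errno = curl_errno($c);    // 0 means no transport-level error
$info  = curl_getinfo($c);  // http_code, content_type, size_download, ...
curl_close($c);             // $c is no longer usable after this
if ($errno !== 0 || $info['http_code'] != 200) {
    error_log("download failed: errno=$errno, http={$info['http_code']}");
}
```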
Also, maybe it's elsewhere in your code, but I'm not seeing any content-type headers when you echo the file:
$file = fetch_url('http://www.example.com/zip.zip');
header('Content-type: application/zip');
header('Content-Disposition: attachment; filename="zip.zip"');
header("Content-length: " . strlen($file));
echo $file;
Hi, I am using the following API to get data from MediaWiki. When I copy this URL and paste it into a browser, an XML response appears:
http://en.wikipedia.org/w/api.php?action=query&prop=revisions&titles=API|Main_Page&rvprop=timestamp|user|comment|content
but when I try to do it with cURL, it gives me the error "Scripts should use an informative User-Agent string with contact information, or they may be IP-blocked without notice.".
I am using the following code. Can anyone trace my error?
$url='http://en.wikipedia.org/w/api.php?action=query&prop=revisions&titles=API|Main_Page&rvprop=timestamp|user|comment|content';
$curl = curl_init();
curl_setopt($curl, CURLOPT_URL, $url);
curl_setopt($curl, CURLOPT_RETURNTRANSFER, 1);
//curl_setopt($curl, CURLOPT_TIMEOUT, 1);
$objResponse = curl_exec($curl);
curl_close($curl);
echo $objResponse;die;
This will work to bypass their referer and user-agent checks:
<?php
function getwiki($url="", $referer="", $userAgent="") {
if($url==""||$referer==""||$userAgent=="") { return false;};
$headers[] = 'Accept: image/gif, image/x-bitmap, image/jpeg, image/pjpeg';
$headers[] = 'Connection: Keep-Alive';
$headers[] = 'Content-type: application/x-www-form-urlencoded;charset=UTF-8';
$user_agent = $userAgent;
$process = curl_init($url);
curl_setopt($process, CURLOPT_HTTPHEADER, $headers);
curl_setopt($process, CURLOPT_HEADER, 0);
curl_setopt($process, CURLOPT_USERAGENT, $user_agent);
curl_setopt($process, CURLOPT_REFERER, $referer);
curl_setopt($process, CURLOPT_TIMEOUT, 30);
curl_setopt($process, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($process, CURLOPT_FOLLOWLOCATION, 1);
$return = curl_exec($process);
curl_close($process);
return $return;
}
//edited to include Adam Backstrom's sound advice
echo getwiki('http://en.wikipedia.org/w/api.php?action=query&prop=revisions&titles=API|Main_Page&rvprop=timestamp|user|comment|content', 'http://en.wikipedia.org/', 'Mozilla/5.0 (compatible; YourCoolBot/1.0; +http://yoursite.com/botinfo)');
?>
From the MediaWiki API:Quick start guide:
Pass a User-Agent header that properly identifies your client: don't use the default User-Agent from your client library, but use a custom one including the name of your client and the version number, something like MyCuteBot/0.1.
On Wikimedia wikis, failing to supply a User-Agent header or supplying an empty or generic one will cause the request to fail with an HTTP 403 error. See meta:User-Agent policy. Other MediaWiki wikis may have similar policies.
From meta:User-Agent policy:
If you run a bot, please send a User-Agent header identifying the bot and supplying some way of contacting you, e.g.: User-Agent: MyCoolTool (+http://example.com/MyCoolToolPage/)
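The same policy can be satisfied without cURL by setting the User-Agent through a stream context; the bot name and contact URL below are placeholder examples, not values from the policy:

```php
<?php
// Sketch: a compliant User-Agent via a stream context instead of cURL.
// "MyCoolTool" and the contact URL are placeholders; use your own.
$context = stream_context_create(array(
    'http' => array(
        'header' => "User-Agent: MyCoolTool/1.0 (+http://example.com/MyCoolToolPage/)\r\n",
    ),
));
$xml = file_get_contents(
    'http://en.wikipedia.org/w/api.php?action=query&prop=revisions&titles=API|Main_Page&rvprop=timestamp|user|comment|content',
    false,
    $context
);
```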