I am looking to write a function on one server that accepts files uploaded from any other server; in other words, something similar to an API.
Assume that on www.upload.com there is an upload script. Instead of doing the POST and saving on that server, I would like it to curl my script on fileserver.com and save the files there.
My function on fileserver.com to save the file looks like this:
function upload($data) {
    $uploaddir = '/data/';
    $uploadfile = $uploaddir . 'filename.jpg';
    if (move_uploaded_file($data['upload_file'], $uploadfile)) {
        return 'saved successfully';
    } else {
        return 'bad file';
    }
}
and on the upload.com server I am testing the example with this:
if ($_POST) {
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_URL, 'http://fileserver.com/upload.php');
    curl_setopt($ch, CURLOPT_POST, TRUE);
    curl_setopt($ch, CURLOPT_POSTFIELDS, 'upload_file=#' . $data['img_file']['tmp_name']);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    $result = curl_exec($ch);
    curl_close($ch);
}
Please keep in mind that this is intended as an API, so the cURL call is strictly for testing; the upload function is the API function. Does anyone know how to do this or what I am doing wrong, and is this even possible?
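For comparison, a minimal working sketch of the two sides might look like this (assuming PHP 5.5+ for CURLFile; the field names are illustrative, not confirmed by the code above):

// upload.com side (testing): send a real multipart/form-data file upload.
// CURLFile requires PHP 5.5+; on older versions the value '@' . $path was used.
if ($_POST) {
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_URL, 'http://fileserver.com/upload.php');
    curl_setopt($ch, CURLOPT_POST, true);
    curl_setopt($ch, CURLOPT_POSTFIELDS, array(
        'upload_file' => new CURLFile($_FILES['img_file']['tmp_name'])
    ));
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    $result = curl_exec($ch);
    curl_close($ch);
}

// fileserver.com side (the API): the uploaded file arrives in $_FILES,
// not as a function argument, so read the temp name from there.
function upload() {
    $uploadfile = '/data/' . 'filename.jpg';
    if (isset($_FILES['upload_file'])
            && move_uploaded_file($_FILES['upload_file']['tmp_name'], $uploadfile)) {
        return 'saved successfully';
    }
    return 'bad file';
}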
Well, actually I have coded something for that: https://github.com/chris-l/urImgAPI
My need was like this:
I had a site hosted on a virtual machine, with a form that allowed uploading images, but I didn't want to waste the virtual machine's bandwidth by serving images. So I got a cheap shared hosting service (which only had PHP) for the images, and developed a RESTful API in PHP, installed there, that allows me to save images from anywhere by using the API.
The API has a security feature to prevent a random person from saving things on my file server: it requires a signature. The signature works like this:
$signature = md5(md5sum(your_image_file) . "your secret key");
You must pass the signature as a parameter on the request. That way, you never transmit the secret key, which helps prevent other people from storing images on your file server.
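In PHP terms, computing that signature might look like this sketch (md5_file() plays the role of md5sum; the paths and key are illustrative):

// Sketch: md5_file() hashes the file's contents (the "md5sum" above);
// only the resulting signature travels with the request, never the key.
$secret_key = 'your secret key'; // shared out-of-band with the file server
$signature  = md5(md5_file('/path/to/image.jpg') . $secret_key);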
And while I created the API for uploading images, it can easily be changed to host any kind of file. (Maybe I will change it for that in the future.)
It is released under the AGPL3 license, so check it out; I hope it is useful for you.
Related
1) Below is my code to download a private file from Amazon S3 in CodeIgniter; $this->S3 has my client information (key/secret key). With the code below, the file is saved to the root directory on the server, but I want to save it to the end user's default download location on their local machine. Since I do not know that location, any help tackling this?
I have researched all day and couldn't find a specific solution.
$result1 = $this->S3->getObject([
    'Bucket' => $bucket,
    'Key'    => $keyname,
    'SaveAs' => $keyname
]);
Note: I do not want to show the end user that we are downloading files from the Amazon server; that's why the solution below fails on Firefox.
2)
$cmd = $this->S3->getCommand('GetObject', [
    'Bucket' => $bucket,
    'Key'    => $keyname,
    'ResponseContentDisposition' => 'attachment; filename="' . $keyname . '"'
]);
$signed_url = $this->S3->createPresignedRequest($cmd, '+1 minute');
$url = $signed_url->getUri();
header('Location: ' . $url);
(Screenshot: Firefox save-as popup showing the source download URL.)
Step 1: put this code in your controller (it also works with new AWS image URLs).
$url="http//:projectname/foldername/map.jpg"
$data = $this->file_get_contents_curl($url);
$fp = 'map.png';
$this->load->helper('download');
force_download($fp,$data);
Step 2: also put this code in your controller.
private function file_get_contents_curl($url) {
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_HEADER, 0);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($ch, CURLOPT_URL, $url);
    $data = curl_exec($ch);
    curl_close($ch);
    return $data;
}
Depending on how much code you want to write, there are many ways to tackle this. The most recent one I used was a little overcomplex because of a lot of security requirements. In a nutshell, it went like this:
Create a table with the basic data for each uploaded/stored file.
This includes basically all the parameters that the upload library outputs after processing a file, plus some additional ones like a user group ID (because each file can be viewed only by a specific user group), a Y/N availability flag and a soft-delete date.
Create a downloads controller that will handle all download-related tasks
I created a couple of methods within it, tasked with verifying that the user was indeed allowed to access that particular file (or redirecting them to an error page if not).
The file is in an S3 bucket, mounted on my web server with s3fs, by the way, but that's irrelevant: what you need is agnostic to where the file actually is.
Generate non-obvious download links
Instead of linking to example.com/files/filename.pdf (which would make it trivial to harvest files), my download links point to a specific method in my download controller (downloads/get_file) and pass a non-obvious identifier as a parameter. IIRC I used the encrypted rawname parameter, so the links look like:
<a href="example.com/downloads/get_file/[encrypted-rawname]">get hello.pdf here</a>
Find file, stream it to the user
When following the link above, the get_file method takes the rawname as a parameter, finds the file in the table (getting the local path) and outputs it to the browser. This way, the user never sees the real route/path or even the real file name as stored on disk.
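A stripped-down sketch of such a get_file method might look like this (CodeIgniter-style; the table, the column names and the user_can_access() permission helper are made up for illustration):

// Hypothetical controller method: look the file up by its non-obvious
// identifier, check permissions, then stream it to the browser.
public function get_file($rawname)
{
    $file = $this->db->get_where('files', array('rawname' => $rawname))->row();

    // user_can_access() is an assumed permission helper for this sketch
    if (!$file || !$this->user_can_access($file->group_id)) {
        show_error('You are not allowed to access this file.', 403);
        return;
    }

    // Serve the file under its original name; the real path on disk and
    // the stored file name never reach the browser
    header('Content-Type: application/octet-stream');
    header('Content-Disposition: attachment; filename="' . $file->orig_name . '"');
    readfile($file->local_path);
}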
Tumblr image example:
https://69.media.tumblr.com/ec3874e541a767ab578495b762201a53/tumblr_ph9je0Akck1r89eh5o1_1280.jpg
Code:
<form method="get">
    <input type="text" name="url"/>
    <input type="submit" value=" upload "/>
</form>
<?php
$name = md5(date('Y-m-d H:i:s') . rand(0, 1000));
$folder = 'upload/';
$source = $_GET['url'];
$dest = $folder . $name . '.png';
copy($source, $dest);
echo 'http://mysite.example/' . $folder . $name . '.png';
?>
I found this in another question on this site:
If you can view the image in a browser that is on the same machine as your program, then it might be that the server won't send the picture unless you look like a user rather than a program. In that case, modifying the browser identification string might fix your problem.
If you cannot view the image from a browser running on the program's PC, you will need to look elsewhere for the source of your problem.
I think the problem I have is similar to this. Tumblr serves the picture for viewing in a browser, but doesn't allow it to be copied with a script.
How can I fix that? For example, sites like imgur can upload Tumblr images by URL without any problem.
P.S. Copying images from other sites with this script works fine.
Addition 01:
As it turned out, the problem is with my site. When I run this code on another site, it works fine with Tumblr images. I have a free .ml domain and free Byethost hosting, so I have two guesses. The first is that my domain or host is on a blacklist at Tumblr. The second is that some settings on my site are wrong. If the first guess is right, is there any way to make it work without changing domain or hosting? If the second is true, which settings should I check and change?
Tumblr appears to be inspecting the HTTP request and generating different responses depending on how you make it. Your code is fine, as you know, for most sites. When I run it as-is, I get a 403 denied error.
Changing the code to use cURL instead allowed me to download your file. My guess is that the default headers used by PHP's copy() are blocked.
<?php
function grab_image($url, $saveto) {
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_HEADER, 0);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($ch, CURLOPT_BINARYTRANSFER, 1);
    $raw = curl_exec($ch);
    curl_close($ch);
    if (file_exists($saveto)) {
        unlink($saveto);
    }
    $fp = fopen($saveto, 'x');
    fwrite($fp, $raw);
    fclose($fp);
}

$name = md5(date('Y-m-d H:i:s') . rand(0, 1000));
$folder = 'upload/';
$source = $_GET['url'];
$dest = $folder . $name . '.png';
grab_image($source, $dest);
The grab_image() function comes from an SO answer to another question.
I am building a PHP app which will be distributed to hundreds or thousands of users as a SugarCRM module. The functionality I am working on allows users to upload images from a remote URL.
StackOverflow has this same functionality, shown in the image at the bottom.
I mention this being on other servers because my upload function needs to be very reliable across many server configurations and web hosts!
To help make it more reliable in fetching and downloading remote images, I have some checks in my fetch_image($image_url) function like...
I use ini_get('allow_url_fopen') to see if the host allows file_get_contents() to use URLs instead of file paths.
I use function_exists('curl_init') to see if cURL is installed.
Besides fetching the remote image using several methods, I now also need to ensure that the file returned or built from the remote server is actually a legitimate image file and not some sort of malicious file!
Most servers at least have the GD image processor installed, so perhaps it could somehow be used on my image to make sure it is an image?
My code so far is below...
Any help appreciated in checking to ensure the image really is an image!
The sockets method seems to actually generate a file saved in the server's temp folder; the other methods just return the image as a string.
<?php
class GrabAndSave {
    public $imageName;
    public $imageFolderPath = 'remote-uploads/'; // Folder to cache Amazon images in
    public $remote_image_url;
    public $local_image_url;
    public $temp_file = '';
    public $temp_file_prefix = 'tmp';

    public function __construct() {
        //
    }

    public function fetch_image($image_url) {
        // Check if cURL is installed
        if (function_exists('curl_init')) {
            return $this->curl_fetch_image($image_url);
        // Check if PHP allows file_get_contents to use URLs instead of file paths
        } elseif (ini_get('allow_url_fopen')) {
            return $this->fopen_fetch_image($image_url);
        // Try streams as a last resort
        } else {
            return $this->sockets_fetch_image($image_url);
        }
    }

    public function curl_fetch_image($image_url) {
        if (function_exists('curl_init')) {
            // Initialize a new resource for curl
            $ch = curl_init();
            // Set the URL to retrieve
            curl_setopt($ch, CURLOPT_URL, $image_url);
            // Return the value instead of outputting to the browser
            curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
            $image = curl_exec($ch);
            curl_close($ch);
            if ($image) {
                // Do stuff with the image
                return $image;
            } else {
                // Show error message
            }
        } else {
            die('cURL is not enabled on this server.');
        }
    }

    public function fopen_fetch_image($url) {
        $image = file_get_contents($url);
        return $image;
    }

    public function sockets_fetch_image($image_url) {
        if ($this->temp_file) {
            throw new Exception('Resource has been downloaded already.');
        }
        $this->temp_file = tempnam(sys_get_temp_dir(), $this->temp_file_prefix);
        $srcResource = fopen($image_url, 'r');
        $destResource = fopen($this->temp_file, 'w+');
        stream_copy_to_stream($srcResource, $destResource);
        return $this->temp_file;
    }

    public function save_image($image_filename, $raw_image_string) {
        $local_image_file = fopen($this->imageFolderPath . $image_filename, 'w+');
        chmod($this->imageFolderPath . $image_filename, 0755);
        fwrite($local_image_file, $raw_image_string);
        fclose($local_image_file);
    }
}
(Preview of the StackOverflow image dialog using remote URL image upload.)
A simple and effective method is using getimagesize on the file. A legitimate file will deliver an array of image meta data for many common image file types. (This can, by the way, also be used to enforce additional constraints, such as the image dimensions.)
Keep in mind that even legitimate images may contain malicious code, as various image viewers have exposed security issues with image data streams in the past. It may add a layer of security to internally convert the image before delivering it in order to protect clients.
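As a sketch combining both ideas, validating with getimagesize() and then re-encoding through GD so that only pixel data survives ($tmp_path and $safe_path are illustrative):

// Validate: getimagesize() fails for anything that is not a parseable image
$info = @getimagesize($tmp_path);
if ($info === false) {
    die('Not a valid image');
}
// The same call also enables constraints, e.g. on dimensions
if ($info[0] > 4000 || $info[1] > 4000) {
    die('Image too large');
}

// Re-encode through GD so only pixel data makes it into the served file
$img = imagecreatefromstring(file_get_contents($tmp_path));
if ($img === false) {
    die('Could not decode image');
}
imagejpeg($img, $safe_path, 90);
imagedestroy($img);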
You can:
check the response headers, especially the "Content-Type" header
check the file's mime type (see the sketch after this list)
check the actual file signature
try to actually create an image resource from the file
other...
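For the mime-type and file-signature checks, a sketch using PHP's bundled fileinfo extension might be (the allowed list is an assumption):

// Check the mime type of the actual bytes rather than trusting headers
$finfo = new finfo(FILEINFO_MIME_TYPE);
$mime  = $finfo->buffer($raw_image_string);
if (!in_array($mime, array('image/jpeg', 'image/png', 'image/gif'), true)) {
    die('Unexpected content type: ' . $mime);
}

// Or inspect the file signature by hand: JPEG data starts with FF D8
if (substr($raw_image_string, 0, 2) !== "\xFF\xD8") {
    // not a JPEG (only relevant if a JPEG was expected)
}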
I'm scraping a site, searching for JPGs to download.
Scraping the site's HTML pages works fine.
But when I try getting the JPGs with cURL, copy(), fopen(), etc., I get a 403 Forbidden status.
I know that's because the site owners don't want their images scraped, so I understand a good answer would be simply don't do it, because they don't want you to.
OK, but let's say it's fine and I try to work around this: how could it be achieved?
If I request the same URL with a browser, I can open the image perfectly. It's not that my IP is banned or anything, and I'm testing the scraper one file at a time, so it's not blocking me for making too many requests too often.
From my understanding, it could be that the site checks for some cookies confirming that I'm using a browser and browsing their site before I download a JPG.
Or maybe PHP sends some default user agent with its requests that the server can detect and filter out.
Anyway, any ideas?
Actually it was quite simple. As @Leigh suggested, it only took spoofing the HTTP referer with the CURLOPT_REFERER option. In fact, for every request I just provided the domain name as the referrer, and it worked.
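For anyone else hitting this, the change amounts to one extra cURL option (the URLs here are placeholders):

// Sketch: present the site's own domain as the Referer header so the
// image request looks like it was clicked from a page on that site
$ch = curl_init('http://example.com/images/photo.jpg');
curl_setopt($ch, CURLOPT_REFERER, 'http://example.com/');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
$data = curl_exec($ch);
curl_close($ch);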
Are you able to view the page through a browser? Wouldn't a simple search of the page source find all images?
$findme = '.jpg';
$pos = strpos($html, $findme);
if ($pos === false) {
    echo "The string '$findme' was not found in the string '$html'";
} else {
    echo "Images found..";
    // grab image location code
}
Basic image retrieval:
Use the GD library, commonly installed by default with many web hosts. This is something of an ugly hack, but some may find it useful to know it can be done this way.
$remote_img = 'http://www.somwhere.com/images/image.jpg';
$img = imagecreatefromjpeg($remote_img);
$path = 'images/image.jpg'; // imagejpeg() needs a file name, not just a folder
imagejpeg($img, $path);
A classic cURL image-grabbing function, for when you have extracted the location of the image from the donor page's HTML:
function save_image($img, $fullpath) {
    $ch = curl_init($img);
    curl_setopt($ch, CURLOPT_HEADER, 0);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($ch, CURLOPT_BINARYTRANSFER, 1);
    $rawdata = curl_exec($ch);
    curl_close($ch);
    if (file_exists($fullpath)) {
        unlink($fullpath);
    }
    $fp = fopen($fullpath, 'x');
    fwrite($fp, $rawdata);
    fclose($fp);
}
If the basic cURL image-grabbing function fails, the donor site probably has some form of server-side defences in place to prevent retrieval, so you are probably breaching the terms of service by proceeding further. Though rare, some sites do create images 'on the fly' using the GD library, so what may look like a link to an image is actually a PHP script, and that script could be checking for things like a cookie, referer or session value being passed to it before the image is created and output.
I want to be able to upload an image, or just paste the URL of an image, in order to set it as a profile picture for my website's users.
The point is I don't want to store the URL; I want a copy of that image on my server, because if the external image is lost I don't want to lose it too...
I believe Facebook and Tumblr etc. do this... What is the PHP script or best practice for doing that?
Thanks!
You can get the contents (bytes) of the image using the PHP function file_get_contents (http://php.net/manual/en/function.file-get-contents.php):
$contents = file_get_contents('http://www.google.com/images/logos/ps_logo2.png');
You can use the cURL library as well... Here's an example of how you can download an image from a URL and save it to a local location:
function downloadImageFromUrl($imageLinkURL, $saveLocationPath) {
    $channel = curl_init();
    curl_setopt($channel, CURLOPT_URL, $imageLinkURL);
    curl_setopt($channel, CURLOPT_POST, 0);
    curl_setopt($channel, CURLOPT_RETURNTRANSFER, 1);
    $fileBytes = curl_exec($channel);
    curl_close($channel);
    $fileWriter = fopen($saveLocationPath, 'w');
    fwrite($fileWriter, $fileBytes);
    fclose($fileWriter);
}
You can use this as follows:
downloadImageFromUrl("http://www.google.com/images/logos/ps_logo2.png", "/tmp/ps_logo2.png")
You can also keep the image's original file name by parsing the URL.
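For example, a sketch of deriving the save name from the URL's path component (reusing the same sample URL as above):

// basename() on the URL path yields the remote file name, e.g. "ps_logo2.png"
$imageLinkURL = "http://www.google.com/images/logos/ps_logo2.png";
$fileName = basename(parse_url($imageLinkURL, PHP_URL_PATH));
downloadImageFromUrl($imageLinkURL, "/tmp/" . $fileName);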
I think this is what you are looking for: Copy Image from Remote Server Over HTTP
You can use a function called imagecreatefromjpeg or similar. It takes a URL path to the image and creates a new image from it. Have a look at http://php.net/manual/en/function.imagecreatefromjpeg.php
There are different functions for different extensions, though (if you prefer using them). You may need to check the image extension from the URL and use the appropriate function, I suppose.
Handling uploads is covered in this documentation. If users paste a URL, I'd recommend using file_get_contents to save a copy of the image to your server; then you can simply store the path to that image rather than the external URL.