I am using the AWS SDK for PHP and WideImage. I am resizing an image with WideImage and then trying to upload the resized image to Amazon S3.
$resized = $image->resize($width,$height);
Then I try to upload it:
$response = $s3->create_object($myBucket, $newFilename, array(
'fileUpload' => $resized, //this does not work
));
Does anyone know the proper way to do this?
You can use a stream wrapper and use WideImage's saveToFile method. There are many stream wrappers for S3, this is one example: https://github.com/jakajancar/S3StreamWrapper.
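For example, once a wrapper is registered, WideImage can write the resized image straight to S3. A minimal sketch, assuming the s3:// wrapper from the linked project has already been registered with your AWS credentials; the bucket and key below are placeholders:

// $image, $width, $height and $newFilename as in the question; the file
// extension tells WideImage which format to encode
$resized = $image->resize($width, $height);
$resized->saveToFile('s3://myBucket/' . $newFilename . '.jpg');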
You don't need to save an image and then upload from there.
When you resize the image, you have to convert it to a string, which you can do with WideImage's asString() method.
Example:
$image = WideImage::load($_FILES["file"]['tmp_name']);
$resized = $image->resize(1024);
$data = $resized->asString('jpg');
And then when you're uploading on Amazon, you have to use the param 'body' instead of 'fileUpload'.
Example:
$response = $s3->create_object($myBucket, $newFilename, array(
'body' => $data,
));
I hope that helps.
I would like to point out a few things that might help someone make a choice.
First of all, I think you are better off doing what you are doing now: resize the image on your server first, then move it to Amazon. Even if there were some way to resize and upload the image in one step on the fly, your script would perform slowly, because it would have to resize the image and save it to a distant server. That is minor for a few images, but it can become a problem with heavy resizing even on high-bandwidth connections, and PHP cannot release the resources used for image resizing until the target image has been completely saved.
Second, if you are using a CDN (Content Delivery Network), note that CDNs use a PULL SERVER technique: we do not push static content to the CDN server; rather, when a user/client asks for static content, the CDN first checks all of its own servers and, if the content is not found, asks our main server for it.
Amazon S3 is not a true CDN. S3 was designed for content storage. The correct Amazon service to use for content delivery is Amazon CloudFront. When we save files to a storage server or CDN ourselves, that is called a PUSH SERVER approach.
A thorough article can be read at http://www.binarymoon.co.uk/2010/11/timthumb-cdn-amazon-s3-good/. It is actually about TimThumb, but it is worth reading.
I ended up saving the file to the server and then uploading the file from there. If there is a better way then please let me know.
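For reference, a minimal sketch of that approach; the temporary path and the cleanup are my assumptions, and create_object's 'fileUpload' option takes a file path as in the question:

$tmp = 'tmp/' . $newFilename;
$resized->saveToFile($tmp); // WideImage writes the resized file locally
$response = $s3->create_object($myBucket, $newFilename, array(
    'fileUpload' => $tmp,
));
unlink($tmp); // remove the local copy once it is on S3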
I have a site with about 1500 JPEG images, and I want to compress them all. Going through the directories is not a problem, but I cannot seem to find a function that compresses a JPEG that is already on the server (I don't want to upload a new one), and replaces the old one.
Does PHP have a built in function for this? If not, how do I read the JPEG from the folder into the script?
Thanks.
You're not saying whether you're using GD, so I'll assume you are.
$img = imagecreatefromjpeg("myimage.jpg"); // load the image-to-be-saved
// 50 is quality; change from 0 (worst quality,smaller file) - 100 (best quality)
imagejpeg($img,"myimage_new.jpg",50);
unlink("myimage.jpg"); // remove the old image
I prefer using the Imagick extension for working with images. GD uses too much memory, especially for larger files. Here's a code snippet by Charles Hall in the PHP manual:
// $src and $dest are the source and destination file paths
$img = new Imagick();
$img->readImage($src);
$img->setImageCompression(Imagick::COMPRESSION_JPEG);
$img->setImageCompressionQuality(90);
$img->stripImage();
$img->writeImage($dest);
$img->clear();
You will need to use the PHP GD library for that... Most servers have it installed by default. There are a lot of examples out there if you search for 'resize image php gd'.
For instance, have a look at this page: http://911-need-code-help.blogspot.nl/2008/10/resize-images-using-phpgd-library.html
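A minimal GD resize sketch along those lines, assuming PHP 5.5+ for imagescale(); the file names and sizes are placeholders:

$src = imagecreatefromjpeg('photo.jpg');
$dst = imagescale($src, 660);             // 660px wide, aspect ratio preserved
imagejpeg($dst, 'photo_resized.jpg', 90); // save at quality 90
imagedestroy($src);
imagedestroy($dst);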
The solution provided by vlzvl works well. However, by changing the order of the code, you can also use it to overwrite an existing image in place.
$image = imagecreatefromjpeg("image.jpg");
unlink("image.jpg");
imagejpeg($image,"image.jpg",50);
This allows you to compress a pre-existing image and store it in the same location with the same filename.
I'm using GAE version 1.9.0 and I want to delete an image from the data storage and upload another image to its location. This is how I'm doing it right now.
unlink("gs://my_storage/images/test.jpg");
move_uploaded_file($_FILES['image']['tmp_name'],'gs://my_storage/images/test.jpg');
And then I want to get the Image serving URL of the latest uploaded image, and I do it like this.
$image_link = CloudStorageTools::getImageServingUrl("gs://my_storage/images/test.jpg");
The issue is that when the deleted image ("test.jpg") and the newly uploaded image ("test.jpg") have the same name, the old file is served when I request the new one (I think it is cached).
Is there anyway I can permanently delete this file without caching it?
You should probably delete the original serving URL before creating another with the same name.
There's a deleteImageServingUrl() method in CloudStorageTools that you can use to do this.
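Putting that together with the code from the question, a minimal sketch, assuming the App Engine PHP SDK's CloudStorageTools class is available:

use google\appengine\api\cloud_storage\CloudStorageTools;

$gs_name = 'gs://my_storage/images/test.jpg';
CloudStorageTools::deleteImageServingUrl($gs_name); // invalidate the old serving URL
unlink($gs_name);
move_uploaded_file($_FILES['image']['tmp_name'], $gs_name);
$image_link = CloudStorageTools::getImageServingUrl($gs_name); // URL for the new file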
Here is how to do it in PHP with Laravel.
$object = $post_media->media_cloud;
// strip the bucket URL prefix (48 characters here) to get the object name
$objectname = substr($object, 48, 100);
$bucket = Storage::disk('gcs')->delete($objectname);
Here, $object gets the Google Cloud image URL from the DB.
Then we take only the object name from that URL, using substr().
Since your config defines the disk as Storage::disk('gcs'), this calls the delete function with the object name.
Hope it helps anyone.
Note: For multiple images, either pass an array of object names or repeat this in a foreach loop, as sketched below.
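A minimal sketch of the foreach variant, assuming $post_media_items is a collection whose items carry the same media_cloud URL field as above:

foreach ($post_media_items as $post_media) {
    // same prefix stripping as above; 48 is the assumed URL prefix length
    $objectname = substr($post_media->media_cloud, 48, 100);
    Storage::disk('gcs')->delete($objectname);
}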
After doing research, I found that it is recommended to save the image name in a database and the actual image in a file directory. Two of the reasons given are that it is safer and that the pictures load a lot quicker. But I don't really get the point of this procedure, because every time I retrieve the pictures I can find out the picture's path in the file directory with the Firebug tool, which could lead to a potential breach.
Am I doing this correctly, or is it not supposed to show the complete file directory path of the image?
PHP for saving image into database
$images = retrieve_images();
insert_images_into_database($images);

function retrieve_images()
{
    $images = explode(',', $_GET['i']);
    return $images;
}

function insert_images_into_database($images)
{
    if (!$images) // There were no images to return
        return false;

    $pdo = get_database_connection();

    foreach ($images as $image)
    {
        $path = Configuration::getUploadUrlPath('medium', 'target');

        $sql = "INSERT INTO `urlImage` (`image_name`) VALUES ( ? )";
        $prepared = $pdo->prepare($sql);
        $prepared->execute(array($image));

        echo ('<div><img src="'. $path . $image . '" /></div>');
    }
}
One method to achieve what you originally intended by storing images in the database is to continue serving the images via a PHP script. This way:
You shield your users from knowing the actual path of an image.
You can, and should, store images outside of your DocumentRoot, so that they cannot be served directly by the web server.
Here's one way you can achieve that through readfile():
<?php
// image.php
// Translating file_id to image path and filename
$path = getPathFromFileID($_GET['file_id']);
$image = getImageNameFromFileID($_GET['file_id']);
// Actual full path to the image file
// Hopefully outside of DocumentRoot
$file = $path.$image;
if (userHasPermission()) {
    // Send the right MIME type before the image data
    // (image/jpeg is an assumption; match it to the actual file type)
    header('Content-Type: image/jpeg');
    readfile($file);
}
else {
    // Better if you are actually outputting an image instead of echoing text,
    // so that the MIME type remains compatible
    echo "You do not have the permission to load the image";
}
exit;
You can then serve the image by using standard HTML:
<img src="image.php?file_id=XXXXX">
You can use .htaccess to protect your images.
See here:
http://michael.theirwinfamily.net/articles/csshtml/protecting-images-using-php-and-htaccess
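As a minimal sketch of the idea from that article, an .htaccess file in the image directory can deny direct requests so the files are only reachable through the PHP script; the syntax below assumes Apache 2.4:

# .htaccess in the protected image directory
<FilesMatch "\.(jpe?g|png|gif)$">
    Require all denied
</FilesMatch>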
I'm also working on a project which stores the URL paths of images in the database (Amazon RDS) and the actual images in a cloud-managed file system (Amazon S3).
The decision to do so came primarily from concerns about price, scalability, and ease of implementation.
Cheaper: Firstly, it is cheaper to store data in a file system (Amazon S3) than in a database (Amazon EC2 / RDS).
Scalable: Since the repository of images may grow pretty big in the future, you also need to ensure that you have adequate capacity to serve them. On this point, it is easier to scale up a filesystem than a database. In fact, if you are using cloud storage like Amazon S3, you don't even need to worry about running out of space, as that is managed for you by Amazon! You just pay for what you use.
Ease of implementation: In terms of implementation, storing images in a file system is much easier. If you were to serve images directly from the database, you would probably need additional logic to convert BLOB data into strings the HTML src attribute can use, and from the look of it, this might take up quite a lot of processing power, which could slow your web server down.
On the other hand, if you use a filesystem, all you need to do is put the image's URL path from the database into the src attribute of the image, and it's all done!
Security: As for the security of the images, I have changed each image name to a timestamp concatenated with a random string, so that it would be really difficult for someone to browse for pictures without knowing the file name.
e.g. 1342772480UexbblEY7Xj3Q4VtZ.png
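A minimal sketch of one way to build such a name; the character set and length are arbitrary choices, and str_shuffle() is not cryptographically secure:

$chars    = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789';
$random   = substr(str_shuffle($chars), 0, 20);
$filename = time() . $random . '.png'; // e.g. 1342772480UexbblEY7Xj3Q4VtZ.png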
Hope this helps!
NB: Please edit my post if you find anything wrong here! This is just my opinion, and everyone is welcome to edit!
I'm writing a web app that at one point allows a user to upload a photo to a flickr account (mine). I want to do this without saving the intermediate image on the server my web app is on.
What I've got so far is a page which implements phpFlickr and accepts a POST from a simple html form. I use $_FILES['file']['tmp_name'] as the path for phpFlickr to use. Here's the code:
<?php
require_once("phpFlickr.php");
$f = new phpFlickr("apikey", "secret", true);
$_SESSION['phpFlickr_auth_redirect'] = "post_upload.php";
$myPerms = $f->auth("write");
$token = $f->auth_checkToken();
$phid = $f->sync_upload($_FILES['file']['tmp_name']);
echo "Uploading Photo..." . $phid;
?>
I'm guessing that the tmp file is being lost because of the redirect that happens when $f->auth("write") is called, but I don't know. Is there a way to preserve it? Is there any way to do this without saving the file to the server?
Answer: There is no way to upload a file directly to Flickr without saving it as an intermediate file first.
I've moved on to using move_uploaded_file() followed by a Flickr API call, and it's working perfectly.
I've also managed to get it to play nice with the excellent jQuery Uploadify plugin, which lets me send multiple files to it in one go.
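A minimal sketch of that flow, reusing the phpFlickr object $f from the question; the uploads/ directory is an assumed writable path on the server:

$target = 'uploads/' . basename($_FILES['file']['name']);
if (move_uploaded_file($_FILES['file']['tmp_name'], $target)) {
    $phid = $f->sync_upload($target);
    unlink($target); // remove the intermediate file once Flickr has it
}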
I am working on building gallery where the user uploads all the images. I had tried to use GD originally but found that it used way too much memory when dealing with images from a digital camera. So I have been looking into ImageMagick and ran into this problem.
My end goal is to resize the image and then upload it. I am not sure if this is possible with ImageMagick or not. I have gotten it to resize the image after upload but it doesn't save the resized image, just the original size.
This is the code I am currently using: ($image is the path to the file on my server)
$resource = NewMagickWand();
MagickReadImage($resource, $image);
MagickSetImageCompressionQuality($resource, 100);
// '0x0' means no crop; '660x500' is the resize geometry
$resource = MagickTransformImage($resource, '0x0', '660x500');
Any input would be appreciated,
Levi
Your code resizes the image in memory, but it never saves the result back to the server (replacing the original image, for example).
To save the image, use:
MagickWriteImage( $resource, 'new_image.jpg' );
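So, putting it together with the question's code, a minimal sketch that overwrites the original file, assuming $image is a writable path:

$resource = MagickTransformImage($resource, '0x0', '660x500');
MagickWriteImage($resource, $image); // save the resized result over the original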