I have the contents of an SVG file in a variable in my code, and I would like to store it using the documented Automatic Streaming method of Laravel (basically using putFile or putFileAs methods) in a remote AWS S3 bucket.
The SVG content (a QR code) is dynamically generated by a package, so I do not have it stored in the filesystem beforehand (i.e. I do not have a path):
$svg_contents = '<svg xmlns="http://www.w3.org/2000/svg" version="1.1" ...';
The problem is that Laravel's putFile and putFileAs methods only accept Illuminate\Http\File or Illuminate\Http\UploadedFile instances as arguments. From the Laravel docs:
// Automatically generate a unique ID for filename...
$path = Storage::putFile('photos', new File('/path/to/photo'));
// Manually specify a filename...
$path = Storage::putFileAs('photos', new File('/path/to/photo'), 'photo.jpg');
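For what it's worth, I'm aware that the plain Storage::put() method accepts the raw string contents, but it does not give me the documented automatic streaming (or the automatic unique-filename generation) that putFile() provides. A minimal sketch, with an example path:
// put() takes raw contents, but this is not the streaming putFile() approach
Storage::disk('s3')->put('qrcodes/qrcode.svg', $svg_contents);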
Any thoughts?
In other words, I am asking for a smart way to avoid storing it temporarily in the local filesystem before storing it remotely in S3.
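For reference, the temp-file workaround I'm trying to avoid looks roughly like this (a sketch; the path and filename are just examples, and File is Illuminate\Http\File):
// Write the SVG to a temporary file just to obtain a path...
$tmp = tempnam(sys_get_temp_dir(), 'svg');
file_put_contents($tmp, $svg_contents);
// ...hand that path to putFileAs(), then clean up
$path = Storage::putFileAs('qrcodes', new File($tmp), 'qrcode.svg');
unlink($tmp);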
Thanks in advance.
Related
I am using PHP PDFlib to generate the PDF in my application. I have gone through its Block API, by which we can define blocks in the PDF and populate those blocks with values from the DB.
E.g.:
http://www.pdflib.com/en/pdflib-cookbook/block-handling-and-pps/business-cards/php-business-cards/
I wanted to know: is there a way to use a PDF loaded from an AWS S3 link instead of storing it in the Searchpath? I.e., instead of the line that says
$infile = "businesscard_blocks.pdf";
can we load something like
$infile = "aws/s3/path/businesscard_blocks.pdf";
You should check out the PVF feature of PDFlib. With the PVF you can load resources from memory: you create a named mapping between variable data and a file name.
So in your case you might load the data from AWS via a PHP function:
// Fetch the PDF from S3 into memory (the URL is a placeholder)
$PDFfiledata = file_get_contents('https://XYZ.AWS.com/aws/s3/path/businesscard_blocks.pdf');
// Register the in-memory data under a virtual file name
$p->create_pvf("/pvf/input.pdf", $PDFfiledata, "");
// Open it through the PVF, just like a file on disk
$doc = $p->open_pdi_document("/pvf/input.pdf", "");
Then you can go ahead as if you had loaded the file from disk.
Please see the starter_pvf sample (http://www.pdflib.com/de/pdflib-cookbook/general-programming/starter-pvf/php-starter-pvf/) and the PDFlib 9 Tutorial, chapter 3.1.2, "The PDFlib Virtual File System (PVF)".
I'm using the AWS PHP SDK to save images on S3. The files are saved privately. Then I'm showing the image thumbnails using the S3 file URL in my web application, but since the files are private, the images are displayed as corrupt.
When the user clicks on the name of a file, a modal opens to show the file at a larger size, but the file is displayed as corrupt there as well, due to the same issue.
Now, I know that there are two ways to make this work: 1. make the files public, or 2. generate pre-signed URLs for the files. But I cannot go with either of these options due to the requirements of my project.
My question is: is there any third way to resolve this issue?
I'd highly advise against this, but you could create a script on your own server that pulls the image via the API, caches it, and serves it. You can then restrict access however you like without making the images public.
Example pass-through script:
// $realpath is wherever the file really is, e.g. an internal S3 URL
$headers = get_headers($realpath); // note: get_headers() expects a URL, not a local path
foreach ($headers as $header) {
    header($header);
}
// Uncomment these lines if it's a download you want to do
// $filename = basename($realpath);
// header('Content-Description: File Transfer');
// header("Content-Disposition: attachment; filename={$filename}");
$file = fopen($realpath, 'r');
fpassthru($file);
fclose($file);
exit;
This will barely "touch the sides" and shouldn't delay the appearance of your files too much, but it's still going to take some resources and bandwidth.
You will need to access the files through a script on your server. That script will do some kind of authentication to make sure the request is valid and you actually want the requester to see the file, then fetch the file from S3 using a valid IAM profile that can access the private files, and output the file.
Instead of requesting the file from S3, request it from
http://www.yourdomain.com/fetchimages.php?key=8498439834
Then here is some pseudocode for fetchimages.php:
<?php
// if authorized to get this image
$key = $_GET['key'];
// validate that the key is in the proper format
// get the S3 URL from a database based on $key
// connect to S3 securely and read the file from S3
// output the file
?>
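For concreteness, the S3 part of that might look something like this with the official AWS SDK for PHP (a sketch; the bucket name, region, and the look_up_object_key() database helper are placeholders):
<?php
require 'vendor/autoload.php';

use Aws\S3\S3Client;

// ...authenticate the request, then map the key to an S3 object key...
$objectKey = look_up_object_key($_GET['key']); // hypothetical DB lookup

// Credentials come from the server's IAM profile / default provider chain
$s3 = new S3Client(['version' => 'latest', 'region' => 'us-east-1']);
$result = $s3->getObject([
    'Bucket' => 'my-private-bucket', // placeholder
    'Key'    => $objectKey,
]);

header('Content-Type: ' . $result['ContentType']);
echo $result['Body'];
?>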
As far as I know, you could try to turn your S3 bucket into a "web server" (static website hosting), but then you would probably end up making the files public. If you have some kind of logic to restrict access, you could create a bucket policy instead.
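For example, a referer-based bucket policy might look like this (a sketch; the bucket name and domain are placeholders, and referer checks are weak protection on their own):
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": "*",
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::my-bucket/*",
    "Condition": {"StringLike": {"aws:Referer": "http://www.yourdomain.com/*"}}
  }]
}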
Well, I've uploaded an app to Heroku and discovered that I can't upload files to it. So I started to use Dropbox as a storage option, and I've done a few tests of sending files and retrieving links, and all worked fine.
Now the problem is using the uploadFile() method on DropboxAdapter. It accepts a resource as the file, and it didn't work well for me. I've done a few tests and still found no way. Here is what I am doing; if anyone could point me to a solution, or a direction for this problem, please do. :)
Here is my actual code for updating the user (update the user image and get the link to the file):
$input = $_FILES['picture'];
$inputName = $input['name'];
$image = imagecreatefromstring(file_get_contents($_FILES['picture']['tmp_name']));
Storage::disk('dropbox')->putStream('/avatars/' . $inputName, $image);
// $data = Storage::disk('dropbox')->getLink('/avatars/' . $inputName);
return dd($image);
In some tests, using fopen() on a file on disk and doing the same process, I've noticed this:
This is when I used fopen() on a file stored in the public folder:
http://i.imgur.com/07ZiZD5.png
And this is when I die(var_dump())'d the $image that I tried to create (a suggestion from these two links: PHP temporary file upload not valid Image resource, Dropbox uploading within script):
http://i.imgur.com/pSv6l1k.png
Any idea?
Try a simple fopen on the uploaded file:
$image = fopen($_FILES['picture']['tmp_name'], 'r');
https://www.php.net/manual/en/function.fopen.php
You don't need an image stream, just a file stream, which fopen provides.
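Put together with the code from the question, the upload might look like this (a sketch; the disk name and path come from the original snippet):
$inputName = $_FILES['picture']['name'];
// fopen() gives a file stream, which is what putStream() expects
$stream = fopen($_FILES['picture']['tmp_name'], 'r');
Storage::disk('dropbox')->putStream('/avatars/' . $inputName, $stream);
// The adapter may close the stream itself, so guard the cleanup
if (is_resource($stream)) {
    fclose($stream);
}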
I'm using GAE version 1.9.0, and I want to delete an image from Cloud Storage and upload another image in its place. This is how I'm doing it right now:
unlink("gs://my_storage/images/test.jpg");
move_uploaded_file($_FILES['image']['tmp_name'],'gs://my_storage/images/test.jpg');
And then I want to get the image serving URL of the latest uploaded image, which I do like this:
$image_link = CloudStorageTools::getImageServingUrl("gs://my_storage/images/test.jpg");
The issue is that when the deleted image ("test.jpg") and the uploaded image ("test.jpg") have the same name, the old file is served when I request the newly uploaded one (I think it is cached).
Is there any way I can permanently delete this file so that it is not cached?
You should probably delete the original serving URL before creating another with the same name.
There's a deleteImageServingUrl() method in CloudStorageTools that you can use to do this.
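Applied to the snippet from the question, that might look like this (a sketch; it assumes the gs:// path from the original code and the standard GAE CloudStorageTools class):
use google\appengine\api\cloud_storage\CloudStorageTools;

// Drop the cached serving URL before replacing the object
CloudStorageTools::deleteImageServingUrl("gs://my_storage/images/test.jpg");
unlink("gs://my_storage/images/test.jpg");
move_uploaded_file($_FILES['image']['tmp_name'], 'gs://my_storage/images/test.jpg');
// Request a fresh serving URL for the new file
$image_link = CloudStorageTools::getImageServingUrl("gs://my_storage/images/test.jpg");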
Here is how to do it in PHP Laravel.
// Google Cloud image URL fetched from the DB
$object = $post_media->media_cloud;
// Strip the URL prefix (assumed here to be 48 characters long) to get the object name
$objectname = substr($object, 48, 100);
$bucket = Storage::disk('gcs')->delete($objectname);
Here $object holds the Google Cloud image URL from the DB.
Then we take only the object name from that URL, via substr().
Since your config defines the disk as Storage::disk('gcs'), this calls delete() with the object name.
Hope it helps someone.
Note: for multiple images, either pass an array of object names to delete(), as sketched below, or repeat the call in a foreach loop.
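A minimal sketch of the array form (the object names are placeholders):
// Delete several objects in one call
Storage::disk('gcs')->delete([$objectnameOne, $objectnameTwo]);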
After doing some research, I found that it is recommended to save the image name in the database and the actual image in a file directory. Two of the reasons given are that it is safer and that the pictures load a lot quicker. But I don't really get the point of this procedure, because every time I retrieve a picture I can use Firebug to find out its path in the file directory, which could lead to a potential breach.
Am I doing this correctly, or is the complete file directory path of the image not supposed to be visible?
PHP for saving images into the database:
$images = retrieve_images();
insert_images_into_database($images);

function retrieve_images()
{
    $images = explode(',', $_GET['i']);
    return $images;
}

function insert_images_into_database($images)
{
    if (!$images) // There were no images to return
        return false;

    $pdo = get_database_connection();
    $path = Configuration::getUploadUrlPath('medium', 'target');
    // Prepare once, execute once per image
    $sql = "INSERT INTO `urlImage` (`image_name`) VALUES ( ? )";
    $prepared = $pdo->prepare($sql);

    foreach ($images as $image)
    {
        $prepared->execute(array($image));
        echo ('<div><img src="' . $path . $image . '" /></div>');
    }
}
One method to achieve what you originally intended by storing images in the database is to keep serving the images via a PHP script. That way:
Your users are shielded from knowing the actual path of an image.
You can, and should, store images outside of your DocumentRoot, so that they cannot be served directly by the web server.
Here's one way you can achieve that through readfile():
<?php
// image.php
// Translating file_id to image path and filename
$path = getPathFromFileID($_GET['file_id']);
$image = getImageNameFromFileID($_GET['file_id']);
// Actual full path to the image file
// Hopefully outside of DocumentRoot
$file = $path . $image;
if (userHasPermission()) {
    // Send the correct MIME type before the contents
    header('Content-Type: ' . mime_content_type($file));
    readfile($file);
}
else {
    // Better if you are actually outputting an image instead of echoing text
    // So that the MIME type remains compatible
    echo "You do not have the permission to load the image";
}
exit;
You can then serve the image by using standard HTML:
<img src="image.php?file_id=XXXXX">
You can use .htaccess to protect your images.
See here:
http://michael.theirwinfamily.net/articles/csshtml/protecting-images-using-php-and-htaccess
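For illustration, a common .htaccess pattern for blocking image requests that don't originate from your own pages looks something like this (a sketch; the domain is a placeholder, and the linked article may take a different approach):
RewriteEngine On
RewriteCond %{HTTP_REFERER} !^$
RewriteCond %{HTTP_REFERER} !^https?://(www\.)?yourdomain\.com/ [NC]
RewriteRule \.(gif|jpe?g|png)$ - [F]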
I'm also working on a project which stores the URL paths of images in a database (Amazon RDS) and the actual images in a cloud-managed file system, Amazon S3.
The decision to do so came primarily from concerns about price, scalability, and ease of implementation.
Cheaper: Firstly, it is cheaper to store data in a file system (Amazon S3) than in a database (Amazon EC2 / RDS).
Scalable: Since the repository of images may grow pretty big in the future, you also need to ensure that you have adequate capacity to serve them. On this point, it is easier to scale up a file system than a database. In fact, if you are using cloud storage like Amazon S3, you don't even need to worry about running out of space, as it is managed for you by Amazon; you just pay for what you use.
Ease of implementation: In terms of implementation, storing images in a file system is much easier. If you were to serve images directly from the database, you would probably need to implement additional logic to convert the BLOB data into something an <img> src attribute can use, and from the look of it, that could take up substantial processing power and slow your web server down.
On the other hand, if you use a file system, all you need to do is put the image's URL path from the database into the src attribute of the image, and it's all done!
Security: As for the security of the images, I have changed each image name to a timestamp concatenated with a random string, so that it is really difficult for someone to browse for pictures without knowing the file names,
e.g. 1342772480UexbblEY7Xj3Q4VtZ.png
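For illustration, a name in that format could be generated with something like this (a sketch; random_bytes() requires PHP 7+, and any sufficiently random source would do):
// timestamp + random string, then the extension
$filename = time() . bin2hex(random_bytes(13)) . '.png';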
Hope this helps!
NB: Please edit my post if you find anything wrong here! This is just my opinion, and everyone is welcome to edit!