Deleting files from an S3 bucket using SimpleDB - PHP

I am trying to delete files from an Amazon S3 bucket, with the object references stored in SimpleDB. For some reason it does not delete the file, yet it reports that the deletion succeeded.
I am using the S3 class's deleteObject method to delete the file.
Below is the sample code:
$bucketName = "bucket";
$s3 = new S3($awsAccessKey, $awsSecretKey);

if ($s3->deleteObject($bucketName, $url)) {
    echo "deleted url";
} else {
    echo "cannot delete";
}
After execution the script echoes "deleted url", which should only happen when the deletion completes successfully. But when I open the URL again, the file is still there; it has not been deleted.
Please help.
Thanks a lot.

You are using the unofficial S3.php class. The GitHub repo with documentation is here: https://github.com/tpyo/amazon-s3-php-class
This code is not provided by AWS, and should not be confused with either AWS SDK for PHP 1.x or AWS SDK for PHP 2.x.

Make sure you are passing the correct object key as the parameter. My guess is that you are storing files in Amazon S3 and keeping the URL of each file in Amazon SimpleDB. In that case you need to pass deleteObject a valid object key (the file's path within the bucket), not the full URL.
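For example, if SimpleDB stores the full object URL, the key has to be extracted from it before calling deleteObject. A minimal sketch, assuming a virtual-hosted-style URL (the example URL is a placeholder):
$bucketName = "bucket";
// Value read from SimpleDB (placeholder); for a virtual-hosted-style URL the
// path part is the object key, e.g. "photos/image.jpg".
$url = "https://bucket.s3.amazonaws.com/photos/image.jpg";
$objectKey = ltrim(parse_url($url, PHP_URL_PATH), '/');

$s3 = new S3($awsAccessKey, $awsSecretKey);
if ($s3->deleteObject($bucketName, $objectKey)) {
    echo "deleted $objectKey";
} else {
    echo "cannot delete $objectKey";
}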

Related

Downloading directly from S3 vs Downloading through the server

This regards users' uploads, which are hosted in an S3 bucket, and the best approach for downloading them. Currently I use the following:
return response()->streamDownload(function () {
    // Fetch the file
    print Storage::disk('s3')->get('file');
}, 'file-name.ext');
This works just fine, but as you can see, the file is first fetched from S3 and then streamed via the server to the user's browser, which I think is unnecessary work (extra calls to S3 and extra bandwidth), since we could just force-download it from S3's servers instead.
I have two questions here: how do I force-download the file from S3, and, more importantly, am I giving this too much thought? I really hate the idea of downloading the file twice and putting more pressure on the server.
The pre-signed URLs feature was the way to go. Here is the code:
$url = Storage::disk('s3')->temporaryUrl($path, now()->addMinutes(5), [
    'ResponseContentDisposition' => "attachment; filename=$fileName.txt"
]);
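The browser can then be sent straight to that URL so the download is served by S3 rather than by the application server. A minimal sketch of a controller method (method name and parameters are assumptions):
// Sketch: redirect the browser to the pre-signed URL so S3 serves the
// download directly instead of streaming it through the app server.
public function download(string $path, string $fileName)
{
    $url = Storage::disk('s3')->temporaryUrl($path, now()->addMinutes(5), [
        'ResponseContentDisposition' => "attachment; filename=$fileName.txt",
    ]);

    return redirect()->away($url);
}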

Laravel storage:link does not work on Heroku?

So I've been playing around with Heroku and I really like it; it's fast and it just works. However, I have encountered a problem with my gallery app: https://miko-gallery.herokuapp.com . Create a free account, create an album, and try uploading a photo. It will not display. I have run the php artisan storage:link command, but it does not work. What am I missing here?
EDIT
I've just tried something new: I ran heroku run bash and cd'ed into the storage/app/public folder, and it does not contain the images folder that was supposed to be there.
My code for saving the photo is here (works on localhost):
public function store(Request $request)
{
    $ext = $request->file('items')->getClientOriginalExtension();
    $filename = str_random(32).'.'.$ext;
    $file = $request->file('items');
    $path = Storage::disk('local')->putFileAs('public/images/photos', $file, $filename);

    $photo = new Photo();
    $photo->album_id = $request->album_id;
    $photo->caption = $request->caption;
    $photo->extension = $request->file('items')->getClientOriginalExtension();
    $photo->path = $path.'.'.$photo->extension;
    $photo->mime = $request->file('items')->getMimeType();
    $photo->file_name = $filename;
    $photo->save();

    return response()->json($photo, 200);
}
Heroku's filesystem is dyno-local and ephemeral. Any changes you make to it will be lost the next time each dyno restarts. This happens frequently (at least once per day).
As a result, you can't store uploads on the local filesystem. Heroku's official recommendation is to use something like Amazon S3 to store uploads. Laravel supports this out of the box:
Laravel provides a powerful filesystem abstraction thanks to the wonderful Flysystem PHP package by Frank de Jonge. The Laravel Flysystem integration provides simple to use drivers for working with local filesystems, Amazon S3, and Rackspace Cloud Storage. Even better, it's amazingly simple to switch between these storage options as the API remains the same for each system.
Simply add league/flysystem-aws-s3-v3 ~1.0 to your dependencies and then configure it in config/filesystems.php.
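For example, something along these lines; this is only a sketch, and the environment variable names may differ between Laravel versions:
composer require league/flysystem-aws-s3-v3 "~1.0"

// config/filesystems.php (sketch; credentials are read from environment variables)
's3' => [
    'driver' => 's3',
    'key'    => env('AWS_ACCESS_KEY_ID'),
    'secret' => env('AWS_SECRET_ACCESS_KEY'),
    'region' => env('AWS_DEFAULT_REGION'),
    'bucket' => env('AWS_BUCKET'),
],
After that, switching the upload code from Storage::disk('local') to Storage::disk('s3') is enough for uploads to survive dyno restarts, since the Storage API stays the same.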
If you don't have SSH access, you can create a route so the command can be run simply by hitting a URL:
Route::get('/artisan/storage', function () {
    $command = 'storage:link';
    $result = Artisan::call($command);
    return Artisan::output();
});
First, unlink the existing symlink from public/storage if one is already there, as sketched below.
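A sketch of that step, using the same route trick (the route path is an assumption):
Route::get('/artisan/storage-unlink', function () {
    // Remove an existing public/storage symlink before re-running storage:link.
    $link = public_path('storage');
    if (is_link($link)) {
        unlink($link);
    }
    return 'unlinked';
});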

Copying files from Google Drive Server to Own server directly

I'm trying to download files from a Google Drive link directly from Google's servers to my web server, to avoid the 100 max size limit on a PHP POST.
<?php
$link = $_POST["linkToUpload"];
$upload = file_put_contents("uploads/test".rand().".txt", fopen($link, 'r'));
header("Location: index.php");
?>
With a normal link like http://example.com/text.txt it works fine. The problem comes when linking Google Drive, e.g. https://drive.google.com/uc?authuser=0&id=000&export=download. This is a direct link from Google Drive, but it doesn't work. So I tried the link I obtained by downloading the file locally, https://doc-08-bo-docs.googleusercontent.com/docs/securesc/000/000/000/000/000/000?e=download, and it's still not working. Do you think Google is trying to prevent server-to-server copies? Or is there another method to do it?
If you want to fetch files with your own application, you should use the API (Application Programming Interface) to get them.
Have a look at the file download documentation for Google Drive
Example download snippet in PHP:
$fileId = '0BwwA4oUTeiV1UVNwOHItT0xfa2M';
$response = $driveService->files->get($fileId, array(
    'alt' => 'media'
));
$content = $response->getBody()->getContents();
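That snippet assumes an already-authenticated $driveService. A rough sketch of how it could be built with the google/apiclient Composer package (the credentials path is a placeholder), followed by saving the file on the web server:
// Sketch, assuming the google/apiclient package is installed and a
// service-account JSON key exists at the path below (placeholder).
require __DIR__ . '/vendor/autoload.php';

$client = new Google_Client();
$client->setAuthConfig('/path/to/service-account.json');
$client->addScope(Google_Service_Drive::DRIVE_READONLY);
$driveService = new Google_Service_Drive($client);

$fileId = '0BwwA4oUTeiV1UVNwOHItT0xfa2M';
$response = $driveService->files->get($fileId, array('alt' => 'media'));

// Write the downloaded contents to the uploads folder on the web server.
file_put_contents('uploads/test'.rand().'.txt', $response->getBody()->getContents());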

Writing to a GCS bucket using App Engine

The Google docs state:
The GCS stream wrapper is built in to the runtime, and is used when you supply a file name starting with gs://.
When I look into app.yaml, I see where the runtime is selected; I have selected the php runtime. However, when I try to write to my bucket I get an error saying no wrapper is found for gs://. But when I try to write to my bucket using the helloworld.php script provided by Google here https://cloud.google.com/appengine/docs/php/gettingstarted/helloworld and modify it so that it says
<?php
file_put_contents('gs://<app_id>.appspot.com/hello.txt', 'Hello');
I have to deploy the app in order for the write to be successful. I do not understand why I have to deploy the app every time to get the wrapper I need to write to my bucket. Why can I not write to my bucket from an arbitrary PHP script?
Google says
"In the Development Server, when a Google Cloud Storage URI is specified we emulate this functionality by reading and writing to temporary files on the user's local filesystem"
So, "gs://" is simulated locally - to actually write to GCS buckets using the stream wrapper, it has to run from App Engine itself.
Try something like this:
use google\appengine\api\cloud_storage\CloudStorageTools;

$object_url = "gs://bucket/file.png";
// Set the ACL and Content-Type through a stream context for the 'gs' wrapper.
$options = stream_context_create(['gs' => ['acl' => 'private', 'Content-Type' => 'image/png']]);
$my_file = fopen($object_url, 'w', false, $options);
fwrite($my_file, $file_data); // $file_data holds the bytes to write
fclose($my_file);
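Once the object has been written, the CloudStorageTools class imported above can also return a serving URL for it. A small sketch, assuming the same bucket and object name:
// Sketch: fetch a public serving URL for the object written above
// (App Engine PHP SDK; assumes the object is readable by the caller).
$public_url = CloudStorageTools::getPublicUrl("gs://bucket/file.png", true);
echo $public_url;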

Downloading a file from an S3 server using TPYO's PHP class

So, here's what I want to do:
I want to use TPYO's (Undesigned) Amazon S3 class to get a file from my S3 bucket and download it. I'm having a lot of trouble getting it to work.
I'm using this code, but it's not working for me for some reason:
if ($s3->copyObject($bucketName, $filename, $bucketName, $filename, "public-read", array(), array("Content-Type" => "application/octet-stream", "Content-Disposition" => "attachment"))) {
    // Successful
} else {
    // Failed
}
I tried following other questions, but I couldn't manage to get it working.
OK, I found a way of doing it. I basically dismissed the whole idea of using TPYO's S3 PHP class to download a file from my bucket. Instead, I set the objects' content type to application/octet-stream and made the files accessible directly via their URLs, so the browser downloads them itself.
Thanks,
Tom.
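For reference, that effect can also be achieved with the tpyo class by setting the headers at upload time, so the object's public URL already forces a download. A sketch, with bucket and key names as placeholders:
// Sketch: upload with headers that force a download when the object's
// public URL is opened (tpyo S3.php class; names below are placeholders).
$s3 = new S3($awsAccessKey, $awsSecretKey);
$s3->putObject(
    S3::inputFile('/path/to/local-file.ext'),
    $bucketName,
    'downloads/file-name.ext',
    S3::ACL_PUBLIC_READ,
    array(), // meta headers
    array(
        'Content-Type'        => 'application/octet-stream',
        'Content-Disposition' => 'attachment; filename="file-name.ext"',
    )
);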
