I am trying to store an image in an S3 bucket using Laravel 5.5. I am new to this and I am stuck here. What I am trying:
My Controller:
public function imageUploadPost(Request $request)
{
    $this->validate($request, [
        'image' => 'required|image|mimes:jpeg,png,jpg,gif,svg|max:2048',
    ]);

    $image = $request->file('image');
    $imageName = time().'.'.$image->getClientOriginalExtension();

    Storage::disk('s3')->put($imageName, file_get_contents($image), 'public');
    $imageName = Storage::disk('s3')->url($imageName);

    return back()
        ->with('success', 'Image Uploaded successfully.')
        ->with('path', $imageName);
}
My routes:
Route::post('s3-image-upload', 'S3ImageController@imageUploadPost');
My config/filesystems.php
's3' => [
    'driver' => 's3',
    'key'    => env('AccessKeyID'),
    'secret' => env('SecretAccessKey'),
    'region' => env('region'),
    'bucket' => env('mybucket'),
],
And I am getting these values from my .env file, which looks like:
AccessKeyID=xyz
SecretAccessKey=xyz
region=us-east-2
mybucket=spikessales
Now when I upload a file and hit the upload button, it says:
Encountered a permanent redirect while requesting https://spikessales.s3.us-east-2.amazonaws.com/1519812331.jpg. Are you sure you are using the correct region for this bucket?
Here I am confused about how to set my region. I have created the bucket (spikessales), but I don't know how to specify the region: I am using the region that appears in the AWS console URL, which looks like:
https://s3.console.aws.amazon.com/s3/home?region=us-east-2
i.e. the us-east-2 at the end of that URL, as you can see in my .env file. But the region I selected when creating the bucket is US East (N. Virginia). Please tell me how to write the region correctly.
Any help will be highly appreciated!
In your AWS API call, set the region from your AWS S3 settings (it is shown right in the S3 bucket GUI), and do not pay attention to the region shown in the URL.
In my AWS S3 console, for example, the URL also shows region=us-east-2, although I set up the EU (Frankfurt) region in the S3 settings.
To find your S3 bucket's region, follow these steps from this zappysys.com article:
1. Open your AWS Console by visiting https://console.aws.amazon.com/
2. From the dashboard, click on the S3 option (or visit https://console.aws.amazon.com/s3/home)
3. You will see all your buckets in the list on the left
4. Click on the desired S3 bucket name
5. Click on the Properties tab at the top
6. You will now see the Region for the selected bucket, along with many other properties
You can then change the region in your .env file based on what you see there.
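For example, if the Properties tab shows US East (N. Virginia), the matching region code is us-east-1, not the us-east-2 that happens to appear in the console URL. A minimal sketch of the corrected disk config (the AWS_* names below are the conventional Laravel env keys, an assumption; your custom names work too, as long as .env and config agree):

// config/filesystems.php
's3' => [
    'driver' => 's3',
    'key'    => env('AWS_ACCESS_KEY_ID'),
    'secret' => env('AWS_SECRET_ACCESS_KEY'),
    // US East (N. Virginia) is "us-east-1"; the region in the console
    // URL reflects your browser session, not the bucket's location.
    'region' => env('AWS_DEFAULT_REGION', 'us-east-1'),
    'bucket' => env('AWS_BUCKET'),
],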
I have a Flutter form which contains both textual data and an image. I am using the http package to send a POST request to the Laravel backend, which has a route in api.php like:
Route::post('/locations', 'LocationController@store');
At first, since I had never tried to upload an image to Laravel, I disabled my image input just to check that things were working, and I was able to send a POST request from my frontend that returned 201 Created, with the record successfully created in the database. I then added the first two lines below to my controller to save the image, and they are causing my requests to fail with a 500 server error (the remaining lines were already in place and working with previous requests):
public function store(Request $request)
{
    $locationName = $request["name"];

    // Image sent from Flutter like:
    // _image != null ? base64Encode(_image.readAsBytesSync()) : ''
    $data = base64_decode($request["image"]);

    // public_images is defined below
    Storage::disk('public_images')->put("${$locationName}.jpg", $data);

    return location::create([
        'name'    => $request["name"],
        'time'    => $request["time"],
        'package' => $request["package"],
        'summary' => $request["summary"],
        'info'    => $request["info"],
        'image'   => $request["image"]
    ]);
}
I have defined the public_images disk (in config/filesystems.php) as follows. (I have to save in the public directory because my server doesn't work with symlinks, but that isn't my problem):
'public_images' => [
    'driver' => 'local',
    'root'   => public_path() . '/images',
],
Now what I want is to save the image to the disk and get the path to store in the database table, so that when accessing it from Flutter I only need the API URL plus that path to display the image as a network resource. So what is the right way to upload a file from a Flutter app to a Laravel backend? Or am I doing it all wrong?
Also, since I am new to file uploads to Laravel from Flutter: what generally are the best practices for uploading files to a Laravel server? Thanks in advance!
You may want to check whether the request object has a file and whether the file is valid, then store the file in the public directory and store the URI in the DB.
if ($request->hasFile('image') && $request->file('image')->isValid()) {
    $file = $request->file('image');
    $filePath = str_replace('public/', '', $file->storeAs('public/dir', 'name.' . $file->guessExtension()));
}
Now, with Laravel's url()/asset() helpers, you can generate the full URL of the stored file and use it for loading, streaming, or forcing a download.
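Putting that together with the store() method from the question, a minimal sketch might look like this (the Location model, field names, and the public_images disk come from the question; the multipart upload from Flutter, e.g. via http.MultipartRequest instead of a base64 string, is an assumption):

public function store(Request $request)
{
    $path = null;

    // Only touch the filesystem when a valid file was actually sent.
    if ($request->hasFile('image') && $request->file('image')->isValid()) {
        $file = $request->file('image');

        // Store on the public_images disk (public/images) and keep the
        // relative path, so the API base URL + path resolves to the file.
        $path = $file->storeAs('', $request['name'] . '.' . $file->guessExtension(), 'public_images');
    }

    return Location::create([
        'name'    => $request['name'],
        'time'    => $request['time'],
        'package' => $request['package'],
        'summary' => $request['summary'],
        'info'    => $request['info'],
        'image'   => $path, // e.g. "Kathmandu.jpg" under public/images
    ]);
}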
I am using laravel-google-cloud-storage to store images and retrieve them one by one. Is it possible to get all the folders and images from Google Cloud Storage? If so, how do I do it?
I was trying to use flysystem-google-cloud-storage to retrieve them, but its docs are similar to the first link I provided.
What I want to achieve is to select an image from Google Cloud Storage, browsing all the folders and images in it, and put it in my form instead of selecting an image from my local machine.
UPDATE:
This is what I have tried so far, based on this documentation.
$storageClient = new StorageClient([
    'projectId'   => 'project-id',
    'keyFilePath' => 'myKeyFile.json',
]);
$bucket = $storageClient->bucket('my-bucket');
$buckets = $storageClient->buckets();
Then I tried adding a foreach, which returns empty, even though I have 6 folders in my bucket:
foreach ($buckets as $bucket) {
    dd($bucket->name());
}
It's been a week and my post has not been answered, so I'll just post and share what I did since last week.
I am using Laravel 5.4 at the moment.
So I installed laravel-google-cloud-storage and flysystem-google-cloud-storage in my application.
I created a separate controller, since I am retrieving the images from Google Cloud Storage via Ajax.
All you need to do is get your Google Cloud Storage credentials, which can be found in your Google Cloud Storage Dashboard: look for the APIs card and click the link labeled "Go to APIs overview", then Credentials. Download the credentials, which come as a JSON file, and put it in your project root or anywhere you want (I still don't know where I should properly put this file). Next, get your Google Cloud Storage project ID, which is also shown on the Dashboard.
Then this is the setup in my controller that connects my Laravel application to Google Cloud Storage, with which I am able to upload, retrieve, delete, and copy files.
use Google\Cloud\Storage\StorageClient;
use League\Flysystem\Filesystem;
use League\Flysystem\Plugin\GetWithMetadata;
use Superbalist\Flysystem\GoogleStorage\GoogleStorageAdapter;

class GoogleStorageController extends Controller
{
    public function index() // the method called via Ajax
    {
        $storageClient = new StorageClient([
            'projectId'   => 'YOUR-PROJECT-ID',
            'keyFilePath' => '/path/of/your/keyfile.json',
        ]);

        // name of your bucket
        $bucket = $storageClient->bucket('your-bucket-name');

        $adapter = new GoogleStorageAdapter($storageClient, $bucket);
        $filesystem = new Filesystem($adapter);

        // this line will retrieve all your folders and images
        $contents = $filesystem->listContents();

        // you can also get a specific directory and the images inside it
        // by passing the directory name as a parameter
        $contents = $filesystem->listContents('directory-name');

        return response()->json([
            'contents' => $contents,
        ]);
    }
}
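Each entry returned by listContents() is an array with 'type' and 'path' keys (standard Flysystem output), so you can separate folders from images before returning the JSON, for example:

// Split the listing into directories and files.
$folders = array_filter($contents, function ($item) {
    return $item['type'] === 'dir';
});
$images = array_filter($contents, function ($item) {
    return $item['type'] === 'file';
});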
I am in the process of creating a "Content Management System" for a startup company. I have a Post.php model in my project; the following code snippet is taken from the create method:
if (Request::file('display_image') != null) {
    Storage::disk('s3')->put('/app/images/blog/'.$post->slug.'.jpg', file_get_contents(Request::file('display_image')));

    $bucket = Config::get('filesystems.disks.s3.bucket');
    $s3 = Storage::disk('s3');

    $command = $s3->getDriver()->getAdapter()->getClient()->getCommand('GetObject', [
        'Bucket' => Config::get('filesystems.disks.s3.bucket'),
        'Key'    => '/app/images/blog/'.$post->slug.'.jpg',
        'ResponseContentDisposition' => 'attachment;'
    ]);

    $request = $s3->getDriver()->getAdapter()->getClient()->createPresignedRequest($command, '+5 minutes');
    $image_url = (string) $request->getUri();
    $post->display_image = $image_url;
}
The above code checks if there is a display_image file input in the request object. If it finds a file, it uploads it directly to AWS S3 storage. I want to save the link to the file in the database so I can link to it later in my views. Hence I use this piece of code:
$request = $s3->getDriver()->getAdapter()->getClient()->createPresignedRequest($command, '+5 minutes');
$image_url = (string) $request->getUri();
$post->display_image = $image_url;
I get a URL; the only problem is that whenever I visit the $post->display_image URL I get a 403 Permission Denied. Obviously no authentication takes place when the image URL is used directly.
How do I solve this? I need to be able to link all my images/files from Amazon S3 into the front-end interface of the website.
Those presigned URLs stop working once the '+5 minutes' expiry you passed to createPresignedRequest elapses, which is why the stored link returns 403 later. You could open up those S3 objects to public viewing instead, but you probably wouldn't want to: you have to pay for the outgoing bandwidth every time someone views one of those images.
You might want to check out Glide, a pretty simple-to-use image library that supports S3. Make sure to reduce the load requirements on your server and wallet by setting caching headers on the images you serve.
Alternatively, you could use a CloudFront distribution as a caching proxy in front of your S3 bucket.
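If you do decide public objects are acceptable, a minimal sketch of the upload side using Laravel's S3 disk (the blog key prefix and slug naming come from the question; storing the key without a leading slash is an assumption, since a leading slash becomes part of the object name):

// Upload with public visibility instead of a presigned URL; anyone
// with the link can then read the object, so only do this for assets
// that are meant to be public.
$key = 'app/images/blog/'.$post->slug.'.jpg';
Storage::disk('s3')->put($key, file_get_contents(Request::file('display_image')), 'public');

// A stable, non-expiring URL that is safe to persist in the database.
$post->display_image = Storage::disk('s3')->url($key);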
I'm using the AWS PHP SDK to upload a file to S3 and then transcode it with Elastic Transcoder.
On the first pass everything works fine; the putObject command overwrites the old file (always named the same) on S3:
$s3->putObject([
    'Bucket'     => Config::get('app.aws.S3.bucket'),
    'Key'        => $key,
    'SourceFile' => $path,
    'Metadata'   => [
        'title' => Input::get('title')
    ]
]);
However, when creating a second transcoding job, I get the error:
The specified object could not be saved in the specified bucket because an object by that name already exists
The transcoder role has full S3 access. Is there a way around this, or will I have to delete the files using the SDK every time before they are transcoded?
My create job:
$result = $transcoder->createJob([
    'PipelineId' => Config::get('app.aws.ElasticTranscoder.PipelineId'),
    'Input' => [
        'Key' => $key
    ],
    'Output' => [
        'Key'              => 'videos/'.$user.'/'.$output_key,
        'ThumbnailPattern' => 'videos/'.$user.'/thumb-{count}',
        'Rotate'           => '0',
        'PresetId'         => Config::get('app.aws.ElasticTranscoder.PresetId')
    ],
]);
The Amazon Elastic Transcoder service documents that this is the expected behavior here: http://docs.aws.amazon.com/elastictranscoder/latest/developerguide/job-settings.html#job-settings-output-key.
If your workflow requires you to overwrite the same key, then it sounds like you should have the job output somewhere unique and then issue an S3 CopyObject operation to overwrite the older file.
If you enable versioning on the S3 bucket, then Amazon Elastic Transcoder will be happy overwriting the same key with the transcoded version.
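Enabling versioning is a one-time bucket setting: you can flip it on in the console, or with the SDK. A minimal sketch, assuming $s3 is a configured AWS SDK for PHP v3 S3Client:

// Turn on versioning so repeated writes to the same key create new
// object versions instead of failing the transcoder job.
$s3->putBucketVersioning([
    'Bucket' => Config::get('app.aws.S3.bucket'),
    'VersioningConfiguration' => ['Status' => 'Enabled'],
]);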
I can think of two ways to implement it:
1. Create two buckets: one for temporary storage (where the file is uploaded) and another where the transcoded file is placed. After transcoding, when the new file has been created, you can delete the temp file.
2. Use a single bucket and upload the file with some suffix/prefix. Create the transcoded file in the same bucket, removing the prefix/suffix you used for the temp name.
In both cases, for automated deletion of the uploaded files you can use a Lambda function with S3 notifications.
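The CopyObject route suggested above (write the job output to a unique temporary key, then overwrite the original) might look like this; $s3 is assumed to be a configured S3Client, and the tmp- prefix is illustrative:

// Promote the transcoded output to the final key...
$s3->copyObject([
    'Bucket'     => $bucket,
    'Key'        => 'videos/'.$user.'/'.$output_key,
    'CopySource' => $bucket.'/videos/'.$user.'/tmp-'.$output_key,
]);

// ...then remove the temporary object.
$s3->deleteObject([
    'Bucket' => $bucket,
    'Key'    => 'videos/'.$user.'/tmp-'.$output_key,
]);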
I'm trying to upload files to my bucket using a piece of code like this:
$s3 = new AmazonS3();
$bucket = 'host.domain.ext'; // My bucket name matches my host's CNAME

// Open a file resource
$file_resource = fopen('picture.jpg', 'r');

// Upload the file
$response = $s3->create_object($bucket, 'picture.jpg', array(
    'fileUpload' => $file_resource,
    'acl'        => AmazonS3::ACL_PUBLIC,
    'headers'    => array(
        'Cache-Control' => 'public, max-age=86400',
    ),
));
But I get the "NoSuchBucket" error, the weird thing is that when I query my S3 account to retrieve the list of buckets, I get the exact same name I'm using for uploading host.domain.ext.
I tried creating a different bucket with no dots in the name and it works perfectly... yes, my problem is my bucket name, but I need to keep the FQDN convention in order to map it as a static file server on the Internet. Does anyone know if is there any escaping I can do to my bucket name before sending it to the API to prevent the dot crash? I've already tried regular expressions and got the same result.
I'd try using path-style URLs, as suggested in the comments in a related AWS forum thread:
$s3 = new AmazonS3();
$s3->path_style = true;
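Path-style addressing sends requests to s3.amazonaws.com/bucket/key instead of bucket.s3.amazonaws.com/key, which sidesteps the DNS and wildcard-SSL quirks that dotted bucket names like host.domain.ext run into. Combined with the upload from the question, a sketch:

$s3 = new AmazonS3();
// Path-style URLs keep the dotted bucket name in the path, so it never
// has to resolve as a subdomain of s3.amazonaws.com.
$s3->path_style = true;

$response = $s3->create_object('host.domain.ext', 'picture.jpg', array(
    'fileUpload' => fopen('picture.jpg', 'r'),
    'acl'        => AmazonS3::ACL_PUBLIC,
));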