Change S3 Endpoint in Laravel - php

I used the filesystems.php file to configure my S3.
When I try to put content in the bucket I receive this error:
Encountered a permanent redirect while requesting https://s3-us-west-2.amazonaws.com/MYBUCKET... Are you sure you are using the correct region for this bucket?
I then try to access the URL and I get this message:
The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint.
On the same page I get:
<Endpoint>s3.amazonaws.com</Endpoint>
How can I remove the region from the URL that Laravel generates?

You can create a custom service provider like this:
use Illuminate\Support\ServiceProvider;
use Aws\S3\S3Client;
use League\Flysystem\AwsS3v3\AwsS3Adapter;
use League\Flysystem\Filesystem;
use Aws\Laravel\AwsServiceProvider;
use Storage;

class AwsS3ServiceProvider extends ServiceProvider
{
    /**
     * Perform post-registration booting of services.
     *
     * @return void
     */
    public function boot()
    {
        Storage::extend('s3', function ($app, $config) {
            $client = new S3Client([
                'credentials' => [
                    'key'    => $config['key'],
                    'secret' => $config['secret'],
                ],
                'region'    => $config['region'],
                'version'   => $config['version'],
                'endpoint'  => $config['endpoint'],
                'ua_append' => [
                    'L5MOD/' . AwsServiceProvider::VERSION,
                ],
            ]);

            return new Filesystem(new AwsS3Adapter($client, $config['bucket_name']));
        });
    }

    /**
     * Register bindings in the container.
     *
     * @return void
     */
    public function register()
    {
        //
    }
}
And add an endpoint variable to config/filesystems.php as well:
's3' => [
    'driver' => 's3',
    'key' => env('AWS_KEY'),
    'secret' => env('AWS_SECRET'),
    'region' => env('AWS_REGION'),
    'version' => 'latest',
    'endpoint' => env('AWS_ENDPOINT'),
    'bucket_name' => env('AWS_BUCKET_NAME'),
],
Look at the docs for details on how to extend the Storage facade.
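For completeness, the provider also has to be registered before Laravel will call its boot() method. A minimal sketch, assuming the class above is saved under app/Providers with the namespace App\Providers (the namespace is an assumption, since the snippet omits it) and a Laravel 5.x config/app.php:

// config/app.php
'providers' => [
    // ...
    App\Providers\AwsS3ServiceProvider::class,
],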

The simplest approach is to create a new disk that uses the S3 driver in config/filesystems.php. You don't need to create a service provider; the S3 driver will pick up the endpoint from the disk config if it is supplied.
'spaces' => [
    'driver' => 's3',
    'key' => env('DO_SPACES_KEY'),
    'secret' => env('DO_SPACES_SECRET'),
    'endpoint' => 'https://nyc3.digitaloceanspaces.com',
    'region' => 'nyc3',
    'bucket' => env('DO_SPACES_BUCKET'),
],
Set the DO_SPACES_KEY, DO_SPACES_SECRET and DO_SPACES_BUCKET environment variables to the appropriate values.
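For illustration, once the disk is defined it can be used through the Storage facade like any other disk (a small sketch; the file name and contents are placeholders):

// Assumes the 'spaces' disk defined above and the Storage facade.
Storage::disk('spaces')->put('hello.txt', 'Hello from Spaces');

// Read it back.
$contents = Storage::disk('spaces')->get('hello.txt');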
Source: https://laracasts.com/discuss/channels/laravel/custom-file-driver-digital-ocean-spaces?page=1

You can also use this package: https://github.com/aws/aws-sdk-php-laravel
With this package you can specify your settings in its config file as desired.

It may also be that the bucket simply does not exist in AWS. Create the bucket, use the same bucket name in your filesystem config, and the issue will be resolved.
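For reference, the bucket can be created from the AWS console or programmatically. A sketch with the AWS SDK for PHP v3, assuming an Aws\S3\S3Client instance $s3Client configured for us-west-2 (the bucket name is a placeholder):

// Create the bucket in us-west-2; for us-east-1, omit CreateBucketConfiguration.
$s3Client->createBucket([
    'Bucket' => 'MYBUCKET',
    'CreateBucketConfiguration' => ['LocationConstraint' => 'us-west-2'],
]);

// Wait until the bucket actually exists before uploading to it.
$s3Client->waitUntil('BucketExists', ['Bucket' => 'MYBUCKET']);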

Related

Credentials must be an instance of Aws\Credentials\CredentialsInterface error on PHP SDK with SQS

I am trying to delete a message from SQS in AWS using the php SDK. I have the following configuration.
$sqsClient = new SqsClient([
    'version' => '2012-11-05',
    'region' => 'us-east-1',
    'credentials' => [
        'key' => KEY,
        'secret' => SECRET,
    ],
]);
Then I am trying to delete it like the following:
$sqsClient->deleteMessage([
    'QueueUrl' => $queueUrl,          // the queue URL
    'ReceiptHandle' => $handle,       // the receipt handle of the message
]);
I am getting the following error on initialization:
Credentials must be an instance of Aws\Credentials\CredentialsInterface, an associative array that contains "key", "secret", and an optional "token" key-value pairs, a credentials provider function, or false.
The credentials I am using are correct. Before, I was not passing the config at all and the same error still appeared. How can this be fixed?
I found this code example and I think your code is correct.
How about creating an instance of Credentials?
You can use the following code to initialize it:
$credentials = new Aws\Credentials\Credentials(KEY, SECRET);

$sqsClient = new SqsClient([
    'version' => '2012-11-05',
    'region' => 'us-east-1',
    'credentials' => $credentials,
]);
Check this document.
For security reasons, I also recommend using a shared credentials file or environment variables rather than hard-coding the keys.
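For illustration, a minimal sketch of that approach with the AWS SDK for PHP v3: if no 'credentials' entry is passed, the SDK resolves credentials from its default provider chain (the AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY environment variables, the ~/.aws/credentials file, or an instance profile), so nothing sensitive has to live in the code.

use Aws\Sqs\SqsClient;

// No 'credentials' key: the SDK falls back to environment variables,
// the shared credentials file, or an instance profile, in that order.
$sqsClient = new SqsClient([
    'version' => '2012-11-05',
    'region'  => 'us-east-1',
]);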
I hope all is well.

How to fix upload image to s3 using Laravel

I am trying to upload an image to S3 using Laravel, but I receive a runtime error. I am using Laravel 5.8 and PHP 7, and through a REST API (tested with Postman) I send the image as base64 in the request body.
I receive a base64 image, and I must upload it to S3 and get the resulting URL.
public function store(Request $request)
{
    $s3Client = new S3Client([
        'region' => 'us-east-2',
        'version' => 'latest',
        'credentials' => [
            'key' => $key,
            'secret' => $secret,
        ],
    ]);

    $base64_str = substr($input['base64'], strpos($input['base64'], ",") + 1);
    $image = base64_decode($base64_str);

    $result = $s3Client->putObject([
        'Bucket' => 's3-galgun',
        'Key' => 'saraza.jpg',
        'SourceFile' => $image,
    ]);

    return $this->sendResponse($result['ObjectURL'], 'message.', 'ObjectURL');
}
Says:
RuntimeException: Unable to open u�Z�f�{��zڱ��� .......
The SourceFile parameter expects the path of a file to upload to S3, not the binary data itself.
You can use the Body parameter instead of SourceFile, or save the file to a local temporary location and pass that path as SourceFile.
Like this:
public function store(Request $request)
{
    $s3Client = new S3Client([
        'region' => 'us-east-2',
        'version' => 'latest',
        'credentials' => [
            'key' => $key,
            'secret' => $secret,
        ],
    ]);

    $base64_str = substr($input['base64'], strpos($input['base64'], ",") + 1);
    $image = base64_decode($base64_str);

    Storage::disk('local')->put("/temp/saraza.jpg", $image);

    $result = $s3Client->putObject([
        'Bucket' => 's3-galgun',
        'Key' => 'saraza.jpg',
        'SourceFile' => Storage::disk('local')->path('/temp/saraza.jpg'),
    ]);

    Storage::delete('/temp/saraza.jpg');

    return $this->sendResponse($result['ObjectURL'], 'message.', 'ObjectURL');
}
And, if you're using S3 with Laravel, you should consider the S3 filesystem driver instead of accessing the S3Client manually in your controller.
To do this, add the S3 driver with composer require league/flysystem-aws-s3-v3 and put your S3 IAM settings in .env or config/filesystems.php.
Then update the default filesystem in config/filesystems.php, or indicate the disk when using the Storage facade: Storage::disk('s3').
See the documentation for details; a short sketch follows.
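For illustration, a minimal sketch of the same upload done through the Storage facade instead of a hand-built S3Client (assuming an 's3' disk is configured in config/filesystems.php; the key name 'saraza.jpg' is reused from the question):

// Decode the base64 payload as before.
$base64_str = substr($request->input('base64'), strpos($request->input('base64'), ',') + 1);
$image = base64_decode($base64_str);

// Write the raw bytes straight to the configured S3 disk.
Storage::disk('s3')->put('saraza.jpg', $image);

// URL of the stored object (requires the object to be publicly readable).
$url = Storage::disk('s3')->url('saraza.jpg');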
Instead of SourceFile you have to use Body. SourceFile is a path to a file, but you do not have a file; you have a base64-encoded image. That is why you need to use Body, which can be a string. More here: https://docs.aws.amazon.com/aws-sdk-php/v3/api/api-s3-2006-03-01.html#putobject
Fixed version:
public function store(Request $request)
{
    $s3Client = new S3Client([
        'region' => 'us-east-2',
        'version' => 'latest',
        'credentials' => [
            'key' => $key,
            'secret' => $secret,
        ],
    ]);

    $base64_str = substr($input['base64'], strpos($input['base64'], ",") + 1);
    $image = base64_decode($base64_str);

    $result = $s3Client->putObject([
        'Bucket' => 's3-galgun',
        'Key' => 'saraza.jpg',
        'Body' => $image,
    ]);

    return $this->sendResponse($result['ObjectURL'], 'message.', 'ObjectURL');
}
A very simple way to upload any file to AWS S3 storage.
First, check your .env settings:
AWS_ACCESS_KEY_ID=your key
AWS_SECRET_ACCESS_KEY= your access key
AWS_DEFAULT_REGION=ap-south-1
AWS_BUCKET=your bucket name
AWS_URL=Your URL
Second, config/filesystems.php:
's3' => [
    'driver' => 's3',
    'key' => env('AWS_ACCESS_KEY_ID'),
    'secret' => env('AWS_SECRET_ACCESS_KEY'),
    'region' => env('AWS_DEFAULT_REGION'),
    'bucket' => env('AWS_BUCKET'),
    'url' => env('AWS_URL'),
    // 'visibility' => 'public', // do not use this line for security purposes; try to keep the bucket private
],
Now to the main code.
Upload a binary file from an HTML form:
$imageFile = $req->file('userDoc');

// Random file name built from the uploaded file's extension.
$fileName = 'sh_' . mt_rand(11111, 99999) . '.' . $imageFile->clientExtension();
$s3path = "/uploads/" . $this::$SchoolCode . "/" . $fileName;

Storage::disk('s3')->put($s3path, file_get_contents($req->file('userDoc')));
Upload a base64 file.
For a public bucket, or if you want to keep the file public:
$binary_data = base64_decode($file);
Storage::disk('s3')->put($s3Path, $binary_data, 'public');
For a private bucket, or if you want to keep the file private:
$binary_data = base64_decode($file);
Storage::disk('s3')->put($s3Path, $binary_data);
I recommend you keep your files private; that is the more secure and safe way. To access a private file you then have to use a pre-signed URL.
For pre-signed URLs, check this post: How access image in s3 bucket using pre-signed url. A short sketch follows.
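For illustration, a minimal sketch of generating such a link with Laravel itself, assuming a reasonably recent Laravel version where the S3 driver supports temporaryUrl() and the now() helper is available:

// Returns a pre-signed URL that expires after 15 minutes;
// $s3Path is the same path used in the put() call above.
$url = Storage::disk('s3')->temporaryUrl($s3Path, now()->addMinutes(15));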

How to use functions in services.php laravel config file

I needed to use a dynamic callback URL for Socialite, so I added the url() function to my services.php file. It worked fine (and it's still working on my live server), but when I try to start the project locally I get the following error. When I remove the url() call everything works fine. Please help.
PHP Fatal error: Uncaught ReflectionException: Class log does not exist in /home/fenn/projects/jokwit/vendor/laravel/framework/src/Illuminate/Container/Container.php:734
Stack trace:
#0 /home/fenn/projects/jokwit/vendor/laravel/framework/src/Illuminate/Container/Container.php(734): ReflectionClass->__construct('log')
#1 /home/fenn/projects/jokwit/vendor/laravel/framework/src/Illuminate/Container/Container.php(629): Illuminate\Container\Container->build('log', Array)
#2 /home/fenn/projects/jokwit/vendor/laravel/framework/src/Illuminate/Foundation/Application.php(697): Illuminate\Container\Container->make('log', Array)
#3 /home/fenn/projects/jokwit/vendor/laravel/framework/src/Illuminate/Container/Container.php(849): Illuminate\Foundation\Application->make('log')
#4 /home/fenn/projects/jokwit/vendor/laravel/framework/src/Illuminate/Container/Container.php(804): Illuminate\Container\Container->resolveClass(Object(ReflectionParameter))
#5 /home/fenn/projects/jokwit/vendor/laravel/framework/src/Illuminate/Container/Container.php(7 in /home/fenn/projects/jokwit/vendor/laravel/framework/src/Illuminate/Container/Container.php on line 734
Here is my services.php file
<?php

return [

    /*
    |--------------------------------------------------------------------------
    | Third Party Services
    |--------------------------------------------------------------------------
    |
    | This file is for storing the credentials for third party services such
    | as Stripe, Mailgun, Mandrill, and others. This file provides a sane
    | default location for this type of information, allowing packages
    | to have a conventional place to find your various credentials.
    |
    */

    'mailgun' => [
        'domain' => env('MAILGUN_DOMAIN'),
        'secret' => env('MAILGUN_SECRET'),
    ],

    'mandrill' => [
        'secret' => env('MANDRILL_SECRET'),
    ],

    'ses' => [
        'key' => env('SES_KEY'),
        'secret' => env('SES_SECRET'),
        'region' => 'us-east-1',
    ],

    'stripe' => [
        'model' => App\User::class,
        'key' => env('STRIPE_KEY'),
        'secret' => env('STRIPE_SECRET'),
    ],

    'facebook' => [
        'client_id' => '1700935300171729',
        'client_secret' => 'XXXXXXXXXXXXXXXXXXX',
        'redirect' => url('/facebook/callback'),
    ],

    'google' => [
        'client_id' => 'XXXXXXXXXXXXXXXXXXXXXXXX',
        'client_secret' => 'XXXXXXXXXXXXXXXXXXXXXXXX',
        'redirect' => url('google/callback'),
    ],

];
In the services.php file, use a plain path instead of the url() call:
...
'redirect' => 'google/callback',
...
Next, create a service provider, for example ConfigServiceProvider:
<?php

namespace App\Providers;

use Illuminate\Support\ServiceProvider;

class ConfigServiceProvider extends ServiceProvider
{
    /**
     * Bootstrap the application services.
     *
     * @return void
     */
    public function boot()
    {
        //
    }

    /**
     * Register the application services.
     *
     * @return void
     */
    public function register()
    {
        \Config::set("services.google.redirect", url(\Config::get('services')['google']['redirect']));
    }
}
Now it should work fine.
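For completeness, the new provider has to be registered before its register() method runs. Assuming a Laravel 5.x application (application providers are not auto-discovered), add it to the providers array in config/app.php:

// config/app.php
'providers' => [
    // ...
    App\Providers\ConfigServiceProvider::class,
],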
It doesn't work locally because in production Laravel has cached all configuration files and uses that cache, while in the development environment no cache has been created, so the config files are re-evaluated on every request.
You can check this by commenting out the url() call in the config and running the php artisan config:cache command; then uncomment the url() part and you'll see the error disappears, because the cached config is used and the file is no longer evaluated.
The best you can do here is to avoid calling Laravel helpers or manually defined functions in config files and find another solution to your problem, for example building the URL at runtime, as sketched below.
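One possible alternative (a sketch, assuming Laravel Socialite is what consumes services.google.redirect): keep a plain relative path in the config and build the absolute URL at the point of use with the provider's redirectUrl() method.

// config/services.php keeps a plain relative path
'google' => [
    // ...client_id / client_secret as before...
    'redirect' => 'google/callback',
],

// In the controller that starts the OAuth flow, url() is safe to call:
return Socialite::driver('google')
    ->redirectUrl(url(config('services.google.redirect')))
    ->redirect();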

Make AWS s3 directory public using php

I know there is no concept of folders in S3; it uses a flat file structure. However, I will use the term "folder" for the sake of simplicity.
Preconditions:
An s3 bucket called foo
The folder foo has been made public using the AWS Management Console
Apache
PHP 5
Standard AWS SDK
The problem:
It's possible to upload a folder using the AWS PHP SDK. However, the folder is then only accessible by the user that uploaded it and not publicly readable as I would like it to be.
Procedure:
$sharedConfig = [
    'region' => 'us-east-1',
    'version' => 'latest',
    'visibility' => 'public',
    'credentials' => [
        'key' => 'xxxxxx',
        'secret' => 'xxxxxx',
    ],
];

// Create an SDK class used to share configuration across clients.
$sdk = new Aws\Sdk($sharedConfig);

// Create an Amazon S3 client using the shared configuration data.
$client = $sdk->createS3();

$client->uploadDirectory("foo", "bucket", "foo", array(
    'params' => array('ACL' => 'public-read'),
    'concurrency' => 20,
    'debug' => true,
));
Success Criteria:
I would be able to access a file in the uploaded folder using a "static" link, e.g.:
https://s3.amazonaws.com/bucket/foo/001.jpg
I fixed it by defining a 'before' callback:
$result = $client->uploadDirectory("foo", "bucket", "foo", array(
    'concurrency' => 20,
    'debug' => true,
    'before' => function (\Aws\Command $command) {
        $command['ACL'] = strpos($command['Key'], 'CONFIDENTIAL') === false
            ? 'public-read'
            : 'private';
    },
));
You can use this:
$s3->uploadDirectory('images', 'bucket', 'prefix',
['params' => array('ACL' => 'public-read')]
);

Add Metadata, headers (Expires, CacheControl) to a file uploaded to Amazon S3 using the Laravel 5.0 Storage facade

I am trying to find out how to add Metadata or headers (Expires, CacheControl, etc.) to a file uploaded using the Laravel 5.0 Storage facade. I have used the page here as a reference:
http://laravel.com/docs/5.0/filesystem
The following code works correctly:
Storage::disk('s3')->put('/test.txt', 'test');
After digging I also found that there is a 'visibility' parameter which sets the ACL to 'public-read' so the following also works correctly.
Storage::disk('s3')->put('/test.txt', 'test', 'public');
But I would like to be able to set some other values to the header of the file. I have tried the following:
Storage::disk('s3')->put('/index4.txt', 'test', 'public', array('Expires'=>'Expires, Fri, 30 Oct 1998 14:19:41 GMT'));
Which doesn't work, I have also tried:
Storage::disk('s3')->put('/index4.txt', 'test', array('ACL'=>'public-read'));
But that creates an error where the 'visibility' parameter cannot be converted from a string to an array. I have checked the source of AwsS3Adapter and it seems there is code for options, but I cannot see how to pass them correctly. I think it takes the following:
protected static $metaOptions = [
    'CacheControl',
    'Expires',
    'StorageClass',
    'ServerSideEncryption',
    'Metadata',
    'ACL',
    'ContentType',
    'ContentDisposition',
    'ContentLanguage',
    'ContentEncoding',
];
Any help on how to accomplish this would be appreciated.
First, you need to call getDriver so you can send over an array of options. And then you need to send the options as an array.
So for your example:
Storage::disk('s3')->getDriver()->put('/index4.txt', 'test', [ 'visibility' => 'public', 'Expires' => 'Expires, Fri, 30 Oct 1998 14:19:41 GMT']);
Be aware that if you're setting Cache-Control it has to be passed as CacheControl. This may well be true for other keys with non-alphanumeric characters.
If you want to have global defaults with headers, this works in Laravel 5.4. Change your config/filesystems.php file like so:
's3' => [
    'driver' => 's3',
    'key' => env('AWS_KEY'),
    'secret' => env('AWS_SECRET'),
    'region' => env('AWS_REGION'),
    'bucket' => env('AWS_BUCKET'),
    'options' => [
        'CacheControl' => 'max-age=315360000, no-transform, public',
        'ContentEncoding' => 'gzip',
    ],
],
After attempting the above answers and failing to add custom user-metadata, it turns out (after digging through the SDK code) that it is a bit easier than I thought (assume $path is a path to an image file). I didn't appear to need to call the getDriver() method either; I'm not sure if that makes any difference with the current version of the AWS SDK.
Storage::put(
    'image.jpg',
    file_get_contents($path),
    [
        'visibility' => 'public',
        'Metadata' => [
            'thumb' => '320-180',
        ],
    ]
);
So now if you view the newly uploaded file in S3 you will see the custom metadata.
Hope this helps someone.
The answer from @Paras is good. But there is one thing that can confuse newcomers:
'options' => [
    'Expires' => gmdate('D, d M Y H:i:s GMT', strtotime('+1 month')),
    'visibility' => 'public',   // <<< WRONG: visibility does not belong inside 'options'
]
If you want to define global options for the HEADERS, the options array is the right way to go. But if you also want to define the visibility, you cannot mix them: visibility has to be defined outside of the options array.
👍
'visibility' => 'public',
'options' => ['Expires' => gmdate('D, d M Y H:i:s GMT', strtotime('+1 month'))]
This is an example of how to upload a file to S3 as of Laravel 5.8 with expiry and cache control headers, for example:
Storage::put($directory . '/' . $imageName, $image, [
    'visibility' => 'public',
    'Expires' => gmdate('D, d M Y H:i:s \G\M\T', time() + (60 * 60 * 24 * 7)),
    'CacheControl' => 'max-age=315360000, no-transform, public',
]);
Also, don't forget to uncheck the 'Disable cache' checkbox in Chrome if you're testing and caching never seems to work; that caught me out for an hour when my browser wouldn't cache anything even though I finally had the headers right in S3.
For Laravel 9 users this has become easier. You do not need to call ->getDriver() anymore; you can pass options directly to the put command.
Storage::disk('s3')->put('/index.txt', 'file content', [
    // S3 Object ACL
    'visibility' => 'public', // or 'private'

    // HTTP headers
    'CacheControl' => 'public,max-age=315360000',
    'ContentDisposition' => 'attachment; filename="index.txt"',
    'Expires' => 'Thu, 12 Feb 2032 08:24:43 GMT',

    // Metadata or other S3 options
    'MetadataDirective' => 'REPLACE',
    'Metadata' => [
        'Custom-Key' => 'test',
    ],
]);
In case you need other headers or options, please check out the flysystem source code for all available headers and options.
https://github.com/thephpleague/flysystem-aws-s3-v3/blob/master/src/AwsS3Adapter.php#L38
public const AVAILABLE_OPTIONS = [
    'ACL',
    'CacheControl',
    'ContentDisposition',
    'ContentEncoding',
    'ContentLength',
    'ContentType',
    'Expires',
    'GrantFullControl',
    'GrantRead',
    'GrantReadACP',
    'GrantWriteACP',
    'Metadata',
    'MetadataDirective',
    'RequestPayer',
    'SSECustomerAlgorithm',
    'SSECustomerKey',
    'SSECustomerKeyMD5',
    'SSEKMSKeyId',
    'ServerSideEncryption',
    'StorageClass',
    'Tagging',
    'WebsiteRedirectLocation',
];
Hey, I solved this problem: you need to create a custom S3 filesystem.
First, create a new file CustomS3Filesystem.php and save it into app/Providers. This custom S3 filesystem uses the S3 adapter, but lets you add metadata and headers.
<?php namespace App\Providers;

use Storage;
use League\Flysystem\Filesystem;
use Aws\S3\S3Client;
use League\Flysystem\AwsS3v2\AwsS3Adapter as S3Adapter;
use Illuminate\Support\ServiceProvider;

class CustomS3Filesystem extends ServiceProvider {

    public function boot()
    {
        Storage::extend('s3_custom', function ($app, $config) {
            $s3Config = array_only($config, ['key', 'region', 'secret', 'signature', 'base_url']);
            $flysystemConfig = ['mimetype' => 'text/xml'];
            $metadata['cache_control'] = 'max-age=0, no-cache, no-store, must-revalidate';

            return new Filesystem(
                new S3Adapter(
                    S3Client::factory($s3Config),
                    $config['bucket'],
                    null,
                    ['mimetype' => 'text/xml', 'Metadata' => $metadata]
                ),
                $flysystemConfig
            );
        });
    }

    public function register()
    {
        //
    }
}
Add the provider to the providers list in config/app.php:
'App\Providers\CustomS3Filesystem',
Create a new filesystem name in config/filesystems.php:
's3-new' => [
    'driver' => 's3_custom',
    'key' => 'XXX',
    'secret' => 'XXX',
    'bucket' => 'XXX',
],
Use the newly created custom S3 adapter:
Storage::disk('s3-new')->put($filename, file_get_contents($file), 'public');
I used the Laravel documentation to customize the S3 adapter:
http://laravel.com/docs/5.0/filesystem#custom-filesystems
I hope this may help you.
I am using Laravel 4.2, but I think my solution might also help on Laravel 5.0 (cannot say for sure, as I have not tried to upgrade yet). You need to update the meta options in the config for the Flysystem driver that you are using. In my case, I created a connection called s3static to access the bucket where I am storing images that will not be changing.
My config file:
's3static' => [
'driver' => 'awss3',
'key' => 'my-key',
'secret' => 'my-secret',
'bucket' => 'my-bucket',
// 'region' => 'your-region',
// 'base_url' => 'your-url',
'options' => array(
'CacheControl' => 'max_age=2592000'
),
// 'prefix' => 'your-prefix',
// 'visibility' => 'public',
// 'eventable' => true,
// 'cache' => 'foo'
],
Now when I put any files on to S3 using this connection, they have the Cache-Control meta data set.
To expand on @sergiodebcn's answer, here is the same CustomS3Filesystem class working for S3 v3 and the latest Laravel. Note that I have removed the XML mimetype and set a 5-day cache time:
namespace App\Providers;

use Illuminate\Support\Arr;
use Storage;
use League\Flysystem\Filesystem;
use Aws\S3\S3Client;
use League\Flysystem\AwsS3v3\AwsS3Adapter as S3Adapter;
use Illuminate\Support\ServiceProvider;

class CustomS3Filesystem extends ServiceProvider
{
    /**
     * Format the given S3 configuration with the default options.
     *
     * @param  array  $config
     * @return array
     */
    protected function formatS3Config(array $config)
    {
        $config += ['version' => 'latest'];

        if ($config['key'] && $config['secret']) {
            $config['credentials'] = Arr::only($config, ['key', 'secret']);
        }

        return $config;
    }

    /**
     * Bootstrap a custom filesystem
     *
     * @return void
     */
    public function boot()
    {
        Storage::extend('s3_custom', function ($app, $config) {
            $s3Config = $this->formatS3Config($config);

            return new Filesystem(
                new S3Adapter(
                    new S3Client($s3Config),
                    $config['bucket'],
                    null,
                    [
                        'CacheControl' => 'max-age=432000'
                    ]
                )
            );
        });
    }

    public function register()
    {
        //
    }
}
Using Laravel 8 here:
I didn't see this mentioned elsewhere, but the metadata option key => values listed by Christoph Kluge appear to only accept string values, and fail silently if passed an integer, bool, etc. So if you're passing in a variable you'll need to convert it to a string value:
$fileID = $fileData['FileId'];
$fileExt = $fileData['FileExtension'];
$fileUnique = $fileData['UniqueFileId'];
$isImage = $fileData['IsImage'];
$isDefault = $fileData['IsDefaultImage'];
$filePath = $fileUnique . "." . $fileExt;
$file = $mp->fileID($fileID)->get();

if (Storage::disk('s3')->missing('img/' . $filePath)) {
    Storage::disk('s3')->put(
        'img/' . $filePath,
        $file,
        [
            // Metadata or other S3 options
            'MetadataDirective' => 'REPLACE',
            'Metadata' => [
                'is-image' => strval($isImage),
                'is-default' => strval($isDefault),
                'unique-file-id' => strval($fileUnique),
                'file-extension' => strval($fileExt),
            ],
        ]
    );
    echo nl2br('uploading file: ' . $filePath . "\n");
} else {
    echo nl2br('file already exists: ' . $filePath . "\n");
}
