I have a task of pulling down assets stored in an AWS S3 bucket and saving them into a local Laravel project. The files are also encrypted.
I need to write a script to do this.
Any ideas on how to approach it?
Assuming you have the following disks configured:
'disks' => [
    'local' => [
        'driver' => 'local',
        'root' => storage_path('app'),
    ],
    's3' => [
        'driver' => 's3',
        'key' => env('S3_KEY'),
        'secret' => env('S3_SECRET'),
        'region' => env('S3_REGION'),
        'bucket' => env('S3_BUCKET'),
        'http' => [
            'connect_timeout' => 30,
        ],
    ],
],
Then you can copy a file using:
if (Storage::disk('s3')->exists('path/yourfile.txt')) {
    Storage::disk('local')->writeStream('path/yourfile.txt', Storage::disk('s3')->readStream('path/yourfile.txt'));
}
To move the file:
if (Storage::disk('s3')->exists('path/yourfile.txt')) {
    Storage::disk('local')->writeStream('path/yourfile.txt', Storage::disk('s3')->readStream('path/yourfile.txt'));
    Storage::disk('s3')->delete('path/yourfile.txt');
}
If you have set a default disk, you can skip specifying it and call Storage::something() directly.
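For example, a minimal sketch assuming the default disk in config/filesystems.php is set to 's3' (the path is just illustrative):
// With 'default' => 's3', these two calls are equivalent
Storage::exists('path/yourfile.txt');
Storage::disk('s3')->exists('path/yourfile.txt');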
Moving all files from the s3 disk to the local disk:
Since the two disks are not on the same server, you need to do a bit more work than you would if both disks lived on the same server:
$s3Files = Storage::disk('s3')->allFiles();
foreach ($s3Files as $file) {
    // copy
    Storage::disk('local')->writeStream($file, Storage::disk('s3')->readStream($file));

    // or move (copy, then delete the original)
    Storage::disk('local')->writeStream($file, Storage::disk('s3')->readStream($file));
    Storage::disk('s3')->delete($file);
}
Or you can move the delete() call outside the loop and delete all files in one go:
Storage::disk('s3')->delete(Storage::disk('s3')->allFiles());
which achieves the same result with a single call.
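Put together, a copy-then-batch-delete version of the loop above could look like this (a sketch using the same disks as before):
$s3Files = Storage::disk('s3')->allFiles();

// Copy every file down to the local disk first
foreach ($s3Files as $file) {
    Storage::disk('local')->writeStream($file, Storage::disk('s3')->readStream($file));
}

// Then delete everything from S3 in a single call
Storage::disk('s3')->delete($s3Files);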
I've obtained the following code from searching on the topic:
Route::get('/test', function () {
    // disable execution time limit when downloading a big file.
    set_time_limit(0);

    $fs = Storage::disk('local');
    $path = 'uploads/user-1/1653600850867.mp3';
    $stream = $fs->readStream($path);

    if (ob_get_level()) ob_end_clean();

    return response()->stream(function () use ($stream) {
        fpassthru($stream);
    },
    200,
    [
        'Accept-Ranges' => 'bytes',
        'Content-Length' => 14098560,
        'Content-Type' => 'application/octet-stream',
    ]);
});
However, when I click play in the UI, it takes a good four seconds to start playing. If I switch the disk to local, though, it plays almost instantly.
Is there a way to improve the performance, or to read the stream by range as per the request?
Edit
My current DO config is as below:
'driver' => 's3',
'key' => env('DO_ACCESS_KEY_ID'),
'secret' => env('DO_SECRET_ACCESS_KEY'),
'region' => env('DO_DEFAULT_REGION'),
'bucket' => env('DO_BUCKET'),
'url' => env('DO_URL'),
'endpoint' => env('DO_ENDPOINT'),
'use_path_style_endpoint' => env('DO_USE_PATH_STYLE_ENDPOINT', false),
But I find two types of integration online: one specifies the CDN endpoint and one doesn't. I am not sure which is relevant, though the one that specifies the CDN is for Laravel 8 and I am on Laravel 9.
I had to change my code as follows:
I had to use the PHP SDK client to connect to AWS, since the Laravel Storage API isn't flexible enough to allow passing additional arguments (at least I haven't found a way while researching).
I changed to streamDownload, as I can't find any documentation for the stream method even though it is present in the code.
So the code below achieves what I was aiming for: downloading by chunk based on the Range header received in the request.
return response()->streamDownload(function () {
    $client = new Aws\S3\S3Client([
        'version' => 'latest',
        'region' => config('filesystems.disks.do.region'),
        'endpoint' => config('filesystems.disks.do.endpoint'),
        'credentials' => [
            'key' => config('filesystems.disks.do.key'),
            'secret' => config('filesystems.disks.do.secret'),
        ],
    ]);

    $path = 'uploads/user-1/1653600850867.mp3';
    $range = request()->header('Range');

    $result = $client->getObject([
        'Bucket' => 'wyxos-streaming',
        'Key' => $path,
        'Range' => $range
    ]);

    echo $result['Body'];
},
200,
[
    'Accept-Ranges' => 'bytes',
    'Content-Length' => 14098560,
    'Content-Type' => 'application/octet-stream',
]);
Note:
In a live scenario, you would need to cater for the case where no Range header is specified; the Content-Length should then be the actual file size.
When a Range is present, however, the Content-Length should be the size of the segment being echoed.
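A rough sketch of that header handling, assuming the file size is known up front (hard-coded here to the example value; in practice you could fetch it with $client->headObject() and read ContentLength):
$fileSize = 14098560;
$range = request()->header('Range'); // e.g. "bytes=0-1048575"

if ($range && preg_match('/bytes=(\d+)-(\d*)/', $range, $matches)) {
    $start = (int) $matches[1];
    $end = $matches[2] !== '' ? (int) $matches[2] : $fileSize - 1;

    $status = 206; // Partial Content
    $headers = [
        'Accept-Ranges' => 'bytes',
        'Content-Length' => $end - $start + 1,
        'Content-Range' => "bytes {$start}-{$end}/{$fileSize}",
        'Content-Type' => 'application/octet-stream',
    ];
} else {
    $status = 200;
    $headers = [
        'Accept-Ranges' => 'bytes',
        'Content-Length' => $fileSize,
        'Content-Type' => 'application/octet-stream',
    ];
}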
I am trying to upload a file to a public folder. This was working until recently, but now it shows the error below:
Disk [public] does not have a configured driver.
I tried checking the configured driver in config/filesystems.php, but it is already set there. I am not seeing where the issue might be.
Upload code:
public function upload(ProductImageRequest $request, Product $product)
{
    $image = $request->file('file');
    $dbPath = $image->storePublicly('uploads/catalog/'.$product->id, 'public');

    if ($product->images === null || $product->images->count() === 0) {
        $imageModel = $product->images()->create([
            'path' => $dbPath,
            'is_main_image' => 1,
        ]);
    } else {
        $imageModel = $product->images()->create(['path' => $dbPath]);
    }

    return response()->json(['image' => $imageModel]);
}
Code in config/filesystems.php
'disks' => [
    'local' => [
        'driver' => 'local',
        'root' => storage_path('app'),
    ],
    'public' => [
        'driver' => 'local',
        'root' => storage_path('app/public'),
        'url' => env('APP_URL').'/storage',
        'visibility' => 'public',
    ],
I use this code for moving the picture and storing its name; you may want to give it a shot:
// get the icon path and move it
$iconName = time().'.'.request()->icon->getClientOriginalExtension();
$icon_path = '/category/icon/'.$iconName;
request()->icon->move(public_path('/category/icon/'), $iconName);
$category->icon = $icon_path;
I usually move the image and then store its path in the DB, which is what the code above shows; you can edit it as desired.
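For completeness, you would then persist the model, e.g. (assuming $category is an Eloquent model that has already been loaded):
// Save the new icon path to the database
$category->save();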
I'm setting up a Google Cloud Storage bucket CORS configuration using the PHP API, but it doesn't seem to work.
I read the documentation at: https://googleapis.github.io/google-cloud-php/#/docs/google-cloud/v0.96.0/storage/bucket
Here's my Laravel source code:
use Google\Cloud\Core\ServiceBuilder;
...
$projectId = 'myProjectId';
$bucketName = 'myBucketName';

$gcloud = new ServiceBuilder([
    'keyFilePath' => 'resources/google-credentials.json',
    'projectId' => $projectId
]);

$storage = $gcloud->storage();
$bucket = $storage->bucket($bucketName);

// change bucket configuration
$result = $bucket->update([
    'cors' => [
        'maxAgeSeconds' => 3600,
        'method' => [
            "GET", "HEAD"
        ],
        "origin" => [
            "*"
        ],
        "responseHeader" => [
            "Content-Type"
        ]
    ]
]);

// prints nothing and the bucket doesn't change
dd($bucket->info()['cors']);
After executing this code, the bucket CORS configuration hasn't changed.
(My boss doesn't want me to use the gsutil shell command for this.)
You're very close! CORS accepts a list, so you'll just need to make a slight modification:
$result = $bucket->update([
    'cors' => [
        [
            'maxAgeSeconds' => 3600,
            'method' => [
                "GET", "HEAD"
            ],
            "origin" => [
                "*"
            ],
            "responseHeader" => [
                "Content-Type"
            ]
        ]
    ]
]);
Let me know if it helps :).
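To confirm the change took effect, you can re-read the bucket metadata afterwards, for example (a small sketch; reload() forces a fresh fetch rather than relying on the locally cached info):
// Fetch fresh metadata from the API and inspect the CORS config
$bucket->reload();
dd($bucket->info()['cors']);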
The only thing I needed to change was the disk configuration in Laravel, using this code in config/filesystems.php when adding a disk for Google:
'google' => [
    'driver' => 's3',
    'key' => 'xxx',
    'secret' => 'xxx',
    'bucket' => 'qrnotesfiles',
    'base_url' => 'https://storage.googleapis.com'
]
Here is a code example. First, get the file from the request:
$file = $request->file('avatar');
Second, save it into storage:
Storage::disk('google')->put('avatars/', $file);
I used this tutorial to define a driver and connect to my Spaces on DigitalOcean.
In my config/filesystems.php I have this code:
'spaces' => [
    'driver' => 'spaces',
    'version' => '2006-03-01',
    'key' => env('DO_SPACES_KEY'),
    'secret' => env('DO_SPACES_SECRET'),
    'endpoint' => env('DO_SPACES_ENDPOINT'),
    'region' => env('DO_SPACES_REGION'),
    'bucket' => env('DO_SPACES_BUCKET'),
    'bucket_name' => env('DO_SPACES_BUCKET'),
],
In one of my controllers I have this code:
$client->subdomain = 'acme';
$directories_client = Storage::disk('spaces')->directories('clients/'.$client->subdomain);
Problem
The connection to spaces driver works perfectly in my local environment.
However, in remote environment, this line
$directories_client = Storage::disk('spaces')->directories('clients/'.$client->subdomain);
produces an error. Here is what my log says:
[2017-09-29 07:19:08] remote.ERROR: Driver [] is not supported.
{"userId":5,"email":"_________","exception":"[object]
(InvalidArgumentException(code: 0): Driver [] is not supported. at
/.../src/Illuminate/Filesystem/FilesystemManager.php:124)
The local code works perfectly at the very same time as the remote fails.
Any ideas?
Peter
You need to use s3 as the driver name; just change
'driver' => 'spaces', to 'driver' => 's3',
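So the spaces disk from the question would become something like this (only the driver value changes; the other keys stay as they are):
'spaces' => [
    'driver' => 's3',
    'version' => '2006-03-01',
    'key' => env('DO_SPACES_KEY'),
    'secret' => env('DO_SPACES_SECRET'),
    'endpoint' => env('DO_SPACES_ENDPOINT'),
    'region' => env('DO_SPACES_REGION'),
    'bucket' => env('DO_SPACES_BUCKET'),
    'bucket_name' => env('DO_SPACES_BUCKET'),
],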
I know there is no concept of folders in S3; it uses a flat file structure. However, I will use the term "folder" for the sake of simplicity.
Preconditions:
An S3 bucket called foo
The folder foo has been made public using the AWS Management Console
Apache
PHP 5
Standard AWS SDK
The problem:
It's possible to upload a folder using the AWS PHP SDK. However, the folder is then only accessible by the user who uploaded it and not publicly readable as I would like it to be.
Procedure:
$sharedConfig = [
    'region' => 'us-east-1',
    'version' => 'latest',
    'visibility' => 'public',
    'credentials' => [
        'key' => 'xxxxxx',
        'secret' => 'xxxxxx',
    ],
];

// Create an SDK class used to share configuration across clients.
$sdk = new Aws\Sdk($sharedConfig);

// Create an Amazon S3 client using the shared configuration data.
$client = $sdk->createS3();

$client->uploadDirectory("foo", "bucket", "foo", array(
    'params' => array('ACL' => 'public-read'),
    'concurrency' => 20,
    'debug' => true
));
Success Criteria:
I would be able to access a file in the uploaded folder using a "static" link, e.g.:
https://s3.amazonaws.com/bucket/foo/001.jpg
I fixed it by defining a 'before' callback that runs before each upload:
$result = $client->uploadDirectory("foo", "bucket", "foo", array(
    'concurrency' => 20,
    'debug' => true,
    'before' => function (\Aws\Command $command) {
        $command['ACL'] = strpos($command['Key'], 'CONFIDENTIAL') === false
            ? 'public-read'
            : 'private';
    }
));
You can use this:
$s3->uploadDirectory('images', 'bucket', 'prefix',
['params' => array('ACL' => 'public-read')]
);