Google Cloud Storage cache management with PHP - php

I have a PHP file to upload images to Google Cloud Storage, but every time a user changes their profile image, the previous one keeps showing for a long time, so users think the image was not updated at all. I would like to know what cache and metadata configuration is required to avoid this issue.
I would prefer not to use "custom metadata" because of this note in the Cloud Storage documentation (https://cloud.google.com/storage/docs/metadata): "Note that using custom metadata incurs storage and network costs."
This is my code:
require 'vendor/autoload.php';
use Google\Cloud\Storage\StorageClient;
$storage = new StorageClient([
    'keyFilePath' => $keypath
]);
$storage->registerStreamWrapper();
$bucket = $storage->bucket($url);
$filename = $_FILES['profile_picture']['name'];
$newFileName = $_POST['user_id'].'_profile.jpg';
$bucket->upload(
    fopen($_FILES["profile_picture"]["tmp_name"], 'r'),
    [
        'name' => $dir.$newFileName,
        'metadata' => [
            'cacheControl' => 'Cache-Control: no-cache, max-age=0',
        ]
    ]
);
I don't know if cacheControl is enough (and if the syntax is correct), or whether I need to include customTime as well, and what the syntax for that would be.
My try:
$bucket->upload(
    fopen($_FILES["profile_picture"]["tmp_name"], 'r'),
    [
        'name' => $dir.$newFileName,
        'metadata' => [
            'cacheControl' => 'Cache-Control: no-cache, max-age=0',
            'Custom-Time' => date('Y-m-d\TH:i:s.00\Z')
        ]
    ]
);
The documentation doesn't have anything about the PHP syntax: https://cloud.google.com/storage/docs/metadata
Please help

I believe you should use:
$bucket->upload(
    fopen($_FILES["profile_picture"]["tmp_name"], 'r'),
    [
        'name' => $dir.$newFileName,
        'metadata' => [
            'cacheControl' => 'no-cache, max-age=0',
        ]
    ]
);
I have noticed that you are repeating the Cache-Control header name inside the cacheControl value in your code. Also, remember that:
If you allow caching, downloads may continue to receive older versions
of an object, even after uploading a newer version. This is because
the older version remains "fresh" in the cache for a period of time
determined by max-age. Additionally, because objects can be cached at
various places on the Internet, there is no way to force a cached
object to expire globally. If you want to prevent serving cached
versions of publicly readable objects, set Cache-Control:no-cache,
max-age=0 on the object.
For further reading, refer to the Cloud Storage metadata documentation linked above.
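As a quick sanity check (this is not from the original answer, just a small sketch using the same google/cloud-storage client), you can read the object's metadata back after the upload to confirm the Cache-Control value was stored without the duplicated header name:

$object = $bucket->object($dir.$newFileName);
$info = $object->info();
// Should print "no-cache, max-age=0" rather than "Cache-Control: no-cache, max-age=0"
echo $info['cacheControl'] ?? '(no cacheControl set)';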

My solution with the correct metadata syntax:
$bucket->upload(
    fopen($_FILES["profile_picture"]["tmp_name"], 'r'),
    [
        'name' => $_SESSION['idcompany'].'/uploads/'.$newFileName,
        'metadata' => [
            'cacheControl' => 'no-cache, max-age=0',
            'customTime' => gmdate('Y-m-d\TH:i:s.00\Z')
        ]
    ]
);
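For profile pictures that were uploaded before this change, the cache settings can be corrected in place instead of re-uploading. This is a minimal sketch (not part of the original solution) using StorageObject::update() from the same google/cloud-storage library:

$object = $bucket->object($_SESSION['idcompany'].'/uploads/'.$newFileName);
$object->update([
    'cacheControl' => 'no-cache, max-age=0',
]);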

Related

Performance of Laravel storage streaming file from Digital ocean spaces

I've obtained the following code from searching on the topic:
Route::get('/test', function () {
    // Disable execution time limit when downloading a big file.
    set_time_limit(0);

    $fs = Storage::disk('local');
    $path = 'uploads/user-1/1653600850867.mp3';
    $stream = $fs->readStream($path);

    if (ob_get_level()) ob_end_clean();

    return response()->stream(function () use ($stream) {
        fpassthru($stream);
    },
    200,
    [
        'Accept-Ranges' => 'bytes',
        'Content-Length' => 14098560,
        'Content-Type' => 'application/octet-stream',
    ]);
});
However, when I click play on the UI, it takes a good four seconds to start playing. If I switch the disk to local, though, it plays almost instantly.
Is there a way to improve the performance, or to read the stream by range as per the request?
Edit
My current DigitalOcean config is as below:
'driver' => 's3',
'key' => env('DO_ACCESS_KEY_ID'),
'secret' => env('DO_SECRET_ACCESS_KEY'),
'region' => env('DO_DEFAULT_REGION'),
'bucket' => env('DO_BUCKET'),
'url' => env('DO_URL'),
'endpoint' => env('DO_ENDPOINT'),
'use_path_style_endpoint' => env('DO_USE_PATH_STYLE_ENDPOINT', false),
But I find two types of integration online: one specifies the CDN endpoint and one doesn't. I am not sure which one is relevant, though the one that specifies the CDN is for Laravel 8 and I am on Laravel 9.
I had to change my code such that:
I had to use the PHP SDK client to connect to AWS, as the Laravel API isn't flexible enough to allow passing additional arguments (at least I haven't found anything while researching).
I changed to streamDownload, as I can't see any description of the stream method in the docs even though it is present in the code.
So the code below achieves what I was aiming for, which is downloading in chunks based on the range received in the request.
return response()->streamDownload(function () {
    $client = new Aws\S3\S3Client([
        'version' => 'latest',
        'region' => config('filesystems.disks.do.region'),
        'endpoint' => config('filesystems.disks.do.endpoint'),
        'credentials' => [
            'key' => config('filesystems.disks.do.key'),
            'secret' => config('filesystems.disks.do.secret'),
        ],
    ]);

    $path = 'uploads/user-1/1653600850867.mp3';
    $range = request()->header('Range');

    $result = $client->getObject([
        'Bucket' => 'wyxos-streaming',
        'Key' => $path,
        'Range' => $range
    ]);

    echo $result['Body'];
},
200,
[
    'Accept-Ranges' => 'bytes',
    'Content-Length' => 14098560,
    'Content-Type' => 'application/octet-stream',
]);
Note:
In a live scenario, you would need to cater for the case where a range isn't specified; the content length then needs to be the actual file size.
When a range is present, however, the content length should be the size of the segment being echoed (a sketch of this follows below).
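This is a rough sketch of those two notes, not part of the original answer; it assumes the same bucket and key as above, and uses headObject() to get the full file size when no Range header is sent:

$client = new Aws\S3\S3Client([
    'version' => 'latest',
    'region' => config('filesystems.disks.do.region'),
    'endpoint' => config('filesystems.disks.do.endpoint'),
    'credentials' => [
        'key' => config('filesystems.disks.do.key'),
        'secret' => config('filesystems.disks.do.secret'),
    ],
]);

$path = 'uploads/user-1/1653600850867.mp3';
$range = request()->header('Range'); // e.g. "bytes=0-1048575" or null

// Full object size, needed both for the no-range case and for Content-Range.
$fileSize = $client->headObject([
    'Bucket' => 'wyxos-streaming',
    'Key' => $path,
])['ContentLength'];

$headers = [
    'Accept-Ranges' => 'bytes',
    'Content-Type' => 'application/octet-stream',
];

if ($range) {
    [$start, $end] = sscanf($range, 'bytes=%d-%d');
    $end = $end ?? $fileSize - 1;
    $status = 206; // Partial Content
    $headers['Content-Length'] = $end - $start + 1;
    $headers['Content-Range'] = "bytes {$start}-{$end}/{$fileSize}";
} else {
    $status = 200;
    $headers['Content-Length'] = $fileSize;
}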

How do I upload a gzip object to s3?

I am creating a gzip string and uploading it as an object to S3. However, when I download the same file from S3 and decompress it locally with gunzip, I get this error: gunzip: 111.gz: not in gzip format. When I look at the mime_content_type of the file downloaded from S3, it is set as: application/zlib.
Here is the code I am running to generate the gzip file and push it to S3:
for ($i = 0; $i <= 100; $i++) {
    $content .= $i . "\n";
}

$result = $this->s3->putObject(array(
    'Bucket' => 'my-bucket-name',
    'Key' => '111.gz',
    'Body' => gzcompress($content),
    'ACL' => 'authenticated-read',
    'Metadata' => [
        'ContentType' => 'text/plain',
        'ContentEncoding' => 'gzip'
    ]
));
The strange thing is that if I view the gzip content locally before I send it to S3, I am able to decompress it and see the original string. So I must be uploading the file incorrectly. Any thoughts?
According to http://docs.aws.amazon.com/aws-sdk-php/v3/api/api-s3-2006-03-01.html#putobject, the ContentType and ContentEncoding parameters belong at the top level, not under Metadata. So your call should look like:
$result = $this->s3->putObject(array(
    'Bucket' => 'my-bucket-name',
    'Key' => '111.gz',
    'Body' => gzencode($content),
    'ACL' => 'authenticated-read',
    'ContentType' => 'text/plain',
    'ContentEncoding' => 'gzip'
));
Also, it's possible that by setting ContentType to text/plain, your file might be truncated whenever a null byte occurs. I would try with 'application/gzip' if you still have problems unzipping the file.
I had a very similar issue, and the only way to make it work for our file was with code like this (slightly adapted to your example):
$this->s3->putObject(array(
    'Bucket' => 'my-bucket-name',
    'Key' => '111.gz',
    'Body' => gzcompress($content, 9, ZLIB_ENCODING_GZIP),
    'ACL' => 'public-read',
    'ContentType' => 'text/javascript',
    'ContentEncoding' => 'gzip'
));
The relevant part is gzcompress($content, 9, ZLIB_ENCODING_GZIP), as S3 wouldn't recognize the file nor serve it in the right format without the final ZLIB_ENCODING_GZIP argument.
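To make the difference concrete (this check is mine, not part of either answer): gzcompress() defaults to zlib framing, which is why gunzip rejects the file, while gzencode() or gzcompress() with ZLIB_ENCODING_GZIP produces a real gzip stream starting with the 0x1f 0x8b magic bytes:

$content = "example payload\n";

$zlib = gzcompress($content);                         // zlib/DEFLATE container (what the question uploaded)
$gzip = gzcompress($content, 9, ZLIB_ENCODING_GZIP);  // real gzip container

echo bin2hex(substr($zlib, 0, 2)), "\n"; // typically "789c" -> not gzip
echo bin2hex(substr($gzip, 0, 2)), "\n"; // "1f8b" -> gzip magic bytes, so gunzip accepts it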

Add Metadata, headers (Expires, CacheControl) to a file uploaded to Amazon S3 using the Laravel 5.0 Storage facade

I am trying to find out how to add metadata or headers (Expires, CacheControl, etc.) to a file uploaded using the Laravel 5.0 Storage facade. I have used the page here as a reference.
http://laravel.com/docs/5.0/filesystem
The following code works correctly:
Storage::disk('s3')->put('/test.txt', 'test');
After digging, I also found that there is a 'visibility' parameter, which sets the ACL to 'public-read', so the following also works correctly:
Storage::disk('s3')->put('/test.txt', 'test', 'public');
But I would like to be able to set some other header values on the file. I have tried the following:
Storage::disk('s3')->put('/index4.txt', 'test', 'public', array('Expires'=>'Expires, Fri, 30 Oct 1998 14:19:41 GMT'));
Which doesn't work, I have also tried:
Storage::disk('s3')->put('/index4.txt', 'test', array('ACL'=>'public-read'));
But that creates an error where the 'visibility' parameter cannot be converted from a string to an array. I have checked the source of AwsS3Adapter and it seems there is code for options, but I cannot see how to pass them correctly. I think it takes the following:
protected static $metaOptions = [
    'CacheControl',
    'Expires',
    'StorageClass',
    'ServerSideEncryption',
    'Metadata',
    'ACL',
    'ContentType',
    'ContentDisposition',
    'ContentLanguage',
    'ContentEncoding',
];
Any help on how to accomplish this would be appreciated.
First, you need to call getDriver() so that you can send over an array of options, and then you pass those options as an array.
So for your example:
Storage::disk('s3')->getDriver()->put('/index4.txt', 'test', [ 'visibility' => 'public', 'Expires' => 'Expires, Fri, 30 Oct 1998 14:19:41 GMT']);
Be aware that if you're setting Cache-Control, it has to be passed as CacheControl. This may well be true for other keys with non-alphanumeric characters.
If you want to have global defaults with headers, this works in Laravel 5.4. Change your config/filesystems.php file like so:
's3' => [
    'driver' => 's3',
    'key' => env('AWS_KEY'),
    'secret' => env('AWS_SECRET'),
    'region' => env('AWS_REGION'),
    'bucket' => env('AWS_BUCKET'),
    'options' => [
        'CacheControl' => 'max-age=315360000, no-transform, public',
        'ContentEncoding' => 'gzip'
    ]
],
After attempting the above answers and failing to add custom user metadata, it turns out (after digging through the SDK code) that it is a bit easier than I thought (assume $path is a path to an image file). I didn't appear to need to call the getDriver() method either; I'm not sure whether that makes any difference with the current version of the AWS SDK.
Storage::put(
    'image.jpg',
    file_get_contents($path),
    [
        'visibility' => 'public',
        'Metadata' => [
            'thumb' => '320-180',
        ],
    ]
);
So now, if you view the newly uploaded file in S3, you will see the custom metadata.
Hope this helps someone.
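If you want to verify the metadata programmatically rather than in the S3 console (my own addition, not part of the answer above), one way that avoids Flysystem internals is to call headObject() with a plain SDK client; the bucket and credentials are read here from the usual Laravel config keys:

$client = new Aws\S3\S3Client([
    'version' => 'latest',
    'region' => config('filesystems.disks.s3.region'),
    'credentials' => [
        'key' => config('filesystems.disks.s3.key'),
        'secret' => config('filesystems.disks.s3.secret'),
    ],
]);

$head = $client->headObject([
    'Bucket' => config('filesystems.disks.s3.bucket'),
    'Key' => 'image.jpg',
]);

// Custom user metadata is returned under 'Metadata', e.g. ['thumb' => '320-180'].
print_r($head['Metadata']);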
The answer from @Paras is good. But there is one thing that can confuse newcomers:
'options' => [
    'Expires' => gmdate('D, d M Y H:i:s GMT', strtotime('+1 month')),
    'visibility' => 'public',   // >>> WRONG <<<
]
If you want to define global options for the HEADERS, the options array is the right way to go. But if you also want to define the visibility, you cannot mix them up: visibility has to be defined outside of the options array.
👍
'visibility' => 'public',
'options' => ['Expires' => gmdate('D, d M Y H:i:s GMT', strtotime('+1 month'))]
Here is an example of how to upload a file to S3 as of Laravel 5.8, with Expires and Cache-Control headers:
Storage::put($directory . '/' . $imageName,
    $image, [
        'visibility' => 'public',
        'Expires' => gmdate('D, d M Y H:i:s \G\M\T', time() + (60 * 60 * 24 * 7)),
        'CacheControl' => 'max-age=315360000, no-transform, public',
    ]);
Also, don't forget to uncheck the 'Disable cache' checkbox in Chrome if you're testing and it never seems to work; that tripped me up for an hour when my browser wouldn't cache things even though I had finally gotten the headers right in S3.
For Laravel 9 users this has become easier. You do not need to call ->getDriver() anymore; you can pass options directly to the put method:
Storage::disk('s3')->put('/index.txt', 'file content', [
    // S3 Object ACL
    'visibility' => 'public', // or 'private'

    // HTTP Headers
    'CacheControl' => 'public,max-age=315360000',
    'ContentDisposition' => 'attachment; filename="index.txt"',
    'Expires' => 'Thu, 12 Feb 2032 08:24:43 GMT',

    // Metadata or other S3 options
    'MetadataDirective' => 'REPLACE',
    'Metadata' => [
        'Custom-Key' => 'test',
    ],
]);
In case you need other headers or options, please check out the Flysystem source code for all available options:
https://github.com/thephpleague/flysystem-aws-s3-v3/blob/master/src/AwsS3Adapter.php#L38
public const AVAILABLE_OPTIONS = [
    'ACL',
    'CacheControl',
    'ContentDisposition',
    'ContentEncoding',
    'ContentLength',
    'ContentType',
    'Expires',
    'GrantFullControl',
    'GrantRead',
    'GrantReadACP',
    'GrantWriteACP',
    'Metadata',
    'MetadataDirective',
    'RequestPayer',
    'SSECustomerAlgorithm',
    'SSECustomerKey',
    'SSECustomerKeyMD5',
    'SSEKMSKeyId',
    'ServerSideEncryption',
    'StorageClass',
    'Tagging',
    'WebsiteRedirectLocation',
];
Hey, I solved this problem. You need to create a custom S3 filesystem.
First, create a new file CustomS3Filesystem.php and save it into app/Providers. This custom S3 filesystem uses the S3 adapter, but you can add metadata and headers.
<?php

namespace App\Providers;

use Storage;
use League\Flysystem\Filesystem;
use Aws\S3\S3Client;
use League\Flysystem\AwsS3v2\AwsS3Adapter as S3Adapter;
use Illuminate\Support\ServiceProvider;

class CustomS3Filesystem extends ServiceProvider {

    public function boot()
    {
        Storage::extend('s3_custom', function($app, $config)
        {
            $s3Config = array_only($config, ['key', 'region', 'secret', 'signature', 'base_url']);
            $flysystemConfig = ['mimetype' => 'text/xml'];
            $metadata['cache_control'] = 'max-age=0, no-cache, no-store, must-revalidate';

            return new Filesystem(new S3Adapter(S3Client::factory($s3Config), $config['bucket'], null, ['mimetype' => 'text/xml', 'Metadata' => $metadata]), $flysystemConfig);
        });
    }

    public function register()
    {
        //
    }
}
Add the provider to the providers list in config/app.php:
'App\Providers\CustomS3Filesystem',
Create a new filesystem name in config/filesystems.php:
's3-new' => [
    'driver' => 's3_custom',
    'key' => 'XXX',
    'secret' => 'XXX',
    'bucket' => 'XXX',
],
Use the newly created custom S3 adapter:
Storage::disk('s3-new')->put($filename, file_get_contents($file), 'public');
I used the Laravel documentation to customize the S3 adapter:
http://laravel.com/docs/5.0/filesystem#custom-filesystems
I hope this may help you.
I am using Laravel 4.2, but I think my solution might also help on Laravel 5.0 (cannot say for sure, as I have not tried to upgrade yet). You need to update the meta options in the config for the Flysystem driver that you are using. In my case, I created a connection called s3static to access the bucket where I am storing images that will not be changing.
My config file:
's3static' => [
    'driver' => 'awss3',
    'key' => 'my-key',
    'secret' => 'my-secret',
    'bucket' => 'my-bucket',
    // 'region' => 'your-region',
    // 'base_url' => 'your-url',
    'options' => array(
        'CacheControl' => 'max-age=2592000'
    ),
    // 'prefix' => 'your-prefix',
    // 'visibility' => 'public',
    // 'eventable' => true,
    // 'cache' => 'foo'
],
Now, when I put any files onto S3 using this connection, they have the Cache-Control metadata set.
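As a usage sketch (my wording, not the original answer's; $localPath stands for whatever local file you're uploading), any write through this connection now carries that Cache-Control value:

// Uploads via the 's3static' disk pick up the configured Cache-Control option.
Storage::disk('s3static')->put('images/logo.png', file_get_contents($localPath));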
To expand on @sergiodebcn's answer, here is the same CustomS3Filesystem class working for S3 v3 and the latest Laravel. Note that I have removed the XML mimetype and set up a 5-day cache time:
namespace App\Providers;

use Illuminate\Support\Arr;
use Storage;
use League\Flysystem\Filesystem;
use Aws\S3\S3Client;
use League\Flysystem\AwsS3v3\AwsS3Adapter as S3Adapter;
use Illuminate\Support\ServiceProvider;

class CustomS3Filesystem extends ServiceProvider
{
    /**
     * Format the given S3 configuration with the default options.
     *
     * @param  array  $config
     * @return array
     */
    protected function formatS3Config(array $config)
    {
        $config += ['version' => 'latest'];

        if ($config['key'] && $config['secret']) {
            $config['credentials'] = Arr::only($config, ['key', 'secret']);
        }

        return $config;
    }

    /**
     * Bootstrap a custom filesystem
     *
     * @return void
     */
    public function boot()
    {
        Storage::extend('s3_custom', function($app, $config)
        {
            $s3Config = $this->formatS3Config($config);

            return new Filesystem(
                new S3Adapter(
                    new S3Client($s3Config),
                    $config['bucket'],
                    null,
                    [
                        'CacheControl' => 'max-age=432000'
                    ]
                )
            );
        });
    }

    public function register()
    {
        //
    }
}
Using Laravel 8 here:
I didn't see this mentioned elsewhere, but the metadata option key => values listed by Christoph Kluge
appear to only accept string values and fail silently if passed an integer, bool, etc. So if you're passing in a variable, you'll need to convert it to a string value:
$fileID = $fileData['FileId'];
$fileExt = $fileData['FileExtension'];
$fileUnique = $fileData['UniqueFileId'];
$isImage = $fileData['IsImage'];
$isDefault = $fileData['IsDefaultImage'];
$filePath = $fileUnique . "." . $fileExt;
$file = $mp->fileID($fileID)->get();

if (Storage::disk('s3')->missing('img/' . $filePath)) {
    Storage::disk('s3')->put(
        'img/' . $filePath,
        $file,
        [
            // Metadata or other S3 options
            'MetadataDirective' => 'REPLACE',
            'Metadata' => [
                'is-image' => strval($isImage),
                'is-default' => strval($isDefault),
                'unique-file-id' => strval($fileUnique),
                'file-extension' => strval($fileExt),
            ]
        ]
    );
    echo nl2br('uploading file: ' . $filePath . "\n");
} else {
    echo nl2br('file already exists: ' . $filePath . "\n");
}

Set Cache-Control HTTP Header for S3 Objects from PHP AWS SDK

I am using the Amazon SDK for PHP and trying to set a Cache-Control header on the image. When I try to add it via MetaData = array("Cache-Control"), it changes to x-amz-meta-cache-control when I log in to the S3 bucket, and when I download the file, there is no Cache-Control set. But if I manually change this setting, the Cache-Control works perfectly. Is there some parameter I am missing that I can use to set HTTP request headers programmatically on upload? I'm using the PutObject method. I believe the AWS SDK is from 2013.
The cache control isn't set via the "Metadata" index; "CacheControl" goes at the same level as "Metadata", not contained within it.
http://docs.aws.amazon.com/aws-sdk-php-2/latest/class-Aws.S3.S3Client.html#_putObject
You'd use something like this as your configuration array for the putObject() method...
$s3client->putObject(array(
    'Bucket' => '...',
    'Key' => '...',
    'Body' => '...',
    'CacheControl' => 'max-age=172800',
    'Metadata' => array(
        'metaKey1' => 'metaValue1',
        'metaKey2' => 'metaValue2'
    )));
For the upload() method...
$s3client->upload(
    'bucket',
    'key',
    fopen('sourcefile', 'r'),
    'public-read',
    array('params' => array(
        'CacheControl' => 'max-age=172800',
        'Metadata' => array(
            'metaKey1' => 'metaValue1',
            'metaKey2' => 'metaValue2'
        ))));
Also, it's worth pointing out that upload() will wrap putObject() for files up to 5 MB in size; otherwise it will initiate a multipart upload request.
If you want to add the CacheControl header to an item already in your bucket, use the SDK's copyObject method. Set the MetadataDirective param to REPLACE to make the item overwrite itself.
I noticed one weird thing: I had to set the ContentType header too, even though it was already set. Otherwise the image would not display inline in the browser but be offered as a download.
$result = $s3->copyObject(array(
    'ACL' => 'public-read',
    'Bucket' => $bucket, // target bucket
    'CacheControl' => 'public, max-age=86400',
    'ContentType' => 'image/jpeg', // !!
    'CopySource' => urlencode($bucket . '/' . $key),
    'Key' => $key, // target file name
    'MetadataDirective' => 'REPLACE'
));

AWS PHP SDK Version 2 S3 putObject Error

So the AWS PHP SDK 2.x library has been put out recently, and I've taken a turkey-day plunge into upgrading from 1.5.x. My first step was to upgrade my S3 backup class. I quickly ran into an error:
Fatal error: Class 'EntityBody' not found in /usr/share/php/....my file here
when trying to upload a zipped file to an S3 bucket. I wrote a class to abstract the writing a bit to allow for multi-region backup, so the references to $this in the code below refer to that class.
$response1 = $s3->create_object(
    $this->bucket_standard,
    $this->filename,
    array(
        'fileUpload' => $this->filename,
        'encryption' => 'AES256',
        //'acl' => AmazonS3::ACL_PRIVATE,
        'contentType' => 'text/plain',
        'storage' => AmazonS3::STORAGE_REDUCED,
        'headers' => array( // raw headers
            'Cache-Control' => 'max-age',
            //'Content-Encoding' => 'gzip',
            'Content-Language' => 'en-US'
            //'Expires' => 'Thu, 01 Nov 2012 16:00:00 GMT'
        ),
        'meta' => array(
            'param1' => $this->backupDateTime->format('Y-m-d H:i:s'), // put some info on the file in meta tags
            'param2' => $this->hostOrigin
        )
    )
);
The above worked fine on 1.5.x.
Now, in 2.x, I'm looking into their docs and they've changed just about everything (great...maximum sarcasm)
$s3opts=array('key'=> $this->accessKey, 'secret' => $this->secretKey,'region' => 'us-east-1');
$s3 = Aws\S3\S3Client::factory($s3opts);
So now I've got a new S3 object. And here is my 2.x syntax to do the exact same thing. My problem arises where they've (sinisterly) changed the old "fileUpload" to "Body" and made it more abstract in how to actually attach a file! I've tried both, and I'm thinking it has to do with the dependencies (Guzzle or Symfony etc.), but I get the error above (or substitute Stream if you like) whenever I try to execute this.
I've tried using Composer with composer.json, and the aws.phar, but before I get into that, is there something dumb I'm missing?
$response1 = $s3->putObject(array(
    'Bucket' => $this->bucket_standard,
    'Key' => $this->filename,
    'ServerSideEncryption' => 'AES256',
    'StorageClass' => 'REDUCED_REDUNDANCY',
    'Body' => EntityBody::factory(fopen($this->filename, 'r')),
    //'Body' => new Stream(fopen($fullPath, 'r')),
    'MetaData' => array(
        'BackupTime' => $this->backupDateTime->format('Y-m-d H:i:s'), // put some info on the file in meta tags
        'HostOrigin' => $this->hostOrigin
    )
));
Thanks as always,
R
Did you import the EntityBody into your namespace?
use Guzzle\Http\EntityBody;
Otherwise, you'd have to do
'Body' => \Guzzle\Http\EntityBody::factory(fopen($this->filename, 'r')),
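As a standalone sketch of the fix (my illustration, not part of the original answer; $filename is any local file you want to upload), the import plus the factory call looks like this:

use Guzzle\Http\EntityBody;

$filename = '/tmp/backup.zip'; // hypothetical local file
$body = EntityBody::factory(fopen($filename, 'r'));
// $body can now be passed as the 'Body' parameter of $s3->putObject(...)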
