How exactly does the Amazon S3 PHP SDK's waitUntilObjectExists() method work?

Will the function pause the php script until it finds the object on s3 servers?
I have it inside a foreach loop, uploading images one by one. After the object is found I call a method to delete the image locally then delete the local folder if empty. Is this a proper way of going about it? Thanks
foreach ($fileNames as $fileName)
{
    $imgSize = getimagesize($folderPath . $fileName);
    $width  = (string) $imgSize[0];
    $height = (string) $imgSize[1];

    // Upload the image
    $result = $S3->putObject(array(
        'ACL'        => 'public-read',
        'Bucket'     => $bucket,
        'Key'        => $keyPrefix . $fileName,
        'SourceFile' => $folderPath . $fileName,
        'Metadata'   => array(
            'w' => $width,
            'h' => $height
        )
    ));

    $S3->waitUntilObjectExists(array(
        'Bucket' => $bucket,
        'Key'    => $keyPrefix . $fileName
    ));

    $this->deleteStoreDirectory($folderPath, $fileName);
}

waitUntilObjectExists is basically a waiter that periodically checks (polls) S3 at specific time intervals to see if the resource is available. The script's execution is blocked until the resource is located or the maximum number of retries is reached.
As the AWS docs define them:
Waiters help make it easier to work with eventually consistent systems by providing an easy way to wait until a resource enters into a particular state by polling the resource.
By default, the waitUntilObjectExists waiter is configured to poll for the resource up to 20 times, with a 5-second delay between attempts. You can override these defaults by passing additional parameters to the waitUntilObjectExists method.
If the waiter is unable to locate the resource after the maximum number of tries, it will throw an exception.
You can learn more about waiters at:
http://docs.aws.amazon.com/aws-sdk-php-2/guide/latest/feature-waiters.html
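For example, here is a minimal sketch of overriding those defaults, assuming the AWS SDK for PHP 2 convention of passing waiter settings as waiter.* keys alongside the operation parameters (check the waiters guide linked above for the exact keys your SDK version supports):

$S3->waitUntilObjectExists(array(
    'Bucket'              => $bucket,
    'Key'                 => $keyPrefix . $fileName,
    'waiter.interval'     => 3,   // seconds between polls (default 5)
    'waiter.max_attempts' => 10,  // polls before giving up (default 20)
));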
For your use case, I don't think it makes sense to call waitUntilObjectExists after you uploaded the object, unless the same PHP script tries to retrieve the same object from S3 later in the code.
If the putObject API call has returned a successful response, then the object will eventually show up in S3 and you don't necessarily need to wait for this to happen before you remove the local files.

Related

Google Cloud Storage - Upload Large File (4GB)

I have the following script, which works with small files; however, it fails when I try a huge file (4GB):
<?php

require 'vendor/autoload.php';

use Google\Cloud\Storage\StorageClient;

$storage = new StorageClient([
    'keyFilePath' => 'keyfile.json',
    'projectId'   => 'storage-123456'
]);

$bucket = $storage->bucket('my-bucket');

$options = [
    'resumable'     => true,
    'chunkSize'     => 200000,
    'predefinedAcl' => 'publicRead'
];

// Upload a file to the bucket.
$bucket->upload(
    fopen('data/file.imgc', 'r'),
    $options
);
?>
The error I receive is:
Fatal error: Uncaught exception 'Google\Cloud\Core\Exception\GoogleException' with message 'Upload failed. Please use this URI to resume your upload:
Any ideas how to upload a large file?
http://googlecloudplatform.github.io/google-cloud-php/#/docs/google-cloud/v0.61.0/storage/bucket?method=upload
I've also tried the getResumableUploader():
$uploader = $bucket->getResumableUploader(fopen('data/file.imgc', 'r'), [
    'name' => 'file.imgc'
]);

try {
    $object = $uploader->upload();
} catch (GoogleException $ex) {
    $resumeUri = $uploader->getResumeUri();
    $object = $uploader->resume($resumeUri);
}
When navigating to the resume URI it returns "Method Not Allowed"
I haven't used this API, but I don't think you're supposed to open a massive file into memory and stuff it into a single request. You're requesting a resumable operation, so read small portions of the file, a couple of MB at a time, and loop until you have uploaded all of it.
One thing that stands out as an issue, potentially, is the chosen chunkSize of 200000. Per the documentation, the chunkSize must be provided as multiples of 262144.
Also, when dealing with large files, I would highly recommend using Bucket::getResumableUploader(). It gives you better control over the upload process, and you should find it more reliable :). There is a code snippet in the link I shared that should help get you started.
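For what it's worth, here is a rough sketch of what that might look like with the question's file names, assuming getResumableUploader() accepts the same chunkSize and predefinedAcl options as Bucket::upload():

use Google\Cloud\Core\Exception\GoogleException;

// chunkSize is a multiple of 262144 bytes (roughly 4 MB per chunk here).
$uploader = $bucket->getResumableUploader(fopen('data/file.imgc', 'r'), [
    'name'          => 'file.imgc',
    'chunkSize'     => 262144 * 16,
    'predefinedAcl' => 'publicRead'
]);

try {
    $object = $uploader->upload();
} catch (GoogleException $ex) {
    // Resume the interrupted upload from code rather than visiting the URI in a browser.
    $object = $uploader->resume($uploader->getResumeUri());
}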

Uploading to S3 from Laravel: Quality Lost

My application uploads images to S3. I use front-end rendering to get the colors of the image, but because uploading to S3 lowers the quality (JPEG-ish), I get more colors than desired.
$s3 = \Storage::disk('s3');
$s3->put('/images/file.jpg', '/images/file.jpg', 'public');
Is there a way to prevent this quality loss? I noticed that if I upload the file directly using the AWS console website, the quality stays the same, which is ideal.
Thank you!
First, grab the uploaded file in the controller action:
public function uploadFileToS3(Request $request)
{
    $image = $request->file('image');
}
Next we need to assign a file name to the uploaded file. You could leave this as the original filename, but in most cases you will want to change it to keep things consistent. Let's change it to a timestamp and append the file extension to it.
$imageFileName = time() . '.' . $image->getClientOriginalExtension();
Now get the image contents and upload it to S3:
$s3 = \Storage::disk('s3');
$filePath = '/images/' . $imageFileName;
$s3->put($filePath, file_get_contents($image), 'public');
For further info you can refer to this
I am not familiar with Laravel, but I am familiar with AWS S3, and I'm using aws-sdk-php.
As far as I know, neither AWS S3 nor aws-sdk-php does anything to your files implicitly under the hood, so the quality loss must be happening somewhere else in your project.
You can try using plain aws-sdk-php:
$s3 = S3Client::factory([
    'region'      => 'us-west-2',
    'credentials' => $credentials,
    'version'     => 'latest',
]);

$s3->putObject([
    'Bucket'     => 'myBucket',
    'Key'        => 'test/img.jpg',
    'SourceFile' => '/tmp/img.jpg',
]);
It works perfectly.
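If you want to verify that S3 is not touching the bytes, a quick diagnostic sketch (using the Laravel Storage facade from the question; the local and remote paths are placeholders) is to compare checksums before and after the round trip:

// Placeholder paths for illustration only.
$localPath  = storage_path('app/images/file.jpg');
$remotePath = 'images/file.jpg';

\Storage::disk('s3')->put($remotePath, file_get_contents($localPath), 'public');

// If the hashes match, S3 stored the bytes untouched and any quality
// loss is happening before the upload (e.g. in your image handling code).
var_dump(md5_file($localPath) === md5(\Storage::disk('s3')->get($remotePath)));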

S3 Force Download

I have a website here: http://www.voclr.it/acapellas/. My files are hosted on my Amazon S3 account, but when a visitor goes to download an MP3 from my website it forces them to stream it, when what I actually want is for it to download to their desktop.
I have disabled S3 on the website for now, so the downloads are working fine. But really I want S3 to serve the MP3s.
Basically, you have to tell S3 to override the content-disposition header of the response. You can do that by appending the response-content-disposition query string parameter to the S3 file url and setting it to the desired content-disposition value. To force download try:
<url>&response-content-disposition="attachment; filename=somefilename"
You can find this in the S3 docs. For information on the values that the content-disposition header can assume you can look here.
As additional information, this also works with Google Cloud Storage.
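If you generate the links in PHP, here is a hedged sketch using the AWS SDK for PHP v3 (a newer SDK than the ones used elsewhere in this thread; bucket, key, and region are placeholders), which sets the override while building a presigned URL:

use Aws\S3\S3Client;

$s3 = new S3Client([
    'region'  => 'us-east-1',   // placeholder; credentials resolved from the environment
    'version' => 'latest',
]);

$command = $s3->getCommand('GetObject', [
    'Bucket'                     => 'your-bucket',
    'Key'                        => 'acapellas/track.mp3',
    'ResponseContentDisposition' => 'attachment; filename="track.mp3"',
]);

// A presigned URL that forces the browser to download instead of streaming.
$url = (string) $s3->createPresignedRequest($command, '+15 minutes')->getUri();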
require_once '../sdk-1.4.2.1/sdk.class.php';

// Instantiate the class
$s3 = new AmazonS3();

// Copy the object over itself and modify the headers
$response = $s3->copy_object(
    array( // Source
        'bucket'   => 'your_bucket',
        'filename' => 'Key/To/YourFile'
    ),
    array( // Destination
        'bucket'   => 'your_bucket',
        'filename' => 'Key/To/YourFile'
    ),
    array( // Optional parameters
        'headers' => array(
            'Content-Type'        => 'application/octet-stream',
            'Content-Disposition' => 'attachment'
        )
    )
);

// Success?
var_dump($response->isOK());
Amazon AWS S3 to Force Download Mp3 File instead of Stream It
I created a solution for doing this via CloudFront Functions (no PHP required, since it all runs at AWS by linking to the .mp3 file on CloudFront with a ?title=TitleGoesHere query string to force a downloaded file with that filename). This is a fairly recent way of doing things (as of August 2022). I documented my function and how I set up my "S3 bucket behind a CloudFront distribution" here: https://stackoverflow.com/a/73451456/19823883

How can I upload files to an S3 bucket with dots in its name using the PHP SDK?

I'm trying to upload files to my bucket using a piece of code like this:
$s3 = new AmazonS3();
$bucket = 'host.domain.ext'; // My bucket name matches my host's CNAME

// Open a file resource
$file_resource = fopen('picture.jpg', 'r');

// Upload the file
$response = $s3->create_object($bucket, 'picture.jpg', array(
    'fileUpload' => $file_resource,
    'acl'        => AmazonS3::ACL_PUBLIC,
    'headers'    => array(
        'Cache-Control' => 'public, max-age=86400',
    ),
));
But I get a "NoSuchBucket" error. The weird thing is that when I query my S3 account to retrieve the list of buckets, I get the exact same name I'm using for uploading: host.domain.ext.
I tried creating a different bucket with no dots in the name and it works perfectly... yes, my problem is my bucket name, but I need to keep the FQDN convention in order to map it as a static file server on the Internet. Does anyone know if there is any escaping I can do to my bucket name before sending it to the API to work around the dot problem? I've already tried regular expressions and got the same result.
I'd try using path-style URLs, as suggested in the comments in a related AWS forum thread...
$s3 = new AmazonS3();
$s3->path_style = true;
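For reference, path-style requests avoid the virtual-hosted-style hostname (bucket.s3.amazonaws.com), which is usually what trips over dotted bucket names. In the current AWS SDK for PHP v3 the equivalent switch is a client option; a minimal sketch (region is a placeholder):

use Aws\S3\S3Client;

$s3 = new S3Client([
    'region'                  => 'us-east-1',  // placeholder
    'version'                 => 'latest',
    'use_path_style_endpoint' => true,         // request path-style instead of virtual-hosted-style URLs
]);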

Creating an image without storing it as a local file

Here's my situation - I want to create a resized jpeg image from a user uploaded image, and then send it to S3 for storage, but am looking to avoid writing the resized jpeg to the disk and then reloading it for the S3 request.
Is there a way to do this completely in memory, with the image data JPEG formatted, saved in a variable?
Most people using PHP choose either ImageMagick or GD2.
I've never used ImageMagick; here's the GD2 method:
<?php
// Assuming your uploaded file was 'userFileName'
if ( ! is_uploaded_file(validateFilePath($_FILES[$userFileName]['tmp_name'])) ) {
    trigger_error('not an uploaded file', E_USER_ERROR);
}
$srcImage = imagecreatefromjpeg($_FILES[$userFileName]['tmp_name']);

// Create the destination image, then resize (copy from srcImage to dstImage)
$dstImage = imagecreatetruecolor(RESIZED_IMAGE_WIDTH, RESIZED_IMAGE_HEIGHT);
imagecopyresampled($dstImage, $srcImage, 0, 0, 0, 0, RESIZED_IMAGE_WIDTH, RESIZED_IMAGE_HEIGHT, imagesx($srcImage), imagesy($srcImage));

// Store the resized image in a variable
ob_start();                               // start a new output buffer
imagejpeg($dstImage, NULL, JPEG_QUALITY); // JPEG_QUALITY: a 0-100 constant defined elsewhere
$resizedJpegData = ob_get_contents();
ob_end_clean();                           // stop this output buffer

// Free up unused memory (if images are expected to be large)
unset($srcImage);
unset($dstImage);

// Your resized JPEG data is now in $resizedJpegData.
// Use your Undesigned method calls to store the data.
// (Many people want to send it as a hex stream to the DB:)
$dbHandle->storeResizedImage($resizedJpegData);
?>
Hope this helps.
This can be done using the GD library and output buffering. I don't know how efficient this is compared with other methods, but it doesn't require explicit creation of files.
//$image contains the GD image resource you want to store
ob_start();
imagejpeg($image);
$jpeg_file_contents = ob_get_contents();
ob_end_clean();
//now send $jpeg_file_contents to S3
Once you've got the JPEG in memory (using ImageMagick, GD, or your graphic library of choice), you'll need to upload the object from memory to S3.
Many PHP S3 classes seem to only support file uploads, but the one at Undesigned seems to do what we're after here -
// Manipulate image - assume ImageMagick, so $im is an Imagick object
$im = new Imagick();

// Get image source data
$im->readimageblob($image_source);

// Upload an object from a resource (requires size):
$s3->putObject(
    $s3->inputResource($im->getimageblob(), $im->getSize()),
    $bucketName, $uploadName, S3::ACL_PUBLIC_READ
);
If you're using GD instead, you can use imagecreatefromstring to read an image in from a string of image data, but I'm not sure whether you can get the size of the resulting object, as required by $s3->inputResource above - getimagesize returns the height, width, etc., but not the byte size of the image resource.
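If the byte size is the only missing piece, one workaround (a sketch assuming GD and the same output-buffering trick shown above; $dstImage is the resized GD image) is to render the image to a string first, since strlen() of that string is the size inputResource needs:

// Render the GD image to a JPEG string; its byte size is just strlen().
ob_start();
imagejpeg($dstImage);
$jpegData = ob_get_clean();
$jpegSize = strlen($jpegData);

// If an actual resource is required, wrap the string in a memory stream.
$stream = fopen('php://memory', 'r+');
fwrite($stream, $jpegData);
rewind($stream);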
Pretty late to the game on this one, but if you are using the S3 library mentioned by ConroyP together with Imagick, you should use the putObjectString() method instead of putObject(), due to the fact that getImageBlob returns a string. Example that finally worked for me:
$headers = array(
    'Content-Type' => 'image/jpeg'
);

$s3->putObjectString($im->getImageBlob(), $bucket, $file_name, S3::ACL_PUBLIC_READ, array(), $headers);
I struggled with this one a bit, hopefully it helps someone else!
Realize this is an old thread, but I spent some time banging my head against the wall on this today, and thought I would capture my solution here for the next guy.
This method uses AWS SDK for PHP 2 and GD for the image resize (Imagick could also be easily used).
require_once('vendor/aws/aws-autoloader.php');

use Aws\Common\Aws;

define('AWS_BUCKET', 'your-bucket-name-here');

// Configure AWS factory
$aws = Aws::factory(array(
    'key'    => 'your-key-here',
    'secret' => 'your-secret-here',
    'region' => 'your-region-here'
));

// Create reference to S3
$s3 = $aws->get('S3');
$s3->createBucket(array('Bucket' => AWS_BUCKET));
$s3->waitUntilBucketExists(array('Bucket' => AWS_BUCKET));
$s3->registerStreamWrapper();

// Do your GD resizing here (omitted for brevity)

// Capture image stream in output buffer
ob_start();
imagejpeg($imageRes);
$imageFileContents = ob_get_contents();
ob_end_clean();

// Send stream to S3
$context = stream_context_create(
    array(
        's3' => array(
            'ContentType' => 'image/jpeg'
        )
    )
);
$s3Stream = fopen('s3://'.AWS_BUCKET.'/'.$filename, 'w', false, $context);
fwrite($s3Stream, $imageFileContents);
fclose($s3Stream);

unset($context, $imageFileContents, $s3Stream);
The ImageMagick library will let you do that. There are plenty of PHP wrappers like this one around for it (there's even example code for what you want to do on that page ;) )
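A minimal in-memory sketch with the Imagick extension (the source file name and target width are placeholders) would be along these lines:

$im = new Imagick('uploaded.jpg');   // placeholder source file
$im->thumbnailImage(800, 0);         // resize to 800px wide, preserving aspect ratio
$im->setImageFormat('jpeg');

$jpegData = $im->getImageBlob();     // JPEG bytes in a variable, no temp file written
// ...hand $jpegData to whichever S3 upload call you are using.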
I encountered the same problem, using the OpenStack object store and the php-opencloud library.
Here is my solution, which does not use the ob_start and ob_end_clean functions, but instead stores the image in memory and/or a temp file. The amount of memory used before spilling to the temp file may be adapted at runtime.
// $image is a resource created by GD2
var_dump($image); // resource(2) of type (gd)

// We create a resource backed by memory + a temp file
$tmp = fopen('php://temp', 'r+');

// We write the image into our resource
\imagejpeg($image, $tmp);

// Rewind so the stream can be read back from the start
rewind($tmp);

// The image is now in $tmp, and you can handle it as a stream.
// You can then upload it as a stream (not tested but mentioned in the doc:
// http://docs.aws.amazon.com/aws-sdk-php/v2/guide/service-s3.html#uploading-from-a-stream)
$s3->putObject(array(
    'Bucket' => $bucket,
    'Key'    => 'data_from_stream.txt',
    'Body'   => $tmp
));

// Or, for those who prefer php-opencloud:
$container->createObject([
    'name'        => 'data_from_stream.txt',
    'stream'      => \GuzzleHttp\Psr7\stream_for($tmp),
    'contentType' => 'image/jpeg'
]);
About php://temp (from the official PHP documentation):
php://memory and php://temp are read-write streams that allow temporary data to be stored in a file-like wrapper. The only difference between the two is that php://memory will always store its data in memory, whereas php://temp will use a temporary file once the amount of data stored hits a predefined limit (the default is 2 MB). The location of this temporary file is determined in the same way as the sys_get_temp_dir() function.
The memory limit of php://temp can be controlled by appending /maxmemory:NN, where NN is the maximum amount of data to keep in memory before using a temporary file, in bytes.
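For example (the 5 MB threshold is arbitrary):

// Keep up to 5 MB in memory before spilling to a temporary file.
$tmp = fopen('php://temp/maxmemory:' . (5 * 1024 * 1024), 'r+');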
Maybe by using the GD library.
There is a function to copy out a part of an image and resize it. Of course, the part could be the whole image; that way you would only resize it.
See imagecopyresampled.
