My application uploads images to S3. I use front-end rendering to get the colors of the image, but because uploading to S3 lowers the quality (JPEG-ish), I get more colors than desired.
$s3 = \Storage::disk('s3');
$s3->put('/images/file.jpg', '/images/file.jpg', 'public');
Is there a way to prevent this quality loss? I noticed that if I upload the file directly using the AWS console, the quality stays the same, which is ideal.
Thank you!
In the controller action:
public function uploadFileToS3(Request $request)
{
    $image = $request->file('image');
}
Next we need to assign a file name to the uploaded file. You could leave this as the original filename, but in most cases you will want to change it to keep things consistent. Let's change it to a timestamp and append the file extension to it (for example, 1458923683.jpg).
$imageFileName = time() . '.' . $image->getClientOriginalExtension();
Now read the image contents and upload them as follows:
$s3 = \Storage::disk('s3');
$filePath = '/images/' . $imageFileName;
$s3->put($filePath, file_get_contents($image), 'public');
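Putting the pieces together, the whole action might look like this (a sketch assembled from the steps above; the 's3' disk name and the 'images' folder are taken from the snippets):

public function uploadFileToS3(Request $request)
{
    $image = $request->file('image');

    // Timestamped name keeps filenames consistent
    $imageFileName = time() . '.' . $image->getClientOriginalExtension();

    // Upload the raw file contents, not the path string
    $s3 = \Storage::disk('s3');
    $s3->put('/images/' . $imageFileName, file_get_contents($image), 'public');
}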
I am not familiar with Laravel, but I am familiar with AWS S3, and I'm using aws-sdk-php.
As far as I know, neither AWS S3 nor the PHP SDK does anything implicitly under the hood, so it must be something going wrong elsewhere in your project.
You can try using plain aws-sdk-php:
$s3 = S3Client::factory([
    'region'      => 'us-west-2',
    'credentials' => $credentials,
    'version'     => 'latest',
]);

$s3->putObject([
    'Bucket'     => 'myBucket',
    'Key'        => 'test/img.jpg',
    'SourceFile' => '/tmp/img.jpg',
]);
It works perfectly.
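If you want to confirm that the bytes on S3 are byte-for-byte identical to the local file, here is a quick check (a sketch; it assumes a single-part upload without SSE-KMS, in which case the object's ETag is the MD5 of its content):

// Compare the local file's MD5 with the uploaded object's ETag
$local = md5_file('/tmp/img.jpg');

$head = $s3->headObject([
    'Bucket' => 'myBucket',
    'Key'    => 'test/img.jpg',
]);
$remote = trim($head['ETag'], '"'); // the ETag comes back wrapped in quotes

echo $local === $remote ? "identical\n" : "contents differ\n";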
I'm using the NeutrinoAPI image watermark API (https://www.neutrinoapi.com/api/image-watermark/).
That's working just fine, but now I need to upload the watermarked image to Amazon S3, for which I use this code (https://gist.github.com/keithweaver/70eb06d98b008113ce97f6148fbea83d).
Since the first API's response is an image, I don't know how to use it with the AWS API. Here is what I'm doing, but it keeps uploading a 0 KB file.
// Add it to S3
try {
    // Uploaded:
    $file = $_FILES[$json];
    $response = $s3->putObject(
        array(
            'Bucket'     => $bucketName,
            'Key'        => $keyName,
            'ACL'        => 'public-read',
            'SourceFile' => $file
        )
    );
} catch (Exception $e) {
    echo $e->getMessage();
}
When I use 'file_put_contents($filename, $json);' it works and exports the image I want, but how do I get that image data into the Amazon $file?
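One way to approach this (a sketch, not from the thread; it assumes $json holds the raw image bytes, which file_put_contents() producing a valid image suggests) is to pass the bytes directly via the SDK's 'Body' parameter instead of pointing 'SourceFile' at $_FILES, which is empty in this request:

try {
    $response = $s3->putObject(array(
        'Bucket'      => $bucketName,
        'Key'         => $keyName,
        'ACL'         => 'public-read',
        'Body'        => $json,       // raw image bytes from the watermark API
        'ContentType' => 'image/png', // assumption: match the actual image format
    ));
} catch (Exception $e) {
    echo $e->getMessage();
}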
In my PHP FPDF script:
<?php
...
$mypdf->Image("http://s3-ap-southeast-1.amazonaws.com/mybucket/path/to/the/image/file.png", null, null, 150, 150);
...
?>
and it causes errors. However, when I try to do the same thing with a different image, one not hosted on S3, it works.
How is it possible that S3 does not work with FPDF?
I was having the same problem and, after some digging in the FPDF source, figured out the issue was with fopen(). In order to use this method with an S3 image you need to use an S3 stream wrapper. This requires the AWS SDK for PHP, or you could roll your own if you really wanted to.
My code looks like this:
$credentials = new Aws\Credentials\Credentials('KEY', 'SECRET');
$client = new Aws\S3\S3Client([
    'version'     => 'latest',
    'region'      => 'REGION',
    'credentials' => $credentials
]);
$client->registerStreamWrapper();

// Link to file
$url = 's3://bucket/key';

// Add background image
$fpdf->Image($url, 0, 0, $fpdf->GetPageWidth(), $fpdf->GetPageHeight());
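As a side note, once registerStreamWrapper() has been called, s3:// URLs work with any fopen()-based function, not just FPDF, for example:

// Read an object's bytes straight through the wrapper
$bytes = file_get_contents('s3://bucket/key');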
I am using "aws-sdk-php-laravel" package in my laravel 5 project. which used amazonS3 for storage. But what is OBJECT_KEY and where to get it.
$s3 = App::make('aws')->createClient('s3');
$s3->putObject(array(
    'Bucket'     => 'YOUR_BUCKET',
    'Key'        => 'YOUR_OBJECT_KEY',
    'SourceFile' => '/the/path/to/the/file/you/are/uploading.ext',
));
It's the name of your file on S3.
It's the name of the file on S3. Just to make things a little easier to understand, don't think of S3 as a filesystem; think of it as a key-value data store.
So you have a key, the 'file name', and the data associated with it. In this instance the data comes from your 'SourceFile' value, which is a path to a file on your local filesystem (not S3). All the content of that source file will be uploaded under the key name.
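To illustrate (a hypothetical example; the key and local path below are made up):

$s3->putObject(array(
    'Bucket'     => 'YOUR_BUCKET',
    'Key'        => 'uploads/2017/profile.jpg',         // the name S3 stores the data under
    'SourceFile' => '/var/www/storage/tmp/profile.jpg', // local file whose bytes are uploaded
));
// The object is then addressable as:
// https://YOUR_BUCKET.s3.amazonaws.com/uploads/2017/profile.jpg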
Will the function pause the PHP script until it finds the object on the S3 servers?
I have it inside a foreach loop, uploading images one by one. After the object is found, I call a method to delete the image locally and then delete the local folder if it's empty. Is this a proper way of going about it? Thanks
foreach ($fileNames as $fileName)
{
    $imgSize = getimagesize($folderPath . $fileName);
    $width  = (string)$imgSize[0];
    $height = (string)$imgSize[1];

    // Upload the images
    $result = $S3->putObject(array(
        'ACL'        => 'public-read',
        'Bucket'     => $bucket,
        'Key'        => $keyPrefix . $fileName,
        'SourceFile' => $folderPath . $fileName,
        'Metadata'   => array(
            'w' => $width,
            'h' => $height
        )
    ));

    $S3->waitUntilObjectExists(array(
        'Bucket' => $bucket,
        'Key'    => $keyPrefix . $fileName));

    $this->deleteStoreDirectory($folderPath, $fileName);
}
waitUntilObjectExists is basically a waiter that periodically checks (polls) S3 at specific time intervals to see if the resource is available. The script's execution is blocked until the resource is located or the maximum number of retries is reached.
As the AWS docs define them:
Waiters help make it easier to work with eventually consistent systems by providing an easy way to wait until a resource enters into a particular state by polling the resource.
By default, the waitUntilObjectExists waiter is configured to try to locate the resource 20 times, with a 5-second delay between each try. You can override these defaults with your desired values by passing additional parameters to the waitUntilObjectExists method, as the sketch below shows.
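For example (a sketch using the v2 SDK parameter names that the linked guide documents; the values 3 and 10 are arbitrary):

$S3->waitUntilObjectExists(array(
    'Bucket'              => $bucket,
    'Key'                 => $keyPrefix . $fileName,
    'waiter.interval'     => 3,  // seconds between polls
    'waiter.max_attempts' => 10, // give up after ten tries
));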
If the waiter is unable to locate the resource after the maximum number of tries, it will throw an exception.
You can learn more about waiters at:
http://docs.aws.amazon.com/aws-sdk-php-2/guide/latest/feature-waiters.html
For your use case, I don't think it makes sense to call waitUntilObjectExists after you uploaded the object, unless the same PHP script tries to retrieve the same object from S3 later in the code.
If the putObject API call has returned a successful response, then the object will eventually show up in S3, and you don't necessarily need to wait for that before you remove the local files; the loop can rely on the response alone, as sketched below.
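A minimal sketch of that simplification (assuming the SDK's behavior of throwing Aws\S3\Exception\S3Exception when an upload fails):

use Aws\S3\Exception\S3Exception;

try {
    $S3->putObject(array(
        'ACL'        => 'public-read',
        'Bucket'     => $bucket,
        'Key'        => $keyPrefix . $fileName,
        'SourceFile' => $folderPath . $fileName,
    ));
    // putObject returned without throwing, so the upload succeeded
    // and the local copy can be cleaned up right away.
    $this->deleteStoreDirectory($folderPath, $fileName);
} catch (S3Exception $e) {
    // Keep the local file so the upload can be retried later.
    error_log($e->getMessage());
}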
I'm trying to upload files to my bucket using a piece of code like this:
$s3 = new AmazonS3();
$bucket = 'host.domain.ext'; // My bucket name matches my host's CNAME

// Open a file resource
$file_resource = fopen('picture.jpg', 'r');

// Upload the file
$response = $s3->create_object($bucket, 'picture.jpg', array(
    'fileUpload' => $file_resource,
    'acl'        => AmazonS3::ACL_PUBLIC,
    'headers'    => array(
        'Cache-Control' => 'public, max-age=86400',
    ),
));
But I get the "NoSuchBucket" error, the weird thing is that when I query my S3 account to retrieve the list of buckets, I get the exact same name I'm using for uploading host.domain.ext.
I tried creating a different bucket with no dots in the name and it works perfectly... yes, my problem is my bucket name, but I need to keep the FQDN convention in order to map it as a static file server on the Internet. Does anyone know if is there any escaping I can do to my bucket name before sending it to the API to prevent the dot crash? I've already tried regular expressions and got the same result.
I'd try using path-style URLs, as suggested in the comments of a related AWS forum thread. Path-style requests address the bucket as part of the path (https://s3.amazonaws.com/host.domain.ext/picture.jpg) rather than as a subdomain, so the dots in the bucket name no longer clash with the wildcard SSL certificate for *.s3.amazonaws.com...
$s3 = new AmazonS3();
$s3->path_style = true;