I am storing some customer PDFs in S3 for multiple parties to either view in the browser or download. The trouble is I can only get a single file in S3 to either always download or always view in the browser.
I could just upload the same file twice with each having its own ContentDisposition, but that seems wasteful when ideally it could be as simple as adding something like ?ContentDisposition=inline to the public bucket URL.
My Question: How can I dynamically set a ContentDisposition for a single S3 file?
For context, my current code looks something like this:
$s3_object = array(
    'ContentDisposition' => sprintf('attachment; filename="%s"', addslashes($basename)),
    'ACL'                => 'public-read',
    'ContentType'        => 'application/pdf',
    'StorageClass'       => 'REDUCED_REDUNDANCY',
    'Bucket'             => 'sample',
    'Key'                => static::build_file_path($path, $filename, $extension),
    'Body'               => $binary_content,
);
$result = $s3_client->putObject($s3_object);
Also, I did try to search for this elsewhere on SO, but most people seem to be looking for one behavior or the other, so I didn't find any SO answers that showed how to do this.
I ended up stumbling across the definitive answer for this today (over a month later) while reading other S3 documentation. In the GetObject docs for the S3 API, under the section labeled "Overriding Response Header Values", we find the following:
Note: You must sign the request, either using an Authorization header or a presigned URL, when using these parameters. They cannot be used with an unsigned (anonymous) request.
response-content-type
response-content-language
response-expires
response-cache-control
response-content-disposition
response-content-encoding
This answers how to dynamically change any S3 object's Content-Disposition via the URL. However, at least for me, this is an imperfect solution: my intended use case was to store the URL for years as part of an invoicing archive, but signed URLs are only valid for a maximum of one week.
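For completeness, here is a minimal sketch of generating such an override URL with the AWS SDK for PHP v3 (the bucket, key, and filename reuse the placeholders from my question; createPresignedRequest is the v3 call):
$cmd = $s3_client->getCommand('GetObject', [
    'Bucket' => 'sample',
    'Key'    => static::build_file_path($path, $filename, $extension),
    'ResponseContentDisposition' => 'inline; filename="' . addslashes($basename) . '"',
]);
// A presigned URL is valid for at most one week with Signature V4.
$signed = $s3_client->createPresignedRequest($cmd, '+7 days');
$url = (string) $signed->getUri();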
I could technically also try to find a way to make the Authorization header work for me or just query the S3 API to get a new signed URL every time I want to link to it, but that has other security, performance, and ROI implications for me that make it not worth it.
I've been working with some example code from Amazon to get a script to upload an object to a bucket in version 3 of the PHP SDK for AWS. I can get the object to upload to a bucket, but I'm trying to add a tag to this new object during this PutObject method call. I've worked through a few examples that I found, but nothing has worked for me. Here is my PHP code so far:
$cmd = $s3Client->getCommand('PutObject', [
    'Bucket'  => $config['s3BucketName'],
    'Key'     => 'file_upload_direct.mp4',
    'Tagging' => 'status=notProcessed',
]);
The Tagging property doesn't get applied and doesn't give any error when the form is sent. I've seen a few ways of adding tags to uploads, but none of those have worked for me. I'm trying to avoid using the PutObjectTagging method since that seems to be extra work if I'm able to define the tag in the PutObject method. I'm not sure if the issue is trying to use the PutObject method in the getCommand or not, but as far as I can tell you should be able to pass the normal parameters as an array like this. Has anyone been able to get this to work, or is there a different way I should be trying to accomplish this?
Better late than never, right?
The problem here is that the AWS docs are poor at emphasising important details like:
Note: Not all operation parameters are supported when using pre-signed URLs. Certain parameters, such as SSECustomerKey, ACL, Expires, ContentLength, or Tagging must be provided as headers when sending a request.
You can find these notes in the API docs, but they're easy to overlook.
This means you prepare the data used to create the signature with this:
$cmd = $s3Client->getCommand('PutObject', [
    'Bucket'  => $config['s3BucketName'],
    'Key'     => 'file_upload_direct.mp4',
    'Tagging' => 'status=notProcessed',
]);
Then, when you get the URL back from
$s3Client->createPresignedRequest($cmd, '+5 minutes');
and PUT to that URL, you also have to send the HTTP header X-Amz-Tagging: status=notProcessed.
Another important thing to keep in mind: if you are PUTting from the frontend, your bucket must have its CORS policy properly set up to allow headers like x-amz-tagging.
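Putting it together, a rough sketch of the full flow (the local file path is hypothetical; GuzzleHttp is assumed available, since the SDK itself depends on it):
// Sign the PutObject command prepared above.
$request = $s3Client->createPresignedRequest($cmd, '+5 minutes');
$url = (string) $request->getUri();

// PUT to the presigned URL; the header value must match the signed Tagging parameter.
$http = new \GuzzleHttp\Client();
$http->put($url, [
    'headers' => ['X-Amz-Tagging' => 'status=notProcessed'],
    'body'    => fopen('/path/to/file_upload_direct.mp4', 'r'),
]);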
I am going to answer my own question here. It took me hours to figure this out, as there is no info on this anywhere, so I thought I should post it somewhere I would look first.
I was using the AWS PHP SDK to send a PUT request to add a lifecycle policy to my DigitalOcean Space, and it would not take, since it requires a ContentMD5 header. There are two problems here. The first problem is that the SDK URL-encodes the path/key, which breaks /?lifecycle, /?location, and /?acl since they become "/%3Flifecycle" -- skip this paragraph if this isn't part of your request path. To temporarily stop the encoding so you can add or update a bucket policy, you have to find the file RestSerializer.php in the SDK files; if you added the SDK with Composer, it will be at a path like /vendor/aws/aws-sdk-php/src/Api/Serializer/RestSerializer.php in your Composer/website root, which will likely be in /var/www. In RestSerializer.php, find the two rawurlencode function calls and remove them, but leave the value/argument, so "rawurlencode($varspecs[$k])" becomes "$varspecs[$k]".
Now that the request is going to the correct URL, you need a little PHP to generate the ContentMD5, depending on what you're doing. If you have put the XML text for your policy in a file, use md5_file(PATH_TO_FILE_HERE, true); if you are using a string, use md5(STRING_HERE, true). Then wrap that in base64_encode() so it looks something like base64_encode(md5_file('/path/file.xml', true)). Finally, add that to your putObject array with 'ContentMD5' => base64_encode(md5_file('/path/file.xml', true)).
PHP Example with File:
// $spaceS3Client is a new S3Client object.
// Since it's a file, I need to open the file first.
$xmlfile = fopen('/spaces.xml', 'r');
$request = $spaceS3Client->putObject([
    'Bucket'      => 'myspacename',
    'Key'         => '?lifecycle',
    'Body'        => $xmlfile,
    'ContentType' => 'application/xml',
    'ContentMD5'  => base64_encode(md5_file('/spaces.xml', true)),
]);
// Close the file.
fclose($xmlfile);
// If you are having trouble connecting to your Space in the first place with
// an S3Client object: it is set up for AWS, not DO, so you need to add an
// 'endpoint' to the array passed to new S3Client, like
// 'endpoint' => 'https://'.$myspace.'.'.$myspaceregion.'.digitaloceanspaces.com',
// and you also need to add 'bucket_endpoint' => true.
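For reference, a minimal sketch of that client setup, with a placeholder Space name, region, and credential variables:
use Aws\S3\S3Client;

$spaceS3Client = new S3Client([
    'version'         => 'latest',
    'region'          => 'us-east-1', // required by the SDK; DigitalOcean ignores it
    'endpoint'        => 'https://myspacename.nyc3.digitaloceanspaces.com',
    'bucket_endpoint' => true,
    'credentials'     => [
        'key'    => $spaces_key,
        'secret' => $spaces_secret,
    ],
]);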
// To check that the rules have been set, make a getObject request for the
// same key and parse the response:
$result = $spaceS3Client->getObject([
    'Bucket' => 'myspacename',
    'Key'    => '?lifecycle',
]);
header('Content-type: text/xml');
echo $result->toArray()['Body'];
I am trying to make a PutObject presigned request using the AWS S3 PHP SDK.
I have gotten the request to work, but now I want to allow my users to upload only video files. I have tried a lot of combinations and searched a lot, but I could not get it to work.
Here is the sample code I use:
$cmd = $this->s3client->getCommand('PutObject', [
    'Bucket'     => 'myBucket',
    'Key'        => 'inputs/' . $movie->getId(),
    'ACL'        => 'private',
    'Conditions' => ['Starts-With', '$Content-Type', 'video/'], // I have tried other combinations but it seems to not work
]);
$request = $this->s3client->createPresignedRequest($cmd, '+30 minutes');
$movie->setSignedUrl((string)$request->getUri());
The generated signed URL never includes the Content-Type in the X-Amz-SignedHeaders query parameter; only the host is included.
The putObject() request has no documented Conditions key.
You appear to be confusing S3's PUT upload interface with the pre-signed POST capability, which supports policy document conditions like ['Starts-With', '$Content-Type', 'video/'].
PUT does not support "starts with". It requires the exact Content-Type and the key for this (which should result in the header appearing in the X-Amz-SignedHeaders query string parameter) is simply ContentType. It goes in the outer parameters array, just like Bucket and Key.
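For example, a sketch of your code with an exact type (video/mp4 is just an assumed example here):
$cmd = $this->s3client->getCommand('PutObject', [
    'Bucket'      => 'myBucket',
    'Key'         => 'inputs/' . $movie->getId(),
    'ACL'         => 'private',
    'ContentType' => 'video/mp4', // exact type; must match the Content-Type header sent with the PUT
]);
$request = $this->s3client->createPresignedRequest($cmd, '+30 minutes');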
But if you want to support multiple content types without knowing the specific type in advance, you need to use POST uploads.
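As a sketch of that approach: the PHP SDK ships an Aws\S3\PostObjectV4 helper that builds the form attributes and signed policy, and its conditions array accepts starts-with rules:
use Aws\S3\PostObjectV4;

$postObject = new PostObjectV4(
    $this->s3client,
    'myBucket',
    ['acl' => 'private', 'key' => 'inputs/' . $movie->getId()], // form inputs
    [
        ['acl' => 'private'],
        ['bucket' => 'myBucket'],
        ['starts-with', '$key', 'inputs/'],
        ['starts-with', '$Content-Type', 'video/'], // any video type
    ],
    '+30 minutes'
);
$attributes = $postObject->getFormAttributes(); // action, method, enctype for the upload form
$inputs     = $postObject->getFormInputs();     // hidden fields, including the signature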
I have been using the AWS SDK v3 for PHP to put an object to S3 with server-side encryption using a customer-provided key (SSE-C). The documentation is quite sketchy (or at least I haven't found it).
For uploading the object using the S3Client, I use putObject with:
$params['SSECustomerAlgorithm'] = 'AES256';
$params['SSECustomerKey'] = $this->encryptioncustkey;
$params['SSECustomerKeyMD5'] = $this->encryptioncustkeymd5;
The $this->encryptioncustkey is a plain customer key (not base64-encoded, because the SDK seems to be doing that) and $this->encryptioncustkeymd5 = md5($this->encryptioncustkey, true);
The putObject works fine. However, the problem is in generating a presigned URL.
$cmd = $client->getCommand('GetObject', array(
    'Bucket' => $bucket,
    'Key' => $storedPath,
    'ResponseContentDisposition' => 'attachment;charset=utf-8;filename="'.utf8_encode($fileName).'"',
    'ResponseContentType' => $ctype,
    'SSECustomerAlgorithm' => 'AES256',
    'SSECustomerKey' => $this->encryptioncustkey,
    'SSECustomerKeyMD5' => $this->encryptioncustkeymd5
));
but I get a weird response indicating that it is missing "x-amz-server-side-encryption" (ServerSideEncryption), which according to the documentation is not required for SSE-C. Even if I set ServerSideEncryption='AES256', it has no effect.
<Error>
  <Code>InvalidArgument</Code>
  <Message>
    Requests specifying Server Side Encryption with Customer provided keys must provide an appropriate secret key.
  </Message>
  <ArgumentName>x-amz-server-side-encryption</ArgumentName>
  <ArgumentValue>null</ArgumentValue>
  <RequestId>A3368F6CE5DD310D</RequestId>
  <HostId>
    nHavXXz/gFOoJT0tnh+wgFTbTgGdpggRkyb0sDh07H7SomcX7HrcKU1dDzgZimrQwyaVQEqAjdk=
  </HostId>
</Error>
I was running into the same issue and tried every possible permutation to get it to work. I finally concluded that this use case is not supported. Reading through the scattered and arcane documentation on the subject, it seems the only way to access SSE-C content on S3 is by specifying the x-amz-server-side-encryption-customer-algorithm / x-amz-server-side-encryption-customer-key / x-amz-server-side-encryption-customer-key-MD5 fields in the HTTP request headers, not in the URL.
What I ended up doing was to store the content in question in S3 unencrypted and with the ACL set to private (you could upload it as such from the get-go, or use copyObject() to make a copy with those settings). Then, when I wanted the ["time-bombed"] pre-signed URL for the GET request, I just used a command similar to the one in your question, but omitted the SSE parameters. That worked for me.
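In code, that amounts to something like this sketch (the command from the question, minus the SSE-C fields; the '+10 minutes' lifetime is arbitrary):
$cmd = $client->getCommand('GetObject', array(
    'Bucket' => $bucket,
    'Key'    => $storedPath,
    'ResponseContentDisposition' => 'attachment;charset=utf-8;filename="'.utf8_encode($fileName).'"',
    'ResponseContentType' => $ctype,
));
$request = $client->createPresignedRequest($cmd, '+10 minutes');
$url = (string) $request->getUri();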
I have a bucket in S3 that I linked up to a CNAME alias. Let's assume for now that the domain is media.mycompany.com. In the bucket are image files that are all set to private. Yet they are publicly used on my website using URL signing. A signed URL may look like this:
http://media.mycompany.com/images/651/38935_small.JPG?AWSAccessKeyId=05GMT0V3GWVNE7GGM1R2&Expires=1466035210&Signature=Uk3K7qVNRFHuIUnMaDadCOPjV%2BM%3D
This works fine as it is. I'm using an S3 helper library in PHP to generate such URLs. Here's the identifier of that library:
$Id: S3.php 44 2008-12-23 15:38:38Z don.schonknecht $
I know that it is old, but I'm relying on a lot of methods in this library, so it's not trivial to upgrade, and as said, it works well for me. Here's the relevant method in this library:
public static function getAuthenticatedURL($bucket, $uri, $lifetime, $hostBucket = false, $https = false) {
    $expires = time() + $lifetime;
    $uri = str_replace('%2F', '/', rawurlencode($uri)); // URI should be encoded (thanks Sean O'Dea)
    return sprintf(($https ? 'https' : 'http').'://%s/%s?AWSAccessKeyId=%s&Expires=%u&Signature=%s',
        $hostBucket ? $bucket : $bucket.'.s3.amazonaws.com', $uri, self::$__accessKey, $expires,
        urlencode(self::__getHash("GET\n\n\n{$expires}\n/{$bucket}/{$uri}")));
}
In my normal, working setup, I'd call this method like this:
$return = $this->s3->getAuthenticatedURL('media.mycompany.com', $dir . '/' . $filename,
$timestamp, true, false);
This returns the correctly signed URL as shared earlier in this post, and all is good.
However, I'd now like to generate HTTPS URLs, and this is where I'm running into issues. Simply adding HTTPS to the current URL (by setting the last param of the method to true) will not work; it will generate a URL like this:
https://media.mycompany.com/images/651/38935_small.JPG?AWSAccessKeyId=05GMT0V3GWVNE7GGM1R2&Expires=1466035210&Signature=Uk3K7qVNRFHuIUnMaDadCOPjV%2BM%3D
This will obviously not work, since my SSL certificate (which is from Let's Encrypt) is not installed on Amazon's domain, and as far as I know, there's no way to do so.
I've learned of an alternative URL format to access the bucket over SSL:
https://media.mycompany.com.s3.amazonaws.com/images/651/38935_small.JPG?AWSAccessKeyId=05GMT0V3GWVNE7GGM1R2&Expires=1466035210&Signature=Uk3K7qVNRFHuIUnMaDadCOPjV%2BM%3D
This apparently works for some people, but not for me; from what I know, that's due to the dot (.) character in my bucket name. I cannot change the bucket name; that would have large consequences in my setup.
Finally, there's this format:
https://s3.amazonaws.com/media.mycompany.com/images/2428/39000_small.jpg?AWSAccessKeyId=05GMT0V3GWVNE7GGM1R2&Expires=1466035210&Signature=6p3W6GHQtddJNnCoUXaNl970x9s%3D
And here I am getting very close. If I take a working non-secure URL, and edit the URL to take on this format, it works. The image is shown.
Now I'd like to have it working in the automated way, from the signing method I showed earlier. I'm calling it like this:
$return = $this->s3->getAuthenticatedURL("s3.amazonaws.com/media.mycompany.com", $dir . '/' . $filename,
$timestamp, true, true);
The change here is the alternative bucket name format, and the last parameter being set to true, indicating HTTPS. This leads to output like this:
https://s3.amazonaws.com/media.mycompany.com/images/2784/38965_small.jpg?AWSAccessKeyId=05GMT0V3GWVNE7GGM1R2&Expires=1466035210&Signature=Db2ynwWOV852Mn4rpcWA0Q1DrH0%3D
As you can see, it has the same format as the URL I manually crafted to work. But unfortunately, I'm getting signature errors:
<Code>SignatureDoesNotMatch</Code>
<Message>
The request signature we calculated does not match the signature you provided. Check your key and signing method.
</Message>
I'm stuck figuring out why these signatures are incorrect. I tried setting the 4th parameter of the signing method to true and false, but it makes no difference.
What am I missing?
Edit
Based on Michael's answer below, I tried doing the simple string replace after the call to the S3 library, which works. Quick and dirty code:
$return = $this->s3->getAuthenticatedURL("media.mycompany.com", $dir . '/' . $filename, $timestamp, true, true);
$return = substr_replace($return, "s3.amazonaws.com/", strpos($return, "media.mycompany.com"), 0);
The change here is the alternative bucket name format
Almost. This library doesn't quite appear to have what you need in order to do what you are trying to do.
For Signature Version 2 (which is what you're using), your easiest workaround is to take the signed URL with https://bucket.s3.amazonaws.com/path and just do a string replace to https://s3.amazonaws.com/bucket/path.¹ This works because the signatures are equivalent in V2. It wouldn't work for Signature V4, but you aren't using that.
That, or you need to rewrite the code in the supporting library to handle this case with another option for path-style URLs.
The "hostbucket" option seems to assume a CNAME or Alias named after the bucket is pointing to the S3 endpoint, which won't work with HTTPS. Setting this option to true is actually causing the library to sign a URL for the bucket named s3.amazonaws.com/media.example.com, which is why the signature doesn't match.
If you wanted to hide the "S3" from the URL and use your own SSL certificate, this can be done by using CloudFront in front of S3. With CloudFront, you can use your own cert, and point it to any bucket, regardless of whether the bucket name matches the original hostname. However, CloudFront uses a very different algorithm for signed URLs, so you'd need code to support that. One advantage of CloudFront signed URLs -- which may or may not be useful to you -- is that you can generate a signed URL that only works from the specific IP address you include in the signing policy.
It's also possible to pass-through signed S3 URLs with special configuration of CloudFront (configure the bucket as a custom origin, not an S3 origin, and forward the query string to the origin) but this defeats all caching in CloudFront, so it's a little bit counterproductive... but it would work.
¹ Note that you have to use the regional endpoint when you rewrite like this, unless your bucket is in us-east-1 (a.k.a. US Standard) so the hostname would be s3-us-west-2.amazonaws.com for buckets in us-west-2, for example. For US Standard, either s3.amazonaws.com or s3-external-1.amazonaws.com can be used with https URLs.
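A hypothetical helper for that rewrite might look like this (the function name and parameters are mine; $region is empty for US Standard, per the footnote):
// Rewrite a V2-signed virtual-hosted-style URL to the equivalent path-style URL.
function toPathStyleUrl($signedUrl, $bucket, $region = '') {
    $host = ($region === '') ? 's3.amazonaws.com' : "s3-{$region}.amazonaws.com";
    return str_replace(
        "https://{$bucket}.s3.amazonaws.com/",
        "https://{$host}/{$bucket}/",
        $signedUrl
    );
}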
Spent days going round in circles trying to set up a custom CNAME/host for presigned URLs, and it seemed impossible.
All the forums said it cannot be done, or that you have to recode your whole app to use CloudFront instead.
Changing my DNS to point from MYBUCKET.s3-WEBSITE-eu-west-1.amazonaws.com to MYBUCKET.s3-eu-west-1.amazonaws.com fixed it instantly.
Hope this helps others.
Working code:
function get_objectURL($key) {
    // Instantiate the client.
    $this->s3 = S3Client::factory(array(
        'credentials' => array(
            'key' => s3_key,
            'secret' => s3_secret,
        ),
        'region' => 'eu-west-1',
        'version' => 'latest',
        'endpoint' => 'https://example.com',
        'bucket_endpoint' => true,
        'signature_version' => 'v4'
    ));

    $cmd = $this->s3->getCommand('GetObject', [
        'Bucket' => s3_bucket,
        'Key' => $key
    ]);

    try {
        $request = $this->s3->createPresignedRequest($cmd, '+5 minutes');
        // Get the actual presigned URL.
        $presignedUrl = (string)$request->getUri();
        return $presignedUrl;
    } catch (S3Exception $e) {
        return $e->getMessage() . "\n";
    }
}