specifying a 30s ACL not working with Cloudfront? - php

Using the PHP Amazon SDK I am successfully able to set 30-second access for a URL using the following function: get_object_url($bucket, $filename, $preauth = 0, $opt = null)
$s3->get_object_url($results['s3.bucket.name'], $results['s3.file.name'], '30 seconds');
Now, the issue with this is that it returns a fantastic URL:
"s3.url": "http://THECOOLEST.BUCKET.INTHEWORLD.EVER.s3.amazonaws.com/2011/04/18/image/png/8ba2d302-a441-45d4-8354-08e2b7e1a325.png?AWSAccessKeyId=XXXXXXXXXXXXXXXXX&Expires=1303162244&Signature=AWdUnHSaIBDmRcbwo2RFSUQaqBM%3D",
When I change the URL to the CNAME we use for cloudfront, the ACL doesn't work. Anyone know how to get_object_url with the CNAME configured?

CloudFront and S3 are two different things.
You need to set up a CNAME for your S3 bucket. See the AWS docs for more info: http://docs.amazonwebservices.com/AmazonS3/latest/dev/index.html?VirtualHosting.html#VirtualHostingCustomURLs
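If the CNAME you set up points at the S3 endpoint itself (not at CloudFront) and its hostname is identical to the bucket name, a Signature V2 presigned URL can simply have its host rewritten, because the V2 signature covers the /bucket/key resource rather than the hostname. A minimal sketch with placeholder names (this does not apply to CloudFront distributions, whose signed URLs use a different scheme):

```php
<?php
// Rewrite a virtual-hosted-style presigned URL to use a CNAME that matches
// the bucket name. Valid for Signature V2 only: the signature covers
// "/bucket/key", which is unchanged by this host swap.
function rewriteToCname($signedUrl, $bucket) {
    return str_replace(
        "http://{$bucket}.s3.amazonaws.com/",
        "http://{$bucket}/",
        $signedUrl
    );
}

$url = rewriteToCname(
    'http://media.example.com.s3.amazonaws.com/pic.png?AWSAccessKeyId=K&Expires=1&Signature=S',
    'media.example.com'
);
// $url is now 'http://media.example.com/pic.png?AWSAccessKeyId=K&Expires=1&Signature=S'
```

For a CloudFront CNAME, by contrast, you would need CloudFront's own signed-URL mechanism rather than an S3 presigned URL.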

Related

Amazon SES not working with eu_west_1 but ok using us_east_1 PHP

I'm rewriting a few scripts to make use of the new v4 signature for Amazon AWS.
I am trying to send an email using the code on this page:
https://github.com/okamos/php-ses
When I use his code exactly as it is, just adding secret keys etc., I get an error saying my email address isn't verified on us-east-1. This makes sense, as all my things are on eu-west-1.
So I've tried adding the EU endpoint as a third parameter, but get this error:
'Warning: SimpleEmailService::sendEmail(): 6 Could not resolve host: EU_WEST_1'
This is the line of code which seems to work, but it connects to the wrong endpoint:
$ses = new SimpleEmailService('apikey', 'secretkey');
print_r($ses->sendEmail($m));
I have tried adding the new endpoint as the third parameter like this:
$ses = new SimpleEmailService('apikey', 'secret','eu-west-1');
But that just generates the error.
Can anyone tell me the correct code to use to set the eu-west-1 endpoint to send emails through?
Thanks
I faced the same problem with the AWS PHP library when uploading files to S3: even though I set the region from us-east-1 to eu-west-1, it still took us-east-1 as the default. I would suggest looking at the region configuration in the library and how it is overridden.
The SES email hosts for us-east-1 and eu-west-1 are different, so even if you are passing the correct key and secret, it might not work because of the default region.
If this debugging doesn't get you anywhere, share a screenshot of which region you are getting before the email dispatch. I would love to explore further.
I found the problem.
Looks like there was a typo in the code on GitHub; it had these 3 lines in the file 'SimpleEmailServices.php':
const AWS_US_EAST_1 = 'email.us-east-1.amazonaws.com';
const AWS_US_WEST_2 = 'email.us-west-2.amazonaws.com';
const AWS_EU_WEST1 = 'email.eu-west-1.amazonaws.com';
He'd missed the underscore in the AWS_EU_WEST1.
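For anyone hitting the same error: with the constant fixed to AWS_EU_WEST_1, the third constructor argument should be the endpoint hostname, not a bare region label like 'EU_WEST_1'. A sketch, assuming the constructor signature used in the question; the class below merely mirrors the library's constants for illustration:

```php
<?php
// Endpoint hostnames mirroring SimpleEmailService.php (for illustration only).
class SesEndpoint {
    const AWS_US_EAST_1 = 'email.us-east-1.amazonaws.com';
    const AWS_EU_WEST_1 = 'email.eu-west-1.amazonaws.com';
}

// Pass the full hostname (or the class constant), not 'EU_WEST_1':
// $ses = new SimpleEmailService('apikey', 'secret', SesEndpoint::AWS_EU_WEST_1);
```

Passing 'EU_WEST_1' is treated as a literal hostname, which is why cURL reports "Could not resolve host: EU_WEST_1".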

How to get EC2 instance availability zone and id using php in aws?

I don't know how to get the EC2 instance availability zone and instance ID using PHP in Amazon Web Services. As I don't know how to fetch the data, I haven't tried anything yet. Help me out.
What I want is an /info page where I can display those instance details.
Thanks in advance
You can get the current instance's values by using the instance metadata API. These can be accessed from within your application with an HTTP request library such as GuzzleHTTP, or with the cURL functions built into PHP.
To get the instance ID, request the following URL from your current server:
http://169.254.169.254/latest/meta-data/instance-id
To get the instance's current availability zone, request the following URL:
http://169.254.169.254/latest/meta-data/placement/availability-zone
Assuming you use GuzzleHTTP, it would be as simple as calling the below:
$client = new GuzzleHttp\Client();
$response = $client->get('http://169.254.169.254/latest/meta-data/instance-id');
echo "Instance ID: " . $response->getBody();
$response = $client->get('http://169.254.169.254/latest/meta-data/placement/availability-zone');
echo "Availability Zone: " . $response->getBody();

Pre-Sign CNAME URL with PostObjectV4 (PHP SDK)

I am trying to use the AWS PHP SDK to pre-sign V4 POST URLs and am hitting a major problem.
I have created a bucket called bucket1.chris.com. This has a CNAME to bucket1.chris.com.s3-eu-west-1.amazonaws.com.
When I create the S3Client I am passing http://bucket1.chris.com as the endpoint and bucket1.chris.com as the bucket name.
Once the URL is signed and I get the action from the formAttributes it is:
bucket1.chris.com.bucket1.chris.com
Looking at the generateUri function in PostObjectV4 I can see this line:
// Use virtual-style URLs
$uri = $uri->withHost($this->bucket . '.' . $uri->getHost());
Which is causing my problem.
If I don't pass an endpoint I get:
s3-eu-west-1.amazonaws.com/bucket1.chris.com
(This is throwing an error: "The specified method is not allowed against this resource" when I try to use it but I think this might be something else)
Does anyone know how you are supposed to use CNAME records (virtual-hosted buckets) with the AWS PHP SDK?
I have figured out what I was doing wrong.
I don't need to worry about passing an endpoint to the SDK since it is not used in the signing process.
The problem I was having with s3-eu-west-1.amazonaws.com/bucket1.chris.com was due to an issue with the bucket, not the endpoint I was using.

Signature mismatch when using S3 signed URLs

I have a bucket in S3 that I linked up to a CNAME alias. Let's assume for now that the domain is media.mycompany.com. In the bucket are image files that are all set to private. Yet they are publicly used on my website using URL signing. A signed URL may look like this:
http://media.mycompany.com/images/651/38935_small.JPG?AWSAccessKeyId=05GMT0V3GWVNE7GGM1R2&Expires=1466035210&Signature=Uk3K7qVNRFHuIUnMaDadCOPjV%2BM%3D
This works fine as it is. I'm using a S3 helper library in PHP to generate such URLs. Here's the identifier of that library:
$Id: S3.php 44 2008-12-23 15:38:38Z don.schonknecht $
I know that it is old, but I'm relying on a lot of methods in this library, so it's not trivial to upgrade, and as said, it works well for me. Here's the relevant method in this library:
public static function getAuthenticatedURL($bucket, $uri, $lifetime, $hostBucket = false, $https = false) {
    $expires = time() + $lifetime;
    $uri = str_replace('%2F', '/', rawurlencode($uri)); // URI should be encoded (thanks Sean O'Dea)
    return sprintf(($https ? 'https' : 'http').'://%s/%s?AWSAccessKeyId=%s&Expires=%u&Signature=%s',
        $hostBucket ? $bucket : $bucket.'.s3.amazonaws.com', $uri, self::$__accessKey, $expires,
        urlencode(self::__getHash("GET\n\n\n{$expires}\n/{$bucket}/{$uri}")));
}
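For reference, the __getHash call above presumably computes a base64-encoded HMAC-SHA1 over that string-to-sign; that is the standard S3 Signature V2 scheme, assumed here rather than taken from the library's private source. A self-contained sketch:

```php
<?php
// Signature V2 signing as this library appears to do it: base64-encoded
// HMAC-SHA1 over the canonical string "GET\n\n\n{expires}\n/{bucket}/{uri}".
function signV2($secretKey, $bucket, $uri, $expires) {
    $stringToSign = "GET\n\n\n{$expires}\n/{$bucket}/{$uri}";
    return base64_encode(hash_hmac('sha1', $stringToSign, $secretKey, true));
}

$sig = signV2('secret', 'media.mycompany.com', 'images/651/38935_small.JPG', 1466035210);
// HMAC-SHA1 output is 20 bytes, so $sig is always 28 base64 characters.
```

Note that the bucket appears in the string-to-sign as part of the /bucket/key resource, which is why the hostname itself does not affect the signature.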
In my normal, working setup, I'd call this method like this:
$return = $this->s3->getAuthenticatedURL('media.mycompany.com', $dir . '/' . $filename,
$timestamp, true, false);
This returns the correctly signed URL as shared earlier in this post, and all is good.
However, I'd now like to generate HTTPS URLs, and this is where I'm running into issues. Simply switching the current URL to HTTPS (by setting the last param of the method to true) will not work; it will generate a URL like this:
https://media.mycompany.com/images/651/38935_small.JPG?AWSAccessKeyId=05GMT0V3GWVNE7GGM1R2&Expires=1466035210&Signature=Uk3K7qVNRFHuIUnMaDadCOPjV%2BM%3D
This will obviously not work, since my SSL certificate (issued by Let's Encrypt) is not installed on Amazon's domain, and as far as I know, there's no way to do so.
I've learned of an alternative URL format to access the bucket over SSL:
https://media.mycompany.com.s3.amazonaws.com/images/651/38935_small.JPG?AWSAccessKeyId=05GMT0V3GWVNE7GGM1R2&Expires=1466035210&Signature=Uk3K7qVNRFHuIUnMaDadCOPjV%2BM%3D
This apparently works for some people, but not for me. From what I know, that's due to the dot (.) characters in my bucket name: the wildcard certificate for *.s3.amazonaws.com doesn't cover hostnames with additional dots in them. I cannot change the bucket name; it would have large consequences in my setup.
Finally, there's this format:
https://s3.amazonaws.com/media.mycompany.com/images/2428/39000_small.jpg?AWSAccessKeyId=05GMT0V3GWVNE7GGM1R2&Expires=1466035210&Signature=6p3W6GHQtddJNnCoUXaNl970x9s%3D
And here I am getting very close. If I take a working non-secure URL, and edit the URL to take on this format, it works. The image is shown.
Now I'd like to have it working in the automated way, from the signing method I showed earlier. I'm calling it like this:
$return = $this->s3->getAuthenticatedURL("s3.amazonaws.com/media.mycompany.com", $dir . '/' . $filename,
$timestamp, true, true);
The change here is the alternative bucket name format, and the last parameter being set to true, indicating HTTPS. This leads to output like this:
https://s3.amazonaws.com/media.mycompany.com/images/2784/38965_small.jpg?AWSAccessKeyId=05GMT0V3GWVNE7GGM1R2&Expires=1466035210&Signature=Db2ynwWOV852Mn4rpcWA0Q1DrH0%3D
As you can see, it has the same format as the URL I manually crafted to work. But unfortunately, I'm getting signature errors:
<Code>SignatureDoesNotMatch</Code>
<Message>
The request signature we calculated does not match the signature you provided. Check your key and signing method.
</Message>
I'm stuck figuring out why these signatures are incorrect. I tried setting the 4th parameter of the signing method to true and false, but it makes no difference.
What am I missing?
Edit
Based on Michael's answer below I tried to do the simple string replace after the call to the S3 library, which works. Quick and dirty code:
$return = $this->s3->getAuthenticatedURL("media.mycompany.com", $dir . '/' . $filename, $timestamp, true, true);
$return = substr_replace($return, "s3.amazonaws.com/", strpos($return, "media.mycompany.com"), 0);
The change here is the alternative bucket name format
Almost. This library doesn't quite appear to have what you need in order to do what you are trying to do.
For Signature Version 2 (which is what you're using), your easiest workaround will be to take the signed URL with https://bucket.s3.amazonaws.com/path and just do a string replace to https://s3.amazonaws.com/bucket/path.¹ This works because the two forms have equivalent signatures in V2. It wouldn't work for Signature V4, but you aren't using that.
That, or you need to rewrite the code in the supporting library to handle this case with another option for path-style URLs.
The "hostbucket" option seems to assume a CNAME or Alias named after the bucket is pointing to the S3 endpoint, which won't work with HTTPS. Setting this option to true is actually causing the library to sign a URL for the bucket named s3.amazonaws.com/media.example.com, which is why the signature doesn't match.
If you wanted to hide the "S3" from the URL and use your own SSL certificate, this can be done by using CloudFront in front of S3. With CloudFront, you can use your own cert and point it at any bucket, regardless of whether the bucket name matches the original hostname. However, CloudFront uses a very different algorithm for signed URLs, so you'd need code to support that. One advantage of CloudFront signed URLs -- which may or may not be useful to you -- is that you can generate a signed URL that only works from the specific IP address you include in the signing policy.
It's also possible to pass-through signed S3 URLs with special configuration of CloudFront (configure the bucket as a custom origin, not an S3 origin, and forward the query string to the origin) but this defeats all caching in CloudFront, so it's a little bit counterproductive... but it would work.
¹ Note that you have to use the regional endpoint when you rewrite like this, unless your bucket is in us-east-1 (a.k.a. US Standard) so the hostname would be s3-us-west-2.amazonaws.com for buckets in us-west-2, for example. For US Standard, either s3.amazonaws.com or s3-external-1.amazonaws.com can be used with https URLs.
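The quick-and-dirty replace in the question's edit can be generalized into a small helper that also handles the regional endpoints mentioned in the footnote (the function and its $endpoint parameter are my own illustration, not part of the library):

```php
<?php
// Rewrite a SigV2 virtual-hosted-style signed URL to path-style so the
// *.s3.amazonaws.com wildcard certificate matches the hostname.
// Pass an endpoint prefix such as 's3-us-west-2' for buckets outside
// US Standard; 's3' (the default) works for us-east-1.
function toPathStyle($signedUrl, $bucket, $endpoint = 's3') {
    return str_replace(
        "https://{$bucket}.s3.amazonaws.com/",
        "https://{$endpoint}.amazonaws.com/{$bucket}/",
        $signedUrl
    );
}

$url = toPathStyle(
    'https://media.mycompany.com.s3.amazonaws.com/images/1.jpg?Expires=1&Signature=S',
    'media.mycompany.com'
);
// → 'https://s3.amazonaws.com/media.mycompany.com/images/1.jpg?Expires=1&Signature=S'
```

Because the V2 signature only covers /bucket/key, the rewritten URL carries the same valid signature.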
Spent days going round in circles trying to setup custom CNAME/host for presigned URLs and it seemed impossible.
All forums said it cannot be done, or you have to recode your whole app to use cloudfront instead.
Changing my DNS to point from MYBUCKET.s3-WEBSITE-eu-west-1.amazonaws.com to MYBUCKET.s3-eu-west-1.amazonaws.com fixed it instantly.
Hope this helps others.
Working code:
function get_objectURL($key) {
    // Instantiate the client.
    $this->s3 = S3Client::factory(array(
        'credentials' => array(
            'key' => s3_key,
            'secret' => s3_secret,
        ),
        'region' => 'eu-west-1',
        'version' => 'latest',
        'endpoint' => 'https://example.com',
        'bucket_endpoint' => true,
        'signature_version' => 'v4'
    ));

    $cmd = $this->s3->getCommand('GetObject', [
        'Bucket' => s3_bucket,
        'Key' => $key
    ]);

    try {
        $request = $this->s3->createPresignedRequest($cmd, '+5 minutes');
        // Get the actual presigned URL
        $presignedUrl = (string)$request->getUri();
        return $presignedUrl;
    } catch (S3Exception $e) {
        return $e->getMessage() . "\n";
    }
}

Amazon S3: Subdomain mapping with PHP SDK?

How do you make $s3->get_object_url() from PHP SDK return:
http://my-bucket.my-domain.com/example.txt
instead of
http://my-bucket.s3.amazonaws.com/example.txt
S3 doesn't know whether a bucket name has a CNAME set up for it, so you'll have to do the substitution yourself. A simple call to preg_replace should work fine:
$url = preg_replace('#^http://my-bucket\.s3\.amazonaws\.com/#Ui', 'http://my-bucket.my-domain.com/', $url);
