I'm using the AWS SDK with the Laravel framework in PHP. Here is my code:
$cloudFront = new CloudFrontClient([
'region' => env('AWS_REGION'),
'version' => 'latest'
]);
$path = "R180417XXXX.mp4"
$resourceURL = "https://dbk93n3xxxxxx.cloudfront.net/" . $path;
$expires = Carbon::now()->addMinutes(5)->timestamp;
$signedUrlCannedPolicy = $cloudFront->getSignedUrl([
'url' => $resourceURL,
'expires' => $expires,
'private_key' => base_path('pk-APKAI2PXXXXXXXXXXXXX.pem'),
'key_pair_id' => 'APKAI2PXXXXXXXXXXXXX',
]);
This code works, but the URL it produces looks like this:
https://dbk93n3xxxxxx.cloudfront.net/R180417XXXX.mp4?Expires=1524389577&Signature=RmBDMqM4SMadsQstrgVpUiLoJ50dvKoxNI081Joa7WjSg5eelziQqtDrcs~klbDHvs7rMaq2McfHUQijrcLe7F9tDbn7oOxEC4kfPPCMbhqqjtBWavPmM8Zv8QhH50dPuNHwnEj4pIGUpm9FmAvDhCSExCv0uBMWUREJ9YKQJFHZcPJyKBtjPcJVzIGpnj2bQn3xNGO5AUlutsyeSWUqdvtNOLb3xurgx4WzcVotgB~BZo-bQxo3ieXFbKWAPQXMPl93YpuX5W10l4YtYPULrAtJVQZKUIFcfifnECnqg~IgtbkFbyLdM5e87ZiC837Hj-AphmlEshnY-MHWyEU24g__&Key-Pair-Id=APKAI2PXXXXXXXXXXXXX
But I have set up a CNAME in CloudFront, like server1.domain.tld, and I want the signed URL to look like this:
https://server1.domain.tld/R180417XXXX.mp4?Expires=1524389577&Signature=RmBDMqM4SMadsQstrgVpUiLoJ50dvKoxNI081Joa7WjSg5eelziQqtDrcs~klbDHvs7rMaq2McfHUQijrcLe7F9tDbn7oOxEC4kfPPCMbhqqjtBWavPmM8Zv8QhH50dPuNHwnEj4pIGUpm9FmAvDhCSExCv0uBMWUREJ9YKQJFHZcPJyKBtjPcJVzIGpnj2bQn3xNGO5AUlutsyeSWUqdvtNOLb3xurgx4WzcVotgB~BZo-bQxo3ieXFbKWAPQXMPl93YpuX5W10l4YtYPULrAtJVQZKUIFcfifnECnqg~IgtbkFbyLdM5e87ZiC837Hj-AphmlEshnY-MHWyEU24g__&Key-Pair-Id=APKAI2PXXXXXXXXXXXXX
I have tried changing $resourceURL to
$resourceURL = "https://server1.domain.tld/" . $path;
It's not working: the response status code is 403, even though I have set up an Origin Access Identity. I don't know why it isn't working.
Here is my Amazon S3 bucket policy:
{
"Version": "2008-10-17",
"Id": "PolicyForCloudFrontPrivateContent",
"Statement": [
{
"Sid": "1",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity E2OP22ZEXXXXXX"
},
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::server1.domain.tld/*"
}
]
}
Please help...
Thanks
In Route 53, there needs to be a hosted zone for your TLD and a record set of type CNAME that is an alias to the CloudFront distribution.
Here are steps to follow:
Create a certificate in Certificate Manager for domain.tld and server1.domain.tld.
Edit your CloudFront distribution settings and set the SSL certificate for the distribution to the custom one.
Ensure that the Alternate Domain Names (CNAMEs) setting for your distribution lists server1.domain.tld.
Create a public hosted zone for domain.tld in Route 53.
Copy the 4 name servers and update your domain registrar to point to them if the domain name wasn't originally set up in Route 53.
Create a record set in the hosted zone for a CNAME alias that points to the CloudFront distribution.
Finally, rest easy and watch the changes propagate to the name servers, et voilà!
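Once the certificate, alternate domain name, and DNS record are in place, the signing code only needs the custom domain in the resource URL; nothing else changes. A minimal sketch reusing the values from the question (the domain, key path, and key pair ID are the question's placeholders, not verified values):
use Aws\CloudFront\CloudFrontClient;
use Carbon\Carbon;

$cloudFront = new CloudFrontClient([
    'region' => env('AWS_REGION'),
    'version' => 'latest'
]);

// Sign against the alternate domain name instead of the *.cloudfront.net host.
$signedUrl = $cloudFront->getSignedUrl([
    'url' => 'https://server1.domain.tld/R180417XXXX.mp4',
    'expires' => Carbon::now()->addMinutes(5)->timestamp,
    'private_key' => base_path('pk-APKAI2PXXXXXXXXXXXXX.pem'),
    'key_pair_id' => 'APKAI2PXXXXXXXXXXXXX',
]);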
Related
I'm doing a Laravel 9 project where I store images on an Azure account.
I use this library https://github.com/matthewbdaly/laravel-azure-storage.
I try to connect with a SAS token; the connection works, but I get AuthorizationPermissionMismatch when I try to read:
Route::get('/azure-test', function() {
$path = '';
$disk = \Storage::disk('azure');
$files = $disk->files($path);
dump($files);exit;
});
My configuration:
'driver' => 'azure',
'sasToken' => env('AZURE_STORAGE_SAS_TOKEN'),
'container' => env('AZURE_STORAGE_CONTAINER'),
'url' => env('AZURE_STORAGE_URL'),
'prefix' => null,
'endpoint' => env('AZURE_STORAGE_ENDPOINT'),
'retry' => [
'tries' => 3,
'interval' => 500,
'increase' => 'exponential'
],
Just to be clear, the file exists; when I test without the SAS token configuration, it displays information about my test file. I have already searched and made some changes, like assigning my account the roles "Storage Blob Data Contributor" and "Storage Queue Data Contributor". My SAS token still works when I open the file directly: "https://xxxxxxxx.blob.core.windows.net/container_name/im_test_file.pdf?sp=r&st=2022-06-24T15:32:22Z&se=2024-04-30T23:32:22Z&sv=2021-06-08&sr=c&sig=QSz6SZ6UrSMg0jqyKEr4bnnGqrMuxK2EIbGgTTbP%2F10%3D".
Any idea?
AFAIK, to resolve the "AuthorizationPermissionMismatch" error, try assigning the Storage Blob Data Reader role to your account, like below:
Please note that Azure role assignments can take up to five minutes to propagate.
Check whether you have given the below permissions while creating the SAS token:
For more detail, please refer to the links below:
Authorize access to blobs with AzCopy & Azure Active Directory | Microsoft Docs
Fixed – authorizationpermissionmismatch Azure Blob Storage – Nishant Rana's Weblog
I have a frustrating issue with the Google Cloud Translate API.
I correctly set up the key restriction to some domains, including *.example.com/ * (without the blank space at the end).
I launch the script at the URL https://www.example.com/translate and I get the following message:
"status": "PERMISSION_DENIED",
"details": [
{
"#type": "type.googleapis.com/google.rpc.ErrorInfo",
"reason": "API_KEY_HTTP_REFERRER_BLOCKED",
"domain": "googleapis.com",
When I remove the restriction everything works, but I need the restriction to avoid misuse/abuse.
Furthermore, I use this same API key for other Google APIs (Maps, Auth, etc.) and it works perfectly from this domain...
So weird.
Do you have any ideas, or any way to investigate this issue further?
How can I find out which referrer Google sees? (or any external service)
Thanks a lot !!
Edit :
PHP code :
require_once(APPPATH . "libraries/GoogleTranslate/vendor/autoload.php");
require_once(APPPATH . "libraries/GoogleTranslate/vendor/google/cloud-translate/src/V2/TranslateClient.php");
$translate = new TranslateClient([
'key' => 'xXXXx'
]);
// Translate text from english to french.
$result = $translate->translate('Hello world!', [
'target' => 'fr'
]);
echo $result['text'];
Full error message :
Type: Google\Cloud\Core\Exception\ServiceException
Message: {
"error": { "code": 403, "message": "Requests from referer
\u003cempty\u003e are blocked.",
"errors": [ { "message": "Requests from referer \u003cempty\u003e are blocked.", "domain": "global", "reason": "forbidden" } ],
"status": "PERMISSION_DENIED",
"details": [ { "#type": "type.googleapis.com/google.rpc.ErrorInfo",
"reason": "API_KEY_HTTP_REFERRER_BLOCKED",
"domain": "googleapis.com",
"metadata": { "service": "translate.googleapis.com", "consumer": "projects/XXXXX" } } ] } }
Filename: htdocs/application/libraries/GoogleTranslate/vendor/google/cloud-core/src/RequestWrapper.php
Line Number: 368
I will leave here my insights discussed on the Public Issue Tracker.
The HTTP restriction is working as intended, but the referer is always empty because it is not set by default. However, it can be added manually, so instead of doing:
$translate = new TranslateClient([
'key' => 'XXX'
]);
You need to specify the referrer:
$translate = new TranslateClient([
'key' => '[API_KEY]',
'restOptions' => [
'headers' => [
'referer' => '*.[URL].com/*'
]
]
]);
You have to take into account that this type of request can be sent from any computer (if you have the key), since you're not restricting the domain where the request is made, only checking who the referrer is (and you can set it manually). Moreover, API clients that run in a web browser expose their API keys publicly; that's why I recommend using service accounts instead. For more information: adding application restrictions.
Regarding the HTTP referer, this is basically a header field that web browsers set to let the web page know where the user is coming from. For example, if you click the above link (HTTP referer), your referer field will be this page.
In summary, since you can put any referer in the header of a request, this is pretty similar to not having any restriction at all. Indeed, it's recommended to use service accounts. To solve this issue easily, add the referer manually in the headers as shown in the code above.
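If you do switch to a service account, the same client can be pointed at the downloaded JSON key instead of an API key. A minimal sketch, assuming a key file path and project ID of your choosing (both are placeholders, not values from the original post):
$translate = new TranslateClient([
    'keyFilePath' => '/path/to/service-account.json', // hypothetical key file
    'projectId' => 'your-project-id'                  // hypothetical project ID
]);

$result = $translate->translate('Hello world!', [
    'target' => 'fr'
]);
echo $result['text'];
With a service account there is no referer to worry about, because authentication no longer depends on an API key.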
I read the comments and you seem to be doing everything OK. I would recommend trying the following:
This error message can appear because you set API restrictions on the API key. Is this the case? Maybe you're restricting this specific API.
If you aren’t setting any API restrictions, is it possible to try adding an IP instead of the domain just for testing purposes?
I had the same issue with Google Translate but not with Maps.
So Maps works with a referrer restriction, but Translate does not.
The only solution I found, with a restriction in force, is setting up an IP restriction instead of the HTTP referrer (web sites) restriction.
I am trying to use the Google Translate API in my Laravel project. I followed this tutorial: https://cloud.google.com/translate/docs/quickstart-client-libraries?authuser=2#client-libraries-install-php
But when I try to run the code to translate, I get this error:
Your application has authenticated using end user credentials from Google Cloud SDK. We recommend that most server applications use service accounts instead. If your application continues to use end user credentials from Cloud SDK, you might receive a "quota exceeded" or "API not enabled" error. For more information about service accounts, see https://cloud.google.com/docs/authentication/. To disable this warning, set SUPPRESS_GCLOUD_CREDS_WARNING environment variable to "true".
This is my code:
public static function gcloud(){
# Your Google Cloud Platform project ID
$projectId = 'mybot';
# Instantiates a client
$translate = new TranslateClient([
'projectId' => $projectId
]);
# The text to translate
$text = 'Hello, world!';
# The target language
$target = 'ru';
# Translates some text into Russian
$translation = $translate->translate($text, [
'target' => $target
]);
echo 'Text: ' . $text . PHP_EOL . 'Translation: ' . $translation['text'];
}
I don't know what the problem might be.
Most likely the client library is picking up your gcloud credentials at ~/.config/gcloud/application_default_credentials.json. These are end user credentials, which are tied to YOU, a specific user. The client library requires service account credentials, which are not tied to a specific user.
Create Service Account Credentials by going to APIs and Services > Credentials and selecting Create Credentials > Service Account Key. Create a new service account, and in your case assign it the role Cloud Translation API Admin. This will download a JSON file with the following fields:
{
"type": "service_account",
"project_id": "YOUR_PROJECT_ID",
"private_key_id": "...",
"private_key": "...",
"client_email": "...",
"client_id": "...",
"auth_uri": "...",
"token_uri": "...",
"auth_provider_x509_cert_url": "...",
"client_x509_cert_url": "..."
}
Now set the GOOGLE_APPLICATION_CREDENTIALS environment variable to the path to this file. Notice the "type" field is "service_account". In the credentials which are throwing the error, the "type" field is "authorized_user".
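In a Laravel project, one simple way to wire this up is to point the environment variable at the downloaded key before the client is created. A minimal sketch, assuming the JSON key is stored under storage/app (the file name is a placeholder):
// Point the library at the service-account key (placeholder file name).
putenv('GOOGLE_APPLICATION_CREDENTIALS=' . storage_path('app/gcloud-service-account.json'));

$translate = new TranslateClient([
    'projectId' => 'mybot'
]);
$translation = $translate->translate('Hello, world!', [
    'target' => 'ru'
]);
echo $translation['text'];
Alternatively, the client constructor accepts a keyFilePath option pointing to the same JSON file, which avoids touching the environment at all.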
I am using AWS SDK for PHP to upload and display files to/from my S3 bucket.
The files should only be accessible through my site, no other referrer allowed - no hotlinking etc.
I also need to be able to copy objects within the bucket.
I create and connect as normal:
$s3 = new Aws\S3\S3Client([
    'version' => 'latest',
    'region' => 'eu-west-2',
    'credentials' => [
        'key' => $key,
        'secret' => $secret
    ]
]);
And execute the CopyObject command:
$s3->copyObject([
'Bucket' => "{$targetBucket}",
'Key' => "{$targetKeyname}",
'CopySource' => "{$sourceBucket}/{$sourceKeyname}",
]);
I have tried a policy using "Allow if string like referrer" but AWS then tells me I'm allowing public access?!?!?
Everything works just fine, EVEN the copyObject action, but files are still accessible directly and from everywhere!
I tried using "Deny if string not like referrer", which works mostly as expected: I can upload and display the files, and the files don't show when linked directly (which is what I want). However, the copyObject action no longer works and I get an access denied error.
I've tried everything else I can think of and spent hours googling and searching for answers, but to no avail.
Here's each separate policy...
ALLOW FILE ACCESS ONLY (GetObject) if string LIKE referrer:
{
"Version": "2008-10-17",
"Id": "",
"Statement": [
{
"Sid": "Deny access if referer is not my site",
"Effect": "Deny",
"Principal": {
"AWS": "*"
},
"Action": "s3:GetObject",
"Resource": [
"arn:aws:s3:::MY-BUCKET/*"
],
"Condition": {
"StringLike": {
"aws:Referer": [
"http://MY-SITE/*",
"https://MY-SITE/*"
]
}
}
}
]
}
RESULT: uploads & copyObject work but files are still accessible everywhere
DENY FILE ACCESS (GetObject) if string NOT LIKE referrer:
{
"Version": "2008-10-17",
"Id": "",
"Statement": [
{
"Sid": "Deny access if referer is not my site",
"Effect": "Deny",
"Principal": {
"AWS": "*"
},
"Action": "s3:GetObject",
"Resource": [
"arn:aws:s3:::MY-BUCKET/*"
],
"Condition": {
"StringNotLike": {
"aws:Referer": [
"http://MY-SITE/*",
"https://MY-SITE/*"
]
}
}
}
]
}
RESULT: the copyObject action no longer works and I get an access denied error
AWS is probably warning you that it's public because it is, for all practical purposes, still public.
Warning
This key should be used carefully: aws:referer allows Amazon S3 bucket owners to help prevent their content from being served up by unauthorized third-party sites to standard web browsers. [...] Since aws:referer value is provided by the caller in an HTTP header, unauthorized parties can use modified or custom browsers to provide any aws:referer value that they choose. As a result, aws:referer should not be used to prevent unauthorized parties from making direct AWS requests. It is offered only to allow customers to protect their digital content, stored in Amazon S3, from being referenced on unauthorized third-party sites.
https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-keys.html
If you are okay with this primitive mechanism, use it, but be aware that all it does is trust the claim made by the browser.
And that's your problem with copyObject() -- the request is not being made by a browser so there is no Referer header to validate.
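To see just how weak the check is, note that any HTTP client can simply claim to be coming from your site. A sketch using Guzzle (already a dependency of the SDK); the object URL and referer values here are placeholders:
// Anyone can send an arbitrary Referer header; the bucket policy cannot tell the difference.
$client = new \GuzzleHttp\Client();
$response = $client->get('https://MY-BUCKET.s3.eu-west-2.amazonaws.com/some-object.jpg', [
    'headers' => ['Referer' => 'https://MY-SITE/']
]);
echo $response->getStatusCode(); // 200, even though the request never came from a browser on MY-SITE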
You can use the StringLikeIfExists condition test to Deny only wrong referers (ignoring the absence of a referer, as would occur with an object copy) or -- better -- just grant s3:GetObject with StringLike your referer, understanding that the public warning is correct -- this allows unauthenticated access, which is what referer checking amounts to. Your content is still publicly accessible but not from a standard, unmodified web browser if hotlinked from another site.
For better security, you will want to render your HTML with pre-signed URLs (with short expiration times) for your S3 assets, or otherwise do full and proper authorization using Amazon Cognito.
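For the pre-signed URL route, the same SDK v3 client can generate short-lived GET URLs at render time. A minimal sketch, assuming the bucket from the question and a hypothetical object key:
// Build a pre-signed GET URL that expires quickly (placeholder bucket and key).
$cmd = $s3->getCommand('GetObject', [
    'Bucket' => 'MY-BUCKET',
    'Key' => 'path/to/file.jpg'
]);
$request = $s3->createPresignedRequest($cmd, '+10 minutes');
$url = (string) $request->getUri();
// Embed $url in your rendered HTML instead of a public S3 link.
With this approach the bucket stays completely private and no referer checking is needed.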
Objects in Amazon S3 are private by default. There is no access unless it is granted somehow (eg on an IAM User, IAM Group or an S3 bucket policy).
The above policies are all Deny policies, which can override an Allow policy. Therefore, they aren't the reason why something is accessible.
You should start by discovering what is granting access, and then remove that access. Once the objects are private again, you should create a Bucket Policy with Allow statements that define in what situations access is permitted.
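One way to discover what is currently granting access is to dump the bucket policy and ACL with the same PHP SDK; a quick sketch (the bucket name is a placeholder, and getBucketPolicy throws an error if no policy is attached):
// Inspect existing grants on the bucket (placeholder bucket name).
$policy = $s3->getBucketPolicy(['Bucket' => 'MY-BUCKET']);
echo (string) $policy['Policy'], PHP_EOL;

$acl = $s3->getBucketAcl(['Bucket' => 'MY-BUCKET']);
print_r($acl['Grants']);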
This is a follow-up on my previous question regarding policy document signing using instance profiles.
I'm developing a system that allows drag & drop uploads directly to an S3 bucket; an AJAX request is first made to my server containing the file metadata. Once verified, my server responds with the form parameters that are used to complete the upload.
The process of setting up browser based uploads is well explained here and it all works as expected in my local test environment.
However, once my application gets deployed on an EC2 instance, I'm seeing this error when the browser attempts to upload the file:
<Error>
<Code>InvalidAccessKeyId</Code>
<Message>The AWS Access Key Id you provided does not exist in our records.</Message>
<RequestId>...</RequestId>
<HostId>...</HostId>
<AWSAccessKeyId>ASIAxxxyyyzzz</AWSAccessKeyId>
</Error>
The value of ASIAxxxyyyzzz here comes from the instance role credentials, as obtained from the metadata service; it seems that those credentials can't be used outside of EC2 to facilitate browser based uploads.
I've looked at the Security Token Service as well to generate another set of temporary credentials by doing this:
$token = $sts->assumeRole(array(
'RoleArn' => 'arn:aws:iam::xyz:role/mydomain.com',
'RoleSessionName' => 'uploader',
));
$credentials = new Credentials($token['Credentials']['AccessKeyId'], $token['Credentials']['SecretAccessKey']);
The call gives me a new set of credentials, but it produces the same error as above when I use them.
I hope that someone has done this before and can tell me what stupid thing I've missed out :)
The AWS docs are very confusing on this, but I suspect that you need to include the x-amz-security-token parameter in the S3 upload POST request and that its value matches the SessionToken you get from STS ($token['Credentials']['SessionToken']).
STS temporary credentials are only valid when you include the corresponding security token.
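In other words, all three values from the AssumeRole response have to travel together; a quick sketch of pulling them out of the $token result used above:
// All three pieces come back from STS together and must be used together.
$accessKeyId = $token['Credentials']['AccessKeyId'];
$secretKey = $token['Credentials']['SecretAccessKey'];
$sessionToken = $token['Credentials']['SessionToken']; // required alongside the key pair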
The AWS documentation for the POST request states that:
Each request that uses Amazon DevPay requires two x-amz-security-token
form fields: one for the product token and one for the user token.
but that parameter is also used outside of DevPay to pass the STS token, and you would only need to pass it once in the form fields.
As pointed out by dcro's answer, the session token needs to be passed to the service you're using when you use temporary credentials. The official documentation mentions the x-amz-security-token field, but seems to suggest it's only used for DevPay; this is probably because DevPay uses the same type of temporary credentials and therefore requires the session security token.
2013-10-16: Amazon has updated their documentation to make this more obvious.
As it turned out, it's not even required to use STS at all; the credentials received by the metadata service come with such a session token as well. This token is automatically passed for you when the SDK is used together with temporary credentials, but in this case the final request is made by the browser and thus needs to be passed explicitly.
Below is my working code:
$credentials = Credentials::factory();
$signer = new S3Signature();
$policy = new AwsUploadPolicy(new DateTime('+1 hour', new DateTimeZone('UTC')));
$policy->setBucket('upload.mydomain.com');
$policy->setACL($policy::ACL_PUBLIC_READ);
$path = 'uploads/test.jpg';
$type = 'image/jpeg';
$policy->setKey($path);
$policy->setContentType($type);
$policy->setContentLength(5034);
$fields = array(
'AWSAccessKeyId' => $credentials->getAccessKeyId(),
'key' => $path,
'Content-Type' => $type,
'acl' => $policy::ACL_PUBLIC_READ,
'policy' => $policy,
);
if ($credentials->getSecurityToken()) {
// pass security token
$fields['x-amz-security-token'] = $credentials->getSecurityToken();
$policy->setSecurityToken($credentials->getSecurityToken());
}
$fields['signature'] = $signer->signString($policy, $credentials);
I'm using a helper class to build the policy, called AwsUploadPolicy; at the time of writing it's not complete, but it may help others with a similar problem.
Permissions were the last problem; my code sets the ACL to public-read and doing so requires the additional s3:PutObjectAcl permission.
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"s3:PutObject",
"s3:PutObjectAcl"
],
"Sid": "Stmt1379546195000",
"Resource": [
"arn:aws:s3:::upload.mydomain.com/uploads/*"
],
"Effect": "Allow"
}
]
}