I'm trying to copy a file I've already placed in an S3 bucket. When I try to perform the copy I get the following error:
Guzzle\Http\Exception\CurlException
[curl] 56: SSL read: error:1408F119:SSL
routines:SSL3_GET_RECORD:decryption failed or bad record mac, errno 0
[url] (url omitted by me)
Any idea what is causing this error? I'm able to use the putObject command with no problems, and I've checked that the file exists (both by looking at the bucket and by using the doesObjectExist command).
$response = $this->client->copyObject(array(
    "ACL"        => "public-read",
    "Bucket"     => Yii::app()->params['S3Bucket'],
    "CopySource" => Yii::app()->params['S3BucketFolder'] . $old_key,
    "Key"        => Yii::app()->params['S3BucketFolder'] . $key,
));
I figured it out. The CopySource parameter requires the bucket as part of it. I was copying files within the same bucket, so this wasn't apparent to me, but once I reread the documentation I realized my mistake.
So the line should be:
"CopySource" => Yii::app()->params['S3Bucket'] . '/' . Yii::app()->params['S3BucketFolder'] . $old_key,
Related
I am trying to get a specific file from an S3 bucket. Essentially I'm trying to make sure a file has been uploaded before I remove it from my server. I've been able to upload files using the same credentials with no issue, and I have been able to run the code locally from both the browser and the command line. But on my AWS server I get this error:
Class 'SimpleXMLElement' not found in ../vendor/aws/aws-sdk-php/src/Api/Parser/PayloadParserTrait.php on line 44
I've gotten this error before while writing the code to upload files. One time it was because I had the wrong secret, the other time it was due to file sizes. So I don't think this error narrows down my issue. I believe it is masking the actual error, since it has been the same for multiple different issues.
Before adding:
"scheme" => "http"
to my S3Client options, I was getting an SSL certificate error locally:
Error executing "PutObject" on "{s3 url}"; AWS HTTP error: cURL error 60: SSL certificate problem: unable to get local issuer certificate
After adding that line it works locally through the command line, but still not on my AWS server. I believe the issue has a lot more to do with the error I was receiving locally than with the SimpleXML error that is output. Here's my code (I'm aware the "scheme" line is commented out):
$s3 = new S3Client([
    "version" => "latest",
    "region" => "us-east-2",
    // "scheme" => "http",
    "credentials" => [
        "key" => "$key",
        "secret" => "$secret"
    ]
]);

try {
    // Retrieve data.
    $objects = $s3->getIterator("ListObjects", array(
        "Bucket" => $bucket,
        "Prefix" => "$site$file"
    ));
    echo "Searching through s3 bucket...\n";
    foreach ($objects as $object) {
        if ($object["Key"] === "$site$file") {
            echo "Matching event found in s3 bucket, proceeding with deletion\n";
            return true;
        }
    }
    return false;
} catch (S3Exception $e) {
    echo $e->getMessage() . PHP_EOL;
    return $e->getMessage() . PHP_EOL;
}
Installing SimpleXML will not fix my issue, as I went down that rabbit hole the last time this error popped up. I'm just wondering if anyone has encountered anything similar and has something for me to try, or has suggestions for better error handling that would produce a different output than this SimpleXML line. Thanks in advance!
Edit: I'm using PHP 5.6 and Apache on an Ubuntu 16.04 server.
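As a side note on the error-handling question, here is a sketch that is not from the original thread: in SDK v3, S3Exception extends Aws\Exception\AwsException, so catching the base class and printing its error code and HTTP status can surface more detail whenever the SDK does manage to parse a response. It will not, however, catch the PHP fatal error about SimpleXMLElement itself.
use Aws\Exception\AwsException;

try {
    $objects = $s3->getIterator("ListObjects", array(
        "Bucket" => $bucket,
        "Prefix" => "$site$file"
    ));
    foreach ($objects as $object) {
        // ... same matching logic as above ...
    }
} catch (AwsException $e) {
    // Accessors provided by AwsException in SDK v3
    echo "AWS error code: " . $e->getAwsErrorCode() . PHP_EOL;
    echo "HTTP status: " . $e->getStatusCode() . PHP_EOL;
    echo $e->getMessage() . PHP_EOL;
}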
When I am uploading a file to Azure File Storage, I am getting the following error:
in E:\WAMP\www\myweb\_protected\vendor\microsoft\windowsazure\WindowsAzure\Common\Internal\Http\Url.php at line 74 – WindowsAzure\Common\Internal\Validate::isTrue(false, 'Provided URL is invalid.')
in E:\WAMP\www\allure\_protected\vendor\microsoft\windowsazure\WindowsAzure\Common\Internal\RestProxy.php at line 122 – WindowsAzure\Common\Internal\Http\Url::__construct('https://cG9rYXJuYXZpb282mGQ=.blo...')
The settings that I have in my config file are:
'filesystem' => [
    'class' => 'creocoder\flysystem\AzureFilesystem',
    'accountName' => 'azure-accname',
    'accountKey' => 'some-long-key-A==',
    'container' => 'azure-container',
],
and finally, the code I am calling to save the file is:
if ($file = \yii\web\UploadedFile::getInstance($this, 'attachment'))
{
    $stream = fopen($file->tempName, 'r+');
    Yii::$app->filesystem->writeStream($file->name, $stream);
}
Some additional information that might be helpful:
running on yii2 advanced framework
webserver: IIS 8.5 on Windows 2012
PHP 5.4.5
composer used for installation
it's an Azure file share - the error seems to reference blob.core.windows.net, whereas I am saving data to file.core.windows.net. What changes should I make in the config / settings?
According to the source code at https://github.com/creocoder/yii2-flysystem/blob/master/src/AzureFilesystem.php#L62, it seems the package base64-encodes the storage account info before combining it into the connection string, which explains the strange-looking URL format in your error message, 'https://cG9rYXJuYXZpb282mGQ=.blo...'.
Please try to set the account info into following format:
...
'accountName' => base64_decode('azure-accname'),
'accountKey' => base64_decode('some-long-key-A=='),
...
If you have any further concerns, please feel free to let me know.
I have established an AWS account and am trying to do my first programmatic PUT into S3. I have used the console to create a bucket and put things there. I have also created a subdirectory (myFolder) and made it public. I created my .aws/credentials file and have tried using the sample code, but I get the following error:
Error executing "PutObject" on "https://s3.amazonaws.com/gps-photo.org/mykey.txt"; AWS HTTP error: Client error: PUT https://s3.amazonaws.com/gps-photo.org/mykey.txt resulted in a 403 Forbidden response:
AccessDenied: Access Denied - FC49CD (truncated...)
AccessDenied (client): Access Denied -
AccessDenied: Access Denied - FC49CD15567FB9CD1GTYxjzzzhcL+YyYsuYRx4UgV9wzTCQJX6N4jMWwA39PFaDkK2B9R+FZf8GVM6VvMXfLyI/4abo=
My code is:
<?php
// Include the AWS SDK using the Composer autoloader.
require '/home/berman/vendor/autoload.php';

use Aws\S3\S3Client;
use Aws\S3\Exception\S3Exception;

$bucket = 'gps-photo.org';
$keyname = 'my-object-key';

// Instantiate the client.
$s3 = S3Client::factory(array(
    'profile' => 'default',
    'region' => 'us-east-1',
    'version' => '2006-03-01'
));

try {
    // Upload data.
    $result = $s3->putObject(array(
        'Bucket' => $bucket,
        'Key' => "myFolder/$keyname",
        'Body' => 'Hello, world!',
        'ACL' => 'public-read'
    ));

    // Print the URL to the object.
    echo $result['ObjectURL'] . "\n";
} catch (S3Exception $e) {
    echo $e->getMessage() . "\n";
}
If anyone can help me out, that would be great. Thanks.
--Len
It looks like the same issue I ran into. Add an AmazonS3FullAccess policy to your AWS account.
Log into AWS.
Under Services select IAM.
Select Users > [Your User]
Open the Permissions tab
Attach the AmazonS3FullAccess policy to the account
I faced the same problem and found the solution below.
Remove the line
'ACL' => 'public-read'
The default permissions grant list, read, and write, but not permission to change an object's ACL (PutObjectAcl in an AWS policy).
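Applied to the code from the question, that simply means dropping the ACL line (a sketch of the adjusted call; everything else is unchanged):
$result = $s3->putObject(array(
    'Bucket' => $bucket,
    'Key'    => "myFolder/$keyname",
    'Body'   => 'Hello, world!',
    // 'ACL' => 'public-read' removed: it requires the s3:PutObjectAcl permission
));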
Braden's approach will work, but it is dangerous. The user will have full access to all your S3 buckets and the ability to log into the console. If the credentials used in the site are compromised, well...
A safer approach is:
AWS Console -> IAM -> Policies -> Create policy
Service = S3
Actions = (only the minimum required, e.g. List and Read)
Resources -> Specific -> bucket -> Add ARN (put the ARN of only the buckets needed)
Resources -> Specific -> object -> check Any, or put the ARNs of specific objects
Review and Save to create policy
AWS Console -> IAM -> Users -> Add user
Access type -> check "Programmatic access" only
Next:Permissions -> Attach existing policies directly
Search and select your newly created policy
Review and save to create user
In this way you will have a user with only the needed access.
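For illustration only, a minimal policy of that shape might look like the following JSON (a sketch; example-bucket and the action list are placeholders to adjust to what your application actually needs):
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowAppBucketAccess",
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket",
                "s3:GetObject",
                "s3:PutObject"
            ],
            "Resource": [
                "arn:aws:s3:::example-bucket",
                "arn:aws:s3:::example-bucket/*"
            ]
        }
    ]
}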
Assuming that you have all the required permissions, if you are getting this error but are still able to upload, check the bucket permissions section under your bucket, try disabling (unchecking) "Block all public access," and see if you still get the error. You can enable this option again if you want to.
This is an extra security setting/policy that AWS adds to prevent changing the object permissions. If your app gives you problems or generates the warning, first look at the code and see if you are trying to change any permissions (which you may not want to). You can also customize these settings to better suit your needs.
Again, you can customize these settings by clicking your S3 bucket, then Permissions / Edit.
The 403 suggests that your key is incorrect, or that the path to the key is not correct. Have you verified that the package is loading the correct key in /myFolder/$keyname?
It might be helpful to try something simpler (instead of worrying about upload file types, paths, permissions, etc.) to debug:
$result = $client->listBuckets();
foreach ($result['Buckets'] as $bucket) {
    // Each Bucket value will contain a Name and CreationDate
    echo "{$bucket['Name']} - {$bucket['CreationDate']}\n";
}
Taken from http://docs.aws.amazon.com/aws-sdk-php/v2/guide/service-s3.html. Also check out the service builder there.
The problem was a lack of permissions on the bucket itself; once I added those, everything worked fine.
I got the same error. The project is Laravel + Vue, and I'm uploading a file to S3 using axios.
I'm using Vagrant Homestead as my server. It turns out the time on the VirtualBox server was not correct. I had to update it with the correct UTC time, which I took from the S3 error, and after that it worked fine.
Error (I have removed sensitive information):
message: "Error executing "PutObject" on "https://url"; AWS HTTP error: Client error: `PUT https://url` resulted in a `403 Forbidden` response:↵<?xml version="1.0" encoding="UTF-8"?>↵<Error><Code>RequestTimeTooSkewed</Code><Message>The difference between the reque (truncated...)↵ RequestTimeTooSkewed (client): The difference between the request time and the current time is too large. - <?xml version="1.0" encoding="UTF-8"?>↵<Error><Code>RequestTimeTooSkewed</Code><Message>The difference between the request time and the current time is too large.</Message><RequestTime>20190225T234631Z</RequestTime><ServerTime>2019-02-25T15:47:39Z</ServerTime><MaxAllowedSkewMilliseconds>900000</MaxAllowedSkewMilliseconds><RequestId>-----</RequestId><HostId>----</HostId></Error>"
Before:
vagrant#homestead:~$ date
Wed Feb 20 19:13:34 UTC 2019
After:
vagrant#homestead:~$ date
Mon Feb 25 15:47:01 UTC 2019
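If you run into the same skew, one hedged way to fix it (assuming the ntpdate package is available inside the Homestead box) is to resync the VM clock against a public NTP server and then verify it:
vagrant@homestead:~$ sudo ntpdate pool.ntp.org
vagrant@homestead:~$ date
Alternatively, setting the clock manually with sudo date -s "<correct UTC time>" works as a one-off.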
I'm trying to upload some files to S3 using Laravel and its Filesystem API.
I'm getting the following error:
S3Exception in WrappedHttpHandler.php line 159:
Error executing "PutObject" on "https://s3..amazonaws.com/example-bucket/433922_1448096894943.png"; AWS HTTP error: cURL error 6: Could not resolve host: s3.global.amazonaws.com (see http://curl.haxx.se/libcurl/c/libcurl-errors.html)
Here are my settings in the /config/filesystems.php file:
's3' => [
    'driver' => 's3',
    'key' => 'apiKey',
    'secret' => 'secretApiKey',
    'region' => 'global',
    'bucket' => 'example-bucket',
],
As noted in the error, the filesystem API can't resolve the following domain name:
s3.global.amazonaws.com
The "global" part here stands for the region, but as far as I understand, S3 no longer requires a region,
while I think the actual domain name should look like this:
example-bucket.s3.amazonaws.com/
Here is the code I am using to invoke the filesystem API:
Storage::disk('s3')->put('new-name.png', file_get_contents('path/to/file'));
I had to set the correct S3 region, which in my case was eu-west-1, and it all started working.
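Based on the config in the question, the only change needed is the region line (a sketch; substitute the region your bucket actually lives in, e.g. eu-west-1 as mentioned above):
's3' => [
    'driver' => 's3',
    'key' => 'apiKey',
    'secret' => 'secretApiKey',
    'region' => 'eu-west-1', // must be the real region of the bucket, not 'global'
    'bucket' => 'example-bucket',
],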
I am trying to create a directory in Amazon AWS S3. For that I am using the following code (I am using the v3 PHP SDK):
$bucketName = 'somebucketName';
$key = 'folderName';
$params = [
    'Bucket' => $bucketName,
    'Key' => $key . '/'
];
$s3->putObject($params);
$s3 is an instance of the Aws\S3\S3Client class; I can get buckets and objects successfully with my current configuration.
It was working fine before, but now I am getting this error:
Fatal error: Uncaught exception 'Aws\S3\Exception\S3Exception' with message 'Error executing "PutObject" on "https://s3-us-west-2.amazonaws.com/sdfsdf/demoer/";
AWS HTTP error: Client error: 411 MissingContentLength (client): You must provide the Content-Length HTTP header.
This error occurs because you are not passing any content body for the put; pass an object (file) too.
I also faced a similar kind of problem: https://stackoverflow.com/questions/32117596/aws-s3-uploaded-images-are-getting-corrupted
Check the code below.
try {
    $result = $s3->putObject(array(
        'Bucket' => $bucketName,
        'Key' => $key . '/',
        // Absolute path of the file being uploaded to S3, e.g. $filepath = "/var/www/html/for_testing_aws/assets/img/avtar.png";
        'SourceFile' => $filepath,
        'ContentType' => mime_content_type($filepath),
    ));
} catch (S3Exception $e) {
    echo $e->getMessage() . "\n";
}
For more information, see the AWS putObject documentation.
The problem is simple: Amazon S3 does not actually have directories.
The reality is that the slashes in object paths create the appearance of directories (and the AWS Console interface allows you to interact as if the objects are inside directories).
So, to create a 'directory', you must upload an object. This is much like Git, where there are also no directories, so users often create a file called .gitkeep to 'hold' a directory that shouldn't have any other files committed in it. You could do something similar here if you really don't want to upload a real file.
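Tying the two answers together, here is a hedged sketch of creating such a placeholder 'directory' object with the v3 SDK, reusing the bucket and key variables from the question; the empty Body gives the request the Content-Length header that the 411 error complains about:
$result = $s3->putObject([
    'Bucket' => $bucketName,
    'Key'    => $key . '/', // trailing slash makes the console show it as a folder
    'Body'   => '',         // zero-byte body, so Content-Length: 0 is sent
]);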