I am trying to delete a single file from an AWS S3 bucket, and my code is as below:
$removeUploadedDocFromTestingFolder = array(
    "removeFromTestFolder" => array(
        "bucket" => "my-bucket"
    ),
    "AccessKeys" => array(
        "access_key" => "my access key",
        "secret_key" => "my secret key"
    )
);
$_SERVER['HOME'] = DIR_HOME;
ini_set('display_errors', 1);
error_reporting(E_ALL);

use Aws\S3\S3Client;
use Aws\Common\Credentials\Credentials;

$client = S3Client::factory();
//$objError = new ErrorReporting();
$bucket = $removeUploadedDocFromTestingFolder['removeFromTestFolder']['bucket'];

// Testing file setup on local server....
$file1 = "my-file";

// File reference on cloud..... Object URL
$file1_cloud = "https://Object URL/myFile/myFile";

echo "here 0";
$client->deleteObject($bucket, $file1_cloud);
Can anyone tell me what I am doing wrong, since the code is not deleting the file?
I tried the following code but it didn't work:
try {
    //$client->deleteObject($bucket, $file1_cloud);
    $result = $client->deleteObject(['Bucket' => 'my-bucket', 'Key' => 'myFile.png']);
} catch (\Aws\S3\Exception\S3Exception $e) {
    echo "error" . $e;
}
Thanks
The object is not being deleted because you have asked S3 to delete a non-existent object, and S3 treats that as a no-op rather than an error. The object does not exist under the name you gave because you passed a URL rather than the object's key.
An example of an S3 object key is:
testing/cats/fluffykins.png
Note that it's not a URL and it doesn't begin with forward slash.
An example of the correct way to call the PHP deleteObject function is:
$result = $s3Client->deleteObject([
    'Bucket' => $bucket,
    'Key' => $key,
]);
Note that it takes a single array of parameters, including the bucket and the key. This example assumes PHP 5.4 or later, where the short array syntax [] replaces array().
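If all you have is the object URL, you can usually recover the key from the URL path. A minimal sketch, assuming a virtual-hosted-style URL (with a path-style URL the bucket name is the first path segment and would need to be stripped as well); the URL below is a hypothetical example, not taken from the question:

// Hypothetical object URL for a virtual-hosted-style bucket endpoint.
$url = 'https://my-bucket.s3.amazonaws.com/testing/cats/fluffykins.png';

// The key is the URL path with the leading slash removed and any
// percent-encoded characters decoded.
$key = rawurldecode(ltrim(parse_url($url, PHP_URL_PATH), '/'));

echo $key; // testing/cats/fluffykins.png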
I've resolved the issue by using the following code:
$client = S3Client::factory(array(
    'key' => "mykey",
    'secret' => "my secret key"
));

// File reference on cloud.....
$file1_cloud = "TestFile/myFile.png";

$result = $client->deleteObject(array(
    'Bucket' => $bucket,
    'Key' => $file1_cloud
));
I am using the AWS SDK, and I am able to create buckets and manipulate keys. At the time of bucket creation I also want to enable it for website hosting.
This is what I am using for creation:
$result = $s3->createBucket([
    'Bucket' => $buck_name
]);
From what I found, this is how we add a website configuration:
$result = $s3->putBucketWebsite(array(
    'Bucket' => $buck_name,
    'IndexDocument' => array('Suffix' => 'index.html'),
    'ErrorDocument' => array('Key' => 'error.html'),
));
But this is not enabling website hosting. I have also uploaded both files (index and error) just in case. Instead, I am getting this error:
InvalidArgumentException: Found 1 error while validating the input provided for the PutBucketWebsite operation: [WebsiteConfiguration] is missing and is a required parameter in
Try this way
use Aws\S3\S3Client;

$bucket = $buck_name;

// 1. Instantiate the client.
$s3 = S3Client::factory();

// 2. Add the website configuration.
$result = $s3->putBucketWebsite(array(
    'Bucket' => $bucket,
    'IndexDocument' => array('Suffix' => 'index.html'),
    'ErrorDocument' => array('Key' => 'error.html'),
));

// 3. Retrieve the website configuration.
$result = $s3->getBucketWebsite(array(
    'Bucket' => $bucket,
));

echo $result->getPath('IndexDocument/Suffix');
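Note that the error quoted in the question ("[WebsiteConfiguration] is missing and is a required parameter") is what SDK v3 reports, and in v3 the index and error documents have to be nested under a WebsiteConfiguration parameter. A sketch of that shape, assuming you are on v3:

$result = $s3->putBucketWebsite(array(
    'Bucket' => $buck_name,
    'WebsiteConfiguration' => array(
        'IndexDocument' => array('Suffix' => 'index.html'),
        'ErrorDocument' => array('Key' => 'error.html'),
    ),
));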
I am trying to connect to S3 to upload a file from my server, but whenever I try to run the PHP I encounter the error below. I included the Version and Region, yet the issue still stands.
Error:
Fatal error: Uncaught exception 'InvalidArgumentException' with message 'Missing required client configuration options: region: (string) A "region" configuration value is required for the "s3" service (e.g., "us-west-2"). A list of available public regions and endpoints can be found at http://docs.aws.amazon.com/general/latest/gr/rande.html. version: (string) A "version" configuration value is required. Specifying a version constraint ensures that your code will not be affected by a breaking change made to the service. For example, when using Amazon S3, you can lock your API version to "2006-03-01". Your build of the SDK has the following version(s) of "s3": * "2006-03-01" You may provide "latest" to the "version" configuration value to utilize the most recent available API version that your client's API provider can find. Note: Using 'latest' in a production application is not recommended. A list of available API versions can be found on each client's API documentation page: http:/ in /srv/http/auploader/include/Aws/ClientResolver.php on line 364
My Code:
<?php
require '/srv/http/test/include/aws-autoloader.php';

use Aws\S3\S3Client;
use Aws\S3\Exception\S3Exception;

$bucket = 'testbucket';
$keyname = 'sample';
// $filepath should be absolute path to a file on disk
$filepath = '/srv/http/testfile/setup.html';

// Instantiate the client.
$s3 = S3Client::factory(array(
    'key' => 'blank',
    'secret' => 'blank'
));

try {
    // Upload data.
    $result = $s3->putObject(array(
        'Bucket' => $bucket,
        'Key' => $keyname,
        'SourceFile' => $filepath,
        'ACL' => 'public-read',
        'Region' => 'eu-west-1',
        'Version' => '2006-03-01'
    ));
    // Print the URL to the object.
    echo $result['ObjectURL'] . "\n";
} catch (S3Exception $e) {
    echo $e->getMessage() . "\n";
}
?>
You have to configure the S3 client itself; the Region and Version keys you have put inside putObject are misplaced. Create the client as follows:
$s3 = S3Client::factory([
    'version' => 'latest',
    'region' => 'eu-west-1',
    'credentials' => [
        'key' => "your s3 bucket key",
        'secret' => "your s3 bucket secret key",
    ]
]);
Using that client you can call the putObject method something like this:
$result = $s3->putObject(array(
    'Bucket' => "your bucket name",
    'Key' => $keyName,
    'SourceFile' => $filepath,
    'ACL' => 'public-read', // for making the public URL
));
Hope it helps!
For SES with AWS SDK v3 use:
/*
 * 1. version, e.g. `2010-12-01`
 * 2. region, e.g. `us-east-1`
 */
ini_set("display_errors", 1);
Aws\Ses\SesClient::factory(array(
'credentials' => array(
'key' => "someKey",
'secret' => "someSecret",
),
"region" => "us-east-1",
"version" => "2010-12-01")
);
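In SDK v3, factory() is kept for backwards compatibility; instantiating the client directly takes the same configuration array, so the following should be equivalent:

$ses = new Aws\Ses\SesClient(array(
    'credentials' => array(
        'key' => "someKey",
        'secret' => "someSecret",
    ),
    'region' => "us-east-1",
    'version' => "2010-12-01",
));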
I am successfully uploading folders to S3 using ->uploadDirectory(). Several hundred folders contain hundreds or thousands of images each, so calling putObject() for each file hardly seemed to make sense. The upload works and all goes well, but the ACL, StorageClass, and metadata are not being included in the upload.
According to the docs at http://docs.aws.amazon.com/aws-sdk-php/v2/guide/service-s3.html#uploading-a-directory-to-a-bucket, the following code should accomplish this. It is further documented with the putObject() function that is also cited.
I can find no examples of this function using anything but a directory and a bucket, so I fail to see what might be wrong with it. Any ideas why the data in $options is being ignored?
$aws = Aws::factory('config.php');
$s3 = $aws->get('S3');

$dir = 'c:\myfolder\myfiles';
$bucket = 'mybucket';
$keyPrefix = "ABC/myfiles/";

$options = array(
    'ACL' => 'public-read',
    'StorageClass' => 'REDUCED_REDUNDANCY',
    'Metadata' => array(
        'MyVal1' => 'Something',
        'MyVal2' => 'Something else'
    )
);

$result = $s3->uploadDirectory($dir, $bucket, $keyPrefix, $options);
Parameters to provide to putObject or createMultipartUpload should be in the params option, not provided as top-level values in the options array. Try declaring your options as follows:
$options = array(
    'params' => array(
        'ACL' => 'public-read',
        'StorageClass' => 'REDUCED_REDUNDANCY',
        'Metadata' => array(
            'MyVal1' => 'Something',
            'MyVal2' => 'Something else',
        ),
    ),
);
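With the options wrapped in 'params', the call from the question stays the same and the ACL, storage class, and metadata should be applied to each uploaded object:

$result = $s3->uploadDirectory($dir, $bucket, $keyPrefix, $options);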
I don't understand why this doesn't work. I have scoured the Internet and can't find anything matching the specific command I'm using.
I am basically trying to generate a presigned URL from Amazon S3, following the directions in the docs, and it's not working. The docs say to build the array like this: ['Key' => 'Value']. I saw another question here where the accepted answer was to build it using array() instead, but that doesn't change anything.
It still gives this error:
[01-Jan-2016 13:28:56 America/Los_Angeles] PHP Catchable fatal error: Argument 2 passed to Guzzle\Service\Client::getCommand() must be of the type array, object given, called in /Users/alex/Development/theshrineofdionysus-com/vendor/guzzle/guzzle/src/Guzzle/Service/Client.php on line 76 and defined in /Users/alex/Development/theshrineofdionysus-com/vendor/guzzle/guzzle/src/Guzzle/Service/Client.php on line 79
This is the code I am using for the S3 part of it. Trust me when I say the constants for the keys, region, and bucket are correct, as I have other S3 code using them elsewhere that works flawlessly.
<?php
$s3 = Aws\S3\S3Client::factory(array(
    'key' => AWS_ACCESS_KEY,
    'secret' => AWS_SECRET_KEY,
    'region' => AWS_REGION,
));

$cmd = $s3->getCommand('GetObject', array(
    'Bucket' => AWS_BUCKET,
    'Key' => $row['video_id']
));

$request = $s3->createPresignedRequest($cmd, '+120 minutes');
$url = (string) $request->getUri();
?>
I also know that $row['video_id'] matches an existing filename, because when I echo it out without this code in place it is the correct filename.
Here is my composer.json:
{
    "require": {
        "aws/aws-sdk-php": "2.*",
        "php": ">=5.2.0"
    }
}
This is my Amazon code on the other page that works fine:
$s3 = Aws\S3\S3Client::factory(array(
    'key' => AWS_ACCESS_KEY,
    'secret' => AWS_SECRET_KEY,
    'region' => AWS_REGION
));

$objects = $s3->getIterator('ListObjects', array('Bucket' => AWS_BUCKET));

foreach ($objects as $object) {
    echo '<option value="' . $object['Key'] . '">' . $object['Key'] . '</option>' . PHP_EOL;
}
It looks like you're following the guide for v3 but have v2 installed. You can create a presigned URL in v2 by calling:
$url = $s3->getObjectUrl(AWS_BUCKET, $row['video_id'], '+120 minutes');
A full guide can be found in the SDK v2 documentation.
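Alternatively, if you want to keep following the v3 guide, bumping the composer constraint to the v3 SDK should let your getCommand/createPresignedRequest code work as written. A sketch, assuming nothing else in the project requires v2:

// composer.json: "aws/aws-sdk-php": "3.*"
$s3 = new Aws\S3\S3Client(array(
    'version' => 'latest',
    'region' => AWS_REGION,
    'credentials' => array(
        'key' => AWS_ACCESS_KEY,
        'secret' => AWS_SECRET_KEY,
    ),
));

$cmd = $s3->getCommand('GetObject', array(
    'Bucket' => AWS_BUCKET,
    'Key' => $row['video_id'],
));

$url = (string) $s3->createPresignedRequest($cmd, '+120 minutes')->getUri();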
I want to get the list of all the filenames that are inside my Amazon S3 bucket from my PHP server. Is that possible with the AWS SDK for PHP? If not, is there another way? Thanks!
Using the official AWS SDK for PHP v2, you can do something like the following:
<?php
require 'vendor/autoload.php';

use Aws\Common\Aws;

// Instantiate an S3 client
$s3 = Aws::factory('/path/to/config.php')->get('s3');

// $bucket holds your bucket name as a string
$objects = $s3->getIterator('ListObjects', array(
    'Bucket' => $bucket
));

foreach ($objects as $object) {
    echo $bucket . '/' . $object['Key'] . PHP_EOL;
}
It worked fine for me except that MaxKeys was ignored.
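That is most likely because getIterator() pages through every result, so MaxKeys only affects the page size of each underlying request. If I remember the v2 iterator options correctly, you can cap the total number of keys with the iterator's own limit option, roughly like this:

$objects = $s3->getIterator('ListObjects', array(
    'Bucket' => $bucket
), array(
    'limit' => 25 // stop after 25 keys in total
));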
Is there a difference between using the above method and the one below? With the one below I couldn't get to the array elements within $result.
use Aws\Common\Aws;
use Aws\Common\Enum\Region;
use Aws\S3\S3Client;

// Prepare S3 client to speak with AWS
$s3client = S3Client::factory(array(
    'key' => '',
    'secret' => '',
    'region' => Region::US_EAST_1
));

// List objects from a bucket
$result = $s3client->listObjects(array(
    'Bucket' => $aws_bucket,
    'MaxKeys' => 25,
    'Marker' => 'docs/'
));
$objects = $result->getIterator();

foreach ($objects as $wo) {
    // Prints junk
    echo $wo['Key'] . ' - ' . $wo['Size'] . PHP_EOL;
}