I have found the documentation for it here, and I have the PHP SDK installed. But when I go through the documents, there is not much detail about the PHP one. I have the following questions:
How can I specify the $client here?
$result = $client->createDatabase([
'DatabaseName' => '<string>', // REQUIRED
'KmsKeyId' => '<string>',
'Tags' => [
[
'Key' => '<string>', // REQUIRED
'Value' => '<string>', // REQUIRED
],
// ...
],
]);
Are there any good documents or videos about Timestream in PHP from which I can get some help?
There are two client classes. One for writing and one for reading.
TimestreamWriteClient
https://docs.aws.amazon.com/aws-sdk-php/v3/api/class-Aws.TimestreamWrite.TimestreamWriteClient.html
and
TimestreamQueryClient
https://docs.aws.amazon.com/aws-sdk-php/v3/api/class-Aws.TimestreamQuery.TimestreamQueryClient.html
You can use the functions createTimestreamQuery() and createTimestreamWrite() on the $sdk object to instantiate those classes.
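For example, a minimal sketch (assuming the default credential provider chain and eu-west-1 as region; the database name is a placeholder):
require 'vendor/autoload.php';

$sdk = new \Aws\Sdk([
    'region'  => 'eu-west-1',
    'version' => 'latest',
]);

// Both clients share the configuration passed to the Sdk object.
$writeClient = $sdk->createTimestreamWrite();
$queryClient = $sdk->createTimestreamQuery();

// $writeClient is the $client that the original createDatabase() snippet expects:
$writeClient->createDatabase(['DatabaseName' => 'my_database']);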
A sample Timestream client and query are below.
// Create client
$client = new \Aws\TimestreamQuery\TimestreamQueryClient([
'version' => 'latest',
'region' => AWS_REGION, /* eg: eu-west-1 */
'endpoint' => AWS_TIMESTREAM_ENDPOINT, /* eg: https://query-cell3.timestream.eu-west-1.amazonaws.com */
'credentials' => new \Aws\Credentials\Credentials(AWS_KEY, AWS_SECRET)
]);
// Perform a basic query with the client
$client->query([
'QueryString' => 'select * from "db_timestream"."tbl_usage_logs"',
'ValidateOnly' => true,
]);
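To consume the response, here is a hedged sketch of iterating the rows (result shape per the Query API reference; ValidateOnly is dropped so the query actually runs):

$result = $client->query([
    'QueryString' => 'select * from "db_timestream"."tbl_usage_logs"',
]);

// Each row holds a Data array of Datum entries; scalar values arrive as strings.
foreach ($result['Rows'] as $row) {
    foreach ($row['Data'] as $datum) {
        echo ($datum['ScalarValue'] ?? 'null') . ' ';
    }
    echo PHP_EOL;
}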
If you receive an endpoint warning, such as "The endpoint required for this service is currently unable to be retrieved", you can find the endpoint using the AWS CLI command:
aws timestream-query describe-endpoints --region eu-west-1
Sample response:
{
"Endpoints": [
{
"Address": "query-cell3.timestream.eu-west-1.amazonaws.com",
"CachePeriodInMinutes": 1440
}
]
}
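The same lookup is available from PHP via the client's describeEndpoints() method (a sketch, assuming the $client created above):

$endpoints = $client->describeEndpoints();
// e.g. "query-cell3.timestream.eu-west-1.amazonaws.com"
$address = $endpoints['Endpoints'][0]['Address'];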
One can create TimestreamWriteClient and write records in a similar way.
The documentation seems sparse and a bit misleading, to me anyhow.
This is how I got it going for a write client (assuming the SDK is installed).
//Create the client
$client = new \Aws\TimestreamWrite\TimestreamWriteClient([
'version' => 'latest',
'region' => 'eu-west-1',
'credentials' => new \Aws\Credentials\Credentials('***KEY***', '***SECRET***')
]);
Note that the 'endpoint' is not specified, as I've seen in some examples. There seems to be some misleading documentation about what the endpoint should be for any given region. The SDK does some magic and derives a suitable endpoint; providing a specific endpoint didn't work for me.
$result = $client->writeRecords(
[
'DatabaseName' => 'testDB',
'TableName' => 'history',
'Records' =>
[
[
'Dimensions' => [
[
'DimensionValueType' => 'VARCHAR',
'Name' => 'Server',
'Value' => 'VM01',
],
],
'MeasureName' => 'CPU_utilization',
'MeasureValue' => '1.21',
'MeasureValueType' => 'DOUBLE',
'Time' => strval(time()),
'TimeUnit' => 'SECONDS',
]
]
]
);
This seems to be the minimum set of things needed to write a record to Timestream successfully. The code above writes one record, with one dimension, in this case, a 'Name' of a 'Server', recording its CPU utilization at time().
Note:
Time is required, although the documentation suggests it is optional.
Time has to be a string.
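If you need finer precision, a hedged variation of the same record (only the time fields change; everything else stays as above):

'Time' => strval(round(microtime(true) * 1000)), // still a string
'TimeUnit' => 'MILLISECONDS',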
I've obtained the following code from searching on the topic:
Route::get('/test', function () {
//disable execution time limit when downloading a big file.
set_time_limit(0);
$fs = Storage::disk('local');
$path = 'uploads/user-1/1653600850867.mp3';
$stream = $fs->readStream($path);
if (ob_get_level()) ob_end_clean();
return response()->stream(function () use ($stream) {
fpassthru($stream);
},
200,
[
'Accept-Ranges' => 'bytes',
'Content-Length' => 14098560,
'Content-Type' => 'application/octet-stream',
]);
});
However, when I click play in the UI, it takes a good four seconds to start playing. If I switch the disk to local, though, it plays almost instantly.
Is there a way to improve the performance or to read the stream by range as per the request?
Edit
My current DO config is as below:
'driver' => 's3',
'key' => env('DO_ACCESS_KEY_ID'),
'secret' => env('DO_SECRET_ACCESS_KEY'),
'region' => env('DO_DEFAULT_REGION'),
'bucket' => env('DO_BUCKET'),
'url' => env('DO_URL'),
'endpoint' => env('DO_ENDPOINT'),
'use_path_style_endpoint' => env('DO_USE_PATH_STYLE_ENDPOINT', false),
But I find two types of integration online: one specifies the CDN endpoint and one doesn't. I am not sure which is relevant, though the one that specifies the CDN is for Laravel 8 and I am on Laravel 9.
I had to change my code such that:
I use the PHP SDK client to connect to AWS, as the Laravel API isn't flexible enough to allow passing additional arguments (at least I haven't found anything while researching).
I changed to streamDownload, as I can't see any description of the stream method in the docs even though it is present in the code.
The code below achieves what I was aiming for: downloading by chunk based on the range received in the request.
return response()->streamDownload(function(){
$client = new Aws\S3\S3Client([
'version' => 'latest',
'region' => config('filesystems.disks.do.region'),
'endpoint' => config('filesystems.disks.do.endpoint'),
'credentials' => [
'key' => config('filesystems.disks.do.key'),
'secret' => config('filesystems.disks.do.secret'),
],
]);
$path = 'uploads/user-1/1653600850867.mp3';
$range = request()->header('Range');
$result = $client->getObject([
'Bucket' => 'wyxos-streaming',
'Key' => $path,
'Range' => $range
]);
echo $result['Body'];
},
200,
[
'Accept-Ranges' => 'bytes',
'Content-Length' => 14098560,
'Content-Type' => 'application/octet-stream',
]);
Note:
In a live scenario, you would need to cater for the case where no range is specified; the content length must then be the actual file size.
When a range is present, however, the content length should be the size of the segment being echoed.
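A hedged sketch of handling both cases (assuming the $client, bucket, and $path from the snippet above; headObject supplies the object size):

$size = $client->headObject([
    'Bucket' => 'wyxos-streaming',
    'Key'    => $path,
])['ContentLength'];

$range = request()->header('Range'); // e.g. "bytes=0-1048575"
if ($range && preg_match('/bytes=(\d+)-(\d*)/', $range, $m)) {
    $start   = (int) $m[1];
    $end     = $m[2] !== '' ? (int) $m[2] : $size - 1;
    $status  = 206; // Partial Content
    $headers = [
        'Accept-Ranges'  => 'bytes',
        'Content-Range'  => "bytes {$start}-{$end}/{$size}",
        'Content-Length' => $end - $start + 1,
        'Content-Type'   => 'application/octet-stream',
    ];
} else {
    $status  = 200;
    $headers = [
        'Accept-Ranges'  => 'bytes',
        'Content-Length' => $size,
        'Content-Type'   => 'application/octet-stream',
    ];
}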
I can't figure out how to delete a hosted zone resource record set with the Amazon PHP SDK.
So my code is the following:
public function __construct(\ConsoleOutput $stdout = null, \ConsoleOutput $stderr = null, \ConsoleInput $stdin = null) {
parent::__construct($stdout, $stderr, $stdin);
/** @var \Aws\Route53\Route53Client $route53Client */
$this->route53Client = Route53Client::factory([
'version' => '2013-04-01',
'region' => 'eu-west-1',
'credentials' => [
'key' => <my-key>,
'secret' => <my-secret-key>
]
]);
}
And this is my function for deleting a resource record set:
private function deleteResourceRecordSet() {
$response = $this->route53Client->changeResourceRecordSets([
'ChangeBatch' => [
'Changes' => [
[
'Action' => 'DELETE',
'ResourceRecordSet' => [
'Name' => 'pm-bounces.subdomain.myDomain.com.',
'Region' => 'eu-west-1',
'Type' => 'CNAME',
],
]
]
],
'HostedZoneId' => '/hostedzone/<myHostedZoneId>'
]);
var_dump($response);
die();
}
And the error I keep getting is:
Error executing "ChangeResourceRecordSets" on "https://route53.amazonaws.com/2013-04-01/hostedzone/<myHostedZoneId>/rrset/"; AWS HTTP error: Client error: `POST https://route53.amazonaws.com/2013-04-01/hostedzone/<myHostedZoneId>/rrset/` resulted in a `400 Bad Request` response:
<?xml version="1.0"?>
<ErrorResponse xmlns="https://route53.amazonaws.com/doc/2013-04-01/"><Error><Type>Sender</Type><Co (truncated...)
InvalidInput (client): Invalid request: Expected exactly one of [AliasTarget, all of [TTL, and ResourceRecords], or TrafficPolicyInstanceId], but found none in Change with [Action=DELETE, Name=pm-bounces.subdomain.myDomain.com., Type=CNAME, SetIdentifier=null] - <?xml version="1.0"?>
<ErrorResponse xmlns="https://route53.amazonaws.com/doc/2013-04-01/"><Error><Type>Sender</Type><Code>InvalidInput</Code><Message>Invalid request: Expected exactly one of [AliasTarget, all of [TTL, and ResourceRecords], or TrafficPolicyInstanceId], but found none in Change with [Action=DELETE, Name=pm-bounces.subdomain.myDomain.com., Type=CNAME, SetIdentifier=null]</Message>
So what exactly is the minimum required set of params so that I am able to delete a resource record from a hosted zone? If you need any additional information, please let me know and I will provide it. Thank you.
OK, I have figured it out. The error message is the hint: a DELETE change must match the existing record exactly, including its TTL and resource record values. If you want to delete a resource record set from a hosted zone, the code/function for deleting a record set should look like the following:
private function deleteResourceRecordSet($zoneId, $name, $ResourceRecordsValue, $recordType, $ttl) {
$response = $this->route53Client->changeResourceRecordSets([
'ChangeBatch' => [
'Changes' => [
[
'Action' => 'DELETE',
"ResourceRecordSet" => [
'Name' => $name,
'Type' => $recordType,
'TTL' => $ttl,
'ResourceRecords' => [
$ResourceRecordsValue // an array matching the record's current values, e.g. ['Value' => '...']
]
]
]
]
],
'HostedZoneId' => $zoneId
]);
}
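A hedged usage sketch: since the values must match what Route 53 currently holds, fetch the record first (listResourceRecordSets is the documented call; the record name is the one from the question):

$sets = $this->route53Client->listResourceRecordSets([
    'HostedZoneId'    => $zoneId,
    'StartRecordName' => 'pm-bounces.subdomain.myDomain.com.',
    'StartRecordType' => 'CNAME',
    'MaxItems'        => '1',
]);

$record = $sets['ResourceRecordSets'][0];
$this->deleteResourceRecordSet(
    $zoneId,
    $record['Name'],
    $record['ResourceRecords'][0],
    $record['Type'],
    $record['TTL']
);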
I am trying to download my archive from Amazon Glacier using the expedited option. I'm doing it via PHP with the PHP SDK v3. I have a little problem. I've launched a retrieval job for my ArchiveId:
$credentials = new Credentials('GLA_AWS_KEY', 'GLA_AWS_SECRET');
$client = new GlacierClient(array(
'version' => 'latest',
'credentials' => $credentials,
'region' => 'GLA_AWS_REGION'
));
$result = $client->initiateJob(array(
'vaultName' => 'GLA_AWS_VAULT',
'jobParameters' => [
'Type' => 'archive-retrieval',
'ArchiveId' => $archiveId,
]
));
$jobid = $result->get('jobId');
How can I retrieve the file in expedited mode?
Thanks for any help ;D
Finally I found the answer. For anyone interested in it:
$result = $client->initiateJob(array(
'vaultName' => 'GLA_AWS_VAULT',
'jobParameters' => [
'Type' => 'archive-retrieval',
'ArchiveId' => $archiveId,
'Tier' => 'Expedited'
]
));
We need to add the Tier as Expedited. The download time reduces to roughly 5 minutes.
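For completeness, a hedged sketch of the follow-up, polling the job and fetching the output once it completes (describeJob and getJobOutput are the documented calls; the sleep interval and output filename are my own choices):

do {
    sleep(60); // expedited retrievals usually complete within minutes
    $status = $client->describeJob([
        'vaultName' => 'GLA_AWS_VAULT',
        'jobId'     => $jobid,
    ]);
} while (!$status['Completed']);

$output = $client->getJobOutput([
    'vaultName' => 'GLA_AWS_VAULT',
    'jobId'     => $jobid,
]);
file_put_contents('archive.bin', (string) $output['body']); // 'body' is a stream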
I'm currently struggling with getting a S3 download link working. I've been using this code as reference but when I try to open the file, I get the error:
The authorization mechanism you have provided is not supported. Please
use AWS4-HMAC-SHA256.
I tried a few other scripts floating around, but all ended with some other error message.
Is there an easy way to migrate the script I'm using to make it work with Signature v4?
UPDATE: as suggested by hjpotter92, I used the AWS SDK and came up with this working code:
$client = S3Client::factory([
'version' => 'latest',
'region' => 'eu-central-1',
'signature_version' => 'v4',
'credentials' => [
'key' => '12345',
'secret' => 'ABCDE'
]
]);
$cmd = $client->getCommand('GetObject', [
'Bucket' => '###name###',
'Key' => $fileName
]);
$request = $client->createPresignedRequest($cmd, '+2 minutes');
$presignedUrl = (string) $request->getUri();
return $presignedUrl;
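A minimal usage sketch in plain PHP (assuming a web context): redirect the browser to the presigned URL and let S3 serve the object directly:

header('Location: ' . $presignedUrl);
exit;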
I know there is no concept of folders in S3; it uses a flat file structure. However, I will use the term "folder" for the sake of simplicity.
Preconditions:
An S3 bucket called foo
The folder foo has been made public using the AWS Management Console
Apache
PHP 5
Standard AWS SDK
The problem:
It's possible to upload a folder using the AWS PHP SDK. However, the folder is then only accessible by the user that uploaded it, and not publicly readable as I would like it to be.
Procedure:
$sharedConfig = [
'region' => 'us-east-1',
'version' => 'latest',
'visibility' => 'public',
'credentials' => [
'key' => 'xxxxxx',
'secret' => 'xxxxxx',
],
];
// Create an SDK class used to share configuration across clients.
$sdk = new Aws\Sdk($sharedConfig);
// Create an Amazon S3 client using the shared configuration data.
$client = $sdk->createS3();
$client->uploadDirectory("foo", "bucket", "foo", array(
'params' => array('ACL' => 'public-read'),
'concurrency' => 20,
'debug' => true
));
Success Criteria:
I would be able to access a file in the uploaded folder using a "static" link, e.g.:
https://s3.amazonaws.com/bucket/foo/001.jpg
I fixed it by using a "before execute" function. The before callback runs once per underlying PutObject command, so the ACL can be set (or varied) per object:
$result = $client->uploadDirectory("foo", "bucket", "foo", array(
'concurrency' => 20,
'debug' => true,
'before' => function (\Aws\Command $command) {
$command['ACL'] = strpos($command['Key'], 'CONFIDENTIAL') === false
? 'public-read'
: 'private';
}
));
You can use this:
$s3->uploadDirectory('images', 'bucket', 'prefix',
['params' => array('ACL' => 'public-read')]
);