How do I get the instance bandwidth usage for NetworkIn and NetworkOut for an EC2 instance, based on the instance ID, using the PHP SDK?
So far what I have is...
<?php
require_once("../aws/Sdk.php");
use Aws\CloudWatch\CloudWatchClient;
$client = CloudWatchClient::factory(array(
    'profile' => 'default',
    'region' => 'ap-southeast-2'
));

$dimensions = array(
    array('Name' => 'Prefix', 'Value' => ""),
);

$result = $client->getMetricStatistics(array(
    'Namespace' => 'AWSSDKPHP',
    'MetricName' => 'NetworkIn',
    'Dimensions' => $dimensions,
    'StartTime' => strtotime('-1 hour'),
    'EndTime' => strtotime('now'),
    'Period' => 3000,
    'Statistics' => array('Maximum', 'Minimum'),
));
I have a PHP cron job running every hour, and I need to be able to get the bandwidth in and out for a specific EC2 instance to record in an internal database.
What I have above I was able to piece together from the SDK documentation, but from here I'm kinda stumped.
I believe what I need is CloudWatch, so I would rather it be done through that. I know I could install a small program on each server to report the bandwidth usage to a file that I then SFTP into to download to our database, but I would rather it be done externally of any settings within the instance itself, so that an instance admin can't interfere with the bandwidth reporting.
Managed to get it working with...
<?php
require '../../aws.phar';

use Aws\CloudWatch\CloudWatchClient;

$cw = CloudWatchClient::factory(array(
    'key' => 'your-key-here',
    'secret' => 'your-secret-here',
    'region' => 'your-region-here',
    'version' => 'latest'
));

$metrics = $cw->listMetrics(array('Namespace' => 'AWS/EC2'));
//print_r($metrics);

$statsyo = $cw->getMetricStatistics(array(
    'Namespace' => 'AWS/EC2',
    'MetricName' => 'NetworkIn',
    'Dimensions' => array(array('Name' => 'InstanceId', 'Value' => 'your-instance-id-here')),
    'StartTime' => strtotime("2017-01-23 00:00:00"),
    'EndTime' => strtotime("2017-01-23 23:59:59"),
    'Period' => 86400,
    'Statistics' => array('Average'),
    'Unit' => 'Bytes'
));

// The result object can't be echoed directly; print its datapoints instead.
print_r($statsyo->get('Datapoints'));
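For the hourly cron, a minimal sketch building on the above (the Sum statistic over a one-hour window is an assumption about what you'd record; swap in NetworkIn/NetworkOut and your own database insert):

// Sketch: hourly total for one instance, ready to store in a database.
// Assumes $cw is the client created above.
$result = $cw->getMetricStatistics(array(
    'Namespace' => 'AWS/EC2',
    'MetricName' => 'NetworkOut',
    'Dimensions' => array(array('Name' => 'InstanceId', 'Value' => 'your-instance-id-here')),
    'StartTime' => strtotime('-1 hour'),
    'EndTime' => strtotime('now'),
    'Period' => 3600,             // one datapoint covering the whole hour
    'Statistics' => array('Sum'), // total bytes over the period
    'Unit' => 'Bytes'
));

$datapoints = $result->get('Datapoints');
$bytesOut = empty($datapoints) ? 0 : $datapoints[0]['Sum'];
// ...INSERT $bytesOut into the internal database here.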
If you're trying to calculate your bandwidth charge the same way AWS would, a better and more conclusive way is to use VPC Flow Logs. You can subscribe your ENI to VPC Flow Logs (this should be pretty cheap: Flow Logs themselves are free, and you only pay the CloudWatch Logs costs), then use the AWS SDK to pull the events from CloudWatch Logs with GetLogEvents and sum up the byte totals, as in the sketch below.
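For reference, a rough sketch of that approach with the PHP SDK (the log group and stream names below are placeholders for whatever your flow log writes to, and the parsing assumes the default flow log format, where the byte count is the tenth space-separated field):

<?php
require 'vendor/autoload.php';

use Aws\CloudWatchLogs\CloudWatchLogsClient;

$logs = CloudWatchLogsClient::factory(array(
    'profile' => 'default',
    'region' => 'your-region-here',
    'version' => 'latest'
));

$totalBytes = 0;
$token = null;

do {
    $params = array(
        'logGroupName' => 'my-flow-logs',               // placeholder
        'logStreamName' => 'eni-0123456789abcdef0-all', // placeholder
        'startFromHead' => true,
    );
    if ($token !== null) {
        $params['nextToken'] = $token;
    }
    $result = $logs->getLogEvents($params);

    foreach ($result['events'] as $event) {
        // Default flow log format: the 10th field is the byte count.
        $fields = explode(' ', $event['message']);
        if (isset($fields[9]) && is_numeric($fields[9])) {
            $totalBytes += (int) $fields[9];
        }
    }

    // getLogEvents returns the same forward token once the stream is exhausted.
    $newToken = $result['nextForwardToken'];
    $done = ($newToken === $token);
    $token = $newToken;
} while (!$done);

echo "Total bytes: {$totalBytes}\n";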
Is it possible to do a realtime voice call using Nexmo/Vonage with PHP or JavaScript via a web browser?
I used a library called nexmo/laravel.
This is the sample code that I used:
$nexmo = Nexmo::calls()->create([
    'to' => [[
        'type' => 'phone',
        'number' => '855969818674'
    ]],
    'from' => [
        'type' => 'phone',
        'number' => '63282711511'
    ],
    'answer_url' => ['https://gist.githubusercontent.com/jazz7381/d245a8f54ed318ac2cb68152929ec118/raw/6a63a20d7b1b288a84830800ab1813ebb7bac70c/ncco.json'],
    'event_url' => [backpack_url('call/event')]
]);
With that code I can send text-to-speech, but how can I do a realtime voice conversation, person to person?
From the code you shared above, it looks like you might not have instantiated a client instance of the Nexmo PHP SDK, which is necessary here: you make an outbound call with an instantiated and authenticated client.
For example, first instantiate a client with the following, supplying the file path to your private key file, your application ID, your API key, and your API secret. You can obtain all of those from the Dashboard.
$basic = new \Nexmo\Client\Credentials\Basic('key', 'secret');
$keypair = new \Nexmo\Client\Credentials\Keypair(
    file_get_contents(NEXMO_APPLICATION_PRIVATE_KEY_PATH),
    NEXMO_APPLICATION_ID
);
$client = new \Nexmo\Client(new \Nexmo\Client\Credentials\Container($basic, $keypair));
Then, once you have a credentialed client, you can invoke the $client->calls() methods on it. For example:
$client->calls()->create([
    'to' => [[
        'type' => 'phone',
        'number' => '14843331234'
    ]],
    'from' => [
        'type' => 'phone',
        'number' => '14843335555'
    ],
    'answer_url' => ['https://example.com/answer'],
    'event_url' => ['https://example.com/event'],
]);
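As for the person-to-person part: the answer_url is what bridges the call. When the first person answers, Nexmo fetches that URL and expects an NCCO; an NCCO with a connect action dials the second person and joins the two calls. A minimal sketch of such an endpoint (the number is a placeholder):

<?php
// Sketch of an answer_url endpoint: returns an NCCO that connects the
// answered call to a second phone, producing a two-way conversation.
header('Content-Type: application/json');

echo json_encode(array(
    array(
        'action' => 'connect',
        'endpoint' => array(
            array(
                'type' => 'phone',
                'number' => '14843335555', // placeholder: the second participant
            ),
        ),
    ),
));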
You can find more information on using the PHP SDK on GitHub. You can also find code snippets, tutorials, and more instructions on our developer portal.
We began seeing these DocuSign exceptions on 09/24/2019:
DocuSign \ eSign \ ApiException (401)
[401] Error connecting to the API (https://NA3.docusign.net/restapi/v2/login_information)
None of the code surrounding our DocuSign logic has been touched for almost six months. So I'm at a loss as to why this exception is being thrown.
We're using the following packages (relating to this):
laravel/framework v5.8.35
docusign/esign-client 3.0.1
tucker-eric/docusign-rest-client 1.0.0
tucker-eric/laravel-docusign 0.1.1
I've tried updating the packages with Composer, thinking they might have released fixes, but it didn't change anything other than throwing USER_AUTHENTICATION_FAILED instead of the exception message above.
As I said, no code has been touched, I have very little experience with the DocuSign API, and, making matters worse, this is an old developer's code...
I am able to hit the endpoint and authenticate with our credentials using Postman, and it seems to work fine. So again, I'm not sure how this just started happening.
The code from our controller:
$parcel = request('parcel_id');
$subdivision = $user->subdivision_id;
$subEmail = Subdivision::where('id', $user->subdivision_id)->pluck('email')->first();

$move = Move::create([
    'full_name' => request('full_name'),
    'email' => request('email'),
    'phone_number' => request('phone_number'),
    'parcel_id' => $parcel,
    'direction' => request('direction'),
    'action_date' => request('action_date'),
    'user_id' => auth()->id(),
    'subdivision_id' => $subdivision
]);

$residentTabs = array(
    array(
        'tabLabel' => env('MOVE_IN_ADDRESS_FIELD'),
        'value' => $move->parcel->MailingAddress
    ),
    array(
        'tabLabel' => env('MOVE_IN_DATE_RESIDENT_FIELD'),
        'value' => $move->action_date->format('m/d/Y')
    ),
    array(
        'tabLabel' => env('MOVE_IN_EMAIL_FIELD'),
        'value' => $move->email
    ),
    array(
        'tabLabel' => env('MOVE_IN_PRIMARY_PHONE_FIELD'),
        'value' => $move->phone_number
    ),
    array(
        'tabLabel' => env('MOVE_IN_FULL_NAME_FIELD'),
        'value' => $move->full_name
    )
);

$pmTabs = array(
    array(
        'tabLabel' => env('MOVE_IN_PM_ADDRESS_FIELD'),
        'value' => $move->parcel->MailingAddress
    ),
    array(
        'tabLabel' => env('MOVE_IN_PM_DATE_FIELD'),
        'value' => $move->action_date->format('m/d/Y')
    ),
);

$templateRoles = array(
    array(
        'email' => $move->email,
        'name' => $move->full_name,
        'roleName' => 'Resident',
        'tabs' => array(
            'textTabs' => $residentTabs
        )
    ),
    array(
        'email' => $subEmail,
        'name' => $user->name,
        'roleName' => 'Property Manager',
        'tabs' => array(
            'textTabs' => $pmTabs
        )
    )
);

$envelopeDefinition = array(
    'status' => 'sent',
    'templateId' => env("DOCUSIGN_TEMPLATE_ID"),
    'templateRoles' => $templateRoles
);

$contract = DocuSign::get('envelopes')->createEnvelope($envelopeDefinition);
The last line is where the exception is thrown, and the function throwing the exceptions is:
vendor/docusign/esign-client/src/ApiClient.php::callApi
We expect it to work as it has, throwing no exceptions and creating the envelope successfully.
However, we have been seeing USER_AUTHENTICATION_FAILED and general 401 exceptions.
Any help is appreciated!
Your token may have expired. I'm not sure how it was created or what authentication mechanism you're using, so check where the token comes from and how it's set in the headers of the REST API calls that use it. It may have been hardcoded, or there may have been a refresh token used to keep obtaining new tokens and that process broke.
If you're getting an Authentication failure while trying to hit the login_information endpoint, it's likely that your application is using Legacy Header authentication with an invalid password.
I'd recommend the following:
Try to log in to the web console at www.docusign.net, and perform a Password Reset if necessary
Once you are able to log in, update the stored credentials in the application
2FA or forced Single Sign-On will both block Legacy Header auth. If either is in place, it will need to be disabled, or you will need to switch to one of the Account Server auth workflows.
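If you want to verify the stored credentials outside the application, here is a minimal standalone sketch of the same Legacy Header call the SDK makes against login_information (the credential values are placeholders):

<?php
// Sketch: test Legacy Header auth directly; a 200 means the stored
// credentials still work, a 401 means they are the problem.
$auth = json_encode(array(
    'Username' => 'user@example.com',
    'Password' => 'your-password',
    'IntegratorKey' => 'your-integrator-key',
));

$ch = curl_init('https://na3.docusign.net/restapi/v2/login_information');
curl_setopt_array($ch, array(
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_HTTPHEADER => array(
        'X-DocuSign-Authentication: ' . $auth,
        'Accept: application/json',
    ),
));
$body = curl_exec($ch);
$status = curl_getinfo($ch, CURLINFO_HTTP_CODE);
curl_close($ch);

echo $status . "\n" . $body . "\n";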
I am working on a project where we will be creating both subdomains as well as domains in Route53. We are hoping that there is a way to do this programmatically. The SDK for PHP documentation seems a little light, but it appears that createHostedZone can be used to create a domain or subdomain record and that changeResourceRecordSets can be used to create the DNS records necessary. Does anyone have examples of how to actually accomplish this?
Yes, this is possible using the changeResourceRecordSets call, as you already indicated. But it is a bit clumsy, since you have to structure the request as a batch even if you're changing or creating only one record, and even creations count as changes. Here is a full example, with the credentials setup omitted:
<?php
// Include the SDK using the Composer autoloader
require 'vendor/autoload.php';

use Aws\Route53\Route53Client;
use Aws\Common\Credentials\Credentials;

$client = Route53Client::factory(array(
    'credentials' => $credentials
));

$result = $client->changeResourceRecordSets(array(
    // HostedZoneId is required
    'HostedZoneId' => 'Z2ABCD1234EFGH',
    // ChangeBatch is required
    'ChangeBatch' => array(
        'Comment' => 'string',
        // Changes is required
        'Changes' => array(
            array(
                // Action is required
                'Action' => 'CREATE',
                // ResourceRecordSet is required
                'ResourceRecordSet' => array(
                    // Name is required
                    'Name' => 'myserver.mydomain.com.',
                    // Type is required
                    'Type' => 'A',
                    'TTL' => 600,
                    'ResourceRecords' => array(
                        array(
                            // Value is required
                            'Value' => '12.34.56.78',
                        ),
                    ),
                ),
            ),
        ),
    ),
));
The documentation of this method can be found here. You'll want to take very careful note of the required fields, as well as the possible values for others. For instance, the Name field must be an FQDN ending with a dot (.).
Also worth noting: the call does return a ChangeInfo structure containing an Id and a Status (PENDING, then INSYNC once the change has propagated), and it definitely returns errors if something is wrong. So if you want your code to be bulletproof, you can poll getChange with that Id until the status is INSYNC, to confirm the new or changed record really exists.
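For example, a short sketch of that check (assumes $client and $result from the example above):

// Sketch: poll until Route 53 reports the change as INSYNC.
// The Id may come back as "/change/C2682N5HXP0BZ4"; strip the prefix
// and pass the bare ID to getChange.
$changeId = str_replace('/change/', '', $result['ChangeInfo']['Id']);

do {
    sleep(5);
    $check = $client->getChange(array('Id' => $changeId));
    $status = $check['ChangeInfo']['Status'];
} while ($status === 'PENDING');

echo "Change {$changeId} is now {$status}\n"; // INSYNC once propagated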
Hope this helps!
Yes, I did this using the changeResourceRecordSets method.
<?php
require 'vendor/autoload.php';

use Aws\Route53\Route53Client;
use Aws\Exception\CredentialsException;
use Aws\Route53\Exception\Route53Exception;

// To build the connection
try {
    $client = Route53Client::factory(array(
        'region' => 'string', // eg. us-east-1
        'version' => 'date', // eg. latest or 2013-04-01
        'credentials' => [
            'key' => 'XXXXXXXXXXXXXXXXXXX', // eg. VSDFAJH6KXE7TXXXXXXXXXX
            'secret' => 'XXXXXXXXXXXXXXXXXXXXXXX', // eg. XYZrnl/ejPEKyiME4dff45Pds54dfgr5XXXXXX
        ]
    ));
} catch (Exception $e) {
    echo $e->getMessage();
}

/* Create sub domain */
try {
    $dns = 'yourdomainname.com';
    $HostedZoneId = 'XXXXXXXXXXXX'; // eg. A4Z9SD7DRE84I (about 13 characters)
    $name = 'test.yourdomainname.com.'; // the subdomain name you want to create
    $ip = 'XX.XXXX.XX.XXX'; // the record value: a hostname for CNAME, an IP address for an A record
    $ttl = 300;
    $recordType = 'CNAME';

    $ResourceRecordsValue = array('Value' => $ip);

    $client->changeResourceRecordSets([
        'ChangeBatch' => [
            'Changes' => [
                [
                    'Action' => 'CREATE',
                    'ResourceRecordSet' => [
                        'Name' => $name,
                        'Type' => $recordType,
                        'TTL' => $ttl,
                        'ResourceRecords' => [
                            $ResourceRecordsValue
                        ]
                    ]
                ]
            ]
        ],
        'HostedZoneId' => $HostedZoneId
    ]);
} catch (Route53Exception $e) {
    // The original snippet was missing this catch block; PHP requires
    // a catch (or finally) after every try.
    echo $e->getMessage();
}
If you get an error, check the server's error.log file. If the error comes from the SDK library, it may be that your PHP version is not supported.
If you run this code from your local machine, you might get a "SignatureDoesNotMatch" error; in that case, make sure to run the code in the same (AWS) server environment.
All,
I am attempting to migrate roughly 6GB of Mongo data, comprised of hundreds of collections, to DynamoDB. I have written some scripts using the AWS PHP SDK and am able to port over very small collections, but when I try ones that have more than 20k documents (still a very small collection, all things considered), it either takes an outrageous amount of time or quietly fails.
Does anyone have tips/tricks for taking data from Mongo (or any other NoSQL DB) and migrating it to Dynamo, or any other NoSQL DB? I feel like this should be relatively easy because the documents are extremely flat/simple.
Any thoughts/suggestions would be much appreciated!
Thanks!
header.php
<?php
require './aws-autoloader.php';
require './MongoGet.php';

set_time_limit(0);

use Aws\DynamoDb\DynamoDbClient;

$client = DynamoDbClient::factory(array(
    'key' => 'MY_KEY',
    'secret' => 'MY_SECRET',
    'region' => 'MY_REGION',
    'base_url' => 'http://localhost:8000'
));

$collection = "AccumulatorGasPressure4093_raw";

function nEcho($str) {
    echo "{$str}<br>\n";
}

echo "<pre>";
test-store.php
<?php
include('test-header.php');

nEcho("Creating table(s)...");

// create test table
$client->createTable(array(
    'TableName' => $collection,
    'AttributeDefinitions' => array(
        array(
            'AttributeName' => 'id',
            'AttributeType' => 'N'
        ),
        array(
            'AttributeName' => 'count',
            'AttributeType' => 'N'
        )
    ),
    'KeySchema' => array(
        array(
            'AttributeName' => 'id',
            'KeyType' => 'HASH'
        ),
        array(
            'AttributeName' => 'count',
            'KeyType' => 'RANGE' // note: the valid value is RANGE, not RANGED
        )
    ),
    'ProvisionedThroughput' => array(
        'ReadCapacityUnits' => 10,
        'WriteCapacityUnits' => 20
    )
));

// Wait until the new table is ACTIVE before writing to it.
$client->waitUntil('TableExists', array('TableName' => $collection));
$result = $client->describeTable(array(
    'TableName' => $collection
));

nEcho("Done creating table...");
nEcho("Getting data from Mongo...");

// instantiate class and get data
$mGet = new MongoGet();
$results = $mGet->getData($collection);

nEcho("Done retrieving Mongo data...");
nEcho("Inserting data...");

$i = 0;
foreach ($results as $result) {
    $insertResult = $client->putItem(array(
        'TableName' => $collection,
        'Item' => $client->formatAttributes(array(
            'id' => $i,
            'date' => $result['date'],
            'value' => $result['value'],
            'count' => $i
        )),
        'ReturnConsumedCapacity' => 'TOTAL'
    ));
    $i++;
}

nEcho("Done Inserting, script ending...");
I suspect that you are being throttled by DynamoDB, especially if your table's throughput is low. The SDK retries the requests, up to 11 times per request, but eventually the requests fail for good, which should throw an exception.
You should take a look at the WriteRequestBatch object. This object is essentially a queue of items that get sent in batches, and any items that fail to transfer are re-queued automatically. It should provide a more robust solution for what you are doing; see the sketch below.
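For example, here is a sketch of what the putItem loop from the question could look like with WriteRequestBatch (SDK v2 class names; the item fields mirror the question's code):

use Aws\DynamoDb\Model\BatchRequest\WriteRequestBatch;
use Aws\DynamoDb\Model\BatchRequest\PutRequest;
use Aws\DynamoDb\Model\Item;

// Queue items and let the batch flush them in groups of up to 25;
// unprocessed (throttled) items are re-queued automatically.
$batch = WriteRequestBatch::factory($client);

$i = 0;
foreach ($results as $result) {
    $item = Item::fromArray(array(
        'id' => $i,
        'date' => $result['date'],
        'value' => $result['value'],
        'count' => $i,
    ));
    $batch->add(new PutRequest($item, $collection));
    $i++;
}

// Send anything still sitting in the queue.
$batch->flush();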
I am writing a script that connects to Amazon S3 storage. The script is supposed to create 2 buckets:
One bucket for data
One bucket for logs
I successfully created both buckets, but I can't set up logging. Below is the code I use for enabling bucket logging:
$result = $client->putBucketLogging(array(
    'Bucket' => $bucket,
    'LoggingEnabled' => array(
        'TargetBucket' => $bucket . '-LOG',
        'TargetGrants' => array(
            'Grantee' => array(
                'DisplayName' => 'user.name',
                'Type' => 'CanonicalUser'
            ),
        ),
        'TargetPrefix' => 'LOG-',
    ),
));
The Amazon AWS API documentation for PHP version 2 says that Bucket, LoggingEnabled, and Type are mandatory, but it does not say exactly how to implement these parameters.
Could you please help me with the structure of the config array for the putBucketLogging method?
You can also use the service's API documentation as a reference, which sometimes contains more details about how to structure some of the data types for requests. The S3 API docs for PUT Bucket Logging have more details about how to specify the grantee.
Also, you should not use capital letters in bucket names (see Rules for Bucket Naming).
After searching the manuals, the PHP array for the putBucketLogging method is:
$result = $client->putBucketLogging(array(
    'Bucket' => $bucket,
    'LoggingEnabled' => array(
        'TargetBucket' => $bucket . '-log',
        'TargetGrants' => array(
            'Grant' => array(
                'Grantee' => array(
                    'Type' => 'AmazonCustomerByEmail',
                    'EmailAddress' => 'email@email.com',
                ),
                'Permission' => 'FULL_CONTROL',
            ),
        ),
        'TargetPrefix' => 'log-',
    ),
));
However, enabling logging still fails, with an exception telling me I have to set permissions on the log bucket...
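In case it helps others who hit that last exception: server access logging requires the S3 Log Delivery group to have write access on the target bucket. A minimal sketch of granting that with the canned log-delivery-write ACL (note that putBucketAcl replaces the bucket's existing ACL, so adjust if you have custom grants):

// Sketch: grant the S3 Log Delivery group access on the target bucket,
// then retry putBucketLogging.
$client->putBucketAcl(array(
    'Bucket' => $bucket . '-log',
    'ACL' => 'log-delivery-write', // canned ACL for log delivery
));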