Pre-Sign CNAME URL with PostObjectV4 (PHP SDK) - php

I am trying to use the AWS PHP SDK to pre-sign V4 POST URLs and am hitting a major problem.
I have created a bucket called bucket1.chris.com. This has a CNAME to bucket1.chris.com.s3-eu-west-1.amazonaws.com.
When I create the S3Client I am passing http://bucket1.chris.com as the endpoint and bucket1.chris.com as the bucket name.
Once the URL is signed and I get the action from the formAttributes it is:
bucket1.chris.com.bucket1.chris.com
Looking at the generateUri function in PostObjectV4 I can see this line:
// Use virtual-style URLs
$uri = $uri->withHost($this->bucket . '.' . $uri->getHost());
Which is causing my problem.
If I don't pass an endpoint I get:
s3-eu-west-1.amazonaws.com/bucket1.chris.com
(This throws an error, "The specified method is not allowed against this resource", when I try to use it, but I think this might be something else.)
Does anyone know how you are supposed to use CNAME records (virtual-hosted buckets) with the AWS PHP SDK?

I have figured out what I was doing wrong.
I don't need to worry about passing an endpoint to the SDK since it is not used in the signing process.
The problem I was having with s3-eu-west-1.amazonaws.com/bucket1.chris.com was due to an issue with the bucket itself, not the endpoint I was using.
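For reference, a minimal sketch of the resulting setup, assuming the v3 aws/aws-sdk-php PostObjectV4 API (the bucket name matches the question; the key prefix and expiry are illustrative). No custom endpoint is passed, since it is not used in the signing process:

```php
<?php
require 'vendor/autoload.php';

use Aws\S3\S3Client;
use Aws\S3\PostObjectV4;

// No 'endpoint' option: the region alone drives the signing.
$client = new S3Client([
    'version' => 'latest',
    'region'  => 'eu-west-1',
]);

$postObject = new PostObjectV4(
    $client,
    'bucket1.chris.com',              // bucket named after the CNAME
    ['key' => 'uploads/${filename}'], // form inputs
    [['acl' => 'private']],           // policy conditions
    '+10 minutes'                     // policy expiry
);

$attributes = $postObject->getFormAttributes(); // action, method, enctype
$inputs     = $postObject->getFormInputs();     // policy + signature fields
```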

Related

Vtiger Query webservice returns 403 access forbidden error in Postman

I am new to vtiger and recently tried its third-party API integration, which offers a query webservice. I tried the following API in Postman:
http://myurl/webservice.php?operation=query&sessionName=63c67873606f00c2d94fa&query=select count(*) from Leads where Leadid = 1
which gives a 403 error. Also, please let me know where to create a webservice in vtiger.
You have to authenticate:
First, get a challenge token:
GET /webservice.php?operation=getchallenge&username=<USERNAME> HTTP/1.1
Then use that token, together with a username and the accesskey of that user (not to be confused with the user's password) to login:
POST /webservice.php HTTP/1.1
operation=login
username=<USERNAME>
accessKey=md5(TOKENSTRING + <ACCESSKEY>) // note the capital K in accessKey
Notice that the concatenation of TOKENSTRING and ACCESSKEY needs to be hashed with the md5 function. I recommend using PHP to do this, because I've had problems using online encoders.
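The two steps above can be sketched in PHP (the username and values are illustrative); the only non-obvious part is hashing the concatenation of the challenge token and the access key:

```php
<?php
// Build the body of the login POST from a challenge token and the user's
// access key (found under My Preferences; it is not the password).
function vtigerLoginBody(string $username, string $token, string $accessKey): string {
    return http_build_query([
        'operation' => 'login',
        'username'  => $username,
        // md5 of the concatenation; note the capital K in accessKey.
        'accessKey' => md5($token . $accessKey),
    ]);
}

// Usage against a real install:
//   1. GET  webservice.php?operation=getchallenge&username=<USERNAME>
//      and read result.token from the JSON response.
//   2. POST webservice.php with the body built above, then read
//      result.sessionName and pass it as sessionName= in query calls.
```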
About the second question, take a look at the folder include/Webservices. Many of the files under that folder are ws functions, and you have to create something similar. Once created, you have to register
the function by calling vtws_addWebserviceOperation() and
each parameter of the function by calling vtws_addWebserviceOperationParam().
Both of the above functions are defined under /include/Webservices/Utils.php
source: https://community.vtiger.com/help/vtigercrm/developers/third-party-app-integration.html

Amazon SES not working with eu_west_1 but ok using us_east_1 PHP

I'm rewriting a few scripts to make use of the new v4 signature for Amazon AWS.
I am trying to send an email using the code on this page:
https://github.com/okamos/php-ses
When I use his code exactly as it is, just adding secret keys etc., I get an error saying my email address isn't verified on us-east-1. This makes sense, as all my resources are on eu-west-1.
So I've tried adding the EU endpoint as a third parameter, but get this error:
'Warning: SimpleEmailService::sendEmail(): 6 Could not resolve host: EU_WEST_1'
This is the line of code which seems to work, but it connects to the wrong endpoint:
$ses = new SimpleEmailService('apikey', 'secretkey');
print_r($ses->sendEmail($m));
I have tried adding the new endpoint as the third parameter like this:
$ses = new SimpleEmailService('apikey', 'secret','eu-west-1');
But that just generates the error.
Can anyone tell me the correct code to use to set the eu-west-1 endpoint to send emails through?
Thanks
I faced the same problem with the AWS PHP library when uploading files to S3. Even though I was setting the region to eu-west-1, it was still defaulting to us-east-1. I would suggest looking at the region configuration in the library and how it is overridden.
The SES email hosts for us-east-1 and eu-west-1 are different, so even if you are passing the correct API key and secret, it might not work because of the default region.
If this debugging doesn't help, share a screenshot of the region you are getting before the email dispatch. I would love to explore further.
I found the problem.
Looks like there was a typo in the code on GitHub; it had these 3 lines in the file 'SimpleEmailServices.php':
const AWS_US_EAST_1 = 'email.us-east-1.amazonaws.com';
const AWS_US_WEST_2 = 'email.us-west-2.amazonaws.com';
const AWS_EU_WEST1 = 'email.eu-west-1.amazonaws.com';
He'd missed an underscore in AWS_EU_WEST1, which should read AWS_EU_WEST_1 like the other two constants.
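With the underscore restored, the constants line up, and (assuming the third constructor argument is taken as the endpoint host, which the "Could not resolve host" warning suggests) the EU endpoint can be selected like this:

```php
// Fixed constant in the library file:
const AWS_EU_WEST_1 = 'email.eu-west-1.amazonaws.com';

// Pass the endpoint host (not a region name) as the third argument:
$ses = new SimpleEmailService('apikey', 'secret', 'email.eu-west-1.amazonaws.com');
print_r($ses->sendEmail($m));
```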

Document signing failed using Fineuploader to make amazon s3 uploads

I successfully tested uploading to local server using traditional PHP.
However, I am having problem uploading to Amazon s3.
I wrote the PHP using the git examples as reference. Please tell me what I am doing wrong.
All the referenced scripts are in the proper location on my local system, and I am not making any CORS requests either.
Here are the specific code sections:
// UI instance
var s3uploader = new qq.s3.FineUploader({
    request: {
        endpoint: "bucket.s3.amazonaws.com",
        accessKey: "given key"
    },
    signature: {
        endpoint: "endpoint.php"
    },
    uploadSuccess: {
        endpoint: "endpoint.php?success"
    }
});
In endpoint.php I have assigned clientPrivateKey, bucketName and hostName, and I am assuming the rest is best left untouched (including the composer.json file).
Errors:
1. Error attempting to parse signature
2. Received an empty or invalid server response
3. Policy signing failed
Further:
Are policy documents to be authored explicitly by ourselves?
How do I know if my bucket supports only version 4 signature?
You must include values for the following variables:
$clientPrivateKey = $_ENV['AWS_CLIENT_SECRET_KEY'];
$serverPublicKey = $_ENV['AWS_SERVER_PUBLIC_KEY'];
$serverPrivateKey = $_ENV['AWS_SERVER_PRIVATE_KEY'];
$expectedBucketName = $_ENV['S3_BUCKET_NAME'];
Additionally, if you are utilizing v4 signatures, you must also include a value for:
$expectedHostName = $_ENV['S3_HOST_NAME'];
If you are seeing signature errors, then either you have not set all of these values, or the AWS keys are incorrect.
Regarding your other two questions:
Are policy documents to be authored explicitly by ourselves?
No, Fine Uploader S3 creates these. Note that policy documents are only used for non-chunked uploads. For chunked uploads, the S3 multipart upload API is used, and your signature server is asked to sign a string of identifying headers instead.
How do I know if my bucket supports only version 4 signature?
http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region

Twitter API post statuses/filter with Twitter OAuth Class

I keep getting error code 34 (Sorry, this page does not exist) when attempting to make a post request to the statuses/filter method with the abraham twitteroauth class (https://github.com/abraham/twitteroauth). Following authentication (that's working fine) my request is simple:
$filter = $twitteroauth->post('statuses/filter',array('track' => 'seo'));
I have other calls working but even when I isolate this on a separate instance of the site, I'm only receiving the "Sorry, that page does not exist" error.
Any help would be appreciated.
TwitterOAuth does not currently support the Streaming APIs. You can try the method that @JohnC suggests, but I don't know if it will actually work.
Phirehose is the PHP library I recommend for use with the Streaming APIs.
The statuses/filter call uses a different URL from many of the other API calls - stream.twitter.com instead of api.twitter.com. The library you are using appears to be hardcoded to use only api.twitter.com, so this could be the source of your problem. You can either change the URL for that call:
$twitteroauth->host = "https://stream.twitter.com/1/";
$filter = $twitteroauth->post('statuses/filter',array('track' => 'seo'));
Or if you use the full URL it will override the default (probably the best way if you make multiple calls to the $twitteroauth class):
$filter = $twitteroauth->post('https://stream.twitter.com/1/statuses/filter.json',array('track' => 'seo'));

specifying a 30s ACL not working with Cloudfront?

Using the PHP Amazon SDK I am successfully able to set 30-second access for a URL using the following function: get_object_url($bucket, $filename, $preauth = 0, $opt = null)
$s3->get_object_url($results['s3.bucket.name'], $results['s3.file.name'], '30 seconds');
Now, the issue with this is that it returns a fantastic URL:
"s3.url": "http://THECOOLEST.BUCKET.INTHEWORLD.EVER.s3.amazonaws.com/2011/04/18/image/png/8ba2d302-a441-45d4-8354-08e2b7e1a325.png?AWSAccessKeyId=XXXXXXXXXXXXXXXXX&Expires=1303162244&Signature=AWdUnHSaIBDmRcbwo2RFSUQaqBM%3D",
When I change the URL to the CNAME we use for CloudFront, the ACL doesn't work. Does anyone know how to use get_object_url with the CNAME configured?
CloudFront and S3 are two different things; a URL signed for S3 will not be honored by a CloudFront distribution.
You need to set up a CNAME for your S3 bucket instead. See the AWS docs for more info: http://docs.amazonwebservices.com/AmazonS3/latest/dev/index.html?VirtualHosting.html#VirtualHostingCustomURLs
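If the time-limited URL needs to go through CloudFront itself (rather than an S3 CNAME), the same SDK 1.x generation exposes a separate call for private distributions; a sketch, assuming a distribution with a trusted signer and a CloudFront key pair configured in the SDK, and an illustrative CNAME:

```php
$cf = new AmazonCloudFront();
// CloudFront private content uses its own key pair for signing,
// not the S3 query-string signature produced by get_object_url().
$url = $cf->get_private_object_url(
    'cdn.example.com',                                   // distribution CNAME
    '2011/04/18/image/png/8ba2d302-a441-45d4-8354-08e2b7e1a325.png',
    '30 seconds'
);
```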
