PHP/Amazon S3: Query string authentication sometimes fails - php

I created a simple file browser in PHP that links to files by generating expiring query URLs. For each access to a directory, a link to each file is generated that is valid for, say, 900 seconds.
I now have the problem that the generated signatures sometimes seem to fail, which is strange, since I intentionally used external S3 libraries to generate the URLs and signatures.
In fact, I tried the following libraries to generate the signatures:
CloudFusion
S3 generator
Amazon S3 PHP class
The libraries internally use hash_hmac('sha256', ...) or hash_hmac('sha1', ...) - I also don't understand why different hash algorithms are used.
Since the problem is the same with all libraries, it could as well be in my URL generation code, which is straightforward though:
$bucket = "myBucket";
$filename = $object->Key;
$linksValidForSeconds = 900;
$url = $s3->get_object_url($bucket, $filename, $linksValidForSeconds);
So $bucket and $linksValidForSeconds are constant, and $filename is e.g. "Media/Pictures/My Picture.png". But even with the same variables, it sometimes works and sometimes doesn't.
Any ideas?
Edit: Typo/Wrong constant variable name fixed (thanks)

I found the problem, and it had nothing to do with the code I mentioned. The generated URL is urlencode()'d and sent to another PHP script, where I use the URL to display an image from S3. I used urldecode() there to undo the changes, but apparently this is not necessary.
So whenever the signature contained certain characters, the urldecode() would change them and corrupt the signature.
Sorry for omitting the actual problem code.
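To illustrate the failure mode, a minimal sketch (the script name viewer.php and the parameter img are made up): PHP already decodes the query string into $_GET, so calling urldecode() on it again mangles any '+' or '%XX' sequences inside the signature.
// $url is the presigned URL generated above
$link = 'viewer.php?img=' . urlencode($url);   // encode exactly once when embedding it
// In viewer.php: PHP has already decoded the query string into $_GET
$imageUrl = $_GET['img'];                      // correct
// $imageUrl = urldecode($_GET['img']);        // decodes the signature a second time and corrupts it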

The code the asker is using above is from the CloudFusion AWS PHP SDK. Here's the documentation for get_object_url(): get_object_url ( $bucket, $filename, [ $preauth = 0 ], [ $opt = null ] )
The problem in your code above is your $linksValidForSeconds variable.
Where: $preauth - integer | string (Optional) Specifies that a presigned URL for this request should be returned. May be passed as a number of seconds since UNIX Epoch, or any string compatible with strtotime().
In other words, you are setting an expiry time of 900 seconds after the UNIX Epoch. I am honestly not sure how any links have worked using that library with your client code. If you are using the CloudFusion SDK, what you want to do is take the current UNIX time and add 900 seconds to it when passing in the parameter.
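For example, a sketch using the call from the question (the fix is only the third argument):
// $preauth is an absolute expiry time, not a duration
$url = $s3->get_object_url($bucket, $filename, time() + 900);
// or, since $preauth also accepts any strtotime()-compatible string:
$url = $s3->get_object_url($bucket, $filename, '+15 minutes');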
You seem to be confusing this with the Amazon S3 Class' getAuthenticatedURL method which takes the parameter integer $lifetime in seconds as you've used in your client code.
Be careful when using multiple libraries and swapping between them freely. Things tend to break that way.

The current version of CloudFusion is the AWS SDK for PHP, plus some other stuff. Amazon forked CloudFusion as the basis for their PHP SDK, then when the official SDK went live, CloudFusion backported the changes.
It's kind of a KHTML/WebKit thing. http://en.wikipedia.org/wiki/WebKit#History

Related

Google Plus API - Cannot Handle Token Prior to Certain Date

I implemented login functionality using the Google Plus API. It was working fine until we moved the deployment to a different timezone. The problem below started appearing from time to time, even though the server time has been adjusted properly:
Cannot handle token prior to 2018-02-01T06:30:07+0000
This was implemented in PHP using the SDK for Google Plus. Has anyone encountered this before and resolved it properly?
This worked for me as well. I had to go into the vendor folder that Composer generates, in vendor\google\apiclient\src\Google\AccessToken\Verify.php, and look for a function getJwtService(), which should look exactly like this:
private function getJwtService()
{
    $jwtClass = 'JWT';
    if (class_exists('\Firebase\JWT\JWT')) {
        $jwtClass = 'Firebase\JWT\JWT';
    }
    if (property_exists($jwtClass, 'leeway')) {
        // adds 1 second to JWT leeway
        // @see https://github.com/google/google-api-php-client/issues/827
        $jwtClass::$leeway += 1;
    }
    return new $jwtClass;
}
Then I changed the value of $jwtClass::$leeway += 1; to $jwtClass::$leeway += 200; due to my timezone; I was about 2 minutes 30 seconds behind. Beware that this comes with security vulnerabilities.
While the answer provided by richard4s works, it's not good practice to edit files in the vendor directory, as they are created by Composer and would typically be outside your project's Git/SVN repo. The Google_Client accepts a custom JWT object as a parameter to its constructor, so here's a proper way to fix this:
$jwt = new \Firebase\JWT\JWT;
$jwt::$leeway = 5; // adjust this value
// we explicitly pass jwt object whose leeway is set to 5
$this->client = new \Google_Client(['jwt' => $jwt]);
Copied from this article.
This error appears to occur when the server's clock is a few seconds behind the auth server's clock. You probably have a slight skew between the clock on the server that mints the tokens and the clock on the server that validates them; if the iat or nbf claim is in the future, the token isn't yet valid.
One solution would be to use a small leeway, like this:
JWT::$leeway = 5; // Allows a 5 second tolerance on timing checks
see issue 1172

Azure SDK PHP SAS for container [duplicate]

I'm getting this error:
<Error>
<Code>AuthenticationFailed</Code>
<Message>
Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature. RequestId:6c3fc9a8-cdf6-4874-a141-10282b709022 Time:2014-07-30T10:48:43.8634735Z
</Message>
<AuthenticationErrorDetail>
Signature did not match. String to sign used was rwl 2014-07-31T04:48:20Z /acoustie/$root 2014-02-14
</AuthenticationErrorDetail>
</Error>
I get it when I generate a sas (Shared Access Signature) then paste that sas at the end of the container uri into a browser. This is the full address with the generated sas:
https://acoustie.blob.core.windows.net/mark?sv=2014-02-14&sr=c&sig=E6w%2B3B8bAXK8Lhvvr62exec5blSxsA62aSWAg7rmX4g%3D&se=2014-07-30T13%3A30%3A14Z&sp=rwl
I have scoured SO and Google and have tried lots of combinations, as far as I can tell I'm doing everything correctly, I know I'm not, I just can't see it...really hoping someone can help :-\
To be clear, I am generating a sas on a container, not a specific blob and not on the root container. Access on the blob is defined as Public Blob. My end goal is to simply allow writes to the container with the sas, while 'debugging' I have added most permissions to the SharedAccessBlobPolicy.
I have tried adding a \ at the beginning and ending of the container name. No change.
This is the code I use to generate the sas:
var blobClient = storageAccount.CreateCloudBlobClient();

// Get a reference to the blob container
var container = blobClient.GetContainerReference(containerName);

// Do not set start time so the sas becomes valid immediately.
var sasConstraints = new SharedAccessBlobPolicy
{
    SharedAccessExpiryTime = DateTime.UtcNow.AddMinutes(30),
    Permissions = SharedAccessBlobPermissions.Write
                  | SharedAccessBlobPermissions.Read
                  | SharedAccessBlobPermissions.List,
};

var sasContainerToken = container.GetSharedAccessSignature(sasConstraints);

// Return the URI string for the container, including the SAS token.
var sas = string.Format("{0}{1}", container.Uri.AbsoluteUri, sasContainerToken);
Logger.Debug("SAS: {0}", sas);
return sas;
It generates a signature, it just doesn't seem to be a valid signature.
I've tried different containers, changing the Access policy, with and without start times, extending the expiry to > 12 hours from now (I'm in a UTC+10 timezone), it doesn't seem to matter what I change it results in the same "signature did not match" error.
I have even tried using an older version of 'WindowsAzure.Storage', so I have now tried 4.2 and 4.1. Even tried the uri in a different browser, really shouldn't make a difference but hey...
Any suggestions are greatly appreciated :-)
Short Answer:
Add comp=list&restype=container to your SAS URL and you should not get this error.
Long Answer:
Essentially, from your SAS URL, the Azure Storage Service is not able to identify whether the resource you're trying to access is a blob or a container, and it assumes it's a blob. Since it assumes the resource type is blob, it makes use of the $root blob container for the SAS calculation (which you can see in your error message). Since the SAS was calculated for the mark blob container, you get this Signature Does Not Match error. By specifying restype=container you're telling the storage service to treat the resource as a container. comp=list is required as per the REST API specification.
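For example, the URL from the question would become (same token, only the two extra parameters added):
https://acoustie.blob.core.windows.net/mark?comp=list&restype=container&sv=2014-02-14&sr=c&sig=E6w%2B3B8bAXK8Lhvvr62exec5blSxsA62aSWAg7rmX4g%3D&se=2014-07-30T13%3A30%3A14Z&sp=rwl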
Adding to Gaurav Mantri's answer: in order to double-check the permissions, you can also create your own SAS token in the Azure Portal.
From there you can see how comp=list&restype=container maps to the options.
The resource types you can choose are:
Container
Object
Service
Hope this helps someone.
After spending a lot of time on this, I found the actual error was different from the exception raised by the .NET compiler. If you're using metadata fields while uploading the blob file into storage, check the metadata characters. For example, I am adding metadata fields like description, filename, etc. The description field contained some junk characters, which I found at run time in the string text viewer.
My description was originally "test� file description"; after changing the description to "test file description", it is working fine.
The metadata values were extracted from different sources, which is why they picked up those junk characters. Remove or amend the metadata values and it will work well.
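A minimal sketch of that clean-up in PHP (an assumption on my part, not the poster's code): strip anything outside printable ASCII from each metadata value before uploading.
$description = preg_replace('/[^\x20-\x7E]/', '', $description); // drop non-printable/non-ASCII bytes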

Using putObject command to aliased bucket

I've got a script that uploads files perfectly fine into buckets. However, one particular bucket has been given a cname so that it can be accessed directly, apparently it has been assigned this using CloudFront.
I'm no expert in this field, but basically, instead of accessing the bucket via:
http://mybucket.mysite.com.s3.amazonaws.com/thing.txt, it allows you to access it via:
http://mybucket.mysite.com/thing.txt
It performs the put fine by the looks of it; when I check the response in the callback, it says it's all done, but the last element in the array swaps the bucket and the endpoint around, so it looks like this: https://s3.amazonaws.com/mybucket.mysite.com/thing.txt
However, when I use any other bucket it uploads correctly and returns the correct ObjectURL.
Having had a search around google and this site, I can't seem to find a solution so any help would be magic.
I'm using an older version of the AWS PHP 2 sdk, currently using 2.2.1.
Edit: even stranger still, when I pass the bucket through the isValidBucketName method, it returns true.
Just in case anyone else ever encounters this issue: the problem was that when executing the put, the SDK automatically assumes you are trying to connect to a bucket in the US region. So I needed to specify which region the bucket is in, in my case EU_WEST_1. When you set up your config array, be sure to provide this value, e.g.
$config = array(
    'key'    => 'your-key',
    'secret' => 'your-secret',
    'region' => Region::EU_WEST_1
);
Be sure to include the Aws\Common\Enum\Region enum in the class.
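Putting it together, a minimal sketch for SDK 2.x (assuming the S3Client::factory() constructor from that version):
use Aws\S3\S3Client;
use Aws\Common\Enum\Region;

$config = array(
    'key'    => 'your-key',
    'secret' => 'your-secret',
    'region' => Region::EU_WEST_1   // must match the bucket's actual region
);

$s3 = S3Client::factory($config);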

file_exists() expects parameter 1 to be a valid path, string given

I'm designing a web application that can be customized based on which retail location the end user is coming from. For example, if a user is coming from a store called Farmer's Market, there may be customized content or extra links available to that user, specific to that particular store. file_exists() is used to determine if there are any customized portions of the page that need to be imported.
Up until now, we've been using a relatively insecure method, in which the item ID# and the store are simply passed in as GET parameters, and the system knows to apply them to each of the links within the page. However, we're switching to a reversible hash method, in which the store and item number are encrypted (to look something like "gd651hd8h41dg0h81"), and the pages simply decode them and assign the store and ID variables.
Since then, however, we've been running into an error that Googling extensively hasn't found me an answer for. There are several similar blocks of code, but they all look something like this:
$buttons_first = "../stores/" . $store . "/buttons_first.php";
if (file_exists($buttons_first))
{
    include($buttons_first);
}
(The /stores/ directory is actually in the directory above the working one, hence the ../)
Fairly straightforward. But despite working fine when a regular ID and store is passed in, using the encrypted ID throws this error for each one of those similar statements:
Warning: file_exists() expects parameter 1 to be a valid path, string given in [url removed] on line 11
I've had the script spit back the full URL, and it appears to be assigning $store correctly. I'm running PHP 5.4.11 on 1&1 hosting (because I know they have some abnormalities in the way their servers work), if that helps any.
I got the same error before. I don't know if this solution of mine works for your problem, but you need to remove the "\0" (null byte). Try replacing it:
$cleaned = strval(str_replace("\0", "", $buttons_first));
It worked in my case.
Run var_dump(strpos($buttons_first, "\0")); this warning can come up when a path contains a null byte, which PHP rejects for security reasons. If that doesn't turn anything up, check the length of the string and make sure it is what you'd expect, in case there are other invisible bytes.
It may be a problem with the path, as it depends on where you are running the script from. It's safer to use absolute paths. To get the path of the directory in which the current script is executing, you can use dirname(__FILE__).
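For example, a minimal sketch building the include path from the script's own directory instead of the working directory:
$buttons_first = dirname(__FILE__) . "/../stores/" . $store . "/buttons_first.php";
if (file_exists($buttons_first)) {
    include($buttons_first);
}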
Add a / before stores/; you are better off using absolute paths.
I know this post was created in 2013, but I didn't see the common solution mentioned.
This error can also occur after adding the multiple attribute to the file input in the submit form.
For example, you are accessing files like this in PHP: $_FILES['file']['tmp_name']
But after adding the multiple option to the form, your input name changes from file to file[],
so even if you post just one file, $_FILES['file']['tmp_name'] has to be changed to $_FILES['file']['tmp_name'][0].
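A minimal sketch of what that looks like (assuming an input named file[] with the multiple attribute set):
// with multiple, every field of $_FILES['file'] becomes an array, so index it
$tmpName = $_FILES['file']['tmp_name'][0];
if (is_uploaded_file($tmpName) && file_exists($tmpName)) {
    // handle the first uploaded file
}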

Detect a digital signature without WinVerifyTrust

I have a large number of EXE files and need to figure out which ones have digital signatures. Does anyone know if there is a way to check without access to WinVerifyTrust (they're all on a Unix server)?
I can't seem to find any information on where the digital signature actually is inside the EXE. If I could find out where it is I might be able to open the file and fseek to a location to test. I don't need to do "real" verification on the certificate, I just want to see if a digital signature is present (or, more importantly, NOT present) without having to use WinVerifyTrust.
As mentioned above, the mere presence of the IMAGE_DIRECTORY_ENTRY_SECURITY directory is a clear indicator of a signature inside a PE file. If you have a large number of files to test and want to filter them, just testing for the presence of this standard directory is valid. You don't need a library to do this.
I tried to solve the problem in the same situation.
I recommend osslsigncode.
This is an implementation of Windows Authenticode with OpenSSL.
https://github.com/develar/osslsigncode
Below is a code block excerpt from osslsigncode.
siglen = GET_UINT32_LE(indata + peheader + 152 + pe32plus*16 + 4);
If siglen is 0 in osslsigncode, it determines that there is no signature.
If you just want to check the signature, you don't need a library.
However, see osslsigncode for help.
You can find this information using code from Mono.Security.dll AuthenticodeBase [1]
[1] https://github.com/mono/mono/blob/master/mcs/class/Mono.Security/Mono.Security.Authenticode/AuthenticodeBase.cs
Your best hint (if an authenticode signature is present) is:
// 2.2. Locate IMAGE_DIRECTORY_ENTRY_SECURITY (offset and size)
dirSecurityOffset = BitConverterLE.ToInt32 (fileblock, peOffset + 152);
dirSecuritySize = BitConverterLE.ToInt32 (fileblock, peOffset + 156);
If dirSecuritySize is larger than 8, then there's a signature entry (valid or not).
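Since the files live on a Unix server, here's a minimal sketch of that check in PHP (my own assumption, not code from either project; it follows the PE32/PE32+ offsets quoted above, and the function name is made up):
function has_authenticode_signature($path) {
    $data = file_get_contents($path);
    if ($data === false || substr($data, 0, 2) !== "MZ") {
        return false; // not a DOS/PE file
    }
    // e_lfanew at offset 0x3C points to the PE header
    $peOffset = unpack("V", substr($data, 0x3C, 4))[1];
    if (substr($data, $peOffset, 4) !== "PE\0\0") {
        return false;
    }
    // optional-header magic 0x20B means PE32+ (64-bit), which shifts the data directories by 16 bytes
    $magic = unpack("v", substr($data, $peOffset + 24, 2))[1];
    $pe32plus = ($magic === 0x20B) ? 1 : 0;
    // size field of IMAGE_DIRECTORY_ENTRY_SECURITY (same offsets as the osslsigncode excerpt)
    $sigLen = unpack("V", substr($data, $peOffset + 152 + $pe32plus * 16 + 4, 4))[1];
    return $sigLen > 0;
}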
