Azure SDK PHP SAS for container [duplicate] - php

I'm getting this error:
<Error>
<Code>AuthenticationFailed</Code>
<Message>
Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature. RequestId:6c3fc9a8-cdf6-4874-a141-10282b709022 Time:2014-07-30T10:48:43.8634735Z
</Message>
<AuthenticationErrorDetail>
Signature did not match. String to sign used was rwl 2014-07-31T04:48:20Z /acoustie/$root 2014-02-14
</AuthenticationErrorDetail>
</Error>
I get it when I generate a SAS (Shared Access Signature) and then paste the container URI with the SAS appended into a browser. This is the full address with the generated SAS:
https://acoustie.blob.core.windows.net/mark?sv=2014-02-14&sr=c&sig=E6w%2B3B8bAXK8Lhvvr62exec5blSxsA62aSWAg7rmX4g%3D&se=2014-07-30T13%3A30%3A14Z&sp=rwl
I have scoured SO and Google and tried lots of combinations. As far as I can tell I'm doing everything correctly; I know I'm not, I just can't see it... really hoping someone can help :-\
To be clear, I am generating a SAS on a container, not on a specific blob and not on the root container. Access on the blob is defined as Public Blob. My end goal is simply to allow writes to the container with the SAS; while 'debugging' I have added most permissions to the SharedAccessBlobPolicy.
I have tried adding a \ at the beginning and end of the container name. No change.
This is the code I use to generate the sas:
var blobClient = storageAccount.CreateCloudBlobClient();

// Get a reference to the blob container.
var container = blobClient.GetContainerReference(containerName);

// Do not set a start time so the SAS becomes valid immediately.
var sasConstraints = new SharedAccessBlobPolicy
{
    SharedAccessExpiryTime = DateTime.UtcNow.AddMinutes(30),
    Permissions = SharedAccessBlobPermissions.Write
                | SharedAccessBlobPermissions.Read
                | SharedAccessBlobPermissions.List,
};

var sasContainerToken = container.GetSharedAccessSignature(sasConstraints);

// Return the URI string for the container, including the SAS token.
var sas = string.Format("{0}{1}", container.Uri.AbsoluteUri, sasContainerToken);
Logger.Debug("SAS: {0}", sas);
return sas;
It generates a signature, it just doesn't seem to be a valid signature.
I've tried different containers, changing the access policy, with and without start times, and extending the expiry to more than 12 hours from now (I'm in a UTC+10 timezone). It doesn't seem to matter what I change; it results in the same "signature did not match" error.
I have even tried using an older version of 'WindowsAzure.Storage', so I have now tried both 4.2 and 4.1. I even tried the URI in a different browser; it really shouldn't make a difference, but hey...
Any suggestions are greatly appreciated :-)

Short Answer:
Add comp=list&restype=container to your SAS URL and you should not get this error.
Long Answer:
Essentially, from your SAS URL the Azure Storage Service cannot tell whether the resource you're trying to access is a blob or a container, so it assumes it's a blob. Since it assumes the resource type is blob, it uses the $root blob container for the SAS calculation (which you can see in your error message). Since the SAS was calculated for the mark blob container, you get this Signature Does Not Match error. By specifying restype=container you tell the storage service to treat the resource as a container. comp=list is required per the REST API specification.
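For example, the URL from the question becomes the following (same SAS token, with the two extra query parameters added):
https://acoustie.blob.core.windows.net/mark?comp=list&restype=container&sv=2014-02-14&sr=c&sig=E6w%2B3B8bAXK8Lhvvr62exec5blSxsA62aSWAg7rmX4g%3D&se=2014-07-30T13%3A30%3A14Z&sp=rwl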

Adding to Gaurav Mantri's answer: in order to double-check the permissions, you can also create your OWN SAS token in the Azure Portal.
From this you can see how comp=list&restype=container relates to the allowed resource types.
The resource types you can provide are:
Container
Object
Service
Hope this helps someone.

After spending a lot of time on this, the actual error turned out to be different from the exception raised by the .NET compiler. If you're setting metadata fields while uploading a blob to storage, check the metadata values for invalid characters. For example, I am adding metadata fields like description, filename, etc. My description field contained some junk characters, which I only found at runtime in the string text viewer.
My description was originally "test� file description"; after changing it to "test file description" it works fine.
The metadata values were extracted from different sources, which is why they contained junk characters. Remove or amend the metadata values and it will work.
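The post above is about .NET, but the same clean-up can be sketched in PHP; sanitizeMetadataValue is a hypothetical helper that keeps printable ASCII only:

// Drop control characters and any non-ASCII junk (e.g. the � above)
// from a metadata value before attaching it to the blob.
function sanitizeMetadataValue($value)
{
    return preg_replace('/[^\x20-\x7E]/', '', $value);
}

$metadata['description'] = sanitizeMetadataValue($metadata['description']);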

Related

get filename from google docs using laravel

My site has a feature where users can upload a link to their Google Docs file. What I want to do is list all the uploaded links in one place, and while doing that I need to show the name of the file associated with each link.
I can extract the file id from the link and make sure the link is a Google Docs link. Now I need to find a way to get the filename from that. I tried going through the Google developer API for Google Drive, but it only covers uploading to or working with docs you are authorized for. My issue is that my users upload the files manually to their own docs, which I have no control over. All I get is a shareable link, and I somehow need to get the name out of it. In addition, a thumbnail would also help.
I have tried doing this, but it throws an error:
$url = "https://www.googleapis.com/drive/v3/files/1G6N6FyXzg7plgEtJn-Cawo5gbghrS8z9_j_cvVqcEDA";
// and
$url = "https://docs.google.com/document/d/1G6N6FyXzg7plgEtJn-Cawo5gbghrS8z9_j_cvVqcEDA/edit?usp=sharing"
$html= file_get_contents($url);
print_r($html);
A dummy link for anyone willing to help: https://docs.google.com/document/d/1G6N6FyXzg7plgEtJn-Cawo5gbghrS8z9_j_cvVqcEDA/edit?usp=sharing
Since we are getting the URL to the file, we can do a couple of things:
get the id of the file
get what type of file it is
Before I explain how to do that, it is better to know that there are two possible situations: either the file is a Google Docs file or a Google Drive file. The two start with different URLs.
use Illuminate\Support\Str;

$url = explode('/', Str::after(
    Str::after($request->url, 'https://docs.google.com/'),
    'https://drive.google.com/'
));
I am using 2 Str::after() to remove the unnecessary part from the URL. Then I am using explode to convert the URL into an array.
Since we have excluded the useless part from the URL, we are left with document/d/1G6N6FyXzg7plgEtJn-Cawo5gbghrS8z9_j_cvVqcEDA/edit?usp=sharing in an array form.
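For the dummy link above, the resulting array looks like this (a quick illustration, not part of the original answer):

// explode('/', 'document/d/1G6N6FyXzg7plgEtJn-Cawo5gbghrS8z9_j_cvVqcEDA/edit?usp=sharing')
$url = array(
    0 => 'document',                                      // the file type
    1 => 'd',
    2 => '1G6N6FyXzg7plgEtJn-Cawo5gbghrS8z9_j_cvVqcEDA',  // the file id
    3 => 'edit?usp=sharing',
);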
So, if we do $url[2], we get the id of the file. The first segment, "document", is also worth noting; I use those to show the proper images. There can be 5 different types: document, spreadsheets, forms, and presentation for Google Docs, and file for Google Drive. I would recommend storing these in the database so that the extra parsing is not necessary every time you display them.
Now, to answer the actual part of the question. How to get the name. I have created a new model method to handle that.
// Requires: use Illuminate\Support\Facades\Http; and a google_api_key entry in config/app.php
public function name()
{
    $key = config('app.google_api_key');
    $url = "https://www.googleapis.com/drive/v3/files/{$this->file_id}?key={$key}";

    $response = Http::get($url)->json();

    return $response['name'] ?? 'Private File';
}
Don't forget to add your Google API key to the config file app.php (you need to create the google_api_key entry). You can get an API key from the Google Developer Console by creating a project-specific key. Note that this key does not need to belong to the user who owns the URL.
Also note that $response contains an error code if the file is not publicly visible or has been deleted.
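A quick usage sketch (the model name and the file_id attribute are assumptions; adapt them to your own model):

// $link is an instance of whatever model defines name() and stores file_id.
$link = DocLink::find(1);  // hypothetical model name
echo $link->name();        // the file's name from the Drive API, or 'Private File'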

How to create Domain on Jasper via PHP Client

My problem is:
I've got the PHP client http://community.jaspersoft.com/wiki/php-client-sample-code#About_the_Class from the Jasper community.
I want to add a domain to the Jasper repository with PHP, and I have the needed data (label etc.) in an .xml file.
In this PHP client I have to use the class SemanticLayerDataSource to create a domain.
This class has a public property schema.
But I can't find out what this schema needs in order to work and add a correct domain to the repository. There is no info on the website or in the class.
$semLayer = new SemanticLayerDataSource();
$semLayer->schema = ?????
$semLayer->label = (string)$xml->label; // SimpleXML
...
What data does schema need? An array, a resource, or something else? Thank you.
A code sample using the PHP client would also be really helpful, because the documentation is not good on this point.
Edit: I tried creating the XML as a local file and setting schema to the URI of that XML file. To create the XML I used this: http://community.jaspersoft.com/wiki/php-client-sample-code#Creating_Binary_Resources
I am able to create a domain, but AdHoc views on this domain don't work. I get a null exception from Jasper.
According to the REST API docs you need to provide a schema resource:
<schemaFileReference>
<uri>{schemaFileResourceUri}</uri>
</schemaFileReference>
This resource represents the whole structure, as described in the domain metadata service documentation (under the paragraph Working with Domain Schemas):
The v2/domains/metadata service returns only the display information about a Domain, not its internal definition. The fields, joins, filters, and calculated fields that define the internal structure of a Domain make up the Domain design. The XML representation of a Domain design is called the Domain schema.
Currently, there is no REST service to interact with Domain schemas, but you can use the v2/resources service to retrieve the raw schema. First, retrieve the resource descriptor for the Domain. For example, to view the descriptor for the Supermart Domain, use the following request (when logged in as jasperadmin):
GET http://<host>:<port>/jasperserver-pro/rest_v2/resources/Domains/supermartDomain
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<semanticLayerDataSource>
<creationDate>2013-10-10 15:30:31</creationDate>
<description>Comprehensive example of Domain (pre-joined table sets for complex reporting, custom query based dataset, column and row security, I18n bundles)</description>
<label>Supermart Domain</label>
<permissionMask>1</permissionMask>
<updateDate>2013-10-10 15:30:31</updateDate>
<uri>/organizations/organization_1/Domains/supermartDomain</uri>
<version>1</version>
<dataSourceReference>
<uri>/organizations/organization_1/analysis/datasources/FoodmartDataSourceJNDI</uri>
</dataSourceReference>
<bundles>
<bundle>
<fileReference><uri>/organizations/organization_1/Domains/supermartDomain_files/supermart_domain.properties</uri></fileReference>
<locale></locale>
</bundle>
(snip) [...]
The Domain schema is an XML file with a structure explained in the JasperReports Server User Guide. If you wish to modify the schema programmatically, you must write your own parser to access its fields and definitions. You can then replace the schema file in the Domain with one of the file updating methods described in .
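If you just want to pull that raw descriptor from PHP without the client library, a minimal sketch with the cURL extension could look like the following; the host, credentials and repository path are placeholders, and it assumes HTTP Basic authentication is enabled for the rest_v2 services:

$ch = curl_init('http://localhost:8080/jasperserver-pro/rest_v2/resources/Domains/supermartDomain');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_USERPWD, 'jasperadmin:jasperadmin');
curl_setopt($ch, CURLOPT_HTTPHEADER, array('Accept: application/xml'));
$descriptorXml = curl_exec($ch); // the semanticLayerDataSource descriptor shown above
curl_close($ch);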
Well, I found the solution.
If you want to create the schema via the PHP client, create a new File object.
$file = new \Jaspersoft\Dto\Resource\File();
$file->type = "xml";
$file->label = "MyDomain_schema";
$file->content = base64_encode((string)$schemaXML);
The file content is the base64-encoded (valid) domain schema.
Now set $semLayer->schema = $file. This works rather well.
There is also a way to create the domain via a multipart request, but that is rather complicated with the PHP client. There is a multipartrequest function in the PHP client, but it seems to consist of legacy code.

How to set storage class when using Amazon S3 UploadBuilder in PHP

I'm using the PHP SDK 2 for Amazon S3, in particular the UploadBuilder class, to do multipart concurrent uploading of files. How, if you can, do you set the storage class of the file you are uploading? When you do a regular putObject you can set the storage class that you want the file to have. I want my files to be stored using reduced redundancy rather than standard storage. Can I just use setOption to set a header like so?
setOption('x-amz-storage-class', 'REDUCED_REDUNDANCY')
From looking at the source of UploadBuilder it appears you may be able to set it using Metadata. Here's an example of setting an MD5 header from the source:
// If an MD5 is specified, then add it to the custom headers of the request
// so that it will be returned when downloading the object from Amazon S3
if ($this->md5) {
    $params['Metadata']['x-amz-Content-MD5'] = $this->md5;
}
So, that would mean I would set it doing something like this:
setOption('Metadata', array('x-amz-storage-class' => 'REDUCED_REDUNDANCY'))
Can anyone confirm this is correct or am I way off base?
In the API docs for setOption, it says that the method sets "an option to pass to the initial CreateMultipartUpload operation". According to the API docs for createMultipartUpload, you should use the StorageClass param. So, you should be able to do the following with the UploadBuilder to set the storage class on your multipart upload:
->setOption('StorageClass', 'REDUCED_REDUNDANCY')
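Putting it together, a sketch of the whole multipart upload with the storage class option might look like this (bucket, key, credentials and file path are placeholders, based on the SDK 2 UploadBuilder usage):

require 'vendor/autoload.php';

use Aws\S3\S3Client;
use Aws\S3\Model\MultipartUpload\UploadBuilder;

$client = S3Client::factory(array(
    'key'    => 'YOUR_KEY',
    'secret' => 'YOUR_SECRET',
));

$uploader = UploadBuilder::newInstance()
    ->setClient($client)
    ->setSource('/path/to/large/file.bin')
    ->setBucket('my-bucket')
    ->setKey('uploads/file.bin')
    // Passed through to the initial CreateMultipartUpload call.
    ->setOption('StorageClass', 'REDUCED_REDUNDANCY')
    ->build();

$uploader->upload();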

How to upload files, create folder into repo using api

Hi, I'm using GitHub API v3 and I want to add a new binary file to a repo. Using the KnpLabs php-github-api I am doing exactly what it says:
get the current commit object
retrieve the tree it points to
retrieve the content of the blob object that tree has for that particular file path
change the content somehow and post a new blob object with that new content, getting a blob SHA back
post a new tree object with that file path pointer replaced with your new blob SHA getting a tree SHA back
and so on. But at point 5 I get an exception:
server error
from this code:
$comit = $client->api('git')->commits()->show($userName, $reposit, 'master');
$basetree = $client->api('git')->trees()->show($userName, 'appwiz', $comit['commit']['tree']['sha']);
$newBlob = $client->api('git')->blobs()->create($userName, $reposit, array('content' => "gitapi", 'encoding' => 'base64'));
$client->authenticate($userName, $password, Github\Client::AUTH_HTTP_PASSWORD);
$treeData = array(
    'tree' => array(
        array(
            'path'    => '/',
            'mode'    => '040000',
            'type'    => 'tree',
            'content' => 'folder',
        ),
    ),
);
You cannot
As part of our ongoing effort to keep GitHub focused on building software, we
are deprecating the Downloads Tab. The Downloads API is officially deprecated
and will be disabled in 90 days.
github.com/blog/1302-goodbye-uploads
I was under the impression you needed a valid SHA before you could create a tree. Based on the documentation for creating a tree, it seems you need the SHA-1 of the object, so it seems you might have to have already added the tree to the index. Without that you won't be able to get the SHA of the object as git recognizes it.
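For reference, here is a sketch of a tree entry that points at the new blob's SHA rather than inlining content, following the GitHub create-a-tree documentation. The file path is a placeholder, it reuses the variables from the question, and it assumes the KnpLabs trees()->create() wrapper:

$newBlob = $client->api('git')->blobs()->create($userName, $reposit, array(
    'content'  => base64_encode(file_get_contents('local-file.bin')),
    'encoding' => 'base64',
));

$newTree = $client->api('git')->trees()->create($userName, $reposit, array(
    'base_tree' => $comit['commit']['tree']['sha'],
    'tree'      => array(
        array(
            'path' => 'folder/local-file.bin', // a file path inside the repo, not '/'
            'mode' => '100644',                // regular (non-executable) file
            'type' => 'blob',
            'sha'  => $newBlob['sha'],
        ),
    ),
));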

Detect a digital signature without WinVerifyTrust

I have a large number of EXE files and need to figure out which ones have digital signatures. Does anyone know if there is a way to check without access to WinVerifyTrust (they're all on a Unix server)?
I can't seem to find any information on where the digital signature actually lives inside the EXE. If I could find out where it is, I might be able to open the file and fseek to a location to test. I don't need to do "real" verification of the certificate; I just want to see if a digital signature is present (or, more importantly, NOT present) without having to use WinVerifyTrust.
As mentioned above, the mere presence of the IMAGE_DIRECTORY_ENTRY_SECURITY directory is a clear indicator that a signature is present inside a PE file. If you have a large number of files to test and want to filter them, just testing for the presence of this standard directory is valid. You don't need a library to do this.
I tried to solve the problem in the same situation.
I recommend osslsigncode.
It is an implementation of Windows Authenticode using OpenSSL.
https://github.com/develar/osslsigncode
Below is a code block excerpt from osslsigncode.
siglen = GET_UINT32_LE(indata + peheader + 152 + pe32plus*16 + 4);
If siglen is 0, osslsigncode determines that there is no signature.
If you just want to check for a signature, you don't need a library.
However, see osslsigncode for reference.
You can find this information using code from Mono.Security.dll AuthenticodeBase [1]
[1] https://github.com/mono/mono/blob/master/mcs/class/Mono.Security/Mono.Security.Authenticode/AuthenticodeBase.cs
Your best hint for whether an Authenticode signature is present is:
// 2.2. Locate IMAGE_DIRECTORY_ENTRY_SECURITY (offset and size)
dirSecurityOffset = BitConverterLE.ToInt32 (fileblock, peOffset + 152);
dirSecuritySize = BitConverterLE.ToInt32 (fileblock, peOffset + 156);
If dirSecuritySize is larger than 8, then there's a signature entry (valid or not).
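If you want to run the same check on the Unix box without pulling in a library, here is a minimal PHP sketch following the offsets used in the osslsigncode and Mono code above. It only detects whether a certificate table entry is present and does no validation (and no sanity checks on the MZ/PE magic):

function hasSignatureEntry($path)
{
    $fp = fopen($path, 'rb');
    if ($fp === false) {
        return false;
    }

    // e_lfanew at offset 0x3C points to the "PE\0\0" header.
    fseek($fp, 0x3C);
    $peOffset = unpack('V', fread($fp, 4));
    $peOffset = $peOffset[1];

    // Optional header magic: 0x10B = PE32, 0x20B = PE32+ (64-bit).
    fseek($fp, $peOffset + 24);
    $magic = unpack('v', fread($fp, 2));
    $pe32plus = ($magic[1] === 0x20B) ? 1 : 0;

    // Size field of IMAGE_DIRECTORY_ENTRY_SECURITY (certificate table),
    // same offsets as the osslsigncode / Mono code above.
    fseek($fp, $peOffset + 152 + $pe32plus * 16 + 4);
    $sigLen = unpack('V', fread($fp, 4));
    fclose($fp);

    // osslsigncode treats 0 as "no signature"; Mono checks for > 8.
    return $sigLen[1] > 0;
}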
