Laravel 9 Azure Storage Blob Server AuthorizationPermissionMismatch - php

I'm working on a Laravel 9 project where I store images on an Azure storage account.
I use this library: https://github.com/matthewbdaly/laravel-azure-storage.
Connecting with a SAS token works, but I get an AuthorizationPermissionMismatch error when I try to read from the container:
Route::get('/azure-test', function () {
    $path = '';
    $disk = \Storage::disk('azure');
    $files = $disk->files($path);
    dump($files);
    exit;
});
My configuration :
'driver' => 'azure',
'sasToken' => env('AZURE_STORAGE_SAS_TOKEN'),
'container' => env('AZURE_STORAGE_CONTAINER'),
'url' => env('AZURE_STORAGE_URL'),
'prefix' => null,
'endpoint' => env('AZURE_STORAGE_ENDPOINT'),
'retry' => [
    'tries' => 3,
    'interval' => 500,
    'increase' => 'exponential'
],
Just to be clear, the file exists: when I test without the SAS token configuration, it displays the information about my test file. I have already searched and made some changes, such as assigning my account to the "Storage Blob Data Contributor" and "Storage Queue Data Contributor" roles. My SAS token still works when I open the file directly at "https://xxxxxxxx.blob.core.windows.net/container_name/im_test_file.pdf?sp=r&st=2022-06-24T15:32:22Z&se=2024-04-30T23:32:22Z&sv=2021-06-08&sr=c&sig=QSz6SZ6UrSMg0jqyKEr4bnnGqrMuxK2EIbGgTTbP%2F10%3D".
Any idea?

AFAIK, to resolve the "AuthorizationPermissionMismatch" error, try assigning the Storage Blob Data Reader role to your account.
Please note that Azure role assignments can take up to five minutes to propagate.
Also check which permissions you granted when creating the SAS token: the token in your question only has sp=r (Read), but listing the blobs in a container additionally requires the List permission.
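For illustration only (a hedged sketch; every value except the sp flags is a placeholder, not your real token), a SAS token that allows both downloading and listing blobs carries both permissions in its sp parameter, e.g. in .env:
# "rl" = Read + List; a token with only sp=r can download a blob it knows the name of, but cannot list the container
AZURE_STORAGE_SAS_TOKEN="sp=rl&st=2022-06-24T15:32:22Z&se=2024-04-30T23:32:22Z&sv=2021-06-08&sr=c&sig=..."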
For more details, please refer to the links below:
Authorize access to blobs with AzCopy & Azure Active Directory | Microsoft Docs
Fixed – authorizationpermissionmismatch Azure Blob Storage – Nishant Rana's Weblog

Related

Consent is required to transfer ownership of a file to another user [Google Drive API]

I have a Google Sheets file shared with user 1, the owner (a Gmail account), and user 2, a writer (a Google service account of the form ...@....iam.gserviceaccount.com).
I use this code to have user 2 copy a spreadsheet owned by user 1 and then make user 1 the owner of the copied file again:
$ownerPermission = new Google_Service_Drive_Permission([
    'type' => 'user',
    'role' => 'owner',
    'emailAddress' => 'user1@gmail.com'
]);
$requestOwnership = $service->permissions->create($newfileId, $ownerPermission, ['fields' => 'id', 'transferOwnership' => 'true']);
Everything worked well until yesterday! Now the Google Drive API returns the error: "Consent is required to transfer ownership of a file to another user".
How to fix it?
This seems to be a recent change to the google API. See also https://developers.google.com/drive/api/guides/manage-sharing
I haven't tried this, but it suggests that you should replace transferOwnership=true with role=writer and pendingOwner=true, and then separately update the permissions as user1@gmail.com with role=owner. I don't know how you do that second step from a gmail.com account though.
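A minimal, untested sketch of that two-step flow (assuming a google/apiclient version that exposes the pendingOwner field; $newfileId comes from your own code):
// Step 1 - run as the current owner (here, the service account):
// mark user1@gmail.com as a writer who is a pending owner.
$pendingPermission = new Google_Service_Drive_Permission([
    'type' => 'user',
    'role' => 'writer',
    'emailAddress' => 'user1@gmail.com',
    'pendingOwner' => true,
]);
$created = $service->permissions->create($newfileId, $pendingPermission, ['fields' => 'id']);

// Step 2 - run while authenticated as user1@gmail.com:
// accept the transfer by promoting that permission to owner.
$acceptPermission = new Google_Service_Drive_Permission(['role' => 'owner']);
$service->permissions->update(
    $newfileId,
    $created->getId(),          // the permission id from step 1
    $acceptPermission,
    ['transferOwnership' => true]
);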
This seems to be working as expected as according to this Issue Tracker issue here:
Following up here, this is the expected behavior as currently Drive does not support the changing of the ownership for items which are owned by gmail.com accounts.

Download enterprise.wsdl.xml from Salesforce

I'm trying to configure this package, https://github.com/davispeixoto/Laravel-5-Salesforce, in my Laravel app.
The configuration expects:
return [
    'username' => 'YOUR_SALESFORCE_USERNAME',
    'password' => 'YOUR_SALESFORCE_PASSWORD',
    'token' => 'YOUR_SALESFORCE_TOKEN',
    'wsdl' => 'path/to/your/enterprise.wsdl.xml',
];
I already have the first three parameters:
'username' => 'YOUR_SALESFORCE_USERNAME',
'password' => 'YOUR_SALESFORCE_PASSWORD',
'token' => 'YOUR_SALESFORCE_TOKEN',
But I'm not sure what this one is:
`'wsdl' => 'path/to/your/enterprise.wsdl.xml',`
Where and how do I get enterprise.wsdl.xml?
Log in to your Salesforce account. You must log in as an administrator or as a user who has the "Modify All Data" permission.
From Setup, enter API in the Quick Find box, then select API.
Click Generate Metadata WSDL and save the XML WSDL file to your file system.
Click Generate Enterprise WSDL and save the XML WSDL file to your file system.
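Once you have the file, point the 'wsdl' entry at wherever you saved it. A hedged sketch (the storage/app/salesforce location is just an assumption; any path readable by the app works):
return [
    'username' => 'YOUR_SALESFORCE_USERNAME',
    'password' => 'YOUR_SALESFORCE_PASSWORD',
    'token' => 'YOUR_SALESFORCE_TOKEN',
    // Path to the Enterprise WSDL downloaded from Setup -> API
    'wsdl' => storage_path('app/salesforce/enterprise.wsdl.xml'),
];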

How to fetch files from S3 bucket, which are moved from S3 bucket to Glacier?

I am storing files in an AWS S3 bucket, and we have set a lifecycle rule via the AWS console to move files from S3 to Glacier after a specific time period (e.g. 6 months).
Files are successfully moving from S3 to Glacier.
Now I want to retrieve the files that were moved to Glacier, but I couldn't find any working method to do so.
I have already tried the AWS Glacier documentation, but no luck.
Note: we are trying to do this via the PHP SDK or any other way using PHP.
As the documentation says:
Objects in the Amazon Glacier storage class are not immediately accessible: you must first restore a temporary copy of the object to its bucket before it is available.
You need to initiate a restore operation on your archived (S3 Glacier) object, which may take a few hours (typically three to five) to produce a temporary restored copy. If you want the object permanently back in the S3 bucket, you can create a copy within your bucket after the restore is done (a copy sketch follows the general template below).
To initiate a restore job, you can use:
the S3 Management Console, see here.
the AWS CLI, see here.
the S3 REST API - POST Object restore, see here.
the AWS SDK; for PHP, see here.
To determine programmatically when a restore job is complete, you can:
call the S3 REST API - HEAD Object, see here.
use the AWS SDK; for PHP, see here (a sketch follows this list).
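A minimal sketch of that check with the AWS SDK for PHP v3 (bucket and key are placeholders): the Restore field returned by headObject tells you whether the restore is still in progress.
$head = $s3Client->headObject([
    'Bucket' => 'your-bucket-name',
    'Key'    => 'path/to/your/file.pdf',
]);
// While restoring: ongoing-request="true"
// When finished:   ongoing-request="false", expiry-date="..."
if (isset($head['Restore']) && strpos($head['Restore'], 'ongoing-request="false"') !== false) {
    // The temporary copy is ready and can be fetched with getObject()
}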
After the restore job is done, you can retrieve the object from the S3 bucket for the number of days that you set in the job.
If you are using the PHP SDK, you can use this:
$objects = $s3Client->restoreObject([
    'Bucket' => 'Bucket name',
    'Key'    => 'File key, i.e. the file path in the S3 bucket',
    //'RequestPayer' => 'requester',
    'RestoreRequest' => [
        'Days' => 10,
        'GlacierJobParameters' => [
            //'Tier' => 'Standard|Bulk|Expedited', // REQUIRED
            'Tier' => 'Expedited', // REQUIRED
        ],
    ],
]);
Here is the general template for this
$result = $client->restoreObject([
    'Bucket' => '<string>', // REQUIRED
    'Key' => '<string>', // REQUIRED
    'RequestPayer' => 'requester',
    'RestoreRequest' => [
        'Days' => <integer>, // REQUIRED
        'GlacierJobParameters' => [
            'Tier' => 'Standard|Bulk|Expedited', // REQUIRED
        ],
    ],
    'VersionId' => '<string>',
]);
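And, as mentioned above, if you want the object back in S3 permanently rather than as a temporary copy, you can copy it over itself with a non-archive storage class once the restore has completed. A hedged sketch (bucket and key are placeholders):
// Copy the restored object onto itself, switching its storage class back to STANDARD
$s3Client->copyObject([
    'Bucket'            => 'your-bucket-name',
    'Key'               => 'path/to/your/file.pdf',
    'CopySource'        => 'your-bucket-name/path/to/your/file.pdf',
    'StorageClass'      => 'STANDARD',
    'MetadataDirective' => 'COPY', // keep the existing metadata
]);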

Logging to CloudWatch from EC2 instances

My EC2 servers are currently hosting a website that logs each registered user's activity under their own separate log file on the local EC2 instance, say username.log. I'm trying to figure out a way to push log events for these to CloudWatch using the PHP SDK without slowing the application down, AND while still being able to maintain a separate log file for each registered member of my website.
I can't for the life of me figure this out:
OPTION 1: How can I log to CloudWatch asynchronously using the CloudWatch SDK? My PHP application is behaving VERY sluggishly, since each log line takes roughly 100ms to push directly to CloudWatch. Code sample is below.
OPTION 2: Alternatively, how could I configure an installed CloudWatch Agent on EC2 to simply OBSERVE all of my log files, which would basically upload them asynchronously to CloudWatch for me in a separate process? The CloudWatch EC2 Logging Agent requires a static "configuration file" (AWS documentation) on your server which, to my knowledge, needs to list out all of your log files ("log streams") in advance, which I won't be able to predict at the time of server startup. Is there any way around this (i.e., simply observe ALL log files in a directory)? A config file sample is below.
All ideas are welcome here, but I don't want my solution to simply be "throw all your logs into a single file, so that your log names are always predictable".
Thanks in advance!!!
OPTION 1: Logging via SDK (takes ~100ms / logEvent):
// Configuration to use for the CloudWatch client
$sharedConfig = [
    'region' => 'us-east-1',
    'version' => 'latest',
    'http' => [
        'verify' => false
    ]
];

// Create a CloudWatch Logs client
$cwClient = new Aws\CloudWatchLogs\CloudWatchLogsClient($sharedConfig);

// DESCRIBE ANY EXISTING LOG STREAMS / FILES
// ($stream holds the per-user stream name, e.g. "username.log"; $msg is the log line)
$create_new_stream = true;
$next_sequence_id = "0";
$result = $cwClient->describeLogStreams([
    'descending' => true,
    'logGroupName' => 'user_logs',
    'logStreamNamePrefix' => $stream,
]);

// Iterate through the results, looking for a stream that already exists with the intended name
// This is so that we can get the next sequence id ('uploadSequenceToken'), so we can add a line to an existing log file
foreach ($result->get("logStreams") as $stream_temp) {
    if ($stream_temp['logStreamName'] == $stream) {
        $create_new_stream = false;
        if (array_key_exists('uploadSequenceToken', $stream_temp)) {
            $next_sequence_id = $stream_temp['uploadSequenceToken'];
        }
        break;
    }
}

// CREATE A NEW LOG STREAM / FILE IF NECESSARY
if ($create_new_stream) {
    $result = $cwClient->createLogStream([
        'logGroupName' => 'user_logs',
        'logStreamName' => $stream,
    ]);
}

// PUSH A LINE TO THE LOG *** This step ALONE takes 70-100ms!!! ***
$result = $cwClient->putLogEvents([
    'logGroupName' => 'user_logs',
    'logStreamName' => $stream,
    'logEvents' => [
        [
            'timestamp' => round(microtime(true) * 1000),
            'message' => $msg,
        ],
    ],
    'sequenceToken' => $next_sequence_id
]);
OPTION 2: Logging via the installed CloudWatch agent (note that the config file below only allows hardcoded, predetermined log names as far as I know):
[general]
state_file = /var/awslogs/state/agent-state
[applog]
file = /var/www/html/logs/applog.log
log_group_name = PP
log_stream_name = applog.log
datetime_format = %Y-%m-%d %H:%M:%S
Looks like we have some good news now... not sure if it's too late!
CloudWatch Log Configuration
So, to answer the question,
Is there any way around this (i.e., simply observe ALL log files in a directory)?
Yes, you can specify log files and file paths using wildcards, which gives you some flexibility in configuring where the logs are fetched from and which log streams they are pushed to.
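For illustration, a hedged sketch of such a wildcard configuration for the newer unified CloudWatch agent (amazon-cloudwatch-agent); the path, group name and stream name are assumptions based on the question's setup:
{
  "logs": {
    "logs_collected": {
      "files": {
        "collect_list": [
          {
            "file_path": "/var/www/html/logs/*.log",
            "log_group_name": "user_logs",
            "log_stream_name": "{instance_id}"
          }
        ]
      }
    }
  }
}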

How do I give multiple users access to a single Amazon S3 account AND determine who's added a file?

I have an AWS S3 account which contains 3 buckets. I need to be able to generate access credentials for a new user so that they can access the buckets and add/delete files (preferably only their own, but that's not a deal breaker).
I have managed to get as far as granting access to new users using IAM. However, when I read the metadata of uploaded objects (in PHP using the AWS SDK) the owner comes back as the main AWS account.
I've read pages of documentation but can't seem to find anything relating to determining who the owner (or uploader) of the file was.
Any advice or direction massively appreciated!
Thanks.
If your only problem is finding the owner of an uploaded file, you can pass the owner info as metadata on the uploaded file.
Check http://docs.amazonwebservices.com/AmazonS3/latest/dev/UsingMetadata.html
In PHP code, while uploading:
// Instantiate the class.
$s3 = new AmazonS3();
$response = $s3->create_object(
    $bucket,
    $keyname2,
    array(
        'fileUpload' => $filePath,
        'acl' => AmazonS3::ACL_PUBLIC,
        'contentType' => 'text/plain',
        'storage' => AmazonS3::STORAGE_REDUCED,
        'headers' => array( // raw headers
            'Cache-Control' => 'max-age',
            'Content-Encoding' => 'gzip',
            'Content-Language' => 'en-US',
            'Expires' => 'Thu, 01 Dec 1994 16:00:00 GMT',
        ),
        'meta' => array(
            'uploadedBy' => 'user1',
        )
    )
);
print_r($response);
Check the PHP API docs for more info.
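Note that the snippet above uses the old AWS SDK for PHP v1 (the AmazonS3 class). A roughly equivalent, hedged sketch with the current SDK v3 (region, bucket, key and file path are placeholders):
require 'vendor/autoload.php';

use Aws\S3\S3Client;

$s3 = new S3Client([
    'region'  => 'us-east-1',
    'version' => 'latest',
]);

// Upload the file and record who uploaded it as user-defined metadata
$s3->putObject([
    'Bucket'     => 'your-bucket-name',
    'Key'        => 'path/to/file.txt',
    'SourceFile' => '/local/path/to/file.txt',
    'Metadata'   => ['uploadedBy' => 'user1'],
]);

// Later, read the metadata back (it comes back as lowercase x-amz-meta-* keys)
$head = $s3->headObject([
    'Bucket' => 'your-bucket-name',
    'Key'    => 'path/to/file.txt',
]);
print_r($head['Metadata']); // ['uploadedby' => 'user1']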
