How to upload a folder to AWS S3 - PHP

I am using AWS to upload images, CSS and some zip files for my site, and those uploads work fine. But now I want to first upload a zip on localhost, extract it into a folder, and then upload that entire folder to AWS. Can anyone help me do this? Thanks in advance.
I am using a function to upload files like this:
require_once 'aws-sdk-for-php/sdk.class.php';

$s3 = new AmazonS3();
$response = $s3->create_object($bucket, $filename, array(
    'fileUpload' => $filepath,
    'acl'        => AmazonS3::ACL_PUBLIC,
    'storage'    => AmazonS3::STORAGE_REDUCED,
    'headers'    => array(
        'Cache-Control' => 'max-age=2592000'
    )
));
It works fine for single images, but I don't know how to do it for an entire folder.

There is no API call for Amazon S3 that can upload an entire folder. You can loop through your list of local files and then upload each individually to S3. If you're capable, doing it in parallel can greatly speed the upload, too.
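For example, here is a minimal sketch of that loop using the same SDK-for-PHP-v1 AmazonS3 client from your question (the bucket name, local folder and key prefix are placeholders you would replace):

require_once 'aws-sdk-for-php/sdk.class.php';

$s3       = new AmazonS3();
$bucket   = 'your-bucket';          // placeholder: your bucket name
$localDir = '/path/to/extracted';   // placeholder: folder produced by your unzip step
$prefix   = 'extracted/';           // key prefix inside the bucket

// Walk the folder recursively and upload each file individually.
$iterator = new RecursiveIteratorIterator(
    new RecursiveDirectoryIterator($localDir, FilesystemIterator::SKIP_DOTS)
);

foreach ($iterator as $fileInfo) {
    if (!$fileInfo->isFile()) {
        continue;
    }

    // Build the S3 key from the path relative to the local folder.
    $relative = ltrim(str_replace($localDir, '', $fileInfo->getPathname()), '/\\');
    $key      = $prefix . str_replace('\\', '/', $relative);

    $s3->create_object($bucket, $key, array(
        'fileUpload' => $fileInfo->getPathname(),
        'acl'        => AmazonS3::ACL_PUBLIC,
        'storage'    => AmazonS3::STORAGE_REDUCED,
    ));
}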
You could also cheat by calling out to the AWS Command Line Interface (CLI). The CLI can upload/download a recursive list of files and can also do multi-part upload for large files. There is also an aws s3 sync command that can intelligently upload only new/modified files.
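For example (assuming the AWS CLI is installed and configured; the paths and bucket name are placeholders):

aws s3 sync ./extracted-folder s3://your-bucket/extracted-folder --acl public-read

This recursively uploads the folder and, on later runs, only re-uploads files that are new or have changed.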

Using PHP, you can upload an entire directory:
$client->uploadDirectory(
    SOURCE_FOLDER,
    YOUR_BUCKET_NAME,
    DESTINATION,
    array(
        'concurrency' => 5,
        'debug'       => TRUE,
        'force'       => FALSE,
        'params'      => array(
            'ServerSideEncryption' => 'AES256',
        ),
    )
);
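Note that uploadDirectory() is the helper from version 2 of the SDK. If you are on version 3, a rough equivalent (a sketch only; bucket name, region and paths are placeholders) is the Aws\S3\Transfer class:

use Aws\S3\S3Client;
use Aws\S3\Transfer;

$client = new S3Client([
    'version' => 'latest',
    'region'  => 'us-east-1',   // placeholder: your bucket's region
]);

// Recursively upload everything under $source to the given S3 prefix.
$source  = '/path/to/source-folder';
$manager = new Transfer($client, $source, 's3://your-bucket-name/destination', [
    'concurrency' => 5,
]);
$manager->transfer();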

Related

Saving files to Scaleway storage - PHP method

How to upload an image to Scaleway storage using Laravel or PHP methods?
Laravel uses Flysystem under the hood to abstract file storage. It provides several drivers out of the box, including S3, Rackspace, FTP, etc.
If you want to support Scaleway, you would need to write a custom driver, which you can read more about here.
Edit: It seems from Scaleway's documentation that it supports AWS CLI clients, which means this should be quite easy to support in Flysystem. I tried the following and it worked.
I added a new disk in config/filesystems.php as follows:
'scaleway' => [
    'driver'   => 's3',
    'key'      => '####',
    'secret'   => '#####',
    'region'   => 'nl-ams',
    'bucket'   => 'test-bucket-name',
    'endpoint' => 'https://s3.nl-ams.scw.cloud',
],
and then, to use the disk, I did the following:
\Storage::disk('scaleway')->put('file.txt', 'Contents');
My file was uploaded.
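(One assumption worth noting: Laravel's s3 driver needs the Flysystem S3 adapter installed, e.g. composer require league/flysystem-aws-s3-v3, otherwise the scaleway disk above will fail to resolve.)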
EDIT: I also made a PR to get Scaleway accepted in the list of adapters for League's Flysystem. It got merged. You can see it live here.

PHP - file encoding issue with files downloaded from AWS bucket

Using the following code in PHP, I am trying to download a file from an AWS bucket.
I am able to download the file successfully, but the downloaded file is not readable; its encoding shows as ANSI.
In the AWS bucket, the metadata for this file is as follows:
Content-Type: text/csv;%20charset=utf-16le
Content-Encoding: gzip
require '/aws/aws-autoloader.php';

use Aws\S3\S3Client;

// Instantiate the S3 client with your AWS credentials
$key    = [access_key];
$secret = [secret_key];

$client = S3Client::factory(array(
    'credentials' => array(
        'key'    => $key,
        'secret' => $secret
    )
));

$bucket     = [bucket_name];
$file       = [bucket_file_location];
$fileSaveAs = [download_file_location];

// Get an object using the getObject operation & download the file
$result = $client->getObject(array(
    'Bucket' => $bucket,
    'Key'    => $file,
    'SaveAs' => $fileSaveAs
));
Can anyone explain what is wrong here?
Edit 1:
This file downloads fine when I download it from the AWS bucket directly.
Edit 2 :
I just noticed that the downloaded CSV file is always 1 KB in size.
Downloaded file damage pattern:
`‹½”ÍJÃ#F¿µOÑPi+h]V[Tð§hâFÚ4HÐjI¬Ð—W¿¤
Edit 3 :
All these files are transferred from a Google Play bucket using gsutil.
The files received from the AWS bucket are gzip-compressed (Content-Encoding: gzip),
so the compressed content needs to be decoded using the gzdecode function.
The following code solves the problem:
$content = gzdecode(file_get_contents($fileSaveAs));
file_put_contents($fileSaveAs,$content);
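Putting it together, a minimal sketch (same variables as in the question; the charset conversion is optional and assumes the utf-16le in the object's metadata is accurate):

// Download the object to disk as before.
$client->getObject(array(
    'Bucket' => $bucket,
    'Key'    => $file,
    'SaveAs' => $fileSaveAs
));

// The object is stored gzip-compressed, so decode it first.
$content = gzdecode(file_get_contents($fileSaveAs));

// Optional: the metadata declares charset=utf-16le; converting to UTF-8
// makes the CSV readable in most editors.
$content = mb_convert_encoding($content, 'UTF-8', 'UTF-16LE');

file_put_contents($fileSaveAs, $content);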

AWS PHP SDK: Limit S3 file upload size in presigned URL

I'm working on a project involving generating S3 URLs that someone else can use to upload files to my S3 bucket. Here's a minimal working example:
<?php
require('aws.phar');

use Aws\S3\S3Client;

$s3client = new S3Client(...); // credentials go here
$id = uniqid();                // generate some kind of key

$command = $s3client->getCommand('PutObject', [
    'ACL'    => 'private',
    'Body'   => '',
    'Bucket' => 'mybucket',
    'Key'    => 'tmp/' . $id,
]);

echo (string) $s3client->createPresignedRequest($command, '+5 minutes')->getURI();
?>
Now, if I put that file at a location accessible by the internet, my web server can be used to fetch new signed upload URLs:
$ curl http://my.domain.com/some/page.php
https://s3.amazonaws.com/mybucket/tmp/someID?x-amz-acl=private&lots-of-aws-params...
$ curl -X PUT -d "#someFile" https://s3.amazonaws.com/mybucket/tmp/someID?x-amz-acl=private&lots-of-aws-params...
$
This successfully uploads a local file to my bucket, so I can play with it in S3.
Let's suppose that I'm not too worried about people generating many URLs and uploading many files to my bucket in a short period of time, but I would like to limit the size of uploaded files. Many resources suggest attaching a policy to the signed URL:
<?php
require('aws.phar');

use Aws\S3\S3Client;

$s3client = new S3Client(...); // credentials go here
$id = uniqid();                // generate some kind of key

$policy = [
    'conditions' => [
        ['acl' => 'private'],
        ['bucket' => 'mybucket'],
        ['content-length-range', 0, 8*1024], // 8 KiB
        ['starts-with', '$key', 'tmp/']
    ],
    'expiration' => (new DateTime())->modify('+5 minutes')->format(DateTime::ATOM)
];

$command = $s3client->getCommand('PutObject', [
    'ACL'    => 'private',
    'Body'   => '',
    'Bucket' => 'mybucket',
    'Key'    => 'tmp/' . $id,
    'Policy' => $policy,
]);

echo (string) $s3client->createPresignedRequest($command, '+5 minutes')->getURI();
?>
This version generates URLs (without any indication of errors) that can be used in the same way. I'm not sure if I need some of those conditions in the policy (acl, bucket, starts-with), but I don't think that including them would break the policy.
In theory, attempting to use this signed URL to upload a file larger than 8 KiB should cause S3 to abort the upload. However, testing this with a larger file shows that curl still happily uploads the file:
$ ls -lh file.txt
-rw-rw-r-- 1 millinon millinon 210K Jan 2 00:41 file.txt
$ curl http://my.domain.com/some/page.php
https://s3.amazonaws.com/mybucket/tmp/someOtherID?x-amz-acl=private&lots-of-aws-params...
$ curl -X PUT -d "#file.txt" https://s3.amazonaws.com/mybucket/tmp/someOtherID?x-amz-acl=private&lots-of-aws-params...
$
Checking the bucket shows that, indeed, the large file was uploaded, and the file's size is larger than the policy supposedly indicates.
Since various pages show different ways of attaching the policy, I have also tried the following versions:
'Policy' => json_encode($policy)
'Policy' => base64_encode(json_encode($policy))
However, URLs generated with any of these versions allow files larger than the specified size to be uploaded.
Am I attaching the policy incorrectly, or is there a fundamental limitation to restricting uploads to S3 in this manner?
For my web server, I'm using HHVM 3.11.1 with version 3.14.1 of the AWS SDK for PHP.
An S3 upload policy cannot be used with pre-signed URLs.
A policy document can be used with browser uploads to S3 using HTML POST Forms.
Pre-signed URLs and HTML POST forms are two different methods of uploading to S3. The former is arguably simpler, but less flexible, than the latter.
UPDATE
If you must upload the files without the use of a browser, the HTML POST Form's request can be reproduced using PHP's curl functions, a library such as Guzzle, or using the command line as follows:
curl 'https://s3-bucket.s3.amazonaws.com/' \
-F 'key=uploads/${filename}' \
-F 'AWSAccessKeyId=YOUR_AWS_ACCESS_KEY' \
-F 'acl=private' \
-F 'success_action_redirect=http://localhost/' \
-F 'policy=YOUR_POLICY_DOCUMENT_BASE64_ENCODED' \
-F 'signature=YOUR_CALCULATED_SIGNATURE' \
-F 'Content-Type=image/jpeg' \
-F 'file=#file.jpg'
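If you generate the POST policy server-side with the SDK instead of hand-signing it, a minimal sketch (SDK for PHP v3; the bucket name and key prefix are placeholders) uses the Aws\S3\PostObjectV4 helper, and the content-length-range condition is enforced there:

use Aws\S3\PostObjectV4;

$formInputs = [
    'acl' => 'private',
    'key' => 'tmp/' . $id,
];

// Policy conditions, including the size limit that PUT presigned URLs cannot enforce.
$conditions = [
    ['acl' => 'private'],
    ['bucket' => 'mybucket'],
    ['starts-with', '$key', 'tmp/'],
    ['content-length-range', 0, 8 * 1024],   // reject uploads larger than 8 KiB
];

$postObject = new PostObjectV4($s3client, 'mybucket', $formInputs, $conditions, '+5 minutes');

$attributes = $postObject->getFormAttributes();  // form action URL, method, enctype
$inputs     = $postObject->getFormInputs();      // key, policy, signature, credential fields

The returned attributes and inputs map directly onto the -F fields of the curl command above.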
Can you try adding this specific policy to your bucket:
$s3client->putBucketPolicy(array(
    'Bucket' => $bucket,
    'Policy' => json_encode(array(
        'conditions' => array(
            array(
                'content-length-range', 0, 8*1024,
                'starts-with', '$key', 'tmp/'
            ...
and after this, just use the normal PutObject command:
$command = $s3client->getCommand('PutObject', [
    'ACL'    => 'private',
    'Body'   => '',
    'Bucket' => 'mybucket',
    'Key'    => 'tmp/' . $id,
]);

Downloading files from Google Cloud Storage and finding good API documentation for PHP

I began working with Google Cloud Storage.
I am currently using the Google API PHP Client library.
The documentation is here, however I couldn't find any PHP documentation for the library's APIs.
I have a problem with downloading files from storage: I know how to upload, but I don't know how to download.
$res = $storage->objects->insert(
    "bucket",
    $obj,
    [
        'name'       => $file_name,
        'data'       => file_get_contents('path'),
        'uploadType' => 'multipart',
        'mimeType'   => 'text/xml'
    ]
);
I can get the file's metadata, but I can't download the file itself.
$res = $storage->objects->get(
    "bucket",
    $file_name,
    []
);
Thanks
$storage->objects->get
This API call returns a mediaLink property, and the file can be downloaded from that link.
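As a rough sketch (assuming the google/apiclient v2 library, where $client is the Google_Client instance and authorize() returns an authenticated Guzzle HTTP client; method names may vary between versions):

// Fetch the object's metadata, which includes mediaLink.
$object = $storage->objects->get("bucket", $file_name);

// Download the actual contents through an authenticated HTTP client.
$httpClient = $client->authorize();
$response   = $httpClient->request('GET', $object->getMediaLink());

file_put_contents('/path/to/save/' . $file_name, (string) $response->getBody());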

PHP + S3: Permission denied while deleting a file using unlink()

I have been trying to solve a seemingly trivial issue for a long time, but with no luck.
I want to delete a file immediately after uploading it to AWS S3 from a PHP web server. The steps are as follows:
// Upload the file to S3 using the PHP SDK's S3Client::putObject method
$result = $s3_client->putObject(array(
    'Bucket'     => AWS_BUCKET_NAME,
    'Key'        => $file_name,
    'SourceFile' => $file_path,
    'Metadata'   => array(
        'metadata_field' => 'metadata_value'
    )
));

// Poll the object until it is accessible
$s3_client->waitUntil('ObjectExists', array(
    'Bucket' => AWS_BUCKET_NAME,
    'Key'    => $file_name
));

// Delete the file
unlink($file_path);
These steps work perfectly when I upload a small file (~500 KB).
However, if I upload a larger file (5 MB-10 MB), I get the following error:
Warning: unlink(<Complete Path to File>): Permission denied in <Complete path to uploader.php> on line N
I am working on Windows and have tried elevating user permissions for the directory and file (using chmod and chown from PHP, and making sure that the directory is writable and accessible).
It seems that the AWS S3 PutObject method is not releasing the file handle (only in the case of large files). I have also tried adding sleep(), but no luck!
Moreover, if I skip uploading the file to S3 (just to test my delete workflow), the file gets deleted without any issue.
Please help!
The issue has been raised at https://github.com/aws/aws-sdk-php/issues/841
Try using the gc_collect_cycles() function; it solved the problem for me. See the page above for further reference.
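A minimal sketch of that workaround, using the same variables as in the question:

// ... putObject() and waitUntil() as before ...

// Force a garbage-collection cycle so the SDK releases its handle
// on the source file before we try to delete it.
gc_collect_cycles();

if (!unlink($file_path)) {
    error_log("Could not delete {$file_path}");
}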
Maybe you need to set the value of upload_max_filesize and post_max_size in your php.ini:
; Maximum allowed size for uploaded files.
upload_max_filesize = 40M
; Must be greater than or equal to upload_max_filesize
post_max_size = 40M
After modifying the php.ini file(s), you need to restart your HTTP server for the new configuration to take effect.
In case anybody else is also stuck on this: I moved the nginx server deployment to CentOS and this issue was not observed there.
The waitUntil('ObjectExists') call has a default timeout / maximum number of attempts.
You can change them using:
$s3Client->waitUntil('ObjectExists', array(
    'Bucket'              => AWS_BUCKET_NAME,
    'Key'                 => $file_name,
    'waiter.interval'     => 10,
    'waiter.max_attempts' => 6
));
