I have code that is supposed to upload an 8GB file to the server. The problem I am running into appears to be memory related, because my server only has 4GB of RAM. My upload script is:
try {
    $s3 = S3Client::factory(array(
        'credentials' => $credentials
    ));

    // 2. Create a new multipart upload and get the upload ID.
    $response = $s3->createMultipartUpload(array(
        'Bucket' => $bucket,
        'Key' => $object,
        //'Body' => (strlen($body) < 1000 && file_exists($body)) ? Guzzle\Http\EntityBody::factory(fopen($body, 'r+')) : $body,
        'ACL' => $acl,
        'ContentType' => $content_type,
        'curl.options' => array(
            CURLOPT_TIMEOUT => 12000,
        )
    ));
    $uploadId = $response['UploadId'];

    // 3. Upload the file in parts.
    $file = fopen($body, 'r');
    $parts = array();
    $partNumber = 1;
    while (!feof($file)) {
        $result = $s3->uploadPart(array(
            'Bucket' => $bucket,
            'Key' => $object,
            'UploadId' => $uploadId,
            'PartNumber' => $partNumber,
            'Body' => fread($file, 10 * 1024 * 1024),
        ));
        $parts[] = array(
            'PartNumber' => $partNumber++,
            'ETag' => $result['ETag'],
        );
    }
    fclose($file);

    // 4. Complete the multipart upload.
    $result = $s3->completeMultipartUpload(array(
        'Bucket' => $bucket,
        'Key' => $object,
        'UploadId' => $uploadId,
        'Parts' => $parts,
    ));
    $url = $result['Location'];
    return true;
} catch (Aws\S3\Exception\S3Exception $e) {
    error_log($e->getMessage() . ' ' . $e->getTraceAsString());
    return false;
} catch (Exception $e) {
    error_log($e->getMessage() . ' ' . $e->getTraceAsString());
    return false;
}
Has anyone come across this problem before? And how do I resolve the memory issues?
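One thing worth noting: fread() returns each 10MB part as an in-memory string, and the client can buffer it again internally, so peak memory usage can be several times the part size. If upgrading to SDK v3 is an option, its MultipartUploader accepts an open file handle and streams parts directly from disk. A minimal sketch under that assumption (the bucket, key, and file path are placeholders):

<?php
require 'vendor/autoload.php';

use Aws\S3\S3Client;
use Aws\S3\MultipartUploader;
use Aws\Exception\MultipartUploadException;

$s3 = new S3Client([
    'region'      => 'us-east-1',
    'version'     => 'latest',
    'credentials' => $credentials,
]);

// Passing a stream instead of a string lets the uploader read one part
// at a time rather than holding the whole file in memory.
$source = fopen('/path/to/large-file.bin', 'rb');

$uploader = new MultipartUploader($s3, $source, [
    'bucket'    => 'my-bucket',
    'key'       => 'my-key',
    'part_size' => 10 * 1024 * 1024, // 10MB parts; the SDK minimum is 5MB
]);

try {
    $result = $uploader->upload();
    echo $result['ObjectURL'] . "\n";
} catch (MultipartUploadException $e) {
    error_log($e->getMessage());
}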
I am looking in the docs and the Oracle SDKs to see if there's anything we can use to upload to Oracle storage, but I didn't find any PHP SDK from Oracle. Am I missing something?
I have researched a lot, please help me. I want to use a PHP SDK to upload files and folders to Oracle Cloud and serve the file URLs to my application.
For anyone looking for a solution to the same problem: I figured it out and am posting the answer here.
After looking at many online references I learned that Oracle Object Storage is compatible with the Amazon S3 SDK, so all you need is to use the AWS SDK with the access key and secret you get from Oracle, and you are done. Posting some code.
<?php
require 'vendor/autoload.php';

use Aws\S3\S3Client;
use Aws\Exception\AwsException;
use Aws\S3\Exception\S3Exception;

define('ORACLE_ACCESS_KEY', '***************************************');
define('ORACLE_SECRET_KEY', '***************************************');
define('ORACLE_REGION', '***************************************');
define('ORACLE_NAMESPACE', '***************************************');

function get_oracle_client($endpoint)
{
    $endpoint = "https://".ORACLE_NAMESPACE.".compat.objectstorage.".ORACLE_REGION.".oraclecloud.com/{$endpoint}";
    return new S3Client(array(
        'credentials' => [
            'key' => ORACLE_ACCESS_KEY,
            'secret' => ORACLE_SECRET_KEY,
        ],
        'version' => 'latest',
        'region' => ORACLE_REGION,
        'bucket_endpoint' => true,
        'endpoint' => $endpoint
    ));
}
function upload_file_oracle($bucket_name, $file_name, $folder_name = '')
{
    if (empty(trim($bucket_name))) {
        return array('success' => false, 'message' => 'Please provide a valid bucket name!');
    }
    if (empty(trim($file_name))) {
        return array('success' => false, 'message' => 'Please provide a valid file name!');
    }
    if ($folder_name !== '') {
        $keyname = $folder_name . '/' . $file_name;
        $endpoint = "{$bucket_name}/";
    } else {
        $keyname = $file_name;
        $endpoint = "{$bucket_name}/{$keyname}";
    }
    $s3 = get_oracle_client($endpoint);
    $file_url = "https://objectstorage.".ORACLE_REGION.".oraclecloud.com/n/".ORACLE_NAMESPACE."/b/{$bucket_name}/o/{$keyname}";
    try {
        $s3->putObject(array(
            'Bucket' => $bucket_name,
            'Key' => $keyname,
            'SourceFile' => $file_name,
            'StorageClass' => 'REDUCED_REDUNDANCY'
        ));
        return array('success' => true, 'message' => $file_url);
    } catch (S3Exception $e) {
        return array('success' => false, 'message' => $e->getMessage());
    } catch (Exception $e) {
        return array('success' => false, 'message' => $e->getMessage());
    }
}
function upload_folder_oracle($bucket_name, $folder_name)
{
    if (empty(trim($bucket_name))) {
        return array('success' => false, 'message' => 'Please provide a valid bucket name!');
    }
    if (empty(trim($folder_name))) {
        return array('success' => false, 'message' => 'Please provide a valid folder name!');
    }
    $keyname = $folder_name;
    $endpoint = "{$bucket_name}/{$keyname}";
    $s3 = get_oracle_client($endpoint);
    try {
        $manager = new \Aws\S3\Transfer($s3, $keyname, 's3://' . $bucket_name . '/' . $keyname);
        $manager->transfer();
        return array('success' => true);
    } catch (S3Exception $e) {
        return array('success' => false, 'message' => $e->getMessage());
    } catch (Exception $e) {
        return array('success' => false, 'message' => $e->getMessage());
    }
}
The above code is working and tested. For more details, please visit https://docs.cloud.oracle.com/en-us/iaas/Content/Object/Tasks/s3compatibleapi.htm
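For reference, a minimal usage sketch of the two helpers above (the bucket, file, and folder names are placeholders):

// Upload a single file into a folder inside the bucket.
$result = upload_file_oracle('my-bucket', 'report.pdf', 'invoices');
if ($result['success']) {
    echo 'File URL: ' . $result['message'] . "\n";
} else {
    echo 'Upload failed: ' . $result['message'] . "\n";
}

// Upload an entire local folder to the bucket under the same prefix.
$result = upload_folder_oracle('my-bucket', 'assets');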
I can't figure out what's going on with this. The requirement is to upload large files, up to 25GB, to storage on AWS S3. We have a Laravel app that has been running with no problems for a couple of years, so I just added a controller to do the job with the MultipartUploader tool from AWS...
Files up to 64MB upload with no problem.
Files larger than that return a 500 error, and I find nothing in the log.
I've changed upload_max_filesize and post_max_size to 4G.
Here's the code...
$s3Client = new S3Client([
    'region' => 'us-east-1',
    'version' => 'latest',
    'credentials' => $credentials
]);

$source = fopen($file['fileName']->getRealPath(), 'r');
$storageClass = 'STANDARD_IA';
$chunkSize = 100 * 1024 * 1024; // 100MB

if (!isset($results['backup']['uploadId'])) {
    $response = $s3Client->createMultipartUpload([
        'Bucket' => $bucket,
        'Key' => $path,
        'StorageClass' => $storageClass,
        'Tagging' => '',
        'ServerSideEncryption' => 'AES256',
        'ContentType' => $file['fileName']->getMimeType(),
    ]);
    $results['backup']['uploadId'] = $response['UploadId'];
    $results['backup']['partNumber'] = 1;
}

// Reading parts already uploaded
for ($i = 1; $i < $results['backup']['partNumber']; $i++) {
    set_time_limit(0);
    if (!feof($source)) fread($source, $chunkSize);
}

// Uploading next parts
while (!feof($source)) {
    do {
        try {
            set_time_limit(0);
            $uploadSuccess = $s3Client->uploadPart([
                'Bucket' => $bucket,
                'Key' => $path,
                'UploadId' => $results['backup']['uploadId'],
                'PartNumber' => $results['backup']['partNumber'],
                'Body' => fread($source, $chunkSize),
            ]);
            $results['uploadFile ' . $key] = ['status' => 'success', 'result' => $fileName];
        } catch (MultipartUploadException $e) {
            rewind($source);
            $uploader = new MultipartUploader($s3Client, $source, [
                'state' => $e->getState(),
            ]);
            $results['uploadFile ' . $key] = ['status' => 'error', 'result' => $e->getMessage() . "\n"];
        }
    } while (!isset($uploadSuccess));

    $results['backup']['parts'][] = [
        'PartNumber' => $results['backup']['partNumber'],
        'ETag' => $uploadSuccess['ETag'],
    ];
    $results['backup']['partNumber']++;
}
fclose($source);

$uploadSuccess = $s3Client->completeMultipartUpload([
    'Bucket' => $bucket,
    'Key' => $path,
    'UploadId' => $results['backup']['uploadId'],
    'MultipartUpload' => [
        'Parts' => $results['backup']['parts'],
    ],
]);
unset($results['backup']);
return $results;
You can set the maximum allowed content length in the IIS Request Filtering rules.
The default maximum upload is 30MB; you can change it as needed.
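For example, in web.config this would look something like the sketch below (the 1GB value is illustrative; the attribute is specified in bytes):

<system.webServer>
  <security>
    <requestFiltering>
      <!-- maxAllowedContentLength is in bytes; the default is 30000000 (~30MB) -->
      <requestLimits maxAllowedContentLength="1073741824" />
    </requestFiltering>
  </security>
</system.webServer>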
I am trying to upload a file to AWS S3 but it shows an error like "The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint: test9011960909.s3.amazonaws.com."
I also specified 'region' => 'us-east-1' but the same error still occurs.
It works when I specify
'Bucket' => $this->bucket,
but I want to upload the file into a sub-folder of the main bucket:
'Bucket' => $this->bucket . "/" . $PrefixFolderPath,
I already applied the approved answer from AWS S3: The bucket you are attempting to access must be addressed using the specified endpoint, but I am still getting the same error. I am using PHP.
Code:
use Aws\S3\S3Client;
use Aws\S3\Exception\S3Exception;

class AWSS3Factory {
    private $bucket;
    private $keyname;

    public function __construct() {
        $this->bucket = AWSS3_BucketName;
        $this->keyname = AWSS3_AccessKey;
        // Instantiate the client.
    }

    public function UploadFile($FullFilePath, $PrefixFolderPath = "") {
        try {
            $s3 = S3Client::factory(array(
                'credentials' => array(
                    'key' => MYKEY,
                    'secret' => MYSECKEY,
                    'region' => 'eu-west-1',
                )
            ));
            // Upload data.
            $result = $s3->putObject(array(
                'Bucket' => $this->bucket . "/" . $PrefixFolderPath,
                'Key' => $this->keyname,
                'SourceFile' => $FullFilePath,
                'StorageClass' => 'REDUCED_REDUNDANCY'
            ));
            return true;
            // Print the URL to the object.
            //echo $result['ObjectURL'] . "\n";
        } catch (S3Exception $e) {
            echo $e->getMessage() . "\n";
        }
    }
}
You must create the S3 client instance in another way, like this:
$s3 = S3Client::factory([
    'region' => '',
    'credentials' => ['key' => '***', 'secret' => '***'],
    'version' => 'latest',
]);
You must add $PrefixFolderPath not to 'Bucket' but to 'Key':
$result = $s3->putObject(array(
    'Bucket' => $this->bucket,
    'Key' => $PrefixFolderPath . "/" . $this->keyname,
    'SourceFile' => $FullFilePath,
    'StorageClass' => 'REDUCED_REDUNDANCY'
));
I'm trying to upload a picture to my Amazon S3 via their PHP SDK, so I made a little script to do so. However, my script doesn't work and my exception doesn't send me back any error message.
I'm new to AWS, thank you for your help.
Here is the code:
Config.php
<?php
return array(
    'includes' => array('_aws'),
    'services' => array(
        'default_settings' => array(
            'params' => array(
                'key' => 'PUBLICKEY',
                'secret' => 'PRIVATEKEY',
                'region' => 'eu-west-1'
            )
        )
    )
);
?>
Index.php
<?php
// Installing AWS SDK via phar
require 'aws.phar';

use Aws\S3\S3Client;
use Aws\S3\Exception\S3Exception;

$bucket = 'infact';
$keyname = 'myImage';
// $filepath should be absolute path to a file on disk
$filepath = 'image.jpg';

// Instantiate the client.
$s3 = S3Client::factory('config.php');

// Upload a file.
try {
    $result = $s3->putObject(array(
        'Bucket' => $bucket,
        'Key' => $keyname,
        'SourceFile' => $filePath,
        'ContentType' => 'text/plain',
        'ACL' => 'public-read',
        'StorageClass' => 'REDUCED_REDUNDANCY'
    ));
    // Print the URL to the object.
    echo $result['ObjectURL'] . "\n";
} catch (S3Exception $e) {
    echo $e->getMessage() . "\n";
}
?>
EDIT: I'm now using this code but it's still not working. I don't even get an error or exception message.
<?php
require 'aws.phar';

use Aws\S3\S3Client;
use Aws\S3\Exception\S3Exception;

$bucket = 'infactr';
$keyname = 'sample';
// $filepath should be absolute path to a file on disk
$filepath = 'image.jpg';

// Instantiate the client.
$s3 = S3Client::factory(array(
    'key' => 'key',
    'secret' => 'privatekey',
    'region' => 'eu-west-1'
));

try {
    // Upload data.
    $result = $s3->putObject(array(
        'Bucket' => $bucket,
        'Key' => $keyname,
        'SourceFile' => $filePath,
        'ACL' => 'public-read',
        'ContentType' => 'image/jpeg'
    ));
    // Print the URL to the object.
    echo $result['ObjectURL'] . "\n";
} catch (S3Exception $e) {
    echo $e->getMessage() . "\n";
}
?>
Try something like this (from the AWS docs):
<?php
require 'aws.phar';

use Aws\S3\S3Client;
use Aws\S3\Exception\S3Exception;

$bucket = '<your bucket name>';
$keyname = 'sample';
// $filepath should be absolute path to a file on disk
$filepath = '/path/to/image.jpg';

// Instantiate the client.
$s3 = S3Client::factory(array(
    'key' => 'your AWS access key',
    'secret' => 'your AWS secret access key'
));

try {
    // Upload data.
    $result = $s3->putObject(array(
        'Bucket' => $bucket,
        'Key' => $keyname,
        'SourceFile' => $filepath,
        'ACL' => 'public-read'
    ));
    // Print the URL to the object.
    echo $result['ObjectURL'] . "\n";
} catch (S3Exception $e) {
    echo $e->getMessage() . "\n";
}
?>
It works fine for me, as long as you have the right credentials. Keep in mind that the key name is the name of the file in S3, so if you want your key to have the same name as your file you have to do something like $keyname = 'image.jpg';. Also, a JPG is generally not a text/plain file type; you can omit the ContentType field entirely or simply specify image/jpeg.
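Putting those two notes together, a minimal sketch of the corrected putObject call (the bucket name and path are placeholders):

$result = $s3->putObject(array(
    'Bucket' => 'my-bucket',
    'Key' => 'image.jpg', // the key doubles as the file name in S3
    'SourceFile' => '/path/to/image.jpg',
    'ContentType' => 'image/jpeg', // or omit this field entirely
    'ACL' => 'public-read'
));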
$s3 = S3Client::factory('config.php');
should be
$s3 = S3Client::factory(include 'config.php');
because factory() expects an array of options, and include returns the array that config.php returns.
For those looking for an up-to-date working version, this is what I am using:
// Instantiate the client.
$s3 = S3Client::factory(array(
    'credentials' => [
        'key' => $s3Key,
        'secret' => $s3Secret,
    ],
    'region' => 'us-west-2',
    'version' => "2006-03-01"
));

try {
    // Upload data.
    $result = $s3->putObject(array(
        'Bucket' => $s3Bucket,
        'Key' => $fileId,
        'SourceFile' => $filepath . "/" . $fileName
    ));
    return $result['ObjectURL'];
} catch (S3Exception $e) {
    return false;
}
An alternative way to explain it is by showing the curl command and how to build it in PHP: the pragmatic approach.
Please don't stone me for the ugly code; I just thought this example is easy to follow for uploading to Azure from PHP or any other language.
$azure1 = 'https://viperprodstorage1.blob.core.windows.net/bucketnameAtAzure/';
$azure3 = '?sr=c&si=bucketnameAtAzure-policy&sig=GJ_verySecretHashFromAzure_aw%3D';

// Get the file size by parsing the output of `ls -la`
// (the field index depends on the ls output format).
$shellCmd = 'ls -la ' . $outFileName;
$lsOutput = shell_exec($shellCmd);
#print_r($lsOutput);
$exploded = explode(' ', $lsOutput);
#print_r($exploded);
$fileLength = $exploded[7];

// Build the curl command: PUT the file with an explicit Content-Length header.
$curlAzure1 = "curl -v -X PUT -T '" . $outFileName . "' -H 'Content-Length: " . $fileLength . "' ";
$buildedCurlForUploading = $curlAzure1 . "'" . $azure1 . $outFileName . $azure3 . "'";
var_dump($buildedCurlForUploading);
shell_exec($buildedCurlForUploading);
This is the actual curl command:
shell_exec("curl -v -X PUT -T 'fileName' -H 'Content-Length: fileSize' 'https://viperprodstorage1.blob.core.windows.net/bucketnameAtAzure/fileName?sr=c&si=bucketNameAtAzure-policy&sig=GJ_verySecretHashFromAzure_aw%3D'")
Below is the code to upload an image/file to an Amazon S3 bucket.
function upload_agreement_data($target_path, $source_path, $file_name, $content_type)
{
    $fileup_flag = false;

    /*------------- call global settings helper function starts ----------------*/
    $bucketName = "pilateslogic";
    //$global_setting_option = '__cloud_front_bucket__';
    //$bucketName = get_global_settings($global_setting_option);
    /*------------- call global settings helper function ends ----------------*/

    if (!$bucketName) {
        die("ERROR: Template bucket name not found!");
    }

    // Amazon profile_template template js upload URL
    $target_profile_template_js_url = "/" . $bucketName . "/" . $target_path;

    // Caching profile_template template js upload URL
    //$source_profile_template_js_url = dirname(dirname(dirname(__FILE__))).$source_path."/".$file_name;

    // File name
    $template_js_file = $file_name;

    $this->s3->setEndpoint("s3-ap-southeast-2.amazonaws.com");
    if ($this->s3->putObjectFile($source_path, $target_profile_template_js_url, $template_js_file, S3::ACL_PUBLIC_READ, array(), array("Content-Type" => $content_type))) {
        $fileup_flag = true;
    }
    return $fileup_flag;
}
I've uploaded nearly 25k files (large media files) to an S3 bucket. I used the AWS SDK2 for PHP (S3Client::putObject) to perform the uploads. Now I need to update the metadata for these files, i.e. change the ContentDisposition to attachment and assign a filename.
Is there a way to do this without re-uploading the files? Please help.
Yes, you can use the copyObject method, where you set the CopySource parameter equal to the Bucket and Key parameters.
Example:
// Set up your $s3 connection, and define the bucket and key for your resource.
$s3->copyObject(array(
    'Bucket' => $bucket,
    'CopySource' => "$bucket/$key",
    'Key' => $key,
    'Metadata' => array(
        'ExtraHeader' => 'HEADER VALUE'
    ),
    'MetadataDirective' => 'REPLACE'
));
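Since the question asks specifically about ContentDisposition, note that it is a top-level parameter of copyObject rather than a Metadata entry. A sketch for that case (the filename is a placeholder):

$s3->copyObject(array(
    'Bucket' => $bucket,
    'CopySource' => "$bucket/$key",
    'Key' => $key,
    'ContentDisposition' => 'attachment; filename="media-file.mp4"',
    'MetadataDirective' => 'REPLACE'
));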
Update Cache Control Metadata on S3 Objects
<?php
define('S3_BUCKET', 'bucket-name');
define('S3_ACCESS_KEY', 'your-access-key');
define('S3_SECRET_KEY', 'secret-key');
define('S3_REGION', 'ap-south-1'); // Mumbai

require 'vendors/aws/aws-autoloader.php';

use Aws\S3\S3Client;
use Aws\S3\Exception\S3Exception;

try {
    $s3 = S3Client::factory(array(
        'version' => 'latest',
        'region' => S3_REGION,
        'credentials' => array(
            'secret' => S3_SECRET_KEY,
            'key' => S3_ACCESS_KEY,
        )
    ));

    $objects = $s3->getIterator('ListObjects', array('Bucket' => S3_BUCKET));
    echo "Keys retrieved!\n";
    foreach ($objects as $object) {
        echo $object['Key'] . "\n";
        $s3->copyObject(array(
            'Bucket' => S3_BUCKET,
            'CopySource' => S3_BUCKET . '/' . $object['Key'],
            'Key' => $object['Key'],
            'ContentType' => 'image/jpeg',
            'ACL' => 'public-read',
            'StorageClass' => 'REDUCED_REDUNDANCY',
            'CacheControl' => 'max-age=172800',
            'MetadataDirective' => 'REPLACE'
        ));
    }
} catch (S3Exception $e) {
    echo $e->getMessage() . "\n";
}
Try this.
To delete an existing object:
$keyname = 'product-file/my-object1.dll';
try {
    $delete = $this->s3->deleteObject([
        'Bucket' => 'belisc',
        'Key' => $keyname
    ]);
    if ($delete['DeleteMarker']) {
        return true;
    } else {
        return false;
    }
} catch (S3Exception $e) {
    return $e->getAwsErrorMessage();
}
To check an object (returns true if the object still exists):
$keyname = 'product-file/my-object1.dll';
try {
    $this->s3->getObject([
        'Bucket' => 'belisc',
        'Key' => $keyname
    ]);
    return true;
} catch (S3Exception $e) {
    return $e->getAwsErrorMessage();
}
Then you can upload the new one:
try {
    return $this->s3->putObject([
        'Bucket' => 'belisc',
        'Key' => 'product-file/MiFlashSetup_eng.rar',
        'SourceFile' => 'c:\MiFlashSetup_eng.rar'
    ]);
} catch (S3Exception $e) {
    die("There was an error uploading the file. " . $e->getAwsErrorMessage());
}