I am looking through the docs and Oracle SDKs to see whether there is anything we can use to upload to Oracle Object Storage.
But I didn't find any PHP SDK from Oracle, or am I missing something?
I have researched a lot; please help. I want to use a PHP SDK to upload files and folders to Oracle Cloud and serve those file URLs to my application.
For anyone looking for a solution to the same problem: I figured it out and am posting the answer here.
After going through many online references, I learned that Oracle Object Storage is compatible with the Amazon S3 API. So all you need is the AWS SDK plus an access key and secret from Oracle, and you are done. Posting some code.
<?php
require 'vendor/autoload.php';

use Aws\S3\S3Client;
use Aws\S3\Exception\S3Exception;

define('ORACLE_ACCESS_KEY', '***************************************');
define('ORACLE_SECRET_KEY', '***************************************');
define('ORACLE_REGION', '***************************************');
define('ORACLE_NAMESPACE', '***************************************');

function get_oracle_client($endpoint)
{
    // Oracle's S3-compatible endpoint lives under <namespace>.compat.objectstorage.<region>
    $endpoint = "https://" . ORACLE_NAMESPACE . ".compat.objectstorage." . ORACLE_REGION . ".oraclecloud.com/{$endpoint}";
    return new S3Client(array(
        'credentials' => [
            'key' => ORACLE_ACCESS_KEY,
            'secret' => ORACLE_SECRET_KEY,
        ],
        'version' => 'latest',
        'region' => ORACLE_REGION,
        'bucket_endpoint' => true,
        'endpoint' => $endpoint
    ));
}
function upload_file_oracle($bucket_name, $file_name, $folder_name = '')
{
    // Note: the optional $folder_name comes last; an optional parameter before a
    // required one is deprecated in PHP 8.
    if (empty(trim($bucket_name))) {
        return array('success' => false, 'message' => 'Please provide a valid bucket name!');
    }
    if (empty(trim($file_name))) {
        return array('success' => false, 'message' => 'Please provide a valid file name!');
    }
    // Build the object key. The endpoint only needs the bucket, because the
    // client is created with 'bucket_endpoint' => true.
    $keyname = ($folder_name !== '') ? $folder_name . '/' . $file_name : $file_name;
    $endpoint = "{$bucket_name}/";
    $s3 = get_oracle_client($endpoint);
    $file_url = "https://objectstorage." . ORACLE_REGION . ".oraclecloud.com/n/" . ORACLE_NAMESPACE . "/b/{$bucket_name}/o/{$keyname}";
    try {
        $s3->putObject(array(
            'Bucket' => $bucket_name,
            'Key' => $keyname,
            'SourceFile' => $file_name,
        ));
        return array('success' => true, 'message' => $file_url);
    } catch (S3Exception $e) {
        return array('success' => false, 'message' => $e->getMessage());
    } catch (Exception $e) {
        return array('success' => false, 'message' => $e->getMessage());
    }
}
function upload_folder_oracle($bucket_name, $folder_name)
{
    if (empty(trim($bucket_name))) {
        return array('success' => false, 'message' => 'Please provide a valid bucket name!');
    }
    if (empty(trim($folder_name))) {
        return array('success' => false, 'message' => 'Please provide a valid folder name!');
    }
    $keyname = $folder_name;
    $endpoint = "{$bucket_name}/";
    $s3 = get_oracle_client($endpoint);
    try {
        // Recursively upload the local folder to s3://<bucket>/<folder>
        $manager = new \Aws\S3\Transfer($s3, $keyname, 's3://' . $bucket_name . '/' . $keyname);
        $manager->transfer();
        return array('success' => true);
    } catch (S3Exception $e) {
        return array('success' => false, 'message' => $e->getMessage());
    } catch (Exception $e) {
        return array('success' => false, 'message' => $e->getMessage());
    }
}
The above code is working and tested. For more details, please visit https://docs.cloud.oracle.com/en-us/iaas/Content/Object/Tasks/s3compatibleapi.htm
Has anyone worked with Google Drive files? Can you tell me how to overwrite a file?
I currently use the function below, but even when the names are the same, it creates another file.
function uploadFile($fileName, $folderId, $file) {
    $service = getGoogleDriveService();
    $fileMetadata = new Google_Service_Drive_DriveFile(array(
        'name' => $fileName,
        'parents' => array($folderId),
    ));
    try {
        $newFile = $service->files->create(
            $fileMetadata,
            array(
                'data' => file_get_contents($file),
                'mimeType' => mime_content_type($file),
                'uploadType' => 'multipart',
            )
        );
        return $newFile->id;
    } catch (Exception $e) {
        return 'An error occurred: ' . $e->getMessage();
    }
}
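Drive's `files->create` always makes a new file, because Drive allows duplicate names in a folder. To overwrite, you have to locate the existing file's ID and call `files->update` instead. Below is a hedged sketch building on the function above; `getGoogleDriveService()` is the same assumed helper, and the query escaping via `addslashes` is a simplification:

```php
<?php
// Sketch: create-or-overwrite for Google Drive. Assumes the google-api-php-client
// library and the getGoogleDriveService() helper from the question.
function uploadOrOverwriteFile($fileName, $folderId, $file) {
    $service = getGoogleDriveService();
    // Look for an existing, non-trashed file with this name in the target folder.
    $existing = $service->files->listFiles(array(
        'q' => sprintf("name = '%s' and '%s' in parents and trashed = false",
                       addslashes($fileName), $folderId),
        'fields' => 'files(id)',
        'pageSize' => 1,
    ));
    $opts = array(
        'data' => file_get_contents($file),
        'mimeType' => mime_content_type($file),
        'uploadType' => 'multipart',
    );
    $files = $existing->getFiles();
    if (count($files) > 0) {
        // Overwrite: replace the content of the existing file in place.
        $updated = $service->files->update(
            $files[0]->getId(),
            new Google_Service_Drive_DriveFile(), // no metadata changes
            $opts
        );
        return $updated->id;
    }
    // No match: create a new file as before.
    $meta = new Google_Service_Drive_DriveFile(array(
        'name' => $fileName,
        'parents' => array($folderId),
    ));
    return $service->files->create($meta, $opts)->id;
}
```

Note that `update` takes the file ID rather than metadata with `parents`; moving a file between folders would need the separate `addParents`/`removeParents` options.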
I have code that is supposed to upload an 8 GB file to the server. The problem I am running into appears to be memory related, because my server only has 4 GB of RAM. My upload script is:
$s3 = S3Client::factory(array(
    'credentials' => $credentials
));

try {
    // 2. Create a new multipart upload and get the upload ID.
    $response = $s3->createMultipartUpload(array(
        'Bucket' => $bucket,
        'Key' => $object,
        'ACL' => $acl,
        'ContentType' => $content_type,
    ));
    $uploadId = $response['UploadId'];

    // 3. Upload the file in 10 MB parts.
    $file = fopen($body, 'r');
    $parts = array();
    $partNumber = 1;
    while (!feof($file)) {
        $result = $s3->uploadPart(array(
            'Bucket' => $bucket,
            'Key' => $object,
            'UploadId' => $uploadId,
            'PartNumber' => $partNumber,
            'Body' => fread($file, 10 * 1024 * 1024),
        ));
        $parts[] = array(
            'PartNumber' => $partNumber++,
            'ETag' => $result['ETag'],
        );
    }
    fclose($file);

    // 4. Complete the multipart upload.
    $result = $s3->completeMultipartUpload(array(
        'Bucket' => $bucket,
        'Key' => $object,
        'UploadId' => $uploadId,
        'Parts' => $parts,
    ));
    $url = $result['Location'];
    return true;
} catch (Aws\Exception\S3Exception $e) {
    error_log($e->getMessage() . ' ' . $e->getTraceAsString());
    return false;
} catch (Exception $e) {
    error_log($e->getMessage() . ' ' . $e->getTraceAsString());
    return false;
}
Has anyone come across this problem before? And how do I resolve the memory issues?
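The loop above should only hold one 10 MB chunk at a time, but on SDKs of that era the request body could end up buffered in ways that add up. If upgrading to AWS SDK for PHP v3 is an option, its `MultipartUploader` streams the file from disk and keeps memory bounded. A sketch with placeholder region, bucket, key, and path values:

```php
<?php
// Sketch of a low-memory 8 GB upload with AWS SDK for PHP v3.
// Region, bucket, key, and file path below are placeholders.
require 'vendor/autoload.php';

use Aws\S3\S3Client;
use Aws\S3\MultipartUploader;
use Aws\Exception\MultipartUploadException;

$s3 = new S3Client([
    'version' => 'latest',
    'region'  => 'us-east-1',
]);

try {
    $uploader = new MultipartUploader($s3, '/path/to/large-file.bin', [
        'bucket'    => 'my-bucket',
        'key'       => 'large-file.bin',
        'part_size' => 10 * 1024 * 1024, // 10 MB parts, streamed from disk
    ]);
    $result = $uploader->upload();
    echo "Uploaded to {$result['ObjectURL']}\n";
} catch (MultipartUploadException $e) {
    // The uploader aborts cleanly; the exception carries the failed state.
    error_log($e->getMessage());
}
```

The uploader handles the create/uploadPart/complete calls internally, so there is no per-part bookkeeping to keep in PHP memory.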
I'm uploading some images to S3; how can I check whether the transfer succeeded?
Here is my code. I corrupted the access key on purpose so the files fail to upload; how can I catch this error and act on it?
The code below does not catch anything, even though the images fail to upload.
$this->commands[] = $this->s3->getCommand('PutObject', [
    'Bucket' => env('AWS_BUCKET'),
    'Key' => $name,
    'Body' => $img,
    'ContentType' => $mime,
    'ACL' => 'public-read'
]);
$pool = new CommandPool($this->s3, $this->commands);
$promise = $pool->promise();
try {
    $result = $promise->wait();
} catch (AwsException $e) {
    // Make sure AwsException is imported (use Aws\Exception\AwsException;),
    // otherwise this catch silently never matches inside a namespaced class.
    var_dump($e);
}
I'm using PHP SDK 3.0.
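With `CommandPool` in SDK v3, per-command failures can also be observed directly through the pool's `'fulfilled'` and `'rejected'` callbacks, which fire once per command rather than only on the aggregate promise. A sketch reusing the `$this->s3` client and `$this->commands` array from the question:

```php
<?php
// Sketch: per-command success/failure reporting with CommandPool (AWS SDK v3).
// Assumes $this->s3 and $this->commands are set up as in the question.
use Aws\CommandPool;
use Aws\Exception\AwsException;

$pool = new CommandPool($this->s3, $this->commands, [
    'concurrency' => 5,
    'fulfilled' => function ($result, $iterKey) {
        // $result is the PutObject response for command $iterKey.
        echo "Upload {$iterKey} succeeded\n";
    },
    'rejected' => function (AwsException $reason, $iterKey) {
        // Fires once per failed command, e.g. on invalid credentials.
        error_log("Upload {$iterKey} failed: " . $reason->getAwsErrorMessage());
    },
]);
$pool->promise()->wait();
```

This way a single bad image does not mask the status of the others.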
I am also using PHP SDK 3.0 and was able to get the result of the upload using the following:
$result = $s3Client->putObject(array(
    'Bucket' => $bucketName,
    'Key' => $backupFileName,
    'SourceFile' => $backupFile,
));
$code = $result['@metadata']['statusCode'];
$uri = $result['@metadata']['effectiveUri'];
if ($code === 200) {
    // Success code here
}
I'm trying to upload a picture to my Amazon S3 bucket via their PHP SDK, so I made a little script to do so. However, my script doesn't work, and my exception doesn't send back any error message.
I'm new to AWS; thank you for your help.
Here is the code:
Config.php
<?php
return array(
    'includes' => array('_aws'),
    'services' => array(
        'default_settings' => array(
            'params' => array(
                'key' => 'PUBLICKEY',
                'secret' => 'PRIVATEKEY',
                'region' => 'eu-west-1'
            )
        )
    )
);
?>
Index.php
<?php
// Installing AWS SDK via phar
require 'aws.phar';

use Aws\S3\S3Client;
use Aws\S3\Exception\S3Exception;

$bucket = 'infact';
$keyname = 'myImage';
// $filepath should be an absolute path to a file on disk
$filepath = 'image.jpg';

// Instantiate the client.
$s3 = S3Client::factory('config.php');

// Upload a file.
try {
    $result = $s3->putObject(array(
        'Bucket' => $bucket,
        'Key' => $keyname,
        'SourceFile' => $filepath,
        'ContentType' => 'text/plain',
        'ACL' => 'public-read',
        'StorageClass' => 'REDUCED_REDUNDANCY'
    ));
    // Print the URL to the object.
    echo $result['ObjectURL'] . "\n";
} catch (S3Exception $e) {
    echo $e->getMessage() . "\n";
}
?>
EDIT: I'm now using this code, but it's still not working. I don't even get an error or exception message.
<?php
require 'aws.phar';

use Aws\S3\S3Client;
use Aws\S3\Exception\S3Exception;

$bucket = 'infactr';
$keyname = 'sample';
// $filepath should be an absolute path to a file on disk
$filepath = 'image.jpg';

// Instantiate the client.
$s3 = S3Client::factory(array(
    'key' => 'key',
    'secret' => 'privatekey',
    'region' => 'eu-west-1'
));

try {
    // Upload data.
    $result = $s3->putObject(array(
        'Bucket' => $bucket,
        'Key' => $keyname,
        'SourceFile' => $filepath,
        'ACL' => 'public-read',
        'ContentType' => 'image/jpeg'
    ));
    // Print the URL to the object.
    echo $result['ObjectURL'] . "\n";
} catch (S3Exception $e) {
    echo $e->getMessage() . "\n";
}
?>
Try something like this (from the AWS docs):
<?php
require 'aws.phar';

use Aws\S3\S3Client;
use Aws\S3\Exception\S3Exception;

$bucket = '<your bucket name>';
$keyname = 'sample';
// $filepath should be an absolute path to a file on disk
$filepath = '/path/to/image.jpg';

// Instantiate the client.
$s3 = S3Client::factory(array(
    'key' => 'your AWS access key',
    'secret' => 'your AWS secret access key'
));

try {
    // Upload data.
    $result = $s3->putObject(array(
        'Bucket' => $bucket,
        'Key' => $keyname,
        'SourceFile' => $filepath,
        'ACL' => 'public-read'
    ));
    // Print the URL to the object.
    echo $result['ObjectURL'] . "\n";
} catch (S3Exception $e) {
    echo $e->getMessage() . "\n";
}
?>
It works fine for me as long as you have the right credentials. Keep in mind that the key name is the name of your file in S3, so if you want the key to have the same name as your file, you have to do something like `$keyname = 'image.jpg';`. Also, a JPG is generally not a `text/plain` file type; you can omit the ContentType field entirely, or simply specify `image/jpeg`.
$s3 = S3Client::factory('config.php');
should be
$s3 = S3Client::factory(include 'config.php');
For those looking for an up-to-date working version, this is what I am using:
// Instantiate the client.
$s3 = S3Client::factory(array(
    'credentials' => [
        'key' => $s3Key,
        'secret' => $s3Secret,
    ],
    'region' => 'us-west-2',
    'version' => "2006-03-01"
));

try {
    // Upload data.
    $result = $s3->putObject(array(
        'Bucket' => $s3Bucket,
        'Key' => $fileId,
        'SourceFile' => $filepath . "/" . $fileName
    ));
    return $result['ObjectURL'];
} catch (S3Exception $e) {
    return false;
}
An alternative way to explain it is by showing the curl command and how to build it in PHP: the pragmatic approach.
Please don't stone me for the ugly code; I just thought this example was easy to follow for uploading to Azure from PHP or any other language.
$azure1 = 'https://viperprodstorage1.blob.core.windows.net/bucketnameAtAzure/';
$azure3 = '?sr=c&si=bucketnameAtAzure-policy&sig=GJ_verySecretHashFromAzure_aw%3D';
// filesize() is simpler and safer than parsing `ls -la` output
$fileLength = filesize($outFileName);
$curlAzure1 = "curl -v -X PUT -T '" . $outFileName . "' -H 'Content-Length: " . $fileLength . "' ";
$builtCurlForUploading = $curlAzure1 . "'" . $azure1 . $outFileName . $azure3 . "'";
var_dump($builtCurlForUploading);
shell_exec($builtCurlForUploading);
This is the actual curl command:
shell_exec("curl -v -X PUT -T 'fileName' -H 'Content-Length: fileSize' 'https://viperprodstorage1.blob.core.windows.net/bucketnameAtAzure/fileName?sr=c&si=bucketNameAtAzure-policy&sig=GJ_verySecretHashFromAzure_aw%3D'")
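The same PUT can be issued with PHP's curl extension instead of shelling out, which avoids quoting problems in file names. A sketch: the account, container, and SAS token below are placeholders, and the `x-ms-blob-type: BlockBlob` header is included because Azure's Put Blob operation expects it:

```php
<?php
// Sketch: upload a local file to Azure Blob Storage with the PHP curl extension.
// The storage URL and SAS token are placeholders copied from the example above.
$outFileName = 'backup.tar.gz';
$url = 'https://viperprodstorage1.blob.core.windows.net/bucketnameAtAzure/'
     . rawurlencode($outFileName)
     . '?sr=c&si=bucketnameAtAzure-policy&sig=GJ_verySecretHashFromAzure_aw%3D';

$fh = fopen($outFileName, 'rb');
$ch = curl_init($url);
curl_setopt_array($ch, [
    CURLOPT_PUT            => true,              // HTTP PUT with a streamed body
    CURLOPT_INFILE         => $fh,               // read the body from this handle
    CURLOPT_INFILESIZE     => filesize($outFileName),
    CURLOPT_HTTPHEADER     => ['x-ms-blob-type: BlockBlob'],
    CURLOPT_RETURNTRANSFER => true,
]);
$response = curl_exec($ch);
$status = curl_getinfo($ch, CURLINFO_HTTP_CODE); // Azure returns 201 Created on success
curl_close($ch);
fclose($fh);
```

Streaming via `CURLOPT_INFILE` also keeps memory flat for large files, unlike reading the whole file into a string.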
Below is the code for uploading an image/file to an Amazon S3 bucket.
function upload_agreement_data($target_path, $source_path, $file_name, $content_type)
{
    $fileup_flag = false;
    $bucketName = "pilateslogic";
    if (!$bucketName) {
        die("ERROR: Template bucket name not found!");
    }
    // Target path of the file inside the bucket
    $target_profile_template_js_url = "/" . $bucketName . "/" . $target_path;
    // File name
    $template_js_file = $file_name;
    $this->s3->setEndpoint("s3-ap-southeast-2.amazonaws.com");
    if ($this->s3->putObjectFile($source_path, $target_profile_template_js_url, $template_js_file, S3::ACL_PUBLIC_READ, array(), array("Content-Type" => $content_type))) {
        $fileup_flag = true;
    }
    return $fileup_flag;
}
I've uploaded nearly 25k files (large media files) to an S3 bucket using AWS SDK2 for PHP (S3Client::putObject). Now I need to update the metadata for these files, i.e. change the ContentDisposition to attachment and assign a filename.
Is there a way to do this without re-uploading the files? Please help.
Yes, you can use the copyObject method, where you set the CopySource parameter equal to the Bucket and Key parameters.
Example:
// setup your $s3 connection, and define the bucket and key for your resource.
$s3->copyObject(array(
    'Bucket' => $bucket,
    'CopySource' => "$bucket/$key",
    'Key' => $key,
    'Metadata' => array(
        'ExtraHeader' => 'HEADER VALUE'
    ),
    'MetadataDirective' => 'REPLACE'
));
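Applied to the asker's specific goal (forcing downloads with a filename), the same copy-in-place can set `ContentDisposition`; with `MetadataDirective` set to `REPLACE`, the content type should be restated too or it is lost. The content type and filename below are placeholders:

```php
<?php
// Sketch: rewrite ContentDisposition on an existing object without re-uploading.
// $s3, $bucket, and $key are assumed to be set up as in the answer above;
// the content type and download filename are placeholder values.
$s3->copyObject(array(
    'Bucket'             => $bucket,
    'CopySource'         => "$bucket/$key",
    'Key'                => $key,
    'ContentType'        => 'video/mp4', // restate the original type under REPLACE
    'ContentDisposition' => 'attachment; filename="my-video.mp4"',
    'MetadataDirective'  => 'REPLACE',   // replace metadata instead of copying it
));
```

For 25k objects this still means one copyObject call per file, so iterating with a `ListObjects` paginator and batching is advisable.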
Update Cache Control Metadata on S3 Objects
<?php
define('S3_BUCKET', 'bucket-name');
define('S3_ACCESS_KEY', 'your-access-key');
define('S3_SECRET_KEY', 'secret-key');
define('S3_REGION', 'ap-south-1'); // Mumbai

require 'vendors/aws/aws-autoloader.php';

use Aws\S3\S3Client;
use Aws\S3\Exception\S3Exception;

try {
    $s3 = S3Client::factory(array(
        'version' => 'latest',
        'region' => S3_REGION,
        'credentials' => array(
            'secret' => S3_SECRET_KEY,
            'key' => S3_ACCESS_KEY,
        )
    ));
    $objects = $s3->getIterator('ListObjects', array('Bucket' => S3_BUCKET));
    echo "Keys retrieved!\n";
    foreach ($objects as $object) {
        echo $object['Key'] . "\n";
        // Copy each object onto itself, replacing its metadata.
        $s3->copyObject(array(
            'Bucket' => S3_BUCKET,
            'CopySource' => S3_BUCKET . '/' . $object['Key'],
            'Key' => $object['Key'],
            'ContentType' => 'image/jpeg',
            'ACL' => 'public-read',
            'StorageClass' => 'REDUCED_REDUNDANCY',
            'CacheControl' => 'max-age=172800',
            'MetadataDirective' => 'REPLACE'
        ));
    }
} catch (S3Exception $e) {
    echo $e->getMessage() . "\n";
}
Try this.
To delete an existing object:
$keyname = 'product-file/my-object1.dll';
try {
    $delete = $this->s3->deleteObject([
        'Bucket' => 'belisc',
        'Key' => $keyname
    ]);
    // Note: DeleteMarker is only returned for versioned buckets.
    if ($delete['DeleteMarker']) {
        return true;
    } else {
        return false;
    }
} catch (S3Exception $e) {
    return $e->getAwsErrorMessage();
}
To check an object (returns true if the object still exists):
$keyname = 'product-file/my-object1.dll';
try {
    $this->s3->getObject([
        'Bucket' => 'belisc',
        'Key' => $keyname
    ]);
    return true;
} catch (S3Exception $e) {
    return $e->getAwsErrorMessage();
}
Then you can upload the new one:
try {
    return $this->s3->putObject([
        'Bucket' => 'belisc',
        'Key' => 'product-file/MiFlashSetup_eng.rar',
        'SourceFile' => 'c:\MiFlashSetup_eng.rar'
    ]);
} catch (S3Exception $e) {
    die("There was an error uploading the file. " . $e->getAwsErrorMessage());
}
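As a lighter alternative to the existence check above: `getObject` downloads the whole object body just to test existence, while the SDK's `doesObjectExist()` issues a HEAD request instead. A sketch, assuming the same `$this->s3` client, bucket, and key as above:

```php
<?php
// Sketch: HEAD-based existence check with the AWS SDK for PHP v3.
// Assumes the same $this->s3 client and 'belisc' bucket as the answer above.
$exists = $this->s3->doesObjectExist('belisc', 'product-file/my-object1.dll');
if (!$exists) {
    // The old object is gone; safe to upload the replacement here.
}
```

This avoids transferring a potentially large file over the network just to learn whether it is still there.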