Image Upload Management in PHP

In our application, a user is allowed to upload an image of dimensions 1024 x 768 (around 150 KB).
When the user uploads an image, the following things happen:
1) The image is uploaded to a temporary directory.
2) The image is cropped into four different sizes.
3) The original image and its cropped versions are uploaded to the Amazon S3 server.
The above process proves to be time-consuming for the user.
After profiling with Xdebug, it seems that 90% of the time is consumed by uploading the images to Amazon S3.
I am using the method below to save an image in an Amazon S3 bucket:
public function saveInBucket( $sourceLoc, $bucketName = '', $destinationLoc = '' ) {
    if( $bucketName <> '' && $destinationLoc <> '' && $sourceLoc <> '' ) {
        $s3 = new AmazonS3();
        $response = $s3->create_object( $bucketName.'.xyz.com', $destinationLoc, array(
            'contentType' => 'application/force-download',
            'acl'         => AmazonS3::ACL_PUBLIC,
            'fileUpload'  => $sourceLoc
        ));
        if ( (int) $response->isOK() ) {
            return TRUE;
        }
        $this->ErrorMessage = 'File upload operation failed, please try again later';
        return FALSE;
    }
    return FALSE;
}
I also thought of uploading the image directly to Amazon S3, but I cannot do that since I also have to crop the image into four different sizes.
How can I speed up or improve the image management process?

This happened to me before. What you can do is:
When you resize your image, convert it to a string.
I was using the WideImage class.
Example:
$image = WideImage::load($_FILES["file"]['tmp_name']);
$resized = $image->resize(1024);
$data = $resized->asString('jpg');
Then, when you're uploading to Amazon, use the 'body' parameter instead of 'fileUpload'.
Example:
$response = $s3->create_object( $bucketName.'.xyz.com', $destinationLoc, array(
    'contentType' => 'application/force-download',
    'acl'  => AmazonS3::ACL_PUBLIC,
    'body' => $data
));
I hope that helps.
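Putting the two snippets together, here is a minimal sketch (the helper name and the size list are assumptions, not the asker's code) that resizes the uploaded file in memory with WideImage and pushes each variant to S3 through 'body', so no intermediate files need to be written to disk and re-read:
// Hypothetical helper: resize in memory and upload each variant via 'body'.
// The $sizes list is an assumption; adjust it to the four sizes your app needs.
function uploadResizedVariants( AmazonS3 $s3, $bucketName, $tmpName, $baseKey ) {
    $sizes = array( 'original' => 1024, 'large' => 800, 'medium' => 400, 'thumb' => 150 );
    $image = WideImage::load( $tmpName );
    foreach ( $sizes as $label => $width ) {
        $data = $image->resize( $width )->asString( 'jpg' );
        $response = $s3->create_object( $bucketName.'.xyz.com', $baseKey.'_'.$label.'.jpg', array(
            'contentType' => 'image/jpeg', // serve inline instead of forcing a download
            'acl'         => AmazonS3::ACL_PUBLIC,
            'body'        => $data
        ));
        if ( !$response->isOK() ) {
            return FALSE; // stop on the first failed variant
        }
    }
    return TRUE;
}
Most of the time will still be spent on the four S3 requests themselves, but this removes the temporary files and the extra disk I/O from the process.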

Related

How to download the latest or most recently added file from AWS S3 using PHP (Yii2)

I have a non-versioned S3 bucket (VersionId is null for all files), and the files have different names.
My current code is:
$path = $this->key.'/primary/pdfs/'.$id.'/';
$result = $this->s3->listObjects(['Bucket' => $this->bucket,"Prefix" => $path])->toArray();
//get the last object from s3
$object = end($result['Contents']);
$key = $object['Key'];
$file = $this->s3->getObject([
    'Bucket' => $this->bucket,
    'Key' => $key
]);
//download the file
header('Content-Type: application/pdf');
echo $file['Body'];
The above is incorrect: it returns the last object in the listing, which is not the most recently added file.
Do I need to use the API call below? If so, how do I use it?
$result = $this->s3->listObjectVersions(['Bucket' => $this->bucket,"Prefix" => $path])->toArray();
Since the VersionId of all files is null, there should be only one version of each file in the bucket.
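A sketch of one way to do this without versioning (not Yii2-specific): listObjects sorts results by key name, not by upload time, so sort the listing yourself by each object's LastModified date and fetch the newest key:
$result = $this->s3->listObjects(['Bucket' => $this->bucket, 'Prefix' => $path])->toArray();
$objects = $result['Contents'];

// Sort newest first; LastModified is an ISO-8601 date, so strtotime() can compare it
usort($objects, function ($a, $b) {
    return strtotime((string) $b['LastModified']) - strtotime((string) $a['LastModified']);
});

$latest = $objects[0];
$file = $this->s3->getObject([
    'Bucket' => $this->bucket,
    'Key'    => $latest['Key'],
]);

header('Content-Type: application/pdf');
echo $file['Body'];
Note that listObjects returns at most 1,000 keys per call, so a prefix with more objects than that would need pagination before sorting.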

How to upload large files (around 10GB)

I want to transfer an archive of around 10 GB to my Amazon S3 bucket using a PHP script (it's a backup script).
I currently use the following code:
$uploader = new \Aws\S3\MultipartCopy($s3Client, $tmpFilesBackupDirectory, [
    'Bucket' => 'MyBucketName',
    'Key' => 'backup'.date('Y-m-d').'.tar.gz',
    'StorageClass' => $storageClass,
    'Tagging' => 'expiration='.$tagging,
    'ServerSideEncryption' => 'AES256',
]);
try
{
    $result = $uploader->copy();
    echo "Upload complete: {$result['ObjectURL']}\n";
}
catch (Aws\Exception\MultipartUploadException $e)
{
    echo $e->getMessage() . "\n";
}
My issue is that after a few minutes (let's say 10 minutes), I receive an error message from the Apache server: 504 Gateway Timeout.
I understand that this error is related to the configuration of my Apache server, but I don't want to increase my server's timeout.
My idea is to use the PHP SDK low-level API to do the following steps:
Use the Aws\S3\S3Client::uploadPart() method to manually upload 5 parts, and store the responses in $_SESSION (I need the ETag values to complete the upload);
Reload the page using header('Location: xxx');
Perform the first two steps again for the next 5 parts, until all parts are uploaded;
Finalise the upload using Aws\S3\S3Client::completeMultipartUpload().
I suppose this should work, but before using this method, I'd like to know if there is an easier way to achieve my goal, for example by using the high-level API...
Any suggestions?
NOTE: I'm not looking for an existing script; my main goal is to learn how to fix this issue :)
Best regards,
Lionel
Why not just use the AWS CLI to copy the file? You can create a script around the CLI so that everything stays AWS-native. (Amazon has a tutorial on that.) You can also use the scp command:
scp -i Amazonkey.pem /local/path/backupfile.tar.gz ec2-user@Elastic-IP-of-ec2-2:/path/backupfile.tar.gz
From my perspective, it would be easier to do the work within AWS, which has features to move files and data. If you'd like to use a shell script, this article on automating EC2 backups has a good one, plus more detail on backup options.
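For reference, the high-level API mentioned in the question is Aws\S3\MultipartUploader, which handles the part bookkeeping internally; a minimal sketch is below. Note that it keeps a single request running for the whole transfer, so on its own it does not avoid the 504:
use Aws\S3\MultipartUploader;
use Aws\Exception\MultipartUploadException;

// Sketch of the high-level multipart API; the bucket name is a placeholder.
$uploader = new MultipartUploader($s3Client, $tmpFilesBackupDirectory, [
    'bucket' => 'MyBucketName',
    'key'    => 'backup'.date('Y-m-d').'.tar.gz',
]);

try {
    $result = $uploader->upload();
    echo "Upload complete: {$result['ObjectURL']}\n";
} catch (MultipartUploadException $e) {
    echo $e->getMessage() . "\n";
}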
To answer my own question (I hope it might help someone one day!), here is how I fixed my issue, step by step:
1/ When I load my page, I check whether the archive already exists. If not, I create my .tar.gz file and reload the page using header().
I noticed that this step was quite slow since there is a lot of data to archive. That's why I reload the page, to avoid any timeout during the next steps!
2/ If the backup file exists, I use AWS multipart upload to send 10 chunks of 100 MB each. Every time a chunk is sent successfully, I update a session variable ($_SESSION['backup']['partNumber']) to know which chunk needs to be uploaded next.
Once my 10 chunks are sent, I reload the page again to avoid any timeout.
3/ I repeat the second step until all parts are uploaded, using my session variable to know which part needs to be sent next.
4/ Finally, I complete the multipart upload and delete the archive stored locally.
You can of course send more than 10 x 100 MB before reloading your page. I chose this value to be sure I won't reach a timeout even if the upload is slow, but I could probably send around 5 GB each time without issue.
Note: you cannot redirect your script to itself too many times. There is a limit (I think it's around 20 redirects for Chrome and Firefox before you get an error, and more for IE). In my case (the archive is around 10 GB), transferring 1 GB per reload is fine (the page is reloaded around 10 times), but if the archive size increases, I'll have to send more chunks each time.
Here is my full script. It could surely be improved, but it's working quite well for now, and it may help someone with a similar issue!
public function backup()
{
    ini_set('max_execution_time', '1800');
    ini_set('memory_limit', '1024M');
    require ROOT.'/Public/scripts/aws/aws-autoloader.php';

    $s3Client = new \Aws\S3\S3Client([
        'version' => 'latest',
        'region' => 'eu-west-1',
        'credentials' => [
            'key' => '',
            'secret' => '',
        ],
    ]);

    // Database backup: dump, then upload in a single putObject call
    $tmpDBBackupDirectory = ROOT.'Var/Backups/backup'.date('Y-m-d').'.sql.gz';
    if(!file_exists($tmpDBBackupDirectory))
    {
        $this->cleanInterruptedMultipartUploads($s3Client);
        $this->createSQLBackupFile();
        $this->uploadSQLBackup($s3Client, $tmpDBBackupDirectory);
    }

    // Files backup: build the archive, then upload it chunk by chunk across page reloads
    $tmpFilesBackupDirectory = ROOT.'Var/Backups/backup'.date('Y-m-d').'.tar.gz';
    if(!isset($_SESSION['backup']['archiveReady']))
    {
        $this->createFTPBackupFile();
        header('Location: '.CURRENT_URL);
        die; // stop here; the upload starts on the next page load
    }
    $this->uploadFTPBackup($s3Client, $tmpFilesBackupDirectory);

    unlink($tmpDBBackupDirectory);
    unlink($tmpFilesBackupDirectory);
}
public function createSQLBackupFile()
{
    // Backup DB
    $tmpDBBackupDirectory = ROOT.'Var/Backups/backup'.date('Y-m-d').'.sql.gz';
    if(!file_exists($tmpDBBackupDirectory))
    {
        $return_var = NULL;
        $output = NULL;
        $dbLogin = '';
        $dbPassword = '';
        $dbName = '';
        $command = 'mysqldump -u '.$dbLogin.' -p'.$dbPassword.' '.$dbName.' --single-transaction --quick | gzip > '.$tmpDBBackupDirectory;
        exec($command, $output, $return_var);
    }
    return $tmpDBBackupDirectory;
}
public function createFTPBackupFile()
{
    // Compacting all files (note: tar -cf alone does not gzip; -czf would produce a real .tar.gz)
    $tmpFilesBackupDirectory = ROOT.'Var/Backups/backup'.date('Y-m-d').'.tar.gz';
    $command = 'tar -cf '.$tmpFilesBackupDirectory.' '.ROOT;
    exec($command);
    $_SESSION['backup']['archiveReady'] = true;
    return $tmpFilesBackupDirectory;
}
public function uploadSQLBackup($s3Client, $tmpDBBackupDirectory)
{
    $result = $s3Client->putObject([
        'Bucket' => '',
        'Key' => 'backup'.date('Y-m-d').'.sql.gz',
        'SourceFile' => $tmpDBBackupDirectory,
        'StorageClass' => '',
        'Tagging' => '',
        'ServerSideEncryption' => 'AES256',
    ]);
}
public function uploadFTPBackup($s3Client, $tmpFilesBackupDirectory)
{
    $storageClass = 'STANDARD_IA';
    $bucket = '';
    $key = 'backup'.date('Y-m-d').'.tar.gz';
    $chunkSize = 100 * 1024 * 1024; // 100MB
    $reloadFrequency = 10;          // number of chunks to send before reloading the page

    // First pass: start the multipart upload and remember it in the session
    if(!isset($_SESSION['backup']['uploadId']))
    {
        $response = $s3Client->createMultipartUpload([
            'Bucket' => $bucket,
            'Key' => $key,
            'StorageClass' => $storageClass,
            'Tagging' => '',
            'ServerSideEncryption' => 'AES256',
        ]);
        $_SESSION['backup']['uploadId'] = $response['UploadId'];
        $_SESSION['backup']['partNumber'] = 1;
    }

    $file = fopen($tmpFilesBackupDirectory, 'r');
    $parts = array();

    // Skip the parts already uploaded during previous page loads
    for($i = 1; $i < $_SESSION['backup']['partNumber']; $i++)
    {
        if(!feof($file))
        {
            fread($file, $chunkSize);
        }
    }

    // Uploading next parts
    while(!feof($file))
    {
        $chunk = fread($file, $chunkSize);
        unset($result);
        do
        {
            try
            {
                $result = $s3Client->uploadPart(array(
                    'Bucket' => $bucket,
                    'Key' => $key,
                    'UploadId' => $_SESSION['backup']['uploadId'],
                    'PartNumber' => $_SESSION['backup']['partNumber'],
                    'Body' => $chunk,
                ));
            }
            catch (\Aws\Exception\AwsException $e)
            {
                // the part failed: loop and retry it
            }
        }
        while (!isset($result));

        $_SESSION['backup']['parts'][] = array(
            'PartNumber' => $_SESSION['backup']['partNumber'],
            'ETag' => $result['ETag'],
        );
        $_SESSION['backup']['partNumber']++;

        // After $reloadFrequency chunks, reload the page to stay under the server timeout
        if($_SESSION['backup']['partNumber'] % $reloadFrequency == 1)
        {
            header('Location: '.CURRENT_URL);
            die;
        }
    }
    fclose($file);

    // All parts sent: finalise the multipart upload
    $result = $s3Client->completeMultipartUpload(array(
        'Bucket' => $bucket,
        'Key' => $key,
        'UploadId' => $_SESSION['backup']['uploadId'],
        'MultipartUpload' => array(
            'Parts' => $_SESSION['backup']['parts'],
        ),
    ));
    $url = $result['Location'];
}
public function cleanInterruptedMultipartUploads($s3Client)
{
    $tResults = $s3Client->listMultipartUploads(array('Bucket' => ''));
    $tResults = $tResults->toArray();
    if(isset($tResults['Uploads']))
    {
        foreach($tResults['Uploads'] AS $result)
        {
            $s3Client->abortMultipartUpload(array(
                'Bucket' => '',
                'Key' => $result['Key'],
                'UploadId' => $result['UploadId']
            ));
        }
    }
    if(isset($_SESSION['backup']))
    {
        unset($_SESSION['backup']);
    }
}
If someone has questions, don't hesitate to contact me :)

How can I serve an SVG image from Google Cloud Storage?

Right now I'm working on allowing user image uploads to my site using Google Cloud Storage. Uploading regular image files such as JPG, PNG, GIF, and WebP works fine. However, SVG images do not work. They get uploaded OK, but when I have the PHP code echo the URL as an image source, all browsers just display the missing-image icon. It does appear as if the image is downloading in the network tab of the code inspector, though. Not only that, pasting the link into its own tab causes the file to download. This makes me think the server is telling the browser to download the file rather than serve it as an image. Here is the code I am using:
include 'GDS/GDS.php';
//create datastore
$obj_store = new GDS\Store('HomeImages');

$bucket = CloudStorageTools::getDefaultGoogleStorageBucketName();
$root_path = 'gs://' . $bucket . '/' . $_SERVER["REQUEST_ID_HASH"] . '/';
$public_urls = [];

//loop through all files that are images
foreach($_FILES['images']['name'] as $idx => $name) {
    if ($_FILES['images']['type'][$idx] === 'image/jpeg' || $_FILES['images']['type'][$idx] === 'image/png' || $_FILES['images']['type'][$idx] === 'image/gif' || $_FILES['images']['type'][$idx] === 'image/webp' || $_FILES['images']['type'][$idx] === 'image/svg+xml') {
        //path where the file should be moved to
        $original = $root_path . 'original/' . $name;
        //move the file
        move_uploaded_file($_FILES['images']['tmp_name'][$idx], $original);
        //don't use the getImageServingUrl function on SVG files because they aren't really images
        if($_FILES['images']['type'][$idx] === 'image/svg+xml')
            $public_urls[] = [
                'name' => $name,
                'original' => CloudStorageTools::getPublicUrl($original, true),
                'thumb' => CloudStorageTools::getPublicUrl($original, true),
                'location' => $original
            ];
        else
            $public_urls[] = [
                'name' => $name,
                'original' => CloudStorageTools::getImageServingUrl($original, ['size' => 1263, 'secure_url' => true]),
                'thumb' => CloudStorageTools::getImageServingUrl($original, ['size' => 150, 'secure_url' => true]),
                'location' => $original
            ];
    }
}

//store image location and name in the datastore
foreach($public_urls as $urls){
    $image = new GDS\Entity();
    $image->URL = $urls['original'];
    $image->thumbURL = $urls['thumb'];
    $image->name = $urls['name'];
    $image->location = $urls['location'];
    $obj_store->upsert($image);
}

//redirect back to the admin page
header('Location: /admin/homeimages');
Having run into this issue just now, I found a solution. It turns out that every file in a bucket has metadata attached, stored as key-value pairs. The key we're after is 'Content-Type', and the value isn't always correct for SVG: it needs to be 'image/svg+xml'. I don't know how to set that programmatically, but if you only have a few objects, it's easy to do from the file's ellipsis menu in the bucket's online interface.
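If you do want to set the type at upload time, the App Engine gs:// stream wrapper accepts context options, including a Content-Type; here is a rough sketch based on those documented options (untested against this exact setup), replacing the move_uploaded_file() call for SVG uploads:
// Sketch: write the SVG through the gs:// wrapper with an explicit Content-Type,
// instead of relying on move_uploaded_file() and the default type detection.
$ctx = stream_context_create([
    'gs' => ['Content-Type' => 'image/svg+xml'],
]);
$original = $root_path . 'original/' . $name;
file_put_contents($original, file_get_contents($_FILES['images']['tmp_name'][$idx]), 0, $ctx);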

Trying to save an image to Amazon S3, but the image is not getting saved

I am trying to save images from an image URL to Amazon S3. The image is created in the bucket, but it is not shown in the browser; it displays the message "The image cannot be displayed because it contains errors."
This is my code:
require_once("aws/aws-autoloader.php");
// Amazon S3
use Aws\S3\S3Client;
// Create an Amazon S3 client object
$s3Client = S3Client::factory(array(
    'key' => 'XXXXXXXXXXXX',
    'secret' => 'XXXXXXXXX'
));
// Register the stream wrapper from a client object
$s3Client->registerStreamWrapper();
// Save Thumbnail
$s3Path = "s3://smmrescueimages/";
$s3Stream = fopen($s3Path . 'gt.jpg', 'w');
fwrite($s3Stream, 'http://sippy.in/gt.jpg');
#fclose($s3Stream);
echo "done";
This is the image path that gets generated: https://s3.amazonaws.com/smmrescueimages/gt.jpg
Change this line:
fwrite($s3Stream, 'http://sippy.in/gt.jpg');
to:
fwrite($s3Stream, file_get_contents('http://sippy.in/gt.jpg'));
Otherwise you save the URL string instead of the image binary into your file.
Don't use # to suppress error messages from PHP functions!
Just check whether a valid handle is present:
$s3Stream = fopen($s3Path . 'gt.jpg', 'w');
if( $s3Stream ) {
    fwrite($s3Stream, file_get_contents('http://sippy.in/gt.jpg'));
    fclose($s3Stream);
}
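An alternative sketch (same SDK, using the bucket and key from the question): skip the stream wrapper and call putObject() directly, which also lets you set an explicit Content-Type so browsers render the image inline instead of downloading it:
// Download the source image, then upload the binary with an explicit Content-Type.
$imageData = file_get_contents('http://sippy.in/gt.jpg');

$s3Client->putObject(array(
    'Bucket'      => 'smmrescueimages',
    'Key'         => 'gt.jpg',
    'Body'        => $imageData,
    'ContentType' => 'image/jpeg',
    'ACL'         => 'public-read',
));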

S3 - need to download particular images from AWS and download them as a zip

I need to download images from an AWS bucket into a local directory and then zip them for download.
I have tried the code below, but I can't figure out how to copy the images into my local directory.
Here is my function:
public function commomUpload($contentId, $prop_id)
{
    $client = S3Client::factory(array(
        'key' => 'my key',
        'secret' => '----secret-----',
    ));
    $documentFolderName = 'cms_photos';
    $docbucket = "propertiesphotos/$prop_id/$documentFolderName";
    $data = $this->Photo->find('all', array('conditions' => array('Photo.content_id' => $contentId)));
    //pr($data);die;
    $split_point = '/';
    foreach($data as $row){
        $string = $row['Photo']['aws_link'];
        $result = array_map('strrev', explode($split_point, strrev($string)));
        $imageName = $result[0];
        $result = $client->getObject(array(
            'Bucket' => $docbucket,
            'Key' => $imageName
        ));
        $uploads_dir = '/img/uploads/';
        if (!copy($result, $uploads_dir)) {
            echo "failed to copy $result...\n";
        }
        //move_uploaded_file($imageName, "$uploads_dir/$imageName");
    }
}
Don't complicate things. There is a very simple tool called s3cmd; install it on any platform (click here to know more). Once you download the images from S3, you can either gzip or zip them using a simple bash script. Don't forget to configure your s3cmd: you need your AWS access key and secret key.
You can use the Amazon S3 CakePHP plugin: https://github.com/fullybaked/CakeS3
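If you want to stay in plain PHP, here is a minimal sketch of the missing copy-and-zip step (assuming the same $client and $docbucket as above, a hypothetical $imageNames list collected from the Photo records, and a writable local directory): getObject() can write straight to a local path via its SaveAs option, and ZipArchive can then bundle the files:
$uploads_dir = WWW_ROOT . 'img/uploads/';   // assumed writable local directory (CakePHP webroot)
$zip = new ZipArchive();
$zip->open($uploads_dir . 'photos.zip', ZipArchive::CREATE | ZipArchive::OVERWRITE);

foreach ($imageNames as $imageName) {       // $imageNames: keys extracted from the aws_link values
    $localPath = $uploads_dir . $imageName;

    // SaveAs streams the object body directly into the local file
    $client->getObject(array(
        'Bucket' => $docbucket,
        'Key'    => $imageName,
        'SaveAs' => $localPath,
    ));

    $zip->addFile($localPath, $imageName);
}

$zip->close();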
