Given the following snippet, I'm looping through and downloading an array of files. It appears that if the file list contains more than 50 objects, the loop simply stops. Does Amazon have a limit on the number of objects I can download at one time? If so, is there a workaround, or am I just doing this wrong? Again, everything works fine for under 50 files.
Just to clarify, the intent is to generate a zip file for someone to download. The file list could vary from a couple of files to hundreds.
if ($zip->open($path, ZipArchive::CREATE)) {
    foreach ($sheets as $sheet) {
        // the saved file name as it exists on the server...
        $keyname = $sheet . ".pdf";

        // the new name of the file, what it was originally uploaded as
        $sql = "SELECT * FROM files WHERE id=$sheet";
        if ($result = $mysqli->query($sql)) {
            while ($row = $result->fetch_assoc()) {
                $saveas = $row['number'] . "-" . $row['name'] . ".pdf";
            }
        }

        // Save the object locally (downloads directory) to a file.
        $filepath = "/var/www/html/downloads/" . $saveas;
        $result = $client->getObject(array(
            'Bucket' => $bucket,
            'Key'    => $keyname,
            'SaveAs' => $filepath
        ));

        // add the recently saved file to the zip
        $zip->addFile($filepath, $saveas);
    }
}
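One thing worth ruling out (an assumption on my part, since no error output is shown) is PHP's max_execution_time rather than an S3 limit: S3 doesn't cap the number of getObject calls at 50, but a long series of downloads can exceed the default 30-second script limit and die silently. A minimal diagnostic sketch around the existing getObject call:
// Sketch only: lift the time limit for this request and surface per-object
// failures instead of letting the loop stop without a message.
set_time_limit(0);

try {
    $result = $client->getObject(array(
        'Bucket' => $bucket,
        'Key'    => $keyname,
        'SaveAs' => $filepath
    ));
} catch (Exception $e) {
    error_log("Failed to fetch $keyname: " . $e->getMessage());
}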
I'd like to create a .zip archive, upload it to Amazon S3, then delete the created .zip from the server. Steps 1 and 2 are working great, but the delete step is returning:
unlink(temp/file.zip): Resource temporarily unavailable
I've tried to unset all the related variables and resources, but I'm still getting the error.
Here's the code:
$zipFile = 'temp/file.zip';
// create the zip archive:
$z = new \ZipArchive();
$z->open($zipFile, \ZipArchive::CREATE);
$z->addEmptyDir('testdirectory');
// add a file
$filename = 'fileName.txt';
$content = 'Hello World';
$z->addFromString('testdirectory/' . $filename, $content);
$z->close();
// upload to S3
$s3 = AWS::createClient('s3');
$result = $s3->putObject(array(
    'Bucket'     => 'my-bucket-name',
    'Key'        => basename($zipFile),
    'SourceFile' => $zipFile
));

// check to see if the file was uploaded
$uploaded = false;
if ($result['@metadata']['statusCode'] == "200") {
    $uploaded = true;
}

// delete the temp file
if ($uploaded) {
    unset($result);
    unset($s3);
    unset($z);
    if (file_exists($zipFile)) {
        unlink($zipFile);
    }
}
Some additional details: I'm using Lumen 5.4 and the aws-sdk-php-laravel package.
Any insight would be much appreciated! Thanks.
The S3 client is still holding onto the file resource, so you have to force a garbage-collection pass.
Just do gc_collect_cycles() before deleting that file.
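Applied to the code above, that would look something like this (a sketch, not tested against this exact setup):
// Force a GC pass so the SDK releases its handle on the zip before unlink().
if ($uploaded) {
    unset($result);
    unset($s3);
    unset($z);
    gc_collect_cycles();
    if (file_exists($zipFile)) {
        unlink($zipFile);
    }
}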
I need to download all the attachments on a Podio app. I can get all the file ids, their URLs, etc.; I just can't make the download work. I've tried many possible solutions (get_raw(), file_get_contents(), etc.).
Let's say I have this file that I want to save:
$items = PodioFile::get_for_app(APP_ID, array(
    'sort_by'   => 'name',
    'sort_desc' => 'false',
    'limit'     => 1
));
(...)
$item->file_id = '123456789'
$item->link = 'https://files.podio.com/111222333';
$path_to_save = 'backup/';
How can i save it?
There's an example ready for copy+paste at: http://podio.github.io/podio-php/api-requests/
// Get the file object. Only necessary if you don't already have it!
$file = PodioFile::get($file_id);
// Download the file. This might take a while...
$file_content = $file->get_raw();
// Store the file on local disk
file_put_contents($path_to_file, $file_content);
mkdirs("downloads");
foreach ($podio_item->fields['images']->values as $key => $podio_file) {
$file = PodioFile::get($podio_file->file_id);
$file_content = $file->get_raw();
file_put_contents("downloads/$podio_image->name", $file_content);
}
In my program I need to read .png files from a .tar file.
I am using pear Archive_Tar class (http://pear.php.net/package/Archive_Tar/redirected)
Everything is fine if the file I'm looking for exists, but if it is not in the .tar file then the function times out after 30 seconds. The class documentation states that it should return null if it does not find the file...
$tar = new Archive_Tar('path/to/mytar.tar');
$filePath = 'path/to/my/image/image.png';
$file = $tar->extractInString($filePath); // This works fine if the $filePath is correct
// if the path to the file does not exists
// the script will timeout after 30 seconds
var_dump($file);
return;
Any suggestions on solving this or any other library that I could use to solve my problem?
The listContent method will return an array of all files (and other information about them) present in the specified archive. So if you check if the file you wish to extract is present in that array first, you can avoid the delay that you are experiencing.
The code below isn't optimised (for multiple calls extracting different files, for example, the $files array should only be populated once), but it is a good way forward.
include "Archive/Tar.php";
$tar = new Archive_Tar('mytar.tar');
$filePath = 'path/to/my/image/image.png';
$contents = $tar->listContent();
$files = array();
foreach ($contents as $entry) {
$files[] = $entry['filename'];
}
$exists = in_array($filePath, $files);
if ($exists) {
$fileContent = $tar->extractInString($filePath);
var_dump($fileContent);
} else {
echo "File $filePath does not exist in archive.\n";
}
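If several files need to be pulled from the same archive, the list can be built once and reused, along these lines (the helper name and file paths are my own, for illustration):
include "Archive/Tar.php";

// Build the list of entries once and reuse it for every lookup.
function tarFileList(Archive_Tar $tar) {
    $files = array();
    foreach ($tar->listContent() as $entry) {
        $files[] = $entry['filename'];
    }
    return $files;
}

$tar   = new Archive_Tar('mytar.tar');
$files = tarFileList($tar);

foreach (array('images/a.png', 'images/b.png') as $wanted) {
    if (in_array($wanted, $files)) {
        var_dump($tar->extractInString($wanted));
    }
}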
I'm trying to make a PDF downloader where the user can select a couple of files and then download a zip of what they selected. I had it working on my personal server, but on the production server I get an odd error. The zip file is generated, but it's appended with a string of numbers, e.g.:
zipfile.zip.a08752, zipfile.zip.b08752
Weirder still, if I delete the string off the end and download the file it expands properly.
I read in this topic, "PHP Zip Archive sporadically creating multiple files", that it's an issue with the file attempting to close multiple times, failing, and then retrying.
Here's the code for my zip function, though I suspect it's something to do with the configuration of the server.
function buildZip($params){
    /* Generate unique Id */
    $downloadid = uniqid();

    /* Pull in the order.xml */
    if (!empty($_REQUEST['downloadlink'])) {
        if ($params->usexml == true) {
            $xml = @simplexml_load_file($params->pdfolder.'/order.xml');
            $order = $xml->children();
        } else {
            $order = $params->files;
        }

        /* Create the new zip */
        $zip = new ZipArchive();
        if ($zip->open($params->zipname.'version'.$downloadid.'.zip', ZipArchive::CREATE) !== TRUE) {
            die("Could not open archive");
        }

        /* Generate the download link to output further down the page */
        global $downloadLink;
        $downloadLink = $params->zipname.'version'.$downloadid.'.zip';

        /* Make the selected variable available to the listSelection function */
        global $selected;
        $selected = array();

        $i = 0;
        foreach ($order as $el) {
            if (isset($_POST[$i]) == true) {
                // generate the list of selected PDFs
                array_push($selected, $el->name);
                // grab the selected PDFs and zip them
                $zip->addFile($params->pdfolder.'/'.$el->link) or die("ERROR: Could not add file: pdf".$i.".html");
            }
            $i++;
        }
        $zip->close();
    }
}
For clarity: I'm pulling in an XML list called order.xml to build the array of possible files.
Try trimming the weird string of numbers off the filename after zipping is done.
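A rough sketch of that idea, reusing the question's own naming scheme (the glob pattern is an assumption about what the leftover temp file looks like):
// After $zip->close(): if the expected zip is missing but a temp file like
// "...zip.a08752" exists, rename the temp file to the expected name.
$expected = $params->zipname.'version'.$downloadid.'.zip';
if (!file_exists($expected)) {
    $leftovers = glob($expected.'.*');
    if (!empty($leftovers)) {
        rename($leftovers[0], $expected);
    }
}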
I just ran into the same problem. This is happening because the zip file fails to get created. It's failing because your files are too large. To fix the problem, I had to create multiple smaller zip files. I hope this helps.
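A sketch of that workaround, splitting the selected files into batches and building one archive per batch (the $selectedFiles array, the batch size, and the part naming are my own assumptions):
// Build several smaller archives instead of one large one.
$batches = array_chunk($selectedFiles, 50);
foreach ($batches as $i => $batch) {
    $zip = new ZipArchive();
    $name = $params->zipname.'version'.$downloadid.'-part'.($i + 1).'.zip';
    if ($zip->open($name, ZipArchive::CREATE) !== TRUE) {
        die("Could not open archive $name");
    }
    foreach ($batch as $file) {
        $zip->addFile($file, basename($file));
    }
    $zip->close();
}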
I have to send a file to an API. The API documentation provides an example of how to do this through a file upload script ($_FILES). But I have the files on the same machine, so I would like to skip this step and just read in the file directly to save time.
How can I read in the file (a video) so that it will work with this code snippet?
I understand that $_FILES is an array, and I could set the other parts of it (the filename) separately; I just really need the data part of it to be read in the same format to work with this code. (Do I use fread()? file_get_contents()?)
<?php
# Include & Instantiate
require('../echove.php');
$bc = new Echove(
    'z9Jp-c3-KhWc4fqNf1JWz6SkLDlbO0m8UAwOjDBUSt0.',
    'z9Jp-c3-KhWdkasdf74kaisaDaIK7239skaoKWUAwOjDBUSt0..'
);

# Create new metadata
$metaData = array(
    'name'             => $_POST['title'],
    'shortDescription' => $_POST['shortDescription']
);
# Rename the video file
$file = $_FILES['video'];
rename($file['tmp_name'], '/tmp/' . $file['name']);
$file_location = '/tmp/' . $file['name'];
# Send video to Brightcove
$id = $bc->createVideo($file_location, $metaData);
?>
Thanks in advance for any help!
Er - all you'd need to do is feed the location, not a file handle or anything, no?
$files = $_FILES['video'];
$filename = $files['name'];
$file_location = '/home/username/' . $filename; // maybe append the extension
$id = $bc->createVideo($file_location, $metaData);
Or more simply
$id = $bc->createVideo( '/foo/bar/baz.txt', $metaData );
Looks like you don't need to do anything except point at the file on your local disk. What about:
<?php
# Include & Instantiate
require('../echove.php');
$bc = new Echove(
    'z9Jp-c3-KhWc4fqNf1JWz6SkLDlbO0m8UAwOjDBUSt0.',
    'z9Jp-c3-KhWdkasdf74kaisaDaIK7239skaoKWUAwOjDBUSt0..'
);

# Create new metadata
$metaData = array(
    'name'             => $_POST['title'],
    'shortDescription' => $_POST['shortDescription']
);
# point at some file already on disk
$file_location = '/path/to/my/file.dat'; // <== if the file is already on the box, just set $file_location with the pathname and bob's yer uncle
# Send video to Brightcove
$id = $bc->createVideo($file_location, $metaData);