Once the user logs in to the portal, a list of PDF reports is displayed.
To download the reports they want, the user can check/uncheck the box associated with each report.
For instance:
There are 10 reports in the list. The user selects 7 of them and clicks Download. This workflow should result in the download of a single zipped file containing all 7 selected reports, rather than downloading each file individually.
The 10 reports in the above example are stored in Google Drive. We store the Google download URL in our database, and using this download URL we need to accomplish the result described above.
I tried the Google Drive API Quickstart reference: a 403 error was hit on the second attempt to save the files to the file system.
A PHP cURL implementation failed with a 403 status code on the third run of the script.
Basically, the plan was to save each selected file into a folder on the file system, then zip that folder and download the zip.
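The zip-and-download step of that plan is not shown below, so here is a minimal sketch of it using PHP's ZipArchive (the helper name and the assumption that the PDFs are already saved under $dir are mine):

function zip_and_send($dir, $zipName = 'reports.zip')
{
    $zipPath = $dir . '/' . $zipName;
    $zip = new ZipArchive();
    if ($zip->open($zipPath, ZipArchive::CREATE | ZipArchive::OVERWRITE) !== true) {
        throw new Exception('Could not create: ' . $zipPath);
    }
    // Add every saved PDF to the archive under its base name.
    foreach (glob($dir . '/*.pdf') as $pdf) {
        $zip->addFile($pdf, basename($pdf));
    }
    $zip->close();
    // Send the archive to the browser as a download.
    header('Content-Type: application/zip');
    header('Content-Disposition: attachment; filename="' . $zipName . '"');
    header('Content-Length: ' . filesize($zipPath));
    readfile($zipPath);
}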
Here is what I have tried recently for the per-file download step:
<?php
define('SAVE_REPORT_DIR', getcwd() . '/pathtosave/' . time());

function fs_report_save($fileUrl)
{
    static $counter = 1;
    if (!file_exists(SAVE_REPORT_DIR)) {
        mkdir(SAVE_REPORT_DIR, 0777, true);
    }
    // The path & filename to save to. time() alone collides when several
    // files are saved within the same second, so append the counter.
    $saveTo = SAVE_REPORT_DIR . '/' . time() . '_' . $counter++ . '.pdf';
    // Open file handle.
    $fp = fopen($saveTo, 'w+');
    // If $fp is FALSE, something went wrong.
    if ($fp === false) {
        throw new Exception('Could not open: ' . $saveTo);
    }
    // Create a cURL handle.
    $ch = curl_init($fileUrl);
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
    curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false); // note: disables TLS verification
    // Pass our file handle to cURL.
    curl_setopt($ch, CURLOPT_FILE, $fp);
    // Time out if the file doesn't download after 20 seconds.
    curl_setopt($ch, CURLOPT_TIMEOUT, 20);
    // Execute the request.
    curl_exec($ch);
    // If there was an error, throw an Exception.
    if (curl_errno($ch)) {
        throw new Exception(curl_error($ch));
    }
    // Get the HTTP status code.
    $statusCode = curl_getinfo($ch, CURLINFO_HTTP_CODE);
    // Close the cURL handle.
    curl_close($ch);
    // Close the file handle.
    fclose($fp);
    if ($statusCode == 200) {
        echo 'File: ' . $saveTo . '. Downloaded!<br>';
    } else {
        echo "Status Code: " . $statusCode;
    }
}

$reports = array(
    'https://drive.google.com/uc?id=a&export=download',
    'https://drive.google.com/uc?id=b&export=download',
    'https://drive.google.com/uc?id=c&export=download'
);

foreach ($reports as $report) {
    fs_report_save($report);
}
?>
Please point me in a direction to accomplish this.
Thanks
As @DalmTo has said, the API is not going to let you download multiple files in bulk as a zip; what you can do is create a folder inside Drive and download that folder as a zip.
There is a ton more information in this answer by @Tanaike:
Download Folder as Zip Google Drive API
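If the files must stay as individual Drive files, another option is to download each one through the Drive v3 API with an OAuth access token rather than the public uc?export=download URL, which is a common source of these 403s. A minimal sketch; $accessToken and $fileId (the ID embedded in the stored download URL) are assumptions, and SAVE_REPORT_DIR is the directory from the question's script:

// Authorized per-file download via the Drive v3 API.
$ch = curl_init("https://www.googleapis.com/drive/v3/files/{$fileId}?alt=media");
curl_setopt($ch, CURLOPT_HTTPHEADER, array('Authorization: Bearer ' . $accessToken));
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$data = curl_exec($ch);
curl_close($ch);
file_put_contents(SAVE_REPORT_DIR . "/{$fileId}.pdf", $data);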
I'm trying to automate downloading a file (in this case a PDF invoice) on the WordPress order-completed hook.
I first tried to download it using wp_remote_get, which seemed simple, but without success (no file is saved):
function download_pdf_invoice__on_order_completed( $order_id, $order ) {
    wp_remote_get( "http://www.africau.edu/images/default/sample.pdf" );
}
add_action( 'woocommerce_order_status_completed', 'download_pdf_invoice__on_order_completed', 20, 2 );
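For what it's worth, wp_remote_get() only fetches the response into memory; nothing lands on disk unless the body is saved explicitly. A minimal sketch of that missing step (the target path is a placeholder):

function download_pdf_invoice__on_order_completed( $order_id, $order ) {
    $response = wp_remote_get( "http://www.africau.edu/images/default/sample.pdf" );
    if ( is_wp_error( $response ) ) {
        return; // request failed
    }
    // Save the response body to disk; the path is a placeholder.
    file_put_contents( WP_CONTENT_DIR . '/uploads/sample.pdf', wp_remote_retrieve_body( $response ) );
}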
So far I have managed to download any file with cURL as long as the extension is in the URL, but I can't get it to work with my dynamic download URL, which is this test/demo URL:
https://www.moloni.com/downloads/index.php?action=getDownload&h=b75b2d99c08c56480da0c5dff4900b4a&d=189279574&e=teste@moloni.com&i=1&t=n
function action_woocommerce_admin_order_get_invoice_pdf( $order_id, $order ) {
    // The resource that we want to download.
    $fileUrl = 'https://www.moloni.com/downloads/index.php?action=getDownload&h=b75b2d99c08c56480da0c5dff4900b4a&d=189279574&e=teste@moloni.com&i=1&t=n';
    // The path & filename to save to.
    $saveTo = '/myserver/public_html/wp-content/plugins/my-custom-functionality-master/logo.jpg';
    // Open file handle.
    $fp = fopen( $saveTo, 'w+' );
    // If $fp is FALSE, something went wrong.
    if ( $fp === false ) {
        throw new Exception( 'Could not open: ' . $saveTo );
    }
    // Create a cURL handle.
    $ch = curl_init( $fileUrl );
    // Pass our file handle to cURL.
    curl_setopt( $ch, CURLOPT_FILE, $fp );
    // Time out if the file doesn't download after 20 seconds.
    curl_setopt( $ch, CURLOPT_TIMEOUT, 20 );
    // Execute the request.
    curl_exec( $ch );
    // Get the HTTP status code.
    $statusCode = curl_getinfo( $ch, CURLINFO_HTTP_CODE );
    // Close the cURL handle and the file handle.
    curl_close( $ch );
    fclose( $fp );
}
add_action( 'woocommerce_order_status_completed', 'action_woocommerce_admin_order_get_invoice_pdf', 20, 2 );
However, if I replace $fileUrl with this sample PDF, http://www.africau.edu/images/default/sample.pdf, then it works.
I have considered implementing some sort of error log to see what errors the code produces, but I haven't figured out how to do this when the download is hooked to the WooCommerce order-completed action.
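One low-tech option for that: write cURL's error state to the PHP error log from inside the hook; with WP_DEBUG and WP_DEBUG_LOG enabled this ends up in wp-content/debug.log. A sketch, to be placed after curl_exec($ch) in the function above:

if ( curl_errno( $ch ) ) {
    error_log( 'Invoice download failed: ' . curl_error( $ch ) );
} else {
    error_log( 'Invoice download HTTP status: ' . curl_getinfo( $ch, CURLINFO_HTTP_CODE ) );
}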
Use file_get_contents to download the file from the URL:
$fileUrl = 'https://www.moloni.com';
$saveTo = ABSPATH . '/wp-content/plugins/my-custom-functionality-master/logo.jpg';
file_put_contents(
    $saveTo,
    file_get_contents( $fileUrl )
);
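One caveat: file_get_contents() returns false on failure, and nesting it inside file_put_contents() hides that. A slightly safer variant of the same idea:

$data = file_get_contents( $fileUrl );
if ( $data === false ) {
    error_log( 'Download failed: ' . $fileUrl );
} else {
    file_put_contents( $saveTo, $data );
}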
I have a Laravel web app in which users can upload files. These files can be sensitive, and although they are stored on S3 they are only accessed via my web servers (streamed download). Once uploaded, users may wish to download a selection of these files.
Previously, when users went to download a selection of files, my web server would download the files from S3, zip them locally, and then send the zip down to the client. However, once in production, the server response would frequently time out due to file sizes.
As an alternative I want to zip the files on the fly via ZipStream, but I haven't had much luck. The zip file either ends up with corrupted files or is itself corrupted and incredibly small.
Is it possible to pass a stream resource for a file on S3 to ZipStream, and what is the best way to address my timeout issues?
I have tried several methods; my most recent two are as follows:
// First method, using fopen
// Results in tiny corrupt zip files
if (!($fp = fopen("s3://{$bucket}/{$key}", 'r'))) {
    die('Could not open stream for reading');
}
$zip->addFileFromPath($file->orginal_filename, "s3://{$bucket}/{$key}");
fclose($fp);

// Second method: download the file from S3 before zipping
// Results in a reasonable sized zip file that is corrupt
$contents = file_get_contents("s3://{$bucket}/{$key}");
$zip->addFile($file->orginal_filename, $contents);
Each of these sits within a loop that goes through each file. After the loop I call $zip->finish().
Note that I do not get any PHP errors, just corrupt files.
In the end the solution was to use signed S3 URLs and cURL to provide a file stream for ZipStream, as demonstrated by s3 bucket stream zip php. The resulting code, edited from the aforementioned source, is as follows:
public function downloadZip()
{
    // ...
    $s3 = Storage::disk('s3');
    $client = $s3->getDriver()->getAdapter()->getClient();
    $client->registerStreamWrapper();
    $expiry = "+10 minutes";

    // Create a new zipstream object
    $zip = new ZipStream($zipName . '.zip');

    foreach ($files as $file) {
        $filename = $file->original_filename;
        // We need to use a command to get a request for the S3 object
        // and then we can get the presigned URL.
        $command = $client->getCommand('GetObject', [
            'Bucket' => config('filesystems.disks.s3.bucket'),
            'Key' => $file->path()
        ]);
        $signedUrl = $client->createPresignedRequest($command, $expiry)->getUri();
        // We want to fetch the file to a file pointer, so we create one here,
        // then create a cURL request and store the response in the file
        // pointer. After we've fetched the file we add it to the zip using
        // the file pointer, then close the cURL request and the file pointer.
        // Closing the file pointer removes the temporary file.
        $fp = tmpfile();
        $ch = curl_init($signedUrl);
        curl_setopt($ch, CURLOPT_TIMEOUT, 120);
        curl_setopt($ch, CURLOPT_FILE, $fp);
        curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
        curl_exec($ch);
        curl_close($ch);
        $zip->addFileFromStream($filename, $fp);
        fclose($fp);
    }

    $zip->finish();
}
Note this requires curl and php-curl to be installed and functioning on your server.
I had the same issues as @cubiclewar and investigated a little. I found that the most up-to-date solution to this doesn't need cURL; it's visible on the wiki for the maennchen/ZipStream-PHP library:
https://github.com/maennchen/ZipStream-PHP/wiki/Symfony-example
use Symfony\Component\HttpFoundation\StreamedResponse;
use ZipStream;

//...

/**
 * @Route("/zipstream", name="zipstream")
 */
public function zipStreamAction()
{
    // Sample test file on S3.
    $s3keys = array(
        "ziptestfolder/file1.txt"
    );

    $s3Client = $this->get('app.amazon.s3'); // s3client service
    $s3Client->registerStreamWrapper(); // required

    // Use StreamedResponse to wrap the ZipStream functionality
    // for files on AWS S3.
    $response = new StreamedResponse(function () use ($s3keys, $s3Client) {
        // Define suitable options for the ZipStream archive.
        $opt = array(
            'comment' => 'test zip file.',
            'content_type' => 'application/octet-stream'
        );
        // Initialise zipstream with the output zip filename and options.
        $zip = new ZipStream\ZipStream('test.zip', $opt);

        // Loop over the keys - useful for multiple files.
        foreach ($s3keys as $key) {
            // Use the file name in the S3 key so we can save it
            // to the zip file under the same name.
            $fileName = basename($key);
            // Concatenate the s3 path.
            $bucket = 'bucketname'; // replace with your bucket name or get from parameters file.
            $s3path = "s3://" . $bucket . "/" . $key;
            // addFileFromStream
            if ($streamRead = fopen($s3path, 'r')) {
                $zip->addFileFromStream($fileName, $streamRead);
            } else {
                die('Could not open stream for reading');
            }
        }

        $zip->finish();
    });

    return $response;
}
I am looking for a function that gets the metadata of an .mp3 file from a URL (NOT a local .mp3 file on my server).
Also, I don't want to install http://php.net/manual/en/id3.installation.php or anything similar on my server.
I am looking for a standalone function.
Right now I am using this function:
<?php
function getfileinfo($remoteFile)
{
    $url = $remoteFile;
    $uuid = uniqid("designaeon_", true);
    $file = "../temp/" . $uuid . ".mp3";
    $size = 0;
    $ch = curl_init($remoteFile);

    //==============================Get Size==========================//
    $contentLength = 'unknown';
    $ch1 = curl_init($remoteFile);
    curl_setopt($ch1, CURLOPT_NOBODY, true);
    curl_setopt($ch1, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch1, CURLOPT_HEADER, true);
    curl_setopt($ch1, CURLOPT_FOLLOWLOCATION, true); // not necessary unless the file redirects
    $data = curl_exec($ch1);
    curl_close($ch1);
    if (preg_match('/Content-Length: (\d+)/', $data, $matches)) {
        $contentLength = (int)$matches[1];
        $size = $contentLength;
    }
    //==============================Get Size==========================//

    if (!$fp = fopen($file, "wb")) {
        echo 'Error opening temp file for binary writing';
        return false;
    } else if (!$urlp = fopen($url, "r")) {
        echo 'Error opening URL for reading';
        return false;
    }

    try {
        $to_get = 65536; // 64 KB
        $chunk_size = 4096; // Haven't bothered to tune this, maybe other values would work better??
        $got = 0;
        $data = null;

        // Grab the first 64 KB of the file.
        while (!feof($urlp) && $got < $to_get) {
            $data = $data . fgets($urlp, $chunk_size);
            $got += $chunk_size;
        }
        fwrite($fp, $data);

        // Grab the last 64 KB of the file, if we know how big it is.
        if ($size > 0) {
            curl_setopt($ch, CURLOPT_FILE, $fp);
            curl_setopt($ch, CURLOPT_HEADER, 0);
            curl_setopt($ch, CURLOPT_RESUME_FROM, $size - $to_get);
            curl_exec($ch);
        }

        // Now $fp should be the first and last 64 KB of the file!!
        @fclose($fp);
        @fclose($urlp);
    } catch (Exception $e) {
        @fclose($fp);
        @fclose($urlp);
        echo 'Error transfering file using fopen and cURL !!';
        return false;
    }

    $getID3 = new getID3;
    $filename = $file;
    $ThisFileInfo = $getID3->analyze($filename);
    getid3_lib::CopyTagsToComments($ThisFileInfo);
    unlink($file);
    return $ThisFileInfo;
}
?>
This function downloads 64 KB from the URL of an .mp3 file, returns the metadata array using the getID3 function (which works on local .mp3 files only), and then deletes the previously downloaded 64 KB.
The problem with this function is that it is by nature way too slow (it downloads 64 KB per .mp3; imagine 1000 mp3 files).
To make my question clear: I need a fast standalone function that reads the metadata of a remote .mp3 URL.
Yeah, well, what do you propose? How do you expect to get the data if you don't download the data? There is no way to have a generic remote HTTP server send you just the ID3 data. Really, there is no magic. Think about it.
What you're doing now is already pretty solid, except that it doesn't handle all versions of ID3 and won't work for files with more than 64 KB of ID3 tags. What I would do to improve it is to use multi-cURL.
There are several PHP classes available that make this easier:
https://github.com/jmathai/php-multi-curl
$mc = EpiCurl::getInstance();
$results[] = $mc->addUrl(/* Your stream URL here */); // Run this in a loop, 10 at a time or so
foreach ($results as $result) {
    // Do something with the data.
}
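A further refinement, assuming the servers honor Range requests: the ID3v2 header itself says how many tag bytes follow, so you can fetch exactly that instead of guessing at 64 KB. A sketch (untested; assumes an ID3v2 tag at the start of the file):

// Fetch the 10-byte ID3v2 header and decode the synchsafe tag size.
$ch = curl_init($remoteFile);
curl_setopt($ch, CURLOPT_RANGE, '0-9');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$header = curl_exec($ch);
curl_close($ch);
if (strlen($header) === 10 && substr($header, 0, 3) === 'ID3') {
    // Bytes 6-9 hold a synchsafe integer: 7 data bits per byte.
    $tagSize = (ord($header[6]) << 21) | (ord($header[7]) << 14)
             | (ord($header[8]) << 7)  | ord($header[9]);
    // A second Range request for bytes 0-(9 + $tagSize) then grabs the whole tag.
}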
1 - I have configured the Google Picker and it is working fine; I select the file from the picker and get the file ID.
2 - After the refresh-token process I get the file metadata and the file export link:
$downloadExpLink = $file->getExportLinks();
$downloadUrl = $downloadExpLink['application/vnd.openxmlformats-officedocument.wordprocessingml.document'];
3 - After that I use this:
if ($downloadUrl) {
    $request = new Google_HttpRequest($downloadUrl, 'GET', null, null);
    $httpRequest = Google_Client::$io->authenticatedRequest($request);
    if ($httpRequest->getResponseHttpCode() == 200) {
        $content = $httpRequest->getResponseBody();
        print_r($content);
    } else {
        // An error occurred.
        return null;
    }
}
and get this response (raw binary; note the PK zip signature of a DOCX file):
[responseBody:protected] => PK...DdocProps/app.xml... ........
4 - I use some cURL functions to get the file from Google Drive and save it to the server. A file is created in the server directory, but it is truncated. I use this code:
$downloadExpLink = $file->getExportLinks();
$downloadUrl = $downloadExpLink['application/vnd.openxmlformats-officedocument.wordprocessingml.document'];
//$downloadUrl value is
/*https://docs.google.com/feeds/download/documents/export/Export?id=1CEt1ya5kKLtgK************IJjDEY5BdfaGI&exportFormat=docx*/
When I put this URL into a browser it downloads the file successfully, but when I use it to fetch the file with cURL or any PHP code and try to save it on the server, it saves a corrupted file.
$ch = curl_init();
$source = $downloadUrl;
curl_setopt($ch, CURLOPT_URL, $source);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
$data = curl_exec($ch);
curl_close($ch);

$destination = "test/afile5.docx";
$file = fopen($destination, "w+");
fputs($file, $data);
fclose($file);
This results in a corrupted file stored on the server, but when I use this code to get any file other than the Google Drive one, it downloads successfully.
Can anyone please help with how to download the file from $downloadUrl to my server using PHP?
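For reference, a likely cause: the browser download works because you are signed in to Google, while the bare cURL request is unauthenticated, so Google returns an HTML page instead of the DOCX and that is what gets saved. A minimal sketch of an authenticated fetch; $accessToken (a valid OAuth token from the same client used in step 2) is an assumption:

$ch = curl_init($downloadUrl);
// Send the OAuth token so Google serves the export instead of a login page.
curl_setopt($ch, CURLOPT_HTTPHEADER, array('Authorization: Bearer ' . $accessToken));
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
$data = curl_exec($ch);
curl_close($ch);
file_put_contents("test/afile5.docx", $data);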
I have script-1 on server A, where the user asks for a file.
I have script-2 on server B (the file repository) where I check that the user can access it and return the correct file (I'm using Smart File Download http://www.zubrag.com/scripts/download.php).
I've tried cURL and file_get_contents, and I've changed the Content headers in various ways, but I still wasn't able to download the file.
This is my request:
$request = "http://mysite.com/download.php?f=test.pdf";
and it works fine.
What should I call in script-1 to force the file to be downloaded?
Some of my tries:
This works, but I don't know how to handle unauthorized or broken downloads:
header('Content-type: application/pdf');
$handle = fopen($request, "r");
if ($handle) {
    while (!feof($handle)) {
        $buffer = fgets($handle, 4096);
        echo $buffer;
    }
    fclose($handle);
}
This prints the raw PDF data (not the rendered document) straight into the browser (I think it's a header problem):
$c = curl_init();
curl_setopt($c, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($c, CURLOPT_URL, $request);
$contents = curl_exec($c);
curl_close($c);
if ($contents) return $contents;
else return FALSE;
This generates a white page:
file_get_contents($request);
To force a download, add
header('Content-Disposition: attachment');
But note that it's no longer in the HTTP 1.1 spec; see the first answer to Uses of content-disposition in an HTTP response header.
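In practice you would usually also send a filename and a content type alongside it, e.g.:

header('Content-Type: application/pdf');
header('Content-Disposition: attachment; filename="test.pdf"');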
Without your code I don't know what you've tried, but you need to get the contents of the file via cURL and then save it to your server. Something like...
$url = 'http://website.com/file.pdf';
$path = '/tmp/file.pdf';
$ch = curl_init($url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$contents = curl_exec($ch);
curl_close($ch);
file_put_contents($path, $contents);
If you want to download files from an FTP server you can use PHP's File Transfer Protocol (FTP) extension. Please find the code below:
<?php
$SERVER_ADDRESS = "";
$SERVER_USERNAME = "";
$SERVER_PASSWORD = "";

$conn_id = ftp_connect($SERVER_ADDRESS);

// login with username and password
$login_result = ftp_login($conn_id, $SERVER_USERNAME, $SERVER_PASSWORD);

$server_file = "test.pdf"; // FTP server file path
$local_file = "new.pdf";   // Local server file path

##----- DOWNLOAD $SERVER_FILE AND SAVE IT TO $LOCAL_FILE --------##
if (ftp_get($conn_id, $local_file, $server_file, FTP_BINARY)) {
    echo "Successfully written to $local_file\n";
} else {
    echo "There was a problem\n";
}

ftp_close($conn_id);
?>
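If the transfer hangs behind a firewall or NAT, switching the connection to passive mode right after logging in usually helps:

// Enable passive mode for the data connection.
ftp_pasv($conn_id, true);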
Download the file with cURL, then check this: http://php.net/function.readfile
It shows how to force a download.
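Putting the two together, a minimal sketch (the content type and filename are placeholders for whatever script-2 serves):

// Fetch the remote file to a temp path with cURL...
$tmp = tempnam(sys_get_temp_dir(), 'dl');
$fp = fopen($tmp, 'w+');
$ch = curl_init($request);
curl_setopt($ch, CURLOPT_FILE, $fp);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
curl_exec($ch);
curl_close($ch);
fclose($fp);
// ...then force the browser to download it.
header('Content-Type: application/pdf');
header('Content-Disposition: attachment; filename="test.pdf"');
header('Content-Length: ' . filesize($tmp));
readfile($tmp);
unlink($tmp);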
SOLVED
I ended up simply redirecting the request with:
header("Location: $request");