Get the Zip file by URL in PHP

I am getting a response from a third-party API call. It provides a file_url in the response. If I hit the file_url in the browser, the zip file is downloaded to my local machine.
stdClass Object
(
[start_date] => 2018-06-01 08:00:00
[report_date] => 2018-07-02
[account_name] => Maneesh
[show_headers] => 1
[file_url] => https:examplefilename
[date_range] => 06/01/2018 - 07/03/2018
)
How can I download the zip file to the public/phoneList folder on the server and unzip it there?
$fileUrl = $rvmDetails->file_url;
$zipPath = realpath('/phoneList') . '/zipped.zip';
file_put_contents($zipPath, file_get_contents($fileUrl));

$zip = new \ZipArchive();
$res = $zip->open($zipPath);
if ($res === TRUE) {
    $zip->extractTo('/phoneList/'); // phoneList is a folder
    $zip->close();
} else {
    echo "Error opening the file $fileUrl";
}
The above code works, but I get an issue while unzipping the file:
ZipArchive::extractTo(): Permission denied

With PHP's built-in ZipArchive you can open/extract zips at a certain path, after downloading the file with cURL (you should create a file name too), like:
$fileUrl = $obj->file_url;
$fileName = date("YmdHis") . ".zip"; // create a random or deterministic name here
$fh = fopen($fileName, 'w');

$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $fileUrl);
curl_setopt($ch, CURLOPT_FILE, $fh);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true); // this will follow redirects
curl_exec($ch);
curl_close($ch);
fclose($fh);

// get the absolute path to the downloaded file
$path = pathinfo(realpath($fileName), PATHINFO_DIRNAME);

$zip = new ZipArchive;
$res = $zip->open($fileName);
if ($res === TRUE) {
    $zip->extractTo($path);
    $zip->close();
} else {
    echo "Error opening the file $fileName";
}
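As for the ZipArchive::extractTo(): Permission denied error itself: it usually means the target directory doesn't exist or isn't writable by the user PHP runs as. A minimal sketch of a pre-flight check (paths are the ones from the question; adjust to your setup):

$target = realpath('phoneList');
if ($target === false) {
    // directory is missing; try to create it
    mkdir('phoneList', 0775, true);
    $target = realpath('phoneList');
}
if (!is_writable($target)) {
    // fix ownership/permissions on the server, e.g. chown to the web-server user
    die("Directory $target is not writable by PHP");
}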
You can find more info here: download a zip file from a url and here: Unzip a file with php
Hope it helps!

Related

Download Multiple Google Drive Files as a Zipped File

Once the user logs in to the portal, a list of PDF reports is displayed.
In order to download the reports on demand, the user can check/uncheck the box associated with each report.
For instance:
There are 10 reports in the list. The user has selected 7 reports and clicked Download. This workflow should result in the download of a single zipped file comprising all the selected reports (7), rather than downloading each file individually.
The 10 reports in the above example are stored in Google Drive. We store the Google download URL in the database. Using this download URL we need to accomplish the aforesaid result.
I tried using the Google Drive API Quickstart reference. Error: a 403 was hit at the second attempt to save the files to the file system.
A PHP cURL implementation failed with a 403 status code at the third round of running the script.
Basically, the plan was to save each selected file inside a folder in the file system, then zip the folder and download the zip (a sketch of that zipping step follows the script below).
Here is what I have tried recently,
<?php
define('SAVE_REPORT_DIR', getcwd() . '/pathtosave/' . time());

function fs_report_save($fileUrl)
{
    static $counter = 1;
    if (!file_exists(SAVE_REPORT_DIR)) {
        mkdir(SAVE_REPORT_DIR, 0777, true);
    }
    // The path & filename to save to. Use the counter so that files
    // downloaded within the same second do not overwrite each other.
    $saveTo = SAVE_REPORT_DIR . '/' . time() . '-' . $counter++ . '.pdf';
    // Open file handler.
    $fp = fopen($saveTo, 'w+');
    // If $fp is FALSE, something went wrong.
    if ($fp === false) {
        throw new Exception('Could not open: ' . $saveTo);
    }
    // Create a cURL handle.
    $ch = curl_init($fileUrl);
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
    curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
    // Pass our file handle to cURL.
    curl_setopt($ch, CURLOPT_FILE, $fp);
    // Timeout if the file doesn't download after 20 seconds.
    curl_setopt($ch, CURLOPT_TIMEOUT, 20);
    // Execute the request.
    curl_exec($ch);
    // If there was an error, throw an Exception.
    if (curl_errno($ch)) {
        throw new Exception(curl_error($ch));
    }
    // Get the HTTP status code.
    $statusCode = curl_getinfo($ch, CURLINFO_HTTP_CODE);
    // Close the cURL handler.
    curl_close($ch);
    // Close the file handler.
    fclose($fp);

    if ($statusCode == 200) {
        echo 'File: ' . $saveTo . '. Downloaded!<br>';
    } else {
        echo "Status Code: " . $statusCode;
    }
}

$reports = array(
    'https://drive.google.com/uc?id=a&export=download',
    'https://drive.google.com/uc?id=b&export=download',
    'https://drive.google.com/uc?id=c&export=download'
);

foreach ($reports as $report) {
    fs_report_save($report);
}
?>
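For reference, the zipping step I plan to run after the downloads would be something along these lines (a sketch only, using the same SAVE_REPORT_DIR as above):

$zipFile = SAVE_REPORT_DIR . '.zip';
$zip = new ZipArchive();
if ($zip->open($zipFile, ZipArchive::CREATE | ZipArchive::OVERWRITE) === TRUE) {
    foreach (glob(SAVE_REPORT_DIR . '/*.pdf') as $pdf) {
        $zip->addFile($pdf, basename($pdf)); // store without the directory prefix
    }
    $zip->close();
    // stream the archive to the browser
    header('Content-Type: application/zip');
    header('Content-Disposition: attachment; filename="reports.zip"');
    header('Content-Length: ' . filesize($zipFile));
    readfile($zipFile);
}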
Please give me some direction on how to accomplish this result.
Thanks
As @DalmTo has said, the API is not going to let you download multiple files in bulk as a zip. What you can do is create a folder inside Drive and download that folder as a zip.
There is a ton more information in this answer by @Tanaike:
Download Folder as Zip Google Drive API

Receive image via PHP curl, then upload to S3

I am using PHP cURL to generate a customized PNG image from a REST API. Once this image has loaded, I would like to upload it to an AWS S3 bucket and show the link to it.
Here's my script so far:
$ch = curl_init();
$timeout = 5;
curl_setopt($ch, CURLOPT_URL, 'http://url-to-generate-image.com?options=' . $_GET['options']);
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, $timeout);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
$data = curl_exec($ch);
$info = curl_getinfo($ch);
curl_close($ch);

// require S3 class
if (!class_exists('S3')) {
    require_once('S3.php');
}
// AWS access info
if (!defined('awsAccessKey')) {
    define('awsAccessKey', 'MY_ACCESS_KEY');
}
if (!defined('awsSecretKey')) {
    define('awsSecretKey', 'MY_SECRET_KEY');
}
// instantiate the class
$s3 = new S3(awsAccessKey, awsSecretKey);
$s3->putBucket('bucket-name', S3::ACL_PUBLIC_READ);
$file_name = md5(rand(99, 99999999)) . '-myImage.png';
if ($s3->putObjectFile($data, 'bucket-name', $file_name, S3::ACL_PUBLIC_READ)) {
    echo 'success';
    $gif_url = 'http://bucket-name.s3.amazonaws.com/' . $file_name;
} else {
    echo 'failed';
}
It keeps failing. Now, I think the problem is where I use putObjectFile: the $data variable holds the raw image bytes, but maybe it has to be passed in another way?
I am using a common PHP Class for S3: http://undesigned.org.za/2007/10/22/amazon-s3-php-class
Use the PHP memory stream wrapper to store the contents of the image, and use the $s3->putObject() method:
$fp = fopen('php://memory', 'wb');
fwrite($fp, $data);
rewind($fp);
$s3->putObject([
    'Bucket' => $bucketName,
    'Key' => $fileName,
    'ContentType' => 'image/png',
    'Body' => $fp,
]);
fclose($fp);
This is a proven method (you may need to alter the code a bit) with PHP 5.5 and the latest AWS libraries; note that this putObject() call is from the official AWS SDK for PHP, not from the undesigned.org class used in the question.
http://php.net/manual/en/wrappers.php.php
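If you would rather keep the undesigned.org S3 class from the question, a hedged workaround (assuming its putObjectFile() expects a local file path, which matches how it is used in its examples) is to write the cURL response to a temporary file first:

$tmp = tempnam(sys_get_temp_dir(), 'img'); // temp file for the raw PNG bytes
file_put_contents($tmp, $data);
if ($s3->putObjectFile($tmp, 'bucket-name', $file_name, S3::ACL_PUBLIC_READ)) {
    echo 'success';
}
unlink($tmp); // clean up the temp file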

Download file from Google Drive API to my server using PHP

1 - I have configured the Google Picker and it is working fine; I select the file from the Picker and get the file ID.
2 - After the refresh-token process etc., I get the file metadata and the file export link:
$downloadExpLink = $file->getExportLinks();
$downloadUrl = $downloadExpLink['application/vnd.openxmlformats-officedocument.wordprocessingml.document'];
3 - After that I use this code:
if ($downloadUrl) {
    $request = new Google_HttpRequest($downloadUrl, 'GET', null, null);
    $httpRequest = Google_Client::$io->authenticatedRequest($request);
    if ($httpRequest->getResponseHttpCode() == 200) {
        $content = $httpRequest->getResponseBody();
        print_r($content);
    } else {
        // An error occurred.
        return null;
    }
}
and get this response:
[responseBody:protected] => PK..docProps/app.xml... (binary DOCX/ZIP bytes, truncated)
4 - I use some cURL functions to get the file from Google Drive and save it to my server. A file is created in the server directory, but it is truncated/corrupted. I use this code:
$downloadExpLink = $file->getExportLinks();
$downloadUrl = $downloadExpLink['application/vnd.openxmlformats-officedocument.wordprocessingml.document'];
//$downloadUrl value is
/*https://docs.google.com/feeds/download/documents/export/Export?id=1CEt1ya5kKLtgK************IJjDEY5BdfaGI&exportFormat=docx*/
When I put this URL into the browser it downloads the file successfully, but when I use this URL to fetch the file with cURL or any PHP code and try to save it on the server, it saves a corrupted file.
$ch = curl_init();
$source = $downloadUrl;
curl_setopt($ch, CURLOPT_URL, $source);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
$data = curl_exec($ch);
curl_close($ch);

$destination = "test/afile5.docx";
$file = fopen($destination, "w+");
fputs($file, $data);
fclose($file);
The result is a corrupted file stored on the server, but when I use this code to get any file other than from Google Drive, it downloads successfully.
Can anyone please help me figure out how to download the file from $downloadUrl to my server using PHP?
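One observation that may point at the cause: the authenticated request in step 3 already returned the correct bytes (the body starts with the "PK" signature that DOCX/ZIP files use), while the bare cURL call in step 4 sends no OAuth credentials, so Google most likely answers it with an error/login page rather than the document. A minimal sketch that simply saves the authenticated response instead of printing it (reusing $httpRequest from step 3):

if ($httpRequest->getResponseHttpCode() == 200) {
    $content = $httpRequest->getResponseBody();
    file_put_contents("test/afile5.docx", $content); // save instead of print_r
}

If plain cURL must be used, adding an Authorization: Bearer header with a valid access token (and CURLOPT_FOLLOWLOCATION) would presumably be required; that part is an assumption, not something shown above.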

How to remotely upload a file using PHP to Google Drive

I have tried to use file_get_contents() and also cURL to do this, but both of these functions download the file temporarily to my PC and then upload it to Drive.
Is there any way in which I can directly upload a file from the URL to my Drive?
Here is one piece of code I tried:
$file = new Google_DriveFile();
$file->setTitle('My app');
$file->setDescription('Application');
$file->setMimeType('application/exe');
$url = "http://xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.exe";
$ch = curl_init($url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$data = curl_exec ($ch); // execute
curl_close ($ch); // close curl handle
$createdFile = $service->files->insert($file, array('data' => $data,'mimeType' => 'application/exe',));
Here is another one:
$file = new Google_DriveFile();
$file->setTitle('My app');
$file->setDescription('Application');
$file->setMimeType('application/exe');
$url = "http://xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.exe";
$data = file_get_contents($url);
$createdFile = $service->files->insert($file, array('data' => $data,'mimeType' => 'application/exe',));
Both of these code samples download the file to my PC first, and only then start uploading.
When you pass a URL instead of a local file path to file_get_contents() or curl_init(), PHP will automatically attempt to download the file that the URL points to. (The download happens on whichever machine runs the script, so on a web server the file never touches your PC.)
All you need to do to prevent this is to change the value of $url to a local file path:
$url = '/tmp/somefile.txt'; // *nix
$url = 'c:/somefile.txt'; // Windows
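As far as I know there is no way to have Drive fetch the URL by itself with this client; some machine has to relay the bytes. If memory use during that relay is the concern, a hedged sketch that streams the download to a temp file before handing it to insert():

$tmp = tempnam(sys_get_temp_dir(), 'drv');
$fp = fopen($tmp, 'w');
$ch = curl_init($url);
curl_setopt($ch, CURLOPT_FILE, $fp); // stream straight to disk, not into a PHP string
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
curl_exec($ch);
curl_close($ch);
fclose($fp);
$data = file_get_contents($tmp); // then pass $data to $service->files->insert() as before
unlink($tmp);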

Setting up automatic Git deployment of a PHP project

What I want to do is switch from FTP deployment to Git. That is, I want to automatically keep my Bitbucket private repo and my shared web hosting in sync. I googled and found the following script to deploy to my web server (based on this article).
// Set these dependent on your BB credentials
$username = 'username';
$password = 'password';
// Grab the data from BB's POST service and decode
$json = stripslashes($_POST['payload']);
$data = json_decode($json);
// Set some parameters to fetch the correct files
$uri = $data->repository->absolute_url;
$node = $data->commits[0]->node;
$files = $data->commits[0]->files;
// Foreach through the files and curl them over
foreach ($files as $file) {
    if ($file->type == "removed") {
        unlink($file->file);
    } else {
        $url = "https://api.bitbucket.org/1.0/repositories"
            . $uri . "raw/" . $node . "/" . $file->file;
        $path = $file->file;
        $dirname = dirname($path);
        if (!is_dir($dirname)) {
            mkdir($dirname, 0775, true);
        }
        $fp = fopen($path, 'w');
        $ch = curl_init($url);
        curl_setopt($ch, CURLOPT_USERPWD, "$username:$password");
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
        curl_setopt($ch, CURLOPT_FILE, $fp);
        $data = curl_exec($ch);
        curl_close($ch);
        fclose($fp);
    }
}
The problem is, this works on simple changesets with 5-10 file changes. But when I push the whole project for the first time (for example with 600-700 files and folders) to my Bitbucket private repo, this script doesn't work (it just fails silently; nothing in errors.log).
What am I missing?
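One hedged guess (the script in the update below raises set_time_limit for exactly this reason): fetching 600-700 files one cURL request at a time can easily exceed PHP's default max_execution_time, and hitting that limit mid-run can look like a silent failure. Raising the limits at the top of the hook script may help:

set_time_limit(0);       // no execution-time limit for this run
ignore_user_abort(true); // keep going even if Bitbucket closes the connection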
By the way, could I do something like this instead?
As we know, Bitbucket can send POST information to an exact URL (given by the user) directly after a commit has been made. So when deploy.php receives the POST, we could get the entire commit as a zip or tar, clean our current files, and unzip the new commit onto the web server.
Is that possible? If yes, then how? Any other good way?
Update
I found the code below for automated deployment of a PHP project. The problem is that the URL https://bitbucket.org/$username/$reponame/get/tip.zip doesn't work on a Bitbucket private Git repo, probably related to authentication (I haven't tested this on a public repo). What I need is to get the last commit's zip file and unzip it inside my project.
<?php
// your Bitbucket username
$username = "edifreak";
// your Bitbucket repo name
$reponame = "canvas-game-demo";
// extract to
$dest = "./"; // leave ./ for relative destination
////////////////////////////////////////////////////////
// Let's get stuff done!
// set higher script timeout (for large repo's or slow servers)
set_time_limit(380);
// download the repo zip file
$repofile = file_get_contents("https://bitbucket.org/$username/$reponame/get/tip.zip");
file_put_contents('tip.zip', $repofile);
unset($repofile);
// unzip
$zip = new ZipArchive;
$res = $zip->open('tip.zip');
if ($res === TRUE) {
    $zip->extractTo('./');
    $zip->close();
} else {
    die('ZIP not supported on this server!');
}
// delete unnecessary .hg files
#unlink("$username-$reponame-tip/.hgignore");
#unlink("$username-$reponame-tip/.hg_archival.txt");
// function to delete all files in a directory recursively
function rmdir_recursively($dir) {
    if (is_dir($dir)) {
        $objects = scandir($dir);
        foreach ($objects as $object) {
            if ($object != "." && $object != "..") {
                if (filetype($dir . "/" . $object) == "dir") {
                    rmdir_recursively($dir . "/" . $object);
                } else {
                    unlink($dir . "/" . $object);
                }
            }
        }
        reset($objects);
        rmdir($dir);
    }
}
// function to recursively copy the files
function copy_recursively($src, $dest) {
    if (is_dir($src)) {
        if ($dest != "./") rmdir_recursively($dest);
        #mkdir($dest);
        $files = scandir($src);
        foreach ($files as $file) {
            if ($file != "." && $file != "..") {
                copy_recursively("$src/$file", "$dest/$file");
            }
        }
    } else if (file_exists($src)) {
        copy($src, $dest);
    }
    rmdir_recursively($src);
}
// start copying the files from extracted repo and delete the old directory recursively
copy_recursively("$username-$reponame-tip", $dest);
// delete the repo zip file
unlink("tip.zip");
// Yep, we're done :)
echo "We're done!";
?>
This solution does not provide authentication:
// download the repo zip file
$repofile = file_get_contents("https://bitbucket.org/$username/$reponame/get/tip.zip");
file_put_contents('tip.zip', $repofile);
unset($repofile);
But cURL allows it, so a zip archive can be downloaded from a private repository in the same way as in the first script:
$node = ''; // a node from repo, like c366e96f16...
$fp = fopen($path, 'w');
$ch = curl_init("https://bitbucket.org/$username/$reponame/get/$node.zip");
curl_setopt($ch, CURLOPT_USERPWD, "$username:$password");
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
curl_setopt($ch, CURLOPT_FILE, $fp);
$data = curl_exec($ch);
curl_close($ch);
fclose($fp);
I have tested it with my Bitbucket account and it works very well.
If we need to get the last changeset node, we can use the Bitbucket API (GET a list of changesets):
$username = 'login';
$password = 'pass';
$owner = $username; // if user is owner
$repo = 'repo name';
$response = "";
$callback = function($url, $chunk) use (&$response) {
    $response .= $chunk;
    return strlen($chunk);
};
$ch = curl_init("https://api.bitbucket.org/1.0/repositories/$owner/$repo/changesets?limit=1");
curl_setopt($ch, CURLOPT_USERPWD, "$username:$password");
curl_setopt($ch, CURLOPT_HEADER, 0);
curl_setopt($ch, CURLOPT_HTTPHEADER, array('User-Agent:Mozilla/5.0'));
curl_setopt($ch, CURLOPT_WRITEFUNCTION, $callback);
curl_exec($ch);
curl_close($ch);
$changesets = json_decode($response, true);
$node = $changesets['changesets'][0]['node'];
$raw_node = $changesets['changesets'][0]['raw_node'];
print($node . PHP_EOL);
print($raw_node . PHP_EOL);
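Once $node is known, it can be plugged into the zip URL from the snippet above:

$zipUrl = "https://bitbucket.org/$username/$reponame/get/$node.zip";

(The deploy script in the update below does the same thing, except it takes the node from Bitbucket's POST payload instead of from the API.)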
I recently discovered Capistrano, which is a great tool. It was initially developed for Ruby, but it's also great in combination with PHP: http://www.davegardner.me.uk/blog/2012/02/13/php-deployment-with-capistrano/
Based on your update, replace your PHP file's contents with the code below:
<?php
// Set these dependent on your BB credentials
$username = '';
$password = '';
// your Bitbucket repo name
$reponame = "";
// extract to
$dest = "./"; // leave ./ for relative destination
// Grab the data from BB's POST service and decode
$json = stripslashes($_POST['payload']);
$data = json_decode($json);
// set higher script timeout (for large repo's or slow servers)
set_time_limit(5000);
// Set some parameters to fetch the correct files
$uri = $data->repository->absolute_url;
$node = $data->commits[0]->node;
$files = $data->commits[0]->files;
// download the repo zip file
$fp = fopen("tip.zip", 'w');
$ch = curl_init("https://bitbucket.org/$username/$reponame/get/$node.zip");
curl_setopt($ch, CURLOPT_USERPWD, "$username:$password");
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
curl_setopt($ch, CURLOPT_FILE, $fp);
$data = curl_exec($ch);
curl_close($ch);
fclose($fp);
// unzip
$zip = new ZipArchive;
$res = $zip->open('tip.zip');
if ($res === TRUE) {
    $zip->extractTo('./');
    $zip->close();
} else {
    die('ZIP not supported on this server!');
}
// function to delete all files in a directory recursively
function rmdir_recursively($dir) {
    if (is_dir($dir)) {
        $objects = scandir($dir);
        foreach ($objects as $object) {
            if ($object != "." && $object != "..") {
                if (filetype($dir . "/" . $object) == "dir") {
                    rmdir_recursively($dir . "/" . $object);
                } else {
                    unlink($dir . "/" . $object);
                }
            }
        }
        reset($objects);
        rmdir($dir);
    }
}
// function to recursively copy the files
function copy_recursively($src, $dest) {
    if (is_dir($src)) {
        if ($dest != "./") {
            rmdir_recursively($dest);
        }
        #mkdir($dest);
        $files = scandir($src);
        foreach ($files as $file) {
            if ($file != "." && $file != "..") {
                copy_recursively("$src/$file", "$dest/$file");
            }
        }
    } else if (file_exists($src)) {
        copy($src, $dest);
    }
    rmdir_recursively($src);
}
// start copying the files from extracted repo and delete the old directory recursively
copy_recursively("$username-$reponame-$node", $dest);
// delete the repo zip file
unlink("tip.zip");
?>
Update
Here are repositories of this script (modified by me) on:
GitHub
Bitbucket
