Download and store a file remotely - PHP

I'm trying to download a remote XML file, but it's not working: the file is never stored in my storage.
My code:
$url = 'http://xml.url.xml';
set_time_limit(0);
// Download file and save it on folder
$guzzleClient = new Client();
$response = $guzzleClient->get($url);
$body = $response->getBody();
$body->seek(0);
$size = $body->getSize();
$file = $body->read($size);
Storage::download($file);

The Storage::download() method is used to generate a response that forces a download in the browser.
Use Storage::put('filename.xml', $content) instead.
You can read more in the docs:
https://laravel.com/docs/5.6/filesystem
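For reference, a minimal sketch of the corrected flow, assuming the same Guzzle client and Laravel Storage facade used in the question:

use GuzzleHttp\Client;
use Illuminate\Support\Facades\Storage;

$url = 'http://xml.url.xml';
set_time_limit(0);

$guzzleClient = new Client();
$response = $guzzleClient->get($url);

// Persist the response body to the default disk; Storage::put returns true on success
Storage::put('filename.xml', $response->getBody()->getContents());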

Related

Google Drive - Obtaining the FileSize of a File with Just the FileID

I need to obtain a file's metadata using just the Google Drive FileID, the assumption being that I don't have knowledge of the folderID. Using only the FileID, the API returns null for the file size but the correct values for the other metadata.
Just to be clear, I can get all the metadata (including file size) by processing the files within a folder, for example:
<?php
/*
 * With this example I have the folderID from a Google Drive shared link
 */
$folderId = "https://drive.google.com/drive/folders/1ggWcacF9qQroZOkhfb3tTEwvBzclccwI?usp=share_link";
$Auth = new Oauth2Authentication();
$client = $Auth->GetClient();
$client = $Auth->initializeClient($client);
$drive = new DriveService($client);
$pattern = "/[-\w]{25,}(?!.*[-\w]{25,})/";
preg_match($pattern, $folderId, $match); // grabs the folderID from the URL
/*
 * Now list the folder contents
 */
$responseFiles = $drive->ListFiles($match[0]);
$files = $responseFiles['files'];
foreach ($files as $file) {
    $id[] = $file->id;
    $name[] = $file->name;
    $mimeType[] = $file->mimeType;
    $size[] = $file->size; // <- correct file size
}
$drive->DownloadFile($id, $name, $size);
The code below, which I need to work, returns most of the metadata except that, as stated above, $file->size is always null. My DownloadFile method needs the file size so that it can better handle large files. Can anyone help me better understand the problem?
<?php
/*
 * With this example I have only the fileID from a Google Drive shared link
 */
$fileId = "https://drive.google.com/file/d/1EjNk1ijPLKJMwXfzkEIG487HFzx0v80v/view?usp=share_link";
$Auth = new Oauth2Authentication();
$client = $Auth->GetClient();
$client = $Auth->initializeClient($client);
$drive = new DriveService($client);
$pattern = "/[-\w]{25,}(?!.*[-\w]{25,})/";
preg_match($pattern, $fileId, $match); // grabs the fileID from the URL
/*
 * Now get the file metadata and download the file
 */
$fileId = $match[0];
$service = new Google_Service_Drive($client);
$file = $service->files->get($fileId);
$drive->DownloadFile($file->id, $file->name, $file->size); // $file->size is always null
Update: Thanks to @DaImTo I was pointed in the right direction. The file's full details can be obtained by adding
array('fields' => '*')
to the argument list of the $service->files->get method, i.e.
$file = $service->files->get($fileId, array('fields' => '*'));
Files.list returns a list of files with a limited set of fields in the response by default:
files": [
{
"kind": "drive#file",
"mimeType": "application/vnd.google-apps.spreadsheet",
"id": "1x8-vD-X2wltablGF22Lpwup8VtxNY",
"name": "Experts Activity Dump go/ExpertsActivities"
},
To get the file size, either tweak the optional fields parameter or just request everything with
fields=*
or
$responseFiles = $drive->ListFiles($match[0], array('fields' => '*'));
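If you only need a few properties, naming them explicitly is cheaper than requesting everything. A small sketch, using the field names from the Drive v3 files resource and the $service and $drive objects from the question:

// Request only the fields DownloadFile actually needs
$file = $service->files->get($fileId, array('fields' => 'id,name,size'));
$drive->DownloadFile($file->id, $file->name, $file->size); // size is now populated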

Read the CSV file content

I want to read the content of a CSV file using PHP and Google Drive API v3.
I have the file ID and file name, but I am not sure how I can read the file content.
$service = new Drive($client);
$results = $service->files->listFiles();
$fileId="1I****************";
$file = $service->files->get($fileId);
The Google Drive API is a file storage API. It allows you to upload, download, and manage the storage of files.
It does not give you access to the contents of a file.
To do this you would need to download the file and open it locally.
Alternatively, since it's a CSV file, you may want to consider converting it to a Google Sheet; then you could use the Google Sheets API to access the data within the file programmatically.
Code for downloading a file from the Google Drive API would look something like this.
The full sample can be found here: large-file-download.php
// If this is a POST, download the file
if ($_SERVER['REQUEST_METHOD'] == 'POST') {
    // Determine the file's size and ID
    $fileId = $files[0]->id;
    $fileSize = intval($files[0]->size);

    // Get the authorized Guzzle HTTP client
    $http = $client->authorize();

    // Open a file for writing
    $fp = fopen('Big File (downloaded)', 'w');

    // Download in 1 MB chunks
    $chunkSizeBytes = 1 * 1024 * 1024;
    $chunkStart = 0;

    // Iterate over each chunk and write it to our file
    while ($chunkStart < $fileSize) {
        $chunkEnd = $chunkStart + $chunkSizeBytes;
        $response = $http->request(
            'GET',
            sprintf('/drive/v3/files/%s', $fileId),
            [
                'query' => ['alt' => 'media'],
                'headers' => [
                    'Range' => sprintf('bytes=%s-%s', $chunkStart, $chunkEnd)
                ]
            ]
        );
        $chunkStart = $chunkEnd + 1;
        fwrite($fp, $response->getBody()->getContents());
    }

    // Close the file pointer
    fclose($fp);
}
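For a small CSV that fits in memory, a shorter sketch may be enough (assuming the same authorized $client and the $fileId from the question; alt=media asks Drive to return the file contents rather than its metadata):

$service = new Google_Service_Drive($client);
$response = $service->files->get($fileId, array('alt' => 'media'));
$content = $response->getBody()->getContents();

// Parse the CSV into an array of rows
$rows = array_map('str_getcsv', explode("\n", trim($content)));

The chunked Range approach above is only worth the extra code for files too large to buffer in memory.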

Create & zip & download an S3 folder/multiple files with AWS SDK PHP

I want to create a PHP function, triggered in JS, that can:
1) Retrieve all AWS S3 bucket files from a specific folder (I can also provide the path of each file)
2) Create a zip containing all the S3 files
3) Download the zip when the trigger is hit (CTA)
I'm able to download a single file with the getObject method from this example. However, I can't find any information on how to download multiple files and zip them.
I tried the downloadBucket method; however, it downloads all the files into my project directory rather than as a zip file.
Here is my code:
<?php
// AWS info + connection
$IAM_KEY = 'XXXXX';
$IAM_SECRET = 'XXXXX';
$region = 'XXXXX';

require '../../vendor/autoload.php';

use Aws\S3\S3Client;
use Aws\S3\Exception\S3Exception;

$FolderToDownload = "FolderToDownload";

// Connection OK
$s3 = S3Client::factory(
    array(
        'credentials' => array(
            'key' => $IAM_KEY,
            'secret' => $IAM_SECRET
        ),
        'version' => 'latest',
        'region' => $region
    )
);

try {
    $bucketName = 'XXXXX';
    $destination = 'NeedAZipFileNotAFolderPath';
    $options = array('debug' => true);
    // Adds all files into the folder: NeedAZipFileNotAFolderPath/Files1.png...Files12.png...
    $s3->downloadBucket($destination, $bucketName, $FolderToDownload, $options);
    // Looking for a zip file that can be downloaded
    // Can I use downloadBucket? Is there a better way to do it?
    // If needed I can create an array of all file paths that need to be added to the zip file & downloaded
} catch (S3Exception $e) {
    echo $e->getMessage() . PHP_EOL;
}
?>
If someone can help, that would be nice.
Thanks
You can use ZipArchive and list the objects under a prefix (the bucket's "folder"). In this case, I also use registerStreamWrapper to read each key's contents and add them to the zip file being created.
Something like this:
<?php
require '/path/to/sdk-or-autoloader';

$s3 = Aws\S3\S3Client::factory(/* sdk config */);
$s3->registerStreamWrapper();

$zip = new ZipArchive;
$zip->open('/path/to/zip-you-are-creating.zip', ZipArchive::CREATE);

$bucket = 'your-bucket';
$prefix = 'your-prefix-folder'; // ex.: 'image/test/folder/'

$objects = $s3->getIterator('ListObjects', array(
    'Bucket' => $bucket,
    'Prefix' => $prefix
));

foreach ($objects as $object) {
    $contents = file_get_contents("s3://{$bucket}/{$object['Key']}"); // get file
    $zip->addFromString($object['Key'], $contents); // add file contents to zip
}

$zip->close();

// Download the zip file
header("Content-Description: File Transfer");
header("Content-Type: application/octet-stream");
header("Content-Disposition: attachment; filename=\"zip-you-are-creating.zip\""); // filename should be a basename, not a path
readfile('/path/to/zip-you-are-creating.zip');
?>
You can also delete the zip file after the download, if you prefer.
This should work in most cases, but I don't want to save the file on the server, and I don't see how I can download multiple objects directly from the AWS S3 bucket via the browser without saving a file on my server. If anyone knows, please share with us.
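One compromise, if the concern is a lingering file rather than any disk use at all: build the zip in a temporary file and delete it as soon as it has been streamed. A sketch along the lines of the answer above (the bucket iteration is unchanged; tempnam and unlink are standard PHP):

// Build the zip in the system temp dir instead of a fixed path
$zipPath = tempnam(sys_get_temp_dir(), 'zip');
$zip = new ZipArchive;
$zip->open($zipPath, ZipArchive::OVERWRITE);
// ... addFromString() each S3 object as in the answer above ...
$zip->close();

header("Content-Type: application/octet-stream");
header("Content-Disposition: attachment; filename=\"download.zip\"");
header("Content-Length: " . filesize($zipPath));
readfile($zipPath);
unlink($zipPath); // nothing is left behind on the server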

How to create Temporary URL to upload file to RackSpace Cloud Files using PHP?

I have this code to get files from Rackspace Cloud Files:
$username = getenv('RS_USERNAME');
$apikey = getenv('RS_APIKEY');
$containerName = 'imagenesfc';
$region = 'DFW';

$client = new Rackspace(Rackspace::US_IDENTITY_ENDPOINT, array(
    'username' => $username,
    'apiKey' => $apikey,
));

$filename = "myfile.ext";
$service = $client->objectStoreService(null, $region);
$container = $service->getContainer($containerName);
$object = $container->getObject($filename);
$account = $service->getAccount();
$account->setTempUrlSecret();
$tempUrl = $object->getTemporaryUrl(15, 'GET');
The php-opencloud documentation says you can change 'GET' to 'PUT', which I imagine lets you put a file instead of getting it. The problem is that the file doesn't exist yet, and apparently the only way to create a file is to upload it first? PHP SDK Container
In Amazon S3 I can do the following to get what I want:
$keyname = "myfile.ext";
$arr = array(
    'Bucket' => 'bucket',
    'Key' => $keyname
);
$cmd = $s3Client->getCommand('PutObject', $arr);
$request = $s3Client->createPresignedRequest($cmd, '+10 minutes');
$presignedUrl = (string) $request->getUri();
Then I can write to the presigned URL any way I prefer, for example with Java:
url = new URL(jObj.get("presignedUrl").toString().replace("\\", ""));
connection=(HttpURLConnection) url.openConnection();
connection.setDoOutput(true);
connection.setRequestMethod("PUT");
out = new DataOutputStream(connection.getOutputStream());
...(write,flush,close)
So, basically what I want to do, is get the upload URL from RackSpace and write to it like I do with Amazon s3.
Is it possible? And if it is, how can you do it?
I need to do it this way because my API will provide only download and upload links, so no traffic goes directly through it. I can't have it saving the files to my API server and then uploading them to the cloud.
Yes, you can simulate a file upload without actually uploading content to the API - all you need to do is determine what the filename will be. The code you will need is:
$object = $container->getObject();
$object->setName('blah.txt');
$account = $service->getAccount();
$account->setTempUrlSecret();
echo $object->getTemporaryUrl(100, 'PUT');
The getObject method returns an empty object, and all you're doing is setting the remote name on that object. When a temp URL is created, it uses the name you just set and presents a temporary URL for you to use, regardless of whether the object exists remotely. The temp URL can then be used to create the object.
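To actually create the object, send a PUT with the file body to that URL before the TTL expires. A quick sketch with Guzzle (any HTTP client works, since the URL is pre-authorized; the local path is a placeholder):

$guzzle = new GuzzleHttp\Client();
// PUT the local file's bytes to the pre-authorized temp URL
$guzzle->request('PUT', $tempUrl, [
    'body' => fopen('/path/to/local-file.ext', 'r') // placeholder path
]);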

phpleague flysystem read and write to large file on server

I am using Flysystem with an Iron.io queue and am attempting to run a DB query that will return ~1.8 million records, processing 5000 at a time. Here is the error message I receive once the file grows past 50 MB or so:
PHP Fatal error: Allowed memory size of ########## bytes exhausted
Here are the steps I would like to take:
1) Get the data
2) Turn it into a CSV-appropriate string (i.e. implode(',', $dataArray) . "\r\n")
3) Get the file from the server (in this case S3)
4) Read that file's contents, append this new string, and re-write that content to the S3 file
Here is a brief rundown of the code I have:
public function fire($job, $data)
{
    // First set the headers and write the initial file to the server
    $this->filesystem->write($this->filename, implode(',', $this->setHeaders($parameters)) . "\r\n", [
        'visibility' => 'public',
        'mimetype' => 'text/csv',
    ]);

    // Loop to get new sets of data
    $offset = 0;
    while ($this->exportResult) {
        $this->exportResult = $this->getData($parameters, $offset);
        if ($this->exportResult) {
            $this->writeToFile($this->exportResult);
            $offset += 5000;
        }
    }
}

private function writeToFile($contentToBeAdded = '')
{
    $content = $this->filesystem->read($this->filename);

    // Append new data
    $content .= $contentToBeAdded;

    $this->filesystem->update($this->filename, $content, [
        'visibility' => 'public'
    ]);
}
I'm assuming this is NOT the most efficient approach? I am going off of these docs:
PHPLeague Flysystem
If anyone can point me in a more appropriate direction, that would be awesome!
Flysystem supports read/write/update streams.
Please check the latest API: https://flysystem.thephpleague.com/api/
$stream = fopen('/path/to/database.backup', 'r+');
$filesystem->writeStream('backups/'.strftime('%G-%m-%d').'.backup', $stream);

// Using writeStream you can also directly set the visibility
$filesystem->writeStream('backups/'.strftime('%G-%m-%d').'.backup', $stream, [
    'visibility' => AdapterInterface::VISIBILITY_PRIVATE
]);

if (is_resource($stream)) {
    fclose($stream);
}

// Or update a file with stream contents
$filesystem->updateStream('backups/'.strftime('%G-%m-%d').'.backup', $stream);

// Retrieve a read-stream
$stream = $filesystem->readStream('something/is/here.ext');
$contents = stream_get_contents($stream);
fclose($stream);

// Create or overwrite using a stream
$putStream = tmpfile();
fwrite($putStream, $contents);
rewind($putStream);
$filesystem->putStream('somewhere/here.txt', $putStream);

if (is_resource($putStream)) {
    fclose($putStream);
}
If you are working with S3, I would use the AWS SDK for PHP directly to solve this particular problem. Appending to a file is actually very easy using the SDK's S3 stream wrapper, and doesn't force you to read the entire file into memory.
$s3 = \Aws\S3\S3Client::factory($clientConfig);
$s3->registerStreamWrapper();

// Open the existing object in append mode via the s3:// stream wrapper
$appendHandle = fopen("s3://{$bucket}/{$key}", 'a');
fwrite($appendHandle, $data);
fclose($appendHandle);
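Applied to the question's code, writeToFile could then drop the read/concatenate/update cycle entirely. A sketch, under the assumption that the bucket name and object key are available on the class:

private function writeToFile($contentToBeAdded = '')
{
    // $this->bucket is assumed to be configured on the class
    // Append directly to the S3 object instead of re-writing the whole file
    $handle = fopen("s3://{$this->bucket}/{$this->filename}", 'a');
    fwrite($handle, $contentToBeAdded);
    fclose($handle);
}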
