listObjects equivalent in Azure using PHP SDK - php

I am new to Azure and I'm trying to list all objects within a blob container that have an extension of .json.
I can do this easily in AWS and it works perfectly, but I've been unable to find an equivalent command in Azure.
Here is what I've tried. It only returns the first .json file.
I need to list all the .json files in all folders (I know they are not really folders, but hopefully people understand what I'm trying to do).
$containerName = 'calls';
$listBlobsOptions = new ListBlobsOptions();
$results = $blobClient->listBlobs($containerName, $listBlobsOptions);
foreach ($results->getBlobs() as $blob) {
    $keyname = $blob->getName();
    echo "\nKey name is " . $keyname; // I can see the key name
    if (strpos($keyname, 'meta-data.json') !== false) {
        $blobUrl = $blob->getUrl(); // I get the blob URL returned, but it's only ever the first level
        $metadata = file_get_contents($blobUrl); // tried to get the file; no success, another issue to address later
        echo $metadata;
    }
}
Any help would be greatly appreciated.

You can use the code below to list the blobs in a folder with the Azure PHP SDK:
// List blobs.
$key = 'keyname';
$blobListOptions = new ListBlobsOptions();
$blobListOptions->setPrefix($key);
$blobListOptions->setDelimiter('/'); // blob prefixes are only populated when a delimiter is set
$blobList = $blobRestProxy->listBlobs($blobContainer, $blobListOptions);
foreach ($blobList->getBlobPrefixes() as $key => $blob) {
    echo "BlobPrefix " . $key . ": \t" . $blob->getName() . "\n";
}
foreach ($blobList->getBlobs() as $key => $blob) {
    echo "Blob " . $key . ": \t" . $blob->getName() . "\t(" . $blob->getUrl() . ")\n";
}
Thanks to Microsoft for providing more information on the Azure PHP SDKs on GitHub.
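Since the original goal was every .json blob in every "folder", note that blob storage is flat: if you list without a delimiter you get all blobs in the container regardless of virtual folder, and you page through them with the continuation token (which may be why only part of the listing came back). A minimal, untested sketch along those lines; it assumes $blobClient is an already-constructed BlobRestProxy and simply filters names ending in .json:
use MicrosoftAzure\Storage\Blob\Models\ListBlobsOptions;

$containerName = 'calls';
$options = new ListBlobsOptions();
// No prefix and no delimiter: the listing is flat, so blobs in every
// virtual folder are returned.
do {
    $result = $blobClient->listBlobs($containerName, $options);
    foreach ($result->getBlobs() as $blob) {
        $name = $blob->getName();
        if (substr($name, -5) === '.json') {
            echo $name . " -> " . $blob->getUrl() . "\n";
        }
    }
    // Large containers return results in pages; follow the token.
    $options->setContinuationToken($result->getContinuationToken());
} while ($result->getContinuationToken());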

Related

Php gunzip and get name

Good morning!
I unzipped the archive using PHP, but after that I need to get the names of the extracted files and perform another action on them, and I can't figure out how to do it.
Is it possible to do this with PHP? If so, please tell me how and, if possible, give an example, even a simple one.
Thanks
$filelist = glob("/emp/*.gz");
$filedirtxt = glob("/emp/*.txt");
foreach ($filelist as $key => $value) {
    $filename = pathinfo($value);
    $gzname = $filename['basename'];
    $gunzip = shell_exec("gunzip " . "/emp/" . $gzname);
    foreach ($filedirtxt as $keytxt => $valuetxt) {
        $filename_txt = pathinfo($valuetxt);
        $name_txt = $filename_txt['basename'];
        echo $name_txt . "\n";
    }
}
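One catch in that attempt: the *.txt glob runs before gunzip has produced any files, so it only lists pre-existing ones. A simpler route is to derive each extracted name directly from the archive name, since gunzip just strips the trailing ".gz". A minimal sketch, using PHP's built-in zlib functions instead of shelling out (paths are the ones from the question):
foreach (glob("/emp/*.gz") as $gzPath) {
    $outPath = substr($gzPath, 0, -3); // "/emp/file.txt.gz" -> "/emp/file.txt"

    $in  = gzopen($gzPath, 'rb');
    $out = fopen($outPath, 'wb');
    while (!gzeof($in)) {
        fwrite($out, gzread($in, 8192));
    }
    gzclose($in);
    fclose($out);

    echo basename($outPath) . "\n"; // the name of the file just extracted
}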

open file on client stored on server

I want to open a server-stored HTML report file on a client machine.
I want to bring back a list of all the saved reports in that folder (scandir).
That way the user can click on any of the created reports to open it.
So if you click on a report to open it, you need the location the report can be opened from.
This is my dilemma. I'm not sure how to get a decent IP, port and folder location that the client can understand.
Below is what I've been experimenting with.
Using this won't work, obviously:
$path = $_SERVER['DOCUMENT_ROOT']."/reports/saved_reports/";
So I thought I might try this instead:
$host = gethostname();
$ip = gethostbyname($host);
$ip = $ip.':'.$_SERVER['SERVER_PORT'];
$path = $ip."/reports/saved_reports/";
$files = scandir($path);
After the above code I loop through each file and generate an array with the name, date created and path. This is sent back to generate a list of reports in a table that the user can interact with (open, delete, edit).
But this fails as well.
So I'm officially clueless on how to approach this.
PS. I'm adding react.js as a tag, because that is my front end and it might be useful to know.
Your question may be partially answered here: https://stackoverflow.com/a/11970479/2781096
Get the file names from the specified path, then hit cURL or the get_text() function again to save the files.
function get_text($filename) {
    $content = ''; // initialize so the concatenation below doesn't raise a notice
    $fp_load = fopen("$filename", "rb");
    if ($fp_load) {
        while (!feof($fp_load)) {
            $content .= fgets($fp_load, 8192);
        }
        fclose($fp_load);
        return $content;
    }
}
$matches = array();
// This will give you the names of all the files available on the specified path.
preg_match_all("/(a href\=\")([^\?\"]*)(\")/i", get_text($ip."/reports/saved_reports/"), $matches);
foreach ($matches[2] as $match) {
    echo $match . '<br>';
    // Again hit cURL to download each of the reports.
}
Get the list of reports:
<?php
$path = $_SERVER['DOCUMENT_ROOT']."/reports/saved_reports/";
$files = scandir($path);
foreach ($files as $file) {
    if ($file !== '.' && $file !== '..') {
        echo "<a href='show-report.php?name=".$file."'>$file</a><br/>";
    }
}
?>
Then write a second PHP file for showing the HTML reports; it receives the file name as a GET parameter and echoes the content of the given report.
show-report.php
<?php
$path = $_SERVER['DOCUMENT_ROOT']."/reports/saved_reports/";
if (isset($_GET['name'])) {
    $name = basename($_GET['name']); // basename() guards against path traversal like ?name=../../etc/passwd
    echo file_get_contents($path.$name);
}
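Since the front end is React, the listing endpoint the question describes (name, date created and path for each report) can also just return JSON. A hedged, untested sketch with the same assumed paths:
<?php
$dir = $_SERVER['DOCUMENT_ROOT']."/reports/saved_reports/";
$reports = array();
foreach (scandir($dir) as $file) {
    if ($file === '.' || $file === '..') {
        continue;
    }
    $reports[] = array(
        'name'    => $file,
        'created' => date('Y-m-d H:i:s', filemtime($dir.$file)),
        // A web path the browser can fetch, not a filesystem path.
        'path'    => '/reports/saved_reports/'.rawurlencode($file),
    );
}
header('Content-Type: application/json');
echo json_encode($reports);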

Getting the real URL of files uploaded in Moodle

I am trying to get the URL of a file that I have uploaded to Moodle. I want to access the file via the browser using HTTP. The path name of the file is hashed in the Moodle database. Is there a way of getting the real URL of uploaded files in Moodle? This is the code I was trying out using the Moodle File API.
<?php
require_once("../config.php");
$course_name = $_GET["course"];
$table_files = "files";
$results = $DB->get_records($table_files, array('filename' => $course_name));
// Get the file details here:
foreach ($results as $obj) {
    $contextid = $obj->contextid;
    $component = $obj->component;
    $filearea = $obj->filearea;
    $itemid = $obj->itemid;
}
$url = "$CFG->wwwroot/pluginfile.php/$contextid/$component/$filearea/$itemid/$course_name";
print_r($url);
?>
Will appreciate the help.
This works for Moodle version 2.3
<?php
require_once("../config.php");
$filename = "my_image.jpg";
$table_files = "files";
$results = $DB->get_record($table_files, array('filename' => $filename, 'sortorder' => 1));
$baseurl = "$CFG->wwwroot/pluginfile.php/$results->contextid/$results->component/$results->filearea/$results->itemid/$filename";
echo $baseurl;
The result is the URL of the image "my_image.jpg".
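As an alternative, recent Moodle versions can build the pluginfile URL for you via moodle_url::make_pluginfile_url(), which avoids assembling the path by hand. A hedged, untested sketch reusing the field names from the answer above:
<?php
require_once("../config.php");
$filename = "my_image.jpg";
$record = $DB->get_record('files', array('filename' => $filename, 'sortorder' => 1));
$url = moodle_url::make_pluginfile_url(
    $record->contextid,
    $record->component,
    $record->filearea,
    $record->itemid,
    $record->filepath,
    $record->filename
);
echo $url;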

Google Drive PHP: how to list the complete subtree of a folder

How do I get basic info (id, title, mime type at least) for each file and folder in the subtree of a given folder with as few API calls as possible, i.e. without calling the API to download details for every subfolder?
I found a workaround: read all files by some non-hierarchical characteristic (e.g. owner) and build the tree structure in the client script. Unfortunately my files are all from one owner (the application), so I cannot do it this way.
OK, here is example code for the recursive multiple-API-calls approach, which can be enough for some use cases. But I would like to find a better concept (not to discuss this implementation, but another way that does not call the API for each folder):
class Foo {
    const FOLDER_MIME_TYPE = 'application/vnd.google-apps.folder';

    public function getSubtreeForFolder($parentId, $sort = true)
    {
        $service = $this->createCrmGService();
        // A. folder info
        $file = $service->files->get($parentId);
        $ret = array(
            'id' => $parentId,
            'name' => $file->getTitle(),
            'description' => $file->getDescription(),
            'mimetype' => $file->getMimeType(),
            'is_folder' => true,
            'children' => array(),
            'node' => $file,
        );
        if ($ret['mimetype'] != self::FOLDER_MIME_TYPE) {
            throw new Exception(_t("{$ret['name']} is not a folder."));
        }
        // nextPageToken must be requested alongside items(...) or paging stops after
        // the first page; downloadUrl is included so each child's URL is populated.
        $items = $this->findAllFiles('trashed = false', $parentId,
            'nextPageToken,items(alternateLink,description,downloadUrl,fileSize,id,mimeType,title)', $service);
        foreach ($items as $child) {
            if ($this->isFolder($child)) {
                $ret['children'][] = $this->getSubtreeForFolder($child->id, $sort);
            } else {
                // B. file info
                $a = array();
                $a['id'] = $child->id;
                $a['name'] = $child->title;
                $a['description'] = $child->description;
                $a['is_folder'] = false;
                $a['url'] = $child->getDownloadUrl(); // the child's URL, not the parent folder's
                $a['url_detail'] = $child->getAlternateLink();
                $a['versionLabel'] = false; // FIXME
                $a['node'] = $child;
                if (!$a['versionLabel']) {
                    $a['versionLabel'] = '1.0'; // old-files compatibility hack
                }
                $ret['children'][] = $a;
            }
        }
        if ($sort && isset($ret['children'])) {
            if ($sort === true) {
                // Default: case-insensitive sort by name (a closure instead of the deprecated create_function).
                $sort = function ($a, $b) {
                    return strcasecmp($a['name'], $b['name']);
                };
            }
            usort($ret['children'], $sort);
        }
        return $ret;
    }

    public function findAllFiles($queryString, $parentId = false, $fieldsFilter = 'nextPageToken,items(id,title)', $service = false)
    {
        if (!$service) {
            $service = $this->createCrmGService();
        }
        $result = array();
        $pageToken = null;
        if ($parentId) {
            $queryString .= ($queryString ? ' AND ' : '') . "'{$parentId}' in parents";
        }
        do {
            try {
                $parameters = array('q' => $queryString);
                if ($fieldsFilter) {
                    $parameters['fields'] = $fieldsFilter;
                }
                if ($pageToken) {
                    $parameters['pageToken'] = $pageToken;
                }
                $files = $service->files->listFiles($parameters);
                $result = array_merge($result, $files->getItems());
                $pageToken = $files->getNextPageToken();
            } catch (Exception $e) {
                print "An error occurred: " . $e->getMessage();
                $pageToken = null;
            }
        } while ($pageToken);
        return $result;
    }

    /**
     * @param Google_DriveFile $file
     * @return boolean whether $file is a folder.
     */
    protected function isFolder($file)
    {
        return $file->getMimeType() == self::FOLDER_MIME_TYPE;
    }
}
First, I would suggest you not fetch all files and folders. It takes too much time for users who have many files on their Drive, and there is a query limit on your application key. In fact, many applications with custom file pickers query the API each time the user opens a subfolder.
Second, if this is a web app, it would be a better idea to use Google Picker. Google Picker is much faster and more efficient for picking files from Drive. There are many options and filters, and you get decent control over files.
Third, you cannot fully represent Drive's files and folders in a tree structure. As you can see in the queries, each file has a list of parents, which means a file or folder can have more than one parent. You need some workaround, such as choosing only one of the parents for each file.
If you still want to get all file/folder information, in terms of performance the best implementation is to recursively call Children.list(), as sketched below. FYI, the file id 'root' is a reserved id you can start from. Once you have the ids of the children, you can fetch their metadata with a batched multipart query of Files.get() calls. This is the fastest way to traverse the file system of Google Drive, as far as I know.
Again, unless you have a very good reason, please do not try to traverse all files in Drive at once. Some users have a lot of files in their Drive, and you will make them wait forever no matter how well you optimize. You will also easily hit the query limit.
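A hedged sketch of that recursion (Drive API v2 with the old google-api-php-client; untested, and the batched Files.get() step is left out for brevity):
function listSubtreeIds($service, $folderId = 'root', $depth = 0)
{
    $pageToken = null;
    do {
        $params = array();
        if ($pageToken) {
            $params['pageToken'] = $pageToken;
        }
        $children = $service->children->listChildren($folderId, $params);
        foreach ($children->getItems() as $child) {
            $id = $child->getId();
            echo str_repeat('  ', $depth) . $id . "\n";
            // Children.list() returns only ids; mimeType (folder or not)
            // must come from a (batched) Files.get() on each id. Recursing
            // into a non-folder simply yields an empty child list.
            listSubtreeIds($service, $id, $depth + 1);
        }
        $pageToken = $children->getNextPageToken();
    } while ($pageToken);
}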

Trouble with last_modified - Rackspace Cloud Files (PHP API)

I'm using Rackspace Cloud Files as a backup repository but am new to their PHP API. I want to delete files past a certain age, but I'm having difficulty returning the last_modified date using the API.
$container = $conn->get_container('tmp');
$files = $container->list_objects();
foreach ($files as $file) {
    echo $file; // echo filename
    echo $file->last_modified(); // this syntax is incorrect
}
list_objects returns an array of strings: the names of the objects. You can also get PHP objects that let you use OOP to work with them. Changing as little of your code as possible, we can convert the strings to objects:
$container = $conn->get_container('tmp');
$files = $container->list_objects();
foreach ($files as $file) {
    echo $file; // echo filename
    $file_obj = $container->get_object($file);
    echo $file_obj->last_modified;
}
A little faster, just get an array of objects instead:
$container = $conn->get_container('tmp');
$files = $container->get_objects();
foreach ($files as $file) {
    echo $file->name; // echo filename
    echo $file->last_modified;
}
Note that this code has not been tested, but it should get you pretty close to something that works.
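And since the original goal was deleting files past a certain age, here is a hedged sketch building on the object listing above; it assumes last_modified is a date string strtotime() can parse and uses the container's delete_object() call (also untested):
$maxAgeSeconds = 30 * 24 * 60 * 60; // e.g. anything older than 30 days
$cutoff = time() - $maxAgeSeconds;

$container = $conn->get_container('tmp');
foreach ($container->get_objects() as $file) {
    if (strtotime($file->last_modified) < $cutoff) {
        echo "Deleting " . $file->name . "\n";
        $container->delete_object($file->name);
    }
}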
