In my project I have code for downloading images from a server. In PHP I have:
if (is_dir($dir)) {                        // check that $dir is a directory
    $files = scandir($dir);                // list files
    $i = -1;
    $result = array();
    foreach ($files as $file) {
        $file = $dir.$file;                // complete path of the file
        if (is_file($file)) {              // if it's a file
            $i++;
            $fileo = @fopen($file, "rb");  // open it and write it into $result[$i]
            if ($fileo) {
                $result[$i] = "";
                while (!feof($fileo)) {
                    $result[$i] .= fread($fileo, 1024*8);
                    flush();
                    if (connection_status() != 0) {
                        @fclose($fileo);
                        die();
                    }
                }
                $result[$i] = utf8_encode($result[$i]); // prevents json_encode() from returning null on binary data
                @fclose($fileo);
            }
        }
    }
} else {
    echo json_encode(array('Error'));
    exit; // stop here so the error is the only output
}
echo json_encode($result); // returns the images as an array of strings, each one holding the full contents of an image
And in Java I have an HTTP POST method with an AsyncTask. The problem is that I need to wait until all of the images are downloaded. I have thought that maybe I could download one image together with a number, and if the number is 0, for example, call the server again to download the next image; but that seems like a waste of time, calling the server and listing the files again each time. Is there a better way to download the images one by one, instead of all at the same time, without calling the server N times?
I've had the same problem that you are having, and my solution was the following:
1. Download all the data you need (for example, the list of image names).
2. Send the client the web address of each image.
3. Run one AsyncTask per image to download it, and update your Activity as needed.
A sketch of the server side is shown below. I may not be exact here; I would need some more details to give a better solution.
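For illustration, here is a minimal sketch of that idea on the PHP side, assuming a hypothetical script name and image directory: one call returns the list of file names as JSON, and each later call streams exactly one image, so the directory is only listed once.

<?php
// list-or-serve.php — minimal sketch: either list the image names or
// stream a single image, so the client only asks for the list once.
$dir = '/var/www/images/';           // hypothetical image directory

if (!is_dir($dir)) {
    http_response_code(404);
    echo json_encode(array('Error' => 'No such directory'));
    exit;
}

if (isset($_GET['name'])) {
    // Serve exactly one image per request.
    $name = basename($_GET['name']); // basename() prevents directory traversal
    $path = $dir.$name;
    if (is_file($path)) {
        header('Content-Type: image/jpeg'); // adjust per file type
        header('Content-Length: '.filesize($path));
        readfile($path);
    } else {
        http_response_code(404);
    }
} else {
    // Return only the list of file names; the client then requests
    // each image one by one (one AsyncTask per name).
    $names = array();
    foreach (scandir($dir) as $file) {
        if (is_file($dir.$file)) {
            $names[] = $file;
        }
    }
    echo json_encode($names);
}

On the Android side, each AsyncTask would then request list-or-serve.php?name=<file> for one entry of the list, instead of receiving every image in a single response.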
I want to open a server-stored HTML report file on a client machine.
I want to bring back a list of all the saved reports in that folder (scandir).
This way the user can click on any of the created reports to open them.
So if you click on a report to open it, you need the location the report can be opened from.
This is my dilemma: I'm not sure how to get a decent IP, port and folder location that the client can understand.
Below is what I've been experimenting with.
Using this obviously won't work:
$path = $_SERVER['DOCUMENT_ROOT']."/reports/saved_reports/";
So I thought I might try this instead.
$host= gethostname();
$ip = gethostbyname($host);
$ip = $ip.':'.$_SERVER['SERVER_PORT'];
$path = $ip."/reports/saved_reports/";
$files = scandir($path);
After the above code I loop through each file and generate an array with the name, date created and path. This is sent back to generate a list of reports in a table that the user can interact with (open, delete, edit).
But this fails as well.
So I'm officially clueless about how to approach this.
P.S. I'm adding react.js as a tag because that is my front-end, which might be useful to know.
Your question may be partially answered here: https://stackoverflow.com/a/11970479/2781096
Get the file names from the specified path and hit cURL or the get_text() function again to save the files.
function get_text($filename) {
    $content = "";                    // initialise to avoid an undefined-variable notice
    $fp_load = fopen($filename, "rb");
    if ($fp_load) {
        while (!feof($fp_load)) {
            $content .= fgets($fp_load, 8192);
        }
        fclose($fp_load);
        return $content;
    }
}
$matches = array();
// This will give you the names of all the files available on the specified path.
preg_match_all("/(a href\=\")([^\?\"]*)(\")/i", get_text($ip."/reports/saved_reports/"), $matches);
foreach ($matches[2] as $match) {
    echo $match . '<br>';
    // Again, hit a cURL request to download each of the reports.
}
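As a minimal sketch of that last step, assuming the matched hrefs resolve to plain HTTP URLs, each report could be fetched with cURL like this ($url and $savePath are illustrative names):

// Hypothetical helper: download one report over HTTP with cURL
// and save it locally.
function download_report($url, $savePath) {
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // return the body instead of printing it
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true); // follow redirects
    $body = curl_exec($ch);
    $ok = ($body !== false && curl_getinfo($ch, CURLINFO_HTTP_CODE) == 200);
    curl_close($ch);
    if ($ok) {
        file_put_contents($savePath, $body);
    }
    return $ok;
}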
Get list of reports:
<?php
$path = $_SERVER['DOCUMENT_ROOT']."/reports/saved_reports/";
$files = scandir($path);
foreach ($files as $file) {
    if ($file !== '.' && $file !== '..') {
        echo "<a href='show-report.php?name=".$file."'>$file</a><br/>";
    }
}
?>
Then write a second PHP file for showing the HTML reports, which receives the file name as a GET parameter and echoes the content of the given report.
show-report.php
<?php
$path = $_SERVER['DOCUMENT_ROOT']."/reports/saved_reports/";
if (isset($_GET['name'])) {
    $name = basename($_GET['name']); // basename() prevents directory traversal via the GET parameter
    echo file_get_contents($path.$name);
}
I'm using the getID3 library to get the details of a remote video file. I'm trying to read a portion of the file to get its details; however, some videos don't have the full details at the start.
For these videos, I'm trying to download the full video and then extract the relevant information. However, even after the video has downloaded completely, getID3->analyze($filename) returns the same erroneous file info.
But when I copy the video and then run analyze($filename.'copied.mp4') on the copied video, it returns the correct info even though the file contents are the same. Perhaps getID3 isn't loading the video again; how can I fix this issue without copying the video?
Please find the code below.
if ($fp_remote = fopen($remotefilename, 'r')) {
    echo 'conn opened';
    $localtempfilename = tempnam('/home/xerox/abc', 'whateva').'.mp4';
    if ($fp_local = fopen($localtempfilename, 'wb')) {
        $count = 0;
        $countExpiry = 8;
        while ($buffer = fread($fp_remote, 8192)) {
            $count++;
            fwrite($fp_local, $buffer);
            if ($count >= $countExpiry) {
                fflush($fp_local);
                $getID3 = new getID3;
                $ThisFileInfo = $getID3->analyze($localtempfilename);
                if ($ThisFileInfo["error"]) {
                    print "problem encountered";
                    $countExpiry += 1000; // read roughly 8 MB more before trying again
                } else {
                    break;
                }
            }
        }
        fclose($fp_local);
        $getID31 = new getID3;
        copy($localtempfilename, $localtempfilename.'_copied.mp4');
        $ThisFileInfoz = $getID31->analyze($localtempfilename.'_copied.mp4');
        // Delete temporary file
        unlink($localtempfilename);
        fclose($fp_remote);
        var_dump($ThisFileInfoz);
    }
}
A call to clearstatcache() solved the problem for me, since repeated calls to things like filesize() are cached by PHP, and getID3 won't read beyond what it believes is the end of the file.
Source: James Heinrich, developer of getID3.
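Applied to the loop in the question, that means clearing the stat cache each time before re-analyzing the growing temp file. A minimal sketch (the surrounding variables are the ones from the question's code):

if ($count >= $countExpiry) {
    fflush($fp_local);
    clearstatcache();   // drop PHP's cached filesize() result for the temp file
    $getID3 = new getID3;
    $ThisFileInfo = $getID3->analyze($localtempfilename);
    // ... same error check as before ...
}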
I'm struggling with a simple piece of PHP functionality: creating a ZIP archive with some files in it.
The problem is that it does not create one file called filename.zip, but two files called filename.zip.a07600 and filename.zip.b07600. Please see the following screenshot:
The two files are perfect in size, and I can even rename each of them to filename.zip and extract it without any problems.
Can anybody tell me what is going wrong?
function zipFilesAndDownload_Defect($archive_file_name, $archiveDir, $file_path = array(), $files_array = array()) {
    // Archive file name
    $archive_file = $archiveDir."/".$archive_file_name;
    // Time-to-live
    $archiveTTL = 86400; // 1 day
    // Delete the old zip file
    @unlink($archive_file);
    // Create the object
    $zip = new ZipArchive();
    // Create the file, and return an error if unsuccessful
    if ($zip->open($archive_file, ZipArchive::CREATE) !== TRUE) {
        $response = new stdClass();  // $response was previously undefined here
        $response->res = "Cannot open '$archive_file'";
        return $response;
    }
    // Add each file of the $files_array to the archive
    $i = 0;
    foreach ($files_array as $value) {
        $expl = explode("/", $value);
        $file = $expl[count($expl)-1];
        $path_file = $file_path[$i] . "/" . $file;
        if (file_exists($path_file)) {
            $size = round((filesize($path_file) / 1024), 0); // size in KB (currently unused)
            $zip->addFile($path_file, $file);
        }
        $i++;
    }
    $zip->close();
    // Then send the headers to redirect to the ZIP file
    header("HTTP/1.1 303 See Other"); // 303 is technically correct for this type of redirect
    header("Location: $archive_file");
    exit;
}
The code which calls the function is a file with a switch-case; it is itself called by an Ajax call:
case "zdl":
$files_array = array();
$file_path = array();
foreach ($dbh->query("select GUID, DIRECTORY, BASENAME, ELEMENTID from SMDMS where ELEMENTID = ".$osguid." and PROJECTID = ".$osproject.";") as $subrow) {
$archive_file_name = $subrow['ELEMENTID'].".zip";
$archiveDir = "../".$subrow['DIRECTORY'];
$files_array[] = $archiveDir.DIR_SEPARATOR.$subrow['BASENAME'];
$file_path[] = $archiveDir;
}
zipFilesAndDownload_Defect($archive_file_name, $archiveDir, $file_path, $files_array);
break;
One more piece of code... I tried to rename the latest 123456.zip.a01234 file to 123456.zip and then unlink the old 123456.zip.a01234 (and all previously added .a01234 files) with this function:
function zip_file_exists($pathfile) {
    $arr = array();
    $dir = dirname($pathfile);
    $renamed = 0;
    foreach (glob($pathfile.'.*') as $file) {
        $path_parts = pathinfo($file);
        $dirname   = $path_parts['dirname'];
        $basename  = $path_parts['basename'];
        $extension = $path_parts['extension'];
        $filename  = $path_parts['filename'];
        if ($renamed == 0) {
            $old_name = $file;
            $new_name = str_replace(".".$extension, "", $file);
            @copy($old_name, $new_name);
            @unlink($old_name);
            $renamed = 1;
            //file_put_contents($dir."/test.txt", "old_name: ".$old_name." - new_name: ".$new_name." - dirname: ".$dirname." - basename: ".$basename." - extension: ".$extension." - filename: ".$filename);
        } else {
            @unlink($file);
        }
    }
}
In short: copy works, rename didn't work, and unlink doesn't work at all... I'm out of ideas now. :(
ONE MORE TRY: I put the output of $zip->getStatusString() into a variable and wrote it to a log file... the entry it produced is: Renaming temporary file failed: No such file or directory.
But as you can see in the graphic above, the file 43051221.zip.a07200 is located in the directory where the zip library opens it temporarily.
Thank you in advance for your help!
So, after struggling for days... it was so simple:
I usually work only on *nix servers, so in my scripts I created the folders dynamically with 0777 permissions. I didn't know that IIS doesn't accept this permissions format at all!
So I remoted into the server, right-clicked the Documents folder (the hierarchically topmost folder of all dynamically added files and folders) and gave full control to all the users I found.
Now it works perfectly! The only thing that would be interesting now is: is this dangerous for any reason?
Thanks for your good-will answers...
My suspicion is that your script is hitting the PHP script timeout. PHP's ZipArchive creates a temporary file to zip into, whose name is yourfilename.zip.some_random_number. This file is renamed to yourfilename.zip when the zip file is closed. If the script times out, the temporary file will probably just get left there.
Try reducing the number of files to zip, or increasing the script timeout with set_time_limit():
http://php.net/manual/en/function.set-time-limit.php
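For example (a minimal sketch; the 300-second value is just an illustrative choice):

set_time_limit(300); // allow up to 5 minutes for this request before PHP aborts it
// ... build and close the zip archive as in zipFilesAndDownload_Defect() ...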
In my program I need to read .png files from a .tar file.
I am using the PEAR Archive_Tar class (http://pear.php.net/package/Archive_Tar/redirected).
Everything is fine if the file I'm looking for exists, but if it is not in the .tar file then the function times out after 30 seconds. The class documentation states that it should return null if it does not find the file...
$tar = new Archive_Tar('path/to/mytar.tar');
$filePath = 'path/to/my/image/image.png';
$file = $tar->extractInString($filePath); // This works fine if $filePath is correct;
                                          // if the path to the file does not exist,
                                          // the script will time out after 30 seconds
var_dump($file);
return;
Any suggestions for solving this, or any other library that I could use to solve my problem?
The listContent method returns an array of all files (and other information about them) present in the specified archive. So if you first check whether the file you wish to extract is present in that array, you can avoid the delay you are experiencing.
The code below isn't optimised (for multiple calls to extract different files, for example, the $files array should only be populated once) but it is a good way forward.
include "Archive/Tar.php";
$tar = new Archive_Tar('mytar.tar');
$filePath = 'path/to/my/image/image.png';
$contents = $tar->listContent();
$files = array();
foreach ($contents as $entry) {
$files[] = $entry['filename'];
}
$exists = in_array($filePath, $files);
if ($exists) {
$fileContent = $tar->extractInString($filePath);
var_dump($fileContent);
} else {
echo "File $filePath does not exist in archive.\n";
}
I am working on a piece of code that I want to "spice up" with jQuery, but I can't think of a way to actually make it work. I am sure it's simple; I just need a little advice to get me going.
I want to create a piece of code that makes an Ajax request to start a big loop that downloads files and then uploads them to an S3 bucket of mine. Where I am stuck is that I want to send a response back to the browser every time a file is uploaded, and output a string of text to the screen on completion.
I don't have any of the front-end code working... I'm just trying to get my head wrapped around the logic first... any ideas?
PHP Backend Code:
<?php
public function photos($city) {
    if (isset($city))
        $this->city_name = "{$city}";
    // grab data array from Dropbox folder
    $postcard_assets = $this->conn->getPostcardDirContent("{$this->city_name}", "Photos", TRUE);
    $data = array();
    foreach ($postcard_assets['contents'] as $asset) {
        // only grab contents in the root folder... do not traverse into sub-folders && make sure the folder is not empty
        if (!$asset['is_dir'] && $asset['bytes'] > 0) {
            // get information on the file
            $file = pathinfo($asset['path']);
            // download file from Dropbox
            $original_file = $this->conn->downloadFile(str_replace(" ", "%20", $asset['path']));
            // create file name
            $file_name = $this->cleanFileName($file['basename']);
            // write photo to TMP_DIR ("/tmp/photos/") for manipulation
            $fh = fopen(self::TMP_DIR . $file_name, 'w');
            fwrite($fh, $original_file);
            fclose($fh);
            // resize photo
            $this->resize_photo($file_name);
            // hash file name
            $raw_file = sha1($file_name);
            // create S3 hashed name
            $s3_file_name = "1_{$raw_file}.{$file['extension']}";
            // upload manipulated file to S3
            $this->s3->putObject($s3_file_name, file_get_contents(self::TMP_DIR . $file_name), $this->photo_s3_bucket, 'public-read');
            // check to see if the file exists in the S3 bucket
            $s3_check = $this->s3->getObjectInfo($s3_file_name, $this->photo_s3_bucket);
            // if the file uploaded successfully to S3, load it into the DB
            if ($s3_check['content-length'] > 0) {
                $data['src'] = $s3_file_name;
                $data['width'] = $this->width;
                $data['height'] = $this->height;
                Photo::create_postcard_photo($data, "{$this->city_name}");
                // now that the photo has been uploaded to S3 and saved in the DB, remove the local file for cleanup
                unlink(self::TMP_DIR . $file_name);
                echo "{$file_name} uploaded to S3 and resized!<br />";
            }
        }
    }
    // after the loop is complete, kill the script or nasty PHP header warnings will appear
    exit();
}
?>
The main problem is that with PHP the output is buffered, so it won't return a line at a time. You can try to force a flush, but it's not always reliable.
You could add an entry to the DB for each file that is exchanged and create a separate API to get the details of what has completed.
Generally, jQuery will wait until the request has finished before it allows you to manipulate data from an HTTP request.
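As a sketch of that DB-polling idea (the table and column names here are hypothetical): the upload loop inserts one row per completed file, and a separate endpoint returns those rows so the front end can poll it with a periodic Ajax call.

<?php
// progress.php — hypothetical polling endpoint; assumes an "upload_progress"
// table with columns city, file_name and uploaded_at.
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

$stmt = $pdo->prepare(
    'SELECT file_name, uploaded_at FROM upload_progress WHERE city = ? ORDER BY uploaded_at'
);
$stmt->execute(array($_GET['city']));

header('Content-Type: application/json');
echo json_encode($stmt->fetchAll(PDO::FETCH_ASSOC));

Inside the loop in photos(), an INSERT into the same table would replace the echo, and the front end would run an Ajax GET against progress.php on a setInterval() timer to render each completed file name as it appears.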