I'm trying to upload multiple files to an SFTP site from a local directory.
I can get it to work for a single file, but I would like to be able to upload multiple files whose names vary as well.
$localFile_xml = "C:\xml\Race_" . $value;
chdir($localFile_xml);
//This successfully lists the files
foreach (glob("*.xml") as $filename) {
echo "$filename size " . filesize($filename) . "\n";
}
$remote_XMLfiles = "/FTP/XML/Race_" . $value;
$xmlstream = fopen("ssh2.sftp://$sftp" . $remote_XMLfiles, 'w');
foreach (glob("*.xml") as $filename) {
$xmlfile = file_get_contents($localFile_xml);
fwrite($xmlstream, $xmlfile);
fclose($xmlstream);
}
I believe it's nearly there, but I cannot get the last bit correct.
Thank you so much in advance.
Assuming the remote SSH connection is valid, and that the method you used in your question works for single files, I believe your order of operations needs to be corrected.
As mentioned in my comments, your code attempts to use file_get_contents on a local directory, which is not permitted. It also opens a single $xmlstream for the remote directory, when a stream must be opened per file rather than per directory. This assumes 'C:\\xml\\Race_' . $value; is a directory like C:\\xml\\Race_1 and not a file.
Some minor issues with resource validation, along with Windows-specific issues, need to be addressed:
Windows directory separators should be written as \\ (even when using single quotes), since \ is an escape sequence: it causes \x \t \n \r \' \" \\ to be treated as special characters.
When using fopen($path, $mode), it is recommended to specify the b flag as the last character of the mode, to ensure the file is binary-safe (verbatim) and to avoid ambiguity between operating systems. Alternatively, on Windows, specify the t mode to transparently translate \n to \r\n (only desirable for plain-text files).
$mode = 'rb' (binary-safe read)
$mode = 'rt' (text-mode translation read)
When working with networking streams, it is recommended to test that the stream has successfully written all of its content. I provided the fwrite_stream function below from the PHP manual.
Example
try {
//--- example purposes only ---
//use your own ssh2_connnect, ssh2_auth_password, ssh2_sftp
if (!$ssh2 = ssh2_connect('192.168.56.101', 22)) {
throw new \RuntimeException('Unable to connect to remote host');
}
if (!ssh2_auth_password($ssh2, 'root', '')) {
throw new \RuntimeException('Unable to Authenticate');
}
if (!$sftp = ssh2_sftp($ssh2)) {
throw new \RuntimeException('Unable to initialize SFTP');
}
$value = '1';
//--- end example purposes only ---
$localFile_xml = 'C:\\xml\\Race_' . $value;
if (!$localFile_xml || !is_dir($localFile_xml)) {
throw new \RuntimeException('Unable to retrieve local directory');
}
//retrieve list of XML files
$iterator = new \GlobIterator($localFile_xml . '/*.xml',
\FilesystemIterator::KEY_AS_PATHNAME |
\FilesystemIterator::CURRENT_AS_FILEINFO |
\FilesystemIterator::SKIP_DOTS
);
if (!$iterator->count()) {
throw new \RuntimeException('Unable to retrieve local files');
}
$success = [];
$remote_XMLfiles = '/FTP/XML/Race_' . $value;
$remote_XMLpath = "ssh2.sftp://$sftp" . $remote_XMLfiles;
//ensure the remote directory exists
if (!@mkdir($remote_XMLpath, 0777, true) && !is_dir($remote_XMLpath)) {
throw new \RuntimeException(sprintf('Unable to create remote directory "%s"', $remote_XMLpath));
}
/**
* @var string $filepath
* @var \SplFileInfo $fileinfo
*/
foreach ($iterator as $filepath => $fileinfo) {
$filesize = $fileinfo->getSize();
printf("%s size %d\n", $filepath, $filesize);
try {
//open local file resource for binary-safe reading
$xmlObj = $fileinfo->openFile('rb');
//retrieve entire file contents
if (!$xmlData = $xmlObj->fread($filesize)) {
//do not permit empty files
printf("No data found for \"%s\"\n", $filepath);
continue;
}
} finally {
//shortcut to close the opened local file resource on success or fail
$xmlObj = null;
unset($xmlObj);
}
try {
$remote_filepath = $remote_XMLpath . '/' . $fileinfo->getBasename();
//open a remote file resource for binary-safe writing
//using current filename, overwriting the file if it already exists
if (!$xmlstream = fopen($remote_filepath, 'wb')) {
throw new \RuntimeException(sprintf('Unable to create remote file "%s"', $remote_filepath));
}
//write the local file data to the remote file stream
if (false !== ($bytes = fwrite_stream($xmlstream, $xmlData))) {
$success[] = [
'filepath' => $filepath,
'remote_filepath' => $remote_filepath,
'bytes' => $bytes,
];
}
} finally {
//shortcut to ensure the xmlstream is closed on success or failure
if (isset($xmlstream) && is_resource($xmlstream)) {
fclose($xmlstream);
}
}
}
//testing purposes only to show the resulting uploads
if (!empty($success)) {
var_export($success);
}
} finally {
//shortcut to disconnect the ssh2 session on success or failure
$sftp = null;
unset($sftp);
if (isset($ssh2) && is_resource($ssh2)) {
ssh2_disconnect($ssh2);
}
}
/*
* Taken from PHP Manual
* Writing to a network stream may end before the whole string is written.
* Return value of fwrite() may be checked
*/
function fwrite_stream($fp, $string)
{
for ($written = 0, $writtenMax = strlen($string); $written < $writtenMax; $written += $fwrite) {
$fwrite = fwrite($fp, substr($string, $written));
if (false === $fwrite) {
return $written;
}
}
return $written;
}
NOTE
All file operations will be performed as the ssh2_auth_password user, which becomes the owner/group of anything created. You must ensure the specified user has read and write access to the desired directories.
Use the appropriate file masks to ensure desired file/directory permissions
0777 (default) allows everyone to read, write, execute!
0750 is typically desired for directories
0640 is typically desired for individual files
use chmod($path, 0750) to change permissions on the remote file(s)
use chown($path, 'user') to change the owner on the remote file(s)
use chgrp($path, 'group') to change the group on the remote file(s)
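If your build of the pecl ssh2 extension is 0.12 or newer, ssh2_sftp_chmod can apply these masks through the SFTP subsystem. A minimal sketch, assuming the $sftp resource from the example above; the paths are hypothetical:
$remote_dir = '/FTP/XML/Race_1';
if (!ssh2_sftp_chmod($sftp, $remote_dir, 0750)) {
    throw new \RuntimeException('Unable to chmod remote directory');
}
if (!ssh2_sftp_chmod($sftp, $remote_dir . '/file1.xml', 0640)) {
    throw new \RuntimeException('Unable to chmod remote file');
}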
Result
C:\xml\Race_1\file1.xml size 9
C:\xml\Race_1\file2.xml size 11
array (
0 =>
array (
'filepath' => 'C:\\xml\\Race_1\\file1.xml',
'remote_filepath' => 'ssh2.sftp://Resource id #5/FTP/XML/Race_1/file1.xml',
'bytes' => 9,
),
1 =>
array (
'filepath' => 'C:\\xml\\Race_1\\file2.xml',
'remote_filepath' => 'ssh2.sftp://Resource id #5/FTP/XML/Race_1/file2.xml',
'bytes' => 11,
),
)
Related
I am trying to upload videos with file sizes anywhere between 1 MB and 2 GB from the Unity3D editor. I am doing this by breaking each video into chunks of a byte array of 10 MB each, uploading the chunks to the local WAMP server, and then merging them back into one single file. I am labeling each chunk with a serial number based on the queue, and all the chunks are uploaded one by one, with the next upload only starting after the previous one has completed successfully.
On the server-side, my PHP script looks like this:
define("CHUNK_FILE_EXTENSION", ".part");
if($_SERVER['REQUEST_METHOD'] == "POST")
{
$folder_name = isset($_POST['folder_name']) ? trim($_POST['folder_name']) : '';
$target_file_name = isset($_POST['target_file_name']) ? trim($_POST['target_file_name']) : '';
$chunkByteArray = isset($_FILES['chunk_byte_array']) ? $_FILES['chunk_byte_array'] : '';
$currentChunkNumber = isset($_POST['current_chunk_number']) ? trim($_POST['current_chunk_number']) : '';
$totalChunksNumber = isset($_POST['total_chunks_number']) ? trim($_POST['total_chunks_number']) : '';
$startMerge = isset($_POST['start_merge']) ? trim($_POST['start_merge']) : '';
$totalFileSize = isset($_POST['total_file_size']) ? trim($_POST['total_file_size']) : '';
$startRollback = isset($_POST['start_rollback']) ? trim($_POST['start_rollback']) : '';
function targetFileDirectoryPath($folder_name) {
//$tempDir = $_SERVER['DOCUMENT_ROOT']."\\media\\temp\\test\\%s";
$tempDir = $_SERVER['DOCUMENT_ROOT']."\\media\\temp\\test";
return sprintf($tempDir, $folder_name);
}
function chunksFileDirectoryPath($folder_name) {
return CombinePath(targetFileDirectoryPath($folder_name), "chunks");
}
function mergeChunkFiles($targetFileName, $chunkFileDir, $targetFileTempPath) {
$files = array_diff(scandir($chunkFileDir), array('.','..',$targetFileName));
sort($files);
$final = fopen($targetFileTempPath, 'w');
foreach ($files as $file) {
$filePath = CombinePath($chunkFileDir, $file);
if(($filePath != $targetFileTempPath) && (filesize($filePath) > 0)) {
$myfile = fopen($filePath, "r");
$buff = fread($myfile,filesize($filePath));
$write = fwrite($final, $buff);
fclose($myfile);
}
}
fclose($final);
}
if (!empty($currentChunkNumber) && !empty($totalChunksNumber) && !empty($chunkByteArray)) {
$chunkFileDir = chunksFileDirectoryPath($folder_name);
$chunkFilePath = CombinePath($chunkFileDir, $currentChunkNumber.CHUNK_FILE_EXTENSION);
$tempPath = $chunkByteArray['tmp_name'];
if (createDirectory($chunkFileDir)) {
if(move_uploaded_file($tempPath, $chunkFilePath)) {
$responseJson = array(
"status" => 1,
"message" => $currentChunkNumber." uploaded successfully"
);
}
else {
$responseJson = array(
"status" => 2,
"message" => $currentChunkNumber." not uploaded to ".$chunkFilePath." from ".$tempPath,
"uploaded_chunk_file" => $chunkByteArray,
"is_uploaded_file" => is_uploaded_file($tempPath)
);
}
}
else {
$responseJson = array(
"status" => 3,
"message" => "Chunk file user directory not created # ".$chunkFileDir
);
}
}
else if (!empty($startMerge) && !empty($totalFileSize)) {
$targetFileName = $target_file_name;
$chunkFileDir = chunksFileDirectoryPath($folder_name);
$targetFileTempDir = NormalizePath(targetFileDirectoryPath($folder_name));
$targetFileTempPath = CombinePath($targetFileTempDir, $targetFileName);
if(createDirectory($targetFileTempDir)) {
mergeChunkFiles($targetFileName, $chunkFileDir, $targetFileTempPath);
removeFolder($chunkFileDir);
if (filesize($targetFileTempPath) == $totalFileSize) {
$responseJson = array(
"status" => 1,
"message" => "Target file saved successfully!"
);
}
else {
$responseJson = array(
"status" => 2,
"message" => "Target file size doesn't match with actual file size. ".
"Please try again! Target File Size: ".filesize($targetFileTempPath).
" & Input File Size: ".$totalFileSize);
}
}
else {
$responseJson = array(
"status" => 3,
"message" => "Unable to create target directory for merging chunks # ".$targetFileTempDir
);
}
}
else if (!empty($startRollback)) {
$responseJson = array(
"status" => 4,
"message" => "Rollback successful!"
);
}
else {
$responseJson = array(
"status" => 0,
"message" => "Invalid request parameters!!"
);
}
}
else {
$responseJson = array(
"status" => 0,
"message" => "Invalid request method!!"
);
}
/* Output header */
header('Content-type: application/json;charset=utf-8');
echo json_encode($responseJson, JSON_UNESCAPED_UNICODE);
//Remove folder, its inner folders and files at the input path
function removeFolder($folder) {
if (empty($folder)) {
return;
}
$folder = NormalizePath($folder);
if(is_file($folder)) {
unlink($folder);
}
else if(is_dir($folder)) {
$files = scandir($folder);
foreach($files as $file) {
if (( $file != '.' ) && ( $file != '..' )) {
$file = CombinePath($folder, $file);
if(is_dir($file)) {
removeFolder($file);
}
else {
unlink($file);
}
}
}
rmdir($folder);
}
}
//Check if directory exists and return true, else create the new directory and return bool
function createDirectory($directoryPath) {
$directoryPath = NormalizePath($directoryPath);
if(!is_dir($directoryPath)) {
return mkdir($directoryPath, 0775, true);
}
else {
return true;
}
}
//Method to normalize a local file or folder path to use DIRECTORY_SEPARATOR
function NormalizePath($path)
{
//normalize
$path = str_replace('/', DIRECTORY_SEPARATOR, $path);
$path = str_replace('\\', DIRECTORY_SEPARATOR, $path);
//remove trailing dir separator
if(!empty($path) && substr($path, -1) == DIRECTORY_SEPARATOR) {
$path = substr($path, 0, -1);
}
return $path;
}
//Method to combine local file or folder paths using a DIRECTORY_SEPARATOR
function CombinePath($one, $other, $normalize = true)
{
//normalize
if($normalize) {
$one = NormalizePath($one);
$other = NormalizePath($other);
}
//remove leading/trailing dir separators
if(!empty($one)) {
$one = rtrim($one, DIRECTORY_SEPARATOR);
}
if(!empty($other)) {
$other = ltrim($other, DIRECTORY_SEPARATOR);
}
//return combined path
if(empty($one)) {
return $other;
} elseif(empty($other)) {
return $one;
} else {
return $one.DIRECTORY_SEPARATOR.$other;
}
}
?>
It works for videos less than 100 MB, but somehow videos greater than 100 MB do not play properly. I am testing on a local WampServer, and upload_max_filesize and post_max_size are set to 20M in php.ini.
I have tried varying the chunk size down to 5 MB, but the issue persists. The video gets uploaded successfully, and I can see that its filesize exactly matches the one on the client side, but the file still somehow ends up corrupted when uploading a bigger video.
Just to reiterate: it works for videos less than 100 MB. That is, the videos are broken into chunks of 10 MB of raw bytes, uploaded to localhost, and merged back into the full file, and the video plays as good as the original one.
What am I doing wrong here? Please help.
Edit:
Not sure if it might help, but I checked the uploaded video (106 MB) for errors using ffmpeg. Below is the command I executed:
ffmpeg -v error -i {video_file_path} -f null - 2>{error_log_file_path}
Here is the error log file:
https://drive.google.com/file/d/1YQ0DNtNlhl4cLUJaw20k91Vv6tfjnqsX/view?usp=sharing
On the server side, you read the chunks completely into memory before you write out your target file. This approach is limited by PHP's maximum memory usage. It is configured via the memory_limit setting, which defaults to 128 MB. You will need some memory for other things besides the actual final file as well, so the observed limit of ~100 MB looks like a result of this limitation. (See the link; it also documents how to increase the limit.)
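As a quick sanity check (a hedged sketch, not part of the original script), you can log the configured ceiling and the actual peak usage while merging:
echo ini_get('memory_limit'), "\n"; // e.g. "128M"
echo memory_get_peak_usage(true), " bytes reserved at peak\n";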
But increasing the memory limit is not a good solution in my opinion, because your server will not have endless memory. I recommend one of the following solutions:
use rsync - it is widely used, available for many operating systems, often even preinstalled, and you would not have to fiddle around with splitting up and rejoining the big files. I'm not an expert in its usage and there are enough tutorials available, so I will not explain the correct usage in detail. It is also super fast.
if you cannot use rsync for whatever reason, you should write the chunks out to disk on the server as soon as you receive them. You will have to ensure the correct order on upload (which makes parallel uploads of chunks of the same file not really feasible), and you have to use the fopen mode "a" (for append) instead of "w" - see the sketch after this list.
if you upload the part-files individually and store them as part-files on disk on the server, on Linux you could just use the cat command (an abbreviation of concatenate), or on Windows the copy command with the + parameter, to join the part files into one big one.
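A minimal sketch of the append-as-you-go option, assuming chunks arrive strictly in order; the field names are borrowed from your script, and the target path is hypothetical:
$target = '/var/www/media/temp/test/' . basename(trim($_POST['target_file_name']));
$out = fopen($target, 'ab'); // "a" = append, "b" = binary-safe
if ($out === false) {
    die('could not open target file');
}
$chunk = file_get_contents($_FILES['chunk_byte_array']['tmp_name']);
if (fwrite($out, $chunk) !== strlen($chunk)) {
    die('short write - disk full?');
}
fclose($out);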
I don't know exactly what is wrong with your script, but I can theorize:
You're using "w" and "r" fopen modes; they're horrible in theory, and if you're running on Microsoft Windows, they're horrible in practice as well. Use "wb" and "rb" instead. Perhaps your files are getting corrupted because of your non-binary fopen modes? (But that doesn't explain why it works on smaller video files.)
You lack error checking on fwrite: if fwrite does not return strlen($input), you're ignoring a potential error. Maybe try something like the fwrite_all function from https://www.php.net/manual/en/function.fwrite.php#125731
You also lack error checking on fread: at no point after $buff = fread($myfile,filesize($filePath)); do you make sure that strlen($buff) === filesize($filePath). A checked-read sketch follows this list.
I had several problems uploading 10 MB on ubuntu+php-fpm+nginx: nginx's default client_max_body_size was 1M, php-fpm's default php.ini upload_max_filesize was 8M, and post_max_size was 2M (or maybe it was the other way around; either way...)
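To illustrate the fread point, a hedged checked-read sketch, with variable names borrowed from your mergeChunkFiles:
$size = filesize($filePath);
$myfile = fopen($filePath, 'rb'); // binary-safe read
$buff = fread($myfile, $size);
fclose($myfile);
if ($buff === false || strlen($buff) !== $size) {
    die("short read on $filePath");
}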
But your script is kind of hard to debug/read nonetheless; how about a simpler, KISS implementation?
my attempt:
Warning: there is no authentication in this code, and a hacker could easily pwn your webserver with it by uploading evil.php as a "movie".
<?php
declare(strict_types = 1);
function jsresponse($response)
{
header("Content-Type: application/json");
echo json_encode($response, JSON_PRETTY_PRINT | JSON_UNESCAPED_SLASHES | JSON_UNESCAPED_UNICODE | JSON_THROW_ON_ERROR);
}
if ($_SERVER['REQUEST_METHOD'] !== "POST") {
http_response_code(405);
jsresponse(["error" => "invalid request method"]);
die();
}
$folder_name = isset($_POST['folder_name']) ? trim($_POST['folder_name']) : '';
if (empty($folder_name)) {
$folder_name = getcwd();
} elseif (!is_dir($folder_name) && !mkdir($folder_name, 0755, true)) {
http_response_code(400);
jsresponse(["error" => "could not create folder name"]);
die();
}
if (!chdir($folder_name)) {
http_response_code(400);
jsresponse(["error" => "could not access folder"]);
die();
}
$target_file_name = isset($_POST['target_file_name']) ? trim($_POST['target_file_name']) : '';
if (empty($target_file_name)) {
http_response_code(400);
jsresponse(["error" => "target file name is empty"]);
die();
}
if (!touch($target_file_name)) {
http_response_code(400);
jsresponse(["error" => "could not touch target file"]);
die();
}
if (empty($_FILES['chunk_byte_array']['tmp_name'])) {
http_response_code(400);
jsresponse(["error" => "chunk byte array is missing"]);
die();
}
// todo: ram-optimize with stream_copy_to_stream(), this is a very ram-hungry way of appending
$bytes_to_append = file_get_contents($_FILES['chunk_byte_array']['tmp_name']);
if (strlen($bytes_to_append) !== $_FILES['chunk_byte_array']['size']) {
// should never happen
http_response_code(500);
jsresponse(["error" => "could not read chunk byte array file.."]);
die();
}
$bytes_appended = file_put_contents($target_file_name, $bytes_to_append, FILE_APPEND | LOCK_EX);
if (strlen($bytes_to_append) !== $bytes_appended) {
http_response_code(500);
jsresponse(["error" => "could not append all bytes!",
"data" => ["bytes_to_append" => strlen($bytes_to_append), "bytes_actually_appended" => $bytes_appended,
"error_get_last" => error_get_last()]]);
die();
}
jsresponse("success!");
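For the todo above, a hedged sketch of the stream_copy_to_stream() variant: it appends without loading the whole chunk into RAM (a possible drop-in for the file_get_contents/file_put_contents pair, minus the LOCK_EX):
$in = fopen($_FILES['chunk_byte_array']['tmp_name'], 'rb');
$out = fopen($target_file_name, 'ab'); // append, binary-safe
if ($in === false || $out === false) {
    http_response_code(500);
    jsresponse(["error" => "could not open streams"]);
    die();
}
$bytes_appended = stream_copy_to_stream($in, $out); // copies in small internal buffers
fclose($in);
fclose($out);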
testing it:
$ pwd
/temp
$ b3sum John.Wick3.mp4
2c6445acd31ac3153df52917ca4ab003624377cf50b6e78d0b3c8065d7d2d9f6 John.Wick3.mp4
$ du -h John.Wick3.mp4
2.1G John.Wick3.mp4
$ cat John.Wick3.mp4 | php -r '$i=0;while(!feof(STDIN) && false!==($str=stream_get_contents(STDIN,10*1024*1024))){++$i;file_put_contents("John.Wick3.mp4.part".$i,$str);}'
$ ls | sort -V | head
John.Wick3.mp4
John.Wick3.mp4.part1
John.Wick3.mp4.part2
John.Wick3.mp4.part3
John.Wick3.mp4.part4
John.Wick3.mp4.part5
John.Wick3.mp4.part6
John.Wick3.mp4.part7
John.Wick3.mp4.part8
John.Wick3.mp4.part9
$ cat $(ls | grep -i part | sort -V) | b3sum
2c6445acd31ac3153df52917ca4ab003624377cf50b6e78d0b3c8065d7d2d9f6 -
$ ls | grep -i part | sort -V | xargs --max-args=1 --max-procs=1 '-I{}' curl \
-F folder_name="testfolder" \
-F target_file_name="John.Wick3.mp4" \
-F chunk_byte_array=@"{}" \
http://localhost:81/upload.php
"success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!""success!"
$ du -h /srv/http/default/www/testfolder/John.Wick3.mp4
2.1G /srv/http/default/www/testfolder/John.Wick3.mp4
$ b3sum /srv/http/default/www/testfolder/John.Wick3.mp4
2c6445acd31ac3153df52917ca4ab003624377cf50b6e78d0b3c8065d7d2d9f6 /srv/http/default/www/testfolder/John.Wick3.mp4
$ b3sum John.Wick3.mp4
2c6445acd31ac3153df52917ca4ab003624377cf50b6e78d0b3c8065d7d2d9f6 John.Wick3.mp4
Success! Uploaded a 2.1 GB file with no corruption, as proven by the b3sum being equivalent :) (btw I'm sure there's a better way to split the movie, I just couldn't think of one)
I have created a JavaFX client to send large files in chunks of the max post size (I am using 2 MB) and a PHP receiver script to assemble the chunks into the original file. I am releasing the code under the Apache license here: http://code.google.com/p/gigaupload/ Feel free to use/modify/distribute.
I have a number of different hosting accounts set up for clients and need to calculate the amount of storage space being used on each account, which would update regularly.
I have a database set up to record each client's storage usage.
I attempted this first using a PHP file on each account, run by a cron job. If run manually by myself, it would output the correct filesize and update the correct size in the database, but when run from the cron job, it would output 0.
I then attempted to run this file via a cron job from the main account, but figured this wouldn't actually work, as my hosting would block files from another server and I would end up with the same result as before.
I am now playing around with FTP access to each account from a cron job on the main account, which looks something like below. The only problem is I don't know how to calculate directory size rather than single file sizes using FTP access, and I don't know how to recurse this way. Hoping somebody might be able to help here before I end up going around in circles.
I will also add the previous first attempt too.
$ftp_conn = ftp_connect($ftp_host, 21, 420) or die("Could not connect to server");
$ftp_login = ftp_login($ftp_conn, $ftp_username, 'mypassword');
$total_size = 0;
$contents = ftp_nlist($ftp_conn, ".");
// output $contents
foreach($contents as $folder){
while($search == true){
if($folder == '..' || $folder == '.'){
} else {
$file = $folder;
$res = ftp_size($ftp_conn, $file);
if ($res != -1) {
$total_size = $total_size + $res;
} else {
$total_size = $total_size;
}
}
}
}
ftp_close($ftp_conn);
This doesn't work, as it doesn't calculate folder sizes, and I don't know how to recurse using this method.
This second script did work, but only if run manually; it returned 0 if run by the cron job.
class Directory_Calculator {
function calculate_whole_directory($directory)
{
if ($handle = opendir($directory))
{
$size = 0;
$folders = 0;
$files = 0;
while (false !== ($file = readdir($handle)))
{
if ($file != "." && $file != "..")
{
if(is_dir($directory.$file))
{
$array = $this->calculate_whole_directory($directory.$file.'/');
$size += $array['size'];
$files += $array['files'];
$folders += $array['folders'];
}
else
{
$size += filesize($directory.$file);
$files++;
}
}
}
closedir($handle);
}
$folders++;
return array('size' => $size, 'files' => $files, 'folders' => $folders);
}
}
/* Path to Directory - IMPORTANT: with '/' at the end */
$directory = '../public_html/';
// return an array with: size, total files & folders
$directory_size = new Directory_Calculator;
$array = $directory_size->calculate_whole_directory($directory);
$size_of_site = $array['size'];
echo $size_of_site;
Please bear in mind that I am currently testing and none of the MySQLi or PHP scripts are secure yet.
If your server supports the MLSD command and you have PHP 7.2 or newer, you can use the ftp_mlsd function:
function calculate_whole_directory($ftp_conn, $directory)
{
$files = ftp_mlsd($ftp_conn, $directory) or die("Cannot list $directory");
$result = 0;
foreach ($files as $file)
{
if (($file["type"] == "cdir") || ($file["type"] == "pdir"))
{
$size = 0;
}
else if ($file["type"] == "dir")
{
$size = calculate_whole_directory($ftp_conn, $directory."/".$file["name"]);
}
else
{
$size = intval($file["size"]);
}
$result += $size;
}
return $result;
}
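Hypothetical usage, reusing the connection variables from the question's first script:
$ftp_conn = ftp_connect($ftp_host, 21, 420) or die("Could not connect to server");
ftp_login($ftp_conn, $ftp_username, 'mypassword');
$total_size = calculate_whole_directory($ftp_conn, ".");
echo "Total size: $total_size bytes\n";
ftp_close($ftp_conn);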
If you do not have PHP 7.2, you can try to implement the MLSD command on your own. For a start, see this user comment on the ftp_rawlist manual page:
https://www.php.net/manual/en/function.ftp-rawlist.php#101071
If you cannot use MLSD, you will particularly have problems telling if an entry is a file or folder. While you can use the ftp_size trick, as you do, calling ftp_size for each entry can take ages.
But if you need to work against one specific FTP server only, you can use ftp_rawlist to retrieve a file listing in a platform-specific format and parse that.
The following code assumes a common *nix format.
function calculate_whole_directory($ftp_conn, $directory)
{
$lines = ftp_rawlist($ftp_conn, $directory) or die("Cannot list $directory");
$result = 0;
foreach ($lines as $line)
{
$tokens = preg_split("/\s+/", $line, 9);
$name = $tokens[8];
if ($tokens[0][0] === 'd')
{
$size = calculate_whole_directory($ftp_conn, "$directory/$name");
}
else
{
$size = intval($tokens[4]);
}
$result += $size;
}
return $result;
}
Based on PHP FTP recursive directory listing.
Regarding cron: I'd guess that the cron job does not start your script with the correct working directory, so you calculate the size of a non-existent directory.
Use an absolute path here:
$directory = '../public_html/';
Though you'd better add some error checking so that you can see for yourself what goes wrong.
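A minimal sketch, assuming the script lives one level below public_html as the relative path suggests:
$directory = __DIR__ . '/../public_html/'; // anchored to the script, not to the cron working directory
if (!is_dir($directory)) {
    error_log("Directory not found: $directory");
    exit(1);
}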
I have an issue with uploading multiple files to disk. Here is my code.
I have a request with 2 pictures that gets sent to an upload function. The 2 pictures are in a var called $multiUpload.
$folderPath = '/var/www/';
if (is_array($multiUpload)){
$file = array();
$filename = array();
foreach($multiUpload as $key=>$val){
// get the file extension
$file[] = explode('.',$val);
// create custom file name
$filename[] = time().'.'.$file[$key][1];
//send to the upload function
$this->uploadToDisk($folderPath, $filename[$key]);
// sleep 1 sec so that the pic names will be different
sleep(1);
}
return $filename;
}
public function uploadToDisk($folderPath, $filename)
{
$adapter = new Zend_File_Transfer_Adapter_Http();
$adapter->setDestination($folderPath);
$adapter->addFilter( 'Rename',array(
'target' => $folderPath."/".$filename,
'overwrite' => true
) );
if ($adapter->receive()) {
$message = "success";
} else {
$message = "fail";
}
return $message;
}
this will return
Array
(
[0] => Array
(
[0] => 1332977938.jpg
[1] => 1332977939.jpg
)
)
but only array[0][0], i.e. 1332977938.jpg, will actually get saved to the disk.
Why don't they both get saved? Weird.
Any ideas?
I suspect the second call to uploadToDisk is returning fail because you can only call Zend_File_Transfer_Adapter_Http::receive() once for each file. Since you are not specifying a file when calling receive, it is receiving all of the files the first time you call uploadToDisk and subsequently is failing with a File Upload Attack error.
Here is some code you can try. This tries to receive each file individually and then save them one at a time with each call to uploadToDisk.
A few notes about the code:
The first parameter to uploadToDisk ($val) may need to be changed, as I am not sure what the original values are. It should correspond to one of the element names used for the file upload (see Zend_File_Transfer_Adapter_Http::getFileInfo() for a list of the files).
I changed the method for generating a unique filename so you don't have to sleep(1)
Zend_File_Transfer_Adapter_Abstract::setDestination() is deprecated and will go away in the future. Instead, just use the Rename filter. When using Rename, setDestination() has no effect.
And here it is...
<?php
$folderPath = '/var/www/';
if (is_array($multiUpload)){
$filenames = array();
foreach($multiUpload as $key => $val){
// get the file extension
$ext = explode('.', $val);
$ext = $ext[sizeof($ext) - 1];
// create custom file name
do {
$filename = uniqid(time()) . '.' . $ext;
$diskPath = $folderPath . $filename;
} while (file_exists($diskPath));
$filenames[$key] = $filename;
//send to the upload function
// $val is the file to receive, $diskPath is where it will be moved to
$this->uploadToDisk($val, $diskPath);
}
return $filenames;
}
public function uploadToDisk($file, $filename)
{
// create the transfer adapter
// note that setDestination is deprecated, instead use the Rename filter
$adapter = new Zend_File_Transfer_Adapter_Http();
$adapter->addFilter('Rename', array(
'target' => $filename,
'overwrite' => true
));
// try to receive one file
if ($adapter->receive($file)) {
$message = "success";
} else {
$message = "fail";
}
return $message;
}
I connect via ftp_connect and ftp_login to an FTP server. Once connected, I go to a directory with ftp_chdir. In that directory, I have to delete with ftp_delete all files that have the word "ub" in their filenames. So I have to somehow read every filename and delete only those files that have "ub" in their filenames. I have no idea how to do this. Please help. Thanks.
If you use an interactive ftp command-line tool, you can issue the command
mdel *ub*
but the low-level protocol doesn't support wildcard operations; this is something that has to be implemented in the client by fetching all the names, comparing against the pattern, and deleting one by one, as you said. You might want to consider scripting this using command-line ftp, rather than using PHP?
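If you do stay in PHP, a minimal sketch of that fetch/compare/delete loop, assuming $ftp_conn is an authenticated connection already changed into the right directory:
$names = ftp_nlist($ftp_conn, '.');
if ($names === false) {
    die('Could not list directory');
}
foreach ($names as $name) {
    $base = basename($name);
    if ($base === '.' || $base === '..') {
        continue;
    }
    if (fnmatch('*ub*', $base)) { // same wildcard semantics as mdel *ub*
        ftp_delete($ftp_conn, $name);
    }
}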
@Pekka's comment has one possible solution. Another is using glob.
foreach (glob("*ub*") as $file) {
ftp_delete($ftp_conn, $file); // $ftp_conn: your FTP connection resource
}
regards
Since there is no real answer to this question, I will answer with my function that allows deleting multiple files over FTP:
/**
* Delete multiple files on FTP server. Allowed wildcards are * and ?.
* @param resource $ftp_connection
* @param string $delete_pattern
* @param bool $case_sensitive Case-sensitive by default
* @return bool|int Number of deleted files, FALSE on failure
*/
function ftp_mdelete($ftp_connection, $delete_pattern = "", $case_sensitive = TRUE){
if(!is_resource($ftp_connection) || strtolower(get_resource_type($ftp_connection)) !== "ftp buffer"){
trigger_error("First parameter for ftp_mdelete should be a valid FTP connection", E_USER_WARNING);
return FALSE;
}elseif(!is_string($delete_pattern) || !strlen($delete_pattern)){
trigger_error("Second parameter for ftp_mdelete should be a non-empty string", E_USER_WARNING);
return FALSE;
}
$raw_list = ftp_rawlist($ftp_connection, '.');
if(!is_array($raw_list)){
return FALSE;
}
$matched_count = 0;
$deleted_count = 0;
if($raw_list){
$delete_pattern = preg_quote($delete_pattern);
$delete_pattern = '/^'.str_replace(array('\*', '\?'), array('.*', '.'), $delete_pattern).'/S'.($case_sensitive?'':'i');
foreach($raw_list as $entry){
if($entry[0] === '-'){
$entry = preg_split("/[\s]+/S", $entry, 9);
$entry = $entry[8];
if(preg_match($delete_pattern, $entry)){
++$matched_count;
if(ftp_delete($ftp_connection, $entry)){
++$deleted_count;
}
}
}
}
unset($raw_list, $entry);
}
if($matched_count != $deleted_count && $deleted_count){
trigger_error("Only {$deleted_count} out of {$matched_count} files deleted.", E_USER_NOTICE);
}elseif($matched_count && !$deleted_count){
trigger_error("No files were deleted ({$matched_count} files matched given pattern).", E_USER_WARNING);
return FALSE;
}
return $deleted_count;
}
Usage example:
$ftp = ftp_connect('127.0.0.1');
ftp_login($ftp, 'user', 'pass');
ftp_chdir($ftp, 'dir');
$deleted = ftp_mdelete($ftp, '*ub*');
ftp_close($ftp);
echo "Number of deleted files: ".intval($deleted);
I'm writing a photo gallery script in PHP and have a single directory where the user will store their pictures. I'm attempting to set up page caching and have the cache refresh only if the contents of the directory has changed. I thought I could do this by caching the last modified time of the directory using the filemtime() function and compare it to the current modified time of the directory. However, as I've come to realize, the directory modified time does not change as files are added or removed from that directory (at least on Windows, not sure about Linux machines yet).
So my question is: what is the simplest way to check if the contents of a directory have been modified?
As already mentioned by others, a better way to solve this would be to trigger a function when particular events happen that change the folder.
However, if your server is a unix, you can use inotifywait to watch the directory, and then invoke a PHP script.
Here's a simple example:
#!/bin/sh
inotifywait --recursive --monitor --quiet --event modify,create,delete,move --format '%f' /path/to/directory/to/watch |
while read FILE ; do
php /path/to/trigger.php $FILE
done
See also: http://linux.die.net/man/1/inotifywait
What about touching the directory after a user has submitted his image?
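In code, that suggestion is just the following (a hedged sketch; the gallery path and $cachedMtime are hypothetical):
touch('/path/to/gallery'); // bump the directory's mtime after a successful upload
// the filemtime() comparison from the question then works as intended
if (filemtime('/path/to/gallery') > $cachedMtime) {
    // rebuild the cached page
}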
The changelog says this requires PHP 5.3 on Windows to work, but I think it should work in all other environments:
with inotifywait inside php
$watchedDir = 'watch';
$in = popen("inotifywait --monitor --quiet --format '%e %f' --event create,moved_to '$watchedDir'", 'r');
if ($in === false)
throw new Exception ('fail start notify');
while (($line = fgets($in)) !== false)
{
list($event, $file) = explode(' ', rtrim($line, PHP_EOL), 2);
echo "$event $file\n";
}
Uh. I'd simply store the md5 of a directory listing. If the contents change, the md5(directory-listing) will change. You might get the very occasional md5 clash, but I think that chance is tiny enough.
Alternatively, you could store a little file in that directory that contains the "last modified" date. But I'd go with md5.
PS. On second thought, seeing as how you're looking at performance (caching), requesting and hashing the directory listing might not be entirely optimal.
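Still, a minimal sketch of the md5 idea (note that scandir() only sees names, so mix in mtimes if in-place edits should count too):
$hash = md5(serialize(scandir($dir))); // names only
// stricter variant: also reacts to in-place edits
$state = array();
foreach (scandir($dir) as $f) {
    $state[$f] = @filemtime($dir . '/' . $f);
}
$hash = md5(serialize($state));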
IMO edubem's answer is the way to go; however, you can do something like this:
if (sha1(serialize(Map('/path/to/directory/', true))) != /* previous stored hash */)
{
// directory contents has changed
}
Or a more weak / faster version:
if (Size('/path/to/directory/', true) != /* previous stored size */)
{
// directory contents has changed
}
Here are the functions used:
function Map($path, $recursive = false)
{
$result = array();
if (is_dir($path) === true)
{
$path = Path($path);
$files = array_diff(scandir($path), array('.', '..'));
foreach ($files as $file)
{
if (is_dir($path . $file) === true)
{
$result[$file] = ($recursive === true) ? Map($path . $file, $recursive) : Size($path . $file, true);
}
else if (is_file($path . $file) === true)
{
$result[$file] = Size($path . $file);
}
}
}
else if (is_file($path) === true)
{
$result[basename($path)] = Size($path);
}
return $result;
}
function Size($path, $recursive = true)
{
$result = 0;
if (is_dir($path) === true)
{
$path = Path($path);
$files = array_diff(scandir($path), array('.', '..'));
foreach ($files as $file)
{
if (is_dir($path . $file) === true)
{
$result += ($recursive === true) ? Size($path . $file, $recursive) : 0;
}
else if (is_file($path . $file) === true)
{
$result += sprintf('%u', filesize($path . $file));
}
}
}
else if (is_file($path) === true)
{
$result += sprintf('%u', filesize($path));
}
return $result;
}
function Path($path)
{
if (file_exists($path) === true)
{
$path = rtrim(str_replace('\\', '/', realpath($path)), '/');
if (is_dir($path) === true)
{
$path .= '/';
}
return $path;
}
return false;
}
Here's what you may try. Store all pictures in a single directory (or in /username subdirectories inside it, to speed things up and to lessen the stress on the FS) and set up Apache (or whatever you're using) to serve them as static content with "expires-on" set to 100 years in the future. File names should contain some unique prefix or suffix (timestamp, SHA1 hash of file content, etc.), so whenever a user changes a file its name changes too, and Apache will serve the new version, which will get cached along the way.
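A hedged sketch of that naming scheme; $tmpPath, $originalName and $userName are hypothetical:
$finalName = sha1_file($tmpPath) . '_' . basename($originalName); // content hash as the unique prefix
move_uploaded_file($tmpPath, "/pictures/$userName/" . $finalName);
// a changed file gets a new name and URL, so the old far-future cache entry is simply bypassed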
You're thinking the wrong way.
You should execute your directory indexer script as soon as someone's uploaded a new file and it's moved to the target location.
Try deleting the cached version when a user uploads a file to his directory.
When someone tries to view the gallery, look if there's a cached version first. If there's a cached version, load it, otherwise, generate the page, cache it, done.
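A skeleton of that flow (hedged; the cache path and render_gallery() are hypothetical):
$cacheFile = "/cache/gallery_{$userId}.html";
if (is_file($cacheFile)) {
    readfile($cacheFile); // cached version exists: load it
} else {
    $html = render_gallery($userId); // generate the page
    file_put_contents($cacheFile, $html); // cache it
    echo $html; // done
}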
I was looking for something similar and I just found this:
http://www.franzone.com/2008/06/05/php-script-to-monitor-ftp-directory-changes/
For me looks like a great solution since I'll have a lot of control (I'll be doing an AJAX call to see if anything changed).
Hope that this helps.
Here is a code sample that will return 0 if the directory was changed.
I use it in backups.
The changed status is determined by presence of files and their filesizes.
You could easily change this, to compare file contents by replacing
$longString .= filesize($file);
with
$longString .= crc32(file_get_contents($file));
but it will affect execution speed.
#!/usr/bin/php
<?php
$dirName = $argv[1];
$basePath = '/var/www/vhosts/majestichorseporn.com/web/';
$dataFile = './backup_dir_if_changed.dat';
# startup checks
if (!is_writable($dataFile))
die($dataFile . ' is not writable!');
if (!is_dir($basePath . $dirName))
die($basePath . $dirName . ' is not a directory');
$dataFileContent = file_get_contents($dataFile);
$data = @unserialize($dataFileContent);
if ($data === false)
$data = array();
# find all files and concatenate their sizes to calculate crc32
$files = glob($basePath . $dirName . '/*', GLOB_BRACE);
$longString = '';
foreach ($files as $file) {
$longString .= filesize($file);
}
$longStringHash = crc32($longString);
# do changed check
if (isset ($data[$dirName]) && $data[$dirName] == $longStringHash)
die('Directory did not change.');
# save hash to the data file
$data[$dirName] = $longStringHash;
file_put_contents($dataFile, serialize($data));
die('0');