I have a problem with the php_zip.dll's ZipArchive class. I'm using it through the ZipArchiveImproved wrapper class suggested on php.net to avoid the max file-handle issue.
The problem is really simple: the first 700 files (jpg images) are added properly, and the rest fail. The addFile() method returns false.
The PHP version is 5.2.6.
The weird thing is that this actually used to work.
What could be the problem? Can you give me any clues?
Thank you very much in advance!
Edit: sorry, it's not true that I'm getting no error message (display_errors was switched off in php.ini; I didn't notice it before). From the 701st file on, I'm getting the following error message:
Warning: ZipArchive::addFile() [ziparchive.addfile]: Invalid or unitialized Zip object in /.../includes/ZipArchiveImproved.class.php on line 104
Looks like the close() call returns false, but issues no error. Any ideas?
Edit 2: the relevant source:
include_once DIR_INCLUDES . 'ZipArchiveImproved.class.php';
ini_set('max_execution_time', 0);
$filePath = $_SESSION['fqm_archivePath'];
$zip = new ZipArchiveImproved();
if (! $zip->open($filePath, ZipArchive::CREATE))
{
    echo '<div class="error">Error: the target file cannot be created at "' . $filePath . '".</div>';
    return;
}
echo('Starting (' . count($_POST['files']) . ' files)...<br>');
$addedDirs = array();
foreach ($_POST['files'] as $i => $f)
{
    $d = getUserNameByPicPath($f);
    if (! isset($addedDirs[$d]))
    {
        $addedDirs[$d] = true;
        $zip->addEmptyDir($d);
        echo('Added dir "' . $d . '".<br>');
    }
    $addName = $d . '/' . basename($f);
    $r = $zip->addFile($f, $addName);
    if (! $r)
    {
        echo('<font color="Red">[' . ($i + 1) . '] Failed to add file "' . $f . '" as "' . $addName . '".</font><br>');
    }
}
$a = $zip->addFromString('test.txt', 'Moooo');
if ($a)
{
    echo 'Added string successfully.<br>';
}
else
{
    echo 'Failed to add string.<br>';
}
$zip->close();
It's probably because of the maximum number of open files allowed by your OS (see http://www.cyberciti.biz/faq/linux-increase-the-maximum-number-of-open-files/ for more detailed info; the limit can be system-wide or per-user).
ZipArchive keeps every added file open until ZipArchive::close() is called.
The solution is to close and reopen the archive every X files (256 or 512 should be a safe value).
The problem is described here: http://www.php.net/manual/en/function.ziparchive-open.php#88765
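A minimal sketch of that workaround using the plain ZipArchive class (the batch size of 256 and the $files array are illustrative assumptions):

$zip = new ZipArchive();
$zip->open($filePath, ZipArchive::CREATE);
foreach ($files as $i => $f) {
    $zip->addFile($f, basename($f));
    // Release the accumulated file handles before the OS limit is reached.
    if (($i + 1) % 256 === 0) {
        $zip->close();
        $zip->open($filePath); // reopen the existing archive and keep adding
    }
}
$zip->close();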
Have you tried to specify both flags?
I solved this problem by increasing the ulimit: ulimit -n 8192.
Related
I have a function that writes ~120KB-150KB of HTML and metadata to each of ~8000 .md files with fixed names every few minutes:
a-agilent-technologies-healthcare-nyse-us-39d4
aa-alcoa-basic-materials-nyse-us-159a
aaau-perth-mint-physical-gold--nyse-us-8ed9
aaba-altaba-financial-services-nasdaq-us-26f5
aac-healthcare-nyse-us-e92a
aadr-advisorshares-dorsey-wright-adr--nyse-us-d842
aal-airlines-industrials-nasdaq-us-29eb
If the file does not exist, it generates/writes quite fast.
If, however, the file exists, the same write is much slower, since the existing file already carries ~150KB of data.
How do I solve this problem?
Do I generate a new file with a new name in the same directory and unlink the older file in the for loop?
Or do I generate a new folder, write all the files, and then unlink the previous directory? The problem with this method is that sometimes 90% of the files are rewritten while some remain the same.
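(For reference, the usual pattern for the first option is to write to a temporary name and rename() it over the original; a rename within one directory is atomic on POSIX filesystems, so readers never see a half-written file. A minimal sketch, borrowing $new_filename and $md_file_content from the code below:)

$tmp = $new_filename . '.tmp';
file_put_contents($tmp, $md_file_content);
rename($tmp, $new_filename); // atomically replaces the old file; no separate unlink() needed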
Code
This function is being called in a for loop, which you can see it in this link
public static function writeFinalStringOnDatabase($equity_symbol, $md_file_content, $no_extension_filename)
{
    /**
     * @var is the MD file content with meta and entire HTML
     */
    $md_file_content = $md_file_content . ConfigConstants::NEW_LINE . ConfigConstants::NEW_LINE;
    $dir = __DIR__ . ConfigConstants::DIR_FRONT_SYMBOLS_MD_FILES; // symbols front directory
    $new_filename = EQ::generateFileNameFromLeadingURL($no_extension_filename, $dir);
    if (file_exists($new_filename)) {
        if (is_writable($new_filename)) {
            file_put_contents($new_filename, $md_file_content);
            if (EQ::isLocalServer()) {
                echo $equity_symbol . " 💚 " . ConfigConstants::NEW_LINE;
            }
        } else {
            if (EQ::isLocalServer()) {
                echo $equity_symbol . " symbol MD file is not writable in " . __METHOD__ . " 💔 Maybe, check permissions!" . ConfigConstants::NEW_LINE;
            }
        }
    } else {
        $fh = fopen($new_filename, 'wb');
        fwrite($fh, $md_file_content);
        fclose($fh);
        if (EQ::isLocalServer()) {
            echo $equity_symbol . " front md file does not exist in " . __METHOD__ . " It's writing on the database now 💛" . ConfigConstants::NEW_LINE;
        }
    }
}
I haven't programmed in PHP for years, but this question has drawn my interest today. :D
Suggestion
How do I solve this problem?
Do I generate a new file with a new name in the same directory, and unlink the older file in the for loop?
Simply use the 3 amigos fopen(), fwrite() & fclose() again; since the file is opened with mode 'wb', it is truncated on open, so the write replaces the entire content of an existing file.
if (file_exists($new_filename)) {
    if (is_writable($new_filename)) {
        $fh = fopen($new_filename, 'wb');
        fwrite($fh, $md_file_content);
        fclose($fh);
        if (EQ::isLocalServer()) {
            echo $equity_symbol . " 💚 " . ConfigConstants::NEW_LINE;
        }
    } else {
        if (EQ::isLocalServer()) {
            echo $equity_symbol . " symbol MD file is not writable in " . __METHOD__ . " 💔 Maybe, check permissions!" . ConfigConstants::NEW_LINE;
        }
    }
} else {
    $fh = fopen($new_filename, 'wb');
    fwrite($fh, $md_file_content);
    fclose($fh);
    if (EQ::isLocalServer()) {
        echo $equity_symbol . " front md file does not exist in " . __METHOD__ . " It's writing on the database now 💛" . ConfigConstants::NEW_LINE;
    }
}
For the sake of the DRY principle:
// It's smart to put the logging and similar tasks in a separate function,
// after you end up writing the same thing over and over again.
public static function log($content)
{
    if (EQ::isLocalServer()) {
        echo $content;
    }
}

public static function writeFinalStringOnDatabase($equity_symbol, $md_file_content, $no_extension_filename)
{
    $md_file_content = $md_file_content . ConfigConstants::NEW_LINE . ConfigConstants::NEW_LINE;
    $dir = __DIR__ . ConfigConstants::DIR_FRONT_SYMBOLS_MD_FILES; // symbols front directory
    $new_filename = EQ::generateFileNameFromLeadingURL($no_extension_filename, $dir);
    $file_already_exists = file_exists($new_filename);
    if ($file_already_exists && !is_writable($new_filename)) {
        EQ::log($equity_symbol . " symbol MD file is not writable in " . __METHOD__ . " 💔 Maybe, check permissions!" . ConfigConstants::NEW_LINE);
    } else {
        $fh = fopen($new_filename, 'wb'); // you should also check whether fopen succeeded
        fwrite($fh, $md_file_content);    // you should also check whether fwrite succeeded
        if ($file_already_exists) {
            EQ::log($equity_symbol . " 💚 " . ConfigConstants::NEW_LINE);
        } else {
            EQ::log($equity_symbol . " front md file does not exist in " . __METHOD__ . " It's writing on the database now 💛" . ConfigConstants::NEW_LINE);
        }
        fclose($fh);
    }
}
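A hypothetical call, just to illustrate the signature (all three argument values here are made up):

EQ::writeFinalStringOnDatabase('aapl', $html_with_meta, 'aapl-apple-technology-nasdaq-us-1a2b');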
Possible cause
tl;dr: Too much overhead due to the Zend string API being used.
The official PHP manual says:
file_put_contents() is identical to calling fopen(), fwrite() and fclose() successively to write data to a file.
However, if you look at the source code of PHP on GitHub, you can see that the "writing data" part is done slightly differently in file_put_contents() and fwrite().
In the fwrite function, the raw input data (= $md_file_content) is accessed directly in order to write the buffer data to the stream context:
Line 1171:
ret = php_stream_write(stream, input, num_bytes);
In the file_put_contents function, on the other hand, the Zend string API is used (which I had never heard of before).
Here the input data and length are encapsulated for some reason.
Line 662:
numbytes = php_stream_write(stream, Z_STRVAL_P(data), Z_STRLEN_P(data));
(The Z_STR.... macros are defined here, if you are interested).
So, my suspicion is that possibly the Zend string API is causing the overhead while using file_put_contents.
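If you want to check whether that overhead is actually measurable on your box, here is a quick micro-benchmark sketch (the file name, payload size, and iteration count are all made-up values):

$payload = str_repeat('x', 150 * 1024); // ~150KB, like the files in the question
$file = __DIR__ . '/bench.md';

$t = microtime(true);
for ($i = 0; $i < 1000; $i++) {
    file_put_contents($file, $payload);
}
echo 'file_put_contents: ' . (microtime(true) - $t) . "s\n";

$t = microtime(true);
for ($i = 0; $i < 1000; $i++) {
    $fh = fopen($file, 'wb');
    fwrite($fh, $payload);
    fclose($fh);
}
echo 'fopen/fwrite/fclose: ' . (microtime(true) - $t) . "s\n";

unlink($file); // clean up the benchmark file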
Side note
At first I thought that every file_put_contents() call creates a new stream context, since the lines related to creating context were also slightly different:
PHP_NAMED_FUNCTION(php_if_fopen) (Reference):
context = php_stream_context_from_zval(zcontext, 0);
PHP_FUNCTION(file_put_contents) (Reference):
context = php_stream_context_from_zval(zcontext, flags & PHP_FILE_NO_DEFAULT_CONTEXT);
However, on closer inspection, the php_stream_context_from_zval call is made with effectively the same params: the first param, zcontext, is null, and since you don't pass any flags to file_put_contents, flags & PHP_FILE_NO_DEFAULT_CONTEXT is also 0 and is passed as the second param.
So, I guess the default stream context is re-used here on every call. Since it's apparently a stream of type persistent, it is not disposed of after the php_stream_close() call.
So the Fazit, as the Germans say, is that there is apparently either no additional overhead, or the same overhead, for creating or reusing a stream context in both cases.
Thank you for reading.
So I come to you with this problem:
GIVENS:
I have HostGator as the hosting provider.
I'm using PHP 5.5
The LINUX box is CENTOS
Shared Hosting Environment
I am a professional coder and experienced with LAMP for many years
PROBLEMS:
I'm NOT familiar with Jailed Shell but have an idea
I've tried the script and have been searching for an answer
Still Stuck...
Here's my current code:
function getMyFakeDir($myfile) {
    $target = "";
    $link = 'content/purchased-items/link';
    symlink($target, $link);
    echo "READ LINK: " . readlink($link);
    return readlink($link);
}
Here's the call to the function:
$linkText = getMyFakeDir('SomePDFThatTheUserCanDownload.pdf');
Then I pass that $linkText var to PHPMailer and voilà! The user clicks to download through the symlink, and I've written code to make it expire after 24 hours. Yeah, I got that from PHP.net.
So, basically that's my problem....
Here's the error:
Warning: symlink(): Permission denied in /homeSomewhere/someMasterDir/public_html/webServices/somePHPFile.php on line 654
This is line 654 from above:
symlink($target, $link);
Thanks...
Figured it out.... Simple LOGIC!
First I had the PATHS all wrong. Here's the CORRECTED CODE:
//Generate a symbolic link that blows away a fake directory each time
//A symbolic link is created to "THIS" file below
$filename = $myfile;
//This is ANY directory on the server...
//A randomly named directory is created here... STEP 1
$downloaddir = "/home/someHostDir/public_html/sldktrulwiu2555ivd0fjvdfgdfgdfgdf/";
//Any directory NOT accessible by a browser - this is one level up
$safedir = "/home/someHostDir/content/purchased-items/";
//This is the equivalent of the $downloaddir; the browser is REDIRECTED here and the download begins
$downloadURL = "http://www.theSomeDomain.com/sldktrulwiu2555ivd0fjvdfgdfgdfgdf/"; //THIS IS THE FAKE DIRECTORY (I simply typed gobbledygook and created that mess above)
$letters = 'abcdefghijklmnopqrstuvwxyz';
srand((double) microtime() * 1000000);
$string = '';
for ($i = 1; $i <= rand(4, 12); $i++) {
    $q = rand(0, strlen($letters) - 1); // pick any letter of the alphabet
    $string = $string . $letters[$q];
}
//Blow away any previously created fake directories
$handle = opendir($downloaddir);
while (($dir = readdir($handle)) !== false) {
    if (is_dir($downloaddir . $dir)) {
        if ($dir != "." && $dir != "..") {
            #unlink($downloaddir . $dir . "/" . $filename);
            #rmdir($downloaddir . $dir);
        }
    }
}
closedir($handle);
mkdir($downloaddir . $string, 0777);
symlink($safedir . $filename, $downloaddir . $string . "/" . $filename);
//Header("Location: " . $downloadURL . $string . "/" . $filename);
That just about does it.
The result is a dynamic directory inside the "/sldktrulwiu2555ivd0fjvdfgdfgdfgdf/" directory, like "dfsgss/", containing a "shortcut" to the REAL LOCATION of the files. THEN, when the file is downloaded and the page returns to INDEX.HTML, the code BLOWS OUT the FAKE directory, never to be seen again. Thus, if the user wants to go back, they get the horrible 404 PAGE error saying the file has been moved or deleted. BUMMER.
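One small hardening note: symlink() returns false on failure, so it's worth checking the result instead of assuming success (a minimal sketch using the variables from the code above):

if (!symlink($safedir . $filename, $downloaddir . $string . "/" . $filename)) {
    // e.g. permission denied, or symlink() disabled on shared hosting
    error_log('symlink failed for ' . $filename);
}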
That's it, folks. Thanks for your assistance; it was you all that helped me realize my error. TEAM WORK!
I have been using an upload script on my server, like below
$newname = time() . '_' . $_FILES[$file]["name"];
if (strtolower(end(explode('.', $_FILES[$file]["name"]))) != 'pdf' AND $file != "damage_attachment_damageform_1" AND $file != "damage_attachment_damageform_2" AND $file != "damage_attachment_damageform_3" AND $file != "damage_attachment_damageform_4") {
    if (move_uploaded_file($_FILES[$file]["tmp_name"], $_SERVER['DOCUMENT_ROOT'] . '/components/com_fleet/uploads/docs/' . $newname)) {
        $images[] = $_SERVER['DOCUMENT_ROOT'] . '/components/com_fleet/uploads/docs/' . $newname;
        $docs[] = $_SERVER['DOCUMENT_ROOT'] . '/components/com_fleet/uploads/docs/' . $newname;
    } else {
        die();
    }
}
It uploads an image fine, but since a few days ago I get a Warning: move_uploaded_file(): Unable to move error. I've seen these a dozen times while learning to program, so I did all the usual stuff: checked the paths, checked $_FILES[$file]["error"], and checked all the right CHMODs. All is fine, the path is spot-on, the chmod is too, no errors, etc.
One extra weird thing I noticed: the file does get written to the right /docs folder, but its file size is zero, and move_uploaded_file() still returns false...
What am I forgetting? CHOWN maybe? And how can I solve that? I don't have SSH access or anything.
Argh, after an hour I found out what was wrong: the server disk quota was exceeded. Maybe people can still benefit from my problems...
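If you want the script to fail loudly in this situation, a minimal sketch is to compare the free space with the upload size before moving the file (note that disk_free_space() reports filesystem free space, which may not reflect a per-user quota on shared hosting):

$target = $_SERVER['DOCUMENT_ROOT'] . '/components/com_fleet/uploads/docs/' . $newname;
if (disk_free_space(dirname($target)) < $_FILES[$file]["size"]) {
    die('Not enough free disk space to store the upload.');
}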
I'm trying to create a bit of code that first checks the content of a directory to see if a file exists and, if it does, appends a number to the filename. Unfortunately I can't get it to work at the moment: the PHP produces no errors, but a new file is not created if one already exists. Here is my code at the moment:
$Scan_Name_Output = "dirbuster_" . $workload["Scan_Name"] . "_output.txt";
$Check_Output = exec("ls " . $Output_Directory . " | grep -w " . $Scan_Name_Output);
$j = 1;
while (!empty($Check_Output))
{
$Scan_Name_Output = $Scan_Name_Output . $j;
$j++;
If I replace the while loop with an if statement, it works - so it's not the file paths or anything like that causing the problem. I've tried a fair few combinations but can't get it to work.
I have tried using file_exists(), but it doesn't work - I think it's because I'm passing it variables that have been put through escapeshellarg(). As a result, I think file_exists() literally looks for /path/to/dir/'Report1.txt' - and obviously 'Report1.txt' doesn't exist, while Report1.txt does. This is why I was using exec and ls.
Thanks for any responses
PHP has some nice functions built in to handle files. You should think about using file_exists() for example.
$basename = "dirbuster_" . $workload["Scan_Name"] . "_output.txt";
$Scan_Name_Output = $basename;
$j = 1;
while (file_exists($Scan_Name_Output)) {
    $Scan_Name_Output = $basename . $j;
    $j++;
}
$ourFileHandle = fopen($Scan_Name_Output, 'w') or die("can't open file");
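As for why file_exists() seemed broken: escapeshellarg() wraps the value in literal quotes, so keep the raw path for PHP's file functions and only escape when building the shell command. A small sketch (the variable names are borrowed from the question):

$path = $Output_Directory . '/' . $Scan_Name_Output; // raw path for file_exists()
if (file_exists($path)) {
    // handle the existing file here
}
$cmd = "ls " . escapeshellarg($Output_Directory); // escaped only for the shell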
Try this:
$Scan_Name_Output = "dirbuster_" . $workload["Scan_Name"] . "_output.txt";
if (file_exists($Scan_Name_Output))
{
rename($Scan_Name_Output, $Scan_Name_Output . "1");
}
I have strong reason to believe that both rename() and unlink() are asynchronous, which, from my understanding, means that when the functions are called, the code below them continues before the filesystem operations finish. This is a problem for the web app I'll explain below, because later code depends on these changes already being set in stone. So, is there a way to make both synchronous, so that the script blocks at these functions until all of their work is fully carried out on the filesystem?
Here is the code in delete-image.php, which is called by ajax from another admin-images.php(the latter will not be shown):
foreach ($dirScan as $key => $value) {
    $fileParts = explode('.', $dirScan[$key]);
    if (isset($fileParts[1])) {
        if ((!($fileParts[1] == "gif") && !($fileParts[1] == "jpg")) && (!($fileParts[1] == "png") && !($fileParts[1] == "jpg"))) {
            unset($dirScan[$key]);
        }
    } else {
        unset($dirScan[$key]);
    }
}
$dirScan = array_values($dirScan);

// for thumbnail
$file = 'galleries/' . $currentGal . '/' . $currentDir . "/" . $dirScan[$imageNum - 1];
unlink($file);
for ($i = ($imageNum - 1) + 1; $i < count($dirScan); $i++) {
    $thisFile = 'galleries/' . $currentGal . '/' . $currentDir . '/' . $dirScan[$i];
    $thisSplitFileName = explode('.', $dirScan[$i]);
    $newName = 'galleries/' . $currentGal . '/' . $currentDir . "/" . ($thisSplitFileName[0] - 1) . "." . $thisSplitFileName[1];
    rename($thisFile, $newName);
}

// for large image
$fileParts = explode('.', $dirScan[$imageNum - 1]);
$file = 'galleries/' . $currentGal . '/' . $currentDir . "/large/" . $fileParts[0] . "Large." . $fileParts[1];
unlink($file);
for ($i = ($imageNum - 1) + 1; $i < count($dirScan); $i++) {
    $thisSplitFileName = explode('.', $dirScan[$i]);
    $thisFile = 'galleries/' . $currentGal . '/' . $currentDir . '/large/' . $thisSplitFileName[0] . "Large." . $thisSplitFileName[1];
    $newName = 'galleries/' . $currentGal . '/' . $currentDir . "/large/" . ($thisSplitFileName[0] - 1) . "Large." . $thisSplitFileName[1];
    rename($thisFile, $newName);
}

sleep(1);
echo 'deleted ' . $dirScan[$imageNum - 1] . " successfully!";
} else {
    echo "please set the post data";
} ?>
After this script returns its completed text, admin-images.php triggers a new function that populates an image table from these renamed and trimmed files. Sometimes it displays old names and files that were supposed to be deleted, and a simple page refresh gets rid of them. This seems to suggest that the above PHP script runs through all the code and spits out the echoed text to the main file before it completes its filesystem manipulation. (All of this other code is long and complicated, and hopefully unnecessary for the discussion at hand.)
You'll notice I've tried a sleep() call to halt the PHP script and hopefully give it time to finish. This is an inelegant and problematic way of doing things, because I have to allow a large amount of time to ensure it works every time, yet I don't want the user to wait longer than he or she has to.
Mind that filesystems often use caches to reduce load. Normally you won't notice, but sometimes you need to clear the cache if you need the real information. Check your filesystem's configuration if your issue is filesystem-related.
PHP itself uses a cache as well for some file-operations, so clear that, too.
See clearstatcache to clear the PHP stat cache.
Take note that this is a "view" issue: the file is actually deleted on disk, but PHP might still report that it's there (until you clear the cache).
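A minimal sketch of clearing the stat cache for one path after a delete (the $file variable is assumed to hold the path you just unlinked):

unlink($file);
clearstatcache(true, $file);  // drop cached stat info (including the realpath cache) for this path
var_dump(file_exists($file)); // now reliably bool(false)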
I suppose they are not asynchronous, because they return a result telling if the operation was successful or not.
I believe the problem happens because when you run scandir after making the modifications, it may be using "cached" data, from memory, instead of re-scanning the file system.
rename() is not, but unlink() is asynchronous on Windows.
Because there seems to be no way of waiting for a pending delete to finish, this answer suggests renaming the file before deleting it. PHP does not seem to do that internally, so you can assume unlink() is asynchronous there.
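A minimal sketch of that rename-before-delete workaround (the temporary suffix is an arbitrary choice):

$tmp = $file . '.pending-delete-' . uniqid();
rename($file, $tmp); // frees the original name immediately
unlink($tmp);        // the pending delete now applies only to the temporary name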
For file operations to work reliably, build absolute paths, e.g. from $_SERVER["DOCUMENT_ROOT"]; otherwise the operation may not target the file you expect. Also, if you are using a Linux server, you will need to set the permissions for the folders in which you want to perform the file operation.
And mind that both operations are synchronous, not asynchronous, though it also depends on the type of server or the OS that you are using.