My environment is Windows, MSSQL, and PHP 5.4.
My scenario:
I'm writing a small shell script that creates a full backup of the database I want to a temp folder and then moves it to a new location.
The backup goes fine and the file is created in my temp folder. Then I rename() it to the second folder; sometimes this works, and sometimes it cannot find the source file.
Of course, at this point I know I could skip the temporary location altogether, but the actual problem of not finding the file bothers me. Why is it so random, and might it also affect other file functions I've written before this one? Also, I need to be able to control how and when the files move to the destination.
The base code is as simple as it should be (this is a simplified version of my actual code, since I doubt anyone is interested in my error handling/logging conditions):
$query = "use test; backup database test to disk '//server01/temp/backups/file.bak', COMPRESSION;";
if($SQLClass->query($query)) {
$source="////server01//temp//backups//file.bak";
$destination="////server02//storage//backups//file.bak";
if(!rename($source , $destination)) {
//handleError is just a class function of mine that logs and outputs errors.
$this->handleError("Moving {$source} to {$destination} failed.");
}
}
else {
die('backup failed');
}
What I have tried:
I added a file_exists() check before the rename; when rename() can't find the source file, file_exists() can't find it either.
As the file can't be found, copy() and unlink() don't work either.
Tried clearstatcache().
Tried sleep(10) after the SQL backup completes.
None of these helped at all. Google and I seem to be out of ideas on what to do or try next. Of course I could do some shell_exec-ing, but that wouldn't remove my worries about my earlier products.
I only noticed this problem when I tried to run the command multiple times in a row. Is there some sort of cache for filenames that clearstatcache() won't touch? It seems to be some sort of ghost-file phenomenon, where PHP is late to refresh the file system contents or some such.
I would appreciate any ideas on what to try next. If you read this far, thank you :)
You may try calling the system's copy command.
I once had a problem like yours (on a Linux box) when I had to copy files between two NFS shares. It just failed from time to time for no visible reason. After I switched to cp (the analog of Windows copy), the problem was gone.
Surely it is not perfect, but it worked for me.
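On Windows that could look something like this (a minimal sketch: the UNC paths are taken from the question, and the unlink() step to complete the "move" is my assumption):

$source = '\\\\server01\\temp\\backups\\file.bak';
$destination = '\\\\server02\\storage\\backups\\file.bak';
// exec() runs the command through cmd.exe, so the copy builtin is available
exec('copy /Y "' . $source . '" "' . $destination . '"', $output, $exitCode);
if ($exitCode === 0) {
    unlink($source); // copy succeeded, remove the original to complete the move
} else {
    // log $output / $exitCode here
}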
It might be cache-related, or the SQL server process has not yet released the file.
The server may dump the backup into another temp file first and only move it to your temp folder once it is finished.
While the file is being moved, it might be inaccessible to other processes.
First, I would try to glob() all the files inside the temp dir when the error appears. Maybe you'll notice it's still not finished.
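Something like this could log what the directory actually contains at that moment (a sketch; the UNC path is assumed from the question):

foreach (glob('//server01/temp/backups/*') as $entry) {
    // log name and size; a still-growing file hints the backup isn't finished
    error_log($entry . ' - ' . filesize($entry) . ' bytes');
}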
Also, have you tried implementing something like 10 retry iterations with some delay?
$maxRetries = 10;
$attempt = 0;
$source = '//server01/temp/backups/file.bak';
$destination = '//server02/storage/backups/file.bak';
while (true) {
    if (rename($source, $destination)) {
        break; // moved successfully
    }
    if (++$attempt >= $maxRetries) {
        // handleError is just a class function of mine that logs and outputs errors.
        $this->handleError("Moving {$source} to {$destination} failed.");
        break;
    }
    sleep(20); // wait before the next attempt
}
To bypass the issue:
Don't dump and move
Move then dump :-)
(Of course, your backup store would then be one run behind.)
$source="////server01//temp//backups//file.bak";
$destination="////server02//storage//backups//file.bak";
if(!rename($source , $destination)) {
//handleError is just a class function of mine that logs and outputs errors.
$this->handleError("Moving {$source} to {$destination} failed.");
}
$query = "use test; backup database test to disk '//server01/temp/backups/file.bak', COMPRESSION;";
if($SQLClass->query($query)) {
//done :-)
}
else {
die('backup failed');
}
Try
$source = "\\server01\temp\backups\file.bak";
$destination = "\\server02\storage\backups\file.bak";
$content = file_get_content($source);
file_put_contents($destination, $content);
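Note that file_get_contents() loads the whole file into memory, which can be a problem for a large .bak file. A streamed copy is safer for big files (a sketch, reusing the same paths):

$in = fopen($source, 'rb');
$out = fopen($destination, 'wb');
stream_copy_to_stream($in, $out); // copies in chunks instead of loading everything at once
fclose($in);
fclose($out);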
Related
I have a .txt file located under one of my data folders. I have created a long polling system (actually, I copied the code) which is run by AJAX.
The problem is that my PHP script is unable to fetch the modification time of the text file (it totally disregards the file).
Below I have both the original code of the author and my tweaked code. The author's worked fine, but mine does not.
Please help.
The Apache server is hosted on Windows Server.
The file path is absolutely correct and the file exists.
Here's the section of my code which has the error:
while (true) {
    // **The error occurs here**
    $fileModifyTime = filemtime($file); // filemtime() returns the modification time (filectime() does not)
    if ($fileModifyTime === false) {
        throw new Exception('Could not read last modification time');
    }
    // if the last modification time of the file is greater than the last update sent to the browser...
    if ($fileModifyTime > $lastUpdate) {
        setcookie('lastUpdate', $fileModifyTime);
        require 'msgread.php';
        // get file contents from the last lines...
        $fileRead = tailCustom($file, 8);
        exit(json_encode([
            'status' => true,
            'time' => $fileModifyTime,
            'content' => $fileRead
        ]));
    }
    // clear the stat cache so the next check is fresh
    clearstatcache();
    // sleep before polling again
    sleep(1);
}
Here's the original code from which I copied:
the author's original polling code
And here's my full code, just in case it's needed:
my script which has the error
I suspect that your problem is that file.txt does not exist. Have you created it and ensured that it's in the current working directory of the script?
It's impossible to say more without seeing your actual code. If you select it and press Ctrl + K, that will indent it all.
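A quick way to rule that out is a sanity check before entering the polling loop (a sketch; the path is hypothetical - the point is to use an absolute path so the current working directory doesn't matter):

$file = __DIR__ . '/data/file.txt'; // hypothetical location, absolute on purpose
if (!file_exists($file)) {
    exit(json_encode(['status' => false, 'error' => "File not found: $file"]));
}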
I am testing my code using a little database stored in txt files. The most important problem I have found is when users write to one file at the same time. To solve this I am using flock.
My computer runs Windows with XAMPP installed (I mention this because I understand flock works fine on Linux, not on Windows). However, I need to run this test on a Linux server.
I have tested my code by loading the same script in 20 windows at the same time. The first results work fine, but after the test the database file appears empty.
My code:
$file_db=file("test.db");
$fd=fopen("".$db_name."","w");
if (flock($fd, LOCK_EX))
{
ftruncate($fd,0);
for ($i=0;$i<sizeof($file_db);$i++)
{
fputs($fd,"$file_db[$i]"."\n");
}
fflush($fd);
flock($fd, LOCK_UN);
fclose($fd);
}
else
{
print "Db Busy";
}
How is it possible that the script deletes the database file's content? What is the proper way: fix the existing code with flock, or use some alternative technique instead of flock?
I have rewritten the script using @lolka_bolka's answer and it works. So, in answer to your question: the file $db_name could be empty if the file test.db is empty.
ftruncate() after fopen() with "w" is useless, because "w" already truncates the file.
From the file() documentation:
Returns the file in an array. Each element of the array corresponds to a line in the file, with the newline still attached. Upon failure, file() returns FALSE.
So you do not have to add an additional end-of-line symbol.
From the flock() documentation:
PHP supports a portable way of locking complete files in an advisory way (which means all accessing programs have to use the same way of locking or it will not work).
This means that the file() function is not affected by the lock, so $file_db = file("test.db"); could read the file while another process is somewhere between ftruncate($fd, 0); and fflush($fd);. So you need to read the file contents inside the lock.
$db_name = "file.db";
$fd = fopen($db_name, "r+"); // w changed to r+ for getting file resource but not truncate it
if (flock($fd, LOCK_EX))
{
$file_db = file($db_name); // read file contents while lock obtained
ftruncate($fd, 0);
for ($i = 0; $i < sizeof($file_db); $i++)
{
fputs($fd, "$file_db[$i]");
}
fflush($fd);
flock($fd, LOCK_UN);
}
else
{
print "Db Busy";
}
fclose($fd); // fclose should be called anyway
P.S. You can test this script from the console:
$ for i in {1..20}; do php 'file.php' >> file.log 2>&1 & done
Consider this code:
public static function removeDir($src)
{
    if (is_dir($src)) {
        $dir = @opendir($src);
        if ($dir === false)
            return;
        // remove every entry except . and .., recursing into subdirectories
        while (($file = readdir($dir)) !== false) {
            if ($file != '.' && $file != '..') {
                $path = $src . DIRECTORY_SEPARATOR . $file;
                if (is_dir($path)) {
                    self::removeDir($path);
                } else {
                    @unlink($path); // errors are suppressed and silently ignored
                }
            }
        }
        closedir($dir);
        @rmdir($src); // remove the now (hopefully) empty directory itself
    }
}
This will remove a directory. But if unlink() fails, or opendir() fails on any subdirectory, the directory will be left with some of its content.
I want either everything deleted or nothing deleted. I'm thinking of copying the directory before removal and, if anything fails, restoring the copy. But maybe there's a better way - like locking the files or something similar?
In general I would confirm the comment:
"Copy it, delete it, copy back if deleted else throw deleting message fail..." – We0
However let's take some side considerations:
Trying to implement a transaction-safe file deletion indicates that you want to allow competing file locks on the same set of files. Transaction handling is usually the most 'expensive' way to ensure consistency. This holds true even if PHP had some kind of test-delete available, because you would need to test-delete everything in a first run and then do a second loop, which costs time (and during which you are in danger that something changes on your file system in the meantime). There are other options:
Try to isolate what really needs to be transaction-safe and handle those data accesses in a database. E.g., MySQL/InnoDB supports all the nitty-gritty details of transaction handling.
Define and implement dedicated 'write/lock ownership'. So you have folders A and B with sub-items; your PHP is allowed to lock files in A, and some other process is allowed to lock files in B. Both your PHP and the other process are allowed to read A and B. This gets tricky with files, because a file read causes a lock as well, which lasts longer the bigger the file is. So on a file basis you probably need to enrich this with file-size limits, tolerance periods, and so on.
Define and implement dedicated access time frames. E.g., all files can be used during the week, but you have a maintenance time frame on Sunday night which can also run deletions and therefore requires a lock-free environment.
Right - let's say my reasoning was not frightening enough :) and you implement a transaction-safe file deletion anyway - then your routine can be implemented this way (a sketch follows the list):
back up all files
if the backup fails, you could try a second, third, or fourth time (this is an implementation decision)
if there is no successful backup, full stop
run your deletion process; there are two implementation options (either way, you need to log the files you deleted successfully):
always run through fully and document all errors (this can be returned to the user later as a homework task list, but it potentially runs long)
run through and stop at the first error
if the deletion was successful, all fine / full stop; if not, proceed with rolling back
copy back only the previously successfully deleted files from the archive (ONLY THEM!)
wipe out your backup
This is then only transaction-safe at the file level. It does NOT handle the case where somebody changes permissions on folders between steps 5 and 6.
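A rough sketch of that routine - the three helper functions are hypothetical placeholders for the steps above, not real library calls:

function transactionalRemoveDir($src, $backupDir)
{
    // steps 1-3: back up everything; without a successful backup, full stop
    if (!copyTreeWithRetries($src, $backupDir)) { // hypothetical helper
        return false;
    }
    // step 4: delete, logging every successfully deleted entry into $deleted
    $deleted = array();
    if (deleteTreeLoggingSuccesses($src, $deleted)) { // hypothetical helper
        removeBackup($backupDir); // step 7: wipe the backup, e.g. with the removeDir() from the question
        return true;
    }
    // steps 5-6: deletion failed part-way, so copy back ONLY the deleted entries
    restoreEntriesFromBackup($backupDir, $deleted); // hypothetical helper
    return false;
}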
Or you could try to just rename/move the directory to something like /tmp/ - it either succeeds or it doesn't, but the files are not gone. Even if another process has an open handle, the move should be OK. The files will be gone some time later, when the tmp folder is emptied.
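A sketch of that idea (sys_get_temp_dir() is used instead of a hard-coded /tmp/ so it also works on Windows; note that rename() only behaves like a cheap move when source and target are on the same volume):

$trash = sys_get_temp_dir() . DIRECTORY_SEPARATOR . 'removed_' . uniqid();
if (rename($src, $trash)) {
    // the directory is gone from the application's point of view in one step;
    // the real deletion can happen later, e.g. when the temp folder is cleaned
} else {
    // nothing was touched, so there is nothing to roll back - just report the failure
}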
I have a script that re-writes a file every few hours. This file is inserted into end users' HTML via a PHP include.
How can I check whether my script, at this exact moment, is working on (i.e. re-writing) the file when it is requested for display? Is it even an issue - in other words, what happens if a user accesses the file at the same time, what are the odds, and will the user just have to wait until the script has finished its work?
Thanks in advance!
More on the subject...
Is this a way forward, using file_put_contents() with LOCK_EX?
When the script saves its data every now and then:
file_put_contents("text", $content, LOCK_EX); // filename first, then the data
and when the user opens the page:
if (file_exists("text")) {
function include_file() {
$file = fopen("text", "r");
if (flock($file, LOCK_EX)) {
include_file();
}
else {
echo file_get_contents("text");
}
}
} else {
echo 'no such file';
}
Could anyone advise me on the syntax? Is this a proper way to call include_file() after the condition, and how can I limit the number of such calls?
I guess this solution is also good, apart from the same call to include_file() - would it even work?
function include_file() {
$time = time();
$file = filectime("text");
if ($file + 1 < $time) {
echo "good to read";
} else {
echo "have to wait";
include_file();
}
}
To check whether the file is currently being written, you can use the filectime() function to get the time the file was last changed.
You can store the current timestamp in a variable at the top of your script; whenever you need to access the file, compare the current timestamp with the filectime() of that file. If the file's change time is more recent, you have hit the scenario where you have to wait for the file to be written, and you can log that to a database or another file.
To prevent this scenario from happening, you can change the script which writes the file so that it first creates a temporary file and, once it's done, replaces (moves or renames) the original file with the temporary one. This action takes very little time compared to writing the file and makes the scenario a very rare possibility.
Even if the read and replace operations occur simultaneously, the time the reading script has to wait will be very short.
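The writing side could look like this (a minimal sketch reusing the "text" file name from the question; note that on Windows, rename() over an existing file can fail, so a retry or a copy-and-unlink fallback may be needed):

$tmp = 'text.tmp';
file_put_contents($tmp, $content); // the slow work happens on the temp file
rename($tmp, 'text'); // near-instant swap, so readers almost never catch a half-written file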
Depending on the size of the file, this might be an issue of concurrency. But you might solve it quite easily: before starting to write the file, create a kind of "lock file", i.e. if your file is named "incfile.php", create an "incfile.php.lock". Once you're done writing, remove this file.
On the include side, you can check for the existence of "incfile.php.lock" and wait until it has disappeared; this needs some looping and sleeping for the unlikely case of concurrent access.
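For example (a sketch; the sleep interval and the give-up limit are arbitrary choices):

$tries = 0;
while (file_exists('incfile.php.lock') && $tries++ < 50) {
    usleep(100000); // wait 0.1 s for the writer; give up after ~5 s
}
include 'incfile.php';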
Basically, you should consider another solution: just write the data that is rendered into that file to a database (locks etc. are available there) and render it in a module which then gets included in your page. Solutions like yours are hard to maintain in the long run...
This question is old, but I'm adding this answer because the other answers have no code.
function write_to_file(string $fp, string $string) : bool {
    $timestamp_before_fwrite = time(); // time() returns an int, unlike date("U") which returns a string
    $stream = fopen($fp, "w");
    fwrite($stream, $string);
    fclose($stream);
    clearstatcache(true, $fp); // make sure filemtime() is not served from the stat cache
    $file_last_changed = filemtime($fp);
    if ($file_last_changed < $timestamp_before_fwrite) {
        // file not changed code
        return false;
    }
    return true;
}
This is the function I use to write to a file: it first gets the current timestamp before making changes to the file, and then compares that timestamp to the last time the file was changed.
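Example usage (the file name is arbitrary):

if (!write_to_file('data.txt', "hello\n")) {
    // the modification time did not advance, so treat the write as failed
    error_log('data.txt was not updated');
}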
I'm facing a very weird problem! I'm using the method below to extract a .zip file's contents into a new folder. It works perfectly fine on my computer but does not work on another one! I have Windows XP on both computers and have installed the same WampServer on both. Everything between the two computers is the same except their CPU and RAM! My computer is a powerful one, and the one where the extract process fails is a very slow computer. Is that why? How can I make sure the PHP code runs correctly even in a slow environment?
One thing to add: the zip archive to be extracted contains one directory with some files in that directory. If I test the process with a zip file that has no directories in it, it works fine on both computers. Any ideas?!
public function extract($pluginName, $pasteLocation) {
$zip = new ZipArchive();
$plugin = $pasteLocation.$pluginName.".zip";
if ($zip->open($plugin) === TRUE) {
$zip->extractTo($pasteLocation);
$zip->close();
unlink($pasteLocation.$pluginName.'.zip');
$status = "true";
$msg = "success";
} else {
$status = "false";
$msg = "error";
}
$result["status"] = $status;
$result["msg"] = $msg;
return $result;
}
You said it does not work on one system. Can you tell us what exactly is not working - e.g., are the files extracted partially, or are they getting corrupted?
Did you try using different directories? Does the target directory contain a file with the same name as the directory in the zip? If so, I guess the directory creation will not work.
Also, what version of PHP are you using?
EDIT: Did you use the ZipArchive::getStatusString() function to get any generated errors? Are you using the same source archive on both machines?
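For example, extended from the method in the question (a sketch; the error_log() call is just one way to surface the message):

if ($zip->open($plugin) === TRUE) {
    if (!$zip->extractTo($pasteLocation)) {
        // getStatusString() returns a human-readable status/error message
        error_log('Extraction failed: ' . $zip->getStatusString());
    }
    $zip->close();
}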
You can also try the procedure explained in a comment by 'hardcorevenom' here.
You can also try this class, as shown here, if nothing works.