I am writing some json results in files in PHP on shared hosting (fwrite).
Then I read those files to extract json results (file_get_contents).
It sometimes happens (maybe once in more than a thousand reads) that when I read the file it appears truncated: I only get some multiple of the first 32768 bytes of the file.
I added some code to copy the file I am reading whenever the JSON string is not valid, and I then end up with 2 different files: the original one was correctly written, as it contains a valid JSON string, while the copy contains only the beginning of the original and has a size of x*32768 bytes.
Would you have any idea of what the problem could be and how to solve it? (I don't know how to investigate further.)
Thank you
Without example code it is impossible to give a 'fix my code' answer, but when doing this sort of file write/read programming you should follow a simple process (which, from the description, is missing one fairly critical step!).
First, write to a TEMP file. You are writing to a file, but it is important here to write to a TEMP file first - otherwise you could have race conditions.
An easy way to do that in PHP:
$yourData = "whateverYourDataIs....";
$goodfilename = 'whateverYourGoodFileNameIsForYourData.json';
$tempfilename = 'tempfile' . time(); // MANY ways to do this (lots of SO posts on it - just get a unique name every time you write ('unique' may not be needed if you only occasionally do a write, but it is a good safety measure to avoid collisions and time() works for many programs.)
// Now, use $tempfilename in your fwrite.
$fwrite = fwrite($tempfilename,$yourData);
if ($fwrite === false) {
// the write failed, so do whatever 'error' function you may need
// since it failed, there should be no file, but not a bad idea to attempt to delete it
unlink ($tempfile);
}
else {
// the write succeeded, so let's do a 'sanity check' on the file to make sure it is good JSON (this is a 'paranoid' check, but "better safe than sorry", right?)
if(json_decode($tempfile)){
// we know the file is good JSON, so now RENAME (this is really fast, so collisions are almost impossible) NOTE: see http://php.net/manual/en/function.rename.php comments for some potential challenges and workarounds if you have trouble with rename.
rename($tempfilename,$goodfilename);
}
// Now, the GOOD file will contain your new data - and those read issues are gone! (though never say 'never' - it may be possible, but very unlikely!)
}
This may or may not be your issue directly, and you will have to adapt it to fit your code, but as a safety factor - and a good way to avoid collisions - it should give you ~100% read success, which I believe is what you are after!
If this doesn't help, then some direct code will be needed to provide a more complete answer.
As suggested by @UlrichEckhardt's comment, it was due to a read/write concurrency problem: I was trying to read a file that was still being written. I solved this by simply waiting and then trying to read the file again.
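A rough sketch of that "wait and retry" fix (the file name, attempt count, and 200 ms delay below are arbitrary examples, not part of the original code):

$resultFile = 'results.json'; // hypothetical example name
$data = null;
for ($attempt = 0; $attempt < 5; $attempt++) {
    $contents = file_get_contents($resultFile);
    $decoded = ($contents === false) ? null : json_decode($contents);
    if ($decoded !== null) {
        $data = $decoded; // valid JSON, so the writer had finished
        break;
    }
    usleep(200000); // wait 200 ms before trying again
}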
I found this simple function that simply clears the contents of a file:
function createEmptyFile($filename)
{
    // opening with mode 'w' truncates the file to zero length (or creates it)
    $fp = fopen($filename, 'w');
    // nothing to write - the truncation is all we wanted
    fclose($fp);
}
For example, to reset a log, or something similar.
But I have a question whether this function is really 100% safe - that is, whether its behavior is uniquely determined and the result will be as expected.
I have a strange feeling that there is a possibility that the file may not be cleared, may be damaged, or may be filled with random data in some case.
I understand that perhaps this is all prejudice, but I would like to know exactly how correct that is. In particular, I am confused by the absence of any command between opening and closing the handle.
For example, I assume that if the PHP interpreter performs optimizations, then, due to the lack of any command between opening and closing the handle, it might skip this block entirely, or something like that.
Sorry if I create a panic in vain.
I am confused by the absence of any command between opening and closing the handle.
Let me put something up front:
The left '{' and right '}' brackets are flow control and need commands in between; fopen() and fclose() do not. Though these are only 2 commands, they each carry out many more operations internally.
That said, let's look at what they do: fopen() ...
(a) looks for the specified file,
(b) creates the file if it does not exist,
(c) opens the file for writing only,
(d) sets the file pointer to the beginning of the file,
(e) truncates the file to zero length if it is longer than zero.
In particular, I am confused by the absence of any command between opening and closing the handle.
You see there are many "commands" in between. So don't worry.
I have a strange feeling that there is a possibility that the file may not be cleared
To be exact, the file is not cleared, it is truncated to zero length. The earlier data of this file is still present on your storage device, but what happens to it is up to your operating system. There are programs that delete those data blocks entirely.
[...], or may be damaged
I don't understand the question. You are going to empty a file - what more damage do you expect?
[...], or filled with random data in some case.
That is C-style behavior: creating a variable by pointing at free memory without clearing the data that was previously in it, leaving that duty to you. Here, however, it is just a truncation, not a redirection of the file pointer, so this fear can be ignored.
But I have a question whether this function is really 100% safe - that is, whether its behavior is uniquely determined and the result will be as expected.
Yes, normally the behavior is uniquely determined. But you have to expect some side effects:
if the file does not exist, it will be created; you need write access to the directory (otherwise fopen() comes back with false).
if you have no write access to the file, fopen() comes back with false.
if your PHP environment uses "safe mode", a difference between file owner and running user makes fopen() fail. You need to be sure that the file is worked on only by you.
It can happen that you just call fopen() and don't check the return value. That may cause a problem and would mean: not 100% safe if you don't react correctly.
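To make that concrete, here is a minimal sketch of the same function with the return value checked (the error handling shown is just an example):

function createEmptyFile($filename)
{
    $fp = fopen($filename, 'w'); // truncates the file, or creates it
    if ($fp === false) {
        // e.g. no write access to the file or directory, or a safe-mode owner mismatch
        return false;
    }
    return fclose($fp);
}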
So - yes, fopen() and fclose() are sufficient and correct, and they inform you if the work is not done properly.
I want to delete a file by using PHP. I have used the unlink() function, but I was wondering about the security of unlink. Is the file completely deleted from the server? I want to make sure that there is no way to get the file back and the file is completely removed from the server.
Open the file in binary mode for writing, write 1s over the entire file, close the file, and then unlink it. That overwrites any data within the file so it cannot be recovered.
Personally, I would say use 1s instead of 0s, as 1s are actual data and will always write, whereas 0s may not write, depending on several factors.
Edit: after some thought, and reading of comments, I would go with a hybrid approach, depending on "how deleted" you want the file to be. If you simply wish to make the data unrecoverable, overwrite the entire file's length with 1s, as this is fast and destroys the data. The problem with this is that it leaves a stretch of uniform data on the disk, which implies a file USED to be there and gives away the file's length - vital pieces of forensic information. Simply writing random data will not avoid this either: if all the drive sectors around this file are untouched, that will also leave a forensic trace.
The best solution, factoring in forensic deletion, obfuscation and plausible deniability (again, this is overkill, but I'm adding it for the sake of adding it): overwrite the entire length of the file with 1s and then, for HALF the length of the file in bytes, write from mt_rand in random-length chunks from random starting points, leaving the impression that many files of varying lengths used to be in this area, thus creating a false trail. (Again, this is completely overkill and is generally only needed by serial killers and the CIA, but I'm adding it for the sake of doing so.)
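A best-effort sketch of the basic "overwrite with 1s, then unlink" idea (the function name is made up, and whether the bytes actually reach the same disk sectors depends on the filesystem and OS):

function overwriteAndDelete($filename)
{
    $size = filesize($filename);
    if ($size === false) {
        return false;
    }
    $fp = fopen($filename, 'r+b'); // open the existing file in binary mode
    if ($fp === false) {
        return false;
    }
    fwrite($fp, str_repeat(chr(0xFF), $size)); // 0xFF = all bits set to 1
    fflush($fp); // push PHP's buffer down to the OS
    fclose($fp);
    return unlink($filename);
}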
The US government used to recommend a seven-step wipe for disks:
1) all '1's
2) all '0's
3) the pattern '01'
4) the pattern '10'
5) a random pattern
6) all '1's
7) a random pattern
Re the code sample: using a language like PHP is wrong for this type of wipe, as you're relying on the OS really wiping the file rather than doing something clever like only writing the last pass or just unlinking it. However...
(untested)
$filename = "/usr/local/something.txt";
$size = filesize($filename);
$pat1 = chr(0);
$pat2 = chr(255);
$pat3 = chr(170);
$pat4 = chr(85);
$mask = str_repeat($pat1, $size);
file_put_contents($filename, $mask);
$mask = str_repeat($pat2, $size);
file_put_contents($filename, $mask);
$mask = str_repeat($pat3, $size);
file_put_contents($filename, $mask);
$mask = str_repeat($pat4, $size);
file_put_contents($filename, $mask);
This might not answer HOW to perfectly delete a file "with PHP", but it answers your question: "Is the file completely deleted from the server?"
In some cases, No! (on UNIX/POSIX OS).
According to the highest-voted comment on the official PHP unlink() manual page, the unlink() function does not really delete the file, it deletes the system link to the file's content! As files can have several file names (!) [hard links - see link() below], the file will only be deleted when ALL file names are unlinked. So, if your file has 2 names, then unlink() will not really delete the file unless you unlink() both file names. Dear Linux guys, please correct me here if necessary.
This might be why the function is called unLINK() and not delete()!!!
Here is a full quote of the excellent comment:
Deleted a large file but seeing no increase in free space or decrease of disk usage? Using UNIX or another POSIX OS? unlink() is not about removing a file, it's about removing a file name. The manpage says: "unlink - delete a name and possibly the file it refers to". Most of the time a file has just one name - removing it will also remove (free, deallocate) the 'body' of the file (with one caveat, see below). That's the simple, usual case.
However, it's perfectly fine for a file to have several names (see the link() function), in the same or different directories. All the names will refer to the file body and keep it 'alive', so to say. Only when all the names are removed is the body of the file actually freed. The caveat: a file's body may *also* be 'kept alive' (still using disk space) by a process holding the file open. The body will not be deallocated (will not free disk space) as long as the process holds it open. In fact, there's a fancy way of resurrecting a file removed by mistake but still held open by a process...
Have a look at unlink()'s sister function link() here.
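A small demonstration of that point (the file names are examples; this assumes a filesystem that supports hard links):

file_put_contents('original.txt', 'hello');
link('original.txt', 'second-name.txt'); // create a second hard link to the same file body

unlink('original.txt'); // removes one name only
echo file_get_contents('second-name.txt'); // still prints "hello"

unlink('second-name.txt'); // only now is the file body actually freed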
The (imo) best way to delete a file via PHP:
The way to really delete a file with PHP (on Linux) is to use the exec() function, which executes real bash commands (doing things with the Linux bash feels correct, btw). In this case, the file test.jpg would be deleted by doing:
exec("rm test.jpg");
More info on how to use rm (remove) correctly can be found, for example, here. Please note: PHP needs the right to delete the file!
UPDATE: Unfortunately, the Linux rm command ALSO does not really delete the file if it has two names/links. Look here for more info.
I'll have a deeper research on that and give feedback...
It is possible that, because of fragmentation on the disk, some parts of the file will remain, even if the file is totally overwritten.
The other way is to run (via shell_exec()) an external program that is system specific. Here is an example (for Windows); however, I have not tested it.
You should do multiple passes of overwriting to diminish traces. For instance, using the US DoD 5220-22.M standard: "Overwrite all addressable locations with a character, its complement, then a random character and verify" (from the KillDisk site).
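A loose sketch of that pass sequence (character, complement, random, verify); $filename is assumed to be an existing, non-empty file, and random_bytes() needs PHP 7+:

$size = filesize($filename);
$passes = array(
    str_repeat(chr(0x55), $size), // a character: 01010101
    str_repeat(chr(0xAA), $size), // its complement: 10101010
    random_bytes($size),          // a random pass
);
foreach ($passes as $mask) {
    file_put_contents($filename, $mask);
}
// verify that the last pass actually landed
$verified = (file_get_contents($filename) === $passes[2]);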
Here's what the EFF recommends to permanently remove a file http://ssd.eff.org/tech/deletion.
In my embedded Ubuntu device, I use: echo exec('rm /usr/share/subdirectory/subdirectory/filename'); This works for me.
If you use rm -f (--force), then Linux will
ignore nonexistent files and arguments, never prompt
rm -d will
remove empty directories
If you enter rm --help at the prompt you get the help screen. The last lines read:
Note that if you use rm to remove a file, it might be possible to recover some of its contents, given sufficient expertise and/or time. For greater assurance that the contents are truly unrecoverable, consider using shred.
Since my system is a "closed" system, I'm not concerned about violating security issues. My logic is that one must have the system password to SSH into the OS, and the only user interface is via web pages.
@Sliq's comments are still true to date. You need to decide for your case.
So I have this script that parses through a file and inserts data into my DB. But now the files I'm looking at have a bunch of random text (basically a key for the rest of the log) before the stuff that I need. So I need to find the timestamp in the file and then operate as I usually would from there, ignoring all the stuff before the timestamp. The logs look like this:
(Blah blah, random stuff)
'2004-05-12 15:45:00',0,0,0,141713,,123.288,122.449,123.2...
And here's my code for once I've 'hit' the timestamp, storing the values in an array:
// read and store the values in an array
$i = 0;
$val = array();
while (($buffer = gzgets($fp, 8192)) !== false)
{
    $val[$i] = $buffer;
    $i++;
}

$qry = "insert into afeed
        set time_stamp='".$val[0]."',
        error_value='".$val[1]."',
        firstThing='".$val[2]."',
        ...
        otherstuff='".$val[12]."',
        lastThing='".$val[13]."'";
So I can either look for the timestamp and start from there, or - since all the useless stuff at the beginning is the same every time - I could find out its size and 'skip that many bytes'.
Is any of this possible?
Both of your potential solutions are possible. The "skip that many bytes" solution would probably be the most straightforward. If you know ahead of time that the length of the useless stuff is going to be, for example, 64 bytes at the beginning of every file, you can use something like gzseek($fp, 64) to move the file pointer past the useless stuff.
Since you said the useless stuff is the same every time, this should work. If the useless stuff is not a predetermined length, this method could still be used once you determined how long the useless stuff is (I'd need to see what the useless stuff is to decide on the best method for doing this).
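For illustration, a small sketch of both options (the path, the 64-byte offset, and the timestamp pattern are assumptions based on the sample line above):

$fp = gzopen('/path/to/logfile.gz', 'r'); // error handling omitted

// Option 1: the header is a known, fixed length - just skip past it.
gzseek($fp, 64);

// Option 2: the header length varies - read lines until one starts with a timestamp.
while (($buffer = gzgets($fp, 8192)) !== false) {
    if (preg_match("/^'\d{4}-\d{2}-\d{2} /", $buffer)) {
        // $buffer is the first data line; process it and continue as usual
        break;
    }
}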
I have a CSV file with records sorted on the first field. I managed to write a function that does a binary search through that file, using fseek for random access through the file.
However, this is still a pretty slow process, since when I seek to some file position I actually need to look left for the \n character, so I can make sure I'm reading a whole line (once the whole line is read, I can check the first field's value mentioned above).
Here is the function that returns a line that contains character at position x:
function fgetLineContaining( $fh, $x ) {
    if( $x < 0 || $x > 125145411 ) // 125145411 is the last pos in my file
        return "";
    // now go as far left as possible, until a newline is found
    // or the beginning of the file
    $c = '';
    while( $x > 0 && $c != "\n" && $c != "\r") {
        fseek($fh, $x);
        $x--; // go left in the file
        $c = fgetc( $fh );
    }
    $x += 2; // skip the newline char
    fseek( $fh, $x );
    return fgets( $fh, 1024 ); // return the line from here until \n
}
While this works as expected, I have to say that my CSV file has ~1.5 million lines, and these left-seeks slow things down quite a bit.
Is there a better way to seek a line containing position x inside a file?
Also, it would be much better if an object of a class could be saved to a file without serializing it, thus enabling reading of a file object by object. Does PHP support that?
Thanks
I think you really should consider using SQLite or MySQL again (like others have suggested in the comments). Most of the suggestions about pre-calculating indexes are already implemented "properly" in these SQL engines.
You said the speed wasn't good enough with SQL. Did you have the fields indexed properly? How were you querying the data? Were you using bulk queries? Were you using prepared statements? Did the SQL process have enough RAM to store its indexes?
One thing you can try to speed things up under the current algorithm is to load the (~100MB?) file onto a RAM disk. No matter what you choose, CSV or SQLite, this WILL help speed things up, especially if hard drive seek time is your bottleneck.
You could possibly even read the whole file into PHP arrays (assuming your computer has enough RAM for that). That would allow you to do your search via index ($big_array[$offset]) lookups.
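A minimal sketch of that in-memory approach, using a binary search over the loaded lines (the path and the findLine helper are made up; this assumes the file fits in RAM, is sorted on the first field as described above, and PHP 7+ for intdiv()):

$lines = file('/path/to/data.csv', FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);

function findLine(array $lines, $key) {
    $lo = 0;
    $hi = count($lines) - 1;
    while ($lo <= $hi) {
        $mid = intdiv($lo + $hi, 2);
        $field = strstr($lines[$mid], ',', true); // first CSV field
        if ($field === $key) {
            return $lines[$mid];
        } elseif ($field < $key) {
            $lo = $mid + 1;
        } else {
            $hi = $mid - 1;
        }
    }
    return null; // not found
}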
Also, one thing to keep in mind: PHP isn't exactly super fast at doing low-level things. You might want to consider moving away from PHP in favor of C or C++.
What is the best way to write to files in a large PHP application? Let's say there are lots of writes needed per second. What is the best way to go about this?
Could I just open the file and append the data? Or should I open, lock, write and unlock?
What will happen if the file is being worked on and other data needs to be written? Will this activity be lost, or will it be saved? And if it will be saved, will it halt the application?
If you have been, thank you for reading!
Here's a simple example that highlights the danger of simultaneous wites:
<?php
for($i = 0; $i < 100; $i++) {
$pid = pcntl_fork();
//only spawn more children if we're not a child ourselves
if(!$pid)
break;
}
$fh = fopen('test.txt', 'a');
//The following is a simple attempt to get multiple threads to start at the same time.
$until = round(ceil(time() / 10.0) * 10);
echo "Sleeping until $until\n";
time_sleep_until($until);
$myPid = posix_getpid();
//create a line starting with pid, followed by 10,000 copies of
//a "random" char based on pid.
$line = $myPid . str_repeat(chr(ord('A')+$myPid%25), 10000) . "\n";
for($i = 0; $i < 1; $i++) {
fwrite($fh, $line);
}
fclose($fh);
echo "done\n";
If appends were safe, you should get a file with 100 lines, all of them roughly 10,000 chars long and beginning with an integer. And sometimes, when you run this script, that's exactly what you'll get. Sometimes, however, a few appends will conflict, and lines will get mangled.
You can find corrupted lines with grep '^[^0-9]' test.txt
This is because file append is only atomic if:
You make a single fwrite() call
and that fwrite() is smaller than PIPE_BUF (somewhere around 1-4k)
and you write to a fully POSIX-compliant filesystem
If you make more than a single call to fwrite during your log append, or you write more than about 4k, all bets are off.
Now, as to whether or not this matters: are you okay with having a few corrupt lines in your log under heavy load? Honestly, most of the time this is perfectly acceptable, and you can avoid the overhead of file locking.
I do have a high-performance, multi-threaded application where all threads write (append) to a single log file. So far I have not noticed any problems with that - each thread writes multiple times per second and nothing gets lost. I think just appending to a huge file should be no issue, but if you want to modify already existing content, especially with concurrency, I would go with locking; otherwise a big mess can happen...
If concurrency is an issue, you should really be using databases.
If you're just writing logs, maybe you should take a look at the syslog() function, since syslog provides an API.
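A short sketch of that API (the 'myapp' identifier and the LOG_LOCAL0 facility are arbitrary example choices):

openlog('myapp', LOG_PID, LOG_LOCAL0); // prefix entries with our name and PID
syslog(LOG_INFO, 'Something worth logging happened');
closelog();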
You could also delegate writes to a dedicated backend and do the job in an asynchronous manner.
These are my 2p.
Unless a unique file is needed for a specific reason, I would avoid appending everything to one huge file. Instead, I would wrap the file by time and size; a couple of configuration parameters (wrap_time and wrap_size) could be defined for this.
Also, I would probably introduce some buffering to avoid waiting for the write operation to complete.
PHP is probably not the most suitable language for this kind of operation, but it could still be possible.
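A rough sketch of the time-based wrapping idea (the names and values are examples; wrap_time here starts a new file every hour):

$wrap_time = 3600; // seconds per log file
$entry = date('c') . " something happened"; // example log line
$logfile = 'app.log.' . floor(time() / $wrap_time);
file_put_contents($logfile, $entry . "\n", FILE_APPEND | LOCK_EX);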
Use flock()
See this question
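A minimal sketch of a locked append along those lines (the file name and message are examples):

$line = date('c') . " some log entry";
$fh = fopen('app.log', 'a');
if ($fh !== false && flock($fh, LOCK_EX)) { // block until we hold an exclusive lock
    fwrite($fh, $line . "\n");
    fflush($fh); // flush PHP's buffer before releasing the lock
    flock($fh, LOCK_UN); // release the lock
}
if ($fh !== false) {
    fclose($fh);
}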
If you just need to append data, PHP should be fine with that, as the filesystem should take care of simultaneous appends.