Securely delete a file in PHP

I would like to securely delete a file in PHP. I thought of the following options:
Use shred through a system call (exec, shell_exec, ...), but on most shared hosts those functions are disabled, and they are forbidden when safe mode is on.
Open it with fopen, overwrite it with random data, and unlink it. Correct me if I'm wrong, but in my research I found this was disabled on some servers as well.
The best option I thought of is using file_put_contents to overwrite its data with zeros, and then deleting it.
Question is: is file_put_contents guaranteed to overwrite the file? I wrote a simplistic approach in the following example: would this code delete a file securely? Would there be a considerable performance hit? How do I make it more "large file friendly"?
function secure_delete($file_path)
{
    $file_size = filesize($file_path);
    $new_content = str_repeat('0', $file_size);
    file_put_contents($file_path, $new_content);
    unlink($file_path);
}
UPDATE: The code I posted is more about demonstrating the overwriting of a file: the actual code would be an implementation of DoD 5220.22-M

It's not so simple. If you overwrite a file with '0' bytes, it can still be possible to restore it. You should use one of the algorithms designed for secure deletion, for example the Gutmann method, which uses 35 consecutive write passes per file.
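Since the question also asks how to make the overwrite "large file friendly", here is a minimal sketch of a chunked, multi-pass overwrite. It is a simplified stand-in for a DoD 5220.22-M style wipe, not a full implementation; the function name, pass count, and chunk size are all illustrative choices. Note that on journaling filesystems and SSDs, an overwrite at this level is not guaranteed to reach the physical sectors.

```php
// Sketch: multi-pass chunked overwrite, then unlink.
// $passes and $chunk_size are illustrative, not from any standard.
function secure_delete_chunked($file_path, $passes = 3, $chunk_size = 8192)
{
    $file_size = filesize($file_path);
    $fh = fopen($file_path, 'r+b');
    if ($fh === false) {
        return false;
    }
    for ($pass = 0; $pass < $passes; $pass++) {
        rewind($fh);
        $remaining = $file_size;
        while ($remaining > 0) {
            $len = min($chunk_size, $remaining);
            // Alternate zero passes and random passes.
            $data = ($pass % 2 === 0) ? str_repeat("\0", $len) : random_bytes($len);
            fwrite($fh, $data);
            $remaining -= $len;
        }
        fflush($fh); // push this pass to the OS before starting the next one
    }
    fclose($fh);
    return unlink($file_path);
}
```

Because the file is processed in fixed-size blocks, memory use stays constant no matter how large the file is.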

I found a PHP library that does this: https://github.com/DanielRuf/secure-shred

Related

Find out whether fopen(..., 'a') created a new file

In my PHP project, I use a kind of counter that appends to an existing (or new) file very often:
$f = fopen($filename, 'ab');
fwrite($f, $data);
fclose($f);
When a new file is created, I have to edit this file's permissions, so another user may access the file as well:
$existed = file_exists($filename);
// Do the append
$f = fopen($filename, 'ab');
fwrite($f, $data);
fclose($f);
// Update permissions
if (!$existed) {
    chmod($filename, 0666);
}
Is there any way to find out whether 'a' (append) created a new file or appended to an existing one, without using file_exists()? To my understanding, file_exists() retrieves the file stats, which causes some unnecessary overhead compared to a simple file append. As the function is used very often, I wonder if there's a way to tell whether fopen(..., 'a') created a new file without calling file_exists().
Note: This is mostly a question of style and interest, not a true performance issue. But if I am mistaken and fopen() already retrieves the file stats, please let me know!
Update
Okay, it really is a rather academic question. Here are some performance tests run on a Windows system (Apache, Win 8.1, no UNIX file permissions) and a Linux machine (Nginx, Ubuntu 14.04, virtual machine).
Each test was run with 1000 repetitions, with the file deleted before the first repetition.

                                     Win     Linux
simple one-byte append               1.8ms    9.4ms
append + clearstatcache()            1.8ms    9.3ms
file_exists() + append               2.2ms   10.5ms
file_exists() + append + clear       2.2ms   11.0ms
append + chmod()                     2.7ms   12.3ms
append + file_exists() -> chmod()    3.3ms   10.6ms

Note: the last one is the only test that uses an IF inside the test loop.
PHP's fopen is just a call to the libc fopen, which automatically creates the file for the modes w, w+, a and a+. As far as I can see, there is no way to get the stat with the permission bits from the returned file pointer.
It seems that PHP stores the stat array for each opened file, and you can access it through fstat($fp) with the open file handle $fp. But the mode field contains "inode permission bits", and I can't immediately see how those relate to the UNIX file mode; the stat system call does not use this term.
You can open your file in "r+" mode and create it only if that fails. If it succeeds, you need to seek to the end to get behaviour similar to append mode.
But finally, it's simplest to check for existence before you open the file.
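One way to detect creation without a separate file_exists() call (a sketch, not taken from the answers above; the function name and variables are made up) is to try fopen's exclusive-create mode 'x' first: it fails if the file already exists, so its success tells you the file is new.

```php
// Sketch: detect "file was created" via fopen's exclusive-create 'x' mode.
function append_and_detect($filename, $data)
{
    $f = @fopen($filename, 'x');      // returns false if the file exists
    $created = ($f !== false);
    if (!$created) {
        $f = fopen($filename, 'a');   // fall back to a plain append
    }
    fwrite($f, $data);
    fclose($f);
    if ($created) {
        chmod($filename, 0666);       // only needed on first creation
    }
    return $created;
}
```

The race window between the failed 'x' open and the 'a' open is harmless here, because 'a' never truncates existing content.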
No, fopen just returns the resource; it doesn't return or set a flag that indicates whether the file already existed: http://php.net/manual/en/function.fopen.php
EDIT: see the performance test in the edited question.
Why not call chmod() every time?
Your file_exists() is probably more expensive than a chmod() (run a little performance test to confirm...).
// Do the append
$f = fopen($filename, 'ab');
fwrite($f, $data);
fclose($f);
// Update permissions
chmod($filename, 0666);

About PHP parallel file read/write

I have a file on a website. A PHP script modifies it like this:
$contents = file_get_contents("MyFile");
// ** Modify $contents **
// Now rewrite:
$file = fopen("MyFile","w+");
fwrite($file, $contents);
fclose($file);
The modification is pretty simple. It grabs the file's contents and adds a few lines. Then it overwrites the file.
I am aware that PHP has a function for appending contents to a file rather than overwriting it all over again. However, I want to keep using this method since I'll probably change the modification algorithm in the future (so appending may not be enough).
Anyway, I was testing this out, making like 100 requests. Each time I call the script, I add a new line to the file:
First call:
First!
Second call:
First!
Second!
Third call:
First!
Second!
Third!
Pretty cool. But then:
Fourth call:
Fourth!
Fifth call:
Fourth!
Fifth!
As you can see, the first, second and third lines simply disappeared.
I've determined that the problem isn't the contents string modification algorithm (I've tested it separately). Something is messed up either when reading or writing the file.
I think it is very likely that the issue is when the file's contents are read: if $contents, for some odd reason, is empty, then the behavior shown above makes sense.
I'm no expert with PHP, but perhaps the fact that I performed 100 calls almost simultaneously caused this issue. What if there are two processes, and one is writing the file while the other is reading it?
What is the recommended approach for this issue? How should I manage file modifications when several processes could be writing/reading the same file?
What you need to do is use flock() (a file lock).
What I think is happening is that your script grabs the file while a previous request is still writing to it. Opening the file with "w+" truncates it immediately, so a concurrent reader can get an empty string, and once the later process is done it overwrites the previous contents.
The solution is to have the script usleep() for a few milliseconds when the file is locked, and then try again. Just be sure to put a limit on how many times your script can retry.
NOTICE:
If another PHP script or application accesses the file, it may not necessarily use/check for file locks. This is because file locks are often seen as an optional extra, since in most cases they aren't needed.
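A minimal sketch of that retry loop (the function name and limits here are illustrative): take a non-blocking exclusive lock, sleep briefly between attempts, and give up after a fixed number of tries.

```php
// Sketch: bounded retry around a non-blocking flock().
function locked_append($path, $line, $max_tries = 50)
{
    $fh = fopen($path, 'c+');             // 'c+' opens for writing without truncating
    $tries = 0;
    while (!flock($fh, LOCK_EX | LOCK_NB)) {
        if (++$tries >= $max_tries) {     // give up eventually
            fclose($fh);
            return false;
        }
        usleep(10000);                    // wait 10 ms before retrying
    }
    fseek($fh, 0, SEEK_END);              // append at the current end of file
    fwrite($fh, $line);
    flock($fh, LOCK_UN);                  // release the lock
    fclose($fh);
    return true;
}
```

The LOCK_NB flag makes flock() return immediately instead of blocking, which is what makes the sleep-and-retry pattern possible.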
So the issue is parallel access to the same file: while one instance is writing to the file, another is reading it before it has been updated.
Luckily, PHP has a mechanism for locking the file so that no one can read from it until the lock is released and the file has been updated.
flock() can be used; the documentation is here.
You need to create a lock, so that any concurrent requests will have to wait their turn. This can be done using the flock() function. You will have to use fopen(), as opposed to file_get_contents(), but it should not be a problem:
$file = 'file.txt';
$fh = fopen($file, 'r+');
if (flock($fh, LOCK_EX)) { // Acquire an exclusive lock
    $data = fread($fh, filesize($file)); // Read the file's contents
    // Do something with $data here...
    ftruncate($fh, 0); // Empty the file
    rewind($fh); // Move the pointer back to the start before writing
    fwrite($fh, $newData); // Write new data to the file
    fclose($fh); // Close the handle and release the lock
} else {
    die('Unable to get a lock on file: '.$file);
}

PHP not writing to file

OK, so I have this thing set up to write things to a text file, but it will not actually write the text to the file.
It deletes the file, then creates it again with the data inside.
$_POST['IP']=$ip;
unlink('boot_ip.txt');
$fp = fopen($_SERVER['DOCUMENT_ROOT'] . "/boot/boot_ip.txt","wb");
fwrite($fp,$IP) ;
fclose($fp);
Your variables were not properly set; they were assigned the wrong way around.
Quick note: wb means write in binary mode. Unless that is your intention, I suggest you use only w.
Your file name ends in .txt, so it is text: use the w switch. That will overwrite previous content.
You had:
$_POST['IP']=$ip;
unlink('boot_ip.txt');
$fp = fopen($_SERVER['DOCUMENT_ROOT'] . "/boot/boot_ip.txt","wb");
fwrite($fp,$IP);
fclose($fp);
This => $_POST['IP']=$ip; should be $ip=$_POST['IP'];
and this fwrite($fp,$IP); should be fwrite($fp,$ip);
You had $IP in uppercase when it should be lowercase, as declared in your variable.
NOTE: The unlink part of the code may need to reflect the folder's place on your server.
However, I suggest you do not use unlink, because it will throw an error if the file cannot be found to begin with (for instance, because it has already been unlinked).
You can either not use it, or wrap it in an if statement. See the example following my code below.
Plus, using the w switch will automatically overwrite previously written content.
If you need to append/add to the file, then you will need to use the a or a+ switch.
If that is the case, you will need the following:
$fp = fopen($_SERVER['DOCUMENT_ROOT'] . "/boot/boot_ip.txt","a");
fwrite($fp,$ip . "\n");
Reformatted (tested and working)
$ip=$_POST['IP'];
unlink('boot_ip.txt');
// use the one below here
// unlink($_SERVER['DOCUMENT_ROOT'] . "/boot/boot_ip.txt");
$fp = fopen($_SERVER['DOCUMENT_ROOT'] . "/boot/boot_ip.txt","wb");
fwrite($fp,$ip);
fclose($fp);
Using the following form:
<form action="handler.php" method="post">
<input type="text" name="IP">
<input type="submit" value="Submit">
</form>
Using the if-statement method:
$ip=$_POST['IP'];
if(!file_exists($_SERVER['DOCUMENT_ROOT'] . "/boot/boot_ip.txt")) {
$fp = fopen($_SERVER['DOCUMENT_ROOT'] . "/boot/boot_ip.txt","wb");
fwrite($fp,$ip);
fclose($fp);
}
Traditionally, that's exactly how text files work. A text file is a sequential-access file rather than a random-access file: everything needs to be rewritten every time you add new information. That's why it's slow and inefficient for large-scale projects.
There's no way around it: either read the data from the file and rewrite it with the new information, or use a random-access file. That's how it's taught in most languages and classrooms, mostly so you understand the process.
In practice, though, if you're only appending data to the end, you don't need any of that.
unlink() in PHP deletes a file, so you don't need it here.
ALSO
see: http://www.w3schools.com/php/php_file.asp
for how to write to a file and the parameters you can use for behaviour
specifically look at the parameters for write modes: r, w, rw+, etc....
a is probably the one you want.
It still re-creates the file as I said, but it does all the reading and rewriting for you, so you don't have to do it yourself.
The parameter you entered, "wb", does contain a w, so I assume part of it behaves the same as plain "w", which, as I said earlier, clears the file if it exists before writing new data.
My solution for you, aka the TL;DR version:
$fp=fopen("boot_ip.txt","a");
(I didn't use the full path like you did, but the important change is the second parameter, a rather than wb, and the exclusion of unlink().)
Then do your writes. This should add new data to the end of the file.

PHP write to included file

I need to include one PHP file and execute a function from it.
After execution, at the end of the PHP script, I want to append something to the included file.
But I'm unable to open the file. Is it possible to close the included file, or anything similar, so that I can append to the PHP file?
include 'something.php';
echo $somethingFromIncludedFile;
//Few hundred lines later
$fh = fopen('something.php', 'a') or die('Unable to open file');
$log = "\n".'$usr[\''.$key.'\'] = \''.$val.'\';';
fwrite($fh, $log);
fclose($fh);
How to achieve that?
In general you should never modify your PHP code using PHP itself. It's bad practice, first of all from a security standpoint. I am sure you can achieve what you need another way.
As Alex says, self-modifying code is very, VERY dangerous. And NOT separating data from code is just dumb. On top of both these warnings is the fact that PHP arrays are relatively slow and do not scale well, so you could file_put_contents('data.ser', serialize($usr)) / $usr = unserialize(file_get_contents('data.ser')), but it's only going to work for small numbers of users.
Then you've got the problem of using conventional files to store data in a multi-user context. This is possible, but you need sophisticated locking and queue management. That usually entails a daemon to manage the queue/mutex and is invariably more effort than it's worth.
Use a database to store data.
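As a sketch of that serialize/unserialize suggestion (the file name and key here are made up for illustration): keep the data in a serialized file next to the code instead of appending PHP source to the script itself.

```php
// Sketch: store $usr as serialized data rather than generated PHP code.
// 'data.ser' is an illustrative filename.
$file = 'data.ser';

// Load the existing array, or start fresh on the first run.
$usr = file_exists($file) ? unserialize(file_get_contents($file)) : [];

// The update you originally wanted to append as a line of PHP.
$usr['some-key'] = 'some value';

// Write the whole array back; LOCK_EX guards against concurrent writers.
file_put_contents($file, serialize($usr), LOCK_EX);
```

The included something.php then stays pure code, and the data file can be rewritten freely without eval() or self-modification.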
As you already know, this approach is not a good one. If you REALLY want to include your file and then append something to it, you can do it the following way.
Be aware that using eval() is risky unless you can be 100% sure the content of the file does not contain harmful code.
// This part is a replacement for you include
$fileContent = file_get_contents("something.php");
eval($fileContent);
// your echo goes here
// billion lines of code ;)
// file append mechanics
$fp = fopen("something.php", "a") or die ("Unexpected file open error!");
fputs($fp, "\n".'$usr[\''.$key.'\'] = \''.$val.'\';');
fclose($fp);

Storing a script's last run time in PHP

I need to store the time of the last run of a script to make sure it doesn't read old items (tweets, in this case). What's the best way to keep track of this?
thanks.
Generate a timestamp and save it to a logfile that can be read on the next iteration.
$time_ran = time();

function saveTimeRan($time_ran){
    $fh = fopen('/path/to/a/new/log', 'w+');
    fwrite($fh, $time_ran);
    fclose($fh);
}

function getTimeRan(){
    $fh = fopen('/path/to/a/new/log', 'r+');
    $time = fgets($fh);
    fclose($fh);
    return $time;
}
Might I suggest you make this an object and put the contents of saveTimeRan in the magic __destruct function, so that when your object is garbage-collected it will save the time. Just a suggestion. You could put your tweet functions in other object methods and make one comprehensive interface. Alternatively, you could save the value to a database field and read it on each iteration.
You gave zero information on specifics, so I will be general. You could:
Store it in a database like MySQL.
Write it to a local file with file_put_contents() and read it with file_get_contents().
Touch a local file with one of the filesystem functions, and then stat the file to see its mtime.
Use SQLite to write it to a local database.
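A sketch of the touch-and-stat option above (the function name and marker path are made up): touch a marker file on each run and read its mtime on the next one.

```php
// Sketch: record "last run" as the mtime of a marker file.
function last_run_time($marker)
{
    clearstatcache(true, $marker);    // avoid a stale cached stat result
    $last = file_exists($marker) ? filemtime($marker) : null;
    touch($marker);                   // mark this run as "now"
    return $last;                     // null on the very first run
}
```

This stores no explicit timestamp at all; the filesystem's own mtime field does the bookkeeping.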
What's the best way to keep track of this?
The best way is to keep track of it in memory (Redis/APC/Memcached/etc.), backed by a persistent store (MySQL/MongoDB/Redis/etc.). You should avoid touching the disk (I/O), because that's very slow compared to the speed of memory.
Redis
You can configure Redis in two ways:
write data back to the persistent store asynchronously (snapshotting) after a certain time has passed and/or a number of keys have been modified;
write data back to the persistent store immediately (append-only file).
It's a trade-off (performance vs safety).
APC/Memcached
APC and Memcached don't have persistent storage, so you would have to handle persistence yourself, for example with MySQL.
You could store time stamps in a DB or a text file.
You can have the script write a timestamp/date to a text file every time it runs. Then you can optionally read the file before running the script, to make sure you don't run it again before x amount of time has passed, or to only run it if a certain condition is true.
Check out file_put_contents and file_get_contents.
