Under normal conditions everything is OK and I can write and create new files with fopen() and fwrite(). But under "heavy" DDoS attacks, when the file pointer is located at position 0, I can't write anything to the file. E.g. using the "w" mode, the result is a blank file; using the "a" or "c" modes, if the file doesn't exist or is empty, nothing is written (a blank file is created too), but if the file already has some characters, the new ones are written after them ("a") or overwrite them from the beginning ("c"), respectively.
When the DDoS stops, everything is OK again.
Here is the simple code I'm using for testing. What is the problem, and can I fix it?
I'm using PHP 5 on Ubuntu with Apache and lighttpd...
<?php
$fp = fopen('data.txt', 'w');
fwrite($fp, '1');
fputs($fp, '23');
fclose($fp);
?>
The way I understood the question is that you have problems running this code when there are multiple requests accessing the .php file (and thus the file you are writing to) at the same time.
Now, while it is far from foolproof, flock() is there to help with this. The basic concept is that you ask for a lock on the file before writing, and only write to the file if you are able to get that lock:
$fp = fopen($filename, "w"); // open it for WRITING ("w")
if (flock($fp, LOCK_EX | LOCK_NB)) {
    // do your file writes here;
    // when you're done, flush the writes
    // to the file before unlocking
    fflush($fp);
    // unlock the file
    flock($fp, LOCK_UN);
} else {
    // flock() returned false, no lock obtained
    print "Could not lock $filename!\n";
}
fclose($fp);
You can read some more details from the manual entry or this article.
Related
I have a JavaScript application which posts messages to the server. I have to gather those messages on the server side and analyse them later, so I'm simply writing them to a file. The problem is that when I open the file for reading, e.g. in Notepad, messages stop being written. Since flock() is blocking and locks should be mandatory on Windows, I expected the script to simply wait until I close the file and then save all pending messages, but it doesn't seem to work this way.

Is there a way to make sure that all messages will eventually be saved to the file, even if another process gets exclusive access to it? I can't lose any message, even if someone opens the file for reading or copies it. Can I achieve this with PHP, or should I rather send the messages to a database instead? The PHP version is 7.0.4; my script looks like this:
<?php
$f = fopen('log.csv', 'a+');
flock($f, LOCK_EX);
$text = date('Y-m-d H:i:s') . ";" . htmlspecialchars($_POST["message"]) . PHP_EOL;
clearstatcache();
fwrite($f, $text);
fflush($f);
flock($f, LOCK_UN);
fclose($f);
?>
flock() returns true if it gets the lock and false if it fails, so you can retry in a loop (pass LOCK_NB so each attempt returns immediately instead of blocking):
while (!flock($f, LOCK_EX | LOCK_NB)) {
    sleep(5); // lock is held elsewhere; wait and try again
}
This still won't fix the problem of your script timing out if another process holds the lock for a long time. In that case, you might want to give up, close the file and try opening a different file name.
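As a sketch of that fallback idea, here is one way it might look; the helper name openLockedLog(), the retry count, the delay and the per-process fallback file are all assumptions for illustration, not part of the original answer:
<?php
// Try to lock the primary log; after a few failed attempts,
// spill to a per-process file so the request doesn't hang.
function openLockedLog($filename, $retries = 5)
{
    for ($i = 0; $i < $retries; $i++) {
        $f = fopen($filename, 'a');
        if ($f !== false && flock($f, LOCK_EX | LOCK_NB)) {
            return $f; // got the lock on the primary file
        }
        if ($f !== false) {
            fclose($f);
        }
        usleep(200000); // wait 0.2 s before retrying
    }
    // give up on the primary file and use a fallback name
    $f = fopen($filename . '.' . getmypid(), 'a');
    if ($f !== false) {
        flock($f, LOCK_EX); // normally uncontended
    }
    return $f;
}
?>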
I was curious and ran a test. My question is whether it is possible to open a file for both reading and writing, so that if I have several read and write operations to do on one file, I don't have to close it after reading, reopen it for writing, write, and so on in a loop.
$filename = "test.txt";
$handle = fopen($filename, "rwb");
fseek( $handle , 15360 );
$contents = fread($handle, 51200);
$start = microtime (true);
fseek( $handle , 1 );
fwrite ( $handle , $contents );
fclose($handle);
This test does not work. I expected that I would read the data, move the file pointer with fseek() to the beginning of the file (position 1 or 0), and then write the data. But the write failed for some reason, with a result of 0 (int) bytes written. So my question is: is it possible to do this, or do I need to close the file for reading first?
As a related sub-question: can several users read from or write to a file simultaneously, at different positions? This should simulate database read/write operations. You know how MySQL works: several users can write to the same table, i.e. the same file, at any time. I know this is not a problem in C/C++, but is it possible in PHP?
You can create multiple file handles on the same file. Just fopen() it twice, once read-only and once for read/write. Although I'm not sure why you'd want to do so unless you're reading and writing at two different points in the file.
$filename = "test.txt";
$rw_handle = fopen($filename, "c+"); //open for read/write, allow fseek
$r_handle = fopen($filename, "r");
If you want to have multiple processes reading and writing a file from different locations, you'll want to lock the file with flock().
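As a minimal sketch of both ideas together (the test.txt name is carried over from the question and the offsets are arbitrary):
<?php
$filename = "test.txt";

// two handles on the same file: one read/write, one read-only
$rw_handle = fopen($filename, "c+"); // read/write without truncating
$r_handle  = fopen($filename, "r");

if (flock($rw_handle, LOCK_EX)) { // exclusive lock while writing
    fseek($rw_handle, 0);         // write at one position...
    fwrite($rw_handle, "updated");
    fflush($rw_handle);
    flock($rw_handle, LOCK_UN);
}

fseek($r_handle, 100);            // ...and read from another
$chunk = fread($r_handle, 50);

fclose($rw_handle);
fclose($r_handle);
?>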
I have a cache file that is updated every hour or so. The file size ranges from 100 KB to 1 MB. The cache is updated with file_put_contents().
Only the server writes to the file. However, the file is accessed continuously: users hit a script that performs a one-time read with readfile() to echo it to the user.
If the caching script is writing the file while the user-facing script is reading it, or the other way around, would there be a problem? Or is this handled automatically by PHP?
Basically, you should lock the file while writing or reading. At the very least, it guarantees that there is no problem, and it is good programming practice. An example is shown below.
<?php
$fp = fopen("/tmp/lock.txt", "w+");
if (flock($fp, LOCK_EX)) { // do an exclusive lock
    fwrite($fp, "Write something here\n");
    flock($fp, LOCK_UN); // release the lock
} else {
    echo "Couldn't lock the file!";
}
fclose($fp);
?>
More information
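The reading side can use a shared lock (LOCK_SH), which lets many readers proceed at once while a writer's exclusive lock waits for them. A minimal sketch, assuming a hypothetical cache.html file:
<?php
$fp = fopen("cache.html", "r");
if ($fp !== false && flock($fp, LOCK_SH)) { // shared lock for reading
    fpassthru($fp);      // stream the file contents to the user
    flock($fp, LOCK_UN); // release the shared lock
}
if ($fp !== false) {
    fclose($fp);
}
?>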
I have a function which receives a filename and a JSON object to write to a text file.
The object is updated and needs to entirely replace the current contents of the file. Each site visitor has their own file. Multiple rapid changes create a situation where the file is truncated by fopen() in "w+" mode and then not written to because it is locked. The end result is an empty file.
I'm sure there's a standard, simple way to do this, as it's such a common activity. Ideally what I'm looking for is a way to check whether a file is locked before truncating it with fopen() in "w+" mode, or a way to switch modes.
It seems strange that you would have to truncate the file with fopen() just to get a file handle to pass to flock() to check whether it's locked: you've just truncated it, so what's the point?
Here's the function I have so far:
function updateFile($filename, $jsonFileData) {
    $fp = fopen($filename, "w+");
    if (flock($fp, LOCK_EX)) {
        fwrite($fp, $jsonFileData);
        flock($fp, LOCK_UN);
        fclose($fp);
        return true;
    } else {
        fclose($fp);
        return false;
    }
}
Example #1 from the PHP manual will do what you want with a slight modification: use the "c" mode, which opens the file for writing, creates it if it doesn't exist, and doesn't truncate it.
$fp = fopen("/tmp/lock.txt", "c");
if (flock($fp, LOCK_EX)) { // acquire an exclusive lock
ftruncate($fp, 0); // truncate file
fwrite($fp, "Write something here\n");
fflush($fp); // flush output before releasing the lock
flock($fp, LOCK_UN); // release the lock
} else {
echo "Couldn't get the lock!";
}
fclose($fp);
Full description of the "c" mode:
Open the file for writing. If the file does not exist, it is created. If it exists, it is neither truncated (as opposed to 'w'), nor the call to this function fails (as is the case with 'x'). The file pointer is positioned on the beginning of the file. This may be useful if it's desired to get an advisory lock (see flock()) before attempting to modify the file, as using 'w' could truncate the file before the lock was obtained (if truncation is desired, ftruncate() can be used after the lock is requested).
It doesn't look like you need it, but there's also a corresponding "c+" mode if you want to both read and write.
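For example, here is a sketch of how "c+" could fit the updateFile() function from the question; reading back the old contents is only an assumption about what you might need:
function updateFile($filename, $jsonFileData) {
    $fp = fopen($filename, "c+"); // open read/write without truncating
    if ($fp === false || !flock($fp, LOCK_EX)) {
        if ($fp !== false) {
            fclose($fp);
        }
        return false;
    }
    $old = stream_get_contents($fp); // previous contents, if you need them
    ftruncate($fp, 0);               // safe to truncate now that we hold the lock
    rewind($fp);
    fwrite($fp, $jsonFileData);
    fflush($fp);
    flock($fp, LOCK_UN);
    fclose($fp);
    return true;
}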
I had a newcomer (the teenager next door) write some PHP code to track some usage on my web site. I'm not familiar with PHP, so I'm asking a bit about concurrent file access.
My native app (on Windows) occasionally logs some data to my site by hitting a URL that runs my PHP script. The native app does not examine the returned data.
$fh = fopen($updateFile, 'a') or die("can't open file");
fwrite($fh, $ip);
fwrite($fh, ', ');
fwrite($fh, $date);
fwrite($fh, ', ');
fwrite($fh, implode(', ', $_GET));
fwrite($fh, "\r\n");
fclose($fh);
This is a low-traffic site, and the data is not critical. But what happens if two users collide and two instances of the script each try to add a line to the file? Is there any implicit file locking in PHP?
Is the code above at least safe from locking up and never returning control to my user? Can the file get corrupted? If I have the script above delete the file every month, what happens if another instance of the script is in the middle of writing to it?
You should put a lock on the file:
$fp = fopen($updateFile, 'a'); // append mode, so the existing log is not truncated
if (flock($fp, LOCK_EX)) {
    fwrite($fp, $logLine); // the line you want to append
    flock($fp, LOCK_UN);
} else {
    echo 'can\'t lock';
}
fclose($fp);
For the record, I worked on a library that does that:
https://github.com/EFTEC/DocumentStoreOne
It lets you CRUD documents by locking the file. I tried 100 concurrent users (100 calls to the PHP script at the same time) and it works.
However, it doesn't use flock() but mkdir():
// acquire the lock: spin until mkdir() succeeds (mkdir is atomic)
while (!@mkdir("file.lock")) {
    usleep(1000); // another process holds the lock; wait and retry
}
// lock acquired: use the file
fopen("file"...)
// release the lock
@rmdir("file.lock")
Why?
mkdir() is atomic, so the lock is atomic: in a single step, you either get the lock or you don't.
It's faster than flock(). Apparently flock() requires several calls to the file system.
How flock() behaves depends on the system.
I did a stress test and it worked.
Since this is an append to the file, the best approach would be to aggregate the data and write it to the file in one fwrite(), provided the data to be written is no bigger than the file buffer. Of course you don't always know the size of the buffer, so flock() is always a good option.
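Applied to the logging snippet from the question, that could look like the following sketch ($ip, $date and $updateFile are assumed to be set as in the original code):
// build the whole log line first, then append it with a single fwrite()
$line = $ip . ', ' . $date . ', ' . implode(', ', $_GET) . "\r\n";

$fh = fopen($updateFile, 'a') or die("can't open file");
if (flock($fh, LOCK_EX)) { // still lock, to be safe
    fwrite($fh, $line);
    flock($fh, LOCK_UN);
}
fclose($fh);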