How can I make it so that when a user clicks a link on my web page, it writes to a .txt file named "Count.txt", which contains only a number, and adds 1 to that number? Thank you.
If you forgo any validity checking, you could do it with something as simple as:
file_put_contents($theCounterFile, file_get_contents($theCounterFile)+1);
Addition:
There's talk about concurrency in this thread and it should be noted that it is a good idea to use a database and transactions to deal with concurrency, I'd highly recommend against writing a bunch of plumbing code to do this in a file.
If you've ever had, or think you might ever have two requests for the resource in the same second you should look into PDO with mysql, or PDO with SQLite instead of a file, use transactions (and InnoDB or better if you're going for mysql).
But really, even if you get two requests in the same microsecond (highly unlikely), the chances of locking the file are slim, as it will not be kept open, and the two requests will probably not be handled in parallel closely enough to collide anyway. Reality check: how many hits on the same resource do you get, on average, in the same minute?
If you decide to do anything more advanced, like say storing two numbers, you may want to consider using SQLite. It's about as fast and as simple as opening and closing a file, but is much more flexible.
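For illustration, a PDO/SQLite counter could look something like this (the database path and table name here are made up):
$pdo = new PDO('sqlite:' . __DIR__ . '/counter.sqlite');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$pdo->exec('CREATE TABLE IF NOT EXISTS counter (id INTEGER PRIMARY KEY, hits INTEGER NOT NULL)');
$pdo->exec('INSERT OR IGNORE INTO counter (id, hits) VALUES (1, 0)');
// a single UPDATE is atomic on its own; the transaction shows the pattern for multi-step updates
$pdo->beginTransaction();
$pdo->exec('UPDATE counter SET hits = hits + 1 WHERE id = 1');
$pdo->commit();
echo $pdo->query('SELECT hits FROM counter WHERE id = 1')->fetchColumn();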
Open the file, lock the file (VERY important), read the number currently in there, add 1 to the number, write number back to file, release the lock and close the file.
ie. something like :
$fp = fopen("count.txt", "r+");
if (flock($fp, LOCK_EX)) { // do an exclusive lock
$num = fread($fp, 10);
$num++;
fseek($fp, 0);
fwrite($fp, $num);
flock($fp, LOCK_UN); // release the lock
} else {
// handle error
}
fclose($fp);
should work (not tested).
Generally this is quite easy:
$count = (int)file_get_contents('/path/to/Count.txt');
file_put_contents('/path/to/Count.txt', $count + 1, LOCK_EX); // use $count + 1, not $count++, which would write the old value
But you'll run into concurrency problems using this code. One way to generate a lock safe from any race condition is:
$countFile = '/path/to/Count.txt';
$countTemp = tempnam(dirname($countFile), basename($countFile));
$countLock = $countFile . '.lock';
$f_lock = fopen($countLock, 'w');
if (flock($f_lock, LOCK_EX)) {
    $currentCount = (int)file_get_contents($countFile);
    $f_temp = fopen($countTemp, 'w');
    if (flock($f_temp, LOCK_EX)) {
        fwrite($f_temp, $currentCount + 1); // $currentCount++ would write the old value
        flock($f_temp, LOCK_UN);
        fclose($f_temp);
        if (!rename($countTemp, $countFile)) {
            unlink($countTemp);
        }
    }
    flock($f_lock, LOCK_UN);
    fclose($f_lock);
}
I have a separate PHP script file that saves a number of CSV values to a file via an HTML contact form.
I would like a maximum of 2 duplicate rows based on the mobile phone entries in the CSV file;
any more and I would want the current record deleted.
I am using the $_GET function (not $_POST) to record all entries, and then save to file.
I'm just having issues with deleting duplicates if the mobile number is already in the file TWICE.
Any help would be greatly appreciated.
**ADDED MORE CODE AND COMMENT BELOW**
I have edited the code, but I am still running into trouble with removing duplicates, let alone checking for 2 dupes first. I will do the sanitizing and better code 'after' I have something functional (help!).
Thanks again for your help :)
<?php
$filename = "input.csv";
$csv_output .= "\n";$title=$_GET[title];$fname=$_GET[fname];
$sname=$_GET[sname];$notes=$_GET[notes];$mobile=$_GET[mobile];
$string="$title,$fname,$sname,$mobile,$notes,$csv_output";
$file = fopen($filename, "c");
// see details on the 'c' mode here http://us3.php.net/manual/en/function.fopen.php - it will create a file if it does not exist.
// Now acquire an exclusive via flock() - http://us3.php.net/manual/en/function.flock.php
flock($file, LOCK_EX); // this will block till some other reader/writer has released the lock.
$stat = fstat($file)
if($stat['size'] == 0) {
// file created for the first time
fwrite($file, "Title,First Name,Last Name,MobileNumber,Notes\n$string");
flock($file, LOCK_UN);
fclose($file);
return;
}
// File not empty - scan thru line by line via fgets(), and detect duplicates
// If a duplicate is detected, just flock($file, LOCK_UN), close the file and return - ///// no need to fwrite the line.
while (($buffer = fgets($file, 2188)) !== false) {
if(!stripos($buffer, ",$mobile,") {
$mobile .= $buffer;
}
else {
flock($file, LOCK_UN);
fclose($file);
return;
}
}
?>
Are you running this on a Linux/Unix system? If so, the way you have accessed the file will lead to race conditions and possible corruption of the file.
You need to ensure that the write to the file is done in a serialized manner if multiple processes are attempting to write to the same file.
As you don't want to explore other alternatives like a db (even key-value file-based dbs), a pseudo-code approach is:
$file = fopen($filename, "c+"); // 'c+' creates the file if it does not exist and also allows reading - see http://us3.php.net/manual/en/function.fopen.php
// Now acquire an exclusive lock via flock() - http://us3.php.net/manual/en/function.flock.php
flock($file, LOCK_EX); // this will block till some other reader/writer has released the lock.
$stat = fstat($file);
if ($stat['size'] == 0)
{
    // file created for the first time
    fwrite($file, "Title,First Name,Last Name,MobileNumber,Notes\n$string");
    flock($file, LOCK_UN);
    fclose($file);
    return;
}
// File not empty - scan thru line by line via fgets(), and detect duplicates
// If a duplicate is detected, just flock($file, LOCK_UN), close the file and return - no need to fwrite the line.
// Otherwise fwrite() the line
.
.
flock($file, LOCK_UN);
fclose($file);
You can fill in the details in the middle part - hope you got the gist of it.
You could potentially make it more 'scalable' by initially grabbing a read lock (this will allow multiple readers to run concurrently, and only the writer will block). Once the read portion is done, you need to release the lock, and if a write needs to be done (i.e. no duplicates detected), then grab a writer lock etc...
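A rough sketch of that read-then-write idea follows; has_duplicate() is a hypothetical helper that scans the open file for the mobile number:
$file = fopen($filename, 'c+');
flock($file, LOCK_SH);                    // shared lock: readers may run concurrently
$duplicate = has_duplicate($file, $mobile);
flock($file, LOCK_UN);
if (!$duplicate) {
    flock($file, LOCK_EX);                // exclusive lock for the write
    rewind($file);
    if (!has_duplicate($file, $mobile)) { // re-check: a writer may have won the race between locks
        fseek($file, 0, SEEK_END);
        fwrite($file, $string);
    }
    flock($file, LOCK_UN);
}
fclose($file);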
Clearly this is not an ideal solution, but if your file contents are going to be small, it may suffice.
Stating the obvious, you would need to do better error handling with all file-based operations.
A tangential point: you should also sanitize the data from $_GET before going into the core logic, to catch invalid inputs.
Hope this helps.
I want to have a PHP file that is used as a counter. It will a) echo the current value of a .txt file, and b) increment that file using an exclusive lock so no other scripts can read or write to it while it's being used.
User A will write and increment this number while User B requests to read the file. Is it possible for User A to lock this file so no one can read or write to it until User A's write is finished?
I've used flock in the past, but I'm not sure how to get the script to wait until the file is available, rather than quitting if it's already locked.
My goal is:
LOCK counter.txt; write to counter.txt;
while at the same time
Read counter.txt; realize it's locked so wait until that lock is finished.
$fp = fopen("counter.txt", 'c+'); // 'w+' would truncate the counter before the lock is even taken
if (flock($fp, LOCK_EX)) {
    $counter = (int)fread($fp, 32); // read the current value
    ftruncate($fp, 0);
    rewind($fp);
    fwrite($fp, (string)($counter + 1));
    flock($fp, LOCK_UN);
} else {
    // try again??
}
fclose($fp);
From the documentation: "By default, this function will block until the requested lock is acquired"
So simply use flock in your reader (LOCK_SH) and writer (LOCK_EX), and it is going to work.
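A minimal reader/writer pair along those lines (the file name is assumed):
// Writer: LOCK_EX blocks until all readers and writers have released
$fp = fopen('counter.txt', 'c+');
if (flock($fp, LOCK_EX)) {
    $n = (int)stream_get_contents($fp); // read the current value
    ftruncate($fp, 0);                  // clear the old contents
    rewind($fp);
    fwrite($fp, (string)($n + 1));
    flock($fp, LOCK_UN);
}
fclose($fp);

// Reader: LOCK_SH waits for any writer, but readers don't block each other
$fp = fopen('counter.txt', 'r');
if (flock($fp, LOCK_SH)) {
    echo stream_get_contents($fp);
    flock($fp, LOCK_UN);
}
fclose($fp);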
However, I highly discourage use of blocking flock without a timeout, as this means that if something goes wrong your program is going to hang forever. To avoid this, use a non-blocking request like this (again, it is in the docs):
/* Activate the LOCK_NB option on a LOCK_EX operation */
if (!flock($fp, LOCK_EX | LOCK_NB)) {
    echo 'Unable to obtain lock';
}
And wrap it in a loop, with a sleep, breaking after n failed tries (or after a total wait time).
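A minimal sketch of that retry loop (the try count and delay here are arbitrary):
$maxTries = 10;
$gotLock = false;
for ($try = 0; $try < $maxTries; $try++) {
    if (flock($fp, LOCK_EX | LOCK_NB)) {
        $gotLock = true;
        break;
    }
    usleep(100000); // wait 100 ms before the next attempt
}
if (!$gotLock) {
    // still locked after ~1 second - give up gracefully
}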
EDIT: You can also look for some examples of usage here. This class is a part of ninja-mutex library in which you may be interested too.
I am trying to save tokens to a text file using this code, but after 2 KB the file mysteriously empties and I lose all the data. Why does this happen? How do I prevent it?
$fh = fopen('token.txt', 'a+');
fwrite($fh, $access_token . "\n");
fclose($fh);
As per comments elsewhere, using files to store data from multiple concurrent processes is a recipe for failure. You can minimise the damage (at the risk of introducing deadlocks and race conditions) by ensuring you get a valid flock() on the file before attempting to read from or write to it.
$fh = fopen('token.txt', 'a');
if (flock($fh, LOCK_EX)) {
    fwrite($fh, $access_token . "\n");
    fflush($fh);
    flock($fh, LOCK_UN);
} else {
    trigger_error("failed to lock file");
}
fclose($fh);
If you're just logging then use the syslog facility. If you're performing the full set of CRUD operations, then use a DBMS.
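For the logging case, PHP's built-in syslog functions hand the concurrency problem to the system logger; a minimal sketch (the identifier string is arbitrary):
openlog('tokenapp', LOG_PID, LOG_USER); // 'tokenapp' tags these entries in the system log
syslog(LOG_INFO, 'token: ' . $access_token);
closelog();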
Some ideas:
Multiple invocations of your PHP page are stomping on each other. If two processes/threads open the same file for append simultaneously, I would not be surprised if the result is an empty file.
Change the open mode from a+ to a. From your code, it appears you only need to write, not read/write.
Check your filesystem available space (df -h) and your user's disk quota (quota -h). Are you out of space?
Ok, I know there have been similar questions on this site about this problem, but none of those questions and their answers is exactly what I need.
I'm building flat-file based CMS.
What if, for example:
2, 3, 10... fwrite requests in append mode come to the same PHP file "contact_form_messages_container.php" at the same time?
2, 3, 10... fwrite requests in "w" mode come to the same PHP file, which simply holds the number of visits to a specific page, again at the same time?
I know about the flock() function, but two or more flock() requests could come at the same time... Does anyone know a solution to this problem? The only thing I have in mind is usleep()-ing the script in a while loop for some amount of time until the target file becomes available, but I have no idea whether that works, or where and how to do it.
Does anyone have practical experience with this issue?
Thanks in advance!
The flock() function is designed to handle multiple concurrent readers and writers for file operations; by default flock() may suspend a process until a compatible lock can be obtained (i.e. shared or exclusive). Once obtained, a lock can later be released to allow other processes to operate on the file; locks are released implicitly when the file is closed or the process ends.
Unless your files are on NFS, I highly doubt you will ever run into a situation whereby two conflicting locks would be given simultaneously.
The following illustrates a basic example of using flock():
// open the file (take care to not use "w" mode)
$f = fopen('file.txt', 'r+');
// obtain an exclusive lock (may suspend the process)
if (flock($f, LOCK_EX)) {
    // this process now holds the only exclusive lock
    // make changes to the file
    // release the lock
    flock($f, LOCK_UN);
}
// don't perform any write operation on $f here
fclose($f);
Using the LOCK_NB flag together with LOCK_EX or LOCK_SH will prevent the process from being suspended; if the call returns false, a third parameter can be used to determine whether the process would have been suspended (not supported on Windows).
if (false === flock($f, LOCK_EX | LOCK_NB, $wouldblock)) {
    if ($wouldblock) {
        // the lock could not be obtained without suspending the process
    } else {
        // the lock could not be obtained due to an error
    }
}
Fork the writing operation and put it into a while loop that sleeps while the file is locked. As long as your application doesn't depend on file system calls happening chronologically, it should work.
This does, however, open the door to race conditions if your application depends on writing operations happening in order.
I had a newcomer (the next-door teenager) write some PHP code to track some usage on my web site. I'm not familiar with PHP, so I'm asking a bit about concurrent file access.
My native app (on Windows), occasionally logs some data to my site by hitting the URL that contains my php script. The native app does not examine the returned data.
$fh = fopen($updateFile, 'a') or die("can't open file");
fwrite($fh, $ip);
fwrite($fh, ', ');
fwrite($fh, $date);
fwrite($fh, ', ');
fwrite($fh, implode(', ', $_GET));
fwrite($fh, "\r\n");
fclose($fh);
This is a low-traffic site, and the data is not critical. But what happens if two users collide and two instances of the script each try to add a line to the file? Is there any implicit file locking in PHP?
Is the code above at least safe from locking up and never returning control to my user? Can the file get corrupted? If I have the script above delete the file every month, what happens if another instance of the script is in the middle of writing to the file?
You should put a lock on the file:
$fp = fopen($updateFile, 'a'); // 'w+' would truncate your existing log
if (flock($fp, LOCK_EX)) {
    fwrite($fp, $line); // $line is the record to append
    flock($fp, LOCK_UN);
} else {
    echo 'can\'t lock';
}
fclose($fp);
For the record, I worked in a library that does that:
https://github.com/EFTEC/DocumentStoreOne
It allows you to CRUD documents by locking the file. I tried 100 concurrent users (100 calls to the PHP script at the same time) and it works.
However, it doesn't use flock but mkdir:
// spin until the lock directory can be created (mkdir fails while it exists)
while (!@mkdir("file.lock")) {
    usleep(10000); // another process holds the lock - wait and retry
}
// the lock is now held - use the file
$fp = fopen("file", "r+");
// ... work with the file ...
fclose($fp);
@rmdir("file.lock"); // release the lock
Why?
mkdir is atomic, so the lock is atomic: In a single step, you lock or you don't.
It's faster than flock(). Apparently flock requires several calls to the file system.
flock() depends on the system.
I did a stress test and it worked.
Since this is an append to the file, the best way would be to aggregate the data and write it to the file in one fwrite(), provided the data to be written is not bigger than the file buffer. Of course you don't always know the size of the buffer, so flock() is always a good option.
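Applied to the logging snippet from the earlier question, that could look like:
// build the whole record first, then append it with a single fwrite()
$line = $ip . ', ' . $date . ', ' . implode(', ', $_GET) . "\r\n";
$fh = fopen($updateFile, 'a');
if ($fh && flock($fh, LOCK_EX)) {
    fwrite($fh, $line); // one write instead of seven small ones
    flock($fh, LOCK_UN);
}
if ($fh) {
    fclose($fh);
}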