I am creating an under-construction page where users can leave their email to be notified when the site launches.
I need to add those emails to a text file called list.txt.
My question has two parts:
How can I add user#example.com to the text file?
And how can I later delete a certain email from the text file?
Thanks for the help.
You'd be better off using a database, because these operations can step on each other... but:
Add:
$fp = fopen("list.txt", "a"); // "a" is for append
fputs($fp, "user#example.com" . "\n");
fclose($fp);
Remove:
$file = file_get_contents("list.txt");
$fp = fopen("list.txt", "w"); // "w" truncates the file, so there is no need to unlink() first
$lines = explode("\n", $file); // split() is deprecated/removed; use explode()
foreach ($lines as $email) { // each() is deprecated/removed; use foreach
    if ($email != "" && $email != "user#example.com") fputs($fp, $email . "\n");
}
fclose($fp);
Again... highly recommended to use a database... this is not optimal.
As for saving: you can fopen() the file in append mode and just fwrite() to it. As for deleting a certain email: you'll have to load the whole file as a string, remove the address, and save the result back to the file (effectively replacing the entire contents). Without some elaborate locking mechanism, a race condition can occur when saving the file, causing you to lose the latest signup(s).
I would recommend a simple drop-in sqlite database (or another database if you already have one in production), so you can easily save & delete certain emails, and locking / avoiding race conditions is done automatically for you. If you still need a text file for some other purpose, export the subscription list to that file before using it.
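To make that concrete, here is a minimal sketch of what the SQLite route could look like using PDO (the subscribers table, the database path, and the final export step are illustrative assumptions, not a drop-in implementation):

<?php
// Minimal sketch of the SQLite alternative; table name and paths are
// illustrative assumptions.
$db = new PDO('sqlite:' . __DIR__ . '/subscribers.db');
$db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$db->exec('CREATE TABLE IF NOT EXISTS subscribers (email TEXT PRIMARY KEY)');

// Add an email; INSERT OR IGNORE silently skips duplicate signups.
$stmt = $db->prepare('INSERT OR IGNORE INTO subscribers (email) VALUES (:email)');
$stmt->execute([':email' => 'user#example.com']);

// Remove an email.
$stmt = $db->prepare('DELETE FROM subscribers WHERE email = :email');
$stmt->execute([':email' => 'user#example.com']);

// Export the list to list.txt if the text file is still needed elsewhere.
$emails = $db->query('SELECT email FROM subscribers')->fetchAll(PDO::FETCH_COLUMN);
file_put_contents('list.txt', implode("\n", $emails) . "\n", LOCK_EX);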
I have a small web-page that delivers different content to a user based on a %3 (modulo 3) of a counter. The counter is read in from a file with php, at which point the counter is incremented and written back into the file over the old value.
I am trying to get an even distribution of the content in question which is why I have this in place.
I am worried that if two users access the page at a similar time then they could either both be served the same data or that one might fail to increment the counter since the other is currently writing to the file.
I am fairly new to web dev, so I am not sure how to approach this without mutexes. I considered having only one open and close and doing all of the operations inside of it, but I am trying to minimize the time during which a user could fail to access the file (hence why the read and write are in separate opens).
What would be the best way to implement a sort of mutual exclusion so that only one person will access the file at a time and create a queue for access if multiple overlapping requests for the file come in? The primary goal is to preserve the ratio of the content that is being shown to users which involves keeping the counter consistent.
The code is as follows :
<?php
session_start();
$counterName = "<path/to/file>";

// Read the current counter value...
$file = fopen($counterName, "r");
$currCount = fread($file, filesize($counterName));
fclose($file);

// ...then increment it and write it back over the old value.
$newCount = $currCount + 1;
$file = fopen($counterName, 'w');
if (fwrite($file, $newCount) === FALSE) {
    echo "Could not write to the file";
}
fclose($file);
?>
Just in case anyone finds themselves with the same issue: I was able to fix the problem by adding flock($file, LOCK_EX | LOCK_NB) before writing to the file, as per the documentation for PHP's flock() function. I have read that it is not the most reliable approach, but for what I am doing it was enough.
Documentation here for convenience.
https://www.php.net/manual/en/function.flock.php
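For illustration, a minimal sketch of the whole read-increment-write done under one lock (assumptions: a blocking LOCK_EX rather than the LOCK_NB used above, and a single handle opened in c+ mode so the read and the write happen under the same lock):

<?php
// Sketch: the counter update performed under a single exclusive lock, so
// the read-increment-write sequence cannot interleave between requests.
$counterName = "<path/to/file>";

$file = fopen($counterName, "c+"); // c+: read/write, does not truncate on open
if (flock($file, LOCK_EX)) {       // blocks until the lock is granted
    $currCount = (int) stream_get_contents($file);
    $newCount = $currCount + 1;

    ftruncate($file, 0);           // wipe the old value
    rewind($file);
    fwrite($file, (string) $newCount);

    fflush($file);                 // flush output before releasing the lock
    flock($file, LOCK_UN);
}
fclose($file);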
I want to create a registration system on my site where only limited users will be able to create their account. I want to use a .txt file for storing usernames and passwords.
I have the following code so far :
$uname = $_POST['usr'];
$pass = $_POST['pwd'];
if (empty($_POST["ok"])) {
    echo "Could not insert data!";
} else {
    $file = fopen("user.txt", "w");
    echo fwrite($file, $uname);
    fclose($file);
}
This receives the user data from a form and puts it in user.txt file.
My problem is that when new data is inserted into the txt file, the old data gets deleted.
I want to keep the data in txt file like
foo:12345~bar:1111
The username and password are separated by : and each new user is separated by ~; later I will use a regex to get the data from the txt file.
How can I correct my code to keep both the new and the old data?
You need to open the file in append mode:
http://php.net/manual/en/function.fopen.php
<?php
$uname = $_POST['usr'];
$pass = $_POST['pwd'];
if (empty($_POST["ok"])) {
    echo "Could not insert data!";
} else {
    $file = fopen("user.txt", "a"); // "a" appends instead of overwriting
    $str = $uname . ":" . $pass . "~"; // build the username:password~ record
    echo fwrite($file, $str);
    fclose($file);
}
If we want to add on to a file, we need to open it in append mode.
So you need to change from write-only mode to append mode:
$file=fopen("user.txt","a");
To answer your question: you have to explicitly pass the $mode argument to fopen() as 'a'.
However, it looks like a bad idea to use plain files for this task, mainly because of concurrent-write troubles.
This is really a bad choice: there are a lot of drawbacks for security, for read/write times, for concurrent requests and a lot more.
Using a database isn't difficult, so my suggestion is to use one.
Anyway, your question has already been asked here: php create or write/append in text file
Simple way to append to a file:
file_put_contents("C:/file.txt", "this is a text line" . PHP_EOL, FILE_APPEND | LOCK_EX);
I have the following function to write to a log file:
function writelog($type, $text){
global $root;
$userid=$_SESSION['userid'];
$tdate=date('d/m/Y H:i:s');
$file = $root.'/logs/log.txt';
$input=$userid."^".$type."^".$tdate."^".$text."*";
file_put_contents($file, $input, FILE_APPEND);
}
This function is used to add entries to a log whenever an admin of my database does a certain activity.
What would happen if two admins perform an action at exactly the same time which writes to the log file? Is it possible that the strings appended to the file would then be jumbled or merged together or would it always be inserted sequentially?
This can cause the file to contain corrupted data due to a race condition. It's not safe to rely on the fact that it only appends the data.
Instead, you should ensure that there is no concurrent access to the file. Fortunately, file_put_contents() has a special flag for this, named LOCK_EX:
Acquire an exclusive lock on the file while proceeding to the writing.
You can use it like this:
file_put_contents($file, $input, FILE_APPEND | LOCK_EX);
Just make sure that your filesystem supports proper locking mechanisms. For example, I had trouble with my hosting company at some point due to the fact that they used NFS, whose support for exclusive locks is lacking.
For a project I was working on, I need a queue which will be too large to hold in normal memory. I had been implementing it as a simple file: it would read the whole file, take the first few (~100) lines, process them, then write back the updated queue with new instructions added and the old ones removed. However, the queue became too large to hold in memory like this, so I need something different.

Preferably, someone can tell me a way to peel off just the first few lines of a file without having to look at the rest of the data. I had thought about using a database (MySQL, probably, with sorted insert timestamps), but I would heavily prefer to do it without one for load and bandwidth reasons (several servers would have to all be sending and receiving a lot of data from the DB).

The language I'm working in is PHP, but really this question is more about Unix files, I suppose. Any help would be appreciated.
Sucking out the first line of a file is pretty trivial (fopen() followed by an fgets()). Re-writing the file to remove completed jobs would be very painful, especially if you've got multiple concurrent servers working off the same queue file.
One alternative would be to use a separate file for each job. If you have some concurrency-safe method of generating an incrementing ID for these files, then it'd be a simple matter of picking out the file with the lowest ID for the oldest job, and generating a new ID for each new job. You'd have to figure out some file locking, though, to keep two or more servers from grabbing the same file at the same time; a sketch of the idea follows.
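A rough sketch of that file-per-job idea (the timestamp-based naming scheme and the flock()-based claim step are assumptions, not a tested design):

<?php
// Rough sketch of a file-per-job queue. Each job is one file in $queueDir;
// the oldest job has the lexically smallest name.
$queueDir = '/path/to/queue';

// Enqueue: a microtime prefix plus uniqid() gives names that sort in
// arrival order and rarely collide; a real system would want a stronger ID.
function enqueueJob($queueDir, $payload) {
    $name = sprintf('%s/%.6f-%s.job', $queueDir, microtime(true), uniqid('', true));
    file_put_contents($name, $payload, LOCK_EX);
}

// Dequeue: take the oldest job file. A non-blocking exclusive lock acts as
// the "claim"; if another server got there first, move on to the next file.
function dequeueJob($queueDir) {
    foreach (glob($queueDir . '/*.job') as $path) { // glob() sorts by name
        $fh = fopen($path, 'r');
        if ($fh && flock($fh, LOCK_EX | LOCK_NB)) {
            $payload = stream_get_contents($fh);
            unlink($path); // remove the job while still holding the lock (POSIX)
            flock($fh, LOCK_UN);
            fclose($fh);
            return $payload;
        }
        if ($fh) {
            fclose($fh);
        }
    }
    return null; // queue empty, or every job is currently claimed
}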
I had the same problem while working on the enqueue/fs transport. I could not find a way to modify a small portion at the beginning of the file without copying it into memory and saving it back. It is, however, possible to do that at the end of the file: you can read a portion and then truncate it. That's not really a queue but a stack, so if you rely on message ordering this would not be a solution. In my case, I lock the file before reading; once the message has been read, the lock is released.
This is how you could write messages to a queue file:
<?php
$rawMessage = 'this your message to put to the queue as a string';

$queueFile = fopen('/path/to/queue/file', 'a+'); // 'a+': open for reading and appending

// Pad the message with spaces so its length is a multiple of the frame
// size (64 bytes); that makes it easier to read messages back out.
$rawMessage = str_repeat(' ', 64 - (strlen($rawMessage) % 64)) . $rawMessage;

flock($queueFile, LOCK_EX); // lock file
fwrite($queueFile, $rawMessage);
flock($queueFile, LOCK_UN); // release lock
fclose($queueFile);
This is how you could read messages from a queue file:
<?php
$queueFile = fopen('/path/to/queue/file', 'c+'); // 'c+': open for reading and writing

flock($queueFile, LOCK_EX); // lock file

// Read the newest message from the end of the file, then cut it off.
$frame = readFrame($queueFile, 1);
ftruncate($queueFile, fstat($queueFile)['size'] - strlen($frame));
rewind($queueFile);

$rawMessage = substr(trim($frame), 1);

flock($queueFile, LOCK_UN); // release lock
fclose($queueFile);
function readFrame($file, $frameNumber)
{
    $frameSize = 64;
    $offset = $frameNumber * $frameSize;

    // Seek backwards from the end of the file, one 64-byte frame at a time.
    fseek($file, -$offset, SEEK_END);
    $frame = fread($file, $frameSize);
    if ('' == $frame) {
        return '';
    }

    // '|{' marks the start of a message; if this frame doesn't contain it,
    // keep reading frames backwards and prepend them.
    if (false !== strpos($frame, '|{')) {
        return $frame;
    }

    return readFrame($file, $frameNumber + 1) . $frame;
}
For the locking I'd suggest using the Symfony LockHandler, or simply using enqueue/fs.
What's the cleanest way in php to open a file, read the contents, and subsequently overwrite the file's contents with some output based on the original contents? Specifically, I'm trying to open a file populated with a list of items (separated by newlines), process/add items to the list, remove the oldest N entries from the list, and finally write the list back into the file.
fopen(<path>, 'a+')
flock(<handle>, LOCK_EX)
fread(<handle>, filesize(<path>))
// process contents and remove old entries
fwrite(<handle>, <contents>)
flock(<handle>, LOCK_UN)
fclose(<handle>)
Note that I need to lock the file with flock() in order to protect it across multiple page requests. Will the 'w+' flag when fopen()ing do the trick? The php manual states that it will truncate the file to zero length, so it seems that may prevent me from reading the file's current contents.
If the file isn't overly large (that is, you can be confident loading it won't blow PHP's memory limit), then the easiest way to go is to just read the entire file into a string (file_get_contents()), process the string, and write the result back to the file (file_put_contents()). This approach has two problems:
If the file is too large (say, tens or hundreds of megabytes), or the processing is memory-hungry, you're going to run out of memory (even more so when you have multiple instances of the thing running).
The operation is destructive; if the saving fails halfway through, you lose all your original data.
If any of these is a concern, plan B is to process the file and at the same time write to a temporary file; after successful completion, close both files, rename (or delete) the original file and then rename the temporary file to the original filename.
Read
$data = file_get_contents($filename);
Write
file_put_contents($filename, $data);
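A sketch of the temp-file variant described above (process_line() is a hypothetical stand-in for whatever per-line processing you need; concurrent writers would still need the flock() coordination discussed in the question):

<?php
// Process a file line by line into a temp file, then atomically swap it in.
$path = '/path/to/data/file';
$tmp  = $path . '.tmp';

$in  = fopen($path, 'r');
$out = fopen($tmp, 'w');

while (($line = fgets($in)) !== false) {
    fwrite($out, process_line($line)); // hypothetical transformation
}

fclose($in);
fclose($out);

// rename() over the original is atomic on the same filesystem, so readers
// see either the old contents or the new, never a half-written file.
rename($tmp, $path);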
One solution is to use a separate lock file to control access.
This solution assumes that only your script, or scripts you have access to, will want to write to the file. This is because the scripts will need to know to check a separate file for access.
$file_lock = obtain_file_lock();
if ($file_lock) {
$old_information = file_get_contents('/path/to/main/file');
$new_information = update_information_somehow($old_information);
file_put_contents('/path/to/main/file', $new_information);
release_file_lock($file_lock);
}
function obtain_file_lock() {
    $attempts = 10;
    // There are probably better ways of dealing with waiting for a file
    // lock but this shows the principle of dealing with the original
    // question.
    for ($ii = 0; $ii < $attempts; $ii++) {
        $lock_file = fopen('/path/to/lock/file', 'r'); // only need read access; the lock file must already exist
        // LOCK_NB makes the attempt non-blocking so the loop can retry.
        if (flock($lock_file, LOCK_EX | LOCK_NB)) {
            return $lock_file;
        } else {
            // give the other process time to release the lock
            usleep(100000); // 0.1 seconds
        }
    }
    // This is only reached if all attempts fail.
    return false; // error handling for that eventuality goes here
}
function release_file_lock($lock_file) {
flock($lock_file, LOCK_UN);
fclose($lock_file);
}
This should prevent a concurrently running script from reading the old information and updating it, which would cause you to lose information that another script wrote after you read the file. It will allow only one instance of the script to read the file and then overwrite it with updated information.
While this hopefully answers the original question, it doesn't give a good solution to making sure all concurrent scripts have the ability to record their information eventually.