Ok so I have a .txt file with a bunch of URLs and a script that picks one of the lines at random. I then include this in another page.
However, I want the URL to change every 15 minutes, so I'm guessing I'll need a cron job; I'm just not sure how to put it all in place.
I found that if you include the file, it still gives a random output on every load, so I'm guessing that if I run the cron alongside the include it's going to get messy.
So what I'm thinking is: I have a script that randomly selects a URL from my initial text file, saves it to another .txt file, and I include that second file on the final page.
I just found this, which is sort of in the right direction:
Include php code within echo from a random text
I'm not the best at writing PHP (though I can understand it perfectly), so all help is appreciated!
So what I'm thinking is I have a script that randomly selects a url from my initial text file then it saves it to another .txt file and I include that file on the final page.
That's pretty much what I would do.
To re-generate that file, though, you don't necessarily need a cron.
You could use the following idea :
If the file has been modified less than 15 minutes ago (which you can find out using filemtime() and comparing it with time()),
    then use what's in the file;
else
    re-generate the file, randomly choosing one URL from the big file,
    and use the newly generated file.
This way there is no need for a cron: the first user who arrives more than 15 minutes after the previous modification of the file will re-generate it, with a new URL.
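A minimal sketch of that idea (the file names urls.txt and current_url.txt are placeholders; the demo setup line just creates a small sample list so the snippet runs on its own):

```php
<?php
$source = 'urls.txt';        // one URL per line
$cache  = 'current_url.txt'; // holds the currently selected URL

// Demo setup: a small sample list (in practice urls.txt already exists)
file_put_contents($source, "http://example.com/a\nhttp://example.com/b\n");

// Re-generate the cache if it is missing or older than 15 minutes
if (!file_exists($cache) || time() - filemtime($cache) > 15 * 60) {
    $urls = array_values(array_filter(array_map('trim', file($source))));
    file_put_contents($cache, $urls[array_rand($urls)]);
}

// The page that is included simply echoes the cached URL
echo file_get_contents($cache);
```

The cache file's own modification time acts as the "timer", so no cron entry or extra state is needed.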
Alright so I sorta solved my own question:
<?php
// load the file that contains the URLs, one per line
$adfile = "urls.txt";
$ads = array();
$fh = fopen($adfile, "r");
while (!feof($fh)) {
    $line = trim(fgets($fh, 10240));
    if ($line != "") {
        $ads[] = $line;
    }
}
fclose($fh);

// randomly pick a URL and write it out
$num = count($ads);
$idx = rand(0, $num - 1);
$f = fopen("output.txt", "w");
fwrite($f, $ads[$idx]);
fclose($f);
?>
However, is there any way I can delete the chosen line once it has been picked?
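One way to do that (a rough sketch: read the remaining URLs into an array, pick one, then rewrite urls.txt without it; the file names follow the snippet above, and the demo setup line only exists so the sketch runs stand-alone):

```php
<?php
// Demo setup (in practice urls.txt already exists)
file_put_contents('urls.txt', "http://a.example\nhttp://b.example\nhttp://c.example\n");

// Read the non-empty lines and pick one at random
$ads = array_values(array_filter(array_map('trim', file('urls.txt'))));
$idx = array_rand($ads);

// Write the picked URL to output.txt ...
file_put_contents('output.txt', $ads[$idx]);

// ... and rewrite urls.txt without the chosen line
unset($ads[$idx]);
file_put_contents('urls.txt', implode("\n", $ads) . "\n");
```

Note that the source file shrinks on every pick, so you would eventually need to refill it or guard against it becoming empty.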
I have a small web-page that delivers different content to a user based on a %3 (modulo 3) of a counter. The counter is read in from a file with php, at which point the counter is incremented and written back into the file over the old value.
I am trying to get an even distribution of the content in question which is why I have this in place.
I am worried that if two users access the page at a similar time then they could either both be served the same data or that one might fail to increment the counter since the other is currently writing to the file.
I am fairly new to web dev, so I am not sure how to approach this without mutexes. I considered doing all of the operations inside a single open/close, but I am trying to minimize the time during which a user could fail to access the file (hence why the read and the write are in separate opens).
What would be the best way to implement a sort of mutual exclusion so that only one person will access the file at a time and create a queue for access if multiple overlapping requests for the file come in? The primary goal is to preserve the ratio of the content that is being shown to users which involves keeping the counter consistent.
The code is as follows :
<?php
session_start();
$counterName = "<path/to/file>";

$file = fopen($counterName, "r");
$currCount = fread($file, filesize($counterName));
fclose($file);

$newCount = $currCount + 1;
$file = fopen($counterName, 'w');
if (fwrite($file, $newCount) === FALSE) {
    echo "Could not write to the file";
}
fclose($file);
?>
Just in case anyone finds themselves with the same issue: I was able to fix the problem by adding flock($fp, LOCK_EX | LOCK_NB) before writing to the file, as per the documentation for PHP's flock() function. I have read that it is not the most reliable, but for what I am doing it was enough.
Documentation here for convenience.
https://www.php.net/manual/en/function.flock.php
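For completeness, a sketch of the same counter done with a single read-write handle and a blocking flock() around the whole read-increment-write cycle, which also removes the unprotected gap between the separate read and write opens (the file name counter.txt and the demo setup line are illustrative only):

```php
<?php
$counterName = 'counter.txt';

// Demo setup (in practice the file already holds the counter)
if (!file_exists($counterName)) {
    file_put_contents($counterName, "0");
}

// Open once in read+write mode and hold an exclusive lock
// for the whole read-increment-write cycle.
$file = fopen($counterName, 'r+');
if (flock($file, LOCK_EX)) {              // blocks until the lock is free
    $currCount = (int) stream_get_contents($file);
    rewind($file);
    ftruncate($file, 0);                  // discard the old value
    fwrite($file, (string) ($currCount + 1));
    fflush($file);                        // push the write out before unlocking
    flock($file, LOCK_UN);
}
fclose($file);
```

Because every request takes the same exclusive lock, concurrent requests queue up instead of reading the same value or clobbering each other's write.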
I need to remove various useless log rows from a huge log file (200 MB)
/usr/local/cpanel/logs/error_log
The useless log rows are in the array $useless.
The way I am doing it is:
$working_log = "/usr/local/cpanel/logs/error_log";
foreach ($useless as $row) {
    if ($row != "") {
        file_put_contents($working_log,
            str_replace($row, "", file_get_contents($working_log)));
    }
}
I need to remove about 65,000 rows from the log file; the code above does the job, but it works slowly: about 0.041 s to remove each row.
Do you know a faster way to do this job using PHP?
If the file can be loaded into memory twice (and it seems it can, if your code works), then you can remove all the strings in $useless with a single str_replace() call.
The documentation of str_replace() function explains how:
If search is an array and replace is a string, then this replacement string is used for every value of search.
$working_log="/usr/local/cpanel/logs/error_log";
file_put_contents(
$working_log,
str_replace($useless, '', file_get_contents($working_log))
);
When the file becomes too large to be processed by the code above you have to take a different approach: create a temporary file, read each line from the input file and write it to the temporary file or ignore it. At the end, move the temporary file over the source file:
$working_log="/usr/local/cpanel/logs/error_log";
$tempfile = "/usr/local/cpanel/logs/error_log.new";
$fin = fopen($working_log, "r");
$fout = fopen($tempfile, "w");
while (!feof($fin)) {
    $line = fgets($fin);
    if (!in_array($line, $useless)) {
        fputs($fout, $line);
    }
}
fclose($fin);
fclose($fout);
// Move the current log out of the way (keep it as backup)
rename($working_log, $working_log.".bak");
// Put the new file instead.
rename($tempfile, $working_log);
You have to add error handling (fopen(), fputs() may fail for various reasons) and code or human intervention to remove the backup file.
I have a log file (max size 50 MB) to which nginx writes a user GET request, with its parameters, every second.
I have a cron PHP script that starts every minute; it should read the next part of the log file, calculate data, and insert the data into a statistics MySQL database.
What is the best way to read log file part by part every minute?
You can read the log file in the regular way, something like this:
$myFile = "log.txt";
$lines = file($myFile);
$last_line = 0;
if (file_exists('log-last-line.txt')) {
    $last_line = (int) file_get_contents('log-last-line.txt');
}
$i = 0;
foreach ($lines as $line) {
    $i++;
    if ($i > $last_line) {
        // this line is new, save it in the database
    }
}
file_put_contents('log-last-line.txt', count($lines));
On each run you write the number of the last processed line to "log-last-line.txt".
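Re-reading the whole file every minute gets expensive as it approaches 50 MB. A common alternative (a sketch under the assumption that a small state file, here called log-offset.txt, is acceptable) is to remember the byte offset with ftell() and fseek() to it on the next run:

```php
<?php
// Demo setup (in practice log.txt is the nginx access log)
file_put_contents('log.txt', "GET /a\nGET /b\n");

$offsetFile = 'log-offset.txt';
$offset = file_exists($offsetFile) ? (int) file_get_contents($offsetFile) : 0;

// If the log was rotated or truncated, start over from the beginning
if ($offset > filesize('log.txt')) {
    $offset = 0;
}

$fh = fopen('log.txt', 'r');
fseek($fh, $offset);
while (($line = fgets($fh)) !== false) {
    // each $line here is new since the last run:
    // calculate the data and insert it into the statistics database
}
file_put_contents($offsetFile, ftell($fh));
fclose($fh);
```

This way each cron run only reads the bytes appended since the previous run, regardless of how large the file has grown.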
I have a script that rewrites a file every few hours. This file is inserted into end users' HTML via a PHP include.
How can I check whether my script, at this exact moment, is working on (e.g. rewriting) the file while it is being called up for display to a user? Is it even an issue? In terms of what happens if they access the file at the same time: what are the odds, and will the user just have to wait until the script has finished its work?
Thanks in advance!
More on the subject...
Is this a way forward, using file_put_contents() and LOCK_EX?
When the script saves its data every now and then:
file_put_contents("text", $content, LOCK_EX);
and when user opens the page
if (file_exists("text")) {
    function include_file() {
        $file = fopen("text", "r");
        if (flock($file, LOCK_EX)) {
            include_file();
        } else {
            echo file_get_contents("text");
        }
    }
} else {
    echo 'no such file';
}
Could anyone advice me on the syntax, is this a proper way to call include_file() after condition and how can I limit a number of such calls?
I guess this solution is also good, except for the same recursive call to include_file(); would it even work?
function include_file() {
    $time = time();
    $file = filectime("text");
    if ($file + 1 < $time) {
        echo "good to read";
    } else {
        echo "have to wait";
        include_file();
    }
}
To check whether the file is currently being written, you can use the filectime() function to get the time the file was last changed.
Get the current timestamp at the top of your script and, whenever you need to access the file, compare it with the filectime() of that file. If the file's change time is newer, you are in the scenario where you have to wait for the file to be written, and you can log that in a database or another file.
To prevent this scenario from happening, you can change the script which writes the file so that it first creates a temporary file and, once it's done, replaces (moves or renames) the original file with the temporary one. This action takes far less time than writing the file, which makes the scenario a very rare possibility.
Even if a read and a replace do occur simultaneously, the time the reading script has to wait will be very short.
Depending on the size of the file, this might be a concurrency issue. But you can solve it quite easily: before starting to write the file, create a kind of "lock file", i.e. if your file is named "incfile.php" you might create an "incfile.php.lock". Once you're done writing, remove this file.
On the include side, you can check for the existence of "incfile.php.lock" and wait until it has disappeared; this needs some looping and sleeping in the unlikely case of a concurrent access.
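A sketch of that lock-file idea on the include side (the file names incfile.php and incfile.php.lock are the placeholders from above; the demo setup lines just simulate "no writer is currently active", and the retry limit is an arbitrary choice to avoid waiting forever):

```php
<?php
// Demo setup: the content file exists and no writer is active
file_put_contents('incfile.php', 'current url');
@unlink('incfile.php.lock');

// Wait (with a retry limit) while the writer's lock file exists
$tries = 0;
while (file_exists('incfile.php.lock') && $tries < 50) {
    usleep(100000); // sleep 0.1 s, then check again
    $tries++;
}

$content = file_get_contents('incfile.php');
echo $content;
```

The writer would be the mirror image: touch() the .lock file, write incfile.php, then unlink() the .lock file.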
Basically, you should consider another solution: write the data that is rendered into that file to a database (locks etc. are available there) and render it in a module which then gets included in your page. Solutions like yours are hard to maintain in the long run...
This question is old, but I add this answer because the other answers have no code.
function write_to_file(string $fp, string $string) : bool {
    $timestamp_before_fwrite = time();
    $stream = fopen($fp, "w");
    if ($stream === false) {
        return false;
    }
    fwrite($stream, $string);
    fclose($stream);
    // filemtime() results are cached, so clear the cache first
    clearstatcache(true, $fp);
    $file_last_changed = filemtime($fp);
    if ($file_last_changed < $timestamp_before_fwrite) {
        // File not changed code
        return false;
    }
    return true;
}
This is the function I use to write to a file: it first gets the current timestamp before making changes to the file, and then compares that timestamp to the last time the file was changed.
I have code that runs once daily and appends (via fputs()) a daily log entry to a flat file in the format:
yyyy-mm-dd|log entry
This file is then displayed by a web page that fgets() and displays all records from oldest to newest.
What I need to do is change this write/read process so that:
A. Only the x most recent records are kept in the log file.
B. The output order is reversed with the most recent log entry displayed first.
If the order of the log file can be reversed with the write operation, then the read operation can remain unchanged.
If there is a better way to do this other than fputs and fgets, I am open to it.
Thanks
The best way to do this, I think (although it is not the most memory-efficient way), is this:
function writeLogEntry ($filePath, $str, $maxRecords) {
    $fileData = file($filePath); // Get file contents as array
    array_unshift($fileData, $str); // Add the log entry to the beginning
    if (count($fileData) > $maxRecords) { // Strip old records off
        $fileData = array_slice($fileData, 0, $maxRecords);
    }
    file_put_contents($filePath, $fileData); // Write file again
}
$logEntry = "yyyy-mm-dd|Something happened\n";
writeLogEntry('/path/to/file', $logEntry, 1000);
Using this approach, the file is kept in the order that you want it (newest first). However, if this file might be written to by more than one process at a time, you would need to implement some form of locking to avoid losing data.
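One way to add that locking (a sketch, not the only option: it serializes the whole read-modify-write cycle with an exclusive flock() on a companion .lock file; the file name app.log and the demo setup line are illustrative):

```php
<?php
// Demo setup: an existing log with one old entry
file_put_contents('app.log', "2024-01-01|old entry\n");

function writeLogEntryLocked(string $filePath, string $str, int $maxRecords): void {
    // 'c' mode creates the lock file if needed without truncating it
    $lock = fopen($filePath . '.lock', 'c');
    flock($lock, LOCK_EX); // blocks until no other writer holds the lock

    // Same logic as writeLogEntry(), now protected by the lock
    $fileData = file_exists($filePath) ? file($filePath) : array();
    array_unshift($fileData, $str);
    $fileData = array_slice($fileData, 0, $maxRecords);
    file_put_contents($filePath, $fileData);

    flock($lock, LOCK_UN);
    fclose($lock);
}

writeLogEntryLocked('app.log', "2024-01-02|new entry\n", 2);
```

Locking a separate .lock file rather than the log itself keeps the critical section simple, since the log file is opened and closed inside it by file() and file_put_contents().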