I'm making a unique visitors counter for my website and I went through many tutorials until I found this simple code, but the problem is that the program never adds new IPs or counts new visits. The values of ip.txt and count.txt never change :(
Here is the whole code:
<?php
function hit_count() {
    $ip_address = $_SERVER['REMOTE_ADDR'];
    $ip_file = file('ip.txt');
    foreach ($ip_file as $ip) {
        $ip_single = ($ip);
        if ($ip_address == $ip_single) {
            $found = true;
            break;
        } else {
            $found = false;
        }
    }
    if ($found == true) {
        $filename = 'count.txt';
        $handle = fopen($filename, 'r');
        $current = fread($handle, filesize($filename));
        fclose($handle);
        $current_inc = $current = 1;
        $handle = fopen($filename, 'w');
        fwrite($handle, $current_inc);
        fclose($handle);
        $handle = fopen('ip.txt', 'a');
        fwrite($handle, $ip_address . "\n");
        fclose($handle);
    }
}
?>
This code is full of mistakes. It will never work.
Mistake #1:
$ip_file = file('ip.txt');
Each element of $ip_file ends with a newline, so even if your IP is in the list it will never match $_SERVER['REMOTE_ADDR']. file() must be run with the FILE_IGNORE_NEW_LINES flag.
Mistake #2:
if ($found==true){
The counter only increases, and the IP is only appended to the list, if the IP was already found in the list. If the list is empty it will never do jack. Invert this logic!
Mistake #3:
$current_inc = $current = 1;
This assigns 1 to both variables instead of adding 1 to the current value, so it will never count beyond 1.
Besides that, you must make sure the PHP script has permission to change those files. Scripts usually don't have permission to edit the site's files, for security reasons.
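If you are unsure whether that is the problem, a quick sanity check along these lines (file names taken from the question) will tell you immediately:
// Minimal check that both files are writable by the PHP process.
foreach (array('ip.txt', 'count.txt') as $f) {
    if (!is_writable($f)) {
        die("Cannot write to $f - fix the file permissions/ownership on the server.");
    }
}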
All that said, your script should be changed to something more like this:
if (!in_array($_SERVER['REMOTE_ADDR'], file('ip.txt', FILE_IGNORE_NEW_LINES)))
{
    file_put_contents('ip.txt', $_SERVER['REMOTE_ADDR'] . "\n", FILE_APPEND);
    $count = file_get_contents('count.txt');
    $count++;
    file_put_contents('count.txt', $count);
}
Clean, simple, direct. But you still have to make sure the PHP script has permission to edit those files.
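If you also want to handle the very first run (when ip.txt and count.txt don't exist yet) and reduce the chance of two simultaneous visits clobbering each other, a rough sketch could look like this (untested, same file names as above):
function hit_count() {
    $ip_address = $_SERVER['REMOTE_ADDR'];

    // Create the files on first use so file() and file_get_contents() have something to read.
    if (!file_exists('ip.txt'))    { file_put_contents('ip.txt', ''); }
    if (!file_exists('count.txt')) { file_put_contents('count.txt', '0'); }

    $ips = file('ip.txt', FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);

    if (!in_array($ip_address, $ips)) {
        // LOCK_EX lowers the risk of two concurrent requests writing at the same time.
        file_put_contents('ip.txt', $ip_address . "\n", FILE_APPEND | LOCK_EX);

        $count = (int) file_get_contents('count.txt');
        file_put_contents('count.txt', $count + 1, LOCK_EX);
    }
}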
I am having this strange issue and can't figure it out.
On some websites where I have this script it works perfectly... same code, same server settings...
With PHP, there is a simple page view hit counter that stores the count locally in a txt file.
Then I echo the value out in the footer copyright area of my websites to give the client a quick statistic... it's pretty cool how fast it grows.
Anyway, I have a client, corner grill ny . com (I added spaces for SEO purposes).
On that website it's been working great for years.
Now on another website, and a bunch more... for example savianos . com
This breaks... and the text value is blank.
This is the counter.php code
<?php
session_start();
$counter_name = "counter/hits.txt";
// Check if the text file exists. If not, create one and initialize it to zero.
if (!file_exists($counter_name)) {
    $f = fopen($counter_name, "w");
    fwrite($f, "0");
    fclose($f);
}
// Read the current value of our counter file
$f = fopen($counter_name, "r");
$counterVal = fread($f, filesize($counter_name));
fclose($f);
// Has the visitor been counted in this session?
// If not, increase the counter value by one
if (!isset($_SESSION['hasVisited'])) {
    $_SESSION['hasVisited'] = "yes";
    $counterVal++;
    $f = fopen($counter_name, "w");
    fwrite($f, $counterVal);
    fclose($f);
}
?>
Now, if I add a value to the txt file, like 1040, and go to the website, it starts to work... then after a week or so I check it and it's blank again.
Any ideas?
I am thinking this may be happening because the website gets a TON of views during dinner time on Friday night, and the simple script can't handle it, so while it's trying to write the incremented number it just breaks, goes blank, and never starts back up again.
The structure is this.
/counter/ folder has
counter.php and a hits.txt file
The very first thing on every page of the website is
<?php include ('counter/counter.php'); ?>
and in the footer of the website we have
<?php echo $counterVal; ?>
Your code looks perfect, but let's understand the situation. You have a file which can be accessed concurrently by many users, because a page can be visited by multiple users at the same time. That isn't safe: you have to lock the file against other users while someone is modifying it. Please have a look:
Visits counter without database with PHP
It is most likely because you have two concurrent scripts that tried to open the file at once and one of them failed. You have to use flock() when there are multiple instances of the script that could operate at the same time. Counters are some of the heaviest things to do with plain file reading and writing. I wrote this wrapper to easily implement file locking.
If you want to check out one of my counters that is in operation, try http://ozlu.org. That dynamic counter image is self-built. fileReadAll() reads the entire file in one shot. The file writer only has two modes, write or append. You can pass fileWriteAll() an array or a string and it will write it to the file. The function will not add any \n to format your text, so you would have to add that yourself. The default mode for fileWriteAll() is w if you do not set the third argument.
function fileWriteAll($file, $content, $mode = "w"){
    // Only "w" (overwrite) and "a" (append) are accepted; anything else falls back to "w".
    $mode = $mode === "w" || $mode === "a" ? $mode : "w";
    $FILE = fopen($file, $mode);
    while (!flock($FILE, LOCK_EX)) { usleep(1); } // wait until we hold an exclusive lock
    if (is_array($content)) {
        for ($i = 0; $i < count($content); $i++) {
            fwrite($FILE, $content[$i]);
        }
    } else {
        fwrite($FILE, $content);
    }
    flock($FILE, LOCK_UN);
    fclose($FILE);
}
function fileReadAll($file){
    $FILE = fopen($file, 'r');
    while (!flock($FILE, LOCK_SH)) { usleep(1); } // wait until we hold a shared lock
    $content = fread($FILE, filesize($file));
    flock($FILE, LOCK_UN);
    fclose($FILE);
    return $content;
}
Your modified code:
session_start();
$counterName = './views.txt';
// Create the counter file on first use.
if (!file_exists($counterName)) {
    fileWriteAll($counterName, '0');
}
// Read the current value under a shared lock.
$value = fileReadAll($counterName);
// Count each visitor only once per session.
if (!isset($_SESSION['visited'])) {
    $_SESSION['visited'] = 'yes';
    $value++;
    fileWriteAll($counterName, (string) $value);
}
echo $value;
I am writing a program in PHP to check IPs. Now, I know there are easier ways to do this, but I want to do it my way. This is what I have written so far:
<?php
if ($_POST) {
    $file = fopen("names.txt", "a") or exit("Unable to open file!");
    $ipadres = fopen("ip.txt", "a") or exit("Unable to open file!");
    $name = $_POST['username'];
    $file_content = $name . "|";
    $ipadres_content = $_SERVER["REMOTE_ADDR"] . "|";
    $iparray = array();
    $i = 0;
    fputs($file, $file_content);
    fputs($ipadres, $ipadres_content);
    while (!feof($ipadres))
    {
        $iparray = explode("|", fgets($file));
    }
    fclose($file);
    fclose($ipadres);
}
?>
As you can see, I tried using a while loop to put the IP addresses into an array to check, but when I run it, it just keeps running until it finally crashes with this error: Fatal error: Maximum execution time of 30 seconds exceeded. Oh, and yes, I tried raising the maximum execution time a bit, but still no sign of success.
Your while loop is faulty:
while(!feof($ipadres))
{
$iparray = explode("|", fgets($file));
}
You're checking for feof($ipadres) but reading with fgets($file),
i.e. you keep checking for end of file on the file pointer $ipadres while reading from the file pointer $file, which causes an infinite loop, and the program eventually crashes.
Probably you meant:
while(!feof($ipadres)) {
$iparray = explode("|", fgets($ipadres));
}
Or else use the file() function, which returns all the lines of a file in an array.
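Since your ip.txt is pipe-delimited rather than one IP per line, a rough sketch of that idea (reading everything at once instead of looping) might be:
// Read the whole file in one call and split it into an array of IPs.
$iparray = explode("|", file_get_contents("ip.txt"));

if (in_array($_SERVER["REMOTE_ADDR"], $iparray)) {
    // this IP has been seen before
}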
I'm trying to define an array with a list of file paths, and then have each file parsed; if a predefined string is found, that string should be replaced. For some reason what I have isn't working, and I'm not sure what's incorrect:
<?php
$htF = array('/home/folder/file.extension', '/home/folder/file.extension', '/home/folder/file.extension', '/home/folder/file.extension', '/home/folder/file.extension');
function update() {
    global $htF;
    $handle = fopen($htF, "r");
    if ($handle) {
        $previous_line = $content = '';
        while (!feof($handle)) {
            $current_line = fgets($handle);
            if (stripos($previous_line, 'PREDEFINED SENTENCE') !== FALSE)
            {
                $output = shell_exec('URL.COM');
                if (preg_match('#([0-9]{1,3}\.){3}[0-9]{1,3}#', $output, $matches))
                {
                    $content .= 'PREDEFINED SENTENCE ' . $matches[0] . "\n";
                }
            } else {
                $content .= $current_line;
            }
            $previous_line = $current_line;
        }
        fclose($handle);
        $tempFile = tempnam('/tmp', 'allow_');
        $fp = fopen($tempFile, 'w');
        fwrite($fp, $content);
        fclose($fp);
        rename($tempFile, $htF);
        chown($htF, 'admin');
        chmod($htF, '0644');
    }
}
array_walk($htF, 'update');
?>
Any help would be massively appreciated!
Do you have permissions to open the file?
Do you have permissions to write to /tmp ?
Do you have permissions to write to the destination file or folder?
Do you have permissions to chown?
Have you checked your regex? Try something like http://regexpal.com/ to see if it's valid.
Try adding error messages, or throwing Exceptions, for all of the failure conditions above.
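For example, a hedged sketch of what that could look like (the helper names here are made up and the messages are placeholders):
// Hypothetical helpers that fail loudly instead of silently skipping a file.
function open_for_reading($path) {
    $handle = fopen($path, 'r');
    if ($handle === false) {
        throw new RuntimeException("Cannot open $path for reading - check permissions.");
    }
    return $handle;
}

function replace_file($tempFile, $target) {
    if (!rename($tempFile, $target)) {
        throw new RuntimeException("Cannot move $tempFile over $target - check folder permissions.");
    }
    if (!chown($target, 'admin')) {
        throw new RuntimeException("chown to 'admin' failed - the PHP process probably lacks the privilege.");
    }
}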
there's this line:
if(stripos($previous_line,'PREDEFINED SENTENCE') !== FALSE)
and I think you just want a != in there. Yes?
You're using $htF within the update function as global, which means you're trying to fopen() an array.
$fh = fopen($htF, 'r');
is going to get parsed as
$fh = fopen('Array', 'r');
and return false, unless you happen to have a file named 'Array'.
You've also not specified any parameters for your function, so array_walk cannot pass in the array element it's dealing with at the time.
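A sketch of the corrected signature, so array_walk() hands each path to the callback instead of relying on a global (the body is abbreviated here):
// array_walk() passes each element (and optionally its key) to the callback.
function update($file) {
    $handle = fopen($file, "r");
    if ($handle) {
        // ... same line-by-line processing as before, using $file instead of $htF ...
        fclose($handle);
    }
}

array_walk($htF, 'update');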
I'm trying to read a specific line from a text file using php.
Here's the text file:
foo
foo2
How would I get the content of the second line using php?
This returns the first line:
<?php
$myFile = "4-24-11.txt";
$fh = fopen($myFile, 'r');
$theData = fgets($fh);
fclose($fh);
echo $theData;
?>
..but I need the second.
Any help would be greatly appreciated
$myFile = "4-24-11.txt";
$lines = file($myFile); // read the file into an array
echo $lines[1]; //line 2
file — Reads entire file into an array
omg I'm lacking 7 rep to make comments. This is #Raptor's & #Tomm's comment, since this question still shows up way high in google serps.
He's exactly right. For small files, file($file); is perfectly fine. For large files it's total overkill, because PHP arrays eat memory like crazy.
I just ran a tiny test with a *.csv with a file size of ~67 MB (1,000,000 lines):
$t = -microtime(1);
$file = '../data/1000k.csv';
$lines = file($file);
echo $lines[999999]
."\n".(memory_get_peak_usage(1)/1024/1024)
."\n".($t+microtime(1));
//227.5 (peak memory in MB)
//0.22701287269592 (seconds elapsed)
//Process finished with exit code 0
And since no one has mentioned it yet, I gave SplFileObject a try, which I actually only recently discovered for myself.
$t = -microtime(1);
$file = '../data/1000k.csv';
$spl = new SplFileObject($file);
$spl->seek(999999);
echo $spl->current()
."\n".(memory_get_peak_usage(1)/1024/1024)
."\n".($t+microtime(1));
//0.5 (peak memory in MB)
//0.11500692367554 (seconds elapsed)
//Process finished with exit code 0
This was on my Win7 desktop, so it's not representative of a production environment, but still... quite the difference.
If you wanted to do it that way...
$line = 0;
while (($buffer = fgets($fh)) !== FALSE) {
    if ($line == 1) {
        // This is the second line.
        break;
    }
    $line++;
}
Alternatively, open it with file() and subscript the line with [1].
I would use the SplFileObject class...
$file = new SplFileObject("filename");
if (!$file->eof()) {
    $file->seek($lineNumber);
    $contents = $file->current(); // $contents would hold the data from line x
}
You can use the following to get all the lines in the file:
$handle = @fopen('test.txt', "r");
if ($handle) {
    while (!feof($handle)) {
        $lines[] = fgets($handle, 4096);
    }
    fclose($handle);
}
print_r($lines);
Then $lines[1] is your second line.
$myFile = "4-21-11.txt";
$fh = fopen($myFile, 'r');
while (!feof($fh))
{
    $data[] = fgets($fh);
    //Do whatever you want with the data in here
    //This feeds the file into an array line by line
}
fclose($fh);
This question is quite old by now, but for anyone dealing with very large files, here is a solution that does not involve reading every preceding line. This was also the only solution that worked in my case for a file with ~160 million lines.
<?php
function rand_line($fileName) {
    do {
        $fileSize = filesize($fileName);
        $fp = fopen($fileName, 'r');
        fseek($fp, rand(0, $fileSize));
        $data = fread($fp, 4096); // assumes lines are < 4096 characters
        fclose($fp);
        $a = explode("\n", $data);
    } while (count($a) < 2);
    return $a[1];
}
echo rand_line("file.txt"); // change file name
?>
It works by opening the file without reading anything, then moving the pointer instantly to a random position, reading up to 4096 characters from that point, then grabbing the first complete line from that data.
If you use PHP on Linux, you may try the following to read, for example, the text between the 74th and 159th lines:
$text = shell_exec("sed -n '74,159p' path/to/file.log");
This solution is good if your file is large.
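For the original question (just the second line), the same approach would be something like this (sed must exist on the host, so this is a Linux/Unix-only sketch):
$second_line = shell_exec("sed -n '2p' path/to/file.log");
echo $second_line;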
You have to loop through the file until end of file.
while (!feof($file))
{
    echo fgets($file) . "<br />";
}
fclose($file);
Use stream_get_line: stream_get_line — Gets line from stream resource up to a given delimiter
Source: http://php.net/manual/en/function.stream-get-line.php
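A minimal sketch of using it to fetch the second line (assuming \n line endings and the file name from the question):
$fh = fopen('4-24-11.txt', 'r');
stream_get_line($fh, 4096, "\n");               // read and discard the first line
$secondLine = stream_get_line($fh, 4096, "\n"); // this is the second line
fclose($fh);
echo $secondLine;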
You could try looping until the line you want, not until EOF, and resetting the variable to the current line each time (not appending to it). In your case, the 2nd line is the EOF. (A for loop is probably more appropriate than the while in my code below; see the sketch after it.)
This way the entire file is not in memory; the drawback is that it takes time to go through the file up to the point you want.
<?php
$myFile = "4-24-11.txt";
$fh = fopen($myFile, 'r');
$i = 0;
while ($i < 2)
{
    $theData = fgets($fh);
    $i++;
}
fclose($fh);
echo $theData;
?>
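For reference, the same idea written with a for loop (a sketch; it still reads line by line, so memory use stays flat):
<?php
$myFile = "4-24-11.txt";
$fh = fopen($myFile, 'r');
$theData = '';
// Read (and discard) lines until we have read the second one.
for ($i = 0; $i < 2; $i++) {
    $theData = fgets($fh);
}
fclose($fh);
echo $theData;
?>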
I like daggett's answer, but there is another solution you can try if your file is not too big.
$file = __FILE__; // Let's take the current file just as an example.
$start_line = __LINE__ - 1; // Likewise for the line we look for: take the line number where the $start_line variable is declared as the start.
$lines_to_display = 5; // The number of lines to display. Displays only the $start_line if set to 1.
echo implode('', array_slice(file($file), $start_line, $lines_to_display));
I searched for a one-line solution to read a specific line from a file.
Here is my solution:
echo file('dayInt.txt')[1];
I have a script which, each time it is called, gets the first line of a file. Each line is known to be exactly the same length (32 alphanumeric chars) and to terminate with "\r\n".
After getting the first line, the script removes it.
This is done in this way:
$contents = file_get_contents($file));
$first_line = substr($contents, 0, 32);
file_put_contents($file, substr($contents, 32 + 2)); //+2 because we remove also the \r\n
Obviously it works, but I was wondering whether there is a smarter (or more efficient) way to do this?
In my simple solution I basically read and rewrite the entire file just to take and remove the first line.
I came up with this idea yesterday:
function read_and_delete_first_line($filename) {
    $file = file($filename);
    $output = $file[0];
    unset($file[0]);
    file_put_contents($filename, $file);
    return $output;
}
There is no more efficient way to do this other than rewriting the file.
No need to create a second temporary file, nor put the whole file in memory:
if ($handle = fopen("file", "c+")) { // open the file in reading and editing mode
    if (flock($handle, LOCK_EX)) { // lock the file, so no one can read or edit this file
        while (($line = fgets($handle, 4096)) !== FALSE) {
            if (!isset($write_position)) { // skip moving the first line; start writing at position 0
                $write_position = 0;
            } else {
                $read_position = ftell($handle); // remember the current read position
                fseek($handle, $write_position); // move to the previous position
                fputs($handle, $line); // put the current line in the previous position
                fseek($handle, $read_position); // return to the current read position
                $write_position += strlen($line); // advance the write position for the next loop
            }
        }
        fflush($handle); // write any pending change to the file
        ftruncate($handle, $write_position); // drop the repeated last line
        flock($handle, LOCK_UN); // unlock the file
    }
    fclose($handle);
}
This will shift off the first line of a file; you don't need to load the entire file into memory like you do with the 'file' function. Maybe for small files it is a bit slower than with 'file' (maybe, but I bet it is not), but it can handle very large files without problems.
$firstline = false;
if ($handle = fopen($logFile, 'c+')) {
    if (!flock($handle, LOCK_EX)) { fclose($handle); }
    $offset = 0;
    $len = filesize($logFile);
    while (($line = fgets($handle, 4096)) !== false) {
        if (!$firstline) { $firstline = $line; $offset = strlen($firstline); continue; }
        $pos = ftell($handle);
        fseek($handle, $pos - strlen($line) - $offset);
        fputs($handle, $line);
        fseek($handle, $pos);
    }
    fflush($handle);
    ftruncate($handle, ($len - $offset));
    flock($handle, LOCK_UN);
    fclose($handle);
}
You can iterate over the file instead of putting it all in memory:
$file = "file";
$outfile = "temp";
$handle = fopen($file, "r");
$first = fgets($handle, 2048); # get first line.
$o = fopen($outfile, "w");
while (!feof($handle)) {
    $buffer = fgets($handle, 2048);
    fwrite($o, $buffer);
}
fclose($handle);
fclose($o);
rename($outfile, $file);
I wouldn't usually recommend opening up a shell for this sort of thing, but if you're doing this infrequently on really large files, there's probably something to be said for:
$lines = `wc -l myfile` - 1;
`tail -n $lines myfile > newfile`;
It's simple, and it doesn't involve reading the whole file into memory.
I wouldn't recommend this for small files, or extremely frequent use though. The overhead's too high.
You could store positional info into the file itself. For example, the first 8 bytes of the file could store an integer. This integer is the byte offset of the first real line in the file.
So, you never delete lines anymore. Instead, deleting a line means altering the start position. fseek() to it and then read lines as normal.
The file will grow big eventually. You could periodically clean up the orphaned lines to reduce the file size.
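A rough sketch of that idea (the 8-byte header, the zero-padding, and the function name are all made up for illustration; the file would need to be created with the header in place):
// "Remove" the first line by advancing a stored offset instead of rewriting the file.
function pop_first_line($path) {
    $fp = fopen($path, 'c+');
    flock($fp, LOCK_EX);

    // The first 8 bytes hold the byte offset of the first live line, zero-padded.
    $offset = (int) fread($fp, 8);
    if ($offset < 8) {
        $offset = 8; // data starts right after the header
    }

    fseek($fp, $offset);
    $line = fgets($fp); // the current "first" line, or false if none are left

    if ($line !== false) {
        // Move the stored offset past the line we just consumed.
        fseek($fp, 0);
        fwrite($fp, str_pad((string) ($offset + strlen($line)), 8, '0', STR_PAD_LEFT));
    }

    flock($fp, LOCK_UN);
    fclose($fp);

    return $line === false ? false : rtrim($line, "\r\n");
}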
But seriously, just use a database and don't do stuff like this.
Here's one way:
$contents = file($file, FILE_IGNORE_NEW_LINES);
$first_line = array_shift($contents);
file_put_contents($file, implode("\r\n", $contents));
There are countless other ways to do it as well, but all the methods involve separating the first line somehow and saving the rest. You cannot avoid rewriting the whole file. An alternative take:
list($first_line, $contents) = explode("\r\n", file_get_contents($file), 2);
file_put_contents($file, $contents); // $contents is already the rest of the file as one string
My problem was large files. I just needed to edit or remove the first line. This was the solution I used; it didn't require loading the complete file into a variable. Currently it echoes, but you could always save the contents.
$fh = fopen($local_file, 'rb');
echo "add\tfirst\tline\n"; // add your new first line.
fgets($fh); // moves the file pointer to the next line.
echo stream_get_contents($fh); // flushes the remaining file.
fclose($fh);
I think this is best for any file size
$myfile = fopen("yourfile.txt", "r") or die("Unable to open file!");
$ch = 1;
while (!feof($myfile)) {
    $dataline = fgets($myfile) . "<br>";
    if ($ch == 2) {
        echo str_replace(' ', '&nbsp;', $dataline) . "\n";
    }
    $ch = 2;
}
fclose($myfile);
The solutions here didn't perform well for me.
My solution grabs the last line (not the first line; in my case it was not relevant whether I got the first or the last line) from the file and removes it from the file.
This is very quick even with very large files (>150,000,000 lines).
function file_pop($file)
{
    if ($fp = @fopen($file, "c+")) {
        if (!flock($fp, LOCK_EX)) {
            fclose($fp);
        }
        $pos = -1;
        $found = 0;
        while ($found < 2) {
            if (fseek($fp, $pos--, SEEK_END) < 0) { // cannot seek to that position
                rewind($fp); // rewind to the beginning of the file
                break;
            }
            if (ord(fgetc($fp)) == 10) { // newline
                $found++;
            }
        }
        $lastpos = ftell($fp); // get the current position in the file
        $lastline = fgets($fp); // get the current line
        ftruncate($fp, $lastpos); // truncate the file to the last position
        flock($fp, LOCK_UN); // unlock
        fclose($fp); // close the file
        return trim($lastline);
    }
}
You could use the file() method.
Gets the first line:
$content = file('myfile.txt');
echo $content[0];