File reading, searching and looping problems - PHP

So I've been trying to write this little piece of code to read a file (status.txt), search it for one of four keywords, and loop until either time runs out (five minutes) or it finds one of the words. I've already written a few simple PHP scripts to write the words to a txt file, but I can't seem to get this part to work. It either doesn't clear the file in the beginning, or it seems to hang and never picks up the changes. Any advice would be hugely helpful.
<?php
// Variables
$stringG = "green";
$stringR = "red";
$stringB = "blue";
$stringO = "orange";
$clear = "";
$statusFile = "status.txt";

// Erase the file
$fh = fopen($statusFile, 'w'); // clear the file with "clear"
fwrite($fh, $clear);
fclose($fh);

// Insert LOOP
$counter = 0;
while ($counter <= 10) {
    //echo "loop begun";
    // Read THE FILE
    $fh = fopen($statusFile, 'r');
    $data = fread($fh, filesize($statusFile));
    fclose($fh);
    // Process the file
    if (stristr($data, $stringG)) {
        echo "Green!";
        $counter = $counter + 30; // stop if triggered
    } elseif (stristr($data, $stringR)) {
        echo "Red";
        $counter = $counter + 30; // stop if triggered
    } elseif (stristr($data, $stringB)) {
        echo "Blue";
        $counter = $counter + 30; // stop if triggered
    } elseif (stristr($data, $stringO)) {
        echo "Orange";
        $counter = $counter + 30; // stop if triggered
    } else {
        // Increment loop counter
        $counter = $counter + 1;
        // Insert pause
        sleep(10);
    }
}
?>

You should open the file before your read loop, and close it after the loop. As in:
open the file
loop through the lines in the file
close the file
Also, if you clear the file before you read it, isn't it going to be empty every time?
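A minimal sketch of that open/loop/close pattern, assuming a line-by-line scan for one of the keywords from the question:
$fh = fopen("status.txt", "r");
while (($line = fgets($fh)) !== false) {
    // inspect each line for a keyword
    if (stristr($line, "green")) {
        echo "Green!";
        break; // stop scanning as soon as a keyword is found
    }
}
fclose($fh);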

Well, first of all, you don't need to "clear" the file this way... the 'w' mode in fopen will already do that for you.
Also, I wouldn't try to read the whole file at once, because if it's very large, that won't work without heavy memory usage.
What you should do is read the file sequentially, which means you always read a fixed number of bytes and look for the keywords. To avoid missing keywords which are cut in half by your reading mechanism, you can make your reads overlap a bit (by the length of your longest keyword minus 1).
Then you should modify your while loop so that it also checks if you are at the end of the file ( while(!feof($fh)) ).
PS: It has been mentioned that you clear your file before reading it. What I understood is that your file gets a lot of input really fast, so you expect it to already have content again when you reopen it. If that's not the case, you really need to rethink your logic ;)
PPS: You don't need to abort your while loop by incrementing your counter variable past the boundaries you define. You can also use the break keyword.
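To illustrate, here is a minimal sketch of that chunked scan, assuming a 4096-byte read size and "orange" as the longest of the four keywords (variable names are the ones from the question):
$fh = fopen($statusFile, 'r');
$overlap = strlen($stringO) - 1; // length of the longest keyword minus 1
$tail = "";
while (!feof($fh)) {
    $chunk = $tail . fread($fh, 4096); // read a fixed amount of bytes
    if (stristr($chunk, $stringG) || stristr($chunk, $stringR)
            || stristr($chunk, $stringB) || stristr($chunk, $stringO)) {
        echo "Found a keyword!";
        break; // no need to push the counter out of bounds
    }
    // keep the last few bytes so a keyword split across two reads is still found
    $tail = substr($chunk, -$overlap);
}
fclose($fh);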

You haven't included the code which clears the file inside the while loop, so it only clears the file once. Also, I'd use unlink($statusFile); to delete the file.

You should rather use a for loop. And as to your problem: you clear the file, then get the data from it. Try dumping that $data; you'll end up with string(0) "" for sure. First read the data, then clear the file.
Edit: If you are changing the file in the loop itself from another thread, there's another problem. You should look into atomic file streams. For example, you can use Nette's SafeStream class.

Related

php - for loop repeating itself / going out of sequence

I'm very new to PHP, making errors and learning as I go. Please be gentle! :)
I want to access some data from Blizzard.com's API. For this particular data set, it's not a block of data in JSON; rather, each object has its own URL to access. I estimate that there are approx 150000 objects, but I don't know the start or end points of the number range, so I'm having to assume 1 and work past the highest number I know (269065).
To get the data, I need to access each object's data via a JSON file, which I read, get the contents of, and drop into a text file (this could be written as an insert into a SQL db too, as I'm able to do this if it's the text file that's the issue). But to be honest, I would love to get to the bottom of why this is happening as much as anything!
I wasn't going to try and run ~250000 iterations in a for loop straight away, so I thought I'd try something I considered small: 2000.
The for loop starts with $a as 1, uses $a as part of the URL, loads and decodes the JSON, and checks to see if the first field (ID) in the object is set. If it is, it writes a few fields to data.txt, and if the first field (ID) isn't set it just writes $a to data.txt (so I know it's a null for other purposes not outlined here).
Simple! Or so I thought. After approximately 183 iterations, the data written to the text file goes awry, as seen in the quote below. It is out of sequence, starts at 1 again, then goes back to 184, ad nauseam. The loop then seems to be locked in some kind of infinite loop, outputting in a random order until I close the page 10-20 minutes later.
I have obviously made a big mistake! But I have no idea what I have done wrong to have caused this. During my attempts I have rewritten the code with new variable names, so a new test does not conflict with code that could be running in memory.
I've tried resetting variables to blank at the end of the loop in case something was being reused that was causing a problem.
If anyone could point out any errors in my code, or suggest something for me to look into to handle bigger loops, that would be brilliant. I am assuming my issue may be a timeout or memory problem, but I don't know where to start and was hoping I'd find some suggestions here.
If it's relevant, I am using 000webhostapp.com as my host provider for now, until I get some paid for hosting.
1 ... 182 183 1 184 2 3 185 4 186 5 187 6 188 7 189 190 8 191
for ($a = 1; $a <= 2000; $a++) {
    $json = "https://eu.api.battle.net/wow/recipe/".$a."?locale=en_GB&<MYPRIVATEAPIKEY>";
    $contents = file_get_contents($json);
    $data = json_decode($contents, true);
    if (isset($data['id'])) {
        $file = fopen("data.txt", "a");
        fwrite($file, $data['id'].",'".$data['name']."'\n");
        fclose($file);
    } else {
        $file = fopen("data.txt", "a");
        fwrite($file, $a."\n");
        fclose($file);
    }
}
The content of the file I'm trying to access is
{"id":33994,"name":"Precise Strikes","profession":"Enchanting","icon":"spell_holy_greaterheal"}
I scrapped the original plan and wrote this instead. Thank you again to everyone who took the time out of their day to help and offer suggestions!
$b = $mysqli->query("SELECT id FROM `static_recipes` ORDER BY id DESC LIMIT 1;")->fetch_object()->id;
if (empty($b)) { $b = 1; }
$count = $b + 101;
$write = [];
for ($a = $b + 1; $a < $count; $a++) {
    $json = "https://eu.api.battle.net/wow/recipe/".$a."?locale=en_GB&apikey=";
    $contents = @file_get_contents($json); // @ suppresses warnings for missing IDs
    $data = json_decode($contents, true);
    if (isset($data['id'])) {
        $write[] = "(".$data['id'].",'".addslashes($data['name'])."','".addslashes($data['profession'])."','".addslashes($data['icon'])."')";
    } else {
        $write[] = "(".$a.",'a','a','a')";
    }
}
$SQL = 'INSERT INTO `static_recipes` (id, name, profession, icon) VALUES '.implode(',', $write);
$mysqli->query($SQL);
$mysqli->close();
$write = [];
for ($a = 1; $a <= 2000; $a++) {
    $json = "https://eu.api.battle.net/wow/".$a."?locale=en_GB&<MYPRIVATEAPIKEY>";
    $contents = file_get_contents($json);
    $data = json_decode($contents, true);
    if (isset($data['id'])) {
        $write[] = $data['id'].",'".$data['name']."'\n";
    } else {
        $write[] = $a."\n";
    }
}
$file = fopen("data.txt", "a");
fwrite($file, implode('', $write));
fclose($file);
Also, are you sure that no IDs are duplicated across the several "https://eu.api.battle.net/wow/[N]" URLs' data?
Also, since you say "I wasn't going to try and run ~250000", think about curl_multi_init(): http://php.net/manual/en/function.curl-multi-init.php
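For reference, a minimal curl_multi sketch of that idea, assuming a batch of 10 requests at a time (the URL pattern and key placeholder are the ones from the question):
$mh = curl_multi_init();
$handles = [];
// queue a batch of requests instead of fetching one URL at a time
for ($a = 1; $a <= 10; $a++) {
    $ch = curl_init("https://eu.api.battle.net/wow/recipe/".$a."?locale=en_GB&<MYPRIVATEAPIKEY>");
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_multi_add_handle($mh, $ch);
    $handles[$a] = $ch;
}
// run all queued transfers concurrently
do {
    curl_multi_exec($mh, $running);
    curl_multi_select($mh);
} while ($running > 0);
foreach ($handles as $a => $ch) {
    $data = json_decode(curl_multi_getcontent($ch), true);
    // ... process $data exactly as in the original loop ...
    curl_multi_remove_handle($mh, $ch);
    curl_close($ch);
}
curl_multi_close($mh);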
I can't really see anything obviously wrong with your code; I can't run it, though, as I don't have the JSON.
It could be possible that there is some kind of race condition, since you're opening and closing the same file hundreds of times very quickly.
File operations might seem atomic, but they are not necessarily so - here's an interesting SO thread:
Does PHP wait for filesystem operations (like file_put_contents) to complete before moving on?
Like some others suggested, maybe just open the file before you enter the loop, then close the file when the loop breaks.
I'd try it first and see if it helps.
There's nothing in your original code that would cause that sort of behaviour. PHP will not arbitrarily change the value of a variable. You are opening this file in append mode; are you certain that you're not looking at old data? Maybe output some debug messages as you process the data. It's likely you'd run up against some rate limiting on the API server, so putting a pause in there somewhere may improve reliability.
The only substantive change I'd suggest to your code is opening the file once and closing it when you're done.
$file = fopen("data_1_2000.txt", "w");
for ($a = 1; $a <= 2000; $a++) {
$json = "https://eu.api.battle.net/wow/recipe/$a?locale=en_GB&<MYPRIVATEAPIKEY>";
$contents = file_get_contents($json);
$data = json_decode($contents, true);
if (!empty($data['id'])) {
$data["name"] = str_replace("'", "\\'", $data["name"]);
$record = "$data[id],'$data[name]'";
} else {
$record = $a;
}
fwrite($file, "$record\n");
sleep(1);
echo "$a "; if ($a % 50 === 0) echo "\n";
}
fclose($file);

How to use fopen, fgets, fclose properly? It works fine, but later the count sometimes just goes down to a random number

It works fine, but later the count sometimes just goes down to a random number. My guess is my code cannot process multiple visits at a time.
Where the increment happens
Where it displays the count
<?php
$args_loveteam = array('child_of' => 474);
$loveteam_children = get_categories($args_loveteam);
if (in_category('loveteams', $post->ID)) {
    foreach ($loveteam_children as $loveteam_child) {
        $post_slug = $loveteam_child->slug;
        echo "<script>console.log('".$post_slug."');</script>";
        if (in_category($loveteam_child->name)) {
            /* counter */
            // opens file to read saved hit number
            if ($loveteam_child->slug == "loveteam-mayward") {
                $datei = fopen($_SERVER['DOCUMENT_ROOT']."/wp-content/themes/inside-showbiz-Vfeb13.ph-updated/countlog-".$post_slug."-2.txt", "r");
            } else {
                $datei = fopen($_SERVER['DOCUMENT_ROOT']."/wp-content/themes/inside-showbiz-Vfeb13.ph-updated/countlog-".$post_slug.".txt", "r");
            }
            $count = fgets($datei, 1000);
            fclose($datei);
            $count = $count + 1;
            // opens file to write new hit number
            if ($loveteam_child->slug == "loveteam-mayward") {
                $datei = fopen($_SERVER['DOCUMENT_ROOT']."/wp-content/themes/inside-showbiz-Vfeb13.ph-updated/countlog-".$post_slug."-2.txt", "w");
            } else {
                $datei = fopen($_SERVER['DOCUMENT_ROOT']."/wp-content/themes/inside-showbiz-Vfeb13.ph-updated/countlog-".$post_slug.".txt", "w");
            }
            fwrite($datei, $count);
            fclose($datei);
        }
    }
}
?>
I would at least change your code to this
foreach ($loveteam_children as $loveteam_child) {
    $post_slug = $loveteam_child->slug;
    echo "<script>console.log('".$post_slug."');</script>";
    if ($loveteam_child->slug == "loveteam-mayward") {
        $filename = "{$_SERVER['DOCUMENT_ROOT']}/wp-content/themes/inside-showbiz-Vfeb13.ph-updated/countlog-{$post_slug}-2.txt";
    } else {
        $filename = "{$_SERVER['DOCUMENT_ROOT']}/wp-content/themes/inside-showbiz-Vfeb13.ph-updated/countlog-{$post_slug}.txt";
    }
    $count = file_get_contents($filename);
    file_put_contents($filename, ++$count, LOCK_EX);
}
You could also try flock on the file to get a lock before modifying it. That way if another process comes along it has to wait on the first one. But file_put_contents works great for things like logging where you may have many processes competing for the same file.
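A minimal sketch of that flock approach for this counter, reusing the $filename from above:
$fp = fopen($filename, "c+"); // create if missing, don't truncate
if (flock($fp, LOCK_EX)) {    // block until we hold the exclusive lock
    $count = (int) fgets($fp, 1000);
    ftruncate($fp, 0);        // wipe the old value
    rewind($fp);
    fwrite($fp, $count + 1);
    fflush($fp);              // make sure the new value hits the disk
    flock($fp, LOCK_UN);      // release the lock
}
fclose($fp);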
Database should be ok, but even that may not be fast enough. It shouldn't mess up your data though.
Anyway hope it helps. This is kind of an odd question, concurrency can be a real pain if you have a high chance of process collisions and race conditions etc etc.
However, as I mentioned in the comments, using the filesystem is probably not going to provide the consistency you need. Probably the best fit for this may be some kind of in-memory storage such as Redis. But that is hard to say without fully knowing what you'll use it for, for example whether it should persist across a server reboot.
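For example, with the phpredis extension (assuming a local Redis server; the key name is made up for illustration), the increment becomes a single atomic command:
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);
// INCR is atomic, so concurrent visits cannot lose updates
$count = $redis->incr('countlog-' . $post_slug);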
Hope it helps, good luck.

PHP Loop not breaking

I am using a PHP loop in order to get to know about any file changes. If a file change occurs, I am trying to stop the loop, but in my condition, instead of stopping, the loop continues.
Route::get('api.chat.buffer/{job_id}', function($job_id){
    $award = DB::table('job_awards')->where('job_id', $job_id)->first();
    $dirname = base_path().'/files/chat/';
    $filename = $award->client_id.'_'.$award->user_id.'.timelog.log';
    $i = 1;
    for ($z = 0; $z <= 20; $z++) {
        $current_file_time = filemtime($dirname.$filename);
        if ($_GET['timestamp'] < $current_file_time) {
            echo json_encode(array('status' => 'success', 'data' => $current_file_time, 'node' => 1)); die;
            break;
        }
        sleep(1); // this should halt for 3 seconds for every loop
    }
    echo json_encode(array('status' => 'success', 'data' => $current_file_time, 'node' => 0));
    die;
});
I am creating a chat script and tracking changes while the file changes.
Thanks
Can you use the sleep function inside a for loop? Have you tried with yield? Try giving break the number of enclosing loops to break out of; in this case it would be break 1; http://php.net/manual/en/control-structures.break.php
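As a minimal sketch of that suggestion, with the response echoed after the loop instead of inside it (an assumption about the intent; variable names are from the question):
for ($z = 0; $z <= 20; $z++) {
    clearstatcache(); // PHP caches filemtime() results, so clear the cache each pass
    $current_file_time = filemtime($dirname.$filename);
    if ($_GET['timestamp'] < $current_file_time) {
        $node = 1;
        break 1; // ends this for loop; no die needed here
    }
    $node = 0;
    sleep(1);
}
echo json_encode(array('status' => 'success', 'data' => $current_file_time, 'node' => $node));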

What is wrong with my fopen() in PHP?

I've got this variable which I want to save which is:
$counter = $counter['counter']++;
to make it so that it increments when I refresh the page.
Therefore, I decided to use fopen, override the $counter variable, and then save it.
I am still a beginner and I do not know how fopen, fgets and fclose work.
So I wrote something like this:
$fp = fopen("file.php","r");
fwrite($fp, $counter);
fclose($fp);
So I wanted to open the file (file.php, the file in which I am writing this code, FYI), then override the variable after it had been incremented, and then save it by closing.
But the code doesn't seem to want to work and the variable does not seem to want to increment.
What am I doing wrong?
Do I want to have this $counter in a different file and pull the variable from there?
FYI: I don't want to use session_start() and $_SESSION because I am using a cron job and it will not work.
EDIT
$result = mysql_query('SELECT MIN(ID) AS min, MAX(ID) AS max FROM ytable') or exit(mysql_error());
$row = mysql_fetch_assoc($result);
if ($counter['counter'] < $row['max']) {
    if (isset($counter['counter'])) {
        $counter = $counter['counter']++;
    } else {
        $counter = $counter['counter'] = 0;
    }
}
This is more of the code for those who are confused
You could read the file like this: open the file with the previous counter, fetch the counter, then increment it:
$counter = file_get_contents("file.php");
$counter++;
To write to the file, open the file in write mode like this
$fp = fopen("file.php","w");
fwrite($fp, $counter);
fclose($fp);
Is that the result you are looking for?
As chris85 suggested, a database is usually used for this kind of case. It's better to use a database instead of file handling.
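A minimal sketch of the database approach with mysqli (the connection details and the counters table are made up for illustration):
$mysqli = new mysqli('localhost', 'user', 'pass', 'mydb');
// a single UPDATE is atomic, so concurrent runs cannot lose increments
$mysqli->query("UPDATE counters SET value = value + 1 WHERE name = 'hits'");
$counter = $mysqli->query("SELECT value FROM counters WHERE name = 'hits'")->fetch_object()->value;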
So say you have file.txt and that contains 5. In that same directory you have script.php; this file would have
$counter = file_get_contents('file.txt'); // this takes in whatever value is in the file e.g in this case '5'
$counter++; // increment the value by 1 so we are now at 6
file_put_contents('file.txt', $counter); //write the value '6' back to the file
file.txt would then have 6, after the first load.

PHP using fopen to download txt file via URL. How to limit the amount to download?

I have written a php script which parses this text file
http://www.powerball.com/powerball/winnums-text.txt
Everything is good, but I wish to control the amount that is downloaded, i.e. I do not need every single result; maybe just the first 5. At the moment I am downloading the entire file (which is a waste of memory/bandwidth).
I saw that fopen has a parameter which is supposed to limit it, but whatever value I placed in it had no effect on the amount of text that is downloaded.
Can this be done? Thank you for reading.
Here is a small snippet of the code in question which is downloading the file.
<?php
$file = fopen("http://www.powerball.com/powerball/winnums-text.txt", "rb");
$rows = array();
while (!feof($file)) {
    $line = fgets($file);
    $date = explode("Draw Date", $line);
    array_push($rows, $date[0]);
}
fclose($file);
?>
Thanks everyone, this is the code which downloads just the first row of results:
while (!feof($file)) {
    $line = fgets($file);
    $date = explode("Draw Date", $line);
    array_push($rows, $date[0]);
    if (count($rows) > 1) {
        break;
    }
}
fclose($file);
You can break whenever you don't need more data, in this example when count($rows) > 100:
while (!feof($file)) {
    $line = fgets($file);
    $date = explode("Draw Date", $line);
    array_push($rows, $date[0]);
    if (count($rows) > 100)
        break;
}
The issue is that your while condition is only met once you've read through to the end of the file. If you only want to get the first N lines you'll need to change that condition. Something like this might help get you started:
$lineCountLimit = 5;
$currentLineCount = 0;
while ($currentLineCount < $lineCountLimit) {
    $line = fgets($file);
    $date = explode("Draw Date", $line);
    array_push($rows, $date[0]);
    $currentLineCount++;
}
Please try the following recipe to download only a part of the file, e.g. the first 10 KB, then split it into lines and analyze them: How to partially download a remote file with cURL?
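A minimal sketch of that partial download with cURL, assuming the server honors Range requests (the 10 KB figure follows the suggestion above):
$ch = curl_init("http://www.powerball.com/powerball/winnums-text.txt");
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_RANGE, "0-10239"); // request only the first 10 KB
$chunk = curl_exec($ch);
curl_close($ch);
// split the partial download into lines and keep the first 5 rows
$rows = array_slice(explode("\n", $chunk), 0, 5);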
