I have the following code
foreach ($this->load as $count => $test) {
    ob_start();
    echo $count;
    $cached = ob_get_contents();
    ob_end_clean();
    echo $cached;
}
The foreach should loop 2 times. I want to get
12
but I get
11
I do not think it is clearing the output buffer.
Any help, please.
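For comparison, a minimal self-contained version of the same pattern with a hypothetical array whose keys are 1 and 2 prints 12 when run on its own, so the buffering itself behaves as expected:
$load = array(1 => 'first test', 2 => 'second test'); // hypothetical stand-in for $this->load
foreach ($load as $count => $test) {
    ob_start();                  // open a new buffer level
    echo $count;                 // goes into the buffer, not to the browser
    $cached = ob_get_contents(); // read what is currently in the buffer
    ob_end_clean();              // discard the buffer and close this level
    echo $cached;                // now actually output it
}
// prints: 12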
Related
When I do:
print_r(ob_list_handlers());
I get:
Array ( [0] => default output handler [1] => W3TC\Generic_Plugin::ob_callback [2] => Weglot::treatPage )
It seems that every time ob_start() is called, it adds a new level (a new index) to the output buffer stack.
How can I access the content of a specific level instead of just the default one?
Yes, you are exactly right: each ob_start() adds one level to the stack. If you want to accumulate the outputs and use them later, I recommend collecting them in an array. Every ob_start() has to be closed with a matching ob_end_...() call. You can either open and close a buffer each time:
$output = array();
ob_start();
echo("<h1>hello</h1>");
array_push($output, ob_get_contents());
ob_end_clean();
ob_start();
echo("<h3> world</h3>");
array_push($output, ob_get_contents());
ob_end_clean();
echo $output[0]." ".$output[1];
or nest the levels and close them all at the end:
$output = array();
ob_start();
echo("<h1>hello</h1>");
array_push($output, ob_get_contents());
ob_start();
echo("<h3> world</h3>");
array_push($output, ob_get_contents());
ob_end_clean();
ob_end_clean();
echo $output[0]." ".$output[1];
Either way, it makes no difference. I hope this helps.
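As for reading a specific level (the related question above): ob_get_contents() only ever returns the contents of the topmost buffer. A rough sketch for inspecting deeper levels is below; note that it closes every buffer it reads, so it is not something to run casually inside a live W3TC/Weglot buffer stack:
// Collect the contents of every open buffer, innermost level first.
// Each ob_get_clean() returns the top buffer's contents and removes that level.
$levels = array();
while (ob_get_level() > 0) {
    $levels[ob_get_level()] = ob_get_clean();
}
print_r($levels);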
I have this code:
$target_url = "http://mysiteexmpl.com/";
$html = new simple_html_dom();
$html->load_file($target_url);

// here I find specific links and start looping over each one
foreach ($html->find('a.link') as $link) {
    $newtarget_url = $link->href;

    // here I open each URL that I find as a new document
    $newhtml = new simple_html_dom();
    $newhtml->load_file($newtarget_url);

    // getting the price
    foreach ($newhtml->find('div.pprice') as $price) {
        $price = preg_replace("/[^0-9.]/", "", $price);
        echo '<br>';
    }

    // getting other info and so on
    foreach ($newhtml->find('div.prohead > h1') as $title) {
        $title = $title->innertext;
        echo '<br>';
    }

    // here I execute several queries and copy images from the remote site to mine
}
The problem is that there are 21 target URLs. If I only echo $newtarget_url and don't execute the queries, all 21 are printed; but when I execute the full code with the queries, the loop stops at the 7th URL and doesn't run over all 21 URLs it is supposed to.
Is this a memory leak problem or something else? How can I debug it? How can the code above be optimized?
Thank you in advance for your time.
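One thing worth ruling out (a sketch, assuming memory and the execution-time limit are the culprits, which the question does not confirm): simple_html_dom holds on to each parsed document unless you call its clear() method, and PHP's max_execution_time can also cut a long loop short:
set_time_limit(0);               // rule out the execution-time limit
ini_set('memory_limit', '256M'); // arbitrary value, raise only while debugging

foreach ($html->find('a.link') as $link) {
    $newhtml = new simple_html_dom();
    $newhtml->load_file($link->href);

    // ... extract the price and title, run the queries, copy the images ...

    $newhtml->clear();           // release the DOM tree held by simple_html_dom
    unset($newhtml);
}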
I have a project where I am using ob_start() to gather output from a PHP file. The problem is that sometimes I need the contents of the PHP file 20-30 times in one call.
I'd like to do something like file_get_contents({file}) and then use that string for the ob_start() call. However, all the examples I've seen use an include() call to get the script each time.
Is there a way to load the script one time but use it several times in the ob_start() calls?
ob_start();
include "file.php";
$output = ob_get_clean();
What I would like to do:
$script = file_get_contents($file);
$output = '';
begin loop;
ob_start();
{somehow make $script execute as code}
$output .= ob_get_clean();
end loop;
You can just use include over-and-over again.
$file = "SomeScript.php"
$output = '';
begin loop;
ob_start();
include $file
$output .= ob_get_clean();
end loop;
If you're dead-set on doing it the way you're describing, you can do the following, but it's weird:
$script = file_get_contents($file);
$output = '';
begin loop;
ob_start();
eval($script); // note: eval() expects the code without the opening <?php tag
$output .= ob_get_clean();
end loop;
In my opinion, a much better solution would be to make the script a function that returns its results instead of printing them. So let's say you put the "script" into a function called foo(). Then you could do this:
$file = "SomeScript.php"
include $file
$output = '';
begin loop;
$output .= foo();
end loop;
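For illustration, a hypothetical SomeScript.php rewritten that way, returning its markup instead of echoing it:
<?php
// SomeScript.php (hypothetical): build the output and return it,
// so the caller needs no output buffering at all.
function foo()
{
    $html  = "<h1>Hello</h1>";
    $html .= "<p>Generated at " . date('Y-m-d H:i:s') . "</p>";
    return $html;
}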
But if you cannot change the contents of the script, then my first two examples should work for you.
So I was hoping to be able to get by with a simple solution to read records from a database and save them to a text file that the user downloads. I have been doing this on the fly and for under 20,000 records, this works great. Over 20,000 records and I'm loading too much data into memory and PHP hits a fatal error.
My thought was to just grab everything in chunks. So I grab XX number of rows and echo them to the file and then loop to get the next XX rows until I'm done.
I am just echoing the results right now though, not building the file and then sending it for download, which I'm guessing I'll have to do.
The issue, put succinctly, is that with up to 20,000 rows the file builds and downloads perfectly; with more than that, I get an empty file.
The code:
header('Content-type: application/txt');
header('Content-Disposition: attachment; filename="export.'.$file_type.'"');
header('Expires: 0');
header('Cache-Control: must-revalidate');

// I do other things to check for records before, hence the do-while loop
$this->items = $model->getItems();

do {
    foreach ($this->items as $k => $item) {
        $i = 0;
        $tables = count($this->data['column']);
        foreach ($this->data['column'] as $table => $fields) {
            $columns = count($fields);
            $j = 0;
            foreach ($fields as $field => $junk) {
                if ($quote_output) {
                    echo '"'.ucwords(str_replace(array('"'), array('\"'), $item->$field)).'"';
                } else {
                    echo ''.$item->$field.'';
                }
                $j++;
                if ($j < $columns) {
                    echo $delim;
                }
            }
            $i++;
            if ($i < $tables) {
                echo $delim;
            }
        }
        echo "\n";
    }
} while ($this->items = $this->_model->getItems());
Very large tables won't work that way.
You have to output the data as you read it from the database. If you need it sorted, use the database's ORDER BY for that purpose.
So more or less
// assuming you use a variable such as $query to handle the DB
while (!$query->eof()) {
    $fields = $query->read_next();
    echo $fields; // with your formatting, maybe call a function...
}
The empty result is normal: if the memory is exhausted before any echo happens, nothing has been sent to the browser.
Note also that PHP has a time limit (a watchdog) that you may need to tweak. The default is defined in your php.ini. You may set it to zero if you expect the tables to grow very large.
You should replace your str_replace() with addslashes(). This will probably free some memory.
Then I suggest you save to a file, using PHP's file functions to do so: fopen() or file_put_contents().
I hope that helps!
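Putting those suggestions together, a minimal sketch of the row-by-row approach, assuming a PDO connection in $pdo and a hypothetical items table (neither appears in the original code):
set_time_limit(0); // the "watchdog" mentioned above

header('Content-Type: text/plain');
header('Content-Disposition: attachment; filename="export.txt"');

// Stream each row to the client as soon as it is read,
// instead of building the whole export in memory first.
$stmt = $pdo->query('SELECT id, name, price FROM items ORDER BY id');
while ($row = $stmt->fetch(PDO::FETCH_ASSOC)) {
    echo implode("\t", array_map('addslashes', $row)) . "\n";
    flush();
}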
Actually, this might be a simple fix. If PHP is running out of memory, it's probably because the output buffer is filling up before the file is sent. If so, simply flush() at regular intervals.
This will flush after each line:
do {
    foreach (...) {
        // assemble your output line here
        echo "\n";
        flush();
    }
} while ($this->items = $this->_model->getItems());
Flushing after each line might prove too slow, in which case add a counter and flush after every hundred, or whatever works best.
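For example, a sketch of the counter variant (100 is an arbitrary batch size):
$lines = 0;
do {
    foreach ($this->items as $k => $item) {
        // ... assemble and echo one output line here ...
        echo "\n";
        if (++$lines % 100 === 0) { // flush every 100 lines instead of every line
            flush();
        }
    }
} while ($this->items = $this->_model->getItems());
flush(); // send whatever is still buffered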
As mentioned in the manual, it's not working. I tried var_dump(); it suffers from the same problem.
ob_start();
$debugdata = print_r($var, true);
This prints the result on screen rather than storing it in a variable.
The second parameter of print_r() is $return, which allows the output to be returned as a string rather than outputting it:
$debugData = print_r($var, true);
There is no need to use output buffering for this, and in fact it cannot be used. You will need to end the output buffering before this and then restart the buffering after your print_r call:
ob_start();
// stuff
$output = ob_get_clean();
$debugData = print_r($var, true);
ob_start();
// more stuff
$output .= ob_get_clean();
EDIT: Another option is to nest output buffers and have an inner buffer do the print_r work:
ob_start(); // your original start
// stuff
ob_start();
print_r($var);
$debugData = ob_get_clean();
// more stuff
$output = ob_get_clean(); // your original end
ob_start() starts output buffering, but you also need to end the buffer and retrieve its contents.
Here are the functions you could use:
ob_get_clean() - returns the contents of the output buffer, then ends and discards the buffer.
ob_start();
print_r($foo);
$output = ob_get_clean();
ob_get_contents() - fetches the contents of the output buffer without closing or cleaning it.
ob_end_clean() - closes and cleans the buffer.
ob_start();
print_r($foo);
$output = ob_get_contents();
ob_end_clean();
There are a few other possibilities. Please make yourself familiar with the output buffering functions.
Also, a remark: you don't just assign the output of print_r() to a variable. You print things exactly as if you were printing them to the screen; with output buffering on, all output is buffered instead of being sent to stdout immediately. So first you print_r(), then you retrieve the contents of the buffer.
[EDIT]
In light of the conversation going on in the comments, I recommend having a look at the notes section of the print_r() manual. As @RomiHalasz and @cbuckley observe, due to print_r()'s internal output buffering, it cannot be used in conjunction with ob_start() while the second parameter, return, is used, as the two will collide.
You have to EITHER use output buffering and plain print_r() (with only the first parameter), or end the output buffering before you use print_r() with the second parameter.
Either this:
ob_start();
print_r($foo);
$output = ob_get_clean();
or this:
ob_start();
// blah
$output = ob_get_clean();
$output .= print_r($foo,true);
ob_start();
// blah
$output .= ob_get_clean();