PHP Output Buffer "Millennium" Bug? - php

I faced a strange issue today.
For several months I have used buffer flushing in PHP to send small strings to the client without problems.
Today I returned to the project and it turned out that my server won't send strings smaller than 512 bytes.
Here is my code:
<?php
echo "length:".$myUpcomingStringSize;
ob_flush();
flush();
sleep(1);
for ($i = 0; $i < count($allLines); $++) {
    echo $allLines[$i];
    ob_flush();
    flush();
}
?>
This code worked like a charm for the whole last year, and now it doesn't anymore. I played around a bit and added some random characters: as soon as the string size is equal to or greater than 512 bytes, the server sends the buffer content.
Can anybody imagine what issue I have to solve here? Is anyone else facing this? Or does someone know how to configure this minimum packet size?

If you changed neither the program nor the server, you should assume that the program never worked as intended. Windows systems in particular are known to buffer output until a certain number of bytes is in the output buffer. This buffering happens at the system level and thus cannot be affected by any PHP configuration.
If you know that 512 Bytes is the minimum required for the output buffer to send, then you could use something like
define('MIN_OUTPUT_LENGTH', 512);
echo str_pad("length: $myUpcomingStringSize", MIN_OUTPUT_LENGTH, "\0"), "\n";
// (If you run into trouble with the null-bytes, use space character instead)
Notes
If you do not use "userspace" output buffering, then ob_flush() is redundant.
If there is no delay in your for loop, then flushing between lines is not a good idea, especially for mobile clients, where the network tries to pack as much data as possible into a single packet. (See the sketch after these notes.)
There is a syntax error in your for loop header: the expression $++ is missing a variable identifier, probably $i.
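Putting those notes together, a minimal sketch of the padded, flushed write might look like this (send_chunk is a hypothetical helper, and 512 bytes is an assumed threshold to be tuned per server):

define('MIN_OUTPUT_LENGTH', 512);

// Hypothetical helper: pad each chunk (with spaces, per the note above) so
// the system-level threshold is always reached, then flush only the buffers
// that actually exist.
function send_chunk($chunk)
{
    echo str_pad($chunk, MIN_OUTPUT_LENGTH, ' ');
    if (ob_get_level() > 0) {
        ob_flush(); // only meaningful when a userspace buffer is active
    }
    flush(); // ask the SAPI to send the data to the client now
}

// Usage with the variables from the question:
send_chunk("length: $myUpcomingStringSize");
foreach ($allLines as $line) {
    send_chunk($line);
}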

Related

ob_flush takes a long time to execute

On my website (running Drupal), the ob_flush function takes a long time (between 10 and 100 seconds) to execute. How do I find out why? What can cause it to take this long?
Try this:
ob_start();
//Your code to generate the output
$result = ob_get_contents(); //save the contents of output buffer to a string
ob_end_clean();
echo $result;
It runs quickly for me.
[You may want to tag your question with Drupal, since this feels like it might be a Drupal issue. Specifically, I suspect that when you flush the buffer, you're writing to an outer buffer, which triggers a ton of hooks to be called to filter the data you've just written.]
I suspect that your problem is nested buffers. Drupal really likes buffers and buffers everything all over the place. Check the result of:
echo "<pre>\nBuffering level: ";
. ob_get_level() .
. "\nBuffer status:\n"
. var_dump(ob_get_status(TRUE))
. "\n</pre>";
If you've got nested buffers, then I suspect ob_flush() will do nothing for you: it just appends the contents of your inner buffer into the next outermost layer of buffering.
Nested buffers can come from Drupal itself (which the above will show), or from the settings for zlib-output-compression and output_buffering (try twiddling those, see if it changes anything).
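If nesting does turn out to be the culprit, one option is to unwind every buffer level before flushing. This is only a sketch, and it assumes you really do want everything sent immediately and that no outer buffer (Drupal's included) needs to survive:

while (ob_get_level() > 0) {
    ob_end_flush(); // flush and close the innermost buffer
}
flush(); // then push the SAPI-level buffer out to the client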
If your buffers are not nested, and the above settings do not help, then you may also want to split the operation into pieces, and run the profiler there, to see which part is taking the time:
$data = ob_get_contents(); // Grab the contents of the output buffer.
ob_end_clean();            // Erase the contents and close the buffer.
echo $data;                // Output our data - only works if there's no outer buffer!
ob_start();                // Start our buffer again.
The question then becomes, though, "what are you trying to accomplish?" What do you think ob_flush() is doing here? Because if the answer is "I want to push everything I've done so far to the browser"... then I'm afraid that ob_flush() just isn't the right way.
Set
output_buffering = Off
in php.ini, then use
<?php ob_start(); ?>
at the beginning of the page and
<?php ob_flush(); ?>
at the end of the page, to solve this problem.

Large print/echo breaks HTTP response

I am having a problem with an echo/print that returns a large amount of data. The response is broken and is as follows:
end of data
http response header printed in body
start of data
I am running the following script in my browser to replicate the problem:
<?php
// Make a large array of strings
for ($i = 0; $i < 10000; $i++) {
    $arr[] = "testing this string becuase it is must longer than all the rest to see if we can replicate the problem. testing this string becuase it is must longer than all the rest to see if we can replicate the problem. testing this string becuase it is must longer than all the rest to see if we can replicate the problem.";
}
// Create one large string from array
$var = implode("-",$arr);
// Set HTTP headers to ensure we are not 'chunking' response
header('Content-Length: '.strlen($var));
header('Content-type: text/html');
// Print response
echo $var;
?>
What is happening here?
Can someone else try this?
You might have automatic output buffering activated on your server. If the buffer overflows, it just starts pushing out the rest of the data unbuffered.
Note that something like gzip compression also implicitly buffers the output. If that's the case, an ob_end_flush() call after the headers should solve it.
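A sketch of that fix applied to the script above (assuming the implicit buffer comes from zlib.output_compression or output_buffering):

header('Content-Length: ' . strlen($var));
header('Content-Type: text/html');
if (ob_get_level() > 0) {
    ob_end_flush(); // close the implicit (e.g. zlib) output buffer
}
echo $var;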
Browsers often limit the number of characters you're allowed to pass through a GET variable.
To work around this, you could Base64-encode the string, and then decode it once you receive the response.
There are JavaScript Base64 encoding libraries available.
Like this one:
http://www.webtoolkit.info/javascript-base64.html
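On the PHP side, the encoding half of that suggestion is a one-liner; decoding would then happen in the browser with a library like the one above:

// Encode the payload so it survives transports that mangle raw characters.
echo base64_encode($var);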

How to overwrite PHP memory for security reasons?

I am working on a security script, and it seems I have run into a problem with PHP and the way PHP uses memory.
my.php:
<?php
// Display current PID
echo 'pid= ', posix_getpid(), PHP_EOL;
// The user type a very secret key
echo 'Fill secret: ';
$my_secret_key = trim(fgets(STDIN));
// 'Destroy' the secret key
unset($my_secret_key);
// Wait for something
echo 'waiting...';
sleep(60);
And now I run the script:
php my.php
pid= 1402
Fill secret: AZERTY <= User input
waiting...
Before the script ends (while it is sleeping), I generate a core file by sending a SIGSEGV signal to the process:
kill -11 1402
I inspect the corefile:
strings core | less
Here is an extract of the result:
...
fjssdd
sleep
STDIN
AZERTY <==== this is the secret key
zergdf
...
I understand that the memory is just released by unset() and not 'destroyed': the data is not actually erased (the release ends up in a call to free()).
So if someone dumps the memory of the process, even after the script execution, he could read $my_secret_key (until that memory space is overwritten by another process).
Is there a way to overwrite this memory segment, or the full memory space, after the PHP script execution?
Thanks to all for your comments.
I already know how memory is managed by the system.
Even though PHP doesn't use malloc and free directly (but wrappers such as emalloc and efree), it seems (and I understand why) that it is simply impossible for PHP to scrub memory after freeing it.
The question was more out of curiosity, and every comment seems to confirm what I previously intended to do: write a little piece of code in a memory-aware language (C?) to handle this special part, by allocating a simple string with malloc, overwriting it with XXXXXX after use, THEN freeing it.
Thanks to all
J
You seem to be lacking a lot of understanding about how memory management works in general, and specifically within PHP.
A discussion of the various salient points is redundant when you consider what the security risk is here:
So if someone dumps the memory of the process, even after the script execution
If someone can access the memory of a program running under a different uid, then they have root access and can compromise the target in so many other ways; it doesn't matter whether it's a PHP script, ssh, or an Oracle DBMS.
If someone can access the memory previously occupied by a process which has now terminated, then not only have they got root, they've already compromised the kernel.
You seem to have missed an important lesson in what computers mean by "delete operations".
It's generally not worth the cost for a computer to zero out memory on every release; instead it just "forgets" it was using that memory.
In other words, if you want to clear memory, you most definitely need to overwrite it, just as #hakre suggested.
That said, I hardly see the point of your script. PHP just isn't made for the sort of thing you are doing. You're probably better off with a small dedicated solution rather than using PHP. But this is just my opinion. I think.
I don't know if this works, but if you can, please add these lines in your tests and see the outcome:
...
// Overwrite it:
echo 'Overwrite secret: ';
for ($l = strlen($my_secret_key), $i = 0; $i < $l; $i++) {
    $my_secret_key[$i] = '#';
}
And I wonder whether or not running
gc_collect_cycles();
makes a difference. Even after the values are freed, they might still be in memory (within the script's process, or even somewhere else in the address space).
I would try whether overwriting memory with some data eventually gets handed the original locations of your variables and erases them:
$buffer = '';
for ($i = 0; $i < 1e6; $i++) {
    $buffer .= "\x00"; // grow byte by byte, hoping to reuse the freed blocks
}
As soon as PHP releases the memory, I suppose new allocations might be given the same location. It's hardly foolproof, though.

This loop won't echo my string

I have an infinite loop that sleeps for a second on each iteration, but the browser just hangs and nothing echoes. Can someone tell me why?
while (1) {
    sleep(1);
    echo 'test' . '<br>';
}
Nothing will show up until the server stops processing the script.
You could try flushing the output buffer with flush(), but it's not guaranteed to work.
Try this function from PHP.net:
function flush_buffers() {
    ob_end_flush();
    ob_flush();
    flush();
    ob_start();
}
That's because PHP sends data to the browser in big chunks. It is all about optimization: sending small pieces of data is bad for performance, so you want it sent in blocks big enough that the transfer overhead (both in speed and actual traffic) stays relatively low. A couple of small strings is just not big enough. Try adding a flush() right after the echo; it will force PHP to send the string to the browser.
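A sketch of the question's loop with that change (the ob_* call is only needed if a userspace buffer is active, which depends on your output_buffering setting):

while (1) {
    sleep(1);
    echo 'test<br>';
    if (ob_get_level() > 0) {
        ob_flush(); // empty PHP's userspace buffer, if there is one
    }
    flush(); // force the SAPI to send the string to the browser now
}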
Because it's an infinite loop, it will never finish the response or break out.

Read constant UDP stream in php

How do I go about reading data being sent almost constantly to my server?
The protocol is UDP. If I try to read in a while(1) loop, I don't get anything; it seems like the read will only echo once all the reading is done, so it waits until the loop is done reading, which it never will be. I want socket_read() to echo immediately when it gets the data. Here is the code that doesn't work. Thanks in advance.
<?php
$sock = socket_create(AF_INET, SOCK_DGRAM, SOL_UDP);
socket_bind($sock, $local, $port) or die('Could not bind to address');
//this is where the reading loop should go.
while (1) {
    echo socket_read($sock, 1024);
}
socket_close($sock);
?>
Try calling flush() immediately after that echo statement.
Something like this might help:
do {
    echo socket_read($sock, 1024);
    $status = socket_get_status($sock);
} while ($status['unread_bytes']);
OR
while ( $buffer = #socket_read($sock,512,PHP_NORMAL_READ) )
echo $buffer;
The PHP manual entry on socket_read() is a little vague when it comes to how much (if any) internal buffering it's doing. Given that you are passing 1024 in for the length, that specifies that it should return after receiving no more than 1024 bytes of data.
Disclaimer: the following is just speculation, as I have no knowledge of the internal implementation of socket_read().
If the socket_read() function is using its length parameter as a hint for an internal buffer size, you might see bad performance with small UDP packets. For example, if socket_read() waits for 1024 bytes of data regardless of the size of the packets, if you are constantly receiving 60 byte UDP packets it'll take a while for the buffer to fill and the function to return.
(Note: after looking up the "unread_bytes" field mentioned by Tim, it looks like PHP does keep internal buffers, but it makes no mention of how large or small those might be.)
In this case, socket_read() will return larger chunks of data once its buffers fill, to reduce processing resource consumption, but at the expense of higher latency. If you need the packets as fast as possible, perhaps setting a lower length would work. That would force socket_read() to return sooner, albeit at the expense of executing your loop more often. Also, if you set the length too low, your socket_read() calls might start returning incomplete packets, so you'll have to account for that in your code. (If that matters for your application, of course.)
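A sketch of that idea, under the same speculative assumptions about internal buffering (the length of 128 is an arbitrary illustration, not a recommendation):

while (true) {
    $data = socket_read($sock, 128); // smaller length: return sooner, loop more often
    if ($data === false) {
        break; // read error, bail out
    }
    echo $data;
    flush();
}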
I needed to call ob_flush(). I had never even heard of it before. It turns out my problem wasn't the loop, but the fact that PHP naturally waits until the script is done before actually sending its internal buffer to the web browser. Calling flush() followed by ob_flush() will force PHP to send whatever buffer it has stored to the browser immediately. This is needed for scripts that will not stop (infinite loops) and want to echo data to the browser. Sometimes flush() alone doesn't work, as it didn't in this case. Hope that helps.
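For reference, a sketch of the question's loop with that fix applied (this assumes an output buffer is actually active, since ob_flush() emits a notice when there is none):

while (1) {
    echo socket_read($sock, 1024);
    ob_flush(); // empty PHP's output buffer...
    flush();    // ...and tell the SAPI to send it to the browser now
}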
