I don't know whether this is expected behavior or not. I have a PHP file with contents like this:
$x = explode("\n", $y); // makes $x an array with ~65,000 elements
foreach ($x as $k) {
    // some code here
}
My script often stops on its own after ~25,000 iterations.
Why? Is this caused by a PHP default configuration setting?
This behaviour can have two causes:
1. The script's execution time exceeds what is allocated to it. Try increasing max_execution_time in php.ini.
2. The script needs more memory than it is allocated. Try increasing memory_limit in php.ini.
The default memory limit of PHP is 8MB (in standard distros, that is, not a default PHP compiled from source, because that is limitless).
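A quick sanity check, if you want to see which limit is actually in effect on your own setup:

echo ini_get('memory_limit'); // e.g. "8M" on old defaults, "128M" on modern builds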
When I do this code:
$x = array();
for ($i = 0; $i < 65000; $i++) {
    $x[$i] = $i;
}
echo (memory_get_peak_usage()/1024).'<br />';
echo (memory_get_usage()/1024).'<br />';
echo count($x);
It outputs:
9421.9375
9415.875
65000
To test this, I had to increase my memory limit, though. The script will abort with an error once it can't allocate more memory:
for ($i = 0; $i < 1e6; $i++) { // 1 million
    $x[$i] = $i;
}
It reports back:
Fatal error: Allowed memory size of 33554432 bytes exhausted (tried to allocate 32 bytes) in /Applications/MAMP/htdocs/run_app.php on line 5
For personal use (I have 16GB RAM, so it's no issue) I start my scripts with these settings:
// Settings
ini_set('error_reporting', E_ALL); // Show all errors, warnings and notices for debugging
ini_set('max_execution_time', 0);  // Lift the default 30-second execution limit
ini_set('memory_limit', '2048M');  // Allow the script to use up to 2048 megabytes (2GB)
This way you can raise the limits however you want. This won't work with online hosts unless you have a dedicated server, though. And it is VERY dangerous if you don't know what you're doing: an infinite loop will crash your browser, or even your OS, once it starts running out of RAM/resources.
In the foreach loop, pass the array as a reference. In PHP, foreach makes a copy of the array before it begins looping, so if you have an array that is 100K, foreach will allocate at the very least another 100K for processing. By passing it as a reference, you are only concerned with the size of an address.
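As a minimal sketch of that suggestion, using the array from the question:

foreach ($x as &$k) {
    // some code here, working directly on $k
}
unset($k); // break the reference that lingers after the loop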
Related
I'm creating a script that needs to exit when it hits a certain amount of memory usage. Here's my test script:
<?php
$memory_sapper = array();
$i = 0;
while (true) {
    array_push($memory_sapper, $i);
    print($i);
    print(PHP_EOL);
    print((memory_get_usage()/1024)/1024 . " Mb");
    print(PHP_EOL);
    print(memory_get_usage() . " bytes");
    print(PHP_EOL);
    if ($i == 10000) {
        print("Memory Limit: " . ini_get('memory_limit'));
        print(PHP_EOL);
        exit;
    }
    $i++;
}
When I run php -d memory_limit=0.5M test.php, memory usage starts at 0.3M and grows to 0.8M. I expected the script to error out around $i == 5000, but it doesn't; it runs right past the memory limit of 0.5M.
print("Memory Limit: " . ini_get('memory_limit')); in the script shows that the memory limit is set to 0.5M.
Even php -d memory_limit=0 test.php doesn't keep the script from running.
If I increase $i to 100k, the script finally errors at 1.38 Mb, or 1447328 bytes according to memory_get_usage. I'm unsure why it doesn't error sooner. Also, the number of bytes according to memory_get_usage isn't very close to the number of bytes recorded in the error. Here's the error message:
PHP Fatal error: Allowed memory size of 2097152 bytes exhausted (tried to allocate 2097160 bytes) in ~/test.php on line 7
Any insights would be valuable. Thank you!
"0.5M" is not a valid value for memory_limit. If you look up memory_limit in the PHP manual, it links to this FAQ about the shorthand notation it accepts, which gives exactly that example:
Note that the numeric value is cast to int; for instance, 0.5M is interpreted as 0.
So to set it to "half a megabyte", you need to specify it as a whole number of kilobytes or bytes: memory_limit=500k or memory_limit=512000
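To illustrate why "0.5M" ends up as 0, here is a rough sketch of the conversion; shorthandToBytes() is a hypothetical helper that mirrors the cast-to-int behavior the FAQ describes:

function shorthandToBytes($value) {
    $number = (int)$value; // "0.5M" becomes 0 right here; the fraction is lost
    switch (strtoupper(substr($value, -1))) {
        case 'G': return $number * 1024 * 1024 * 1024;
        case 'M': return $number * 1024 * 1024;
        case 'K': return $number * 1024;
        default:  return $number;
    }
}

var_dump(shorthandToBytes('0.5M')); // int(0)
var_dump(shorthandToBytes('500K')); // int(512000), i.e. half a megabyte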
Additional info:
I'm running this from the command line. CentOS 6, 32GB of RAM total, 2GB of memory for PHP.
I tried increasing the memory limit to 4GB, but now I get a Fatal error: String size overflow. PHP maximum string size is 2GB.
My code is a very simple test:
$Reader = new SpreadsheetReader($_path_to_files . 'ABC.xls');
$i = 0;
foreach ($Reader as $Row) {
    $i++;
    print_r($Row);
    if ($i > 10) break;
}
It only has to print 10 rows, and that takes 2 gigabytes of memory?
The error occurs at line 253 of excel_reader2.php, inside class OLERead, in the function read($sFileName). Here is the code causing the memory exhaustion:
if ($this->numExtensionBlocks != 0) {
    $bbdBlocks = (BIG_BLOCK_SIZE - BIG_BLOCK_DEPOT_BLOCKS_POS) / 4;
}
for ($i = 0; $i < $bbdBlocks; $i++) { // LINE 253
    $bigBlockDepotBlocks[$i] = GetInt4d($this->data, $pos);
    $pos += 4;
}
I solved the problem. It turned out to be somewhat unrelated to the PHP code.
The program I am writing downloads .xls, .xlsx, and .csv files from email and FTP. The .xls file that was causing the memory overflow was downloaded in ASCII mode instead of Binary.
I changed my default to binary mode, and added a check that changes it to ASCII mode for .csv files.
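For reference, a rough sketch of what that download logic could look like with PHP's FTP functions (host, credentials, and paths are placeholders):

$remoteFile = 'ABC.xls';
$localFile = '/tmp/ABC.xls';
$conn = ftp_connect('ftp.example.com');
ftp_login($conn, 'user', 'password');
// default to binary so .xls/.xlsx files survive the transfer intact
$mode = FTP_BINARY;
if (strtolower(pathinfo($remoteFile, PATHINFO_EXTENSION)) === 'csv') {
    $mode = FTP_ASCII; // plain-text CSV files may use ASCII mode
}
ftp_get($conn, $localFile, $remoteFile, $mode);
ftp_close($conn);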
I still find it strange that the program creates a 2GB string because of that. If there were no line breaks in the binary file, I could see how the entire file might end up in one string, but the file is only 286KB. So, that's strange.
When decompressing with gzinflate, I found that, under certain circumstances, the following code results in out-of-memory errors. Tested with PHP 5.3.20 on a 32-bit Linux (Amazon Linux AMI on EC2).
$memoryLimit = Misc::bytesFromShorthand(ini_get('memory_limit')); // 256MB
$memoryUsage = memory_get_usage(); // 2MB in actual test case
$remaining = $memoryLimit - $memoryUsage;
$factor = 0.9;
$maxUncompressedSize = max(1, floor($factor * $remaining) - 1000);
$uncompressedData = gzinflate($compressedData, $maxUncompressedSize);
Although I calculated $maxUncompressedSize conservatively, hoping to give gzinflate sufficient memory, I still get:
Fatal error: Allowed memory size of 268435456 bytes exhausted (tried to allocate 266143484 bytes) in foo.php on line 123
When I change $factor from 0.9 to 0.4, the error goes away in this case; in other cases 0.9 is fine.
I wonder:
Is the reason for the error really that gzinflate needs more than double the space of the uncompressed data? Is there possibly some other reason? And is $remaining really the memory remaining at the application's disposal?
It is indeed possible. IMHO, the issue lies with memory_get_usage(true).
Using true should give a higher memory usage value, because it takes everything the engine has reserved into account.
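To make that concrete, a sketch of how the headroom calculation could be based on the engine-level figure instead (Misc::bytesFromShorthand is the question's own helper):

$memoryLimit = Misc::bytesFromShorthand(ini_get('memory_limit'));
$usedByScript = memory_get_usage();         // what the script's variables consume
$reservedByEngine = memory_get_usage(true); // what the engine has actually claimed
// basing the headroom on the engine-level figure is more pessimistic, and safer:
$remaining = $memoryLimit - $reservedByEngine;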
I'm posting a strange behavior that can be reproduced (at least on apache2+php5).
I don't know if I'm doing something wrong, but let me explain what I'm trying to achieve.
I need to send chunks of binary data (let's say 30 of them) and analyze the average Kbit/s at the end:
I sum each chunk's output time and each chunk's size, then perform my Kbit/s calculation at the end.
<?php
// build my binary chunk
$var = '';
$o = 10000;
while ($o--) {
    $var .= pack('N', 85985);
}
// get the size, prepare the memory.
$size = strlen($var);
$tt_sent = 0;
$tt_time = 0;
// I send my chunk 30 times
for ($i = 0; $i < 30; $i++) {
    // start time
    $t = microtime(true);
    echo $var . "\n";
    ob_flush();
    flush();
    $e = microtime(true);
    // end time
    // the difference should represent what it takes the server to
    // transmit the chunk to the client, right?
    // add this chunk's benchmark to the total
    $tt_time += round($e - $t, 4);
    $tt_sent += $size;
}
// total result
echo "\n total: " . (($tt_sent * 8) / $tt_time / 1024) . "\n";
?>
The example above works so far (on localhost, it oscillates between 7000 and 10000 Kbit/s across different tests).
Now, let's say I want to shape the transmission, because I know the client will have enough data in each chunk to process for a second.
I decided to use usleep(1000000) to mark a pause between chunk transmissions.
<?php
// build my binary chunk
$var = '';
$o = 10000;
while ($o--) {
    $var .= pack('N', 85985);
}
// get the size, prepare the memory.
$size = strlen($var);
$tt_sent = 0;
$tt_time = 0;
// I send my chunk 30 times
for ($i = 0; $i < 30; $i++) {
    // start time
    $t = microtime(true);
    echo $var . "\n";
    ob_flush();
    flush();
    $e = microtime(true);
    // end time
    // the difference should represent what it takes the server to
    // transmit the chunk to the client, right?
    // add this chunk's benchmark to the total
    $tt_time += round($e - $t, 4);
    $tt_sent += $size;
    usleep(1000000);
}
// total result
echo "\n total: " . (($tt_sent * 8) / $tt_time / 1024) . "\n";
?>
In this last example, I don't know why, but the calculated bandwidth can jump from 72,000 Kbit/s to 1,200,000; it is totally inaccurate/irrelevant.
Part of the problem is that the measured time to output a chunk is ridiculously low each time a chunk is sent (after the first usleep).
Am I doing something wrong? Is the output buffer not synchronous?
I'm not sure how definitive these tests are, but I found it interesting. On my box I'm averaging around 170,000 kb/s. From a networked box this number goes up to around 280,000 kb/s. I guess we have to assume microtime(true) is fairly accurate, even though I've read it's operating-system dependent. Are you on a Linux-based system? The real question is: how do we calculate the kilobits transferred in a one-second period? I try to project how many chunks can be sent in one second, then store the calculated kb/s values and average them at the end. I added a sleep(1) before flush(), and this results in a negative kb/s, as is to be expected.
Something doesn't feel right, and I would be interested to know if you have improved your testing method. Good luck!
<?php
// build my binary chunk
$var = '';
$o = 10000;
// Alternative way to measure the actual bytes
$m1 = memory_get_usage();
while ($o--) {
    $var .= pack('N', 85985);
}
$m2 = memory_get_usage();
// Your size estimate
$size = strlen($var);
// Calculate the alternative byte count
$bytes = ($m2 - $m1); // 40108
// Convert to kilobytes: 1 kilobyte = 1024 bytes
$kilobytes = $size / 1024;
// Convert to kilobits: 1 byte = 8 bits
$kilobits = $kilobytes * 8;
// Display our data for the record
echo "<pre>size: $size</pre>";
echo "<pre>bytes: $bytes</pre>";
echo "<pre>kilobytes: $kilobytes</pre>";
echo "<pre>kilobits: $kilobits</pre>";
echo "<hr />";
// The test count
$count = 100;
// Initialize the total kb/s variable
$total = 0;
for ($i = 0; $i < $count; $i++) {
    // Start time
    $start = microtime(true);
    // Use an HTML comment to prevent the browser from parsing the data
    echo "<!-- $var -->";
    // End time
    $end = microtime(true);
    // Seconds it took to output the binary chunk
    $seconds = $end - $start;
    // Calculate how many chunks we could send in 1 second
    $chunks = 1 / $seconds;
    // Calculate the kilobits per second
    $kbs = $chunks * $kilobits;
    // Store the kbs; we'll average them all outside the loop
    $total += $kbs;
}
// Compute the average (data-generation) kilobits per second
$average = $total / $count;
echo "<h4>Average kbit/s: $average</h4>";
Analysis
Even though I arrive at some arbitrary value in the test, it is still a value that can be measured. Using a networked computer adds insight into what is really going on. I would have thought the localhost machine would have a higher value than the networked box, but the tests prove otherwise in a big way. On localhost we have to both send the raw binary data and receive it. This of course means two threads are sharing CPU cycles, and therefore the supposed kb/s value is in fact lower when testing in a browser on the same machine. We are really measuring CPU cycles, and we obtain higher values when the server is allowed to just be a server.
Some interesting things start to show up when you increase the test count to 1000. First, don't make the browser parse the data: it takes a lot of CPU to attempt to render raw data at such high test counts. We can watch what is going on with, say, System Monitor and Task Manager. In my case the local box is a Linux server and the networked box is XP. You can obtain some real kb/s speeds this way, and it makes it obvious that we are dynamically serving up data using mainly the CPU and network interfaces. The server doesn't replicate the data, so no matter how high we set the test count, we only need 40 kilobytes of memory space. So 40 kilobytes can generate 40 megabytes dynamically at 1000 test cases, and 400MB at 10000 cases.
I crashed Firefox on XP with a "virtual memory too low" error after running the 10000-case test several times. System Monitor on the Linux server showed some understandable spikes in CPU and network, but overall it pushed out a large amount of data really quickly and had plenty of room to spare. Running 10000 cases on Linux a few times actually spun up the swap drive and pegged the server's CPU. The most interesting fact, though, is that the values I obtained above only changed when I was both receiving in Firefox and transmitting in Apache while testing locally. I practically locked up the XP box, yet my network value of ~280,000 kb/s did not change on the printout.
Conclusion:
The kb/s value we arrive at above is virtually useless, other than to prove that it's useless. The test itself, however, shows some interesting things. At high test counts I believe I could actually see some physical buffering going on in both the server and the client. Our test script dumps the data to Apache and is then released to continue its work; Apache handles the details of the transfer, of course. This is nice to know, but it proves we can't measure the actual transmission rate from our server to the browser this way. We can measure our server's data-generation rate, I guess, if that's meaningful in some way. And guess what: flushing actually slowed down the speeds. There's a penalty for flushing. In this case there is no reason for it, and removing flush() actually speeds up our data-generation rate. Since we are not dealing with networking, our value above is actually more meaningful kept as kilobytes. It's useless anyway, so I'm not changing it.
I'm making a little benchmark class to display page load time and memory usage.
Load time already works, but when I display the memory usage, it doesn't change.
Example:
$conns = array();
ob_start();
benchmark::start();
$conns[] = mysql_connect('localhost', 'root', '');
benchmark::stop();
ob_flush();
uses the same memory as
$conns = array();
ob_start();
benchmark::start();
for ($i = 0; $i < 1000; $i++) {
    $conns[] = mysql_connect('localhost', 'root', '');
}
benchmark::stop();
ob_flush();
I'm using memory_get_usage(true) to get the memory usage in bytes.
memory_get_usage(true) will show the amount of memory allocated by the PHP engine, not the amount actually used by the script. It's very possible that your test script hasn't required the engine to ask for more memory.
For a test, grab a large(ish) file and read it into memory. You should see a change then.
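Something along these lines should show it (the file path is a placeholder):

$before = memory_get_usage(true);
$data = file_get_contents('/path/to/a/large/file.bin'); // any file of a few MB
$after = memory_get_usage(true);
echo "engine claimed " . ($after - $before) . " more bytes\n";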
I've successfully used memory_get_usage(true) to track the memory usage of web-crawling scripts, and it's worked fine (since the goal was to slow things down before hitting the system memory limit). The one thing to remember is that it doesn't change based on actual usage; it changes based on the memory requested by the engine. So what you end up seeing are sudden jumps instead of slow growth (or shrinkage).
If you set the real_usage flag to false, you may be able to see very small memory changes; however, this won't help you monitor the true amount of memory PHP is requesting from the system.
(Update: To be clear the difference I describe is between memory used by the variables of your script, compared to the memory the engine requested to run your script. All the same script, different way of measuring.)
I'm no guru on PHP's internals, but I would imagine an echo does not affect the amount of memory used by PHP, as it just outputs something to the client.
It could be different if you enable output buffering.
The following should make a difference:
$result = null;
benchmark::start();
for ($i = 0; $i < 10000; $i++) {
    $result .= 'test';
}
benchmark::stop();
Look at:
for ($i = 0; $i < 1000; $i++) {
    $conns[] = mysql_connect('localhost', 'root', '');
}
You could have looped to 100,000 and nothing would have changed; it's the same connection. No resources are allocated for it, because the linked list remembering them never grew. Why would it grow? There's already (presumably) a valid handle at $conns[0]. It won't make a difference in memory_get_usage(). You did test $conns[15] to see if it worked, yes?
Can root@localhost have multiple passwords? No. Why would PHP bother to open another connection just because you told it to? (Tongue in cheek.)
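A sketch of that reuse behavior; with the old mysql extension, passing true as the fourth ($new_link) argument is what forces a genuinely new connection:

$a = mysql_connect('localhost', 'root', '');
$b = mysql_connect('localhost', 'root', '');       // same underlying link as $a
$c = mysql_connect('localhost', 'root', '', true); // $new_link = true: a real second connection
var_dump($a === $b); // bool(true)
var_dump($a === $c); // bool(false)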
I suggest running the same thing via the CLI through Valgrind to see the actual heap usage:
valgrind /usr/bin/php -f foo.php .. or something similar. At the bottom, you'll see what was allocated, what was freed, and garbage collection at work.
Disclaimer: I do know my way around PHP internals, but I am no expert in that deliberately obfuscated maze written in C that Zend calls PHP.
echo won't change the allocated number of bytes (unless you use output buffers).
The $i variable is unset after the for loop, so it's not changing the number of allocated bytes either.
Try an output-buffering example:
ob_start();
benchmark::start();
for ($i = 0; $i < 10000; $i++) {
    echo 'test';
}
benchmark::stop();
ob_flush();