About PHP’s memory usage - php

My PHP application on Windows + Apache stops with “Out of memory (allocated 422313984) (tried to allocate 45792935 bytes)”.
I can’t understand why it stops, because my machine has 4 GB of physical memory and I’ve set the memory_limit directive to -1 in php.ini. I’ve also restarted Apache.
I think 4 GB should be more than enough to allocate 422313984 + 45792935 bytes.
Is there another PHP or Apache setting that limits memory use?
I also collected performance counters: the machine’s peak memory usage was 2 GB in total, and the httpd process used 1.3 GB.
I can’t show the code, but it fetches 30,000 rows of 199 bytes each from the DBMS and parses each one with simplexml_load_string() in a loop.
The code finishes normally if the data is small or if I shorten the loop, say from 30,000 iterations to 1,000.
Also, the first run after restarting Apache succeeds.
I suspect some kind of memory leak.
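The shape of the loop is roughly this (placeholder table and column names only, not the real code):
<?php
// Rough shape of the failing loop; table/column names are placeholders.
$stmt = $pdo->query('SELECT id, payload_xml FROM some_table'); // ~30,000 rows, ~199 bytes each
$parsed = array();
while ($row = $stmt->fetch(PDO::FETCH_ASSOC)) {
    $xml = simplexml_load_string($row['payload_xml']); // parse each row's XML
    $parsed[] = $xml;                                  // SimpleXML objects accumulate here
}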
I echoed PHP_INT_SIZE and it printed 4, so my PHP is apparently a 32-bit build.
If the memory problem comes from that, as Álvaro G. Vicario points out below, can I fix it by switching to a 64-bit build of PHP? And where can I get a 64-bit build of PHP for Windows? I can’t find one on http://windows.php.net

«Out of memory» messages (not to be confused with «Allowed memory size exhausted» ones) always indicate that the PHP interpreter literally ran out of memory. There's no PHP or Apache setting you can tweak; the computer is simply not able to feed PHP any more RAM. Common causes include:
Scripts that use too much memory.
Memory leaks or bugs in the PHP interpreter.
SimpleXML is by no means a lightweight extension. On the contrary, its ease of use and handy features come at a cost: high resource consumption. Even without seeing a single line of code, I can assure you that SimpleXML is totally unsuitable for creating an XML file with 30,000 items. A PHP script that uses 2 GB of RAM can only take down the whole server.
Nobody likes changing a base library in the middle of a project, but you'll eventually need to do so. PHP provides a streaming writer called XMLWriter. It's really not much harder to use and it provides two benefits:
It's way less resource intensive, since it doesn't create the complex objects that SimpleXML uses.
You can flush partial results to file, or even write to the file directly.
With it, I'm sure your 2 GB script can run in a few MB.
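For instance, here is a minimal sketch of the streaming approach; the element and file names below are placeholders, not taken from your code:
<?php
// Minimal XMLWriter sketch: stream rows straight to a file instead of
// building a big in-memory tree. Element/file names are placeholders.
$writer = new XMLWriter();
$writer->openUri('output.xml');           // write directly to disk
$writer->startDocument('1.0', 'UTF-8');
$writer->startElement('rows');

foreach ($rows as $row) {                 // $rows stands in for your DBMS fetch loop
    $writer->startElement('row');
    $writer->writeElement('id', (string) $row['id']);
    $writer->writeElement('value', $row['value']);
    $writer->endElement();                // </row>
    $writer->flush();                     // push buffered output to the file, freeing memory
}

$writer->endElement();                    // </rows>
$writer->endDocument();
$writer->flush();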

Related

PHP memory_get_usage() on empty PHP script

I decided to take a look at how much memory was being allocated to a few of my PHP scripts, and found it to be peaking at about 130KiB. Not bad, I thought, considering what was going on in the script.
Then, I decided to see what the script started at. I expected something around 32KiB.
I got 121952 bytes instead. After that, I tried a completely empty script:
<?php
echo memory_get_usage();
It also started with the same amount of memory allocated.
Now, obviously, PHP is going to allocate some memory to the script before it is run, but this seems a bit excessive.
However, it doesn't seem to vary at all based on how much memory is available to the system at the time. I tried consuming more system memory by opening other processes, but the pre-allocated amount stayed at exactly the same number of bytes.
Is this at all configurable on a per script basis, and how does PHP determine how much it will allocate to each script?
Using PHP Version 5.4.7
Thanks.
The memory_get_usage function directly queries PHP's memory allocator to get this information. It reports how much memory is used by PHP itself, not how much the whole process or even the system as a whole is using.
If you do not pass in an additional true argument, what you get back is the exact amount of memory that your code uses; this will never be more than what memory_get_usage(true) reports.
If you do call memory_get_usage(true) you will get back the size of the heap the allocator has reserved from the system, which includes memory that has not been actually used by your code but is directly available to your code.
When your script needs more memory than what is already available to the allocator, the latter will reserve another big chunk from the OS and you will see memory_get_usage(true) jump up sharply while memory_get_usage() might only increase by a few bytes.
The exact strategy the allocator uses to decide when and how much memory to allocate is baked into PHP at compilation time; you would have to edit the source and compile PHP to change this behavior.
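For example (the exact numbers will vary with your PHP version and build):
<?php
// Compare exact usage with the allocator's reserved heap.
echo memory_get_usage(), "\n";        // bytes actually used by your code
echo memory_get_usage(true), "\n";    // bytes the allocator has reserved from the OS

$data = str_repeat('x', 5 * 1024 * 1024);   // allocate roughly 5 MB

echo memory_get_usage(), "\n";        // grows by about the size of the string
echo memory_get_usage(true), "\n";    // jumps in larger chunks, if it grows at all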

Understanding Xdebug memory delta increase

I have the following line in my trace file:
0.5927 12212144 2780040.00 -> require_once(E:\web\lib\nusoap\nusoap.php) E:\web\some_path\file.php:28
I know that requiring this file will cost 2.7MB of memory. Is it normal that simply requiring the file will cost that much? What impacts the memory cost when requiring a file?
I have another 13 lines that are requires and that cost at least 350,000 KB of memory each. I have two more lines that cost 1 MB each. Again, is this sort of thing normal?
Edit #1:
I started looking into this because of a memory leak. We have a script whose memory usage spikes, but when it comes back down there is a net increase of 10 MB or more of RAM.
At one point, when Apache reaches 450,000 MB used, we start getting out-of-memory errors like these:
PHP Fatal error: Out of memory (allocated x) (tried to allocate y bytes) in /path_to/file.php(1758) on line z
Yes, this is quite normal. The nusoap library is quite large, and internally PHP stores it as a blown-up binary representation. You need to realize that the require itself isn't what takes up the space; it's the included file.
I don't quite understand where your ".00" at the end comes from though. I've just checked the code and it does not create a floating point number.
cheers,
Derick
Again, is this sort of thing normal?
Yes, that is normal. If you want to understand the delta, look into the xdebug source-code it explains it fairly well. Also read the xdebug documentation first, IIRC the website tells you that you should not take these numbers for real (and it looks like in your question you do somehow).
Also take care that xdebug is not for production use. If you need production memory usage, you need other tools.
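For example, one lightweight option that works without Xdebug is PHP's own counters; where you send the numbers is up to you:
<?php
// Log PHP's own memory counters at the end of each request; no Xdebug needed.
register_shutdown_function(function () {
    error_log(sprintf(
        'memory peak: %d bytes, at shutdown: %d bytes',
        memory_get_peak_usage(true),
        memory_get_usage(true)
    ));
});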

Memory leakage in php unrelated to GC?

I have a PHP script that takes an image, processes it, and then writes the new image to file. I'm using Imagick/ImageMagick with PHP 5.3.8 under FastCGI. After reading around, I thought the garbage collection functions might help, but they haven't stopped PHP's memory usage (as shown in top) from growing to triple digits. I used to run this script from cron.
<?php
var_dump(gc_enabled()); // true
var_dump(gc_collect_cycles()); // number comes out to 0
?>
Not sure what to do. So far the only thing that keeps PHP in check is running 'service php-fpm reload' every hour or so. Would using Imagick as a shared extension instead of a statically compiled one help? Any suggestions or insight are greatly appreciated.
Two options:
Farm out the work through Gearman or the like to a script that will die completely. Generally I run my workers through a certain number of jobs and then have them die; they get restarted by supervisor in my setup, so it's not a problem. The death after N jobs just avoids memory issues (see the sketch after this list).
As of 5.4 this might help: http://ca3.php.net/manual/en/function.apache-child-terminate.php
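Roughly what the first option looks like; the job-fetching and image-processing functions below are placeholders for whatever your queue and script actually do:
<?php
// Worker that exits after a fixed number of jobs so any leaked memory goes
// back to the OS; supervisor restarts the process. fetch_next_job() and
// process_image() are placeholders, not real APIs.
$maxJobs = 100;
$done = 0;
while ($done < $maxJobs) {
    $job = fetch_next_job();       // e.g. a gearman worker callback, a DB queue, ...
    if ($job === null) {
        sleep(1);                  // nothing to do, wait a bit
        continue;
    }
    process_image($job);           // the imagick work from your script
    $done++;
}
exit(0);                           // die on purpose; the process manager restarts us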
A note about built-in vs. external libraries: I haven't played with this aspect of ImageMagick, but I saw it with GD. You get a much lower memory value from the PHP functions when you're using the external library, but the actual memory usage is nearly equal.
A good start to check for memory leaks is valgrind.
If PHP has lots of memory available, it doesn't bother to wipe memory, since it doesn't think it needs to. As it uses more, or as other applications start using more memory, it will clear what it can.
You can force a variable's memory to be cleared by setting it to NULL, but unset() is recommended; you shouldn't need to force PHP to use less memory, as it will clean up by itself.
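For example, a minimal sketch assuming the script loops over a list of files with Imagick (the processing call is just an example):
<?php
// Free each image explicitly inside the loop instead of waiting for PHP.
// $files is assumed to be a list of image paths.
foreach ($files as $file) {
    $image = new Imagick($file);
    $image->thumbnailImage(200, 0);                 // whatever processing you actually do
    $image->writeImage('thumb_' . basename($file));
    $image->clear();                                // release ImageMagick's image data now
    unset($image);                                  // and drop the PHP variable too
}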
But otherwise, a snippet of your code is required to answer your question.

Zend php memory memory_limit

All,
I am working on a Zend Framework based web application. We keep encountering out of memory errors on our dev server:
Allowed memory size of XXXX bytes exhausted (tried YYYY...
We keep increasing memory_limit in php.ini, but it is now up over 1000 megs. What is a normal memory_limit value? What are the usual suspects in php/Zend for running out of memory? We are using the Propel ORM.
Thanks for all of the help!
Update
I cannot reproduce this error in my Windows environment. If I set memory_limit low (say 16M), I get the same error, but the "tried to allocate" amount is always something reasonable. For example:
(tried to allocate 13344 bytes)
If I set the memory limit very low on the (Fedora 9) server (such as 16M), I get the same thing: consistent, reasonable out-of-memory errors. However, even when the memory limit is set very high on our server (128M, for example), maybe once a week I will get a crazy, huge memory error: (tried to allocate 1846026201 bytes). I don't know if that sheds any more light on what is going on. We are using Propel 1.5. It sounds like the actual release is coming out later this month, but it doesn't look like anyone else is having this problem with it anyway, so I don't know that Propel is the problem. We are using Zend Server with PHP 5.2 on the Linux box, and 5.3 locally.
Any more ideas? I have a ticket out to get Xdebug installed on the Linux box.
Thanks,
-rep
Generally speaking, with PHP 5.2 and/or PHP 5.3, I tend to consider that more than 32M for memory_limit is "too much":
Using Frameworks / ORM and stuff like this, 16M is often not enough
Using 32M is generally enough for the kind of web-applications I'm working on (typical websites)
Using more than 64M means the server will not be able to handle as many users as we'd like.
When it comes to a script reaching memory_limit, the usual problem is trying to load too much data into memory; a couple of examples:
Loading a big file in memory, with functions such as file or file_get_contents, or XML-related functions/classes
Creating a too big array of data
Creating too many objects
Considering you are using an ORM, you might be in a situation where:
You are doing some SQL query that returns a lot of rows
Your ORM is converting each row in objects, putting those in an array
In which case a solution would be to load less data:
using pagination, for instance (see the sketch after this list)
or trying to load data as arrays instead of objects (I don't know if this is possible with Propel, but it is with Doctrine, so maybe Propel has some way of doing that too?)
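For instance, a rough sketch of the pagination idea, using plain PDO since I don't know which fetch helpers your Propel version exposes; the table and column names are placeholders:
<?php
// Process rows page by page instead of loading everything at once.
// Plain PDO for illustration; Propel/Doctrine have their own limit/offset helpers.
$pageSize = 1000;
$offset = 0;
do {
    $stmt = $pdo->prepare('SELECT id, data FROM big_table ORDER BY id LIMIT :limit OFFSET :offset');
    $stmt->bindValue(':limit', $pageSize, PDO::PARAM_INT);
    $stmt->bindValue(':offset', $offset, PDO::PARAM_INT);
    $stmt->execute();
    $rows = $stmt->fetchAll(PDO::FETCH_ASSOC);

    foreach ($rows as $row) {
        // ... process one row, then forget it ...
    }

    $offset += $pageSize;
} while (count($rows) === $pageSize);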
What exactly is your application doing at the time it runs out of memory? There can be a lot of causes for this; I'd say the most common is allocating too much data to an array. Is your application doing anything along those lines?
You have one of two things happening, perhaps both:
You have a runaway process somewhere that isn't ending when it should be.
You have algorithms that throw lots of data around, such as huge strings or arrays or objects, and are making needless copies instead of processing just what they need and discarding what they don't.
I think this has something to do with deployment from Cruise Control. I only get the very high (on the order of gigabytes) memory error when someone is deploying new code (or just after new code has been deployed). That makes some sense, too, since the error always points to a line that is a "require_once." Each time I get an error:
Fatal error: Out of memory (allocated 4456448) (tried to allocate 3949907977 bytes) in /directory/file.php on line 2
I have replaced the "require_once" line with:
class_exists('Ingrain_Security_Auth') || require('Ingrain/Security/Auth.php');
I have replaced that line in 3 files so far and have not had any more memory issues. Can anyone shed some light on what might be going on? I am using Cruise Control to deploy.

Increasing PHP memory_limit. At what point does it become insane?

In a system I am currently working on, there is one process that loads large amount of data into an array for sorting/aggregating/whatever. I know this process needs optimising for memory usage, but in the short term it just needs to work.
Given the amount of data loaded into the array, we keep hitting the memory limit. It has been increased several times, and I am wondering: is there a point where increasing it becomes generally a bad idea, or is it only a matter of how much RAM the machine has?
The machine has 2GB of RAM and the memory_limit is currently set at 1.5GB. We can easily add more RAM to the machine (and will anyway).
Have others encountered this kind of issue? and what were the solutions?
The memory_limit configuration for PHP running as an Apache module to serve webpages has to take into consideration how many Apache processes can run at the same time on the machine; see the MaxClients configuration option for Apache.
If MaxClients is 100 and you have 2,000 MB of RAM, a very quick calculation shows that you should not use more than 20 MB for the memory_limit value (because 20 MB × 100 clients = 2,000 MB, i.e. the total amount of memory your server has).
And this is without considering that there are probably other things running on the same server, like MySQL, the system itself, ... And that Apache is probably already using some memory for itself.
Of course, this is also a "worst case scenario" that assumes each PHP page is using the maximum amount of memory it can.
In your case, if you need such a big amount of memory for only one job, I would not increase the memory_limit for PHP running as an Apache module.
Instead, I would launch that job from the command line (or via a cron job), and specify a higher memory_limit specifically in this one and only case.
This can be done with the -d option of php, like:
$ php -d memory_limit=1GB temp.php
string(3) "1GB"
Considering, in this case, that temp.php only contains:
var_dump(ini_get('memory_limit'));
In my opinion, this is way safer than increasing the memory_limit for the PHP module for Apache -- and it's what I usually do when I have a large dataset, or some really heavy stuff I cannot optimize or paginate.
If you need to define several values for the PHP CLI execution, you can also tell it to use another configuration file, instead of the default php.ini, with the -c option:
php -c /etc/phpcli.ini temp.php
That way, you have:
/etc/php.ini for Apache, with low memory_limit, low max_execution_time, ...
and /etc/phpcli.ini for batches run from command-line, with virtually no limit
This ensures your batches will be able to run, while you still keep safeguards for your website (memory_limit and max_execution_time being security measures).
Still, if you have the time to optimize your script, you should; for instance, in the kind of situation where you have to deal with lots of data, pagination is a must-have ;-)
Have you tried splitting the dataset into smaller parts and processing only one part at a time?
If you fetch the data from a file on disk, you can use the fread() function to load smaller chunks, or use some sort of unbuffered DB query if the data comes from a database.
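For example, a minimal fread() sketch; the file name and chunk size are arbitrary:
<?php
// Read and process a large file in fixed-size chunks instead of loading it whole.
$handle = fopen('large_data_file.txt', 'rb');   // placeholder file name
if ($handle === false) {
    die('Could not open the file');
}
while (!feof($handle)) {
    $chunk = fread($handle, 8192);   // 8 KB at a time; pick whatever size suits you
    // ... process/aggregate $chunk here, then let it be overwritten ...
}
fclose($handle);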
I haven't kept up with PHP since v3.something, but you could also use a form of cloud computing: a 1 GB dataset seems big enough to be worth processing on multiple machines.
Given that you know that there are memory issues with your script that need fixing and you are only looking for short-term solutions, then I won't address the ways to go about profiling and solving your memory issues. It sounds like you're going to get to that.
So, I would say the main things you have to keep in mind are:
Total memory load on the system
OS capabilities
PHP is only one small component of the system. If you allow it to eat up a vast quantity of your RAM, then the other processes will suffer, which could in turn affect the script itself. Notably, if you are pulling a lot of data out of a database, then your DBMS might require a lot of memory to create the result sets for your queries. As a quick fix, you might want to identify any queries you are running and free the results as soon as possible, to give yourself more memory for a long job run.
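For example, with mysqli (other drivers have equivalents, such as PDOStatement::closeCursor()); the query here is only a placeholder:
<?php
// Free each result set as soon as you are done with it instead of keeping
// several large result sets alive for the whole run.
$result = $mysqli->query('SELECT id, amount FROM orders');   // placeholder query
while ($row = $result->fetch_assoc()) {
    // ... aggregate what you need from $row ...
}
$result->free();   // release the result set's memory right away
unset($result);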
In terms of OS capabilities, you should keep in mind that 32-bit systems, which you are likely running on, can only address up to 4GB of RAM without special handling. Often the limit can be much less depending on how it's used. Some Windows chipsets and configurations can actually have less than 3GB available to the system, even with 4GB or more physically installed. You should check to see how much your system can address.
You say that you've increased the memory limit several times, so obviously this job is growing larger and larger in scope. If you're up to 1.5 GB, then even installing 2 GB more RAM sounds like it will just be a short reprieve.
Have others encountered this kind of issue? and what were the solutions?
I think you probably already know that the only real solution is to break down and spend the time to optimize the script soon, or you'll end up with a job that will be too big to run.
