I have a relatively large PHP codebase (thousands of lines) for an application built with CodeIgniter. To be able to trace performance issues and generate statistics and such, the development servers run Xhprof. This works fine, provided that the script actually reaches the logging part.
However, I've now run into a situation where the script just times out. On the development servers, it gives a timeout ("Server not found") error and sometimes even crashes the Apache process. No Xhprof file is generated, and the CodeIgniter logging system produces nothing. Error reporting IS enabled.
On a live environment (well, a mirror of the live server), the application actually generates an error:
Fatal error: Allowed memory size of 33554432 bytes exhausted (tried to allocate 261900 bytes) in /home/www/application/system/database/drivers/mysql/mysql_driver.php on line 493
This, and having a clue about how to reproduce the error, gives me somewhat of a hunch where to start hunting for a solution. But it's a time-consuming job.
I'm looking for a way to trace WHERE the actual "memory leak" is happening, instead of manually debugging line after line. Any suggestions are greatly appreciated.
UPDATE: THE LOW MEMORY OF THE SERVER IS NOT THE PROBLEM. On a development server with more memory, the same problem occurs. The problem was an infinite loop allocating more memory than my server could handle. The question remains: how can these errors be tracked down quickly?
Use xdebug. As opposed to xhprof, profiling with xdebug produces output as the script runs, which means that even if the script hangs or times out, you will be able to dissect the trace generated up to that point.
See also Profiling with xdebug to get started.
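To get a trace even when the script dies, a minimal php.ini sketch (these are the Xdebug 2 directive names, which match the PHP 5.x era here; Xdebug 3 renamed them, e.g. xdebug.mode = trace):

xdebug.auto_trace = 1          ; start a function trace for every request
xdebug.trace_output_dir = /tmp ; the trace file is written as the script runs
xdebug.show_mem_delta = 1      ; record the memory delta of every function call
xdebug.collect_params = 1      ; include summarized arguments in the trace

With show_mem_delta on, the trace file shows in which call the memory climbed, right up to the moment the script was killed.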
A common problem with programs that intend to keep statistics is that they keep pointers to everything they touch, which prevents the memory manager from reclaiming that memory. If that's not the case, this probably isn't a leak; it's probably just a runaway allocation, and you probably have only a few places that might be doing it. A good start is to replace direct calls to system functions that allocate chunks of memory with your own functions, then instrument those functions, looking for unexpectedly large allocations.
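A minimal sketch of that instrumentation idea (the wrapper name and the 1 MB threshold are arbitrary choices of mine, not from the question):

function traced_file_get_contents($path)
{
    $data = file_get_contents($path);
    if (strlen($data) > 1024 * 1024) { // flag reads over ~1 MB; threshold is an assumption
        error_log(sprintf('large read: %s (%d bytes)', $path, strlen($data)));
    }
    return $data;
}

Swap this in for direct file_get_contents calls, and the log tells you which call sites pull in the big chunks.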
The quoted limit (32 MB) isn't very big by modern standards. It could easily be the case that there's nothing wrong except that the process has an unreasonably low limit.
Related
We use Apache (32-bit) on Windows in combination with PHP 7.2.9. It works most of the time, but after a lot of refreshing ("a lot" being a random number of times after each Apache restart) we get this error: Fatal error: Out of memory (allocated 27262976) tried to allocate 4096 bytes in [random file, always a different one] on line x.
The weird thing is that it keeps giving the exact same error until we restart Apache; then it works for a couple of hours.
Also weird is that we set 512M as the memory limit in php.ini, but it says allocated 27262976, which is exactly 26 MB. We have 2 GB+ of RAM free, so that isn't the problem.
It would be great if anyone knows how to solve this.
Thanks,
Lars
Most probably the memory just gets fragmented. (I had similar issues before.) You have to let garbage collection do more work while your code is running.
One way to do that
You have to identify the part of the whole process where you create the biggest arrays or objects, and split it into multiple smaller steps. I don't know what your code does, but the important part is that PHP does garbage collection at certain steps, for example when a function returns and frees up its own environment. So if you, let's say, process 10,000 files in a loop, it would be helpful to implement a queue system where you put in 100 files, call a function to deal with them, then go on processing the queue (see the sketch below). Sounds silly, I know, but it makes sense if you think about it.
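A minimal sketch of that queue idea, assuming a hypothetical per-file worker processFile() and a list $allFiles:

function processBatch(array $files)
{
    foreach ($files as $file) {
        processFile($file); // hypothetical per-file worker
    }
    // everything local to this function is freed when it returns
}

foreach (array_chunk($allFiles, 100) as $batch) { // $allFiles: assumed list of file paths
    processBatch($batch);
}

The point is that each batch's intermediate data only lives inside one function call.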
Another way
You can allocate same-size structures for variable-length data, like 50-100 KB bricks that you only partially use; this way the memory won't get as fragmented. But garbage collection handles this a lot better, and it is typically its job.
Last resort
When your memory is about halfway exhausted - which you can check by calling memory_get_usage(true) - serialize the big structure you're using, unset the variable, then unserialize it back. This should sort out the allocation problems.
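Roughly, the serialize round-trip described above (the threshold value is my own illustration; $big stands for your large structure):

$halfOfLimit = 16 * 1024 * 1024;            // assuming a 32M memory_limit
if (memory_get_usage(true) > $halfOfLimit) {
    $packed = serialize($big);              // $big: your large structure
    unset($big);                            // release the fragmented original
    $big = unserialize($packed);            // rebuild it in one contiguous pass
    unset($packed);
}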
Hope some of the above helps.
I have written a script in PHP that takes 2+ hours to execute completely; I am scraping a website in its entirety.
I used the Laravel 5.0 framework to write the script, and even after ini_set("memory_limit", -1), I am getting the same Out Of Memory error.
My RDP has 1 GB RAM. Is there ANY WAY TO PERMANENTLY FIX THIS?
I have also increased the limit in my php.ini, but it still fails :(
Unset any arrays/objects you use during your script's run time. This is most probably the case: you're extracting a lot of data, and once you compute whatever you need to compute, you don't "free" the memory.
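A hedged sketch of what that looks like in a long scrape loop (fetchPage, extractData, and saveToDatabase are hypothetical stand-ins for your own code):

foreach ($urls as $url) {      // $urls: assumed list of pages to scrape
    $html = fetchPage($url);   // hypothetical HTTP fetch
    $rows = extractData($html);// hypothetical parser
    saveToDatabase($rows);     // hypothetical persistence step
    unset($html, $rows);       // drop the big intermediates right away
    gc_collect_cycles();       // optionally force collection of cyclic garbage
}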
I decided to take a look at how much memory was being allocated to a few of my PHP scripts, and found it to be peaking at about 130KiB. Not bad, I thought, considering what was going on in the script.
Then, I decided to see what the script started at. I expected something around 32KiB.
I got 121952 bytes instead. After that, I tried testing a completely empty script:
<?php
echo memory_get_usage();
It also started with the same amount of memory allocated.
Now, obviously, PHP is going to allocate some memory to the script before it is run, but this seems a bit excessive.
However, it doesn't seem to be dynamic at all based on how much memory is available to the system at the time. I tried consuming more system memory by opening other processes, but the pre-allocated amount stayed at exactly the same number of bytes.
Is this at all configurable on a per-script basis, and how does PHP determine how much it will allocate to each script?
Using PHP Version 5.4.7
Thanks.
The memory_get_usage function directly queries PHP's memory allocator to get this information. It reports how much memory is used by PHP itself, not how much the whole process or even the system as a whole is using.
If you do not pass in an additional true argument what you get back is the exact amount of memory that your code uses; this will never be more than what memory_get_usage(true) reports.
If you do call memory_get_usage(true) you will get back the size of the heap the allocator has reserved from the system, which includes memory that has not been actually used by your code but is directly available to your code.
When your script needs more memory than what is already available to the allocator, the latter will reserve another big chunk from the OS and you will see memory_get_usage(true) jump up sharply while memory_get_usage() might only increase by a few bytes.
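A quick way to watch that happen (exact numbers will differ by PHP version and build):

<?php
printf("used: %d, reserved: %d\n", memory_get_usage(), memory_get_usage(true));
$data = str_repeat('x', 5 * 1024 * 1024); // allocate roughly 5 MB
printf("used: %d, reserved: %d\n", memory_get_usage(), memory_get_usage(true));

The first number grows by about the size of the string; the second jumps in allocator-sized chunks.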
The exact strategy the allocator uses to decide when and how much memory to allocate is baked into PHP at compilation time; you would have to edit the source and compile PHP to change this behavior.
I have the following line in my trace file:
0.5927 12212144 2780040.00 -> require_once(E:\web\lib\nusoap\nusoap.php) E:\web\some_path\file.php:28
I know that requiring this file will cost 2.7MB of memory. Is it normal that simply requiring the file will cost that much? What impacts the memory cost when requiring a file?
I have another 13 lines that are requires and that cost at least 350 000 KB of memory each. I have two more lines that cost 1 MB each. Again, is this sort of thing normal?
Edit #1:
I started to look into this because of a memory leak. We have a script whose memory usage spikes, but when it comes back down, it settles roughly 10 MB (ish) higher than before.
At one point, when Apache reaches 450 000 MB used, we start getting out of memory errors like these:
PHP Fatal error: Out of memory (allocated x) (tried to allocate y bytes) in /path_to/file.php(1758) on line z
Yes, this is quite normal. The nusoap library is quite large, and internally PHP stores it as a blown-up binary representation. You need to realize that the require itself isn't taking up the space; it's the included file that does.
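You can double-check this outside of xdebug by measuring around the require yourself; a minimal sketch using the path from the question:

<?php
$before = memory_get_usage();
require_once 'E:\web\lib\nusoap\nusoap.php';
echo 'require cost: ', (memory_get_usage() - $before), " bytes\n";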
I don't quite understand where your ".00" at the end comes from though. I've just checked the code and it does not create a floating point number.
cheers,
Derick
Again, is this sort of thing normal?
Yes, that is normal. If you want to understand the delta, look into the xdebug source code; it explains it fairly well. Also read the xdebug documentation first; IIRC the website tells you that you should not take these numbers at face value (and it looks like in your question you do).
Also be aware that xdebug is not meant for production use. If you need production memory-usage numbers, you need other tools.
All,
I am working on a Zend Framework-based web application. We keep encountering out-of-memory errors on our dev server:
Allowed memory size of XXXX bytes exhausted (tried YYYY...
We keep increasing memory_limit in php.ini, and it is now over 1000 MB. What is a normal memory_limit value? What are the usual suspects in PHP/Zend for running out of memory? We are using the Propel ORM.
Thanks for all of the help!
Update
I cannot reproduce this error in my Windows environment. If I set memory_limit low (say 16M), I get the same error, but the "tried to allocate" amount is always something reasonable. For example:
(tried to allocate 13344 bytes)
If I set the memory limit very low on the (Fedora 9) server (such as 16M), I get the same thing: consistent, reasonable out-of-memory errors. However, even when the memory limit is set very high on our server (128M, for example), maybe once a week I will get a crazy huge memory error: (tried to allocate 1846026201 bytes). I don't know if that sheds any more light on what is going on. We are using Propel 1.5. It sounds like the actual release is going to come out later this month, but it doesn't look like anyone else is having this problem with it anyway, so I don't know that Propel is the problem. We are using Zend Server with PHP 5.2 on the Linux box, and 5.3 locally.
Any more ideas? I have a ticket out to get Xdebug installed on the Linux box.
Thanks,
-rep
Generally speaking, with PHP 5.2 and/or PHP 5.3, I tend to consider that more than 32M for memory_limit is "too much":
Using frameworks / ORMs and the like, 16M is often not enough
Using 32M is generally enough for the kind of web applications I'm working on (typical websites)
Using more than 64M means the server will not be able to handle as many users as we'd like
When it comes to a script reaching memory_limit, the usual problem is trying to load too much data into memory; a couple of examples (a streaming alternative to the first one is sketched after this list):
Loading a big file into memory, with functions such as file or file_get_contents, or XML-related functions/classes
Creating too big an array of data
Creating too many objects
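For the first item, the streaming alternative looks roughly like this (the path and the per-line handler are hypothetical):

$handle = fopen('/path/to/big-file.log', 'r'); // hypothetical path
while (($line = fgets($handle)) !== false) {
    processLine($line); // hypothetical handler; only one line is in memory at a time
}
fclose($handle);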
Considering you are using an ORM, you might be in a situation where :
You are doing some SQL query that returns a lot of rows
Your ORM is converting each row into an object and putting those objects in an array
In which case a solution would be to load less data
using pagination, for instance
or trying to load data as arrays instead of objects (I don't know if this is possible with Propel, but it is with Doctrine, so maybe Propel has some way of doing that too? a PDO fallback is sketched below)
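If the ORM offers neither, a hedged fallback is to run the one heavy query through plain PDO and stream rows one at a time (the DSN, credentials, table, and handleRow are all assumptions):

$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass'); // assumed connection details
$pdo->setAttribute(PDO::MYSQL_ATTR_USE_BUFFERED_QUERY, false); // don't hold the full result set in memory
$stmt = $pdo->query('SELECT id, payload FROM big_table');      // assumed query
while ($row = $stmt->fetch(PDO::FETCH_ASSOC)) {
    handleRow($row); // hypothetical per-row processing; only one row is in memory at a time
}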
What exactly is your application doing at the time it runs out of memory? There can be a lot of causes for this. I'd say the most common would be allocating too much data to an array. Is your application doing anything along those lines?
You have one of two things happening, perhaps both:
You have a runaway process somewhere that isn't ending when it should be.
You have algorithms that throw lots of data around, such as huge strings or arrays or objects, and are making needless copies instead of processing just what they need and discarding what they don't.
I think this has something to do with deployment from CruiseControl. I only get the very high (on the order of gigs) memory error when someone is deploying new code, or just after new code has been deployed. This makes a bit of sense, too, since the error always points to a line that is a "require_once". Each time, I get an error:
Fatal error: Out of memory (allocated 4456448) (tried to allocate 3949907977 bytes) in /directory/file.php on line 2
I have replaced the "require_once" line with:
class_exists('Ingrain_Security_Auth') || require('Ingrain/Security/Auth.php');
I have replaced that line in 3 files so far and have not had any more memory issues. Can anyone shed some light on what might be going on? I am using CruiseControl to deploy.