Will enabling XDebug on a production server make PHP slower? - php
The title pretty much says it all... is it a bad idea? I'd like to have the enhanced debug messages that XDebug provides on the server.
[edit]
Just to make things clear. I'm aware there are security risks involved. Perhaps I should complement my question and give more precise reasons why I would want to do this.
Our production server hosts a testing platform as well. Sometimes we use it to test things in an environment as close to production as possible. The main thing I'm looking for is XDebug's enhanced var_dump().
This is not an app server for high traffic apps and performance is not that big of an issue. I was just curious if performance would be noticeably impacted by XDebug.
Besides, I guess I could enable it only for the VirtualHost that defines the testing sites.
Besides the obvious fact that debug messages should not be displayed in an application that is already in production, and the fact that I don't see why you would want that, there are a couple of things about it that are really bad.
The first is that when you add debugging behaviour to your server, the debug engine "attaches" to the PHP process and receives messages from the engine to stop at breakpoints. This is bad, because having another process stopping or "holding" the PHP parser introduces a significant performance hit.
Another big issue is that most debuggers, once installed, have the nasty habit of opening ports on your server, because they are not intended for production environments. As you may know, any software that opens ports on your server is opening a door to attackers.
If you need debugging in your code, implement a debugging system inside your application, or use the one your framework provides, since most frameworks have this built in. Set a configuration value, say DEBUG_ENABLED; when an exception is thrown and debugging is not enabled, redirect to a pretty page, otherwise to an ugly page with debugging information. Either way, take good care of what debugging information you expose from your server.
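A very rough sketch of that idea (DEBUG_ENABLED and render_error_page() are placeholder names, not part of any particular framework):

define('DEBUG_ENABLED', false);   // flip to true only outside production

set_exception_handler(function ($e) {
    if (DEBUG_ENABLED) {
        // ugly but useful page for developers
        echo '<pre>' . htmlspecialchars((string) $e) . '</pre>';
    } else {
        error_log((string) $e);   // keep the detail in the log, not on screen
        render_error_page();      // placeholder: show the pretty page instead
    }
});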
I hope this clarifies everything.
EDIT: As my answer apparently isn't documented well enough, you should check these sources:
PHPs XDebug tracing overhead in production
Careful: XDebug can skew your performance numbers
Finally, there is one thing I didn't say, as I thought it was sort of implicit: it's common sense not to do it! You don't put debugging instruments on your production server for the same reason that you keep them in a different environment: you need to keep unnecessary stuff away from production. Any process running on a server, no matter how light it is, will impact your performance.
Slowdown by a factor of 4
I ran some tests: just enabling the module, without actually debugging, slows a request on my development machine down from 1 second to around 4 seconds.
Removing xdebug completely (even though it was not enabled) gave us a 50% page load boost (down from 60ms to 30ms). We had xdebug sitting "dormant" (waiting for a trigger). We thought that since it was dormant it wouldn't cause any harm, but boy were we wrong.
We commented out the zend_extension line in the PHP config at around 21:43. Average load dropped from 0.4 to 0.2 per core as well.
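For reference, the change amounts to commenting out a line like the following in php.ini (the exact file and extension path vary by install):

; zend_extension=xdebug.so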
Why on earth do you want something like that? Debug before you deploy to production. It will make the app slower.
You should never keep that enabled in production.
Your application should never need to print out "those nice debug messages", as they are not nice at all to your users. They are a sign of poor testing and they will kill users' trust, especially in an enterprise/ecommerce environment.
Second, the more detailed technical information you reveal, the more you are likely to get hacked (especially if you are already revealing that there ARE in fact problems with your code!). Production servers should log errors to files, and never display them.
Speed of execution is your least concern here, but yes, it will be impacted, as will memory usage.
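For what it's worth, the "log errors to files, never display them" advice above boils down to a few php.ini lines (the log path is just an example):

display_errors = Off
log_errors = On
error_log = /var/log/php_errors.log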
Xdebug is for adding full stack traces to error logs, not for displaying errors; that is the display_errors ini value, which of course should be Off (even in development I don't want this). It does not allow remote attachment of a debugger unless you enable the remote-debugging ini setting (xdebug.remote_enable). While it is slower, if you have a mysterious PHP error like memory exhaustion or a segmentation fault, this is the only way you will see where it actually happened.
You could always clone your live server with the exactly same configuration, except that it wouldn't be public.
Then you can install XDebug on it and debug things under almost exactly the same conditions (well, load will differ between real life and the clone, but the rest will be the same).
That way you debug things in a live-like environment, but the real live site is not affected.
Note: obviously this does not apply to everyone; not everyone can easily clone a server. If you use cloud services like AWS, it is very easy. If you use server configuration tools like Ansible, Chef, or Puppet to build your servers, it is a piece of cake as well.
I know this is an old post, but since the issue with Xdebug is still there 10 years on, I'd like to point to the relevant bug report (closed as WONTFIX NOTABUG): https://bugs.xdebug.org/view.php?id=1668
Tl;dr:
Just installing xdebug will (on Linux at least) slow all PHP on the site to a crawl, with hits anywhere from 2x to 20x, even if all flags are set to OFF. DO NOT INSTALL xdebug IN PRODUCTION - EVER. Better yet, investigate less intrusive debug options.
You should never display debug error messages on a production server. It's ugly for your users and also a security risk. I'm sure it will make it a little slower too.
I tested the performance impact using this PHP benchmark tool. (Disclaimer: I built the tool.)
The answer is that the xdebug module significantly slows down code execution: from 2x to 7x depending on the test. Here are my results:
# env information
php version : 7.4.5
platform : WINNT x64
# disable xdebug extension in php.ini
$ php src/benchmark.php --iterations 1000 --time-per-iteration 50 --save xdebug_off
# enable xdebug extension
$ php src/benchmark.php --iterations 1000 --time-per-iteration 50 --save xdebug_on
# compare
$ php src/compare.php --file1 benchmark_xdebug_off_20201127-0946.txt --file2 benchmark_xdebug_on_20201127-0939.txt
------------------------------------------------
test_math OFF ON
mean : 3762 531 -85.9%
median : 4226 568 -86.6%
mode : 4655 596 -87.2%
minmum : 918 188 -79.5%
maximum : 4722 612 -87.0%
quartile 1 : 3081 490 -84.1%
quartile 3 : 4580 595 -87.0%
IQ range : 1498 105 -93.0%
std deviation : 984 87 -91.1%
normality : 11.0% 11.0%
------------------------------------------------
test_strings
mean : 1419 677 -52.3%
median : 1521 688 -54.7%
mode : 1580 974 -38.4%
minmum : 537 90 -83.2%
maximum : 1629 1071 -34.3%
quartile 1 : 1319 452 -65.7%
quartile 3 : 1582 892 -43.6%
IQ range : 262 440 67.8%
std deviation : 226 248 9.8%
normality : 6.6% 6.6%
------------------------------------------------
test_loops
mean : 8131 1208 -85.1%
median : 8617 1240 -85.6%
mode : 9109 1407 -84.6%
minmum : 3167 589 -81.4%
maximum : 9666 1435 -85.2%
quartile 1 : 7390 1116 -84.9%
quartile 3 : 9253 1334 -85.6%
IQ range : 1863 217 -88.3%
std deviation : 1425 164 -88.4%
normality : 5.6% 5.6%
------------------------------------------------
test_if_else
mean : 279630 31263 -88.8%
median : 293553 31907 -89.1%
mode : 303706 37696 -87.6%
minmum : 104279 12560 -88.0%
maximum : 322143 37696 -88.3%
quartile 1 : 261977 28386 -89.2%
quartile 3 : 307904 34773 -88.7%
IQ range : 45927 6387 -86.1%
std deviation : 39034 4405 -88.7%
normality : 4.7% 4.7%
------------------------------------------------
test_arrays
mean : 5705 3275 -42.6%
median : 5847 3458 -40.9%
mode : 6040 3585 -40.6%
minmum : 3366 1609 -52.2%
maximum : 6132 3645 -40.6%
quartile 1 : 5603 3098 -44.7%
quartile 3 : 5965 3564 -40.3%
IQ range : 361 465 28.8%
std deviation : 404 394 -2.5%
normality : 2.4% 2.4%
------------------------------------------------
You can use XDebug in production if you "do it right". You can enable the extension in a "dormant" mode that is only brought to life by requests that go to a specific host name. See details here:
http://www.drupalonwindows.com/en/content/remote-debugging-production-php-applications-xdebug
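As a rough illustration (not taken from the linked article, which uses a host-name based approach), a trigger-only setup with Xdebug 3 directive names looks something like this in php.ini; nothing activates unless a request carries the XDEBUG_TRIGGER cookie/parameter:

zend_extension=xdebug
xdebug.mode=debug
xdebug.start_with_request=trigger

Keep in mind the answers above, though: even a "dormant" Xdebug has a measurable cost, so this is a trade-off, not a free lunch.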
Related
Absurdly high memory allocations with php7.4 under apache windows
What could be the cause of these very unlikely high memory allocation attempts I have noticed lately on my production server?

PHP Fatal error: Allowed memory size of 1006632960 bytes exhausted (tried to allocate 51002234388 bytes) in D:\wp\wp-includes\load.php on line 1466

This happened in WordPress (see the error message), but also in Lime Survey. I'm running PHP 7.4.27 with Apache 2.4.21 on Windows Server 2008. The error is consistent (same number of bytes, same script, same line) and remains after a server restart. Strangely, I could get rid of the error in a Lime Survey installation simply by moving all the script files to a different folder.

Edit: Same thing again: after downloading all the script files in D:\wp via FTP, creating a new D:\wp directory, and uploading all the files back via FTP, the error vanished. What's going on here? Thank you!
The cause is most likely plugin related. I would check:
- WordPress error logs
- PHP error logs
- Apache error logs
- server error logs
- any pending cron jobs
- isolate and debug any plugin DB queries
- are there any heavy (DB-generated) reports?
Isolating which plugin can be done by using a live backup and deleting/disabling plugins one at a time. Increasing the memory limit may be done through WordPress, but may not bite unless it is also configured at the server or PHP level.
1 - Check the PHP 7.4 memory limit (check php.ini, and check the per-folder configuration on Windows).
2 - Set the memory limit in wp-config.php.
3 - Set the memory limit in .htaccess.
4 - Check the plugins.
5 - There may be a hidden malicious code file:
5.1 - look for files with the same name as their folder. Example: a folder named "theme" containing a file named ".theme.php";
5.2 - analyze your index.php, wp-config.php and .htaccess (check whether they were altered or had code injected).
6 - Analyze the logs.
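For reference, a minimal sketch of the usual places for steps 1-3 (the 512M value is purely illustrative):

; php.ini
memory_limit = 512M

# .htaccess (only when PHP runs as an Apache module)
php_value memory_limit 512M

// wp-config.php
define('WP_MEMORY_LIMIT', '512M');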
Now, after a while, I strongly suspect it was the opcache functionality that caused the error. Maybe after updating, some of the scripts conflicted with untouched ones still sitting in the opcache. Turning off the opcache did the trick (so far :)).
PHPUnit dataProvider does not get arguments from provider
I have a test suite that runs perfectly fine in my local environment, but on the server I get errors like:
Missing argument 1 for ngDateTime_Test::testSecondsSinceMidnightStr()
It looks like the providers don't inject values. I use the same version of PHPUnit locally and on the remote server. Does anyone have a clue why this might be happening?
As stated in the docs (http://pl1.php.net/opcache):
opcache.save_comments (boolean): If disabled, all documentation comments will be discarded from the opcode cache to reduce the size of the optimised code. Disabling this configuration directive may break applications and frameworks that rely on comment parsing for annotations, including Doctrine, Zend Framework 2 and PHPUnit.
So set opcache.save_comments = 1 to fix this, or set opcache.enable_cli = 0 to fix it for the command-line interpreter, where you probably invoke PHPUnit.
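In php.ini terms, the two workarounds described above would look roughly like this (pick one):

opcache.save_comments = 1   ; keep doc-comments so PHPUnit can read @dataProvider annotations
opcache.enable_cli = 0      ; or simply leave OPcache out of the CLI where PHPUnit runs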
PHP script throws an impossible error - on one server, not on another
I'm suffering from a little error and can't find a clue where to begin...

[11-Oct-2012 22:01:45] PHP Fatal error: Call to a member function getVariableIndices() on a non-object in /var/www/web1/html/inc/ProjectCache.php on line 545

Sounds quite simple, but here is the code (with line numbers):

525 $itemData = $queryItems->fetchRows();
525 foreach ($questionData as $qData) {
526     $qp = QuestionPack::createWithData($qData, $itemData);
<snip>
538     $question = $qp->createGenerator(null);
539     if (!is_object($question)) {
540         trigger_error('Could not initialize generator ...', E_USER_WARNING);
541         continue;
542     }
543
544     // Question variables
545     $vix = $question->getVariableIndices();
<snip>
597     $question->destroy();
598 }

The method createGenerator() should always return an object. I added the subsequent IF statement for debugging reasons - you never know if your thinking is completely wrong... The destroy() method sets some instance variables (references to other objects) to null, so that the garbage collector won't get stuck on circular references.

This problem occurs in exactly one of about 10,000 projects on a production system (Debian, PHP 5.3.3-7+squeeze14). Using the same script and the same data from the database, I fail to reproduce it on my Windows development system (PHP 5.3.1). I should note that hundreds of thousands of method calls are made before the script reaches this statement (about 14 seconds of processing time, because a cache is initialized during this run).

My best explanation for now is that somewhere in the megabytes of PHP script I trigger a buffer overflow or mislead the garbage collector, so that the object is released although it is still in use. However, the error always triggers in the same project, always on the same line (and it moves with this statement if I place other debugging code before it), which seems quite atypical for that theory. Restarting the webserver did (of course) not help. Dropping the previous cache file did not help either.

This is where I am stuck. Does anyone else find this weird? Anyone have an idea where to start?

Thanks, BurninLeo
I believe I have fixed a similar odd issue in PHP 5.3.2 by disabling the garbage collector, via adding zend.enable_gc = Off to php.ini (and of course restarting Apache or whatever application server is parsing the PHP).

If that doesn't work, I would suggest comparing the phpinfo() output from a working server to the one with the issue, preferably Linux to Linux or Windows to Windows; maybe something in the configuration will jump out at you. I would also raise the error level to include notices.

Finally, I would use gettype() and get_class() on $question when it's not an object, to see what it IS:
http://www.php.net/manual/en/function.gettype.php
http://www.php.net/manual/en/function.get-class.php
Possibly call debug_backtrace() as well when the error occurs:
http://php.net/manual/en/function.debug-backtrace.php
Good luck!
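A rough sketch of that diagnostic, adapted to the code in the question (purely illustrative):

$question = $qp->createGenerator(null);
if (!is_object($question)) {
    // find out what we actually got, and from where
    error_log('createGenerator() returned a ' . gettype($question));
    error_log(print_r(debug_backtrace(), true));
} else {
    error_log('createGenerator() returned an instance of ' . get_class($question));
}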
Erratic 500 error on Codeigniter app
My CI app has been working well so far. However, I've noticed that when a larger SQL query is requested (for example on the home page, where around 50 blog posts are shown) there is a serious problem. Sometimes the page loads fine; unpredictably, as I reload that same page - with no change in content - the browser keeps hanging until I get back an Apache 500 error. This happens in multiple browsers. CI error logs show nothing, and PHP error logs show nothing.

I've noticed this is not an issue with smaller queries (i.e. 20 posts), but I am unsure whether that has anything to do with the problem; after all, it does return 50 posts on some attempts. I know this is hard to explain in detail, but if anyone could give me any pointers on how to debug this I'd be very grateful. Glad to add any info.

The app is running on a Plesk 9 RHEL server, PHP 5.3.8, MySQL 5.5.17, CI 2.1.0.

PHP error log file:
-rw-rw-r-- 1 apache apache 0 May 19 10:46 php_errors.log

php.ini info:
error_log   /var/log/php_errors.log   /var/log/php_errors.log
log_errors  On                        On
Use the Sparks Debug-Toolbar: http://getsparks.org/packages/Debug-Toolbar/versions/HEAD/show Then watch the times your queries take to load, view your memory usage, etc. Slowly increase your post count from 20 to 30 to 50 to 100 and so on until the error occurs, and see if something sticks out. I suspect a PHP timeout is occurring, either because you have the timeout value configured too low (should be around 230), or because your query is really poorly written and inefficient, causing the server to take too long to return the result for a larger query.
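A rough sketch of how to surface those timings while testing (enable_profiler() is CodeIgniter's built-in profiler; the controller, model and view names here are placeholders):

class Blog extends CI_Controller
{
    public function index()
    {
        set_time_limit(120);                  // temporarily relax max_execution_time for the experiment
        $this->output->enable_profiler(TRUE); // shows query times and memory usage per request
        $data['posts'] = $this->post_model->get_posts(50); // placeholder model call
        $this->load->view('home', $data);
    }
}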
What does "zend_mm_heap corrupted" mean
All of a sudden I've been having problems with my application that I've never had before. I decided to check Apache's error log, and I found an error message saying "zend_mm_heap corrupted". What does this mean?
OS: Fedora Core 8
Apache: 2.2.9
PHP: 5.2.6
After much trial and error, I found that if I increase the output_buffering value in the php.ini file, this error goes away.
This is not a problem that is necessarily solvable by changing configuration options. Changing configuration options will sometimes have a positive impact, but it can just as easily make things worse, or do nothing at all.

The nature of the error is this:

#include <stdio.h>
#include <string.h>
#include <stdlib.h>

int main(void) {
    void **mem = malloc(sizeof(char)*3);
    void *ptr;

    /* read past end */
    ptr = (char*) mem[5];

    /* write past end */
    memcpy(mem[5], "whatever", sizeof("whatever"));

    /* free invalid pointer */
    free((void*) mem[3]);

    return 0;
}

The code above can be compiled with:

gcc -g -o corrupt corrupt.c

Executing the code with valgrind, you can see many memory errors, culminating in a segmentation fault:

krakjoe@fiji:/usr/src/php-src$ valgrind ./corrupt
==9749== Memcheck, a memory error detector
==9749== Copyright (C) 2002-2013, and GNU GPL'd, by Julian Seward et al.
==9749== Using Valgrind-3.10.1 and LibVEX; rerun with -h for copyright info
==9749== Command: ./corrupt
==9749==
==9749== Invalid read of size 8
==9749==    at 0x4005F7: main (an.c:10)
==9749==  Address 0x51fc068 is 24 bytes after a block of size 16 in arena "client"
==9749==
==9749== Invalid read of size 8
==9749==    at 0x400607: main (an.c:13)
==9749==  Address 0x51fc068 is 24 bytes after a block of size 16 in arena "client"
==9749==
==9749== Invalid write of size 2
==9749==    at 0x4C2F7E3: memcpy@@GLIBC_2.14 (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==9749==    by 0x40061B: main (an.c:13)
==9749==  Address 0x50 is not stack'd, malloc'd or (recently) free'd
==9749==
==9749==
==9749== Process terminating with default action of signal 11 (SIGSEGV): dumping core
==9749==  Access not within mapped region at address 0x50
==9749==    at 0x4C2F7E3: memcpy@@GLIBC_2.14 (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==9749==    by 0x40061B: main (an.c:13)
==9749==  If you believe this happened as a result of a stack
==9749==  overflow in your program's main thread (unlikely but
==9749==  possible), you can try to increase the size of the
==9749==  main thread stack using the --main-stacksize= flag.
==9749==  The main thread stack size used in this run was 8388608.
==9749==
==9749== HEAP SUMMARY:
==9749==     in use at exit: 3 bytes in 1 blocks
==9749==   total heap usage: 1 allocs, 0 frees, 3 bytes allocated
==9749==
==9749== LEAK SUMMARY:
==9749==    definitely lost: 0 bytes in 0 blocks
==9749==    indirectly lost: 0 bytes in 0 blocks
==9749==      possibly lost: 0 bytes in 0 blocks
==9749==    still reachable: 3 bytes in 1 blocks
==9749==         suppressed: 0 bytes in 0 blocks
==9749== Rerun with --leak-check=full to see details of leaked memory
==9749==
==9749== For counts of detected and suppressed errors, rerun with: -v
==9749== ERROR SUMMARY: 4 errors from 3 contexts (suppressed: 0 from 0)
Segmentation fault

If you didn't know, you have now figured out that mem is heap-allocated memory; the heap refers to the region of memory available to the program at runtime, because the program explicitly requested it (with malloc in our case). If you play around with the terrible code above, you will find that not all of those obviously incorrect statements result in a segmentation fault (a fatal terminating error).
I explicitly made those errors in the example code, but the same kinds of errors happen very easily in a memory-managed environment: if some code doesn't maintain the refcount of a variable (or some other symbol) in the correct way, for example if it frees it too early, another piece of code may read from already-freed memory; if it somehow stores the address wrong, another piece of code may write to invalid memory; it may be freed twice...

These are not problems that can be debugged in PHP; they absolutely require the attention of an internals developer. The course of action should be:

- Open a bug report on http://bugs.php.net
  - If you have a segfault, try to provide a backtrace.
  - Include as much configuration information as seems appropriate; in particular, if you are using opcache, include the optimization level.
- Keep checking the bug report for updates; more information may be requested.
- If you have opcache loaded, disable optimizations. I'm not picking on opcache, it's great, but some of its optimizations have been known to cause faults. If that doesn't work, even though your code may be slower, try unloading opcache first. If any of this changes or fixes the problem, update the bug report you made.
- Disable all unnecessary extensions at once. Then begin to re-enable your extensions individually, thoroughly testing after each configuration change. If you find the problem extension, update your bug report with more info.
- Profit.

There may not be any profit... I said at the start: you may be able to find a way to change your symptoms by messing with configuration, but this is extremely hit and miss, and it doesn't help the next time you get the same zend_mm_heap corrupted message; there are only so many configuration options.

It's really important that we create bug reports when we find bugs; we cannot assume that the next person to hit the bug is going to do it... more likely than not, the actual resolution is in no way mysterious, if you make the right people aware of the problem.

USE_ZEND_ALLOC

If you set USE_ZEND_ALLOC=0 in the environment, this disables Zend's own memory manager. Zend's memory manager ensures that each request has its own heap, that all memory is freed at the end of a request, and is optimized for allocating chunks of memory of just the right size for PHP.

Disabling it will disable those optimizations; more importantly, it will likely create memory leaks, since there is a lot of extension code that relies upon the Zend MM to free memory for it at the end of a request (tut, tut).

It may also hide the symptoms, but the system heap can be corrupted in exactly the same way as Zend's heap. It may seem more tolerant or less tolerant, but it cannot fix the root cause of the problem. The ability to disable it at all is for the benefit of internals developers; you should never deploy PHP with the Zend MM disabled.
I was getting this same error under PHP 5.5 and increasing the output buffering didn't help. I wasn't running APC either, so that wasn't the issue. I finally tracked it down to opcache; I simply had to disable it for the CLI. There is a specific setting for this:
opcache.enable_cli=0
Once switched, the zend_mm_heap corrupted error went away.
If you are on a Linux box, try this on the command line:
export USE_ZEND_ALLOC=0
Check for unset()s. Make sure you don't unset() references to $this (or equivalents) in destructors, and that unset()s in destructors don't cause the reference count of the same object to drop to 0. I've done some research and found that this is what usually causes the heap corruption. There is a PHP bug report about the zend_mm_heap corrupted error; see the comment [2011-08-31 07:49 UTC] f dot ardelian at gmail dot com for an example of how to reproduce it. I have a feeling that all the other "solutions" (change php.ini, compile PHP from source with fewer modules, etc.) just hide the problem.
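A hypothetical illustration of the pattern being warned about (not taken from the bug report):

class Node
{
    public $parent;   // back-reference to another Node

    public function __destruct()
    {
        // unset()ing references here can drop the refcount of an object
        // that other code still holds - the kind of situation reported
        // to trigger heap corruption on affected PHP versions
        unset($this->parent);
    }
}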
For me none of the previous answers worked, until I tried:
opcache.fast_shutdown=0
That seems to work so far. I'm using PHP 5.6 with PHP-FPM and Apache proxy_fcgi, if that matters...
In my case, the cause of this error was that one of the arrays was becoming very big. I set my script to reset the array on every iteration and that sorted the problem.
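A sketch of what "reset on every iteration" means here (transform() and process() are placeholder functions):

$rows = array();
foreach ($chunks as $chunk) {
    $rows = array();               // reset, so the array never grows across the whole run
    foreach ($chunk as $item) {
        $rows[] = transform($item);
    }
    process($rows);
}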
As per the bug tracker, set opcache.fast_shutdown=0. Fast shutdown uses the Zend memory manager to clean up its mess; this setting disables that.
I don't think there is one answer here, so I'll add my experience. I saw this same error along with random httpd segfaults. This was a cPanel server. The symptom in question was that Apache would randomly reset the connection ("No data received" in Chrome, or "connection was reset" in Firefox). These were seemingly random - most of the time it worked, sometimes it did not.

When I arrived on the scene, output buffering was OFF. Since this thread hinted at output buffering, I turned it on (=4096) to see what would happen. At this point, the errors all started showing. This was good, in that the error was now repeatable.

I went through and started disabling extensions. Among them eAccelerator, PDO, ionCube Loader, and plenty that looked suspicious, but none helped. I finally found the naughty PHP extension: "homeloader.so", which appears to be some kind of cPanel easy-installer module. After removing it, I haven't experienced any other issues.

On that note, it appears this is a generic error message, so your mileage will vary with all of these answers. The best course of action you can take:

- Make the error repeatable (under what conditions?) every time
- Find the common factor
- Selectively disable any PHP modules, options, etc. (or, if you're in a rush, disable them all to see if it helps, then selectively re-enable them until it breaks again)

If this fails to help, many of these answers hint that it could be code related. Again, the key is to make the error repeatable on every request so you can narrow it down. If you suspect a piece of code is doing this, then, once the error is repeatable, just remove code until the error stops. Once it stops, you know the last piece of code you removed was the culprit.

Failing all of the above, you could also try things like:

- Upgrading or recompiling PHP. Hope whatever bug is causing your issue is fixed.
- Moving your code to a different (testing) environment. If this fixes the issue, what changed? php.ini options? PHP version? etc...

Good luck.
I wrestled with this issue for a week. This is what worked for me, or at least so it seems.

First, in php.ini, I made these changes:
report_memleaks = Off
report_zend_debug = 0
My setup is Linux Ubuntu 2.6.32-30-generic-pae #59-Ubuntu SMP with PHP Version 5.3.2-1ubuntu4.7.
This didn't work.

So I tried using a benchmark script and recorded where the script was hanging up. I discovered that just before the error, a PHP object was instantiated, and it took more than 3 seconds to complete what the object was supposed to do, whereas in the previous loop iterations it took at most 0.4 seconds. I ran this test quite a few times, with the same result every time. I thought that instead of making a new object every time (there is a long loop here), I should reuse the object. I have tested the script more than a dozen times so far, and the memory errors have disappeared!
Look for any module that uses buffering, and selectively disable it. I'm running PHP 5.3.5 on CentOS 4.8, and after doing this I found eaccelerator needed an upgrade.
I just had this issue as well on a server I own, and the root cause was APC. I commented out the "apc.so" extension in the php.ini file, reloaded Apache, and the sites came right back up.
I've tried everything above; zend.enable_gc = 0 was the only config setting that helped me. PHP 5.3.10-1ubuntu3.2 with Suhosin-Patch (cli) (built: Jun 13 2012 17:19:58).
I had this error using the Mongo 2.2 driver for PHP:

$collection = $db->selectCollection('post');
$collection->ensureIndex(array('someField', 'someOtherField', 'yetAnotherField'));
^^ DOESN'T WORK

$collection = $db->selectCollection('post');
$collection->ensureIndex(array('someField', 'someOtherField'));
$collection->ensureIndex(array('yetAnotherField'));
^^ WORKS! (?!)
On PHP 5.3, after lots of searching, this is the solution that worked for me: I disabled the PHP garbage collection for this page by adding <? gc_disable(); ?> to the end of the problematic page, and that made all the errors disappear. (source)
I think a lot of reasons can cause this problem. In my case, I gave two classes the same name, and one tried to load the other:

class A {} // in file a.php

class A    // in file b.php
{
    public function foo()
    {
        // loads a.php
    }
}

And that caused this problem in my case. (Using the Laravel framework, running php artisan db:seed for real.)
I had this same issue when I had an incorrect IP in session.save_path for memcached sessions. Changing it to the correct IP fixed the problem.
If you are using traits and the trait is loaded after the class (i.e. in the case of autoloading), you need to load the trait beforehand. https://bugs.php.net/bug.php?id=62339 Note: this bug is very, very random due to its nature.
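A hypothetical sketch of the workaround (the file and trait names are made up):

// bootstrap.php - load the trait definition explicitly, before the
// autoloader gets a chance to pull in the class that uses it
require_once __DIR__ . '/MyTrait.php';
require_once __DIR__ . '/UsesMyTrait.php';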
For me the problem was using pdo_mysql. The query returned 1960 results; when I tried returning only 1900 records it worked. So the problem was pdo_mysql with a too-large array. I rewrote the query using the original mysql extension and it worked:

$link = mysql_connect('localhost', 'user', 'xxxx') or die(mysql_error());
mysql_select_db("db", $link);

Apache did not report any errors before these:

zend_mm_heap corrupted
zend_mm_heap corrupted
zend_mm_heap corrupted
[Mon Jul 30 09:23:49 2012] [notice] child pid 8662 exit signal Segmentation fault (11)
[Mon Jul 30 09:23:50 2012] [notice] child pid 8663 exit signal Segmentation fault (11)
[Mon Jul 30 09:23:54 2012] [notice] child pid 8666 exit signal Segmentation fault (11)
[Mon Jul 30 09:23:55 2012] [notice] child pid 8670 exit signal Segmentation fault (11)
"zend_mm_heap corrupted" means problems with memory management. Can be caused by any PHP module. In my case installing APC worked out. In theory other packages like eAccelerator, XDebug etc. may help too. Or, if you have that kind of modules installed, try switching them off.
I am writing a PHP extension and also encountered this problem. When I called an external function with complicated parameters from my extension, this error popped up. The reason was that I had not allocated memory for a parameter (char *) in the external function. If you are writing the same kind of extension, please pay attention to this.
A lot of people are mentioning disabling XDebug to solve the issue. This obviously isn't viable in a lot of instances, as it's enabled for a reason - to debug your code. I had the same issue, and noticed that if I stopped listening for XDebug connections in my IDE (PhpStorm 2019.1 EAP), the error stopped occurring. The actual fix, for me, was removing any existing breakpoints. A possibility for this being a valid fix is that PhpStorm is sometimes not that good at removing breakpoints that no longer reference valid lines of code after files have been changed externally (e.g. by git) Edit: Found the corresponding bug report in the xdebug issue tracker: https://bugs.xdebug.org/view.php?id=1647
The issue with zend_mm_heap corrupted boggled me for a couple of hours. First I disabled and removed memcached, then tried some of the settings mentioned in this question's answers, and after testing it seemed to be an issue with the OPcache settings. I disabled OPcache and the problem went away. After that I re-enabled OPcache, and for me the "child pid ... exit signal Segmentation fault" notices and zend_mm_heap corrupted were apparently resolved by changes to /etc/php.d/10-opcache.ini. I've included the settings I changed here; opcache.revalidate_freq=2 remains commented out, I did not change that value.

opcache.enable=1
opcache.enable_cli=0
opcache.fast_shutdown=0
opcache.memory_consumption=1024
opcache.interned_strings_buffer=128
opcache.max_accelerated_files=60000
For me, it was ZendDebugger that caused the memory leak and caused the MemoryManager to crash. I disabled it and I'm currently searching for a newer version. If I can't find one, I'm going to switch to xdebug...
Because I never found a solution to this I decided to upgrade my LAMP environment. I went to Ubuntu 10.4 LTS with PHP 5.3.x. This seems to have stopped the problem for me.
In my case, I forgot the following in the code:
);
I played around and forgot it in the code here and there - in some places I got heap corruption, in some cases just a plain ol' segfault:
[Wed Jun 08 17:23:21 2011] [notice] child pid 5720 exit signal Segmentation fault (11)
I'm on Mac OS X 10.6.7 and XAMPP.
I've also noticed this error and SIGSEGV's when running old code which uses '&' to explicitly force references while running it in PHP 5.2+.
Setting assert.active = 0 in php.ini helped me (it turned off type assertions in the php5UTF8 library, and zend_mm_heap corrupted went away).
For me the problem was a crashed memcached daemon, as PHP was configured to store session information in memcached. It was eating 100% CPU and acting weird. After a memcached restart the problem was gone.
Since none of the other answers addressed it, I had this problem in php 5.4 when I accidentally ran an infinite loop.