What does starting an Xdebug session do?

TL;DR: I have some PHP code that makes use of deprecated dynamic properties. If E_DEPRECATED is disabled in the .ini, the code:
Executes successfully if an xdebug session is triggered
Fails (in under a second) with a 502 if an xdebug session is not triggered
What does starting an xdebug session do, and why might this affect how silent deprecation warnings are handled?
A coworker has written a stack of PHP code that, when it runs, causes a 502 Bad Gateway error. When this happens, NGINX writes the following to its error log:
2023/02/01 15:16:46 [error] 405#0: *124 kevent() reported about an closed connection (54: Connection reset by peer) while reading response header from upstream, client: 127.0.0.1, server: *.example.org, request: "POST /a/fixtures/add_user?test_mode=true HTTP/1.1", upstream: "fastcgi://127.0.0.1:9010", host: "test.example.org"
Nothing gets written to the error log specified in php.ini. However, the following gets written to the FPM log:
[01-Feb-2023 16:33:52] WARNING: [pool www] child 21027 exited on signal 11 (SIGSEGV) after 71.459422 seconds from start
[01-Feb-2023 16:33:52] NOTICE: [pool www] child 21123 started
Similarly, in an Apache setup, the server ends up returning nothing, with no error log entries to speak of.
However, if an Xdebug session has been triggered, including if it's triggered via a call to xdebug_break(), the code runs fine and completes, error-free, without a 502.
The code is too lengthy and reliant on too many libraries to post, and we've had no success in identifying which actual standalone part of the code is failing. We can't replicate the error by running the problematic code by itself, only by running it in situ.
So, what I'm wondering is:
What does triggering an Xdebug session do? Technically, practically, etc.
Does it change any state for how PHP is executing, and if so, what might that be? Does it change any INI settings, for instance?
What could possibly explain code succeeding with Xdebug if it fails without?
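For context, the xdebug_break() trigger can be wrapped in a guard so the same code path also runs where Xdebug isn't loaded. This is a hypothetical sketch, not code from our application:

```php
<?php
// Hypothetical sketch: only request a breakpoint when the Xdebug
// extension is actually loaded, so the same code path also runs in
// environments without Xdebug.
function maybeBreak(): bool
{
    if (function_exists('xdebug_break')) {
        // Execution pauses here if a debugging client is attached.
        return xdebug_break();
    }
    return false;
}

var_dump(maybeBreak()); // bool(false) unless a debug client is attached
```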
Update
The code at fault here appears to be the following from an older version of the Respect/Validation library:
class Email extends AbstractRule
{
    public function __construct(?EmailValidator $emailValidator = null)
    {
        $this->emailValidator = $emailValidator;
    }
}
If this code is run with E_DEPRECATED enabled on PHP 8.2+, it emits a deprecation warning regarding dynamic properties:
Deprecated: Creation of dynamic property Respect\Validation\Rules\Email::$emailValidator is deprecated
In our case, we were running the code with E_DEPRECATED disabled. I would usually have expected PHP to just execute fine in this case, but for some reason it was having trouble with this deprecation and crashing, as described in the original question.
I'm guessing that starting an xdebug session changes how deprecation warnings are handled, and so, despite not surfacing the deprecation warning to us, it was being handled correctly behind the scenes?
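As a minimal sketch of the deprecation in question, independent of Respect/Validation (the class names here are my own, for illustration only):

```php
<?php
// Minimal sketch of the dynamic-property deprecation (PHP 8.2+).
// DynamicEmail assigns to an undeclared property; DeclaredEmail
// declares it, so no deprecation is raised.
class DynamicEmail
{
    public function __construct($validator = null)
    {
        $this->emailValidator = $validator; // dynamic: E_DEPRECATED on 8.2+
    }
}

class DeclaredEmail
{
    private $emailValidator; // declared: no deprecation

    public function __construct($validator = null)
    {
        $this->emailValidator = $validator;
    }
}

$deprecations = [];
set_error_handler(function (int $errno, string $errstr) use (&$deprecations): bool {
    if ($errno === E_DEPRECATED) {
        $deprecations[] = $errstr;
    }
    return true; // swallow it, like running without E_DEPRECATED
});

new DynamicEmail('x');
new DeclaredEmail('x');
restore_error_handler();

echo count($deprecations), PHP_EOL; // 1 on PHP 8.2+, 0 on older versions
```

Note that a custom error handler receives E_DEPRECATED diagnostics even when error_reporting masks them, which is why the counter works here.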

Related

Log only real fatal errors from PHP-FPM in Docker container

I'm using NGINX with PHP-FPM in separate Docker containers. I want errors to go only to stderr, so that I can collect them on a centralized log server. The problem: I'm using WordPress, which seems to have some poorly written plugins. They work, but cause warnings like this:
2017/06/17 01:16:08 [error] 7#7: *1 FastCGI sent in stderr: "PHP
message: PHP Warning: Parameter 1 to wp_default_scripts() expected to
be a reference, value given in /www/wp-includes/plugin.php on line 601
Example script for testing, which should give me a fatal error in the stderr:
<?php
not_existing_func();
PHP-FPM was configured to log errors to stderr like this:
[global]
log_level = error
error_log = /proc/self/fd/2
To my surprise, this gave me nothing for the script above. Only after I switched log_level to at least notice did I get the error on the console of the Docker container:
[17-Jun-2017 01:45:35] WARNING: [pool www] child 8 said into stderr:
"NOTICE: PHP message: PHP Fatal error: Uncaught Error: Call to
undefined function not_existing_func() in /www/x.php:2"
Why is this a notice? To me, this is clearly a fatal error, as the message itself indicates, because the script can't continue (and we get a 500 error in the browser, of course). It can't be right that I have to set log_level to notice just to avoid missing fatal errors that are declared as warnings, while my logs simultaneously fill up with junk warnings from WordPress themes and plugins that I didn't develop and don't want to patch, for update reasons...
I experimented a bit and found that log_errors in php.ini is essential for PHP-FPM to log anything at all. But the level from error_reporting seems weird too. For testing purposes, I used the following configuration:
display_errors = Off
log_errors = On
error_log = /proc/self/fd/2
;error_reporting = E_COMPILE_ERROR|E_ERROR|E_CORE_ERROR
error_reporting = 0
Result: I got notices, but NO info about my fatal error...
First of all, I learned that I was wrong: WordPress is the root cause of this issue, not PHP directly. It's well known that WordPress manipulates error_reporting when debugging is enabled, so I tried defining WP_DEBUG as false in my config; BUT even with this set, the documentation says:
[...]
Except for 'error_reporting', WordPress will set this to 4983 if WP_DEBUG is defined as false.
[...]
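The mysterious 4983 decodes to a plain bitmask of error constants. This is my reading of the mask WordPress applies in wp_debug_mode() when WP_DEBUG is false, so treat the exact constant list as an assumption:

```php
<?php
// Decoding 4983: the bitmask WordPress appears to apply when
// WP_DEBUG is false (my reading of wp_debug_mode(); treat the
// exact constant list as an assumption).
$level = E_CORE_ERROR | E_CORE_WARNING | E_COMPILE_ERROR | E_ERROR
       | E_WARNING | E_PARSE | E_USER_ERROR | E_USER_WARNING
       | E_RECOVERABLE_ERROR;

echo $level, PHP_EOL; // 4983
```

In other words: all fatal-ish levels plus warnings, with notices, deprecations, and E_STRICT masked out.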
So my settings in php.ini were correct and sufficient. I don't even need the PHP-FPM settings when errors are redirected to stderr in the php.ini file.
How to prevent WordPress from manipulating the error reporting?
This is not so easy either. Although the WordPress documentation says that wp-config.php is a good place to set global settings like error reporting, they get overwritten later to 4983. I don't know where; maybe it's not even WordPress core, but rather some poorly developed plugin or theme.
We can handle this by adding error_reporting to the disabled functions:
disable_functions = error_reporting
Now it's not possible to overwrite our error_reporting. I think this is the best way to make sure no plugin or theme can change the error reporting from the outside. And since PHP allows this kind of chaos, we have to reckon with such things in the future, too.
One could object that this prevents us from getting more verbose logs by setting WP_DEBUG to true. That's true, but on a production system it seems wrong to me to troubleshoot that way. We shouldn't do this at the application level, especially not without display_errors! Instead, the workflow for finding problems should be to look at the error logs.
Fatal errors should always be logged and checked on a regular basis. If that is not enough, error_reporting can be set to a higher level to get information about possible problems like warnings.
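When in doubt about which level is actually in effect at runtime, after WordPress and all plugins have run, a quick diagnostic (a hypothetical sketch you could drop into a template) is:

```php
<?php
// Quick diagnostic sketch: print the error_reporting level actually
// in effect at runtime, to see what a plugin or theme has left behind.
$level = error_reporting();
printf("error_reporting is currently %d (0x%X)\n", $level, $level);
```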

CodeIgniter returns a blank response, no error message. (Possibly related to filemtime(): stat failed)

I've written some code as part of an existing CodeIgniter 2 application, which is designed to extract some data from the database, generate preview images, transform the data, then send it to a third-party search service. It works perfectly on my development environment. However, after deploying it to the staging environment, I get an empty 200 response from the server.
The Apache error log shows nothing, the Apache access log just has a single entry with the 200 code. This is despite error_reporting being set to E_ALL and display_errors being set to true. The only error message I am seeing is in the .php log file generated by CodeIgniter within the application directory. Here it is:
ERROR - 2017-04-18 12:40:08 --> Severity: Warning --> filemtime(): stat failed for /var/www/vhosts/gvip.io/staging/current/wwwroot/cache/made/23f4b9ae97bceb7ea30e71bdc2a48ece_27_27_ffffff_all_2_s_c1.JPG /var/www/vhosts/gvip.io/staging/releases/20170417130504/.app/libraries/Ce_image.php 2236
Sure enough, line 2236 of the Ce_image library does include a call to filemtime:
$finaltime = @filemtime( $this->path_final ); //filetime of cached image
However,
I don't see how this error would cause the application to die (and not return the expected HTML response). It is listed as Warning, not Fatal.
If this is the problem, I'm honestly rather lost as to how to debug it. The line that's pointed to in the error message is part of a library, nested inside a long function, called from another function, etc.
It's certainly true that the file referenced in the error message doesn't exist.
Interestingly, the first time I ran the code, about 92 similar error messages were generated, each with a different file name. On subsequent runs, it only generated one error message each time (with the same file name each time).
I tried setting the CodeIgniter logging level to 4 (all messages), but it didn't really reveal anything new. Apart from some messages about initializing helpers and classes, it just had around 3300 lines stating XSS Filtering completed. (The query only has around 2600 records, so I'm not sure why 3300.)
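One hedged way to rule the warning out (a sketch, not the actual Ce_image fix) is to replace the @-suppressed stat with an explicit existence check, so a missing cache file is handled deliberately instead of silently:

```php
<?php
// Hedged sketch (not the actual Ce_image code): check that the cached
// file exists before calling filemtime(), instead of suppressing the
// warning with @ and risking a bogus timestamp.
$path = '/nonexistent/cache/example.jpg'; // hypothetical path

$finaltime = is_file($path) ? filemtime($path) : false;
if ($finaltime === false) {
    // Cached image is missing: regenerate it rather than proceeding
    // with a stale or invalid mtime.
}
var_dump($finaltime); // bool(false) when the file does not exist
```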

mod_fcgid read data timeout - Premature end of script headers

The websites on one of my Plesk users can't be accessed. The server reports a 500 Internal server error, the error_log for that user shows a bunch of
[warn] mod_fcgid: read data timeout in 60 seconds
[error] Premature end of script headers: index.php
The DocumentRoot contains a normal WordPress installation. Other sites running the same WP version, using the same DB server and PHP+extensions run fine. A <?php phpinfo(); ?> runs fine as well. Calling php index.php from the CLI returns the webpage, but it is a bit too slow for an idle Xeon E5-2620 server w/ 64 GB RAM.
Are there any known problems? How can I debug further?
Some more system info:
PHP 5.6.24 (tried 5.4 as well)
Plesk 12.5.30
EDIT: The problem occurs intermittently. Right now, no 500 error is returned and the site loads fine (a bit slowly). I increased memory_limit, just to be sure it isn't a config limitation.
You can try to increase FcgidIOTimeout as described here: https://kb.plesk.com/en/121251
Since Plesk 11.5, the FcgidIOTimeout parameter is set to the same value as the max_execution_time PHP parameter in the domain's PHP settings.
You can also try one of the PHP-FPM handlers instead of FastCGI, because mod_fcgid has a lot of internal performance limitations which can't be avoided.
The problem was caused by a rogue file_get_contents in some scripts.
I looked through the error log for the 1st appearance of the error message, and found a file created exactly when the error message first appeared - only 2 years earlier...
WordPress Site hacked? Suspicious PHP file
So I removed the malware ( detailed write-up at https://talk.plesk.com/threads/debugging-premature-end-of-script-headers.338956/ ), rebooted the Server and the error is now gone.
Technical detail: the error appeared because the server distributing the malware had gone offline. The file_get_contents("http..." call timed out, the local script failed, and the error message was returned.

Influence the exit code within a shutdown function

I am using a library which connects to a remote service. The library will generate a PHP Fatal error / Uncaught ErrorException in case the connection terminates unexpectedly.
My PHP script is running as a daemon (via Systemd), so I would like to automatically restart the daemon after a while to reconnect. All this is setup in Systemd, so if PHP exits with the status 4, the daemon will be restarted after some time.
PHP uses the exit code 255 by default for fatal errors, so I resolved to using a shutdown function similar to the following:
function shutdown() {
    // Do a bit of this and that (e.g. assert
    // there actually was the relevant error)
    exit(4);
}
register_shutdown_function('shutdown');
trigger_error('Test', E_USER_ERROR);
But whatever I try, I cannot influence the exit code in any way. I have not found anything helpful in the documentation.
Is there any known way to manipulate the exit code within a shutdown function or by other means after the fatal error has already been generated?
This is a bug in PHP, which apparently has been filed multiple times over the years:
https://bugs.php.net/bug.php?id=65275
https://bugs.php.net/bug.php?id=62725
https://bugs.php.net/bug.php?id=62294
https://bugs.php.net/bug.php?id=23509
It is supposed to be fixed already, but as of PHP 5.5.30, I am still experiencing it myself.
I suppose you could try reverting to PHP 5.3, which was reportedly working as expected. Or you could check out one of the most recent releases (5.6.18 or 7.0.3 as of this answer), to see if the bug is indeed fixed.

What does "zend_mm_heap corrupted" mean

All of a sudden I've been having problems with my application that I've never had before. I decided to check Apache's error log, and I found an error message saying "zend_mm_heap corrupted". What does this mean?
OS: Fedora Core 8
Apache: 2.2.9
PHP: 5.2.6
After much trial and error, I found that if I increase the output_buffering value in the php.ini file, this error goes away.
This is not a problem that is necessarily solvable by changing configuration options.
Changing configuration options will sometimes have a positive impact, but it can just as easily make things worse, or do nothing at all.
The nature of the error is this:
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
int main(void) {
    void **mem = malloc(sizeof(char)*3);
    void *ptr;

    /* read past end */
    ptr = (char*) mem[5];

    /* write past end */
    memcpy(mem[5], "whatever", sizeof("whatever"));

    /* free invalid pointer */
    free((void*) mem[3]);

    return 0;
}
The code above can be compiled with:
gcc -g -o corrupt corrupt.c
Executing the code with valgrind you can see many memory errors, culminating in a segmentation fault:
krakjoe@fiji:/usr/src/php-src$ valgrind ./corrupt
==9749== Memcheck, a memory error detector
==9749== Copyright (C) 2002-2013, and GNU GPL'd, by Julian Seward et al.
==9749== Using Valgrind-3.10.1 and LibVEX; rerun with -h for copyright info
==9749== Command: ./corrupt
==9749==
==9749== Invalid read of size 8
==9749== at 0x4005F7: main (an.c:10)
==9749== Address 0x51fc068 is 24 bytes after a block of size 16 in arena "client"
==9749==
==9749== Invalid read of size 8
==9749== at 0x400607: main (an.c:13)
==9749== Address 0x51fc068 is 24 bytes after a block of size 16 in arena "client"
==9749==
==9749== Invalid write of size 2
==9749== at 0x4C2F7E3: memcpy@@GLIBC_2.14 (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==9749== by 0x40061B: main (an.c:13)
==9749== Address 0x50 is not stack'd, malloc'd or (recently) free'd
==9749==
==9749==
==9749== Process terminating with default action of signal 11 (SIGSEGV): dumping core
==9749== Access not within mapped region at address 0x50
==9749== at 0x4C2F7E3: memcpy@@GLIBC_2.14 (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==9749== by 0x40061B: main (an.c:13)
==9749== If you believe this happened as a result of a stack
==9749== overflow in your program's main thread (unlikely but
==9749== possible), you can try to increase the size of the
==9749== main thread stack using the --main-stacksize= flag.
==9749== The main thread stack size used in this run was 8388608.
==9749==
==9749== HEAP SUMMARY:
==9749== in use at exit: 3 bytes in 1 blocks
==9749== total heap usage: 1 allocs, 0 frees, 3 bytes allocated
==9749==
==9749== LEAK SUMMARY:
==9749== definitely lost: 0 bytes in 0 blocks
==9749== indirectly lost: 0 bytes in 0 blocks
==9749== possibly lost: 0 bytes in 0 blocks
==9749== still reachable: 3 bytes in 1 blocks
==9749== suppressed: 0 bytes in 0 blocks
==9749== Rerun with --leak-check=full to see details of leaked memory
==9749==
==9749== For counts of detected and suppressed errors, rerun with: -v
==9749== ERROR SUMMARY: 4 errors from 3 contexts (suppressed: 0 from 0)
Segmentation fault
If you didn't know already, you will have figured out that mem is heap-allocated memory. The heap refers to the region of memory available to the program at runtime, which the program explicitly requested (with malloc, in our case).
If you play around with the terrible code, you will find that not all of those obviously incorrect statements result in a segmentation fault (a fatal terminating error).
I explicitly made those errors in the example code, but the same kinds of errors happen very easily in a memory-managed environment: if some code doesn't maintain the refcount of a variable (or some other symbol) correctly, for example if it frees it too early, another piece of code may read from already-freed memory; if it somehow stores the address wrong, another piece of code may write to invalid memory; it may be freed twice...
These are not problems that can be debugged in PHP, they absolutely require the attention of an internals developer.
The course of action should be:
Open a bug report on http://bugs.php.net
If you have a segfault, try to provide a backtrace
Include as much configuration information as seems appropriate; in particular, if you are using opcache, include the optimization level.
Keep checking the bug report for updates, more information may be requested.
If you have opcache loaded, disable optimizations
I'm not picking on opcache, it's great, but some of its optimizations have been known to cause faults.
If that doesn't work, even though your code may be slower, try unloading opcache first.
If any of this changes or fixes the problem, update the bug report you made.
Disable all unnecessary extensions at once.
Begin to enable all your extensions individually, thoroughly testing after each configuration change.
If you find the problem extension, update your bug report with more info.
Profit.
There may not be any profit ... as I said at the start, you may be able to change your symptoms by messing with configuration, but this is extremely hit and miss, and it doesn't help the next time you see the same zend_mm_heap corrupted message; there are only so many configuration options.
It's really important that we create bug reports when we find bugs; we cannot assume that the next person to hit the bug is going to report it. More likely than not, the actual resolution is in no way mysterious if you make the right people aware of the problem.
USE_ZEND_ALLOC
If you set USE_ZEND_ALLOC=0 in the environment, this disables Zend's own memory manager. Zend's memory manager ensures that each request has its own heap, that all memory is freed at the end of a request, and it is optimized for the allocation of chunks of memory of just the right size for PHP.
Disabling it will disable those optimizations; more importantly, it will likely create memory leaks, since there is a lot of extension code that relies upon the Zend MM to free memory for it at the end of a request (tut, tut).
It may also hide the symptoms, but the system heap can be corrupted in exactly the same way as Zend's heap.
It may seem to be more tolerant or less tolerant, but fix the root cause of the problem, it cannot.
The ability to disable it at all is for the benefit of internals developers; you should never deploy PHP with the Zend MM disabled.
I was getting this same error under PHP 5.5, and increasing the output buffering didn't help. I wasn't running APC either, so that wasn't the issue. I finally tracked it down to OPcache; I simply had to disable it for the CLI. There is a specific setting for this:
opcache.enable_cli=0
Once that was switched, the zend_mm_heap corrupted error went away.
If you are on a Linux box, try this on the command line:
export USE_ZEND_ALLOC=0
Check for unset()s. Make sure you don't unset() references to $this (or equivalents) in destructors, and that unset()s in destructors don't cause the reference count to the same object to drop to 0. I've done some research and found that's what usually causes the heap corruption.
There is a PHP bug report about the zend_mm_heap corrupted error. See the comment [2011-08-31 07:49 UTC] f dot ardelian at gmail dot com for an example on how to reproduce it.
I have a feeling that all the other "solutions" (change php.ini, compile PHP from source with fewer modules, etc.) just hide the problem.
For me none of the previous answers worked, until I tried:
opcache.fast_shutdown=0
That seems to work so far.
I'm using PHP 5.6 with PHP-FPM and Apache proxy_fcgi, if that matters...
In my case, the cause of this error was that one of the arrays was becoming very big. I set my script to reset the array on every iteration, and that sorted the problem.
As per the bug tracker, set opcache.fast_shutdown=0. Fast shutdown uses the Zend memory manager to clean up its mess; this setting disables that.
I don't think there is one answer here, so I'll add my experience. I saw this same error along with random httpd segfaults. This was a cPanel server. The symptom in question was that Apache would randomly reset the connection ("No data received" in Chrome, or "connection was reset" in Firefox). These were seemingly random: most of the time it worked, sometimes it did not.
When I arrived on the scene, output buffering was off. Since this thread hinted at output buffering, I turned it on (=4096) to see what would happen. At that point, the errors started showing every time. This was good, because the error was now repeatable.
I went through and started disabling extensions. Among them: eAccelerator, PDO, ionCube Loader, and plenty of others that looked suspicious, but none of it helped.
I finally found the naughty PHP extension as "homeloader.so", which appears to be some kind of cPanel-easy-installer module. After removal, I haven't experienced any other issues.
On that note, this appears to be a generic error message, so your mileage will vary with all of these answers. The best course of action you can take:
Make the error repeatable (what conditions?) every time
Find the common factor
Selectively disable any PHP modules, options, etc (or, if you're in a rush, disable them all to see if it helps, then selectively re-enable them until it breaks again)
If this fails to help, many of these answers hint that it could be code related. Again, the key is to make the error repeatable on every request so you can narrow it down. If you suspect a piece of code is doing this, then once the error is repeatable, just remove code until the error stops. Once it stops, you know the last piece of code you removed was the culprit.
Failing all of the above, you could also try things like:
Upgrading or recompiling PHP. Hope whatever bug is causing your issue is fixed.
Move your code to a different (testing) environment. If this fixes the issue, what changed? php.ini options? PHP version? etc...
Good luck.
I wrestled with this issue for a week. This worked for me, or at least so it seems.
In php.ini, make these changes:
report_memleaks = Off
report_zend_debug = 0
My set up is
Linux ubuntu 2.6.32-30-generic-pae #59-Ubuntu SMP
with PHP Version 5.3.2-1ubuntu4.7
This didn't work.
So I tried using a benchmark script, and recorded where the script was hanging up.
I discovered that just before the error, a PHP object was instantiated, and it took more than 3 seconds to complete what the object was supposed to do, whereas in previous loops it took at most 0.4 seconds. I ran this test quite a few times, and every time it was the same. I thought that instead of making a new object every time (there is a long loop here), I should reuse the object. I have tested the script more than a dozen times so far, and the memory errors have disappeared!
Look for any module that uses buffering, and selectively disable it.
I'm running PHP 5.3.5 on CentOS 4.8, and after doing this I found that eAccelerator needed an upgrade.
I just had this issue as well on a server I own, and the root cause was APC. I commented out the "apc.so" extension in the php.ini file, reloaded Apache, and the sites came right back up.
I tried everything above; zend.enable_gc = 0 was the only config setting that helped me.
PHP 5.3.10-1ubuntu3.2 with Suhosin-Patch (cli) (built: Jun 13 2012 17:19:58)
I had this error using the Mongo 2.2 driver for PHP:
$collection = $db->selectCollection('post');
$collection->ensureIndex(array('someField', 'someOtherField', 'yetAnotherField'));
^^DOESN'T WORK
$collection = $db->selectCollection('post');
$collection->ensureIndex(array('someField', 'someOtherField'));
$collection->ensureIndex(array('yetAnotherField'));
^^ WORKS! (?!)
On PHP 5.3, after lots of searching, this is the solution that worked for me:
I disabled the PHP garbage collection for this page by adding:
<?php gc_disable(); ?>
to the end of the problematic page; that made all the errors disappear.
I think many things can cause this problem. In my case, I gave two classes the same name, and one tried to load the other.
class A {} // in file a.php

class A // in file b.php
{
    public function foo()
    {
        // loads a.php, which declares the other class A
    }
}
This is what caused the problem in my case.
(Using the Laravel framework, running php artisan db:seed for real.)
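A hedged sketch of guarding against such a duplicate definition; conditional class declarations are legal in PHP and are only bound when the branch actually executes:

```php
<?php
// Hedged sketch: a conditional declaration avoids a
// "Cannot declare class A" fatal when another file may
// already have defined class A.
if (!class_exists('A', false)) {
    class A {}
}
var_dump(class_exists('A', false)); // bool(true)
```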
I had this same issue when I had an incorrect IP in session.save_path for memcached sessions. Changing it to the correct IP fixed the problem.
If you are using traits and the trait is loaded after the class (i.e. in the case of autoloading), you need to load the trait beforehand.
https://bugs.php.net/bug.php?id=62339
Note: this bug is very random, due to its nature.
For me the problem was pdo_mysql. The query returned 1960 results. When I returned only 1900 records, it worked. So the problem was pdo_mysql and a too-large array. I rewrote the query with the original mysql extension and it worked.
$link = mysql_connect('localhost', 'user', 'xxxx') or die(mysql_error());
mysql_select_db("db", $link);
Apache did not report any previous errors.
zend_mm_heap corrupted
zend_mm_heap corrupted
zend_mm_heap corrupted
[Mon Jul 30 09:23:49 2012] [notice] child pid 8662 exit signal Segmentation fault (11)
[Mon Jul 30 09:23:50 2012] [notice] child pid 8663 exit signal Segmentation fault (11)
[Mon Jul 30 09:23:54 2012] [notice] child pid 8666 exit signal Segmentation fault (11)
[Mon Jul 30 09:23:55 2012] [notice] child pid 8670 exit signal Segmentation fault (11)
"zend_mm_heap corrupted" means there are problems with memory management. It can be caused by any PHP module.
In my case, installing APC worked it out. In theory, other packages like eAccelerator, XDebug, etc. may help too. Or, if you have that kind of module installed, try switching them off.
I am writing a PHP extension and also encountered this problem. When I called an external function with complicated parameters from my extension, this error popped up.
The reason was that I did not allocate memory for a (char *) parameter in the external function. If you are writing the same kind of extension, please pay attention to this.
A lot of people are mentioning disabling XDebug to solve the issue. This obviously isn't viable in many instances, as it's enabled for a reason: to debug your code.
I had the same issue, and noticed that if I stopped listening for XDebug connections in my IDE (PhpStorm 2019.1 EAP), the error stopped occurring.
The actual fix, for me, was removing any existing breakpoints.
A possible reason this is a valid fix is that PhpStorm is sometimes not that good at removing breakpoints that no longer reference valid lines of code after files have been changed externally (e.g. by git).
Edit:
Found the corresponding bug report in the xdebug issue tracker:
https://bugs.xdebug.org/view.php?id=1647
The zend_mm_heap corrupted issue boggled me for a couple of hours. First I disabled and removed memcached and tried some of the settings mentioned in this question's answers; after testing, this seemed to be an issue with OPcache settings. I disabled OPcache and the problem went away. After that I re-enabled OPcache, and for me the
core notice: child pid exit signal Segmentation fault
and
zend_mm_heap corrupted
are apparently resolved with changes to
/etc/php.d/10-opcache.ini
I included the settings I changed here; opcache.revalidate_freq=2 remains commented out, as I did not change that value.
opcache.enable=1
opcache.enable_cli=0
opcache.fast_shutdown=0
opcache.memory_consumption=1024
opcache.interned_strings_buffer=128
opcache.max_accelerated_files=60000
For me, it was the ZendDebugger that caused the memory leak and caused the MemoryManager to crash.
I disabled it and I'm currently searching for a newer version. If I can't find one, I'm going to switch to xdebug...
Because I never found a solution to this, I decided to upgrade my LAMP environment. I went to Ubuntu 10.04 LTS with PHP 5.3.x. This seems to have stopped the problem for me.
In my case, I forgot the following in the code:
);
I played around and left it out in the code here and there; in some places I got heap corruption, in some cases just a plain ol' segfault:
[Wed Jun 08 17:23:21 2011] [notice] child pid 5720 exit signal Segmentation fault (11)
I'm on Mac OS X 10.6.7 and XAMPP.
I've also noticed this error, and SIGSEGVs, when running old code which uses '&' to explicitly force references while running it in PHP 5.2+.
Setting
assert.active = 0
in php.ini helped for me (it turned off type assertions in the php5UTF8 library, and zend_mm_heap corrupted went away).
For me the problem was a crashed memcached daemon, as PHP was configured to store session information in memcached. It was eating 100% CPU and acting weird. After a memcached restart, the problem was gone.
Since none of the other answers addressed it: I had this problem in PHP 5.4 when I accidentally ran an infinite loop.
