What does "zend_mm_heap corrupted" mean - php

All of a sudden I've been having problems with my application that I've never had before. I decided to check Apache's error log, and I found an error message saying "zend_mm_heap corrupted". What does this mean?
OS: Fedora Core 8
Apache: 2.2.9
PHP: 5.2.6

After much trial and error, I found that if I increase the output_buffering value in the php.ini file, this error goes away.

This is not a problem that is necessarily solvable by changing configuration options.
Changing configuration options will sometimes have a positive impact, but it can just as easily make things worse, or do nothing at all.
The nature of the error is this:
#include <stdio.h>
#include <string.h>
#include <stdlib.h>

int main(void) {
    void **mem = malloc(sizeof(char)*3);
    void *ptr;

    /* read past end */
    ptr = (char*) mem[5];

    /* write past end */
    memcpy(mem[5], "whatever", sizeof("whatever"));

    /* free invalid pointer */
    free((void*) mem[3]);

    return 0;
}
The code above can be compiled with:
gcc -g -o corrupt corrupt.c
Executing the code with valgrind you can see many memory errors, culminating in a segmentation fault:
krakjoe@fiji:/usr/src/php-src$ valgrind ./corrupt
==9749== Memcheck, a memory error detector
==9749== Copyright (C) 2002-2013, and GNU GPL'd, by Julian Seward et al.
==9749== Using Valgrind-3.10.1 and LibVEX; rerun with -h for copyright info
==9749== Command: ./corrupt
==9749==
==9749== Invalid read of size 8
==9749== at 0x4005F7: main (an.c:10)
==9749== Address 0x51fc068 is 24 bytes after a block of size 16 in arena "client"
==9749==
==9749== Invalid read of size 8
==9749== at 0x400607: main (an.c:13)
==9749== Address 0x51fc068 is 24 bytes after a block of size 16 in arena "client"
==9749==
==9749== Invalid write of size 2
==9749== at 0x4C2F7E3: memcpy@@GLIBC_2.14 (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==9749== by 0x40061B: main (an.c:13)
==9749== Address 0x50 is not stack'd, malloc'd or (recently) free'd
==9749==
==9749==
==9749== Process terminating with default action of signal 11 (SIGSEGV): dumping core
==9749== Access not within mapped region at address 0x50
==9749== at 0x4C2F7E3: memcpy@@GLIBC_2.14 (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==9749== by 0x40061B: main (an.c:13)
==9749== If you believe this happened as a result of a stack
==9749== overflow in your program's main thread (unlikely but
==9749== possible), you can try to increase the size of the
==9749== main thread stack using the --main-stacksize= flag.
==9749== The main thread stack size used in this run was 8388608.
==9749==
==9749== HEAP SUMMARY:
==9749== in use at exit: 3 bytes in 1 blocks
==9749== total heap usage: 1 allocs, 0 frees, 3 bytes allocated
==9749==
==9749== LEAK SUMMARY:
==9749== definitely lost: 0 bytes in 0 blocks
==9749== indirectly lost: 0 bytes in 0 blocks
==9749== possibly lost: 0 bytes in 0 blocks
==9749== still reachable: 3 bytes in 1 blocks
==9749== suppressed: 0 bytes in 0 blocks
==9749== Rerun with --leak-check=full to see details of leaked memory
==9749==
==9749== For counts of detected and suppressed errors, rerun with: -v
==9749== ERROR SUMMARY: 4 errors from 3 contexts (suppressed: 0 from 0)
Segmentation fault
Even if you didn't know, you have by now figured out that mem is heap-allocated memory. The heap refers to the region of memory made available to the program at runtime, because the program explicitly requested it (with malloc in our case).
If you play around with the terrible code, you will find that not all of those obviously incorrect statements result in a segmentation fault (a fatal terminating error).
I made those errors explicit in the example code, but the same kinds of errors happen all too easily in a memory-managed environment: if some code doesn't maintain the refcount of a variable (or some other symbol) correctly, for example if it frees it too early, another piece of code may read from already-freed memory; if it somehow stores the wrong address, another piece of code may write to invalid memory; the memory may be freed twice, and so on.
These are not problems that can be debugged in PHP, they absolutely require the attention of an internals developer.
The course of action should be:
Open a bug report on http://bugs.php.net
If you have a segfault, try to provide a backtrace
Include as much configuration information as seems appropriate; in particular, if you are using opcache, include the optimization level.
Keep checking the bug report for updates, more information may be requested.
If you have opcache loaded, disable optimizations
I'm not picking on opcache; it's great, but some of its optimizations have been known to cause faults.
If that doesn't work, even though your code may be slower, try unloading opcache first.
If any of this changes or fixes the problem, update the bug report you made.
Disable all unnecessary extensions at once.
Begin to enable all your extensions individually, thoroughly testing after each configuration change.
If you find the problem extension, update your bug report with more info.
Profit.
There may not be any profit ... as I said at the start, you may be able to change your symptoms by messing with configuration, but this is extremely hit and miss, and it doesn't help the next time you see the same zend_mm_heap corrupted message; there are only so many configuration options.
It's really important that we create bug reports when we find bugs; we cannot assume that the next person to hit the bug is going to do it. More likely than not, the actual resolution is in no way mysterious, if you make the right people aware of the problem.
USE_ZEND_ALLOC
If you set USE_ZEND_ALLOC=0 in the environment, this disables Zend's own memory manager. Zend's memory manager ensures that each request has its own heap and that all memory is freed at the end of a request, and it is optimized for allocating chunks of memory of just the right size for PHP.
Disabling it will disable those optimizations; more importantly, it will likely create memory leaks, since a lot of extension code relies upon the Zend MM to free memory for it at the end of a request (tut, tut).
It may also hide the symptoms, but the system heap can be corrupted in exactly the same way as Zend's heap.
It may seem to be more or less tolerant, but it cannot fix the root cause of the problem.
The ability to disable it at all is for the benefit of internals developers; you should never deploy PHP with the Zend MM disabled.

I was getting this same error under PHP 5.5, and increasing the output buffering didn't help. I wasn't running APC either, so that wasn't the issue. I finally tracked it down to opcache; I simply had to disable it for the CLI. There is a specific setting for this:
opcache.enable_cli=0
Once switched, the zend_mm_heap corrupted error went away.

If you are on a Linux box, try this on the command line:
export USE_ZEND_ALLOC=0

Check for unset()s. Make sure you don't unset() references to $this (or equivalents) in destructors, and that unset()s in destructors don't cause the reference count of the same object to drop to 0. I've done some research and found that's what usually causes the heap corruption.
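For illustration, a hypothetical sketch of the kind of pattern this warns about (class and property names are made up):

<?php
// A destructor that unset()s references pointing back to the object
// being destroyed can push a refcount to 0 while the engine is still
// working with that object.
class Child {
    public $owner;
}

class Owner {
    public $child;

    public function __destruct() {
        // Risky: releases the back-reference to $this mid-destruction.
        unset($this->child->owner);
        unset($this->child);
    }
}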
There is a PHP bug report about the zend_mm_heap corrupted error. See the comment [2011-08-31 07:49 UTC] f dot ardelian at gmail dot com for an example of how to reproduce it.
I have a feeling that all the other "solutions" (change php.ini, compile PHP from source with fewer modules, etc.) just hide the problem.

For me none of the previous answers worked, until I tried:
opcache.fast_shutdown=0
That seems to work so far.
I'm using PHP 5.6 with PHP-FPM and Apache proxy_fcgi, if that matters...

In my case, the cause of this error was that one of my arrays was becoming very big. I changed my script to reset the array on every iteration, and that sorted the problem.
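As a rough sketch of what that change looks like (data and loop body are placeholders):

<?php
// Re-initialise the working array on every iteration instead of
// letting it accumulate across the whole run.
$chunks = [[1, 2], [3, 4]];    // placeholder data
foreach ($chunks as $chunk) {
    $buffer = [];              // reset here, not once before the loop
    foreach ($chunk as $row) {
        $buffer[] = $row * 2;  // stand-in for the real work
    }
    // hand $buffer off before the next iteration overwrites it
}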

As per the bug tracker, set opcache.fast_shutdown=0. Fast shutdown uses the Zend memory manager to clean up its mess; this disables that.

I don't think there is one answer here, so I'll add my experience. I saw this same error along with random httpd segfaults. This was a cPanel server. The symptom in question was that Apache would randomly reset the connection ("No data received" in Chrome, "connection was reset" in Firefox). These were seemingly random: most of the time it worked, sometimes it did not.
When I arrived on the scene, output buffering was off. Since this thread hinted at output buffering, I turned it on (=4096) to see what would happen. At that point, the errors all started showing. This was good, because the error was now repeatable.
I went through and started disabling extensions. Among them eAccelerator, PDO, ionCube Loader, and plenty that looked suspicious, but none helped.
I finally found the naughty PHP extension: "homeloader.so", which appears to be some kind of cPanel easy-installer module. After removing it, I haven't experienced any other issues.
On that note, this appears to be a generic error message, so your mileage will vary with all of these answers. The best course of action you can take:
Make the error repeatable (under what conditions does it occur?) every time
Find the common factor
Selectively disable any PHP modules, options, etc. (or, if you're in a rush, disable them all to see if it helps, then selectively re-enable them until it breaks again)
If this fails to help, many of these answers hint that it could be code related. Again, the key is to make the error repeatable on every request so you can narrow it down. If you suspect a piece of code is doing this, then once the error is repeatable, remove code until it stops. Once it stops, you know the last piece of code you removed was the culprit.
Failing all of the above, you could also try things like:
Upgrading or recompiling PHP, in the hope that whatever bug is causing your issue has been fixed.
Move your code to a different (testing) environment. If this fixes the issue, what changed? php.ini options? PHP version? etc...
Good luck.

I wrestled with this issue for a week. This worked for me, or at least so it seems:
In php.ini, make these changes:
report_memleaks = Off
report_zend_debug = 0
My setup is
Linux ubuntu 2.6.32-30-generic-pae #59-Ubuntu SMP
with PHP Version 5.3.2-1ubuntu4.7
This didn't work on its own.
So I tried using a benchmark script and recorded where the script was hanging up.
I discovered that just before the error, a PHP object was instantiated, and it took more than 3 seconds to complete what it was supposed to do, whereas in the previous loops it took at most 0.4 seconds. I ran this test quite a few times, with the same result every time. I thought that instead of making a new object every time (there is a long loop here), I should reuse the object. I have tested the script more than a dozen times so far, and the memory errors have disappeared!
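A minimal sketch of that change, with made-up class and method names:

<?php
// Construct the object once and reuse it, instead of instantiating
// a fresh one on every pass of a long loop.
class Worker {
    public function reset(): void { /* clear per-item state */ }
    public function handle($item): void { /* do the work */ }
}

$worker = new Worker();           // created once, outside the loop
foreach ([1, 2, 3] as $item) {    // placeholder items
    $worker->reset();
    $worker->handle($item);
}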

Look for any module that uses buffering, and selectively disable it.
I'm running PHP 5.3.5 on CentOS 4.8, and after doing this I found that eAccelerator needed an upgrade.

I just had this issue as well on a server I own; the root cause was APC. I commented out the "apc.so" extension in the php.ini file, reloaded Apache, and the sites came right back up.

I've tried everything above; zend.enable_gc = 0 was the only config setting that helped me.
PHP 5.3.10-1ubuntu3.2 with Suhosin-Patch (cli) (built: Jun 13 2012 17:19:58)

I had this error using the Mongo 2.2 driver for PHP:
$collection = $db->selectCollection('post');
$collection->ensureIndex(array('someField', 'someOtherField', 'yetAnotherField'));
// ^^ DOESN'T WORK

$collection = $db->selectCollection('post');
$collection->ensureIndex(array('someField', 'someOtherField'));
$collection->ensureIndex(array('yetAnotherField'));
// ^^ WORKS! (?!)

On PHP 5.3, after lots of searching, this is the solution that worked for me:
I disabled the PHP garbage collection for this page by adding:
<?php gc_disable(); ?>
to the end of the problematic page; that made all the errors disappear.

I think this problem can have many causes. In my case, I had two classes with the same name, and one would try to load the other:

// a.php
class A {}

// b.php
class A
{
    public function foo()
    {
        require 'a.php'; // loads the other class A
    }
}

This caused the problem in my case.
(Using the Laravel framework; it happened when actually running php artisan db:seed.)

I had this same issue when I had an incorrect IP for session.save_path for memcached sessions. Changing it to the correct IP fixed the problem.
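For illustration, a sketch of the configuration involved (assuming the memcached session handler; the address is a placeholder):

<?php
// Sessions stored in memcached: save_path must point at a reachable
// memcached instance, or sessions fail in surprising ways.
ini_set('session.save_handler', 'memcached');
ini_set('session.save_path', '192.0.2.10:11211'); // placeholder address
session_start();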

If you are using traits and the trait is loaded after the class (i.e. in the case of autoloading), you need to load the trait beforehand.
https://bugs.php.net/bug.php?id=62339
Note: this bug is very, very random; that's due to its nature.
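A minimal sketch of the workaround (file names are hypothetical):

<?php
// Load the trait definition before the class that uses it, rather
// than relying on the autoloader to get the order right.
require_once 'MyTrait.php';  // contains: trait MyTrait { ... }
require_once 'MyClass.php';  // contains: class MyClass { use MyTrait; }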

For me the problem was using pdo_mysql. The query returned 1960 results. When I tried to return only 1900 records, it worked. So the problem was pdo_mysql and a too-large result array. I rewrote the query with the original mysql extension and it worked.
$link = mysql_connect('localhost', 'user', 'xxxx') or die(mysql_error());
mysql_select_db("db", $link);
Apache did not report any previous errors.
zend_mm_heap corrupted
zend_mm_heap corrupted
zend_mm_heap corrupted
[Mon Jul 30 09:23:49 2012] [notice] child pid 8662 exit signal Segmentation fault (11)
[Mon Jul 30 09:23:50 2012] [notice] child pid 8663 exit signal Segmentation fault (11)
[Mon Jul 30 09:23:54 2012] [notice] child pid 8666 exit signal Segmentation fault (11)
[Mon Jul 30 09:23:55 2012] [notice] child pid 8670 exit signal Segmentation fault (11)

"zend_mm_heap corrupted" means problems with memory management. Can be caused by any PHP module.
In my case installing APC worked out. In theory other packages like eAccelerator, XDebug etc. may help too. Or, if you have that kind of modules installed, try switching them off.

I am writing a PHP extension and also encountered this problem. When I called an external function with complicated parameters from my extension, this error popped up.
The reason was that I had not allocated memory for a (char *) parameter used by the external function. If you are writing the same kind of extension, please pay attention to this.

A lot of people mention disabling XDebug to solve the issue. That obviously isn't viable in many instances, as it's enabled for a reason: to debug your code.
I had the same issue, and noticed that if I stopped listening for XDebug connections in my IDE (PhpStorm 2019.1 EAP), the error stopped occurring.
The actual fix, for me, was removing any existing breakpoints.
One possible explanation for why this is a valid fix is that PhpStorm is sometimes not that good at removing breakpoints that no longer reference valid lines of code after files have been changed externally (e.g. by git).
Edit:
Found the corresponding bug report in the xdebug issue tracker:
https://bugs.xdebug.org/view.php?id=1647

The zend_mm_heap corrupted issue boggled me for a couple of hours. First I disabled and removed memcached and tried some of the settings mentioned in this question's answers; after testing, it seemed to be an issue with the OPcache settings. I disabled OPcache and the problem went away. After that, I re-enabled OPcache, and for me the
core notice: child pid exit signal Segmentation fault
and
zend_mm_heap corrupted
are apparently resolved with changes to
/etc/php.d/10-opcache.ini
I have included the settings I changed here; opcache.revalidate_freq=2 remains commented out, I did not change that value.
opcache.enable=1
opcache.enable_cli=0
opcache.fast_shutdown=0
opcache.memory_consumption=1024
opcache.interned_strings_buffer=128
opcache.max_accelerated_files=60000

For me, it was ZendDebugger that caused the memory leak and made the memory manager crash.
I disabled it and am currently searching for a newer version. If I can't find one, I'll switch to xdebug...

Because I never found a solution to this, I decided to upgrade my LAMP environment. I went to Ubuntu 10.04 LTS with PHP 5.3.x. This seems to have stopped the problem for me.

In my case, I had forgotten the following in the code:
);
I played around and left it out here and there in the code; in some places I got heap corruption, in other cases just a plain ol' segfault:
[Wed Jun 08 17:23:21 2011] [notice] child pid 5720 exit signal Segmentation fault (11)
I'm on Mac OS X 10.6.7 with XAMPP.

I've also noticed this error, along with SIGSEGVs, when running old code that uses '&' to explicitly force references under PHP 5.2+.

Setting
assert.active = 0
in php.ini helped for me (it turned off type assertions in the php5UTF8 library, and zend_mm_heap corrupted went away)

For me the problem was a crashed memcached daemon; PHP was configured to store session information in memcached. It was eating 100% CPU and acting weird. After a memcached restart, the problem was gone.

Since none of the other answers addresses it: I had this problem in PHP 5.4 when I accidentally ran an infinite loop.
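For illustration, the kind of accidental loop meant here (deliberately broken; don't run it as-is):

<?php
// The loop condition never changes, so this never terminates and
// keeps consuming resources until something gives.
$i = 0;
while ($i < 10) {
    // ... work ...
    // $i++ was forgotten
}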

Related

Absurdly high memory allocations with php7.4 under apache windows

What could be the cause of these wildly improbable memory allocation attempts I've been noticing lately on my production server:
PHP Fatal error: Allowed memory size of 1006632960 bytes exhausted (tried to allocate 51002234388 bytes) in D:\wp\wp-includes\load.php on line 1466
This happened in WordPress (see error message), but also in LimeSurvey.
I'm running PHP 7.4.27 with Apache 2.4.21 on Windows Server 2008.
The error is consistent (same number of bytes, same script, same line) and persists after a server restart.
Strangely, I could get rid of the error in a LimeSurvey installation simply by moving all the script files to a different folder.
Edit: Same again now: after downloading all the script files in D:\wp via FTP, recreating the directory D:\wp, and uploading all the files again via FTP, the error vanished. What's going on here?
Thank you!
The cause is most likely plugin related.
I would check:
WordPress error logs
PHP error logs
Apache error logs
server error logs
any pending cron jobs
Isolate and debug any plugin DB queries.
Check whether there are any heavy (DB-generated) reports.
Isolating the offending plugin can be done by using a live backup and deleting/disabling plugins one at a time.
Increasing the memory limit may be done through WordPress, but it may not take effect unless it is also configured at the server or PHP level.
1 - Check the PHP 7.4 memory limit (check php.ini; on Windows, also check any per-folder configuration).
2 - Set the memory limit in wp-config.php (see the sketch after this list).
3 - Set the memory limit in .htaccess.
4 - Check the plugins.
5 - Possibly there is a hidden malicious code file:
5.1 - Look for files with the same name as a folder. Example: for a folder named theme, there is a file inside it named .theme.php.
5.2 - Analyze your index.php, wp-config.php and .htaccess, checking whether they have been altered or had code injected.
6 - Analyze the logs.
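For step 2, a sketch of the WordPress constants involved (example values; they go in wp-config.php above the "stop editing" line):

<?php
// Raise the WordPress memory limits; WP_MEMORY_LIMIT applies to the
// front end, WP_MAX_MEMORY_LIMIT to admin/cron contexts.
define('WP_MEMORY_LIMIT', '256M');
define('WP_MAX_MEMORY_LIMIT', '512M');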
Now, after a while, I strongly suspect it was the OPcache functionality that caused the error. Maybe after the update, some of the scripts conflicted with untouched ones left in the OPcache.
Turning off the OPcache did the trick (till now :)).

imagefill() causing 'Premature end of script headers'

I'm posting because after hours of searching I'm utterly confounded. Here's the deal: my Laravel application uses the PHP ImageWorkshop bundle. Everything seems to be working fine, except that if I try to make a resizeInPixel() or cropInPixel() call (or similar), the server throws an internal server error. If I investigate the error log I see:
Premature end of script headers: index.php
This only occurs when I use the resize- and crop-related methods (i.e. image processing). I can initFromPath() with no issue, and I can use the save() method without issue. Only the image processing methods cause the internal server error.
I've also read online that this can be the result of a suphp_log file exceeding 2GB. I've tracked down and cleaned out that file, but to no avail.
Any thoughts are most welcome! Even if they're just general "have you tried...".
UPDATE
I've narrowed it down to a particular line in the Image Workshop code. This line is causing the error:
imagefill($image, 0, 0, $color);
Additionally, this error only occurs when the color is created using imagecolorallocatealpha(), NOT when it is created using imagecolorallocate().
There are some great hints for solving this issue at Liquidweb.com. My money is on #2 (the RLimitCPU/RLimitMEM item below), because you are getting the error when doing image manipulations:
Sometimes when executing a script you will see an error similar to the following:
Premature end of script headers: /home/directory/public_html/index.php
This error occurs because the server is expecting a complete set of HTTP headers (one or more followed by a blank line), and it doesn’t get them. This can be caused by several things:
Upgrading or downgrading to a different version of PHP can leave residual options in the httpd.conf. Check the current version of PHP using php -v on the command line and search for any lines mentioning another version in the httpd.conf. If you find them, comment them out, distill the httpd.conf and restart apache.
The RLimitCPU and RLimitMEM directives in the httpd.conf may also be responsible for the error if a script was killed due to a resource limit.
A configuration problem in suEXEC, mod_perl, or another third party module can often interfere with the execution of scripts and cause the error. If these are the cause, additional information relating to specifics will be found in the apache error_log.
If suphp’s log reaches 2GB in size or larger you may see the premature end of scripts headers error. See what the log contains and either gzip it or null it. Restart apache and then deal with any issues that the suphp log brought to light. The suphp log is located at: /usr/local/apache/logs/suphp_log
The script’s permissions may also cause this error. CGI scripts can only access resources allowed for the User and Group specified in the httpd.conf. In this case, the error may simply be pointing out that an unauthorized user is attempting to access a script.
UPDATE:
After some more info in the comments, I still think this is a memory-related thing.
According to this SO wiki: About gdlib
Warning: Image functions are very memory intensive. Be sure to set memory_limit high enough
What is your PHP memory_limit? Can you crank it up a bit?
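If it helps, a quick way to check the limit and bump it for a single request while testing (a permanent change belongs in php.ini or .htaccess):

<?php
echo ini_get('memory_limit'), PHP_EOL;  // see the current limit
ini_set('memory_limit', '512M');        // example value for testing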

PHP Redis timeout, read error on connection?

"PHP Fatal error: Uncaught exception 'RedisException' with message 'read error on connection'"
The driver here is phpredis
$redis->blpop('a', 0);
This always times out after ~1 minute. My redis.conf says timeout 0, and $redis->getOption(Redis::OPT_READ_TIMEOUT) returns double(0).
It has never timed out if I do this: $redis->setOption(Redis::OPT_READ_TIMEOUT, -1);
Why do I need -1? Redis documentation says timeout 0 in redis.conf should never time me out.
"By default recent versions of Redis don't close the connection with the client if the client is idle for many seconds: the connection will remain open forever."
The current solution I know of is to disable persistent connections for phpredis, as they have been reported as buggy since October 2011. If you’re using php-fpm or other threaded models, the library specifically disables persistent connections.
Reducing the frequency of this error might be possible by adjusting the php.ini default_socket_timeout value.
Additionally, read timeout configuration in phpredis is not universally supported. The feature (look for OPT_READ_TIMEOUT) was introduced in tag 2.2.3:
$redis->connect($host, $port, $timeout1);
// .....
$redis->blpop($key, $timeout2);
where timeout1 must be longer than timeout2.
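Putting that together, a minimal sketch (assuming phpredis >= 2.2.3 and a local Redis server):

<?php
$redis = new Redis();
$redis->connect('127.0.0.1', 6379, 2.5);         // connect timeout (s)
$redis->setOption(Redis::OPT_READ_TIMEOUT, -1);  // never time out reads
$item = $redis->blpop('a', 0);                   // 0 = block forever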
After a lot of study of articles, and doing my own straces of redis and php, it seemed the issue was easily fixed by this solution. The main issue in my use case was that the Redis server was not able to fork a process to save the in-memory writes to the on-disk database.
I left all the timeout values in php.ini and redis.conf as they were, without making the hacky changes suggested, and tried the above solution alone; this 'read error on connection' issue, which had been unfixable by all the suggested timeout changes across the PHP and Redis conf files, went away.
I also saw some suggestions about increasing the limit on file descriptors to 100000, etc. I am running my use case on a cloud server with the file descriptor limit at 1024, and it runs perfectly even with that limit.
I added ini_set('default_socket_timeout', -1) to my PHP program, but I found it didn't work immediately.
However, after 3 minutes, when I started running the PHP program again, I at last found the reason: the Redis connection was not persistent.
So I set timeout = 0 in my redis.conf, and the problem was solved!

PHP script throws an impossible error - on one server, not on another

I'm suffering from a little error and can't find a clue where to begin...
[11-Oct-2012 22:01:45] PHP Fatal error: Call to a member function
getVariableIndices() on a non-object in
/var/www/web1/html/inc/ProjectCache.php on line 545
Sounds quite simple, but here is the code (with line numbers):
524 $itemData = $queryItems->fetchRows();
525 foreach ($questionData as $qData) {
526 $qp = QuestionPack::createWithData($qData, $itemData);
<snip>
538 $question = $qp->createGenerator(null);
539 if (!is_object($question)) {
540 trigger_error('Could not initialize generator ...', E_USER_WARNING);
541 continue;
542 }
543
544 // Question variables
545 $vix = $question->getVariableIndices();
<snip>
597 $question->destroy();
598 }
The method createGenerator() should always return an object. I added the subsequent IF statement for debugging purposes - one never knows whether one's thinking is completely wrong...
The destroy method sets some instance variables (references to other objects) to null, so that the garbage collector won't get stuck on circular references.
This problem occurs in exactly one of about 10,000 projects on a production system (Debian, PHP 5.3.3-7+squeeze14). Using the same script and the same data from the database, I fail to replicate it on my Windows development system (PHP 5.3.1).
I should note that hundreds of thousands of method calls are made before the script comes to this statement (about 14 sec. of processing time, because a cache is initialized during this run).
My best explanation for now is that somewhere in the megabytes of PHP script I trigger a buffer overflow or mislead the garbage collector, so that the object is released although it is still in use. However, the error always triggers in the same project, always at the same line (and it moves with this statement if I place other debugging code before it), which doesn't fit that theory at all.
Restarting the webserver did (of course) not help. Dropping the previous cache file did not help either. This is where I am stuck. Does anyone else find this weird? Does anyone have an idea where to start?
Thanks
BurninLeo
I believe I have fixed a similar odd issue in PHP 5.3.2 by disabling the garbage collector, adding:
zend.enable_gc = Off
to php.ini (and of course restarting Apache or whatever application server is parsing the PHP).
If that doesn't work I would suggest comparing your phpinfo() from a working server to the one with an issue. Preferably comparing linux to linux or windows to windows. Maybe something in the configuration will jump out at you.
I would also raise the error level to include notices.
Finally, I would use gettype() and get_class() on $question, when it's not an object, to see what it IS:
http://www.php.net/manual/en/function.gettype.php
http://www.php.net/manual/en/function.get-class.php
Possibly call debug_backtrace() as well when the error occurs:
http://php.net/manual/en/function.debug-backtrace.php
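A minimal sketch of those diagnostics, placed right where the failing call happens (using the question's own $question variable):

if (!is_object($question)) {
    error_log('got ' . gettype($question) . ' instead of an object');
    error_log(print_r(debug_backtrace(), true));  // how did we get here?
}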
Good luck!

What is the root cause of 'child pid 10708 exit signal Segmentation fault (11)' errors?

I am getting more and more child pid 10708 exit signal Segmentation fault (11) errors.
What is the root cause, and how do I fix it?
Is the PHP ini memory setting associated with this?
I am using an apache2 server with PHP.
Thanks in advance.
It is entirely possible that the memory_limit variable in your php.ini is causing this problem. It certainly did in my case.
By lowering the memory_limit, I was able to resolve these errors.
The root cause is generally that the code is doing something wrong. A segmentation fault is generally what happens when a program does something that's not allowed, like trying to access memory that isn't valid.
If you're after a more specific cause, I'd be happy to list the thousand or so that spring to mind immediately :-)
On a more serious note, no. Short of knowing a great deal more, there is no easy way to give a specific answer.
The first step would be to figure out which program belongs to that process ID. Then you can start investigating why that program is faulting.
Though two general answers have already been posted, I want to give one example that I ran into recently.
If your code runs into infinite recursion (which causes a stack overflow), a child pid xxxxx exit signal Segmentation fault (11) error will occur.
Sample code:
function recursiveFunc() {
    // some operation
    recursiveFunc();
}
