I have a Laravel Spark project that uses Horizon to manage a job queue with Redis.
Locally (on my Homestead box, on macOS) everything works as expected, but on our new DigitalOcean (Forge-provisioned) Droplet (a memory-optimized VPS: 256GB, 32 vCPUs, 10TB, 1x 800GB), I keep getting the error:
PDOException: Packets out of order. Expected 0 received 1. Packet size=23
Or some variation of that error, where the packet size info may be different.
After many hours/days of debugging and research, I have come across many posts on Stack Overflow and elsewhere that suggest this can be fixed by doing a number of things, listed below:
Set PDO::ATTR_EMULATE_PREPARES to true in my database.php config. This has absolutely no effect on the problem, and actually introduces another issue, whereby integers are cast as strings.
Set DB_HOST to 127.0.0.1 instead of localhost, so that it uses TCP instead of a UNIX socket. Again, this has no effect.
Set DB_SOCKET to the socket path listed in MySQL by logging into MySQL (MariaDB) and running show variables like '%socket%';, which lists the socket path as /run/mysqld/mysqld.sock, while leaving DB_HOST set to localhost. This has no effect either. One thing I did note was that the pdo_mysql.default_socket variable is set to /var/run/mysqld/mysqld.sock; I'm not sure if this is part of the problem. (A sketch of the corresponding config/database.php entry appears after this list.)
I have massively increased the MySQL configuration settings found in /etc/mysql/mariadb.conf.d/50-server.cnf to the following:
key_buffer_size = 2048M
max_allowed_packet = 2048M
max_connections = 1000
thread_concurrency = 100
query_cache_size = 256M
I must admit that changing these settings was a last-resort, clutching-at-straws type of move. It did alleviate the issue to some degree, but it did not fix it completely; MySQL still fails 99% of the time, albeit at a later stage.
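Coming back to the DB_SOCKET attempt above: for reference, a hedged sketch of where that setting lands in config/database.php. The keys match the stock Laravel mysql connection; the default values shown are illustrative, not my actual credentials.

'mysql' => [
    'driver'      => 'mysql',
    'host'        => env('DB_HOST', '127.0.0.1'),
    'port'        => env('DB_PORT', '3306'),
    'database'    => env('DB_DATABASE', 'forge'),
    'username'    => env('DB_USERNAME', 'forge'),
    'password'    => env('DB_PASSWORD', ''),
    // Only used when connecting over a UNIX socket rather than TCP.
    'unix_socket' => env('DB_SOCKET', '/run/mysqld/mysqld.sock'),
],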
In terms of the queue, I have a total of 1,136 workers split between 6 supervisors/queues, all handled via Laravel Horizon, which runs as a daemon.
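To give a rough picture of the layout (this is an illustrative sketch, not my real config; supervisor names and process counts are made up, and the exact keys vary slightly between Horizon versions), the supervisors are defined in config/horizon.php along these lines:

'environments' => [
    'production' => [
        'supervisor-imports' => [
            'connection' => 'redis',
            'queue'      => ['imports'],
            'balance'    => 'simple',
            'processes'  => 200,
            'tries'      => 3,
        ],
        'supervisor-broadcasts' => [
            'connection' => 'redis',
            'queue'      => ['broadcasts'],
            'balance'    => 'simple',
            'processes'  => 100,
            'tries'      => 3,
        ],
    ],
],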
I am also using the Laravel Websockets PHP package for broadcasting, which also runs as a daemon.
My current environment configuration is as follows (sensitive info omitted).
APP_NAME="App Name"
APP_ENV=production
APP_DEBUG=false
APP_KEY=thekey
APP_URL=https://appurl.com
LOG_CHANNEL=single
DB_CONNECTION=mysql
DB_HOST=127.0.0.1
DB_PORT=3306
DB_DATABASE=database
DB_USERNAME=username
DB_PASSWORD=password
BROADCAST_DRIVER=pusher
CACHE_DRIVER=file
QUEUE_CONNECTION=redis
SESSION_DRIVER=file
SESSION_LIFETIME=120
REDIS_HOST=127.0.0.1
REDIS_PASSWORD=null
REDIS_PORT=6379
MAIL_MAILER=smtp
MAIL_HOST=smtp.gmail.com
MAIL_PORT=587
MAIL_USERNAME=name@email.com
MAIL_PASSWORD=password
MAIL_ENCRYPTION=tls
MAIL_FROM_ADDRESS=name@email.com
MAIL_FROM_NAME="${APP_NAME}"
AWS_ACCESS_KEY_ID=
AWS_SECRET_ACCESS_KEY=
AWS_DEFAULT_REGION="us-east-1"
AWS_BUCKET=
PUSHER_APP_ID=appid
PUSHER_APP_KEY=appkey
PUSHER_APP_SECRET=appsecret
PUSHER_APP_CLUSTER=mt1
MIX_PUSHER_APP_KEY="${PUSHER_APP_KEY}"
MIX_PUSHER_APP_CLUSTER="${PUSHER_APP_CLUSTER}"
AUTHY_SECRET=
CASHIER_CURRENCY=usd
CASHIER_CURRENCY_LOCALE=en
CASHIER_MODEL=App\Models\User
STRIPE_KEY=stripekey
STRIPE_SECRET=stripesecret
# ECHO SERVER
LARAVEL_WEBSOCKETS_PORT=port
The server setup is as follows:
Max File Upload Size: 1024
Max Execution Time: 300
PHP Version: 7.4
MariaDB Version: 10.3.22
I have checked all logs (see below) at the time the MySQL server crashes/goes away, and there is nothing in the MySQL logs at all. No error whatsoever. I also don't see anything in:
/var/log/nginx/error.log
/var/log/nginx/access.log
/var/log/php7.4-fpm.log
I'm currently still digging through and debugging, but right now, I'm stumped. This is the first time I've ever come across this error.
Could this be down to hitting the database (read/write) too fast?
A little information on how the queues work.
I have an initial controller that dispatches a job to the queue.
Once this job completes, it fires an event which then starts the process of running several other listeners/events in sequence, all of which depend on the previous jobs completing before new events are fired and new listeners/jobs take up the work.
In total, there are 30 events that are broadcast.
In total, there are 30 listeners.
In total there are 5 jobs.
These all work sequentially based on the listener/job that was run and the event that it fires (a simplified sketch of the pattern follows).
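A simplified sketch of that pattern, with made-up class names (my real jobs, events, and listeners differ, but the shape is the same): a queued job does its work, fires an event, and a listener queues the next job.

use Illuminate\Contracts\Queue\ShouldQueue;

// Queued job: does its work, then fires an event to kick off the next step.
class ProcessChunk implements ShouldQueue
{
    public $chunkId;

    public function __construct(int $chunkId)
    {
        $this->chunkId = $chunkId;
    }

    public function handle()
    {
        // ... heavy work for this chunk ...
        event(new ChunkProcessed($this->chunkId));
    }
}

// Event carrying the result of the previous step.
class ChunkProcessed
{
    public $chunkId;

    public function __construct(int $chunkId)
    {
        $this->chunkId = $chunkId;
    }
}

// Listener that dispatches the next job in the sequence.
class QueueNextChunk
{
    public function handle(ChunkProcessed $event)
    {
        dispatch(new ProcessChunk($event->chunkId + 1));
    }
}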
I have also monitored laravel.log live, and when the crash occurs nothing is logged at all. I do occasionally get production.ERROR: Failed to connect to Pusher., whether MySQL crashes or not, so I don't think that has any bearing on this problem.
I even noticed that the Laravel API rate limit was being hit, so I made sure to drastically increase that from 60 to 500. Still no joy.
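Depending on the Laravel version, that limit is either the throttle:60,1 entry in the api middleware group of app/Http/Kernel.php or a RateLimiter definition; the excerpt below is a sketch of the Kernel.php form of that change, not my full Kernel.

// app/Http/Kernel.php (excerpt)
protected $middlewareGroups = [
    'api' => [
        'throttle:500,1', // raised from the default throttle:60,1
        \Illuminate\Routing\Middleware\SubstituteBindings::class,
    ],
];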
Lastly, it doesn't seem to matter which Event, Job, or Listener is running, as the error occurs on random ones, so I'm not sure it's code-specific, although it may well be.
Hopefully, I've provided enough background and detailed information to get some help with this, but if I've missed anything, please do let me know and I'll add it to the question. Thanks.
For me what fixed it was increasing the max packet size.
In my.cnf, I added:
max_allowed_packet=200M
And then service mysql stop, service mysql start, and it worked :)
We were getting a similar PHP warning about packets out of order.
What solved it for us is increasing max_connections in the MySQL my.cnf.
Your current max_connections value is probably 1024. We increased ours to 4096 and the warning went away.
In MySQL you can see your current max_connections with this command:
SHOW VARIABLES LIKE "%max_connections%";
or
mysqladmin variables | grep max_connections
I hit a similar issue that was reproducible; it was a programming error:
I was using an unbuffered database cursor and did not close the cursor before firing off other DB operations. The exact error thrown was Packets out of order. Expected 1 received 2.
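A hedged sketch of the fix for that situation (connection details, query, and table are placeholders): drain or close the unbuffered result set before issuing the next statement on the same connection.

$pdo = new PDO('mysql:host=127.0.0.1;dbname=app', 'user', 'secret', [
    PDO::MYSQL_ATTR_USE_BUFFERED_QUERY => false, // unbuffered cursor
]);

$stmt = $pdo->query('SELECT id, payload FROM jobs');
while ($row = $stmt->fetch(PDO::FETCH_ASSOC)) {
    // ... process the row ...
}
$stmt->closeCursor(); // release the result set before reusing the connection

$pdo->exec('UPDATE jobs SET processed = 1'); // follow-up statement now safe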
The first thing to check is the wait_timeout of the MySQL server, in relation to the time that your application takes between queries. I'm able to recreate this error consistently by sleeping longer than wait_timeout seconds between SQL queries.
If your application performs a query, then does something else for a while that takes longer than that period, the MySQL server terminates the connection, but your PHP code may not be aware that the server has disconnected. If the PHP application then tries to issue another query using the closed connection, it will generate this error (in my tests, consistently with Expected 0 received 1).
You could fix this by:
Extending the wait_timeout, either globally on the server, or on a per-session basis using the command SET session wait_timeout=<new_value>; (see the sketch after this list)
Catching the error and retrying once
Preemptively reconnecting to the server when you know that more than wait_timeout seconds have elapsed between queries.
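For example, the per-session variant can be issued from PHP right after connecting; the value and credentials below are only illustrative.

$pdo = new PDO('mysql:host=127.0.0.1;dbname=app', 'user', 'secret'); // placeholder credentials
// Allow up to 10 minutes of idle time between queries on this connection.
$pdo->exec('SET SESSION wait_timeout = 600');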
This error could probably occur because of other problems as well.
I would check that you are using a persistent connection and not connecting to the server over and over again. Sometimes the connection process, especially with many simultaneous workers, causes a lot of network overhead that could cause a problem such as this.
Also, on a production, high-transaction-volume server, weird network issues sometimes happen, and this may just occur occasionally, even, it seems, over the loopback interface in your case.
In any case, it is best to write your code so that it can gracefully handle errors and retry. Often, you could wrap your SQL query in a try..catch to catch this error when it happens and try again.
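A rough sketch of that catch-and-retry idea, not a drop-in solution: it assumes the connection uses PDO::ERRMODE_EXCEPTION, and the DSN and credentials are placeholders.

function queryWithRetry(PDO &$pdo, string $sql)
{
    try {
        return $pdo->query($sql);
    } catch (PDOException $e) {
        // The server may have gone away: reconnect once and retry the query.
        $pdo = new PDO('mysql:host=127.0.0.1;dbname=app', 'user', 'secret', [
            PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
        ]);
        return $pdo->query($sql);
    }
}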
MySQL 8: in mysql.cnf, comment out all of the following:
# For error - ( MySQL server has gone away )
#wait_timeout=90
#net_read_timeout=90
#net_write_timeout=90
#interactive_timeout=300
and that seems to have helped in my case.
All of a sudden I've been having problems with my application that I've never had before. I decided to check Apache's error log, and I found an error message saying "zend_mm_heap corrupted". What does this mean?
OS: Fedora Core 8
Apache: 2.2.9
PHP: 5.2.6
After much trial and error, I found that if I increase the output_buffering value in the php.ini file, this error goes away.
This is not a problem that is necessarily solvable by changing configuration options.
Changing configuration options will sometimes have a positive impact, but it can just as easily make things worse, or do nothing at all.
The nature of the error is this:
#include <stdio.h>
#include <string.h>
#include <stdlib.h>

int main(void) {
    void **mem = malloc(sizeof(char)*3);
    void *ptr;

    /* read past end */
    ptr = (char*) mem[5];

    /* write past end */
    memcpy(mem[5], "whatever", sizeof("whatever"));

    /* free invalid pointer */
    free((void*) mem[3]);

    return 0;
}
The code above can be compiled with:
gcc -g -o corrupt corrupt.c
Executing the code with valgrind you can see many memory errors, culminating in a segmentation fault:
krakjoe@fiji:/usr/src/php-src$ valgrind ./corrupt
==9749== Memcheck, a memory error detector
==9749== Copyright (C) 2002-2013, and GNU GPL'd, by Julian Seward et al.
==9749== Using Valgrind-3.10.1 and LibVEX; rerun with -h for copyright info
==9749== Command: ./corrupt
==9749==
==9749== Invalid read of size 8
==9749== at 0x4005F7: main (an.c:10)
==9749== Address 0x51fc068 is 24 bytes after a block of size 16 in arena "client"
==9749==
==9749== Invalid read of size 8
==9749== at 0x400607: main (an.c:13)
==9749== Address 0x51fc068 is 24 bytes after a block of size 16 in arena "client"
==9749==
==9749== Invalid write of size 2
==9749== at 0x4C2F7E3: memcpy@@GLIBC_2.14 (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==9749== by 0x40061B: main (an.c:13)
==9749== Address 0x50 is not stack'd, malloc'd or (recently) free'd
==9749==
==9749==
==9749== Process terminating with default action of signal 11 (SIGSEGV): dumping core
==9749== Access not within mapped region at address 0x50
==9749== at 0x4C2F7E3: memcpy@@GLIBC_2.14 (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==9749== by 0x40061B: main (an.c:13)
==9749== If you believe this happened as a result of a stack
==9749== overflow in your program's main thread (unlikely but
==9749== possible), you can try to increase the size of the
==9749== main thread stack using the --main-stacksize= flag.
==9749== The main thread stack size used in this run was 8388608.
==9749==
==9749== HEAP SUMMARY:
==9749== in use at exit: 3 bytes in 1 blocks
==9749== total heap usage: 1 allocs, 0 frees, 3 bytes allocated
==9749==
==9749== LEAK SUMMARY:
==9749== definitely lost: 0 bytes in 0 blocks
==9749== indirectly lost: 0 bytes in 0 blocks
==9749== possibly lost: 0 bytes in 0 blocks
==9749== still reachable: 3 bytes in 1 blocks
==9749== suppressed: 0 bytes in 0 blocks
==9749== Rerun with --leak-check=full to see details of leaked memory
==9749==
==9749== For counts of detected and suppressed errors, rerun with: -v
==9749== ERROR SUMMARY: 4 errors from 3 contexts (suppressed: 0 from 0)
Segmentation fault
In case you didn't know, you have just figured out that mem is heap-allocated memory; the heap refers to the region of memory available to the program at runtime, because the program explicitly requested it (with malloc in our case).
If you play around with the terrible code, you will find that not all of those obviously incorrect statements result in a segmentation fault (a fatal terminating error).
I explicitly made those errors in the example code, but the same kinds of errors happen very easily in a memory-managed environment: if some code doesn't maintain the refcount of a variable (or some other symbol) in the correct way, for example by freeing it too early, another piece of code may read from already-freed memory; if it somehow stores the address wrong, another piece of code may write to invalid memory; the memory may be freed twice, and so on.
These are not problems that can be debugged in PHP, they absolutely require the attention of an internals developer.
The course of action should be:
Open a bug report on http://bugs.php.net
If you have a segfault, try to provide a backtrace
Include as much configuration information as seems appropriate; in particular, if you are using OPcache, include its optimization level.
Keep checking the bug report for updates, more information may be requested.
If you have opcache loaded, disable optimizations
I'm not picking on OPcache, it's great, but some of its optimizations have been known to cause faults.
If that doesn't work, even though your code may be slower, try unloading opcache first.
If any of this changes or fixes the problem, update the bug report you made.
Disable all unnecessary extensions at once.
Begin to enable all your extensions individually, thoroughly testing after each configuration change.
If you find the problem extension, update your bug report with more info.
Profit.
There may not be any profit ... as I said at the start, you may be able to find a way to change your symptoms by messing with configuration, but this is extremely hit and miss, and it doesn't help the next time you get the same zend_mm_heap corrupted message; there are only so many configuration options.
It's really important that we create bug reports when we find bugs; we cannot assume that the next person to hit the bug is going to do it ... more likely than not, the actual resolution is in no way mysterious if you make the right people aware of the problem.
USE_ZEND_ALLOC
If you set USE_ZEND_ALLOC=0 in the environment, this disables Zend's own memory manager; Zend's memory manager ensures that each request has its own heap, that all memory is freed at the end of a request, and it is optimized for allocating chunks of memory of just the right size for PHP.
Disabling it will disable those optimizations, more importantly it will likely create memory leaks, since there is a lot of extension code that relies upon the Zend MM to free memory for them at the end of a request (tut, tut).
It may also hide the symptoms, but the system heap can be corrupted in exactly the same way as Zend's heap.
It may seem to be more tolerant or less tolerant, but it cannot fix the root cause of the problem.
The ability to disable it at all is for the benefit of internals developers; you should never deploy PHP with the Zend MM disabled.
I was getting this same error under PHP 5.5, and increasing the output buffering didn't help. I wasn't running APC either, so that wasn't the issue. I finally tracked it down to OPcache; I simply had to disable it for the CLI. There is a specific setting for this:
opcache.enable_cli=0
Once that was switched, the zend_mm_heap corrupted error went away.
If you are on a Linux box, try this on the command line:
export USE_ZEND_ALLOC=0
Check for unset()s. Make sure you don't unset() references to $this (or equivalents) in destructors, and that unset()s in destructors don't cause the reference count of the same object to drop to 0. I've done some research and found that's what usually causes the heap corruption. A purely illustrative sketch of the pattern to look for follows.
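Purely as an illustration of the shape to look for (this snippet is made up and will not, by itself, crash a current PHP), the pattern is a destructor that unset()s a shared reference while objects are being torn down:

class Item
{
    public static $registry = [];
    public $id;

    public function __construct(int $id)
    {
        $this->id = $id;
        self::$registry[$id] = $this; // shared reference back to $this
    }

    public function __destruct()
    {
        // The kind of unset() to audit: it drops a reference from inside a
        // destructor, which can send a refcount to zero mid-teardown.
        unset(self::$registry[$this->id]);
    }
}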
There is a PHP bug report about the zend_mm_heap corrupted error. See the comment [2011-08-31 07:49 UTC] f dot ardelian at gmail dot com for an example on how to reproduce it.
I have a feeling that all the other "solutions" (change php.ini, compile PHP from source with less modules, etc.) just hide the problem.
For me none of the previous answers worked, until I tried:
opcache.fast_shutdown=0
That seems to work so far.
I'm using PHP 5.6 with PHP-FPM and Apache proxy_fcgi, if that matters...
In my case, the cause of this error was that one of the arrays was becoming very big. I set my script to reset the array on every iteration, and that sorted the problem (a minimal sketch follows).
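A minimal sketch of that workaround, with hypothetical helper names: clear the accumulating array on each pass so memory use stays bounded per iteration.

$rows = [];
foreach ($batches as $batch) {             // $batches is whatever the script iterates over
    $rows = [];                            // reset instead of letting it grow across the run
    foreach ($batch as $record) {
        $rows[] = transform($record);      // hypothetical per-record work
    }
    handleBatch($rows);                    // hypothetical consumer of one batch
}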
As per the bug tracker, set opcache.fast_shutdown=0. Fast shutdown uses the Zend memory manager to clean up its mess; this setting disables that.
I don't think there is one answer here, so I'll add my experience. I saw this same error along with random httpd segfaults on a cPanel server. The symptom in question was that Apache would randomly reset the connection ("No data received" in Chrome, or "The connection was reset" in Firefox). These were seemingly random: most of the time it worked, sometimes it did not.
When I arrived on the scene, output buffering was off. Since this thread hinted at output buffering, I turned it on (output_buffering = 4096) to see what would happen. At that point, they all started showing the errors, which was good, since the error was now repeatable.
I went through and started disabling extensions. Among them were eAccelerator, PDO, ionCube Loader, and plenty of others that looked suspicious, but none of that helped.
I finally identified the culprit PHP extension as "homeloader.so", which appears to be some kind of cPanel easy-installer module. After removing it, I haven't experienced any other issues.
On that note, this appears to be a generic error message, so your mileage will vary with all of these answers; the best course of action you can take is:
Make the error repeatable (what conditions?) every time
Find the common factor
Selectively disable any PHP modules, options, etc (or, if you're in a rush, disable them all to see if it helps, then selectively re-enable them until it breaks again)
If this fails to help, many of these answers hint that it could be code-related. Again, the key is to make the error repeatable on every request so you can narrow it down. If you suspect a piece of code is doing this, then once the error is repeatable, remove code until the error stops. Once it stops, you know the last piece of code you removed was the culprit.
Failing all of the above, you could also try things like:
Upgrading or recompiling PHP. Hope whatever bug is causing your issue is fixed.
Move your code to a different (testing) environment. If this fixes the issue, what changed? php.ini options? PHP version? etc...
Good luck.
I wrestled with this issue for a week. This worked for me, or at least so it seemed.
In php.ini make these changes
report_memleaks = Off
report_zend_debug = 0
My set up is
Linux ubuntu 2.6.32-30-generic-pae #59-Ubuntu SMP
with PHP Version 5.3.2-1ubuntu4.7
Ultimately, though, this didn't work.
So I used a benchmark script and tried to record where the script was hanging up.
I discovered that just before the error, a PHP object was instantiated and took more than 3 seconds to complete what it was supposed to do, whereas in the previous loop iterations it took at most 0.4 seconds. I ran this test quite a few times, with the same result every time. Instead of making a new object every time (there is a long loop here), I decided to reuse the object. I have tested the script more than a dozen times so far, and the memory errors have disappeared!
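In code terms, the change was roughly this shape (the class and method names here are made up):

$builder = new ReportBuilder();   // construct once, outside the long loop
foreach ($items as $item) {
    $builder->reset();            // clear per-iteration state instead of re-instantiating
    $builder->process($item);
}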
Look for any module that uses buffering, and selectively disable it.
I'm running PHP 5.3.5 on CentOS 4.8, and after doing this I found that eAccelerator needed an upgrade.
I just had this issue as well on a server I own, and the root cause was APC. I commented out the "apc.so" extension in the php.ini file, reloaded Apache, and the sites came right back up.
I've tried everything above; zend.enable_gc = 0 was the only config setting that helped me.
PHP 5.3.10-1ubuntu3.2 with Suhosin-Patch (cli) (built: Jun 13 2012 17:19:58)
I had this error using the Mongo 2.2 driver for PHP:
$collection = $db->selectCollection('post');
$collection->ensureIndex(array('someField', 'someOtherField', 'yetAnotherField'));
^^DOESN'T WORK
$collection = $db->selectCollection('post');
$collection->ensureIndex(array('someField', 'someOtherField'));
$collection->ensureIndex(array('yetAnotherField'));
^^ WORKS! (?!)
On PHP 5.3, after lots of searching, this is the solution that worked for me:
I've disabled the PHP garbage collection for this page by adding:
<? gc_disable(); ?>
to the end of the problematic page, that made all the errors disappear.
I think a lot of reasons can cause this problem. In my case, I gave two classes the same name, and one tried to load the other:
// in file a.php
class A {}

// in file b.php
class A
{
    public function foo()
    {
        // load a.php
    }
}
And that caused this problem in my case (using the Laravel framework, running php artisan db:seed for real).
I had this same issue when I had an incorrect IP for session.save_path for memcached sessions. Changing it to the correct IP fixed the problem.
If you are using traits and the trait is loaded after the class (i.e. in the case of autoloading), you need to load the trait beforehand (see the sketch below).
https://bugs.php.net/bug.php?id=62339
Note: this bug is very, very random due to its nature.
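A hedged sketch of the workaround (file and trait names are made up): require the trait's file explicitly before the class that uses it, instead of leaving the order to the autoloader.

require_once __DIR__ . '/Traits/LogsActivity.php'; // defines trait LogsActivity
require_once __DIR__ . '/Models/Post.php';         // class Post { use LogsActivity; ... }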
For me, the problem was using pdo_mysql. The query returned 1960 results; when I tried returning only 1900 records, it worked. So the problem is pdo_mysql combined with a too-large array. I rewrote the query with the original mysql extension and it worked:
$link = mysql_connect('localhost', 'user', 'xxxx') or die(mysql_error());
mysql_select_db("db", $link);
Apache did not report any previous errors.
zend_mm_heap corrupted
zend_mm_heap corrupted
zend_mm_heap corrupted
[Mon Jul 30 09:23:49 2012] [notice] child pid 8662 exit signal Segmentation fault (11)
[Mon Jul 30 09:23:50 2012] [notice] child pid 8663 exit signal Segmentation fault (11)
[Mon Jul 30 09:23:54 2012] [notice] child pid 8666 exit signal Segmentation fault (11)
[Mon Jul 30 09:23:55 2012] [notice] child pid 8670 exit signal Segmentation fault (11)
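As an alternative sketch (not what the answer above did): fetching rows one at a time instead of calling fetchAll() keeps the ~2000 results out of a single giant PHP array. The credentials and query below are placeholders.

$pdo  = new PDO('mysql:host=localhost;dbname=db', 'user', 'xxxx');
$stmt = $pdo->query('SELECT * FROM posts');
while ($row = $stmt->fetch(PDO::FETCH_ASSOC)) {
    // handle one row at a time rather than building the full result array
}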
"zend_mm_heap corrupted" means problems with memory management. Can be caused by any PHP module.
In my case installing APC worked out. In theory other packages like eAccelerator, XDebug etc. may help too. Or, if you have that kind of modules installed, try switching them off.
I am writing a PHP extension and also encountered this problem. When I called an external function with complicated parameters from my extension, this error popped up.
The reason was that I had not allocated memory for a parameter (char *) in the external function. If you are writing the same kind of extension, please pay attention to this.
A lot of people are mentioning disabling XDebug to solve the issue. This obviously isn't viable in a lot of instances, as it's enabled for a reason - to debug your code.
I had the same issue, and noticed that if I stopped listening for XDebug connections in my IDE (PhpStorm 2019.1 EAP), the error stopped occurring.
The actual fix, for me, was removing any existing breakpoints.
A possibility for this being a valid fix is that PhpStorm is sometimes not that good at removing breakpoints that no longer reference valid lines of code after files have been changed externally (e.g. by git).
Edit:
Found the corresponding bug report in the xdebug issue tracker:
https://bugs.xdebug.org/view.php?id=1647
The issue with zend_mm_heap corrupted boggled me for a couple of hours. First, I disabled and removed memcached and tried some of the settings mentioned in this question's answers; after testing, this seemed to be an issue with OPcache settings. I disabled OPcache and the problem went away. After that I re-enabled OPcache, and for me the
core notice: child pid exit signal Segmentation fault
and
zend_mm_heap corrupted
are apparently resolved with changes to
/etc/php.d/10-opcache.ini
I included the settings I changed here; opcache.revalidate_freq=2 remains commented out, and I did not change that value.
opcache.enable=1
opcache.enable_cli=0
opcache.fast_shutdown=0
opcache.memory_consumption=1024
opcache.interned_strings_buffer=128
opcache.max_accelerated_files=60000
For me, it was the ZendDebugger that caused the memory leak and caused the MemoryManager to crash.
I disabled it and I'm currently searching for a newer version. If I can't find one, I'm going to switch to xdebug...
Because I never found a solution to this, I decided to upgrade my LAMP environment. I went to Ubuntu 10.04 LTS with PHP 5.3.x. This seems to have stopped the problem for me.
In my case, I forgot the following in the code:
);
I played around and left it out in the code here and there; in some places I got heap corruption, in some cases just a plain ol' segfault:
[Wed Jun 08 17:23:21 2011] [notice] child pid 5720 exit signal Segmentation fault (11)
I'm on Mac OS X 10.6.7 and XAMPP.
I've also noticed this error and SIGSEGVs when running old code which uses '&' to explicitly force references, while running it in PHP 5.2+.
Setting
assert.active = 0
in php.ini helped me (it turned off type assertions in the php5UTF8 library, and zend_mm_heap corrupted went away).
For me, the problem was a crashed memcached daemon, as PHP was configured to store session information in memcached. It was eating 100% CPU and acting weird. After restarting memcached, the problem was gone.
Since none of the other answers addressed it: I had this problem in PHP 5.4 when I accidentally ran an infinite loop.