How to debug php script that exits with 137 error? - php

I have a long-running worker that iterates over 5M records using batch processing; I use Laravel's standard chunkById function for this.
As far as I can see, I have not reached 200M of memory usage, which I can see in the output of docker stats:
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
fd05760e5d96 case-place-partners_case-place-partners_app_1 21.71% 140.5MiB / 7.666GiB 1.79% 919MB / 103MB 113MB / 21.4MB 19
Additionally, I have memory_get_usage() and memory_get_usage(true) calls everywhere and I don't see numbers higher than 52428800.
Output of journalctl -k | grep -i -e memory -e oom:
Aug 19 09:28:41 mirokko-i3 kernel: Memory: 8023772K/8259584K available (12291K kernel code, 1319K rwdata, 3900K rodata, 1612K init, 3616K bss, 235812K reserved, 0K cma-reserved)
Aug 19 09:28:41 mirokko-i3 kernel: Freeing SMP alternatives memory: 32K
Aug 19 09:28:41 mirokko-i3 kernel: x86/mm: Memory block size: 128MB
Aug 19 09:28:41 mirokko-i3 kernel: Freeing initrd memory: 9024K
Aug 19 09:28:41 mirokko-i3 kernel: check: Scanning for low memory corruption every 60 seconds
Aug 19 09:28:41 mirokko-i3 kernel: Freeing unused decrypted memory: 2040K
Aug 19 09:28:41 mirokko-i3 kernel: Freeing unused kernel image memory: 1612K
Aug 19 09:28:41 mirokko-i3 kernel: Freeing unused kernel image memory: 2012K
Aug 19 09:28:41 mirokko-i3 kernel: Freeing unused kernel image memory: 196K
Aug 19 09:28:57 mirokko-i3 kernel: [TTM] Zone kernel: Available graphics memory: 4019344 KiB
Aug 19 09:28:57 mirokko-i3 kernel: [TTM] Zone dma32: Available graphics memory: 2097152 KiB
Output of docker inspect container_id located here

It seems Laravel's queue worker has a job timeout. Just run the worker with --timeout=0 to disable this feature, or set your own value.
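Worth noting (an addition, not from the answer above): an exit status of 137 conventionally means the process died from SIGKILL (128 + 9), which is exactly what the kernel's OOM killer, `docker kill`, and Laravel's queue master on a hard timeout all send. A quick sketch of the convention:

```shell
# Exit status 137 = 128 + 9: the process was terminated by SIGKILL (signal 9)
status=0
bash -c 'kill -KILL $$' || status=$?
echo "exit status: $status"   # prints: exit status: 137
```

If the container itself had been OOM-killed, `docker inspect -f '{{.State.OOMKilled}}' <container>` would report `true`; since docker stats shows only ~140MiB in use here, a watchdog-style SIGKILL such as the queue timeout is the more likely culprit.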

Related

Cannot allocate memory: fork: Unable to fork new process on aws

I have been getting this error in my server log file.
[Sun Jan 29 00:22:43.570300 2017] [mpm_prefork:error] [pid 1205] (12)Cannot
allocate memory: AH00159: fork: Unable to fork new process
[Sun Jan 29 00:22:53.742820 2017] [mpm_prefork:error] [pid 1205] (12)Cannot
allocate memory: AH00159: fork: Unable to fork new process
[Sun Jan 29 00:23:03.771702 2017] [core:notice] [pid 1205] AH00051: child
pid 22134 exit signal Aborted (6), possible coredump in /etc/apache2
[Sun Jan 29 00:23:03.876081 2017] [core:notice] [pid 1205] AH00051: child
pid 22135 exit signal Aborted (6), possible coredump in /etc/apache2
[Sun Jan 29 00:23:04.899489 2017] [mpm_prefork:error] [pid 1205] (12)Cannot
allocate memory: AH00159: fork: Unable to fork new process
[Sun Jan 29 00:23:14.931272 2017] [mpm_prefork:error] [pid 1205] (12)Cannot
allocate memory: AH00159: fork: Unable to fork new process
[Sun Jan 29 00:23:24.965639 2017] [mpm_prefork:error] [pid 1205] (12)Cannot
allocate memory: AH00159: fork: Unable to fork new process
[Sun Jan 29 00:23:35.031174 2017] [mpm_prefork:error] [pid 1205] (12)Cannot
allocate memory: AH00159: fork: Unable to fork new process
Please help me solve this issue.
The cannot allocate memory error usually points to an Out Of Memory (OOM) error.
This can happen very often on the smaller EC2 Instances, e.g. if you haven't tuned the maximum memory your apps can request from the operating system.
Your app (in this case Apache) attempts to allocate some memory (which it expects it should be able to request, based on its config) and the OS simply doesn't have enough to give it.
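To confirm the OOM killer is actually firing, check the kernel log; it records a line each time it kills a process (standard diagnostics, not from the original answer):

```shell
# Look for OOM-killer activity in the kernel ring buffer
dmesg | grep -i -E 'out of memory|oom-kill|killed process'

# Or, on systemd hosts, query the kernel journal instead:
journalctl -k | grep -i -E 'out of memory|oom-kill|killed process'
```

If these greps come back empty, the allocation failure is more likely plain memory exhaustion (fork returning ENOMEM) rather than an OOM kill.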
Some common solutions:
More Memory
Upgrade to a larger EC2 instance with, well, simply more memory available. This clearly does not solve the problem's root cause, but, given low enough traffic, it could even make the errors stop appearing altogether.
This is only an option if your budget allows it, of course...
Add Hard-Disk-Backed Memory (Swap)
To be more precise: consider adding swap.
As to how, the following creates a 4GB swap file:
sudo dd if=/dev/zero of=/var/swapfile bs=1M count=4096
sudo chmod 600 /var/swapfile
sudo mkswap /var/swapfile
sudo swapon /var/swapfile
Word of warning: swap, when used inappropriately, can easily lead to nasty situations, such as thrashing.
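To make that swap file survive a reboot, it also needs an /etc/fstab entry (this is a system-configuration sketch assuming the /var/swapfile path used above):

```shell
# Register the swap file in fstab so it is activated at boot
echo '/var/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab

# Verify that the swap is active
swapon --show   # or: free -h
```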
Use Existing Memory Wisely
You'll want to read up on Apache Performance Tuning
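The usual rule of thumb for prefork sizing is MaxRequestWorkers ≈ (RAM available to Apache) / (average resident size per child). A hypothetical Apache 2.4 config sketch (the directive was called MaxClients before 2.4; the numbers are illustrative, not prescriptive):

```apache
<IfModule mpm_prefork_module>
    StartServers             2
    MinSpareServers          2
    MaxSpareServers          5
    MaxRequestWorkers       40    # e.g. ~400 MB free / ~10 MB per child
    MaxConnectionsPerChild 1000   # recycle children to limit leak growth
</IfModule>
```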
Hope this helps!
I was facing the same issue on a WordPress application (PHP v7.4) being served from a Docker container running on an ECS cluster. The following steps helped me resolve the issue:
Increase the Task Memory (MiB) and Task CPU (unit) in the task definition based on your application memory requirements.
Also increase the Hard/Soft memory limits (MiB) in the Container definition if defined earlier.
Update the ECS cluster to point to the new task definition.
Kill the running tasks to force ECS service to start a fresh task as per the new task definition.

php5-fpm, killed by HUP Signal: Main Process failed to respawn

I had a problem on my main PHP server wherein the main php5-fpm process would be killed by a HUP signal. After the main process was killed, it would fail to respawn. Since each child process is only allowed to serve a certain number of requests, the children would eventually die without any new child processes being spawned. This would cause the server to die and my users would receive a 502 response from it. I was initially able to work around this with a cron job that checked the PHP process count and restarted the service if it was less than 5.
Sep 14 11:41:41 ubuntu kernel: [ 3699.092724] init: php5-fpm main process (3592) killed by HUP signal
Sep 14 11:41:41 ubuntu kernel: [ 3699.092740] init: php5-fpm main process ended, respawning
Sep 14 11:41:42 ubuntu kernel: [ 3699.160940] init: php5-fpm main process (3611) terminated with status 78
Sep 14 11:41:42 ubuntu kernel: [ 3699.160954] init: php5-fpm main process ended, respawning
Sep 14 11:41:42 ubuntu kernel: [ 3699.216950] init: php5-fpm main process (3619) terminated with status 78
Sep 14 11:41:42 ubuntu kernel: [ 3699.216966] init: php5-fpm main process ended, respawning
Sep 14 11:41:42 ubuntu kernel: [ 3699.283573] init: php5-fpm main process (3627) terminated with status 78
Sep 14 11:41:42 ubuntu kernel: [ 3699.283590] init: php5-fpm main process ended, respawning
Sep 14 11:41:42 ubuntu kernel: [ 3699.337563] init: php5-fpm main process (3635) terminated with status 78
Sep 14 11:41:42 ubuntu kernel: [ 3699.337579] init: php5-fpm main process ended, respawning
Sep 14 11:41:42 ubuntu kernel: [ 3699.385293] init: php5-fpm main process (3643) terminated with status 78
Sep 14 11:41:42 ubuntu kernel: [ 3699.385305] init: php5-fpm main process ended, respawning
Sep 14 11:41:42 ubuntu kernel: [ 3699.430903] init: php5-fpm main process (3651) terminated with status 78
Sep 14 11:41:42 ubuntu kernel: [ 3699.430913] init: php5-fpm main process ended, respawning
Sep 14 11:41:42 ubuntu kernel: [ 3699.482790] init: php5-fpm main process (3659) terminated with status 78
Sep 14 11:41:42 ubuntu kernel: [ 3699.482800] init: php5-fpm main process ended, respawning
Sep 14 11:41:42 ubuntu kernel: [ 3699.532239] init: php5-fpm main process (3667) terminated with status 78
Sep 14 11:41:42 ubuntu kernel: [ 3699.532249] init: php5-fpm main process ended, respawning
Sep 14 11:41:42 ubuntu kernel: [ 3699.595810] init: php5-fpm main process (3675) terminated with status 78
Sep 14 11:41:42 ubuntu kernel: [ 3699.595825] init: php5-fpm main process ended, respawning
Sep 14 11:41:42 ubuntu kernel: [ 3699.648253] init: php5-fpm main process (3683) terminated with status 78
Sep 14 11:41:42 ubuntu kernel: [ 3699.648265] init: php5-fpm respawning too fast, stopped
My upstart script config
# php5-fpm - The PHP FastCGI Process Manager
description "The PHP FastCGI Process Manager"
author "Ondřej Surý <ondrej@debian.org>"
start on runlevel [2345]
stop on runlevel [016]
# Precise upstart does not support reload signal, and thus rejects the
# job. We'd rather start the daemon, instead of forcing users to
# reboot https://bugs.launchpad.net/ubuntu/+source/php5/+bug/1272788
#
#reload signal USR2
pre-start exec /usr/lib/php5/php5-fpm-checkconf
respawn
exec /usr/sbin/php5-fpm --nodaemonize --fpm-config /etc/php5/fpm/php-fpm.conf
After searching the internet, I was finally able to find a solution by modifying the upstart script of php5-fpm in /etc/init/php5-fpm.conf:
# php5-fpm - The PHP FastCGI Process Manager
description "The PHP FastCGI Process Manager"
author "Ondřej Surý <ondrej@debian.org>"
start on runlevel [2345]
stop on runlevel [016]
# Precise upstart does not support reload signal, and thus rejects the
# job. We'd rather start the daemon, instead of forcing users to
# reboot https://bugs.launchpad.net/ubuntu/+source/php5/+bug/1272788
#
#reload signal USR2
pre-start exec /usr/lib/php5/php5-fpm-checkconf
pre-start exec /bin/bash /etc/init/php5-fpm.sh
post-start exec /bin/bash /etc/init/php5-fpm-onstart.sh
respawn
exec /usr/sbin/php5-fpm --nodaemonize --fpm-config /etc/php5/fpm/php-fpm.conf
So I added additional pre-start and post-start scripts in php5-fpm.conf. The pre-start script is:
#!/bin/bash
# Remove stale pid and socket files left behind by the killed main process
# (-f so the script doesn't fail when they don't exist)
rm -f /var/run/php5-fpm.pid
rm -f /var/run/php5-fpm.sock
# Record the pids of any leftover php child processes so they can be
# killed once the new php5-fpm master has started
CHILD_PIDS_FILE="/var/run/php5-fpm-child.pid"
CHILD_PIDS=$(ps -ef | grep 'php' | grep -v grep | awk '{print $2}')
echo "$CHILD_PIDS" > "$CHILD_PIDS_FILE"
The script deletes the stale main-process pid and sock files, then writes the pids of the child processes to a file so that they can be killed once a new php5-fpm process is created.
The post-start script is
#!/bin/bash
CHILD_PIDS_FILE="/var/run/php5-fpm-child.pid"
# Kill every child process recorded by the pre-start script
# (ignore errors from pids that have already exited)
while read -r PID; do
    kill -9 "$PID" 2>/dev/null
done < "$CHILD_PIDS_FILE"
# Truncate the pid file
> "$CHILD_PIDS_FILE"
The post-start script kills all the child processes that were running before php5-fpm restarted.

installed sphinx extension for mediawiki, everything seems working well, but just no search result

I installed Sphinx 2.2.10 on CentOS 7; my MediaWiki version is 1.22. Following the installation instructions on the SphinxSearch extension homepage, I ran the indexer command in step 3 successfully. But there is no search command installed on my server, so I can't test Sphinx out. There doesn't seem to be any Sphinx installation directory on CentOS; there are just a searchd and an indexer binary in the system bin directory. Luckily, I can start the searchd daemon with service searchd start, and it says it's listening on port 9312. I also installed the extension and copied the PHP API correctly, but I can't get any result from a search. A PHP warning does show on the page though, saying some methods are deprecated and shouldn't be used. Below are some lines from my searchd.log file:
[Mon Mar  7 23:16:56.467 2016] [ 1737] shutdown complete
[Mon Mar  7 23:41:41.001 2016] [24720] watchdog: main process 24721 forked ok
[Mon Mar  7 23:41:41.006 2016] [24721] listening on 127.0.0.1:9312
[Mon Mar  7 23:41:41.379 2016] [24721] binlog: replaying log /var/data/binlog.001
[Mon Mar  7 23:41:41.380 2016] [24721] binlog: replay stats: 0 rows in 0 commits; 0 updates, 0 reconfigure; 0 indexes
[Mon Mar  7 23:41:41.380 2016] [24721] binlog: finished replaying /var/data/binlog.001; 0.0 MB in 0.000 sec
[Mon Mar  7 23:41:41.380 2016] [24721] binlog: replaying log /var/data/binlog.001
[Mon Mar  7 23:41:41.380 2016] [24721] binlog: replay stats: 0 rows in 0 commits; 0 updates, 0 reconfigure; 0 indexes
[Mon Mar  7 23:41:41.380 2016] [24721] binlog: finished replaying /var/data/binlog.001; 0.0 MB in 0.000 sec
[Mon Mar  7 23:41:41.380 2016] [24721] binlog: finished replaying total 2 in 0.000 sec
[Mon Mar  7 23:41:41.381 2016] [24721] accepting connections
Now I have no idea how to solve this problem and any suggestion or help will be appreciated.
It turned out I was not getting any results because my wiki is not written in English, and the default configuration file of the MediaWiki SphinxSearch extension has no charset table for other languages, so the result set is empty.
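Concretely, the fix is to add a charset_table for your language to the index definition in sphinx.conf. A hedged example, adapted from the Cyrillic table in the Sphinx documentation (the index name is hypothetical; adjust the Unicode ranges to your wiki's language):

```ini
index wiki_main
{
    # ... source, path, etc. as generated by the extension ...
    # Fold ASCII and Cyrillic letters to lowercase so non-English
    # terms are actually indexed:
    charset_table = 0..9, A..Z->a..z, _, a..z, \
        U+410..U+42F->U+430..U+44F, U+430..U+44F, U+401->U+451, U+451
}
```

After changing the charset_table, re-run the indexer so existing documents are tokenized with the new rules.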

NFS hang from Vagrant guest to OSX

I have a Vagrant guest I'm using to run a Symfony 2 application locally for development. In general this is working fine, however, I am regularly finding the processes lock in the 'D+' state (waiting for I/O).
eg. I try to run my unit tests:
./bin/phpunit -c app
The task launches, but then never exits. In the process list I see:
vagrant 3279 0.5 4.9 378440 101132 pts/0 D+ 02:43 0:03 php ./bin/phpunit -c app
The task is unkillable. I need to power cycle the Vagrant guest to get it back again. This seems to happen mostly with PHP command line apps (but it's also the main command line tasks I do, so it might not be relevant).
The syslog reports a hung task:
Aug 20 03:04:40 precise64 kernel: [ 6240.210396] INFO: task php:3279 blocked for more than 120 seconds.
Aug 20 03:04:40 precise64 kernel: [ 6240.211920] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Aug 20 03:04:40 precise64 kernel: [ 6240.212843] php D 0000000000000000 0 3279 3091 0x00000004
Aug 20 03:04:40 precise64 kernel: [ 6240.212846] ffff88007aa13c98 0000000000000082 ffff88007aa13c38 ffffffff810830df
Aug 20 03:04:40 precise64 kernel: [ 6240.212849] ffff88007aa13fd8 ffff88007aa13fd8 ffff88007aa13fd8 0000000000013780
Aug 20 03:04:40 precise64 kernel: [ 6240.212851] ffff88007aa9c4d0 ffff880079e596f0 ffff88007aa13c78 ffff88007fc14040
Aug 20 03:04:40 precise64 kernel: [ 6240.212853] Call Trace:
Aug 20 03:04:40 precise64 kernel: [ 6240.212859] [<ffffffff810830df>] ? queue_work+0x1f/0x30
Aug 20 03:04:40 precise64 kernel: [ 6240.212863] [<ffffffff811170e0>] ? __lock_page+0x70/0x70
Aug 20 03:04:40 precise64 kernel: [ 6240.212866] [<ffffffff8165a55f>] schedule+0x3f/0x60
Aug 20 03:04:40 precise64 kernel: [ 6240.212867] [<ffffffff8165a60f>] io_schedule+0x8f/0xd0
Aug 20 03:04:40 precise64 kernel: [ 6240.212869] [<ffffffff811170ee>] sleep_on_page+0xe/0x20
Aug 20 03:04:40 precise64 kernel: [ 6240.212871] [<ffffffff8165ae2f>] __wait_on_bit+0x5f/0x90
Aug 20 03:04:40 precise64 kernel: [ 6240.212873] [<ffffffff81117258>] wait_on_page_bit+0x78/0x80
Aug 20 03:04:40 precise64 kernel: [ 6240.212875] [<ffffffff8108af00>] ? autoremove_wake_function+0x40/0x40
Aug 20 03:04:40 precise64 kernel: [ 6240.212877] [<ffffffff8111736c>] filemap_fdatawait_range+0x10c/0x1a0
Aug 20 03:04:40 precise64 kernel: [ 6240.212882] [<ffffffff81122a01>] ? do_writepages+0x21/0x40
Aug 20 03:04:40 precise64 kernel: [ 6240.212884] [<ffffffff81118da8>] filemap_write_and_wait_range+0x68/0x80
Aug 20 03:04:40 precise64 kernel: [ 6240.212892] [<ffffffffa01269fe>] nfs_file_fsync+0x5e/0x130 [nfs]
Aug 20 03:04:40 precise64 kernel: [ 6240.212896] [<ffffffff811a632b>] vfs_fsync+0x2b/0x40
Aug 20 03:04:40 precise64 kernel: [ 6240.212900] [<ffffffffa01272c3>] nfs_file_flush+0x53/0x80 [nfs]
Aug 20 03:04:40 precise64 kernel: [ 6240.212903] [<ffffffff81175d6f>] filp_close+0x3f/0x90
Aug 20 03:04:40 precise64 kernel: [ 6240.212905] [<ffffffff81175e72>] sys_close+0xb2/0x120
Aug 20 03:04:40 precise64 kernel: [ 6240.212907] [<ffffffff81664a82>] system_call_fastpath+0x16/0x1b
To provision the box, I'm sharing a local folder using:
config.vm.synced_folder "/my/local/path.dev", "/var/www", :nfs => true
Vagrant creates the following /etc/exports file on the OSX host:
# VAGRANT-BEGIN: c7d0c56a-a126-46f5-a293-605bf554bc9a
"/Users/djdrey-local/Sites/oddswop.dev" 192.168.33.101 -mapall=501:20
# VAGRANT-END: c7d0c56a-a126-46f5-a293-605bf554bc9a
Output of nfsstat on the vagrant guest
Server rpc stats:
calls badcalls badclnt badauth xdrcall
0 0 0 0 0
Client rpc stats:
calls retrans authrefrsh
87751 0 87751
Client nfs v3:
null getattr setattr lookup access readlink
0 0% 35018 39% 1110 1% 8756 9% 19086 21% 0 0%
read write create mkdir symlink mknod
5100 5% 7059 8% 4603 5% 192 0% 0 0% 0 0%
remove rmdir rename link readdir readdirplus
4962 5% 262 0% 313 0% 0 0% 0 0% 1056 1%
fsstat fsinfo pathconf commit
1 0% 2 0% 1 0% 229 0%
I've ensured the Guest Additions are up to date on the guest using the plugin: vagrant-vbguest
I'm not sure how to go about debugging this. It's pretty clear to me this is an NFS issue between the guest and the Mac OSX host. If I try to turn up the debug logging for NFS on OSX using NFS Manager, I get a kernel panic in OSX.
Has anyone else had a similar issue? Any suggestions on a way forward would be appreciated - as power cycling the guest several times per day is unworkable.
Environment
OSX 10.8.4
Vagrant 1.2.7
Virtualbox 4.2.16
Vagrant guest O/S: Ubuntu 12.04.2 LTS (GNU/Linux 3.2.0-23-generic x86_64) [precise64.box]
I had a similar problem when running npm install within a shared NFS folder, and subsequently found that disabling nfs_udp fixed the hanging issues:
config.vm.synced_folder ".", "/vagrant", type: "nfs", nfs_udp: false
You don't give enough detail on the specific configuration (the exports file, the fstab file, firewall config, etc.) for a specific answer. Here are some ideas though:
In the fstab try adding the "hard,intr" flags to the mount options -- this makes it possible to kill processes waiting for I/O on a dead mount.
Also make sure your firewall is open for rpc calls and the rpc-statd service is running.
Also figure out what version of nfs you're running and that you have the correct TCP/UDP ports open. If NFS v4 isn't working out, maybe try NFS v3.
Finally, are you connecting via IP address or hostname? Hostname is great, but make sure it always resolves correctly, perhaps via your /etc/hosts file. Alternatively, hard-code the IP addresses so there is no chance of name resolution failing.
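In a Vagrant setup like the one in the question, the suggested mount options can be set straight from the Vagrantfile rather than in fstab; this is a sketch using the share path from the question (mount_options is passed through to the NFS mount):

```ruby
# Vagrantfile: force TCP and hard,intr mounts for the NFS share
config.vm.synced_folder "/my/local/path.dev", "/var/www",
  :nfs => true,
  :mount_options => ["rw", "tcp", "hard", "intr"]
```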

PHP web server in PHP?

i.e. to replace Apache with a PHP application that sends back HTML files when HTTP requests for .php files come in?
How practical is this?
It's already been done, but if you want to know how practical it is, I suggest you install it and benchmark with ApacheBench to see the results:
http://nanoweb.si.kz/
Edit, A benchmark from the site:
Server Software: aEGiS_nanoweb/2.0.1-dev
Server Hostname: si.kz
Server Port: 80
Document Path: /six.gif
Document Length: 28352 bytes
Concurrency Level: 20
Time taken for tests: 3.123 seconds
Complete requests: 500
Failed requests: 0
Broken pipe errors: 0
Keep-Alive requests: 497
Total transferred: 14496686 bytes
HTML transferred: 14337322 bytes
Requests per second: 160.10 [#/sec] (mean)
Time per request: 124.92 [ms] (mean)
Time per request: 6.25 [ms] (mean, across all concurrent requests)
Transfer rate: 4641.91 [Kbytes/sec] received
Connnection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 1.9 0 13
Processing: 18 100 276.4 40 2739
Waiting: 1 97 276.9 39 2739
Total: 18 100 277.8 40 2750
Percentage of the requests served within a certain time (ms)
50% 40
66% 49
75% 59
80% 69
90% 146
95% 245
98% 449
99% 1915
100% 2750 (last request)
Apart from Nanoweb, there is also a standard PEAR component to build standalone applications with a built-in webserver:
http://pear.php.net/package/HTTP_Server
Likewise the upcoming PHP 5.4 release is likely to include an internal mini webserver which facilitates simple file serving. https://wiki.php.net/rfc/builtinwebserver
php -S localhost:8000
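The built-in server can also route every request through a single script, which is handy for emulating mod_rewrite-style front controllers. A minimal sketch (router.php is a hypothetical filename; start the server with php -S localhost:8000 router.php):

```php
<?php
// router.php - hypothetical router script for the built-in server.
// Returning false tells the server to deliver the requested static
// file as-is; everything else goes to the front controller.
if (preg_match('/\.(?:png|jpe?g|gif|css|js|ico)$/', $_SERVER["REQUEST_URI"])) {
    return false;
}
require __DIR__ . '/index.php';
```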
Why reinvent the wheel? Apache or any other web server has had a lot of work put into it by a lot of skilled people to be stable and to do everything you wanted it to do.
Just FYI, PHP 5.4 has just been released with a built-in webserver. Now you can run a local server with very simple commands like:
$ cd ~/public_html
$ php -S localhost:8000
And you'll see the requests and responses like this -
PHP 5.4.0 Development Server started at Thu Jul 21 10:43:28 2011
Listening on localhost:8000
Document root is /home/me/public_html
Press Ctrl-C to quit.
[Thu Jul 21 10:48:48 2011] ::1:39144 GET /favicon.ico - Request read
[Thu Jul 21 10:48:50 2011] ::1:39146 GET / - Request read
[Thu Jul 21 10:48:50 2011] ::1:39147 GET /favicon.ico - Request read
[Thu Jul 21 10:48:52 2011] ::1:39148 GET /myscript.html - Request read
[Thu Jul 21 10:48:52 2011] ::1:39149 GET /favicon.ico - Request read
