Apache2 starting with high amount of memory allocated - php

I have an Ubuntu Linux 16.04.1 server running a WordPress blog. My server hardware is pretty OK for this task, but I found that apache2 is using too much memory.
Right after I reboot the server, the OS shows me this consumption:
2215 www-data 563.27 MB /usr/sbin/apache2 -k start
2216 www-data 563.27 MB /usr/sbin/apache2 -k start
2217 www-data 563.27 MB /usr/sbin/apache2 -k start
That's a pretty high value for a server before any request has arrived. I found the config file that sets the number of spare servers to 3 after startup, and that part is okay.
But the amount of memory each server loads is what confuses me, and I didn't find any configuration to set this value, or a minimum or maximum.
Does anyone know the default value for an Apache/PHP/MySQL stack running WordPress?
Is there any config file I can check to understand why the memory is so high?
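For reference, a sketch of where this is configured: on Ubuntu 16.04 the prefork MPM settings usually live in /etc/apache2/mods-available/mpm_prefork.conf. There is no directive that caps the memory of each child process, only directives for how many children run; the values below are illustrative, not recommendations:
<IfModule mpm_prefork_module>
StartServers 3
MinSpareServers 3
MaxSpareServers 5
MaxRequestWorkers 50
MaxConnectionsPerChild 1000
</IfModule>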

Related

Nginx php-fpm site slow

I have a site on Ubuntu, running nginx with php-fpm.
My problem is that the first byte takes about 15 seconds to arrive.
Below are the output of htop and my www.conf file.
You can enable the slow log for php-fpm to check which part of your code is slow.
Please check https://easyengine.io/tutorials/php/fpm-slow-log/
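For illustration, the pool directives that enable it look roughly like this (the directive names are standard php-fpm options; the path and threshold are example values):
; in your pool file, e.g. www.conf
slowlog = /var/log/php-fpm/www-slow.log
request_slowlog_timeout = 5s
; php-fpm logs a backtrace for any request that runs longer than the timeout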

Nginx + PHP-FPM Randomly Gives 502

My site works fine with Nginx + PHP-FPM, but it randomly gives a 502 error. Environment details:
OS - CentOS 6
Nginx
PHP-FPM (php 5.4)
APC (Code Cache APC 3.1.13 beta)
Memcache (data cache)
In php-fpm
pm.max_children = 200
pm.start_servers = 40
pm.min_spare_servers = 30
pm.max_spare_servers = 50
pm.max_requests = 500
I am also using a TCP connection, not a socket.
If anybody has any input, please let me know.
Thanks
First, reduce pm.max_children = 200 to pm.max_children = 50.
You will first have to increase the system's file limit, allowing nginx and php-fpm to open a larger number of files. The file limit has to be raised because in Linux everything is ultimately a file, so the more connections you open, the more file descriptors you need. On Ubuntu the file limits are configured in /etc/security/limits.conf; you will need to locate the equivalent for CentOS.
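As a sketch, raising the open-file limit there looks like this (the user names and limits are illustrative; use whatever users nginx and php-fpm run as on your system):
# /etc/security/limits.conf
nginx soft nofile 65536
nginx hard nofile 65536
www-data soft nofile 65536
www-data hard nofile 65536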
Then try to increase the local port range available to php-fpm. TCP ports are also generally associated with a timeout before they are reused; reduce this timeout so that more ports are freed once their job is done.
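Both of these are kernel settings; a sketch of the relevant sysctls, with illustrative values:
# /etc/sysctl.conf - widen the ephemeral port range and shorten the FIN timeout
net.ipv4.ip_local_port_range = 1024 65535
net.ipv4.tcp_fin_timeout = 30
Then apply them without a reboot:
$ sudo sysctl -p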
Find detailed info here.
Addition:
If the error still persists, try increasing the number of php-fpm worker processes to 100, although it is not recommended to set the value so high, as the extra workers consume additional memory.
pm.max_children = 100
pm.start_servers = 90
pm.min_spare_servers = 70
pm.max_spare_servers = 100
You can try out various values to find the optimum for your purpose.
The basic reason for a 502 is that nginx cannot forward, or fails to forward, the request to php-fpm. Increasing the number of php-fpm worker processes is one way to address this, since it gives nginx more processes to forward requests to.

How can I limit apache2's resources?

I have a very slow home server (a Raspberry Pi) with a 700 MHz CPU. When I develop sites, it sometimes happens that I feed a large (5 MB) .jpg file into an image resizer (imagecreatefromjpeg(), imagecreatetruecolor(), imagecopy()), which causes the server to hang.
I'd say it's processing the file, but even when I wait for minutes, it never finishes.
The problem is that I cannot even access the shell to stop/restart apache2; the only solution is to power off the server completely.
I was wondering whether there's any way to limit hardware resources for apache2; for example, if I could cap it at 80% CPU usage, maybe I could still access the shell and stop it.
I tried setting the timeout and max_execution_time directives, but they don't seem to stop apache2 from running indefinitely and freezing the server.
Any ideas how to solve this?
I don't think you can limit Apache's CPU usage from its own settings.
You can try using a separate app like cpulimit (see: how-to):
$ sudo apt-get install cpulimit
$ man cpulimit
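As a usage sketch (the flags are standard cpulimit options; the 50% figure is just an example):
$ sudo cpulimit -e apache2 -l 50
This targets a process by executable name (-e) and holds it to roughly 50% of one core (-l), which should leave enough CPU to reach the shell.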
You can also try these to optimize the overall performance of your server.
Edit your /etc/apache2/apache2.conf and use these values:
<IfModule mpm_prefork_module>
StartServers 2
MinSpareServers 2
MaxSpareServers 3
MaxClients 50
MaxRequestsPerChild 0
</IfModule>
Expand your swap: in /etc/dphys-swapfile set:
CONF_SWAPSIZE=512
Then run:
$ sudo dphys-swapfile setup
$ sudo dphys-swapfile swapon
Overclock your Raspberry Pi (it's safer than it sounds):
$ sudo raspi-config
I run mine at 950 MHz. There is a higher option (1000 MHz), but some people on the forums complained about SD card corruption with that one.
You can also set the graphics memory to 16 MB in raspi-config (memory_split) if you do not use the graphical interface.
You can install apache-mod_ratelimit. Also, see Control Apache with httpd.conf.
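Note that mod_ratelimit throttles response bandwidth rather than CPU. For reference, its usage looks like this (the path and rate are example values; the rate is in KiB/s):
<Location "/downloads">
SetOutputFilter RATE_LIMIT
SetEnv rate-limit 400
</Location>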

PHP Redis Error: Uncaught exception ‘RedisException’

I use Redis to build an iOS SNS app (for a RESTful API). As more users use it, errors have started happening.
It throws:
Uncaught exception 'RedisException' with message 'read error on connection'
in /data1/www/htdocs/11/iossns/Model/Core/Redis.php
I don't know how to solve the problem.
Can you help?
Thank you!
What PHP-to-Redis library are you using? Here’s the official list from Redis. What is your webserver? (Apache, nginx, etc) How is PHP running? (CGI, FPM, mod_php, etc)
Here’s a thread for the same exception message in phpredis. It turns out phpredis does not currently support persistent connections with php-fpm. Version 2.2.3 of phpredis has some connection handling changes that might decrease the frequency of your issues.
I recommend checking your Redis connector configuration to…
disable persistent connections
enable connection retries
increase log verbosity
You might also consider adjusting (generally increasing) default_socket_timeout in php.ini.
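A minimal phpredis sketch along those lines, assuming the standard phpredis API (host, port, and timeout values here are placeholders):
<?php
// non-persistent connection with an explicit connect timeout (placeholder values)
$redis = new Redis();
if (!$redis->connect('127.0.0.1', 6379, 2.5)) { // host, port, timeout in seconds
    usleep(100000); // wait 100 ms, then retry once before giving up
    $redis->connect('127.0.0.1', 6379, 2.5);
}
// raise the read timeout so slow replies don't surface as 'read error on connection'
$redis->setOption(Redis::OPT_READ_TIMEOUT, 5.0);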
It's a Redis 6 bug.
You can do this:
sudo nano /etc/systemd/system/redis.service
# ADD THIS TO [Service] SECTION
ExecStartPost=/bin/sh -c "echo $MAINPID > /var/run/redis/redis.pid"
sudo systemctl daemon-reload && sudo systemctl restart redis-server
In my case, on Ubuntu 20.04, I discovered systemd was trying to restart the service because it couldn't find /run/redis/redis-server.pid:
redis-server.service: Failed with result 'timeout'.
Feb 25 17:51:09 artamredis systemd[1]: Failed to start Advanced key-value store.
Feb 25 17:51:09 artamredis systemd[1]: redis-server.service: Scheduled restart job, restart counter is at 18.
Feb 25 17:51:09 artamredis systemd[1]: Stopped Advanced key-value store.
Feb 25 17:51:09 artamredis systemd[1]: Starting Advanced key-value store...
Feb 25 17:51:09 artamredis systemd[1]: redis-server.service: Can't open PID file /run/redis/redis-server.pid (yet?) after start: Operation not permitted
To solve it, in the /etc/redis/redis.conf file find
pidfile /var/run/redis_6379.pid
and change to
pidfile /run/redis/redis-server.pid

Locating memory leak in Apache httpd process, PHP/Doctrine-based application

I have a PHP application using these components:
Apache 2.2.3-31 on CentOS 5.4
PHP 5.2.10
Xdebug 2.0.5 with Remote Debugging enabled
APC 3.0.19
Doctrine ORM for PHP 1.2.1 using Query Caching and Results Caching via APC
MySQL 5.0.77 using Query Caching
I've noticed that when I start up Apache, I eventually end up with 10 child processes. As time goes on, each process grows until it approaches 10% of available memory, which slows the server to a crawl since together they take up 100% of memory.
Here is a snapshot of my top output:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1471 apache 16 0 626m 201m 18m S 0.0 10.2 1:11.02 httpd
1470 apache 16 0 622m 198m 18m S 0.0 10.1 1:14.49 httpd
1469 apache 16 0 619m 197m 18m S 0.0 10.0 1:11.98 httpd
1462 apache 18 0 622m 197m 18m S 0.0 10.0 1:11.27 httpd
1460 apache 15 0 622m 195m 18m S 0.0 10.0 1:12.73 httpd
1459 apache 16 0 618m 191m 18m S 0.0 9.7 1:13.00 httpd
1461 apache 18 0 616m 190m 18m S 0.0 9.7 1:14.09 httpd
1468 apache 18 0 613m 190m 18m S 0.0 9.7 1:12.67 httpd
7919 apache 18 0 116m 75m 15m S 0.0 3.8 0:19.86 httpd
9486 apache 16 0 97.7m 56m 14m S 0.0 2.9 0:13.51 httpd
I have no long-running scripts (they all terminate eventually, the longest taking maybe 2 minutes), and I am working under the assumption that once each script terminates, the memory it used is deallocated. (Maybe someone can correct me on that.)
My hunch is that it could be APC, since it stores data between requests, but at the same time, it seems weird that it would store data inside the httpd process.
How can I track down which part of my app is causing the memory leak?
What tools can I use to see how the memory usage is growing inside the httpd process and what is contributing to it?
My hunch is that it could be APC, since it stores data between requests, but at the same time, it seems weird that it would store data inside the httpd process.
What's weird about that? That's exactly what APC does. The memory is shared between all httpd processes, though, so it's not as bad as it sounds. See Where does APC store its opcode and user variable cache? for details.
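For context, a sketch of the relevant setting: APC's cache lives in one shared-memory segment whose size is set in php.ini, and every httpd child maps that same segment, so per-process numbers in top count it over and over (the directive is real; the size is an example, and this APC version reads the value in megabytes):
; php.ini
apc.shm_size = 64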
