How can I limit apache2's resources? - php

I have a very slow home server (raspberry pi) with 700 MHz CPU. When I develop some sites, it sometimes happens that I input a large (5 MB) .jpg file into an image resizer (imagecreatefromjpeg(), imagecreatetruecolor(), imagecopy()) which causes the server to hang.
I'd say it's processing the file, but even when I wait for minutes, it never ends.
The problem is that I cannot even access the shell to stop/restart apache2; the only solution is to power off the server completely.
I was wondering whether there's any way to limit hardware resources for apache2; for example, if I could cap it at 80% CPU usage, maybe I could still access the shell and stop it.
I tried setting the timeout and max_execution_time directives, but they don't seem to stop apache2 from running indefinitely and freezing the server.
Any ideas how to solve this?

I don't think you can limit the CPU usage of Apache from its own settings.
You can try a separate tool like cpulimit (see: how-to):
$ sudo apt-get install cpulimit
$ man cpulimit
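A rough sketch of how it could be used here (the 80% figure is just an illustration, and cpulimit only tracks the processes it is attached to, so children spawned later are not covered):
# cap every current apache2 process at ~80% of one core
for pid in $(pgrep apache2); do
    sudo cpulimit -p "$pid" -l 80 &
done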
You can also try these to optimize the overall performance of your server.
Edit your /etc/apache2/apache2.conf and use these values:
<IfModule mpm_prefork_module>
StartServers 2
MinSpareServers 2
MaxSpareServers 3
MaxClients 50
MaxRequestsPerChild 0
</IfModule>
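After saving the file, restart Apache so the new prefork limits take effect (service name as used on Raspbian/Debian):
$ sudo service apache2 restart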
Expand your swap: in /etc/dphys-swapfile, set:
CONF_SWAPSIZE=512
Then run:
$ sudo dphys-swapfile setup
$ sudo dphys-swapfile swapon
Overclock your Raspberry Pi (it's safer than it sounds):
$ sudo raspi-config
I run mine at 950 MHz. There is a higher option (1000 MHz), but some people on the forum complained about SD card corruption with that one.
You can also set the GPU memory to 16 MB via the memory_split option in raspi-config if you do not use the graphical interface.
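For reference, raspi-config writes these settings to /boot/config.txt; a minimal sketch of the equivalent entries (the menu's overclock presets also adjust core/SDRAM frequency and voltage, so the menu is the safer route):
# /boot/config.txt - reboot afterwards
arm_freq=950   # CPU clock in MHz (the 1000 MHz preset reportedly corrupted SD cards for some)
gpu_mem=16     # minimal GPU memory split for a headless server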

You can install apache-mod_ratelimit. Also, see Control Apache with httpd.conf.
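Note that mod_ratelimit limits response bandwidth per connection rather than CPU. A minimal sketch, assuming a Debian-style setup (the /downloads path and the 400 KiB/s figure are just placeholders); enable it with a2enmod ratelimit, then add:
<Location "/downloads">
    SetOutputFilter RATE_LIMIT
    SetEnv rate-limit 400
</Location>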

Related

Nginx php-fpm site slow

I have a site on Ubuntu, running Nginx with php-fpm.
My problem is that the first byte takes about 15 seconds to be served.
Below are the output of htop and my www.conf file.
You can enable the slow log for php-fpm to check which part of your code is slow.
See https://easyengine.io/tutorials/php/fpm-slow-log/
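A rough sketch of the relevant pool settings (the pool file path, log path and 5-second threshold vary by distro and are only examples):
; in your php-fpm pool config (www.conf)
slowlog = /var/log/php-fpm/www-slow.log
request_slowlog_timeout = 5s
Then restart the php-fpm service and reproduce a slow request; the log will contain a stack trace of where each slow request was spending its time.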

How to check whether the apache & php-fpm config is appropriate (not too high or too low)

I will be hosting an event with 3k users on an app (PHP-based).
I launched several instances in the cloud and installed LAMP on them (to run load tests and choose one for the event).
On Ubuntu 18,
I enabled mpm_event and php7.4-fpm (which seems to be the best configuration for high traffic with Apache and a PHP app).
I used this post, which explains how to tune your conf,
like this:
Here is the apache2 mpm_event conf:
<IfModule mpm_*_module>
ServerLimit (Total RAM - Memory used for Linux, DB, etc.) / process size
StartServers (Number of Cores)
MinSpareThreads 25
MaxSpareThreads 75
ThreadLimit 64
ThreadsPerChild 25
MaxRequestWorkers (Total RAM - Memory used for Linux, DB, etc.) / process size
MaxConnectionsPerChild 1000
</IfModule>
Here is the php7.4-fpm conf:
pm = dynamic
pm.max_children (total RAM - (DB etc) / process size)
pm.start_servers (cpu cores * 4)
pm.min_spare_servers (cpu cores * 2)
pm.max_spare_servers (cpu cores * 4)
pm.max_requests 1000
My goal is: even if I rely on this method, I would like to see metrics like:
--> You have too many unused threads (from Apache workers or from php-fpm) open
--> All your threads (from Apache workers or from php-fpm) are already busy and in use
I have already tried htop, glances, vmstat and sar to check IO, CPU and RAM, but even with those it's not clear to me:
Is my configuration good for this machine with this load, or should I increase/decrease something?
Then I could be sure the configuration is good and move on to other subjects: CDN, caching, ...
How do you manage this?
Thanks in advance,
As you noted, this depends on your script(s). We have this dynamically adjusted in our deploy scripts based on the server(s) being rolled up.
The following script is based on running Apache on CentOS on AWS infrastructure, but it could easily be adapted to what you are using.
Basically:
set the size of apache processes
set the size of the php process
the script gets available memory and cores, does some crunching, and then modifies the config
we run this as part of stack roll up
Primary Source / Based on:
https://medium.com/@sbuckpesch/apache2-and-php-fpm-performance-optimization-step-by-step-guide-1bfecf161534
Steps:
Calculate process size
You need to know how many processes can run on your machine, so calculating the process size of your main CPU/memory consumers is necessary.
cd /tmp
curl https://raw.githubusercontent.com/pixelb/ps_mem/master/ps_mem.py --output ps_mem.py
chmod a+x ps_mem.py
sudo python ps_mem.py
# Sample numbers:
# 28.4 MiB + 103.0 KiB = 28.5 MiB memcached
# 34.7 MiB + 9.5 KiB = 34.7 MiB amazon-cloudwatch-agent
# 24.8 MiB + 18.0 MiB = 42.8 MiB httpd (15)
# 69.1 MiB + 7.0 MiB = 76.0 MiB php (2)
# 228.2 MiB + 46.0 MiB = 274.3 MiB php-fpm (36)
Here you can see that there are 15 httpd processes consuming a total of ~43 MiB, so each Apache process is using roughly 3 MiB of RAM.
Each php-fpm process uses about 7.6 MiB.
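In other words, using the sample numbers above (a rough estimate, since ps_mem already folds shared pages into the totals):
awk 'BEGIN { printf "httpd:   %.1f MiB/process\n", 42.8/15
             printf "php-fpm: %.1f MiB/process\n", 274.3/36 }'
# httpd:   2.9 MiB/process
# php-fpm: 7.6 MiB/process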
Calculate Apache MaxRequestWorkers
To be safe though, reserve 15% of memory for all other processes (in my case ~1.2GiB) and round up apache process size to 3MiB.
MaxRequestWorkers = (Total RAM - Memory used for Linux, DB, etc.) / process size
MaxRequestWorkers = (8000MB - 1200MB) / 3MB = 2,266
Calculate php-fpm max-children
To be safe though, reserve 1 GiB for all other processes and round up php process size to 8MiB.
max_children = (Total RAM - Memory used for Linux, DB, etc.) / process size
max_children = (8000MB - 1200MB) / 8MB = 850
Here is the script we use, on roll up.
#!/bin/bash
# Creates a configuration script to run once final servers are up.
PROCESS_SIZE_APACHE_MB=3
PROCESS_SIZE_PHP_MB=8
# Get some values from the server
MEMORY_KB=$(grep MemTotal /proc/meminfo | awk '{print $2}')
MEMORY_MB=$(($MEMORY_KB / 1024))
# roughly 85% of total memory: keep ~15% back for the OS, DB, etc.
MEMORY_AVAILABLE_MB=$(($MEMORY_KB / 1178))
NUM_CORES=$(nproc --all)
echo "Memory: $MEMORY_MB MB"
echo "Memory Available: $MEMORY_AVAILABLE_MB MB"
echo "Num Cores $NUM_CORES"
#Now do some calculations
SERVER_LIMIT=$(($MEMORY_AVAILABLE_MB / $PROCESS_SIZE_APACHE_MB))
echo "HTTP MPM Server Limit: $SERVER_LIMIT"
#Convert Apache from mpm_prefork to mpm_event
#Set params
#<IfModule mpm_*_module>
# ServerLimit (Total RAM - Memory used for Linux, DB, etc.) / process size
# StartServers (Number of Cores)
# MinSpareThreads 25
# MaxSpareThreads 75
# ThreadLimit 64
# ThreadsPerChild 25
# MaxRequestWorkers (Total RAM - Memory used for Linux, DB, etc.) / process size
# MaxConnectionsPerChild 1000
# </IfModule>
# /etc/httpd/conf.modules.d/00-mpm.conf
echo "
# LoadModule mpm_prefork_module modules/mod_mpm_prefork.so
# LoadModule mpm_worker_module modules/mod_mpm_worker.so
LoadModule mpm_event_module modules/mod_mpm_event.so
<IfModule mpm_*_module>
ServerLimit $SERVER_LIMIT
StartServers $NUM_CORES
MinSpareThreads 25
MaxSpareThreads 75
ThreadLimit 64
ThreadsPerChild 25
MaxRequestWorkers $SERVER_LIMIT
MaxConnectionsPerChild 1000
</IfModule>
" > /etc/httpd/conf.modules.d/00-mpm.conf
# Configure the workers
# pm = dynamic
# pm.max_children (total RAM - (DB etc) / process size) = 850
# pm.start_servers (cpu cores * 4)
# pm.min_spare_servers (cpu cores * 2)
# pm.max_spare_servers (cpu cores * 4)
# pm.max_requests 1000
MAX_CHILDREN=$(($MEMORY_AVAILABLE_MB / $PROCESS_SIZE_PHP_MB))
echo "Max Children: $MAX_CHILDREN"
NUM_START_SERVERS=$(($NUM_CORES * 4))
NUM_MIN_SPARE_SERVERS=$(($NUM_CORES * 2))
NUM_MAX_SPARE_SERVERS=$(($NUM_CORES * 4))
sed -i "s/^;*pm.max_children.*/pm.max_children = $MAX_CHILDREN/" /etc/php-fpm.d/www.conf
sed -i "s/^;*pm.start_servers.*/pm.start_servers = $NUM_START_SERVERS/" /etc/php-fpm.d/www.conf
sed -i "s/^;*pm.min_spare_servers.*/pm.min_spare_servers = $NUM_MIN_SPARE_SERVERS/" /etc/php-fpm.d/www.conf
sed -i "s/^;*pm.max_spare_servers.*/pm.max_spare_servers = $NUM_MAX_SPARE_SERVERS/" /etc/php-fpm.d/www.conf
sed -i "s/^;*pm.max_requests.*/pm.max_requests = 1000/" /etc/php-fpm.d/www.conf
No tool will give you that kind of metric, because the best configuration depends greatly on your PHP scripts. If you have 4 cores and each request consumes 100% of one core for 1 second, the server will handle 4 requests per second in the best case, regardless of your mpm and PHP configuration. The type of hardware you have is also important; some CPUs perform several times better than others.
Since you are using php-fpm, the Apache mpm configuration will have little effect on performance. You just need to make sure the server doesn't crash with too many requests and that you have more Apache threads than PHP processes. Note that RAM is not the only thing that can make a server unreachable: trying to execute more processes than the CPU can handle will increase the load and the number of context switches, decrease CPU cache efficiency and result in even lower performance.
The ideal number of PHP processes depends on how your scripts use CPU and other resources. If each script spends 50% of its time on I/O operations, for example, 2 processes per core may be ideal, assuming those I/O operations can run in parallel without blocking each other.
You'll also need to take into account the amount of resources used by other processes such as the DB. SQL databases can easily use more resources than the php scripts themselves.
Spare Servers and Spare Threads are the number of processes/threads that can sit idle waiting for work. Creating threads takes time, so it's better to have them ready when a request arrives. The downside is that those threads consume resources such as RAM even when idle, so you want to keep just enough of them alive. Both Apache and php-fpm handle this automatically: the number of idle threads is reduced and increased as needed, but stays between the minimum and maximum values set in the configuration. Note that not all Apache threads will serve PHP files, as some requests may be fetching static files, so you should always have more Apache threads than PHP processes.
Start Servers and Start Threads are just the number of processes/threads created at startup. They have almost no effect on performance, since the number of threads is immediately increased or reduced to fit the Spare Threads values.
MaxConnectionsPerChild and max_requests are just the maximum number of requests served during a process/thread's lifetime. Unless you have memory leaks, you won't need to tune those values.

Uploading data to Apache server keeps failing if the upload lasts more than 20 sec

I have a website where users can upload images to my hosting Apache/PHP server.
If the file upload takes less than 20 seconds, everything is fine.
But if it takes longer (no matter what the image file size is), the upload fails.
In .htaccess I already have:
php_value upload_max_filesize 10M
php_value post_max_size 70M
php_value max_execution_time 180
php_value max_input_time 180
And in the PHP script, the returned result of:
echo "-max_execution_time ".ini_get('max_execution_time');
echo "-max_input_time ".ini_get('max_input_time');
echo "-upload_max_filesize ".ini_get('upload_max_filesize');
echo "-post_max_size ".ini_get('post_max_size');
echo "-memory_limit ".ini_get('memory_limit');
is as expected:
-max_execution_time 180
-max_input_time 180
-upload_max_filesize 10M
-post_max_size 70M
-memory_limit 128M
These are the requests - all failed after 22 seconds with the error net::ERR_SPDY_PROTOCOL_ERROR (in Firefox they fail after 20 seconds):
https://cdn1.imggmi.com/uploads/2019/5/8/fadd31a1a22674cfc3cc4603c97762ff-full.jpg
What am I missing here?
Once again, if the upload takes less than 20 seconds, everything is fine...
20 seconds might mean that the Apache module mod_reqtimeout is what's killing your request, since that is one of its defaults (20 seconds to receive the request body) if its options are not configured.
https://httpd.apache.org/docs/trunk/mod/mod_reqtimeout.html
I believe that mod_reqtimeout is a module loaded automatically with Apache 2.4.
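If mod_reqtimeout is the culprit, you can relax its body timeout instead of disabling the module entirely; a sketch for a Debian/Ubuntu layout (the path and the 120-second value are just illustrations):
sudo tee /etc/apache2/mods-available/reqtimeout.conf >/dev/null <<'EOF'
<IfModule reqtimeout_module>
    # default is roughly header=20-40,MinRate=500 body=20,MinRate=500;
    # give slow uploads up to 120 s for the body, extended while data keeps arriving
    RequestReadTimeout header=20-40,MinRate=500 body=120,MinRate=500
</IfModule>
EOF
sudo service apache2 reload
Note that RequestReadTimeout only works in server or virtual-host config, not in .htaccess, so on shared hosting the provider would have to change it.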
I also encountered a similar problem in my apache2.4-rails-passenger-aws-ec2 project.
When I upload a large file (over 500 MB), it times out in about 20 seconds.
I tried changing Apache's Timeout-related configuration, but it didn't get better.
I switched my application server from apache-passenger to thin, and then uploading was no problem.
Since this question was posted recently, I thought that a recent release of Apache had a problem.
In the end, I downgraded Apache by executing the following commands, and it is working properly.
$ sudo yum list httpd24
Installed packages
httpd24.x86_64 2.4.39-1.87.amzn1
$ sudo yum erase httpd24 httpd24-tools httpd24-devel
$ sudo yum install httpd24-2.4.38-1.86.amzn1 httpd24-tools-2.4.38-1.86.amzn1 httpd24-devel-2.4.38-1.86.amzn1
After I read this question's answers, I re-upgraded Apache to 2.4.39 and commented out the following line to disable mod_reqtimeout; that also works.
# LoadModule reqtimeout_module modules/mod_reqtimeout.so
I think the cause of the problem is a change in Apache between 2.4.38 and 2.4.39.
Maybe this problem stems from the "Timeout" directive in the Apache httpd.conf file.
See here: https://httpd.apache.org/docs/2.4/mod/core.html#timeout
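To rule that out quickly (paths differ by distro, and 300 is only an example value; the compiled-in default in 2.4 is 60 seconds):
# see whether Timeout is set explicitly anywhere in the Apache config
grep -Rni '^\s*timeout' /etc/apache2/ /etc/httpd/ 2>/dev/null
# if needed, set e.g. "Timeout 300" in the main config and reload Apache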
It was a server problem due to shared hosting and I had to contact hosting support... eventually they did some server reconfiguration and now the upload does not break so early... thanks to @m908070
cheers
Turning off this module helped:
LoadModule reqtimeout_module modules/mod_reqtimeout.so

Apache2 starting with high amount of memory allocated

I have an Ubuntu Linux 16.04.1 server running a WordPress blog. My server hardware is pretty OK for this task, but I found that apache2 is using too much memory.
Right after I reboot the server, the OS shows me this consumption:
2215 www-data 563.27 MB /usr/sbin/apache2 -k start
2216 www-data 563.27 MB /usr/sbin/apache2 -k start
2217 www-data 563.27 MB /usr/sbin/apache2 -k start
That's a pretty high value for a server before any request. I found the config file setting the number of spare servers to 3 after startup, and that is okay,
but the amount of memory each server loads is what's confusing me, and I didn't find any configuration to set this value or a min or max.
Does anyone know the default values for an Apache/PHP/MySQL setup running WordPress?
Are there any config files I can check to understand why memory usage is so high?

Linux allowed open files limit differs between CGI and CLI

I'm having problems with my open files limits when running my script through the command line vs. through Apache.
My Apache server is running under the user "apache" and it has its limits overridden in the /etc/security/limits.conf file as follows:
apache soft nofile 10240
apache hard nofile 40960
I've got a simple script that I use for testing:
<?php
system('whoami');
system('ulimit -a | grep open');
When I hit this script through my browser I get:
apache
open files (-n) 1024
But when I run it on the command line as the apache user, I get:
[reza#app pdf]$ sudo -u apache php script.php
apache
open files (-n) 10240
Can someone explain to me what would cause this discrepancy?
(not sure if it's a ServerFault question, feel free to move it there)
Cheers
When you run from the command line, the limits for the apache user apply. When you hit the web server, the limits from the environment that launched the web server apply. You probably want to add ulimit statements to the script that launched the apache web server.
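For example, if Apache is started by systemd, the limits.conf entries are bypassed entirely and the limit has to be raised on the unit instead; a minimal sketch (the unit name httpd.service is an assumption, it may be apache2.service on your distro):
# create a drop-in that raises the open-files limit for the Apache unit
sudo mkdir -p /etc/systemd/system/httpd.service.d
sudo tee /etc/systemd/system/httpd.service.d/limits.conf >/dev/null <<'EOF'
[Service]
LimitNOFILE=10240
EOF
sudo systemctl daemon-reload
sudo systemctl restart httpd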
