I have a site on Ubuntu running nginx with PHP-FPM.
My problem is that the time to first byte is about 15 seconds.
Below are my htop output and www.conf file.
You can enable the slow log for PHP-FPM to check which part of your code is slow.
Please check https://easyengine.io/tutorials/php/fpm-slow-log/
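A minimal sketch of the relevant pool settings (on Ubuntu the pool config usually lives somewhere like /etc/php/7.0/fpm/pool.d/www.conf; the exact path depends on your PHP version):

slowlog = /var/log/php-fpm/www-slow.log
; dump a stack trace for any request running longer than 5 seconds
request_slowlog_timeout = 5s

Reload PHP-FPM afterwards, reproduce the slow request, and check the log for the function it was stuck in.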
Are they queued and processed once PHP-FPM is able to handle them, or does PHP-FPM lock up in some way?
What I experience is that once the limit is reached, all new requests just hang indefinitely, or until I restart Apache/PHP-FPM.
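For context, this behaviour is governed by pool settings like the following (the values here are hypothetical): once all pm.max_children workers are busy, new connections wait in the socket's listen queue rather than being rejected, so requests appear to hang until a worker frees up or the queue overflows.

pm = dynamic
pm.max_children = 10   ; hard cap on concurrent workers
listen.backlog = 511   ; further connections queue here while all workers are busy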
I have an Ubuntu Linux 16.04.1 server running a WordPress blog. My server hardware is perfectly adequate for this task, but I found that apache2 is using too much memory.
Right after rebooting the server, the OS shows this consumption:
2215 www-data 563.27 MB /usr/sbin/apache2 -k start
2216 www-data 563.27 MB /usr/sbin/apache2 -k start
2217 www-data 563.27 MB /usr/sbin/apache2 -k start
That seems like a lot for a server that hasn't received any requests yet. I found the config file that sets the number of spare servers to 3 after startup, and that part is fine,
but the amount of memory each server uses is what confuses me, and I didn't find any configuration option to set this value, or a minimum or maximum.
Does anyone know the typical memory footprint for an Apache/PHP/MySQL stack running WordPress?
Are there any config files I can check to understand why memory usage is so high?
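One thing to check first: per-process figures like the ones above usually count shared memory (libraries, opcode cache) once for every process, so the real total is much lower than the sum. A quick sketch of how to compare, assuming you can install the smem package:

# RSS counts shared pages in every process, so summing it overstates usage:
ps -C apache2 -o pid,rss,cmd --sort=-rss
# PSS divides shared pages fairly across the processes sharing them:
sudo apt-get install smem
smem -P apache2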
I'm having problems with my open files limit when running my script through the command line vs. through Apache.
My Apache server runs under the user "apache", and its limits are overridden in the /etc/security/limits.conf file as follows:
apache soft nofile 10240
apache hard nofile 40960
I've got a simple script that I use for testing:
<?php
system('whoami');                 // which user PHP runs as
system('ulimit -a | grep open');  // the effective open-files limit
When I hit this script through my browser I get:
apache
open files (-n) 1024
But when I run it from the command line as the apache user, I get:
[reza@app pdf]$ sudo -u apache php script.php
apache
open files (-n) 10240
Can someone explain to me what would cause this discrepancy?
(not sure if it's a ServerFault question, feel free to move it there)
Cheers
When you run from the command line, the limits for the apache user apply, because /etc/security/limits.conf is enforced by PAM when a session starts (and sudo goes through PAM). A daemon started at boot never passes through PAM, so the web server keeps the default limit (1024) inherited from the environment that launched it. You probably want to add ulimit statements to the script that launches the Apache web server.
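A sketch of where to raise it, assuming a Red Hat-style layout (the "apache" user suggests one; paths and the service name may differ on your system):

# sysvinit: add this to the file sourced by the httpd init script,
# e.g. /etc/sysconfig/httpd:
ulimit -n 10240

# systemd: set it in a drop-in unit instead, e.g.
# /etc/systemd/system/httpd.service.d/limits.conf containing:
#   [Service]
#   LimitNOFILE=10240
sudo systemctl daemon-reload
sudo systemctl restart httpd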
I have a very slow home server (a Raspberry Pi) with a 700 MHz CPU. When I develop sites, it sometimes happens that I feed a large (5 MB) .jpg file into an image resizer (imagecreatefromjpeg(), imagecreatetruecolor(), imagecopy()), which causes the server to hang.
I'd say it's processing the file, but even when I wait for minutes, it never finishes.
The problem is that I cannot even access the shell to stop/restart apache2; the only solution is to power the server off completely.
I was wondering whether there's any way to limit hardware resources for apache2; for example, if I could cap its CPU usage at 80%, maybe I could still access the shell and stop it.
I tried setting Apache's Timeout and PHP's max_execution_time directives, but they don't seem to stop apache2 from working indefinitely and freezing the server.
Any ideas how to solve this?
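For what it's worth, GD decodes a JPEG into an uncompressed bitmap, so a 5 MB file measuring several thousand pixels per side can need well over 50 MB of RAM per copy, which on a Pi means heavy swapping. A minimal guard sketch (the file name and memory budget are hypothetical):

<?php
// estimate the decoded size before imagecreatefromjpeg(), and refuse
// images that won't fit: GD expands the JPEG to a raw bitmap in RAM
$file = 'upload.jpg';                    // hypothetical input path
list($width, $height) = getimagesize($file);
$needed = $width * $height * 5;          // roughly 5 bytes per pixel in GD
$budget = 64 * 1024 * 1024;              // assumed 64 MB budget
if ($needed > $budget) {
    exit("Image too large to resize safely on this hardware\n");
}
$src = imagecreatefromjpeg($file);       // now safe to decode
// ... imagecreatetruecolor(), imagecopy() as before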
I don't think you can limit Apache's CPU usage from its own settings.
You can try a separate tool like cpulimit (see: how-to):
$ sudo apt-get install cpulimit
$ man cpulimit
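For example, to cap apache2 at 80% of one core (the value is illustrative; cpulimit throttles a single matched process, so with prefork's several children you may need one instance per PID):
$ sudo cpulimit -e apache2 -l 80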
You can also try the following to optimize the overall performance of your server.
Edit your /etc/apache2/apache2.conf and use these values:
<IfModule mpm_prefork_module>
    # keep the worker pool small so Apache can't exhaust the Pi's RAM
    StartServers          2
    MinSpareServers       2
    MaxSpareServers       3
    # at most 50 simultaneous requests (MaxRequestWorkers in Apache 2.4)
    MaxClients            50
    # 0 = never recycle workers; set a positive value if leaks are a concern
    MaxRequestsPerChild   0
</IfModule>
Expand your swap: in /etc/dphys-swapfile, set:
CONF_SWAPSIZE=512
Then run:
$ sudo dphys-swapfile setup
$ sudo dphys-swapfile swapon
Overclock your Raspberry Pi (it's safer than it sounds) via:
$ sudo raspi-config
I run mine at 950 MHz. There is a higher option (1000 MHz), but some people on the forum complained about SD card corruption with that one.
You can also set the graphics memory to 16 MB in raspi-config (Memory Split) if you do not use the graphical interface.
You can install apache-mod_ratelimit, though note that mod_ratelimit limits response bandwidth rather than CPU. Also, see Control Apache with httpd.conf.
When restarting the php-fpm service on my Linux system, the PHP CGI process takes a while to shut down completely. Until it does, trying to start a new PHP CGI instance fails because port 9000 is still held by the terminating process. Accessing the site during this time results in a 502 Bad Gateway error, which I'd like to avoid.
How can I restart php-fpm smoothly without getting this error?
Run two instances of php-fpm and describe both in one upstream block.
upstream fast_cgi {
    server localhost:9000;
    server localhost:9001 backup;
}
Change nginx.conf to use fastcgi_pass fast_cgi;.
After that, if you restart one instance, nginx will route requests through the second php-fpm instance.
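For example, a minimal location block referencing the upstream (the params shown are the usual boilerplate; adjust to your setup):

location ~ \.php$ {
    include       fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass  fast_cgi;   # the upstream block defined above
}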