I use Mail-in-a-Box and want to use PHP on my website. According to https://discourse.mailinabox.email/t/is-atleast-being-able-to-toggle-php-planned/288/4, using PHP for both the part of the site that is responsible for mail (Roundcube) and the main site could open a security vulnerability. I want to follow this advice and would, for example, have my server process my main site in a separate PHP process (or something similar).
Note: I don't know very much about PHP.
Here is a snippet of the nginx configuration that Mail-in-a-Box uses to enable PHP on specific parts of the website.
upstream php-fpm {
    server unix:/var/run/php/php7.2-fpm.sock;
}
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    ...
    location ~ /mail/.*\.php {
        # note: ~ has precedence over a regular location block
        include fastcgi_params;
        fastcgi_split_path_info ^/mail(/.*)()$;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME /usr/local/lib/roundcubemail/$fastcgi_script_name;
        fastcgi_pass php-fpm;
        # Outgoing mail also goes through this endpoint, so increase the maximum
        # file upload limit to match the corresponding Postfix limit.
        client_max_body_size 128M;
    }
    ...
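To illustrate the idea of processing the main site in a separate PHP process, here is a rough sketch of a dedicated PHP-FPM pool plus an nginx location that uses it. This is not an official Mail-in-a-Box feature; the pool name, socket path and PHP version are assumptions, and Mail-in-a-Box may regenerate its nginx configuration, so treat it only as a starting point.
; /etc/php/7.2/fpm/pool.d/mainsite.conf -- hypothetical pool just for the main site
[mainsite]
user = www-data
group = www-data
listen = /var/run/php/php7.2-fpm-mainsite.sock
listen.owner = www-data
listen.group = www-data
pm = ondemand
pm.max_children = 5

# nginx: send only the main site's PHP to the dedicated pool,
# while /mail/ keeps using the php-fpm upstream shown above
location ~ \.php$ {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass unix:/var/run/php/php7.2-fpm-mainsite.sock;
}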
Related
I currently get a warning in WordPress saying I am on an insecure version of PHP (7.3.3).
I've been trying to follow the instructions on the following page to update to PHP 8.1.
https://www.cloudbooklet.com/how-to-install-or-upgrade-php-8-1-on-ubuntu-20-04/
I was able to install and enable PHP 8.1 but got stuck on the remaining steps. The article tells me to update a few lines in the location block of a conf file, but I can't find it.
I looked at files like the wordpress_https conf but could only find lines like this:
location ~ \.php(?:$|/) {
    fastcgi_split_path_info ^(.+\.php)(/.+)$;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_param PATH_INFO $fastcgi_path_info;
Any pointers on where I need to update the reference to PHP 8.1? It's an nginx server on Ubuntu 20.04, for a WordPress application installed on Vultr. Thanks.
Try this in your site's nginx config file. Comment out everything in your file, or just back up the file, and try this.
upstream php {
    # keep whichever entry matches your PHP-FPM setup (unix socket or TCP host:port)
    server unix:/tmp/php-cgi.socket;
    server php:9000;
}
server {
    listen 443 ssl http2;
    listen [::]:443 ssl;
    server_name example.test www.example.test;
    ssl_certificate /etc/nginx/ssl/example.test.pem;
    ssl_certificate_key /etc/nginx/ssl/example.test-key.pem;
    root /var/www/html;
    index index.php;
    location / {
        try_files $uri $uri/ /index.php?$args;
    }
    location ~ \.php$ {
        include fastcgi.conf;
        fastcgi_intercept_errors on;
        fastcgi_pass php;
    }
    location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
        expires max;
        log_not_found off;
    }
}
# The only job of this block is to redirect http to https
server {
    listen 80;
    listen [::]:80;
    server_name example.test www.example.test;
    return 301 https://$server_name$request_uri;
}
Depending on your OS, the nginx config file for your website will either be in /etc/nginx/conf.d or in /etc/nginx/sites-available/.
The above configuration was taken from this WordPress docker dev env.
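If you would rather keep your existing config and only switch the PHP version, the line to change is normally the one that hands requests to the PHP-FPM socket, either a fastcgi_pass directive or the upstream block it points to. One way to locate it, and what the change typically looks like (the socket path is an assumption; check what actually exists under /run/php/):
# find the file(s) that reference PHP-FPM
sudo grep -RE "fastcgi_pass|fpm\.sock" /etc/nginx/

# then point nginx at the new socket in that file, e.g.
# fastcgi_pass unix:/run/php/php8.1-fpm.sock;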
After making the edits in the correct conf file, test nginx:
sudo nginx -t
If all is well with the conf file, restart nginx according to your system, e.g.:
sudo service nginx restart
If this works for you, make sure to look into additional security directives you can add to your configuration to improve it. If it does not work for you, you can generate WP-specific Nginx configurations using DigitalOcean.
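As an example of the kind of security additions mentioned above (a generic hardening sketch, not necessarily what the DigitalOcean generator produces), blocking direct access to sensitive files is common:
# deny direct access to wp-config.php and to hidden files (keep /.well-known reachable)
location = /wp-config.php {
    deny all;
}
location ~ /\.(?!well-known) {
    deny all;
}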
So my Docker setup is the following: I have an nginx container that accepts HTTP requests, and I have another container (my custom container) with php-fpm and my application code. The application code is not on the host, only in the web container.
I want to configure nginx as a proxy, to receive requests and route them to php-fpm.
My nginx configuration is the following (I've removed some parts that are not important here):
upstream phpserver {
    server web:9000;
}
server {
    listen 443 ssl http2;
    server_name app;
    root /app/web;
    ssl_certificate /ssl.crt;
    ssl_certificate_key /ssl.key;
    location ~ ^/index\.php(/|$) {
        fastcgi_pass phpserver;
        fastcgi_split_path_info ^(.+\.php)(/.*)$;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
        fastcgi_param DOCUMENT_ROOT $realpath_root;
        fastcgi_read_timeout 160;
        internal;
        http2_push_preload on;
    }
}
And my Docker configuration (again, I've removed some unimportant parts):
nginx:
  ports:
    - 443:443/tcp
    - 80:80/tcp
  image: nginx
  links:
    - web:web
web:
  image: custom_image
  container_name: web
With this configuration I get the following nginx error: "open() "/app/web" failed (2: No such file or directory)", because nginx does not have access to that folder (the folder is in the web container where php-fpm runs).
Is there a way I can configure nginx to route the HTTP requests even if it does not have access to the application code?
I understand that one way to fix this is to mount the application code into the nginx container, but I would like to avoid that if possible. The reason is that in swarm mode this wouldn't work if the two containers don't share a host.
I managed to solve the issue, so I'm posting my own solution below for people with a similar problem.
The solution was to use the 'alias' directive instead of the 'root' directive in the nginx configuration (I've removed some parts that are not important here):
upstream phpserver {
    server web:9000;
}
server {
    listen 443 ssl http2;
    server_name app;
    ssl_certificate /ssl.crt;
    ssl_certificate_key /ssl.key;
    location ~ ^/index\.php(/|$) {
        alias /app/web;
        fastcgi_pass phpserver;
        fastcgi_split_path_info ^(.+\.php)(/.*)$;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param SCRIPT_NAME $fastcgi_script_name;
        internal;
        http2_push_preload on;
    }
}
Now the request is properly routed to the phpserver upstream on port 9000 and handled there by php-fpm, which knows which script to execute from the path built via the 'alias' directive.
The remaining problem was how to serve static files. One solution was to serve them via php-fpm as well, but from what I read online that's not recommended because the overhead would be bigger. So my solution was to share all the static files with the nginx container, so that nginx has access to them and can serve them directly (a sketch of the sharing follows the cache block below). If somebody has a better solution for serving static files in this scenario, please let me know.
# Cache Control for Static Files
location ~* \.(jpg|jpeg|gif|png|css|js|ico|xml)$ {
    #access_log on;
    #log_not_found off;
    expires 360d;
}
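One possible way to do that sharing (a sketch in Compose v2+ syntax; the volume name and paths are assumptions, and in swarm mode across multiple hosts you would need a shared volume driver or assets baked into the nginx image) is a named volume mounted into both containers:
services:
  nginx:
    image: nginx
    volumes:
      - static_files:/app/web/static:ro   # nginx serves these directly
  web:
    image: custom_image
    volumes:
      - static_files:/app/web/static      # the app image populates this path
volumes:
  static_files: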
I am trying to configure nginx to serve PHP from another server.
The files are located in a directory under /sample on the other server.
FastCGI is running on port 9000 on the other server.
Here is what I have tried, which is not working at the moment:
location ~ [^/]\.php(/|$) {
    proxy_pass http://192.168.x.xx;
    proxy_redirect http://192.168.x.xx /sample;
    fastcgi_split_path_info ^(.+?\.php)(/.*)$;
    if (!-f $document_root$fastcgi_script_name) {
        return 404;
    }
    # Mitigate https://httpoxy.org/ vulnerabilities
    fastcgi_param HTTP_PROXY "";
    fastcgi_read_timeout 150;
    fastcgi_buffers 4 256k;
    fastcgi_buffer_size 128k;
    fastcgi_busy_buffers_size 256k;
    fastcgi_pass 127.0.0.1:9000;
    fastcgi_index index.php;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}
I also need to configure nginx to serve static files from the same server
The following configuration does exactly what you need:
server {
    listen 80;
    index index.php index.html;
    server_name localhost;
    error_log /var/log/nginx/error.log;
    access_log /var/log/nginx/access.log;
    root {STATIC-FILES-LOCATION};
    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass {PHP-FPM-SERVER}:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
    }
}
All you have to do is replace {STATIC-FILES-LOCATION} with the location of your static files on the Nginx server and {PHP-FPM-SERVER} with the IP of the PHP-FPM server.
This way you will serve all files without the PHP extension statically from the Nginx server, and all the PHP files will be interpreted by the PHP-FPM server.
Here's a working example of a dockerised version of what you are trying to achieve: https://github.com/mikechernev/dockerised-php/. It serves the static files from Nginx and interprets the PHP files via the PHP-FPM container. In the accompanying blog post (http://geekyplatypus.com/dockerise-your-php-application-with-nginx-and-php7-fpm/) I go into detail about the whole connection between Nginx and PHP-FPM.
EDIT: One important thing to keep in mind is that the paths on both the Nginx and PHP-FPM servers should match. So you will have to put your PHP files in the same directory on the PHP-FPM server as your static files on the Nginx one ({STATIC-FILES-LOCATION}).
An example would be to have /var/www/ on Nginx holding your static files and /var/www on PHP-FPM holding your PHP files.
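A minimal sketch of that layout (service names, image tags and the host directory are assumptions): both containers mount the same code directory at the same path, so the SCRIPT_FILENAME built by Nginx resolves identically inside the PHP-FPM container.
services:
  nginx:
    image: nginx
    ports:
      - "80:80"
    volumes:
      - ./code:/var/www:ro   # static files served straight from here
  php:
    image: php:fpm
    volumes:
      - ./code:/var/www      # same path, so /var/www/... paths exist here too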
Hope this helps :)
You don't have to use proxy_ directives, because they work with the HTTP protocol, whereas in this case the FastCGI protocol is used. Also, as was said in the comments, there is no need for the if statement, because the Nginx server cannot determine whether a file on a remote server exists.
You could try this configuration:
location ~ [^/]\.php(/|$) {
    fastcgi_split_path_info ^(.+?\.php)(/.*)$;
    # Mitigate https://httpoxy.org/ vulnerabilities
    fastcgi_param HTTP_PROXY "";
    fastcgi_read_timeout 150;
    fastcgi_buffers 4 256k;
    fastcgi_buffer_size 128k;
    fastcgi_busy_buffers_size 256k;
    fastcgi_pass 192.168.x.xx:9000; # not 127.0.0.1, because we must send the request to the remote PHP-FPM server
    fastcgi_index index.php;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME /path/to/site/root$fastcgi_script_name;
}
You will need to replace /path/to/site/root with a real path on the PHP-FPM server. For example, if the request http://example.com/some/file.php must be handled by /var/www/some/file.php, then set it like this:
fastcgi_param SCRIPT_FILENAME /var/www$fastcgi_script_name;
Also, to make the PHP-FPM server able to receive requests from outside, edit your FPM pool configuration (on Debian it is usually located in /etc/php5/fpm/pool.d/www.conf, on CentOS in /etc/php-fpm.d/www.conf):
Replace
listen = 127.0.0.1:9000
with:
listen = 9000
or:
listen = 192.168.x.xx:9000 # FPM server IP
You will probably also need to edit the listen.allowed_clients directive:
listen.allowed_clients = 127.0.0.1,192.168.y.yy # Nginx server IP
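After editing the pool configuration, restart PHP-FPM so the new listen address takes effect (the service name depends on your distribution and PHP version):
sudo service php5-fpm restart   # Debian/Ubuntu with PHP 5.x
sudo service php-fpm restart    # CentOS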
I also need to configure nginx to serve static files from the same server
If I understand correctly and you want to serve static files from the server Nginx is running on, then you may just add another location block to your Nginx configuration.
You should not use proxy_* directives. Using Nginx as a proxy would only make sense if a distant server had rendered the page (and you would request it with the HTTP protocol).
Here the thing you want to proxy to is a FastCGI server, not an HTTP server.
So the key is:
fastcgi_pass 127.0.0.1:9000;
This line currently says you want to reach a FastCGI server at IP 127.0.0.1, port 9000, which seems quite wrong.
Use instead:
fastcgi_pass 192.168.x.xx:9000;
And remove proxy_* stuff.
Edit: also, as stated in the comments by @Bart, you should not use an if block that tests whether a local file in the document root matching the PHP script name exists. The PHP files are not on this server, so remove that check.
If you want to apply some security check, you could later change your very generic location ~ [^/]\.php(/|$) to something more specific, like location = /index.php, or some other variation.
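For instance, a tighter variant could look like this (a sketch; the path on the PHP-FPM server is an assumption):
location = /index.php {
    # Mitigate https://httpoxy.org/ vulnerabilities
    fastcgi_param HTTP_PROXY "";
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME /var/www/index.php;  # real path on the PHP-FPM server
    fastcgi_pass 192.168.x.xx:9000;
}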
No need to give the /sample path.
location ~ [^/]\.php(/|$) {
    fastcgi_split_path_info ^(.+?\.php)(/.*)$;
    if (!-f $document_root$fastcgi_script_name) {
        return 404;
    }
    # Mitigate https://httpoxy.org/ vulnerabilities
    fastcgi_param HTTP_PROXY "";
    fastcgi_pass IP:9000;
    fastcgi_index index.php;
    include fastcgi_params;
}
For static files from the nginx server, you need to use try_files:
location / {
    try_files $uri $uri/ /index.php$is_args$args;
}
location ~ \.php$ {
    fastcgi_pass 127.0.0.1:9000;
    fastcgi_index index.php;
    # other FastCGI parameters
}
Make sure you're aware of common pitfalls.
If you want to serve static files from another server, you need to run a web server there and just proxy pass to it from Nginx.
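A hedged sketch of that, assuming the other server also runs a web server on port 80 exposing the same document root:
# hand static assets over to the web server running next to PHP-FPM
location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
    proxy_pass http://192.168.x.xx;
    proxy_set_header Host $host;
}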
Is it possible to run multiple NGINX on a single Dedicated server?
I have a dedicated server with 256 GB of RAM, and I am running multiple PHP scripts on it, but it hangs because of the memory used by PHP.
When I check
free -m
it's not even using 1% of memory.
So I am guessing it has something to do with NGINX.
Can I install multiple NGINX instances on this server and use them like
5.5.5.5:8080, 5.5.5.5:8081, 5.5.5.5:8082
I have already allocated 20 GB of memory to PHP, but it is still not working properly.
Reason: NGINX gives a 504 Gateway Time-out.
Either PHP or NGINX is misconfigured
You may run multiple instances of nginx on the same server provided that some conditions are met, but this is not the solution you should be looking for (and it may not solve your problem at all).
I have my Ubuntu / PHP / Nginx server set up this way (it actually also runs some Node.js servers in parallel). Here is a configuration example which works fine on an AWS EC2 medium instance (m3).
upstream xxx {
    # server unix:/var/run/php5-fpm.sock;
    server 127.0.0.1:9000 max_fails=0 fail_timeout=10s weight=1;
    ip_hash;
    keepalive 512;
}
server {
    listen 80;
    listen 8080;
    listen 443 ssl;
    #listen [::]:80 ipv6only=on;
    server_name xxx.mydomain.io yyy.mydomain.io;
    if ( $http_x_forwarded_proto = 'http' ) {
        return 301 https://$server_name$request_uri;
    }
    root /home/ubuntu/www/xxxroot;
    index index.php;
    location / {
        try_files $uri $uri/ /index.php;
    }
    location ~ ^/(status|ping)$ {
        access_log off;
        allow 127.0.0.1;
        #allow 1.2.3.4; # your IP
        #deny all;
        include fastcgi_params;
        fastcgi_pass 127.0.0.1:9000;
    }
    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass xxx; # the upstream defined above
        #fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_index index.php;
        #fastcgi_param SCRIPT_FILENAME /xxxroot/$fastcgi_script_name;
        fastcgi_param SCRIPT_FILENAME $request_filename;
        #fastcgi_param DOCUMENT_ROOT /home/ubuntu/www/xxxroot;
        # send bad requests to 404
        #fastcgi_intercept_errors on;
        include fastcgi_params;
    }
    location ~ /\.ht {
        deny all;
    }
}
Hope it helps,
I think you are running into a timeout; your PHP scripts seem to run too long.
Check the following (illustrative values are shown after this list):
max_execution_time in your php.ini
request_terminate_timeout in www.conf of your PHP-FPM configuration
fastcgi_read_timeout in the http section or a location section of your nginx configuration
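For example (the values are illustrative only; pick what fits your longest-running scripts):
; php.ini
max_execution_time = 300
; PHP-FPM pool (www.conf)
request_terminate_timeout = 300
# nginx (http or location context)
fastcgi_read_timeout 300;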
Nginx is designed more to be used as a reverse proxy or load balancer than to control application logic and run PHP scripts. Running multiple instances of nginx that each execute PHP isn't really playing to the server application's strengths. As an alternative, I'd recommend using nginx to proxy to one or more Apache instances, which are better suited to executing heavy PHP scripts. http://kbeezie.com/apache-with-nginx/ contains information on getting Apache and nginx to play nicely together.
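A minimal sketch of that setup, assuming Apache (with mod_php) listens locally on port 8080:
# nginx keeps serving static files itself and forwards PHP requests to Apache
location ~ \.php$ {
    proxy_pass http://127.0.0.1:8080;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
}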
I'm trying to set up nginx on my VPS and I managed to do it, but when I try to use .php files it downloads them instead of running them. This is my nginx.conf:
server {
    listen 80;
    server_name site;
    root /var/www/;
    index index.php index.html;
}
Any ideas how to fix it?
(I have php5-fpm installed.)
To pass PHP scripts to FastCGI you must add this to your server config:
location ~ [^/]\.php(/|$) {
    fastcgi_split_path_info ^(.+?\.php)(/.*)$;
    fastcgi_pass 127.0.0.1:9000;
    fastcgi_index index.php;
    include fastcgi_params;
}
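Note that fastcgi_pass has to match how php5-fpm actually listens; on many Debian/Ubuntu installs it listens on a unix socket rather than on 127.0.0.1:9000. Check the listen line in /etc/php5/fpm/pool.d/www.conf:
; /etc/php5/fpm/pool.d/www.conf
listen = /var/run/php5-fpm.sock
; if it looks like this, use "fastcgi_pass unix:/var/run/php5-fpm.sock;" in nginx instead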
You really need to enable php-cgi/FastCGI or php-cli, depending on your server/VPS configuration.
Share more information with us so we can help you (config files, etc.).