I am setting up my first Linode (I'm fairly new to managing everything myself), and I have the following problem: the browser downloads the PHP file instead of executing it, and MS Internet Explorer shows the file's contents instead of downloading it.
I've read through a lot of content/answers about this problem, but nothing seems to work, so I'd appreciate your help.
It's important to note that the website "crashes" only when I add the following block to the virtual host file:
location ~* .(ico|jpg|webp|jpeg|gif|css|png|js|ico|bmp|zip|woff)$ {
expires 365d;
}
Here are the two files in full
NGINX.CONF
user www-data;
worker_processes 4;
pid /run/nginx.pid;
events {
worker_connections 768;
multi_accept on;
}
http {
##
# Basic Settings
##
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 10s;
types_hash_max_size 2048;
# server_tokens off;
server_names_hash_bucket_size 64;
# server_name_in_redirect off;
include /etc/nginx/mime.types;
default_type application/octet-stream;
##
# SSL Settings
##
# ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
# ssl_prefer_server_ciphers on;
##
# Logging Settings
##
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log debug;
##
# Gzip Settings
##
gzip on;
gzip_disable "msie6";
gzip_vary on;
gzip_proxied any;
gzip_comp_level 6;
gzip_buffers 16 8k;
gzip_http_version 1.1;
gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
##
# Virtual Host Configs
##
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
and the sites-available/default file
server {
listen 80 default_server;
listen [::]:80 default_server;
root /www/bloggingwithdani.com;
index index.html index.php index.htm;
server_name localhost;
# pagespeed On;
# pagespeed FileCachePath "/var/cache/ngx_pagespeed/";
# pagespeed EnableFilters combine_css,combine_javascript;
location / {
try_files $uri $uri/ /index.php?$args;
}
error_page 404 /404.html;
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
location ~* .(ico|jpg|webp|jpeg|gif|png|ico|bmp|zip|woff|css|js|)$ {
expires 365d;
}
location ~ /\. {
deny all;
}
location ~* /(?:uploads|files)/.*\.php$ {
deny all;
}
location ~ [^/]\.php(/|$) {
try_files $uri =404;
fastcgi_split_path_info ^(.+\.php)(/.+)$;
if (!-f $document_root$fastcgi_script_name) {
return 404;
}
fastcgi_pass unix:/var/run/php5-fpm.sock;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
}
}
Your PHP location block looks wrong to me. Here's my location block for PHP:
location ~ \.(php)$ {
try_files $uri =404;
location ~ \..*/.*\.php$ {return 404;}
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_keep_conn on;
fastcgi_pass unix:/var/run/php5-fpm.sock;
# fastcgi_pass 127.0.0.1:9000; # or pass over TCP instead of the Unix socket
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
}
Also, your static file caching block is wrong: it has an erroneous wildcard alternative. The trailing | creates an empty alternative, so the pattern matches every request, including .php URIs, and because that regex location is defined before your PHP location, PHP files get served as static downloads. Remove the last |, escape the dot so it matches a literal ".", and optionally add some extra configuration options to further optimize delivery of static content:
location ~* \.(ico|jpg|webp|jpeg|gif|png|bmp|zip|woff|css|js)$ {
expires max;
add_header Vary Accept-Encoding;
access_log off;
}
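After applying these changes it's worth validating and reloading Nginx, then checking that a PHP URL comes back with an HTML Content-Type instead of being offered as a download. A minimal check from the server itself, assuming Nginx runs as a systemd service and the site answers on localhost (use "sudo service nginx reload" on non-systemd setups):
sudo nginx -t && sudo systemctl reload nginx
curl -I http://localhost/index.php
# expect "Content-Type: text/html"; if you still see application/octet-stream (or raw PHP
# source in the body), the request is being caught by the static-file location, not fastcgi_pass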
Related
My CSS and JS files are showing up as 404 / File Not Found. It's the "Laravel 404" page that is shown, not the "Nginx 404" page, which makes me think it could be a Laravel issue, but I'm not sure. The rest of my site and the Laravel app in the sub-directory are working fine.
I have Nginx serving a regular PHP web site (PHP-FPM) from the default root at /
I also have Nginx serving a Laravel app from /todos/
But the assets under /todos/ (the Laravel app) are all showing up as 404. Their file system locations are /todos/public/css/ and /todos/public/js/, respectively.
I'm guessing this is an Nginx issue, but I'm not sure. It might be a Laravel issue. Do I need to set a Route in /routes/web.php for css and js files in Laravel?
This is a pretty vanilla Bitnami Ubuntu install.
Here are my Nginx config files:
Contents of nginx.conf:
user daemon daemon;
worker_processes auto;
error_log "/opt/bitnami/nginx/logs/error.log";
pid "/opt/bitnami/nginx/logs/nginx.pid";
events {
worker_connections 1024;
}
http {
include mime.types;
default_type application/octet-stream;
client_body_temp_path "/opt/bitnami/nginx/tmp/client_body" 1 2;
proxy_temp_path "/opt/bitnami/nginx/tmp/proxy" 1 2;
fastcgi_temp_path "/opt/bitnami/nginx/tmp/fastcgi" 1 2;
scgi_temp_path "/opt/bitnami/nginx/tmp/scgi" 1 2;
uwsgi_temp_path "/opt/bitnami/nginx/tmp/uwsgi" 1 2;
access_log "/opt/bitnami/nginx/logs/access.log";
sendfile on;
keepalive_timeout 65;
gzip on;
gzip_http_version 1.1;
gzip_comp_level 2;
gzip_proxied any;
gzip_vary on;
gzip_types text/plain
text/xml
text/css
text/javascript
application/json
application/javascript
application/x-javascript
application/ecmascript
application/xml
application/rss+xml
application/atom+xml
application/rdf+xml
application/xml+rss
application/xhtml+xml
application/x-font-ttf
application/x-font-opentype
application/vnd.ms-fontobject
image/svg+xml
image/x-icon
application/atom_xml;
gzip_buffers 16 8k;
add_header X-Frame-Options SAMEORIGIN;
ssl_prefer_server_ciphers on;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:RSA+AESGCM:RSA+AES:!aNULL:!MD5:!DSS;
include "/opt/bitnami/nginx/conf/bitnami/bitnami.conf";
Contents of bitnami.conf:
# HTTP server
server {
listen 80;
listen [::]:80 default_server ipv6only=on;
server_name localhost;
return 301 https://$host$request_uri;
location / {
root /opt/bitnami/nginx/html;
index index.php index.html index.htm;
try_files $uri $uri/ /index.php?$query_string;
}
## Begin - Security
# deny all direct access for these folders
location ~* /(\.git|cache|bin|logs|backup|tests)/.*$ { return 403; }
# deny running scripts inside core system folders
location ~* /(system|vendor)/.*\.(txt|xml|md|html|yaml|yml|php|pl|py|cgi|twig|sh|bat)$ { return 403; }
# deny running scripts inside user folder
location ~* /user/.*\.(txt|md|yaml|yml|php|pl|py|cgi|twig|sh|bat)$ { return 403; }
# deny access to specific files in the root folder
location ~ /(LICENSE\.txt|composer\.lock|composer\.json|nginx\.conf|web\.config|htaccess\.txt|\.htaccess) { return 403; }
## End - Security
include "/opt/bitnami/nginx/conf/bitnami/phpfastcgi.conf";
include "/opt/bitnami/nginx/conf/bitnami/bitnami-apps-prefix.conf";
}
# HTTPS server
server {
listen 443 ssl http2;
listen [::]:443 default ipv6only=on;
server_name localhost;
ssl_certificate server.crt;
ssl_certificate_key server.key;
ssl_session_cache shared:SSL:1m;
ssl_session_timeout 5m;
ssl_ciphers HIGH:!aNULL:!MD5;
ssl_prefer_server_ciphers on;
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
location / {
root /opt/bitnami/nginx/html;
index index.php index.html index.htm;
try_files $uri $uri/ /index.php?$query_string;
}
location /todos {
try_files $uri $uri/ /todos/index.php?$query_string;
index index.php index.html index.htm;
root /opt/bitnami/nginx/html/todos/public/;
location ~ \.php$ {
fastcgi_index index.php;
fastcgi_read_timeout 300;
fastcgi_pass unix:/opt/bitnami/php/var/run/www.sock;
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_param SCRIPT_FILENAME /opt/bitnami/nginx/html/todos/public/index.php;
fastcgi_param QUERY_STRING $query_string;
include fastcgi_params;
}
}
## Begin - Security
# deny all direct access for these folders
location ~* /(\.git|cache|bin|logs|backup|tests)/.*$ { return 403; }
# deny running scripts inside core system folders
location ~* /(system|vendor)/.*\.(txt|xml|md|html|yaml|yml|php|pl|py|cgi|twig|sh|bat)$ { return 403; }
# deny running scripts inside user folder
location ~* /user/.*\.(txt|md|yaml|yml|php|pl|py|cgi|twig|sh|bat)$ { return 403; }
# deny access to specific files in the root folder
location ~ /(LICENSE\.txt|composer\.lock|composer\.json|nginx\.conf|web\.config|htaccess\.txt|\.htaccess) { return 403; }
## End - Security
include "/opt/bitnami/nginx/conf/bitnami/phpfastcgi.conf";
include "/opt/bitnami/nginx/conf/bitnami/bitnami-apps-prefix.conf";
}
include "/opt/bitnami/nginx/conf/bitnami/bitnami-apps-vhosts.conf";
Contents of /opt/bitnami/nginx/conf/bitnami/phpfastcgi.conf:
location ~ \.php$ {
root html;
fastcgi_read_timeout 300;
fastcgi_pass unix:/opt/bitnami/php/var/run/www.sock;
fastcgi_index index.php;
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_param SCRIPT_FILENAME $document_root/$fastcgi_script_name;
include fastcgi_params;
}
Well, I ended up adding the following location blocks and it worked; it turns out alias was the trick:
location /todos/css/ {
alias /opt/bitnami/nginx/html/todos/public/css/;
}
location /todos/js/ {
alias /opt/bitnami/nginx/html/todos/public/js/;
}
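The reason alias works here while root would not is how Nginx builds the file path: with root, the full request URI is appended to the root directory, whereas with alias, the part of the URI that matched the location prefix is replaced by the alias path. A minimal sketch of the difference using this setup's paths (app.css is just a placeholder file name; the comments show the file Nginx would look for):
location /todos/css/ {
    root /opt/bitnami/nginx/html/todos/public;
    # /todos/css/app.css -> /opt/bitnami/nginx/html/todos/public/todos/css/app.css (missing)
}
location /todos/css/ {
    alias /opt/bitnami/nginx/html/todos/public/css/;
    # /todos/css/app.css -> /opt/bitnami/nginx/html/todos/public/css/app.css (exists)
}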
PHP scripts on my Nginx / php7.2-fpm setup only work with the default config and the IP address, not with domain names or subdomains.
My Configs:
Default Config
server {
listen 80 default_server;
listen [::]:80 default_server;
root /var/www/html;
index index.php index.html index.htm index.nginx-debian.html;
server_name _;
location / {
try_files $uri $uri/ =404;
}
location ~ \.php$ {
include snippets/fastcgi-php.conf;
fastcgi_pass unix:/var/run/php/php7.2-fpm.sock;
}
}
My Domain Config:
server {
listen 80;
listen [::]:80;
root /home/fluke667/html/web.mydomain.com/web;
index index.php index.html index.htm index.nginx-debian.html;
server_name web.mydomain.com;
location / {
try_files $uri $uri/ =404;
}
}
Nginx.conf:
user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;
events {
worker_connections 768;
# multi_accept on;
}
http {
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
# server_tokens off;
# server_names_hash_bucket_size 64;
# server_name_in_redirect off;
include /etc/nginx/mime.types;
#default_type application/octet-stream;
default_type text/html;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
ssl_prefer_server_ciphers on;
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
gzip on;
gzip_vary on;
gzip_proxied any;
gzip_comp_level 6;
gzip_buffers 16 8k;
gzip_http_version 1.1;
gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
Without the subdomain config, accessing the server by IP, it works from /var/www/html.
I don't know why. Can anyone help me?
Using the WordPress Codex Nginx configuration as an example:
location / {
try_files $uri $uri/ /index.php?$args;
}
location ~ \.php$ {
include snippets/fastcgi-php.conf;
fastcgi_pass unix:/var/run/php/php7.2-fpm.sock;
}
This means that for any request in this server block, the first location block will usually run. try_files first looks for a static file matching the URI, then for a directory with that path, and finally rewrites the request to index.php. Once the request is rewritten to index.php, the second location block (location ~ \.php$) takes over and passes every request ending in .php to the php7.2-fpm socket.
You currently have two server blocks: one in the default config and one in your domain config. You have server_name web.mydomain.com; in your custom conf, which means any request for that host is answered by that server block. Any other host, including requests by IP, is handled by the default config. Your domain server block has no location ~ \.php$ block at all, so PHP requests for that host are never passed to PHP-FPM; add the same PHP location to it, as sketched below.
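A minimal sketch of the domain server block with a PHP handler added, reusing the fastcgi snippet and socket from the default config quoted in the question (the try_files fallback below is the WordPress-Codex style; keep =404 instead if the site doesn't route everything through index.php):
server {
    listen 80;
    listen [::]:80;
    root /home/fluke667/html/web.mydomain.com/web;
    index index.php index.html index.htm index.nginx-debian.html;
    server_name web.mydomain.com;
    location / {
        # fall back to index.php for pretty URLs
        try_files $uri $uri/ /index.php?$args;
    }
    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/var/run/php/php7.2-fpm.sock;
    }
}
After editing, run nginx -t and reload Nginx for the change to take effect.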
My server has roughly this directory tree:
www
|- index.php
|- foo
   |- index.php
   |- bar.php
I can access /index.php, but accessing /, /foo, or anything else results in a 403. I've tried various configs, but none of them works.
My nginx.conf file:
worker_processes 1;
events {
worker_connections 1024;
}
http {
include mime.types;
default_type application/octet-stream;
sendfile on;
keepalive_timeout 65;
gzip on;
gzip_disable "msie6";
gzip_vary on;
gzip_proxied any;
gzip_comp_level 6;
gzip_buffers 16 8k;
gzip_http_version 1.1;
gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
server {
listen 80;
server_name localhost;
root /var/www/html;
location / {
root /var/www/html;
#try_files $uri /index.php$is_args$args;
}
location ~ \.php$ {
fastcgi_pass web_fpm:9000;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME /var/www/html$fastcgi_script_name;
include fastcgi_params;
}
}
}
If I uncomment the commented line, any request serves /index.php, so it's no good.
I've found the solution: setting the index. Here is the snippet:
server {
listen 80;
server_name localhost;
root /var/www/html;
autoindex on;
location / {
index index.php index.html;
}
location ~ \.php$ {
fastcgi_pass web_fpm:9000;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME /var/www/html$fastcgi_script_name;
include fastcgi_params;
}
}
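For context on why it was a 403 in the first place: without an index entry that matches an existing file, a request for a directory such as / or /foo/ has nothing to resolve to, and since directory listings are disabled by default, Nginx refuses the request (the error log shows something like "directory index of ... is forbidden"). Declaring index index.php index.html (plus autoindex on here) gives those requests a target. A quick way to confirm the fix, assuming the server answers on port 80 of localhost and using the paths from the question's directory tree:
curl -i http://localhost/        # should now run /index.php
curl -i http://localhost/foo/    # should run /foo/index.php instead of returning 403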
I have a basic Symfony2 app. When I try to reach the host, I get a download instead of the execution of the PHP code.
This is the vhost configuration:
server {
listen 80;
server_name sandobox.dev www.sandbox.dev;
root /var/www/sandbox/web;
rewrite ^/app\.php/?(.*)$ /$1 permanent;
try_files $uri @rewriteapp;
location @rewriteapp {
rewrite ^(.*)$ /app.php/$1 last;
}
# Deny all . files
location ~ /\. {
deny all;
}
location ~ ^/(app|app_dev)\.php(/|$) {
fastcgi_split_path_info ^(.+\.php)(/.*)$;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_index app.php;
send_timeout 1800;
fastcgi_read_timeout 1800;
#fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock;
fastcgi_pass 127.0.0.1:9001;
}
# Statics
location /(bundles|media) {
access_log off;
expires 30d;
# Font files
#if ($filename ~* ^.*?\.(eot)|(ttf)|(woff)$){
# add_header Access-Control-Allow-Origin *;
#}
try_files $uri @rewriteapp;
}
}
This is my nginx.conf file
user nginx;
worker_processes 4;
worker_cpu_affinity 01 10;
worker_rlimit_nofile 1024;
error_log /var/log/nginx/error.log;
pid /var/run/nginx.pid;
events {
# max_clients = worker_processes * worker_connections
worker_connections 1024;
}
# Global environment variables
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
sendfile on;
tcp_nopush off;
keepalive_timeout 65;
keepalive_requests 100;
send_timeout 60;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
# Logging of trivial open() errors resulting in 404 to error log
log_not_found off;
# Send back the version in the error pages?
server_tokens off;
# Files to serve when directories are requested
index index.php index.htm index.htm;
# Automatic directory indexing
autoindex off;
# Defaults to on, but it's a problem for trailing slash redirections
server_name_in_redirect off;
# Increase the maximum length of a virtual host entry
server_names_hash_bucket_size 64;
# Compression
gzip on;
gzip_disable "msie6";
gzip_min_length 0;
gzip_types text/plain;
# Include separately managed configuration file(s)
include /etc/nginx/conf.d/*.conf;
}
Thanks in advance for any hint
I've searched the entire web, but apparently no one has published the configuration I'm looking for.
I'm currently testing on a VM what I'd like to be my server config. The three apps I'll install are ruTorrent (a web interface for rtorrent), ownCloud, and Plex; two of these are configured with Nginx, but somehow my configuration doesn't work. I created two virtual servers, one named rutorrent and the other owncloud; my idea is to access them at serverip/rutorrent and serverip/owncloud, keeping the two separate. I'm on Ubuntu 14.04, my rutorrent and owncloud folders are in /var/www, and my PHP version is 5.5.9-1.
The current problem is that the rutorrent config works if it's the only one enabled, but not if the owncloud one is enabled too; moreover, owncloud alone doesn't work at all. With a stock ownCloud config from their manual, ownCloud works but rutorrent returns a "file not found" page.
Here are my server files from /etc/nginx/sites-available, which I have linked into the sites-enabled directory:
upstream php-handler {
#server 127.0.0.1:9000;
server unix:/var/run/php5-fpm.sock;
}
server {
listen 80;
server_name 192.168.61.128;
return 301 https://$server_name$request_uri; # enforce https
}
server {
listen 443;
server_name 192.168.61.128;
ssl on;
ssl_certificate /srv/ssl/nginx.crt;
ssl_certificate_key /srv/ssl/nginx.key;
# Path to the root of your installation
root /var/www;
client_max_body_size 10G; # set max upload size
fastcgi_buffers 64 4K;
index index.php;
error_page 403 /core/templates/403.php;
error_page 404 /core/templates/404.php;
location = /robots.txt {
allow all;
log_not_found off;
access_log off;
}
location /owncloud/ {
alias /var/www/owncloud/;
location ~ ^/owncloud/(?:\.htaccess|data|config|db_structure\.xml|README) {
deny all;
}
rewrite ^/owncloud/caldav(.*)$ /owncloud/remote.php/caldav$1 redirect;
rewrite ^/owncloud/carddav(.*)$ /owncloud/remote.php/carddav$1 redirect;
rewrite ^/owncloud/webdav(.*)$ /owncloud/remote.php/webdav$1 redirect;
rewrite ^/owncloud/.well-known/host-meta /owncloud/public.php?service=host-meta last;
rewrite ^/owncloud/.well-known/host-meta.json /owncloud/public.php?service=host-meta-json last;
rewrite ^/owncloud/.well-known/carddav /owncloud/remote.php/carddav/ redirect;
rewrite ^/owncloud/.well-known/caldav /owncloud/remote.php/caldav/ redirect;
rewrite ^/owncloud/apps/([^/]*)/(.*\.(css|php))$ /owncloud/index.php?app=$1&getfile=$2 last;
rewrite ^(/owncloud/core/doc/[^\/]+/)$ $1/index.html;
try_files $uri $uri/ index.php;
location ~ ^/owncloud/(.+?\.php)(?:$|/) {
fastcgi_split_path_info ^(.+\.php)(/.+)$;
set $path_info $fastcgi_path_info;
try_files $fastcgi_script_name =404;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
#fastcgi_param PATH_INFO $fastcgi_path_info;
fastcgi_param PATH_INFO $path_info;
fastcgi_param HTTPS on;
fastcgi_param SCRIPT_NAME /owncloud/$fastcgi_script_name;
fastcgi_pass php-handler;
}
}
# Optional: set long EXPIRES header on static assets
location ~* \.(?:jpg|jpeg|gif|bmp|ico|png|css|js|swf)$ {
expires 30d;
# Optional: Don't log access to assets
access_log off;
}
}
It's as close as possible to the official ownCloud configuration, but I get a 404 error when I load the page.
The rutorrent config is as follows; it has both a plain HTTP and an SSL server, because I tried changing things on the plain one without touching the SSL one that works:
server {
listen 80;
server_name 192.168.61.128;
root /var/www;
index index.php index.html index.htm;
#location / {
# try_files $uri $uri/ =404;
#}
location /rutorrent {
auth_basic "rutorrent";
auth_basic_user_file /var/www/rutorrent/.htpasswd;
}
location /RPC2 {
include scgi_params;
scgi_pass localhost:5000;
}
location ~ \.php$ {
fastcgi_split_path_info ^(.+\.php)(.*)$;
fastcgi_pass unix:/var/run/php5-fpm.sock;
fastcgi_index index.php;
include fastcgi.conf;
#fastcgi_intercept_errors on;
#fastcgi_ignore_client_abort off;
#fastcgi_connect_timeout 60;
#fastcgi_send_timeout 180;
#fastcgi_read_timeout 180;
#fastcgi_buffer_size 128k;
#fastcgi_buffers 4 256k;
#fastcgi_busy_buffers_size 256k;
#fastcgi_temp_file_write_size 256k;
}
location ~ /\.ht {
deny all;
}
}
server {
listen 443;
server_name 192.168.61.128;
root /var/www;
index index.php index.html index.htm;
ssl on;
ssl_certificate /srv/ssl/nginx.crt; #server.crt
ssl_certificate_key /srv/ssl/nginx.key; #server.key
ssl_session_timeout 5m;
ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers "HIGH:!aNULL:!MD5 or HIGH:!aNULL:!MD5:!3DES";
ssl_prefer_server_ciphers on;
location / {
#try_files $uri $uri/ =404;
}
location /rutorrent {
auth_basic "rutorrent";
auth_basic_user_file /var/www/rutorrent/.htpasswd;
}
location ~ \.php$ {
fastcgi_split_path_info ^(.+\.php)(.*)$;
fastcgi_pass unix:/var/run/php5-fpm.sock;
fastcgi_index index.php;
include fastcgi.conf;
#fastcgi_intercept_errors on;
#fastcgi_ignore_client_abort off;
#fastcgi_connect_timeout 60;
#fastcgi_send_timeout 180;
#fastcgi_read_timeout 180;
#fastcgi_buffer_size 128k;
#fastcgi_buffers 4 256k;
#fastcgi_busy_buffers_size 256k;
#fastcgi_temp_file_write_size 256k;
}
location /RPC2 {
include scgi_params;
scgi_pass localhost:5000;
}
location ~ /\.ht {
deny all;
}
}
And finally my nginx.conf, which again is as close to standard as possible.
user www-data;
worker_processes 4;
pid /run/nginx.pid;
events {
worker_connections 768;
# multi_accept on;
}
http {
##
# Basic Settings
##
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
# server_tokens off;
# server_names_hash_bucket_size 64;
# server_name_in_redirect off;
include /etc/nginx/mime.types;
default_type application/octet-stream;
##
# Logging Settings
##
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log info;
##
# Gzip Settings
##
gzip on;
gzip_disable "msie6";
# gzip_vary on;
# gzip_proxied any;
# gzip_comp_level 6;
# gzip_buffers 16 8k;
# gzip_http_version 1.1;
# gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
##
# nginx-naxsi config
##
# Uncomment it if you installed nginx-naxsi
##
#include /etc/nginx/naxsi_core.rules;
##
# nginx-passenger config
##
# Uncomment it if you installed nginx-passenger
##
#passenger_root /usr;
#passenger_ruby /usr/bin/ruby;
##
# Virtual Host Configs
##
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
I'm quite bad with this stuff, but intuitively it shouldn't be this hard. Thanks for the help.
You're completely right. I modified the configuration, putting both locations in the same vhost; this is a working result, again mostly adapted from the ownCloud manual.
upstream php-handler {
#server 127.0.0.1:9000;
server unix:/var/run/php5-fpm.sock;
}
server {
listen 80;
server_name 192.168.61.128;
return 301 https://$server_name$request_uri; # enforce https
}
server {
listen 443;
server_name 192.168.61.128;
root /var/www;
index index.php index.html index.htm;
ssl on;
ssl_certificate /srv/ssl/nginx.crt; #server.crt
ssl_certificate_key /srv/ssl/nginx.key; #server.key
rewrite ^/caldav(.*)$ /remote.php/caldav$1 redirect;
rewrite ^/carddav(.*)$ /remote.php/carddav$1 redirect;
rewrite ^/webdav(.*)$ /remote.php/webdav$1 redirect;
client_max_body_size 10G; # set max upload size
fastcgi_buffers 64 4K;
location = /robots.txt {
allow all;
log_not_found off;
access_log off;
}
location / {
rewrite ^/.well-known/host-meta /public.php?service=host-meta last;
rewrite ^/.well-known/host-meta.json /public.php?service=host-meta-json last;
rewrite ^/.well-known/carddav /remote.php/carddav/ redirect;
rewrite ^/.well-known/caldav /remote.php/caldav/ redirect;
rewrite ^(/core/doc/[^\/]+/)$ $1/index.html;
try_files $uri $uri/ index.php;
}
location ~ ^/(data|config|\.ht|db_structure\.xml|README) {
deny all;
}
location ~ \.php(?:$|/) {
fastcgi_split_path_info ^(.+\.php)(.*)$;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param PATH_INFO $fastcgi_path_info;
fastcgi_param HTTPS on;
fastcgi_pass php-handler;
}
location /rutorrent {
auth_basic "rutorrent";
auth_basic_user_file /var/www/rutorrent/.htpasswd;
}
location /RPC2 {
include scgi_params;
scgi_pass unix:/home/rtorrent/.sockets/scgi.socket;
}
location ~* \.(?:jpg|jpeg|gif|bmp|ico|png|css|js|swf)$ {
expires 30d;
access_log off;
}
}
There's a lot about this config that could be improved, but your primary issue is that you define two server blocks with an identical server_name. They won't be merged, if that's what you're expecting; instead, one is picked and the other is ignored.
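Nginx will actually tell you about this when you test the configuration, so a quick way to catch it is to run the syntax check and look for the conflict warning. A small illustration, assuming Nginx is installed system-wide with its config at the default path (the exact warning wording may differ by version):
sudo nginx -t
# expect something like:
# nginx: [warn] conflicting server name "192.168.61.128" on 0.0.0.0:443, ignored
# nginx: configuration file /etc/nginx/nginx.conf test is successful
When two server blocks share the same listen address and server_name, requests go to whichever block is defined first in include order and the other is effectively dead configuration, which is why only one of the two apps worked at any given time.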