I give below an excerpt of my /etc/nginx/sites-available/default file:
server {
    listen 443 ssl;
    server_name example.com;
    root /var/www/html;
    index index.php index.html;

    location / {
        try_files $uri $uri/ =404;
    }

    location /rproxy/ {
        proxy_pass https://example.org:8144/;
    }

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        # ...
    }
}
The example.org:8144 server has the following files:
index.php - returns "hello world"
bonjour.php - returns "bonjour"
Now here is the issue:
If I browse to https://example.com/rproxy it promptly returns hello world - the expected result.
However, if I browse to https://example.com/rproxy/bonjour.php (or even https://example.com/rproxy/index.php) I get a 404 error.
I understand what is happening here. My Nginx configuration is causing the example.com instance of Nginx to attempt to find all *.php files locally (i.e. on example.com) which fails when the file I am seeking is in fact on example.org:8144.
I imagine that there is a relatively simple way to tell Nginx NOT to attempt to execute a PHP file when it is in fact behind rproxy. However, my knowledge of Nginx configuration is too limited for me to figure out just how to alter the configuration. I'd be most obliged to anyone who might tell me how to change the configuration to prevent this from happening.
I should clarify something here:
I need to be able to run PHP scripts on BOTH servers, example.com and example.org.
There is a very easy workaround here - I use a different extension, say .php5, for PHP scripts on the proxied server, example.org. However, that is liable to lead to unforeseen problems.
In nginx, regex locations have higher priority than prefix locations. But:
If the longest matching prefix location has the “^~” modifier then
regular expressions are not checked.
So try to replace
location /rproxy/ {
with
location ^~ /rproxy/ {
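Applied to your excerpt, only the proxy location needs to change. A minimal sketch (everything else stays as you have it):

# "^~" makes this prefix location win over the "~ \.php$" regex below it,
# so /rproxy/bonjour.php is proxied instead of being tried as a local PHP file.
location ^~ /rproxy/ {
    proxy_pass https://example.org:8144/;
}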
The upstream servers you pass things to have no knowledge of your nginx config. Similarly, your nginx has no idea how your upstream should respond to requests. Neither of them will ever care about the other's configuration.
This is a start for actually passing the name of your script on, but there's a bunch of different ways to do it and they all entirely depend on how the upstream is configured. Also, don't use regexes if you can avoid it; there's no point in slowing everything down for no reason.
upstream reverse {
    server example.org:8144;
}

server {
    listen 443 ssl;
    server_name example.com;
    root /var/www/html;
    index index.php index.html;

    location / {
        try_files $uri $uri/ =404;
    }

    # "^~" keeps the PHP regex location below from grabbing /rproxy/*.php
    location ^~ /rproxy {
        proxy_pass https://reverse$request_uri;
    }

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        include fastcgi_params;
        fastcgi_pass unix:/blah/tmp.sock;
    }
}
The (not necessarily smart) but neat way is to define your PHP block to fall back to your upstream instead of =404 (again, this is a terrible idea unless you know what you're doing, but it can be done):
location @proxy {
    proxy_pass https://reverse$request_uri;
}

location ~ \.php$ {
    try_files $uri @proxy;
    fastcgi_split_path_info ^(.+\.php)(/.+)$;
    include fastcgi_params;
    fastcgi_pass unix:/blah/tmp.sock;
}
Related
I have some experience with Apache, but now I've switched to Nginx to learn something new. I finally managed to get basic PHP and Let's Encrypt working on my domain. (Yes, I'm happy to try new things.)
I'd like to have some static files with React served by Nginx (I've heard that's something Nginx is good at) and something like a REST API in PHP under the /API/{RESOURCE}/{ACTION|ID} URI.
Now, I have a directory /API/ and have configured Nginx (with some googling) to pass everything under domain.tld/(api|API)/ to /API/index.php (I'm using Nette FW).
index.php works as expected with PHP-FPM and displays, but when I use an endpoint with a RESOURCE, I get some hash string (or random string) back with the header Content-Type: application/octet-stream, even though I'm sending the content type from PHP.
Here is my two-domain "virtualhost" config (except the HTTPS redirect, which works fine):
server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name domain.tld *.domain.tld username.tld *.username.cz;

    # redirect other domains to main
    if ($host != 'domain.tld') {
        return 301 https://domain.tld$request_uri;
    }

    root /home/username/www/domain.tld/www;
    index index.php index.html index.htm;

    location / {
        try_files $uri $uri/ /index.php =404;
    }

    location /API {
        try_files $uri $uri/ /index.php =404;
    }

    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/var/run/php/php7.3-fpm.sock;
    }

    ssl_certificate /etc/letsencrypt/live/domain.tld/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/domain.tld/privkey.pem; # managed by Certbot
}
Any ideas what's wrong? Thanks
Thanks to Ivan Shatsky's comment, I found the real problem - it was redirecting to 404 instead of index.php.
So now the try_files in
location /API {
    try_files $uri $uri/ /index.php =404;
}
became try_files $uri $uri/ /API/index.php;
(yes, it needs the whole path, I suppose)
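Putting it together, the working location block looks like this (the final argument of try_files is a fallback URI used for an internal redirect, which is why it needs the full path from the site root):

location /API {
    try_files $uri $uri/ /API/index.php;
}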
Thanks a lot. My school project can now work :)
The scenario is that I'd like to use Wordpress as a backend API provider for our Ember.js frontend app.
The Ember.js frontend needs to be served from the root, and the Wordpress instance ideally would be reachable by going to a subdirectory. So for example on localhost it would be http://localhost and http://localhost/wordpress
On the disk the two are deployed in /srv/http/ember and /srv/http/wordpress respectively.
I was trying to assemble the configuration going by the example on the Nginx site:
https://www.nginx.com/resources/wiki/start/topics/recipes/wordpress/
The config:
http {
    upstream php {
        server unix:/run/php-fpm/php-fpm.sock;
    }

    server {
        listen 80;
        server_name localhost;

        root /srv/http/ember;
        index index.html;
        try_files $uri $uri/ /index.html?/$request_uri;

        location /wordpress {
            root /srv/http/wordpress;
            try_files $uri $uri/ /index.php?$args;
        }

        location ~ \.php$ {
            # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
            include fastcgi.conf;
            fastcgi_intercept_errors on;
            fastcgi_pass php;
            fastcgi_split_path_info ^(/wordpress)(/.*)$;
        }
    }
}
However this is obviously not the correct solution.
Upon trying to access the address http://localhost/wordpress/index.php I get the following in the logs:
2016/05/01 17:50:14 [error] 4332#4332: *3 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream, client: 127.0.0.1, server: localhost, request: "GET /wordpress/index.php HTTP/1.1", upstream: "fastcgi://unix:/run/php-fpm/php-fpm.sock:", host: "localhost"
The recipe isn't clear about where to put the root directive for the wordpress location. I also tried adding index index.php, which doesn't help either.
(Serving the Ember app works fine.)
From your question it seems that the location ~ \.php$ block is used by WordPress alone. However, it needs a root of /srv/http in order to find the script files for URIs beginning with /wordpress under the local path /srv/http/wordpress.
As there are two locations which both use the same WordPress root, it is possibly cleaner to make /srv/http the default (that is, inherited from the server block) and move root /srv/http/ember; into a separate location / block.
server {
    listen 80;
    server_name localhost;

    root /srv/http;

    location / {
        root /srv/http/ember;
        index index.html;
        try_files $uri $uri/ /index.html?/$request_uri;
    }

    location /wordpress {
        index index.php;
        try_files $uri $uri/ /wordpress/index.php?$args;
    }

    location ~ \.php$ {
        include fastcgi.conf;
        fastcgi_intercept_errors on;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass php;
    }
}
Notice that the default URI in location /wordpress is /wordpress/index.php and not /index.php as you originally had.
I have explicitly set SCRIPT_FILENAME as it may or may not appear in your fastcgi.conf file.
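For reference, the fastcgi.conf shipped with upstream nginx is essentially fastcgi_params plus the one extra line below; distribution packages vary, though, which is why it may or may not already be in yours:

# Included in upstream nginx's fastcgi.conf; harmless to declare again explicitly.
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;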
fastcgi_split_path_info has been removed as it is unnecessary in your specific case, and I think it would actually break WordPress the way you had it.
I'm currently trying to set up a generic, multi-project development environment in Vagrant for students of a web-development mentoring project. The idea is that the domain <project>.vagrant maps to ~/code/<project>.
I thought I had enough experience with Nginx to solve this, but it turns out I don't.
Assuming that PHP-FPM is correctly setup, I need help with the try_files/routing for the site-configuration.
Whilst the homepage (/) works fine, any request to a non-static file (which should therefore be passed to PHP-FPM) results in either a 301 Moved Permanently to the homepage, or the contents of the PHP script being downloaded instead of executed.
And yes, I know listing so many index files is not ideal, but the students will be dealing with multiple projects (phpMyAdmin, WordPress) and frameworks (Symfony, Silex, Laravel, etc.).
Any help with this would be greatly appreciated!
The contents of the single site-available configuration file so far is:
map $host $projectname {
    ~^(?P<project>.+)\.vagrant$ $project;
}

upstream phpfpm {
    server unix:/var/run/php5-fpm.sock;
}

server {
    listen 80;
    server_name *.vagrant;
    server_tokens off;

    root /home/vagrant/code/$projectname/web;
    index app_dev.php app.php index.php index.html;
    autoindex on;
    client_max_body_size 5M;

    location / {
        try_files $uri $uri/ / =404;
    }

    # Pass all PHP files onto PHP's Fast Process Manager server.
    location ~ [^/]\.php(/|\?|$) {
        fastcgi_split_path_info ^(.+?\.php)(/.*)$;
        if (!-f $document_root$fastcgi_script_name) {
            return 404;
        }
        try_files $fastcgi_script_name =404;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        # Specify the determined server-name, not the literal "*.vagrant".
        fastcgi_param SERVER_NAME $projectname.vagrant;
        fastcgi_pass phpfpm;
    }
}
I've looked up a few topics on here around this, but none of the solutions I've found so far seem to work.
I have 3 boxes created via a Vagrantfile with puppet modules, which have nginx and php installed. I've created a simple webpage to output the host name statically, plus php info.
On the load balancer I have the following code for /etc/nginx/sites-available/127.0.0.1 (note this is now the default site, linked and set up through my Vagrantfile):
# vagrant/puppet/modules/nginx/files/loadBalancer/127.0.0.1
upstream backend {
    server 192.168.205.20; # IP of second machine
    server 192.168.205.30; # IP of third machine
}

server {
    listen 80;
    server_name _;

    root /var/www/app;
    index index.php;

    location / {
        try_files $uri /index.php;
        proxy_pass http://backend;
    }

    location ~ \.php$ {
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        include fastcgi_params;
    }
}
The two additional hosts, which serve this web app, have the following file for their /etc/nginx/sites-available/127.0.0.1:
# vagrant/puppet/modules/nginx/files/127.0.0.1
server {
    listen 80;
    server_name _;

    root /var/www/app;
    index index.php;

    location / {
        try_files $uri /index.php;
        proxy_pass http://backend;
    }

    location ~ \.php$ {
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        include fastcgi_params;
    }
}
However, this results in only one page ever coming up (the load balancer's own; it never alternates to the other two like it should).
I have also tried passing the backend upstream as the fastcgi_pass, but this causes a 502 Bad Gateway. Is there something I am misunderstanding about how this should function? Any help would really be appreciated!
I am trying to set up an Nginx server configuration to serve a CakePHP installation from and to a subfolder.
URL: https://sub.domain.com/cakefolder
Folder on system: /var/www/domain/sub/cakefolder
So I am using a subfolder both in the URL and on the system. It took me a while to figure out the following config, with which requests are properly handled by CakePHP - meaning it correctly bootstraps and handles controllers.
What doesn't work, however, is serving static files from the webroot directory (e.g. *.css files), as those are all interpreted as CakePHP controllers, leading to a "CssController could not be found." error.
My site.conf:
server {
    listen *:80;
    listen *:443 ssl;
    server_name sub.domain.com;

    ssl_certificate ./ssl/domain.crt;
    ssl_certificate_key ./ssl/domain.key;

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    if ($ssl_protocol = "") {
        rewrite ^ https://$server_name$request_uri? permanent;
    }

    root /var/www/domain/sub/cakefolder/;
    autoindex off;
    index index.php;

    location /cakefolder {
        root /var/www/domain/sub/cakefolder/app/webroot/;
        try_files $uri $uri/ /cakefolder/index.php?$args;
    }

    location ~ \.php$ {
        try_files $uri =404;
        include /etc/nginx/fastcgi_params;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}
How do I stop Nginx from routing existing static files through the FastCGI PHP interpreter?
Based on https://stackoverflow.com/a/22550332/671047 I already tried replacing my location /cakefolder { ... } with
location ~ /cakefolder/(.*) {
    try_files /cakefolder/$1 /cakefolder/$1/ /cakefolder/index.php?$args;
}
but this leads to a redirection loop causing an HTTP 500 error.
Solution (thanks Pete!):
I found the following additional location block to work for this specific setup. It might not be the most elegant solution, but who cares - glad it's working for now.
location ~* /cakefolder/(.*)\.(css|js|ico|gif|png|jpg|jpeg)$ {
    root /var/www/domain/sub/cakefolder/app/webroot/;
    try_files /$1.$2 =404;
}
You could catch it early (regex locations are evaluated in the order they appear, so placing this block before the /cakefolder one lets it win for static files):
location ~* \.(css|js|ico)$ {
    try_files $uri =404;
}
I have a similar setup and that worked for me when I experienced the same thing (just not Cake). I won't lie, I never understood why the try_files with the redirect always failed on existing static files, whereas a try_files like the one above finds the file no problem. Ideas on that? (Perhaps today is a source-reading day.)