How to skip a directory in a URI using NGINX? - php

So I saw this answer, but it didn't seem related to this question.
My situation is, I have a directory on my website called mywebsite.com/indev/dist (since dist is the build folder). I want all the files in mywebsite.com/indev/dist to appear and work as if the dist folder didn't exist at all.
For example, if I requested mywebsite.com/indev/index45.php, the file would be located at /indev/dist/index45.php.
Here's what I've tried so far:
location ~ .*\/indev\/dist.* {
    try_files $uri $uri/ =404;
}

location ~ .*\/indev.* {
    rewrite ^\/indev\/(.*) /indev/dist/$1 break;
}
This works fine with one major drawback: PHP files don't run; they fall through to the original address, which returns a 404. Is there any way to do this in the NGINX config without any major drawbacks (and without having to add extra rules, e.g. just to handle PHP files)?
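For what it's worth, one direction that might work here (a rough sketch, untested against this setup, and assuming a typical PHP-FPM socket path) is to rewrite with last instead of break, so the rewritten URI is matched against the location blocks again and .php requests reach the fastcgi handler instead of being served as static files:
location /indev/ {
    # rewrite /indev/* to /indev/dist/*; the (?!dist/) lookahead avoids
    # rewriting URIs that already point into dist, and "last" re-runs
    # location matching on the rewritten URI
    rewrite ^/indev/(?!dist/)(.*)$ /indev/dist/$1 last;
}

location ~ \.php$ {
    try_files $uri =404;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass unix:/run/php/php-fpm.sock;   # assumed socket path
}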

Related

How to have a Wordpress site on foo.example, and another Wordpress site on foo.example/bar?

I have a legacy WordPress blog that runs only on PHP 5.2 (lots of incompatibilities with later versions), and I am developing a new WordPress blog that should run on PHP 7.
The requirement is that the new blog have a URL of foo.example, and the legacy one would be at foo.example/bar.
Due to the different PHP versions, each one is hosted on a different machine. Until now, the closest I've gotten was having a subdomain bar.foo.example point to the legacy blog, but I couldn't make foo.example/bar do the same thing (I don't even know if it's possible).
I would greatly appreciate some help with this task, and I'm open to alternatives.
Depending on your server's software, I know you can do something like this in NGINX:
server {
    server_name domain.tld;
    root /var/www/wordpress;
    index index.php;
    ...

    location / {
        try_files $uri $uri/ /index.php?$args;
    }

    location /bar/ {
        root /var/www/wordpress-legacy;
        try_files $uri $uri/ /index.php?$args;
    }

    ...
}
You can't really point to a different server based on the request path - a domain will always resolve to one server (at least from the user's perspective). However, that one server can act as a proxy and serve content from the correct backend. Potential solutions:
Put a load balancer in front of these two servers - see HAProxy - URL Based routing with load balancing.
Configure the new server as a proxy that serves the subdirectory's content from the legacy server, for example by using an NGINX reverse proxy:
location /bar/ {
    proxy_pass http://legacy.foo.example/bar/;
}
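If the legacy blog needs to see the real client rather than the proxying server, the usual forwarding headers can be added to that block; this is a hedged extension of the snippet above, not something the original answer requires:
location /bar/ {
    proxy_pass http://legacy.foo.example/bar/;
    # pass the original client details through to the legacy server
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}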

Nginx - Serve Wordpress/app in one directory, if file not found try a different directory

I'm stumped on this. It seems like it should be easy but I can't seem to get it done.
I have an old site with thousands of old files. These are hardly organized but need to be accessible. I want to put these in /var/www/site/legacy/.
The current site runs WordPress. I want WordPress to run in /var/www/site/wordpress/
I'd like the logic of things to go:
Go to www.site.com/
If it's an ordinary WordPress request (e.g., a file in /wp-content/ or a templated page), deliver it by running the PHP request from /var/www/site/wordpress/
If not a WP request, look for the file in /var/www/site/legacy/
It seems like I should be able to do something like this:
root /var/www/site.com/wordpress;
....

location / {
    try_files $uri $uri/ /index.php?$args @legacy;
}

location @legacy {
    root /var/www/site/legacy;
    try_files $uri $uri/;
}
This SHOULD be easy but I've tried a dozen permutations of the above logic with no luck.
Thanks.
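For what it's worth, one pattern often used for this kind of fallback is a named location that is only consulted when the file is missing from the WordPress tree. This is only a sketch built from the paths in the question, with an assumed PHP-FPM socket path, and it hasn't been tested here:
server {
    root /var/www/site/wordpress;

    location / {
        # serve from the WordPress tree first, then fall back to legacy
        try_files $uri $uri/ @legacy;
    }

    location @legacy {
        root /var/www/site/legacy;
        # if the legacy tree doesn't have the file either, hand the
        # request to WordPress's front controller
        try_files $uri $uri/ /index.php?$args;
    }

    location ~ \.php$ {
        # PHP runs against the server-level root (the WordPress tree)
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/run/php/php-fpm.sock;   # assumed socket path
    }
}
Whether the legacy lookup should happen before or after WordPress's front controller is a design choice; with this ordering, index.php is only reached when neither tree contains a matching file.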

Nginx: X-Accel-Redirect not working for files with a known MIME extension

I am developing a webapp, and the X-Accel-Redirect header works only for files without an extension. For some reason, if I add an extension to the file name, X-Accel-Redirect doesn't work.
Working example:
X-Accel-Redirect: /protected_files/myfile01.z
Non-working example:
X-Accel-Redirect: /protected_files/myfile01.zip
I'm using nginx 1.7.1.
The weird part is that if I replace the extension (in this case ".zip") with something not registered in the mime.types file, it works fine (obviously I rename the file accordingly), but an extension pointing to a known MIME type (such as "zip", "jpg", or "html") generates a "404 Not Found" error.
UPDATE:
It seems that the issue is due to this rule I have in the conf file:
location ~ \.(js|css|png|jpg|gif|swf|ico|pdf|mov|fla|zip|rar)$ {
    try_files $uri =404;
}
For some reason, it seems that nginx tests for the existence of the file in the filesystem first, and only after that tries the "internal/aliased" path.
Any ideas on how to make nginx send all "/protected_files" URIs coming from X-Accel-Redirect directly to the "internal" location instead of trying to find them in other paths first?
Thanks in advance.
The error was due to a conflict between rules in the nginx config file.
So, the solution was:
# ^~ is needed (per the nginx docs) so nginx stops checking other locations
location ^~ /protected_files {
    internal;
    alias /path/to/static/files/directory;
}

# avoid having my app process requests for nonexistent static files
location ~ \.(js|css|png|jpg|gif|swf|ico|pdf|mov|fla|zip|rar)$ {
    try_files $uri =404;
}
Hope this helps many of you.

Redirect /download URI to subdomain on same domain

I finished developing an app that features a download system, hosted with NGINX at:
http://dashboard.myapp.com
The URL for downloads is:
http://dashboard.myapp.com/download/file-slug
This page is a regular PHP page that requires some user input, and then PHP handles the actual file download; it's not the direct path to the file.
Since these download URLs will be made publicly available, I want to ditch that dashboard subdomain.
The default domain (myapp.com) is already running a WordPress setup with this:
location / {
    try_files $uri $uri/ /index.php?q=$uri&$args;
}
Is there an easy way to get the:
http://myapp.com/download/file-slug
to act as if:
http://dashboard.myapp.com/download/file-slug
was accessed, without actually redirecting?
Try this - place it in your server block for myapp.com, anywhere outside another location block. Set the root to the same root as the dashboard subdomain (if it's on the same server). The script would see itself as being hosted at myapp.com instead of dashboard.myapp.com, but it should retain the remainder of the framework rules. If this doesn't work, try the next option.
location /download/file-slug {
    root /path/folder;
    try_files $uri $uri/ /index.php?q=$uri&$args;
}
Another option is to proxy through Nginx. This leaves the script running where it currently is, with Nginx accessing it the way a client would through dashboard.myapp.com. See the proxy_pass documentation on nginx.org.
location /download/file-slug {
    proxy_pass http://dashboard.myapp.com/download/file-slug;
}
I was able to work it out with Nginx only.
Inside the myapp.com config file I added:
location ~ /download/(.*) {
    resolver 8.8.8.8;
    proxy_pass http://dashboard.myapp.com/download/$1;
}
The resolver 8.8.8.8 line uses Google's public DNS. Without it I was getting a "no resolver defined to resolve" error.
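If the download script also needs to see the real client address rather than the proxy's, the standard forwarding headers can be added; this is an assumption on my part, not something the answer above required:
location ~ /download/(.*) {
    resolver 8.8.8.8;
    proxy_pass http://dashboard.myapp.com/download/$1;
    # let the PHP script see the original client, not the proxy
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}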

PHP front controller in nginx

I have a wiki that hosts user-generated content with URLs like /wiki/view/pagename and /wiki/modify/pagename. I'm using an nginx configuration that goes something like:
location /wiki/ {
    try_files $uri $uri/ /wiki/index.php?q=$uri&$args;
}

location ~ \.php$ {
    try_files $uri =404;
    #fastcgi stuff...
}
It's been working great and, as far as I can tell, this is the recommended approach. However, today a user created a page named "whatever.php", so requests for URLs like /wiki/view/whatever.php need to be routed to my /wiki/index.php... but they get caught in the second location block and return a 404 to the user agent.
Does anyone have any suggestions? Can I add an extra location block somewhere that rewrites *.php to the main script in a way that doesn't affect how pages are actually routed? I still want nginx to serve static content inside the /wiki/ directory and to preserve the behaviour of everything outside this directory.
Repost of this dead forum thread
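One pattern that might cover this, sketched here only as a possibility (it assumes the existing fastcgi block can be reused and would replace the current location /wiki/ block): a ^~ prefix location keeps the generic \.php$ regex location from ever seeing /wiki/ URLs, while a nested location still passes the real front controller to PHP:
location ^~ /wiki/ {
    # the front controller is the only .php file under /wiki/ handed to PHP
    location ~ ^/wiki/index\.php$ {
        try_files $uri =404;
        #fastcgi stuff... (same as the existing PHP block)
    }

    # everything else, including /wiki/view/whatever.php, lands here;
    # missing files are internally redirected to the front controller
    try_files $uri $uri/ /wiki/index.php?q=$uri&$args;
}
The trade-off is that any other real .php file under /wiki/ would no longer execute directly, which is usually the point with a front-controller layout.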
