Configure multiple sites on one nginx server through the IP using DigitalOcean - php

Problem: unable to reach two websites served by one nginx server, i.e. <<ip-address>> and <<ip-address>>/web2
Configuration on Digital Ocean:
1 Droplet / Ubuntu 18 / LEMP
I have two test PHP websites built on the CodeIgniter framework.
Folder config for 1st Website: /var/www/html/web1/
Folder config for 2nd Website: /var/www/html/web2/
Nginx Server Block configuration for two sites
web1.com
server {
    listen 80;
    root /var/www/html/web1;
    index index.php index.html index.htm index.nginx-debian.html;
    server_name <<ip-address>>;

    location / {
        try_files $uri $uri/ /index.php;
    }

    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/var/run/php/php7.2-fpm.sock;
    }

    location ~ /\.ht {
        deny all;
    }
}
web2.com
server {
    listen 80;
    root /var/www/html/web2;
    index index.php index.html index.htm index.nginx-debian.html;
    server_name <<ip-address>>/web2;

    location /web2/ {
        try_files $uri $uri/ /index.php;
    }

    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/var/run/php/php7.2-fpm.sock;
    }

    location ~ /\.ht {
        deny all;
    }
}
I am totally new to nginx; I followed the documentation provided by the DigitalOcean community.
Please help!
Thanks.

What you're trying to do is not how nginx works out of the box. It could, with a lot of fiddling, be made to work that way, but I don't think it's worth the effort.
See, nginx expects server_name to be either an FQDN (fully qualified domain name) or an IP address, but not a full URL with a path.
In your case, the request for <<ip-address>>/web2 is probably actually matching web1's config (pointing you to /var/www/html/web1/web2/, which doesn't exist).
The best way to work this out (assuming you want to keep both sites on the same droplet) is to get an FQDN for each site. It could be a subdomain of a domain you already have (e.g. web1.sharad.com and web2.sharad.com). Then in each of nginx's config files use the appropriate server_name (web1.sharad.com and web2.sharad.com), check for typos and errors with sudo nginx -t, and if all is OK restart nginx with sudo systemctl restart nginx.
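To make that concrete, here is a minimal sketch of the two server blocks, assuming the hypothetical subdomains web1.sharad.com and web2.sharad.com both point at the droplet's IP:

# Sketch: one server block per subdomain on the same droplet.
# Assumes DNS A records for web1.sharad.com and web2.sharad.com
# already resolve to the droplet's IP (hypothetical names).
server {
    listen 80;
    server_name web1.sharad.com;
    root /var/www/html/web1;
    index index.php index.html;

    location / {
        try_files $uri $uri/ /index.php;
    }

    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/var/run/php/php7.2-fpm.sock;
    }
}

server {
    listen 80;
    server_name web2.sharad.com;
    root /var/www/html/web2;
    index index.php index.html;

    location / {
        try_files $uri $uri/ /index.php;
    }

    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/var/run/php/php7.2-fpm.sock;
    }
}

With separate server_name values, nginx picks the right block from the Host header of each request, which is exactly the mechanism a path suffix on server_name cannot provide.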

Related

NGINX alias/root or rewrite for a link loading from a different directory [duplicate]

I want to serve multiple Laravel apps from a single nginx server; the first one has its root directory at /var/www/html/app1, the second at /var/www/html/app2, and so on. The index.php file of each app is in a subdirectory named /public.
Whenever a user calls http://www.mywebsite.com/app1, nginx should return app1, and if the user calls http://www.mywebsite.com/app2, nginx should return app2.
My current nginx conf file is as below:
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    root /var/www/html;
    index index.php index.html index.htm index.nginx-debian.html;
    server_name _;

    location /app1 {
        root /var/www/html/app1/public;
        index index.php;
    }

    location /app2 {
        root /var/www/html/app2/public;
        index index.php;
    }

    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        # try_files $uri $uri/ =404;
        try_files $uri $uri/ /index.php$is_args$args;
    }

    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/run/php/php7.2-fpm.sock;
    }
}
But nginx always returns a 404 page. What's going wrong here?
During a deployment on a Linux server, I came across the same sort of challenge. It was as follows:
<base_url>: one Laravel project needs to be served on this.
<base_url>/<sub_url>: another Laravel project needs to be served on this.
Of course this can be extended to any number of Laravel projects that follow the <base_url>/<unique_sub_url> concept.
Now let's dive into the actual implementation.
# Nginx.conf
# App 1 (Path: /var/www/html/app1, Url: http://www.mywebsite.com)
# App 2 (Path: /var/www/html/app2, Url: http://www.mywebsite.com/app2)
server {
    # Listening port and host address.
    # If 443, make sure to include the ssl configuration as well.
    listen 80;
    listen [::]:80;
    server_name www.mywebsite.com;

    # Default index pages
    index index.php;

    # Root for the / project
    root /var/www/html/app1/public;

    # Handle the main root / project
    location / {
        #deny all;
        try_files $uri $uri/ /index.php?$args;
    }

    # Handle the app2 project; just replicate this section for further
    # projects (app3, app4) by replacing app2 with the appropriate tag.
    location /app2 {
        # Root for this project
        root /var/www/html/app2/public;

        # Rewrite $uri=/app2/xyz back to just $uri=/xyz
        rewrite ^/app2/(.*)$ /$1 break;

        # Try to send the static file at $uri or $uri/,
        # else try /index.php (which will hit "location ~ \.php$" below).
        try_files $uri $uri/ /index.php?$args;
    }

    # Handle all *.php locations (which will always be just /index.php)
    # via the fastcgi PHP-FPM unix socket.
    location ~ \.php$ {
        # At this point, $uri is /index.php, $args is any GET ?key=value,
        # and $request_uri is /app2/xyz.
        # But we don't want to pass /app2/xyz to PHP-FPM; we want just /xyz
        # passed as the fastcgi REQUEST_URI below.
        # This lets Laravel see /app2/xyz as just /xyz in its router,
        # so the Laravel route('/xyz') responds to /app2/xyz as you would expect.
        set $newurl $request_uri;
        if ($newurl ~ ^/app2(.*)$) {
            set $newurl $1;
            root /var/www/html/app2/public;
        }

        # Pass all PHP files to the fastcgi PHP-FPM unix socket.
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        # Use the PHP-FPM socket installed on your machine (php7.2, php5.6, ...).
        fastcgi_pass unix:/var/run/php/php7.3-fpm.sock;
        fastcgi_index index.php;
        include fastcgi_params;
        # Tell PHP-FPM to use the rewritten request URI so it properly
        # responds to Laravel routes.
        fastcgi_param REQUEST_URI $newurl;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_intercept_errors off;
        fastcgi_buffer_size 16k;
        fastcgi_buffers 4 16k;
    }

    # Deny .ht* access
    location ~ /\.ht {
        deny all;
    }
}
Note: when we're using a session-based Laravel setup, all the route generator functions (url(), route()) use the hostname www.mywebsite.com as the root URL, not www.mywebsite.com/app2. To resolve this, make the following changes in the Laravel app.
Define APP_URL in the .env file as APP_URL="www.mywebsite.com/app2"
Go to the RouteServiceProvider, which is located at app/Providers/RouteServiceProvider, and force Laravel to use APP_URL as the root URL for your app:
public function boot()
{
    parent::boot();

    // Force Laravel to use APP_URL as the root URL for the app.
    $strBaseURL = $this->app['url'];
    $strBaseURL->forceRootUrl(config('app.url'));
}
Update: make sure to run the php artisan config:clear or php artisan config:cache command to load the updated value of APP_URL.
For windows: Multiple Laravel Applications Using Nginx - Windows

Nginx set subdomain to work with php

What I want to achieve is this:
Create a subdomain my.domain.com
Execute an index.php on it
I created a file called my.conf and put it in /etc/nginx/conf.d/my.conf
Inside of it I put the following code:
server {
    listen 80;
    listen [::]:80;
    root /var/www/my;
    index index.php index.html;
    server_name my.domain.com;

    location / {
        try_files $uri $uri/ =404;
    }

    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/run/php/php7.0-fpm.sock;
    }

    location ~ /\.ht {
        deny all;
    }
}
I created an A record: Type: A, Name: my, Value: my IP.
I put a file called index.php inside /var/www/my/ and wrote a small echo in it.
I ran nginx -t and everything was fine.
Then sudo service nginx restart and sudo systemctl reload nginx.
Every time I visit my.domain.com I get the content from domain.com. It looks like my.conf doesn't do what it should, because I don't see the index.php content on my subdomain.
Any ideas what I'm doing wrong?
Edit: I also use Cloudflare.
Thanks
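One way to narrow this down (a diagnostic sketch, not from the thread; SERVER_IP stands in for the droplet's real IP) is to send the Host header straight to the server, bypassing Cloudflare's DNS and proxying:

# Ask nginx directly for the subdomain vhost, bypassing Cloudflare.
# If this returns the index.php output but the browser does not,
# the problem is DNS/proxying, not the nginx config.
curl -H "Host: my.domain.com" http://SERVER_IP/

If this still serves the domain.com content, nginx is falling through to its default server block, which points back at the my.conf file (server_name, listen directives) rather than at Cloudflare.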

Nginx - PHP scripts not being called from reverse proxy

Below is an excerpt of my /etc/nginx/sites-available/default file:
server {
    listen 443 ssl;
    server_name example.com;
    root /var/www/html;
    index index.php index.html;

    location / {
        try_files $uri $uri/ =404;
    }

    location /rproxy/ {
        proxy_pass https://example.org:8144/;
    }

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        ....
    }
The example.org:8144 server has these files:
index.php - returns hello World
bonjour.php - returns bonjour
Now here is the issue:
If I browse to https://example.com/rproxy it promptly returns hello world - the expected result.
However, if I browse to https://example.com/rproxy/bonjour.php (or even https://example.com/rproxy/index.php) I get a 404 error.
I understand what is happening here. My nginx configuration is causing the example.com instance of nginx to attempt to find all *.php files locally (i.e. on example.com), which fails when the file I am seeking is in fact on example.org:8144.
I imagine that there is a relatively simple way to tell nginx when NOT to attempt to execute a PHP file - when it is in fact on rproxy. However, my knowledge of nginx configuration is too limited for me to figure out just how to alter it. I'd be most obliged to anyone who might tell me how to change the configuration to prevent this from happening.
I should clarify something here:
I need to be able to run PHP scripts on BOTH servers, example.com and example.org.
There is a very easy workaround here - use a different extension, say php5, for PHP scripts on the proxied server, example.org. However, that is liable to lead to unforeseen problems.
For nginx, regexp locations have higher priority than prefix locations. But:
If the longest matching prefix location has the “^~” modifier then
regular expressions are not checked.
So try to replace
location /rproxy/ {
with
location ^~ /rproxy/ {
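In context, that would make the proxied prefix win over the regex PHP location, along these lines (a sketch of the relevant excerpt, under the same server block as above):

# With ^~, the longest matching prefix location wins and the
# "location ~ \.php$" regex below is never consulted for /rproxy/... URLs,
# so .php requests under /rproxy/ are proxied instead of handled locally.
location ^~ /rproxy/ {
    proxy_pass https://example.org:8144/;
}

location ~ \.php$ {
    try_files $uri =404;
    fastcgi_split_path_info ^(.+\.php)(/.+)$;
    ....
}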
The upstream servers you pass things to have no knowledge of your nginx config. Similarly, your nginx has no idea how your upstream should respond to requests. Neither of them will ever care about the other's configuration.
This is a start for actually passing the name of your script on, but there are a bunch of different ways to do it, and they all depend entirely on how the upstream is configured. Also, don't use regexes if you can avoid it; there's no point in slowing everything down for no reason.
upstream reverse {
    server example.org:8144;
}

server {
    listen 443 ssl;
    server_name example.com;
    root /var/www/html;
    index index.php index.html;

    location / {
        try_files $uri $uri/ =404;
    }

    location /rproxy {
        proxy_pass https://reverse$request_uri;
    }

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        include fastcgi_params;
        fastcgi_pass unix:/blah/tmp.sock;
    }
}
The (not necessarily smart) but neat way is to have your PHP block fall back to your upstream instead of =404 (again, this is a terrible idea unless you know what you're doing, but it can be done):
location @proxy {
    proxy_pass https://upstream$request_uri;
}

location ~ \.php$ {
    try_files $uri @proxy;
    fastcgi_split_path_info ^(.+\.php)(/.+)$;
    include fastcgi_params;
    fastcgi_pass unix:/blah/tmp.sock;
}

Vagrant & Nginx - Name Based Virtual Hosts

I believe I have set up an nginx virtual host file correctly; however, I'm not able to navigate to that URL on the host machine. I have a slight feeling that I'm misunderstanding something here.
Here's the situation.
I have a Vagrantfile that does the following:
Installation of Required Packages
Composer Update
Copies an Nginx Virtual Host file over to Nginx
Restarts all relevant services
Here's the vhost file:
server {
    listen 80;
    server_name vagrant;
    root /usr/share/nginx/www;
    index index.html index.htm index.php;

    location / {
        try_files $uri /index.php;
    }

    location ~ \.php$ {
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        include fastcgi_params;
    }
}
I can verify that when the virtual host file was simply called vagrant and the server name was set to server_name _;, a page was rendered when I visited localhost.
As I'm trying to set up Magento, which doesn't work well with localhost domains, I'm trying to set up a name-based virtual host for it under the domain dev.magento.co.uk.
If I ssh into the Vagrant installation, I can verify that the dev.magento.co.uk file is in the /etc/nginx/sites-enabled directory and that its contents were copied over correctly.
What am I missing?
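For what it's worth, a name-based virtual host only matches when the request's Host header equals the server_name, so the host machine must be able to resolve dev.magento.co.uk to the VM itself. A minimal sketch, assuming a hypothetical private-network IP of 192.168.33.10 for the VM:

# /etc/hosts on the HOST machine (IP is hypothetical; use the VM's actual
# private-network address from the Vagrantfile).
192.168.33.10   dev.magento.co.uk

# Then verify that the vhost itself matches, independent of DNS:
curl -H "Host: dev.magento.co.uk" http://192.168.33.10/

If curl with the explicit Host header returns the site but the browser doesn't, the vhost is fine and the gap is purely name resolution on the host.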

nginx + php with drupal + codeigniter in separate folders

I have a Slicehost slice for a dev server, with nginx and PHP.
I'm trying to get drupal running on localhost/drupal and a codeigniter app running on localhost/codeigniter.
I can get one or the other to work, but not both -- the rewrite and fastcgi seem to be interfering with one another.
Does anyone know how to have /drupal and /codeigniter both working, with rewrite rules (for SEF URLs), in separate folders in my /var/www?
Cheers.
Ok, you have to create a file (no extension needed) in /etc/nginx/sites-available whose name represents your folder/domain (e.g. drupal, yoursite.com).
Here's a sample file:
server {
    server_name yourdomain.com;
    root /var/www/yourdomain;
    index index.php;

    location / {
        autoindex on;
        try_files $uri $uri/ /index.php;
    }

    location ~ \.php$ {
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        include fastcgi_params;
    }
}
The sample above will actually send the rewritten URL to $_SERVER['REQUEST_URI']. For more nginx rewrites, take a look at http://wiki.nginx.org/HttpRewriteModule.
Then you want to enable it by creating a symlink to this file in your /etc/nginx/sites-enabled folder.
Example: # ln -s /etc/nginx/sites-available/yoursite /etc/nginx/sites-enabled/yoursite
Then restart/reload nginx:
# service nginx reload
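The question asked for both apps under subfolders of a single host, though, and that needs a per-app fallback so each framework's front controller receives its own SEF URLs. A rough sketch under assumptions (apps at /var/www/drupal and /var/www/codeigniter, PHP on the same fastcgi backend; not tested against either framework's full rewrite rules):

server {
    listen 80;
    server_name localhost;
    root /var/www;    # so /drupal/... maps to /var/www/drupal/...
    index index.php;

    # Each prefix gets its own front-controller fallback, so the two
    # frameworks' rewrites don't interfere with one another.
    location /drupal {
        # Drupal's clean URLs may additionally need the path passed as ?q=,
        # which takes an extra rewrite; this is the bare fallback only.
        try_files $uri $uri/ /drupal/index.php?$args;
    }

    location /codeigniter {
        try_files $uri $uri/ /codeigniter/index.php?$args;
    }

    location ~ \.php$ {
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        # If your fastcgi_params does not define SCRIPT_FILENAME, add:
        # fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}

Keeping one shared document root with per-prefix try_files avoids the alias/try_files quirks and keeps the single PHP block working for both apps.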
