Laravel Forge Nginx Config for SSL

I usually don't post questions until I've researched them to death on the internet. I created a CSR using Laravel Forge, added the certificate, activated it, and edited the Nginx config using these resources:
https://stackoverflow.com/questions/26192839/laravel-forge-ssl-certificate-not-working
(After following the above, curl https://domain.com returns data.)
server {
    listen 80;
    server_name example.com www.example.com;
    return 301 https://example.com$request_uri;
}

server {
    listen 443 ssl;
    server_name example.com www.example.com;
    root /home/forge/example.com/public;

    # FORGE SSL (DO NOT REMOVE!)
    ssl_certificate /etc/nginx/ssl/vgport.com/3042/server.crt;
    ssl_certificate_key /etc/nginx/ssl/vgport.com/3042/server.key;

    index index.html index.htm index.php;
    charset utf-8;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location = /favicon.ico { access_log off; log_not_found off; }
    location = /robots.txt  { access_log off; log_not_found off; }

    access_log off;
    error_log /var/log/nginx/default-error.log error;

    error_page 404 /index.php;

    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_index index.php;
        include fastcgi_params;
    }

    location ~ /\.ht {
        deny all;
    }
}
I run 'service nginx restart' on the command line, then check /var/log/nginx/error.log and see the following errors:
'conflicting server name "" on 0.0.0.0:80, ignored'
'conflicting server name "www.domain.com" on 0.0.0.0:80, ignored'
When I visit domain.com, it gets redirected to https://domain.com and ends with 'This webpage has a redirect loop'. Clearly the Nginx redirect isn't working somehow, but I've followed all the steps.
Please let me know what additional error logs and information I should post to troubleshoot this issue. Any help would be greatly appreciated, thanks in advance.

Okay, so the problem was simpler than I thought. I was using the free Cloudflare DNS pointing, which didn't support SSL. I switched to using the Namecheap DNS and it started working.
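For anyone who keeps Cloudflare in front of the site: the usual cause of that loop is Cloudflare's flexible SSL mode, where the proxy fetches from the origin over plain HTTP, so the port-80 block above answers every proxied request with yet another 301 to HTTPS. A minimal, illustrative sketch of a loop-safe redirect (not the Forge-generated config), assuming the proxy sets the standard X-Forwarded-Proto header, which Cloudflare does:

server {
    listen 80;
    server_name example.com www.example.com;
    root /home/forge/example.com/public;
    index index.html index.htm index.php;

    # Requests Cloudflare already received over HTTPS arrive here over plain HTTP
    # but carry X-Forwarded-Proto: https, so only redirect genuine HTTP visits.
    if ($http_x_forwarded_proto != "https") {
        return 301 https://example.com$request_uri;
    }

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php$ {
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_index index.php;
        include fastcgi_params;
    }
}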

After spending some time researching nginx, I'd just like to add that when you add a cert, you have to manually press "Activate it" once downloaded.

Related

Accessing a website (Laravel on a DigitalOcean droplet) through the droplet IP works, but using the registered domain shows a 404

I have deployed a Laravel project on a DigitalOcean droplet and am serving it with NGINX.
Everything works fine when accessing the webpage through the IP, https://<Droplet_IP>/ (HTTPS is enabled, but the same problem was present before it was enabled).
But when I try to access the same webpage through the registered domain, https://<domain_name>/, I get a 404 error page - the custom error page made by us. Very weird.
Trying to access ANY link using the domain results in the 404 error page, while the same action through the droplet IP works like a charm.
Of course, I've set the DNS records in DigitalOcean, both the @ and www records.
I've also updated the NS records at the domain registrar to the DigitalOcean nameservers.
I've gone through different NGINX configurations, trying various solutions from the internet, and my latest NGINX config is below:
server {
    server_name domain www.domain;
    root /var/www/html/public;

    add_header X-Frame-Options "SAMEORIGIN";
    add_header X-XSS-Protection "1; mode=block";
    add_header X-Content-Type-Options "nosniff";

    index index.html index.htm index.php;
    charset utf-8;

    location / {
        #try_files $uri $uri/ /index.php?$query_string;
        try_files $uri $uri/ /index.php?$is_args$args;
        #try_files $uri $uri/ /index.php?$args;
    }

    location = /favicon.ico { access_log off; log_not_found off; }
    location = /robots.txt  { access_log off; log_not_found off; }

    #error_page 404 /index.php;

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass unix:/var/run/php/php7.4-fpm.sock;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
        include fastcgi_params;
    }

    location ~ /\.(?!well-known).* {
        deny all;
    }

    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/habitoz.in/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/habitoz.in/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}

server {
    if ($host = www.habitoz.in) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    if ($host = habitoz.in) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    listen 80;
    server_name domain www.domain;
    #return 404; # managed by Certbot
}
Right now, I'm at a loss as to what to do.
How is the website working perfectly through the IP, but not through the domain?
And if there was a problem with the domain in the first place, how is the custom 404 page being shown?
That means the application is working, but somehow something is amiss when accessing it through the domain.
I spent the whole day and night and couldn't figure out the answer. When I woke up in the morning, it clicked.
It turned out to be a very simple, idiotic error!
When I deployed the app on the droplet, I modified the .env file and changed APP_URL to the <Droplet_IP>, but when I linked the droplet to the domain, I skipped the step of changing APP_URL; it's such a small thing that I didn't even remember I had skipped it.
All my error checks and trying out different solutions amounted to nothing, yet it didn't strike me to check the .env file.
Once APP_URL was modified to point to the right URL, the error was gone.
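For anyone else who hits this, the fix is a one-line change in .env (the hostname below is illustrative); note that if Laravel's config cache is in use, the old value will stick until you run php artisan config:clear or rebuild the cache with php artisan config:cache:

APP_URL=https://example.com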

Configuring adminer with NGINX: "no port in upstream php" error

I have set up a MeteorJS application with a MongoDB backend on DigitalOcean. I am now trying to set up Adminer so I can query MongoDB without opening inbound ports on my droplet. Every time I try to reload the nginx settings, I get the error nginx: [emerg] no port in upstream "php" in /etc/nginx/sites-enabled/admin:32.
What am I missing?
/etc/nginx/sites-enabled/admin
server {
    listen 80;
    server_name 165.227.197.220;
    return 301 https://$server_name$request_uri;
}

server {
    server_name 165.227.197.220;
    listen 443 ssl;

    access_log /var/log/nginx/admin.access.log;
    error_log /var/log/nginx/admin.error.log;

    ssl_certificate /etc/nginx/ssl/admin.crt;
    ssl_certificate_key /etc/nginx/ssl/admin.key;

    root /var/www/admin;
    index adminer.php;

    # Get file here https://codex.wordpress.org/Nginx
    include global/restrictions.conf;

    auth_basic "Restricted";
    auth_basic_user_file /etc/nginx/.htpasswd/admin;

    location / {
        try_files $uri $uri/ =404;
    }

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass php;
        fastcgi_index adminer.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}
/etc/nginx/global/restrictions.conf
# Global restrictions configuration file.
# Designed to be included in any server {} block.
location = /favicon.ico {
    log_not_found off;
    access_log off;
}

location = /robots.txt {
    allow all;
    log_not_found off;
    access_log off;
}

# Deny all attempts to access hidden files such as .htaccess, .htpasswd, .DS_Store (Mac).
# Keep logging the requests to parse later (or to pass to firewall utilities such as fail2ban)
location ~ /\. {
    deny all;
}

# Deny access to any files with a .php extension in the uploads directory
# Works in sub-directory installs and also in multisite network
# Keep logging the requests to parse later (or to pass to firewall utilities such as fail2ban)
location ~* /(?:uploads|files)/.*\.php$ {
    deny all;
}
You need to set up an upstream. For example, with WordPress this is what we have:
upstream index_php_upstream {
    server 127.0.0.1:8090; # NGINX Unit backend address for index.php with
                           # 'script' parameter
}
For more about upstreams, you can view this article: Upstream - Nginx
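In the config above, that error simply means nginx never sees a definition for the named upstream "php" that fastcgi_pass references, so it tries to parse "php" as a host:port pair and finds no port. A minimal sketch of either fix, assuming PHP-FPM is listening on its usual Unix socket (the exact path depends on your PHP version and distribution):

# Option 1: define the upstream that fastcgi_pass php; refers to.
# Place it at http level, e.g. at the top of this sites-enabled file, outside the server blocks.
upstream php {
    server unix:/var/run/php/php7.4-fpm.sock;  # or 127.0.0.1:9000 if FPM listens on TCP
}

# Option 2: skip the named upstream and point the location ~ \.php$ block at the socket directly:
#     fastcgi_pass unix:/var/run/php/php7.4-fpm.sock;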

NGINX + WordPress gives ERR_TOO_MANY_REDIRECTS

I have been trying to host a simple WordPress blog using NGINX as the web server. The blog is hosted as a subdirectory under domain_name.com/blog.
The main blog opens correctly, but when I try to open wp-admin under domain_name.com/blog/wp-admin, my browser shows ERR_TOO_MANY_REDIRECTS.
I am not sure whether this is an issue with my NGINX configuration or my WordPress configuration. The following is my NGINX server block:
server {
    listen 80;
    server_name <domain_name.com>;
    root /var/www/html;
    index index.php;

    location /blog {
        try_files $uri $uri/ /blog/index.php?$args;
    }

    location ~ \.php$ {
        include fastcgi.conf;
        fastcgi_intercept_errors on;
        fastcgi_pass php;
    }

    location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
        expires max;
        log_not_found off;
    }
}
WordPress is installed under the /var/www/html/blog directory, and the "siteurl" and "home" values in the wp_options database table both point to domain_name.com/blog.
What would be a good way to solve this problem?
Additional notes that might be helpful:
When I try to access static files under the wp-content directory, they open without any issues. No redirection errors there.
It is usual for WordPress to redirect an HTTP session to HTTPS whenever wp-admin is accessed. This may be controlled using the FORCE_SSL_LOGIN and FORCE_SSL_ADMIN settings in wp-config.php.
When a reverse proxy is terminating SSL, the fact that the originating connection is over HTTPS must be conveyed to WordPress to avoid a redirect loop.
Your reverse proxy should be setting a header such as X-Forwarded-Proto.
You then need to change your nginx configuration so that the HTTPS flag is set correctly for WordPress.
For example:
map $http_x_forwarded_proto $https_flag {
    default off;
    https on;
}

server {
    listen 80;
    server_name example.com;
    root /var/www/html;
    index index.php;

    location /blog {
        try_files $uri $uri/ /blog/index.php?$args;
    }

    location ~ \.php$ {
        include fastcgi.conf;
        fastcgi_intercept_errors on;
        fastcgi_param HTTPS $https_flag;
        fastcgi_pass php;
    }

    location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
        expires max;
        log_not_found off;
    }
}
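For completeness, the map above only helps if whatever terminates SSL in front of this server block actually sends X-Forwarded-Proto. A minimal sketch of that proxy side, assuming it is also nginx (the certificate paths and backend address are illustrative):

server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate /etc/nginx/ssl/example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com.key;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;  # lets WordPress see the original scheme
        proxy_pass http://127.0.0.1:80;              # the port-80 server block above
    }
}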

Moodle installation error on Homestead VM

I have installed the Homestead VM and set up the Moodle installation folder on my Mac (OS X Yosemite). I also created the 'moodledata' folder and gave it, along with 'moodledata/sessions', 0777 permissions via my system command line (I tried doing this via SSH inside the VM, but it didn't appear to change the permissions). However, checking the permissions after doing it from my system showed the folder was writable from inside the VM.
I then moved on to the installation, which ran through, created the DB tables, and performed the environment check, which showed two warnings:
Intl and xmlrpc to check
I don't believe these are essential for the initial installation, so I carried on. It is when I get to the admin user creation that I run into a problem. The page (/user/editadvanced.php?id=2) stops loading any images, and when I post the form I get an error: 'Incorrect sesskey submitted, form not accepted!'
I thought this could be down to the session folder inside moodledata not being writable, but as I have now checked that, I am out of ideas!
I have attached a couple of screenshots.
Many thanks, Mike.
OK, after a good few days of head scratching, I fixed my own issue by editing the NGINX config file. Below is what it was by default:
server {
    listen 80;
    server_name example.com;
    root /home/forge/example.com;

    # FORGE SSL (DO NOT REMOVE!)
    # ssl_certificate;
    # ssl_certificate_key;

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;

    index index.html index.htm index.php;
    charset utf-8;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location = /favicon.ico { access_log off; log_not_found off; }
    location = /robots.txt  { access_log off; log_not_found off; }

    access_log off;
    error_log /var/log/nginx/example.com-error.log error;

    error_page 404 /index.php;

    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_index index.php;
        include fastcgi_params;
    }

    location ~ /\.ht {
        deny all;
    }
}
And this is what I changed it to and it now works:
server {
    listen 80;
    server_name example.com;                                      #REPLACE SERVER NAME
    root /var/www/example.com/www/;                               #REPLACE MOODLE INSTALL PATH
    error_log /var/www/example.com/log/example.com_errors.log;    #REPLACE MOODLE ERROR LOG PATH
    access_log /var/www/example.com/log/example.com_access.log;   #REPLACE MOODLE ACCESS LOG PATH

    rewrite ^/(.*\.php)(/)(.*)$ /$1?file=/$3 last;

    location / {
        index index.php index.html index.htm;
        try_files $uri $uri/ /index.php;
    }

    fastcgi_intercept_errors on;

    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_index index.php;
        include fastcgi_params;
    }

    location ~ /\.ht {
        deny all;
    }
}
I haven't had time to work out which part or parts of the above config fixed the issue; maybe someone who knows can see it straight away? I suspect it could be the rewrite rule. Either way, I hope this helps someone else in the future, and I am really happy to have this working!
I can confirm it is just the rewrite part for that specific config file, although on the Moodle Nginx page it's not documented that way.
My guess is that the location ~ [^/]\.php(/|$) { block does the same thing as the rewrite rule rewrite ^/(.*\.php)(/)(.*)$ /$1?file=/$3 last; combined with the location ~ \.php$ { block. I will need to run some tests changing the location directive to see if that works as well.
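For reference, this is roughly the single location block the Moodle nginx documentation describes, which passes the trailing path to PHP via PATH_INFO instead of rewriting it into a ?file= query string; I haven't verified it on this exact Homestead box, and the socket path is the php5-fpm one used above:

location ~ [^/]\.php(/|$) {
    fastcgi_split_path_info ^(.+\.php)(/.+)$;
    fastcgi_index index.php;
    fastcgi_pass unix:/var/run/php5-fpm.sock;
    include fastcgi_params;
    fastcgi_param PATH_INFO $fastcgi_path_info;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}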

Unable to get rewriting working properly on nginx for Laravel

I am looking for some help configuring nginx so that Laravel routes work correctly. I have found numerous tutorials giving slightly different approaches, but to no avail.
Following "nginx configuration for Laravel 4" seems quite close to what I need; however, I am getting the error No input file specified.
When I look in the error log, I can see that instead of my route going to, e.g.,
/url/index.php/args
it is instead being routed to /url/args/index.php
This is my nginx app configuration file, and it's all you need to make it work; note that nginx doesn't make use of .htaccess:
server {
    listen 80;
    server_name laravel.dev;
    root /var/www/laravel/public/;

    index index.html index.htm index.php;
    charset utf-8;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location = /favicon.ico { access_log off; log_not_found off; }
    location = /robots.txt  { access_log off; log_not_found off; }

    access_log /var/log/nginx/laravel.dev-access.log combined;
    error_log /var/log/nginx/laravel.dev-error.log error;

    error_page 404 /index.php;

    sendfile off;

    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_index index.php;
        include fastcgi_params;
    }

    location ~ /\.ht {
        deny all;
    }
}
I managed to get my site working. I think at least one of the configs I had already tried was correct; however, my copy in sites-enabled was an actual copy rather than a symlink, so my changes weren't actually being applied.
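In case it saves someone else the same hunt, this is roughly how to swap the stale copy for a proper symlink (assuming the vhost file is named laravel.dev, to match the config above):

sudo rm /etc/nginx/sites-enabled/laravel.dev
sudo ln -s /etc/nginx/sites-available/laravel.dev /etc/nginx/sites-enabled/laravel.dev
sudo nginx -t && sudo service nginx reload   # test the config, then reload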
