I have set up a Meteor.js application with a MongoDB backend on a DigitalOcean droplet. I am now trying to set up Adminer so I can query MongoDB without opening inbound ports on the droplet. Every time I try to reload the nginx settings, I get this error:
nginx: [emerg] no port in upstream "php" in /etc/nginx/sites-enabled/admin:32
What am I missing?
/etc/nginx/sites-enabled/admin
server {
    listen 80;
    server_name 165.227.197.220;
    return 301 https://$server_name$request_uri;
}

server {
    server_name 165.227.197.220;
    listen 443 ssl;

    access_log /var/log/nginx/admin.access.log;
    error_log /var/log/nginx/admin.error.log;

    ssl_certificate /etc/nginx/ssl/admin.crt;
    ssl_certificate_key /etc/nginx/ssl/admin.key;

    root /var/www/admin;
    index adminer.php;

    # Get file here https://codex.wordpress.org/Nginx
    include global/restrictions.conf;

    auth_basic "Restricted";
    auth_basic_user_file /etc/nginx/.htpasswd/admin;

    location / {
        try_files $uri $uri/ =404;
    }

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass php;
        fastcgi_index adminer.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}
/etc/nginx/global/restrictions.conf
# Global restrictions configuration file.
# Designed to be included in any server {} block.
location = /favicon.ico {
    log_not_found off;
    access_log off;
}

location = /robots.txt {
    allow all;
    log_not_found off;
    access_log off;
}

# Deny all attempts to access hidden files such as .htaccess, .htpasswd, .DS_Store (Mac).
# Keep logging the requests to parse later (or to pass to firewall utilities such as fail2ban)
location ~ /\. {
    deny all;
}

# Deny access to any files with a .php extension in the uploads directory
# Works in sub-directory installs and also in multisite network
# Keep logging the requests to parse later (or to pass to firewall utilities such as fail2ban)
location ~* /(?:uploads|files)/.*\.php$ {
    deny all;
}
You need to set up an upstream. For example, with WordPress, this is what we have:
upstream index_php_upstream {
    server 127.0.0.1:8090; # NGINX Unit backend address for index.php
                           # with 'script' parameter
}
For more about upstreams, you can read this article: Upstream - Nginx.
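In this case the answer applies directly: fastcgi_pass php; on line 32 of /etc/nginx/sites-enabled/admin refers to an upstream named php that is never defined, which is exactly what the no port in upstream "php" error means. Below is a minimal sketch of a fix; the PHP-FPM socket path is an assumption (it varies by PHP version and distribution), so match it to the listen setting in your PHP-FPM pool configuration:

# Define the upstream that "fastcgi_pass php;" points at.
# The socket path here is an assumption; adjust it to match the
# "listen" directive in your PHP-FPM pool configuration.
upstream php {
    server unix:/run/php/php7.2-fpm.sock;
}

Alternatively, skip the upstream block entirely and point the directive straight at the socket, e.g. fastcgi_pass unix:/run/php/php7.2-fpm.sock;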
Related
I'm trying to configure an error page in nginx for the 500 and 502 error codes. I tried many different configuration options and solutions, but none of them worked for me.
The issue is that, no matter how I do the configuration, I always get the generic nginx error page with 502 Bad Gateway.
The following docker stack is running with these containers:
Nginx
MySQL
Composer
Azure CLI
PHP
A TYPO3 system is running behind the php/composer container.
I'm using Nginx instead of an Apache web server.
Below you can see my current nginx configuration.
server {
    listen 80;

    root /var/www/html/public;
    index index.php index.htm index.html;

    # Make site accessible from http://localhost/
    server_name _;

    # Disable sendfile as per https://docs.vagrantup.com/v2/synced-folders/virtualbox.html
    sendfile off;

    error_log /dev/stdout info;
    access_log /var/log/nginx/access.log;

    # NGINX - Provide error page
    error_page 500 502 /error.html;
    location = /error.html {
        internal;
    }

    ## provide a health check endpoint
    location /healthcheck {
        access_log off;
        stub_status on;
        keepalive_timeout 0; # Disable HTTP keepalive
        return 200;
    }

    location / {
        absolute_redirect off;
        try_files $uri $uri/ /index.php$is_args$args;
    }

    # pass the PHP scripts to FastCGI server listening on socket
    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass ${PHP_DOMAIN}:9000;
        fastcgi_buffers 16 128k;
        fastcgi_buffer_size 128k;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param SCRIPT_NAME $fastcgi_script_name;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_intercept_errors off;
        # fastcgi_read_timeout should match max_execution_time in php.ini
        fastcgi_read_timeout 600;
        fastcgi_param SERVER_NAME $host;
        fastcgi_cache_bypass $http_x_blackfire_query;
    }

    # Expire rules for static content

    # Feed
    location ~* \.(?:rss|atom)$ {
        expires 1h;
    }

    # Media: images, icons, video, audio, HTC
    location ~* \.(?:jpg|jpeg|gif|png|ico|cur|gz|svg|svgz|mp4|ogg|ogv|webm|htc)$ {
        expires 1M;
        access_log off;
        add_header Cache-Control "public";
    }

    # Prevent clients from accessing hidden files (starting with a dot)
    # This is particularly important if you store .htpasswd files in the site hierarchy
    # Access to `/.well-known/` is allowed.
    # https://www.mnot.net/blog/2010/04/07/well-known
    # https://tools.ietf.org/html/rfc5785
    location ~* /\.(?!well-known\/) {
        deny all;
    }

    # Prevent clients from accessing backup/config/source files
    location ~* (?:\.(?:bak|conf|dist|fla|in[ci]|log|psd|sh|sql|sw[op])|~)$ {
        deny all;
    }

    location = /favicon.ico {
        log_not_found off;
        access_log off;
    }

    # TYPO3 - Block access to composer files
    location ~* composer\.(?:json|lock) {
        deny all;
    }

    # TYPO3 - Block access to flexform files
    location ~* flexform[^.]*\.xml {
        deny all;
    }

    # TYPO3 - Block access to language files
    location ~* locallang[^.]*\.(?:xml|xlf)$ {
        deny all;
    }

    # TYPO3 - Block access to static typoscript files
    location ~* ext_conf_template\.txt|ext_typoscript_constants\.(?:txt|typoscript)|ext_typoscript_setup\.(?:txt|typoscript) {
        deny all;
    }

    # TYPO3 - Block access to miscellaneous protected files
    location ~* /.*\.(?:bak|co?nf|cfg|ya?ml|ts|typoscript|dist|fla|in[ci]|log|sh|sql)$ {
        deny all;
    }

    # TYPO3 - Block access to recycler and temporary directories
    location ~ _(?:recycler|temp)_/ {
        deny all;
    }

    # TYPO3 - Block access to configuration files stored in fileadmin
    location ~ fileadmin/(?:templates)/.*\.(?:txt|ts|typoscript)$ {
        deny all;
    }

    # TYPO3 - Block access to libraries, source and temporary compiled data
    location ~ ^(?:vendor|typo3_src|typo3temp/var) {
        deny all;
    }

    # TYPO3 - Block access to protected extension directories
    location ~ (?:typo3conf/ext|typo3/sysext|typo3/ext)/[^/]+/(?:Configuration|Resources/Private|Tests?|Documentation|docs?)/ {
        deny all;
    }

    if (!-e $request_filename) {
        rewrite ^/(.+)\.(\d+)\.(php|js|css|png|jpg|gif|gzip)$ /$1.$3 last;
    }

    # Include development locations if needed
    include /etc/nginx/conf.d/locations/*.conf;
}
I think the issue does not come from the configuration itself but from somewhere else, but I don't know where. I can't find the problem.
Hope you guys can help me. By the way, it's my first Stack Overflow question :D
EDIT:
I just added the configuration below to test the error codes, but unfortunately I still get a 502 Bad Gateway, so maybe it's a problem with the local setup. To my surprise, the configured health-check location is working; just the error page is not.
location /get_error {
    return 500;
}
UPDATE:
The configuration itself was correct. I just deployed the changes to the dev system and it worked! I don't know why or where the issue was, but it just wouldn't work in my local dev environment.
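For anyone who lands here with the same symptom, one detail worth knowing: error_page on its own only fires for responses nginx generates itself, such as the 502 it returns when the FastCGI upstream is unreachable. Error responses produced by the PHP backend are passed through untouched unless interception is enabled. A minimal sketch of the distinction, assuming an error.html in the document root and a hypothetical backend address:

server {
    listen 80;
    root /var/www/html/public;

    # Used for errors nginx itself generates (e.g. 502 when the
    # upstream is down) and, with interception on, for backend errors too.
    error_page 500 502 /error.html;
    location = /error.html {
        internal;
    }

    location ~ \.php$ {
        # Off (as in the question's config): a 500 generated by PHP is
        # sent to the client as-is. On: nginx swaps it for error_page.
        fastcgi_intercept_errors on;
        fastcgi_pass 127.0.0.1:9000; # hypothetical FastCGI backend
        include fastcgi_params;
    }
}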
I started off with a Docker image which has a processor for PHP files in place.
Requesting the normal index.php file works without issue, but the routes I added are less accessible.
One of the routes, for example, is /time; with the PHP built-in webserver it is accessed as index.php/time, and it returns the time. The Docker image I used has PHP working perfectly, but I think one of the rules is breaking this functionality for me. I have tried a lot of changes, but I can't seem to find the issue.
$app->get('/time', function ($request, $response, $args) {
    // Current Unix timestamp
    $seconds = time();

    // Current Unix timestamp with microseconds
    $milliseconds = microtime(true) * 1000;

    // Nanosecond precision
    // This is default as per InfluxDB standard
    $nanoseconds = $milliseconds * 1000000;

    // Return
    return $response->withJson(array(
        'seconds' => $seconds,
        'milliseconds' => intval($milliseconds),
        'nanoseconds' => intval($nanoseconds)
    ), 200);
});
This is the configuration of the nginx server I am serving in the Docker container. One of the things I have tried is adding the location /ping { line, but to no avail: /ping and /time keep returning a 404 Not Found.
server {
    listen 80; ## listen for ipv4; this line is default and implied
    listen [::]:80 default ipv6only=on; ## listen for ipv6

    root /var/www/html/public;
    index index.php index.html index.htm;

    # Make site accessible from http://localhost/
    server_name _;

    # Disable sendfile as per https://docs.vagrantup.com/v2/synced-folders/virtualbox.html
    sendfile off;

    # Add stdout logging
    error_log /dev/stdout info;
    access_log /dev/stdout;

    # Add option for x-forward-for (real ip when behind elb)
    #real_ip_header X-Forwarded-For;
    #set_real_ip_from 172.16.0.0/12;

    location /ping {
        try_files $uri $uri/ /index.php/ping;
    }

    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to index.html
        try_files $uri $uri/ =404;
    }

    error_page 404 /404.html;
    location = /404.html {
        root /var/www/errors;
        internal;
    }

    location ^~ /sad.svg {
        alias /var/www/errors/sad.svg;
        access_log off;
    }

    location ^~ /twitter.svg {
        alias /var/www/errors/twitter.svg;
        access_log off;
    }

    location ^~ /gitlab.svg {
        alias /var/www/errors/gitlab.svg;
        access_log off;
    }

    # pass the PHP scripts to FastCGI server listening on socket
    #
    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass unix:/var/run/php-fpm.sock;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param SCRIPT_NAME $fastcgi_script_name;
        fastcgi_index index.php;
        include fastcgi_params;
    }

    location ~* \.(jpg|jpeg|gif|png|css|js|ico|webp|tiff|ttf|svg)$ {
        expires 5d;
    }

    # deny access to . files, for security
    #
    location ~ /\. {
        log_not_found off;
        deny all;
    }

    location ^~ /.well-known {
        allow all;
        auth_basic off;
    }
}
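A sketch of the two changes that usually make front-controller routes like /time reachable under nginx + PHP-FPM (hedged, since parts of the image's config may differ): the catch-all location must fall back to index.php instead of returning 404, and the PHP location must also match URIs where .php is followed by extra path info, as in /index.php/time, handing the trailing part to PHP as PATH_INFO:

location / {
    # Fall back to the front controller instead of =404; Slim routes
    # on REQUEST_URI, which fastcgi_params passes through unchanged.
    try_files $uri $uri/ /index.php$is_args$args;
}

location ~ [^/]\.php(/|$) {
    # Also matches /index.php/time, not just URIs ending in .php
    fastcgi_split_path_info ^(.+\.php)(/.+)$;
    set $path_info $fastcgi_path_info;   # try_files may clear it, so save it first
    try_files $fastcgi_script_name =404;
    fastcgi_pass unix:/var/run/php-fpm.sock; # same socket as the config above
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_param PATH_INFO $path_info;
    fastcgi_index index.php;
    include fastcgi_params;
}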
Facing a redirection issue while setting up WordPress with nginx via Amazon API Gateway and a Network Load Balancer.
Description:
We have our main website xyz.com (served by Amazon API Gateway and a load balancer) and want our blogs to be present on xyz.com/blogs.
So we have set up Amazon API Gateway and the load balancer to redirect any request of the form xyz.com/blogs to the EC2 instance containing WordPress with nginx.
Problem:
The problem we are facing is that the home page renders fine, but when we try to render any other page, e.g. xyz.com/blogs/my-first-post/ or xyz.com/blogs/wp-admin, it gets stuck there and nothing comes back as a response. As part of our initial debugging, we found out that WordPress is making redirections to the Network Load Balancer URL, which (as per our guess) is not accessible, so we are not getting any response.
This is what our default nginx conf looks like (/etc/nginx/conf.d/xyz_blogs.conf), which we got from this link => Wordpress|Nginx
# Upstream to abstract backend connection(s) for php
upstream php {
    server unix:/tmp/php-cgi.socket;
    server 127.0.0.1:9000;
}

server {
    ## Your website name goes here.
    server_name xyz.com;

    ## Your only path reference.
    root /var/www/html;

    ## This should be in your http block and if it is, it's not needed here.
    index index.php;

    location = /favicon.ico {
        log_not_found off;
        access_log off;
    }

    location = /robots.txt {
        allow all;
        log_not_found off;
        access_log off;
    }

    location / {
        # This is cool because no php is touched for static content.
        # include the "?$args" part so non-default permalinks doesn't break when using query string
        try_files $uri $uri/ /index.php?$args;
    }

    location ~ \.php$ {
        #NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
        include fastcgi.conf;
        fastcgi_intercept_errors on;
        fastcgi_pass php;
    }

    location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
        expires max;
        log_not_found off;
    }
}
How shall we resolve this issue?
Thanks in advance for any help.
Try this:
upstream php-app { # <-- change the name of the upstream
    # server unix:/tmp/php-cgi.socket; # <-- Remove this
    # List of [IP:Port] OR list of [sockets], not both mixed
    server 127.0.0.1:9000;
    # server 127.0.0.1:9001; # example
    # server 127.0.0.1:9002; # example
}

server {
    listen 80 default_server; # <-- Add this line
    listen [::]:80 default_server; # <-- Add this line

    server_name xyz.com *.xyz.com;
    root /var/www/html;
    # index index.php; # <-- Will be removed, your app is provided by the socket/port

    location = /favicon.ico {
        log_not_found off;
        access_log off;
    }

    location = /robots.txt {
        allow all;
        log_not_found off;
        access_log off;
    }

    location ~ \.php$ {
        include fastcgi.conf;
        fastcgi_intercept_errors on;
        fastcgi_pass php-app; # <-- match the renamed upstream
    }

    location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
        expires max;
        log_not_found off;
    }

    location / { # Put this at the end, nginx applies priority
        proxy_pass http://php-app; # Add this line
    }
}
Update 1: updated config:
...
server_name xyz.com *.xyz.com *.amazonaws.com;
...
Sorry for the late post, but we have currently implemented this using a solution which might not be very scalable. What we have done is deploy our website and WordPress on the same machine but in different containers, so our website runs in one container and WordPress in another, both on the same EC2 instance.
And we are using the nginx configuration in the WordPress container to redirect requests to the fellow container. Currently my nginx configuration looks something like this:
upstream php {
    server unix:/tmp/php-cgi.socket;
    server 127.0.0.1:9000;
}

server {
    listen 80;
    server_name xyz.com;
    root /var/www/html;

    rewrite /wp-admin$ $scheme://$host$uri/ permanent;
    index index.php;

    location /blog {
        try_files $uri $uri/ /blog/index.php?$args;
    }

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass php;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
    }

    if (!-e $request_filename) {
        rewrite /wp-admin$ $scheme://$host$uri/ permanent;
    }

    location / {
        rewrite ^/(.*) /$1 break;
        proxy_pass http://xyz-container:80;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
It would have been great if we had somehow been able to deploy the WordPress container on a different EC2 instance and still achieve this.
It would really be helpful if someone could suggest a better, scalable working solution for the problem, with the WordPress container on a different EC2 instance.
Thanks.
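The redirect behaviour described in the question typically comes from WordPress building absolute URLs out of the Host header it receives, so when the proxy hop rewrites the host to the NLB's name, WordPress redirects visitors there. A rough sketch of how the main site's nginx could hand /blogs off to a WordPress box on a separate EC2 instance; the name wordpress.internal is hypothetical, and WP_HOME/WP_SITEURL on the WordPress side would also need to point at the public https://xyz.com/blogs address:

# On the main xyz.com server: proxy /blogs to the WordPress instance
# while preserving the public host name, so WordPress never sees
# (and never redirects to) the internal load balancer address.
location /blogs/ {
    proxy_pass http://wordpress.internal;        # hypothetical internal DNS name
    proxy_set_header Host $host;                 # keep xyz.com as the host
    proxy_set_header X-Forwarded-Proto $scheme;  # tell WordPress the real scheme
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}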
I have 12 sites that I plan to run on a single server that has nginx and php5-fpm on it. I set them all up using one server block per conf file, all included by the main nginx.conf file. It's a mix of WordPress, phpMyAdmin, and plain PHP sites. The WordPress and phpMyAdmin sites are working fine, but the PHP sites are not. Meaning, when I pull up example.com, Chrome says connection refused, and there's no trace of an incoming connection in the nginx logs. test.example.com pulls up the default site (because I hadn't configured test.example.com yet) at the same time.
I copied the nginx configs from the working sites to set up the sites that are not working, but no luck. The only difference in the nginx config between the working and non-working sites is the server_name directive.
After checking and rechecking for over two hours, I found out that the sites that have a server_name like pqr.example.com work, but the ones with example.com don't. All of the working sites are configured to use subdomain URLs, and that's probably why they're working.
My questions are:
1. What am I missing in the config to make example.com work?
2. I have two sites, example.com and example.net, that I'm trying to run on the same server. Is that going to be a problem for nginx?
3. Does nginx have a problem differentiating between example.com, test.example.com, and example.net?
4. I also noticed that if www.example.net works, www.example.com doesn't, and vice versa, which means I have to assign each site that has the name example in it a different subdomain, like www.example.net and test.example.com. Is this standard/expected behavior of nginx, or am I missing something?
5. All of my base URLs auto-redirect from http://example.com to http://www.example.com; how do I find out where that redirect is happening?
Below are the nginx config files that I'm having problems with, truncated to include the important parts; please let me know if more info is needed.
Main nginx.conf file -
user www-data www-data;
pid /var/run/nginx.pid;
worker_processes 4;
worker_rlimit_nofile 100000;

events {
    worker_connections 4096;
    include /etc/nginx.custom.events.d/*.conf;
}

http {
    default_type application/octet-stream;
    access_log off;
    error_log /var/log/nginx/error.log crit;

    .......

    server_tokens off;

    include proxy.conf;
    include fcgi.conf;
    include conf.d/*.conf;
    include /etc/nginx.custom.d/*.conf;
}

include /etc/nginx.custom.global.d/*.conf;
Here is the full .conf file for a blog that works. All the other sites have this same full config, since they are just basic PHP sites.
server {
    listen *:80;
    server_name blog.example.com;

    access_log /var/log/nginx/blog-example.access.log;
    error_log /var/log/nginx/blog-example.error.log;

    root /var/www/example/blog;
    index index.html index.htm index.php;

    # This order might seem weird - this is attempted to match last if rules below fail.
    location / {
        try_files $uri $uri/ /index.php?$args;
    }

    # Add trailing slash to */wp-admin requests.
    rewrite /wp-admin$ $scheme://$host$uri/ permanent;

    # Directives to send expires headers and turn off 404 error logging.
    location ~* ^.+\.(ogg|ogv|svg|svgz|eot|otf|woff|mp4|ttf|rss|atom|jpg|jpeg|gif|png|ico|zip|tgz|gz|rar|bz2|doc|xls|exe|ppt|tar|mid|midi|wav|bmp|rtf)$ {
        access_log off; log_not_found off; expires max;
    }

    location = /favicon.ico {
        log_not_found off;
        access_log off;
    }

    location = /robots.txt {
        allow all;
        log_not_found off;
        access_log off;
    }

    # Deny all attempts to access hidden files such as .htaccess, .htpasswd, .DS_Store (Mac).
    # Keep logging the requests to parse later (or to pass to firewall utilities such as fail2ban)
    location ~ /\. {
        deny all;
    }

    # Deny access to any files with a .php extension in the uploads directory
    # Works in sub-directory installs and also in multisite network
    # Keep logging the requests to parse later (or to pass to firewall utilities such as fail2ban)
    location ~* /(?:uploads|files)/.*\.php$ {
        deny all;
    }

    location ~ [^/]\.php(/|$) {
        # Zero-day exploit defense.
        # http://forum.nginx.org/read.php?2,88845,page=3
        # Won't work properly (404 error) if the file is not stored on this server, which is entirely possible with php-fpm/php-fcgi.
        # Comment the 'try_files' line out if you set up php-fpm/php-fcgi on another machine. And then cross your fingers that you won't get hacked.
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_index index.php;
        include fcgi.conf;
        fastcgi_pass unix:/var/run/php-fcgi-blog-example-php-fcgi-0.sock;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}
Here's the truncated .conf file for example.com
server {
    listen *:80;
    server_name example.com www.example.com test.example.com;

    access_log /var/log/nginx/examplecom.access.log;
    error_log /var/log/nginx/examplecom.error.log;

    root /var/www/example/com;
    index index.html index.htm index.php;

    # This order might seem weird - this is attempted to match last if rules below fail.
    location / {
        try_files $uri $uri/ /index.php?$args;
    }

    ........

    location ~ [^/]\.php(/|$) {
        ......
        fastcgi_index index.php;
        include fcgi.conf;
        fastcgi_pass unix:/var/run/php-fcgi-examplecom-php-fcgi-0.sock;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}
Here's the truncated file for example.net
server {
    listen *:80;
    server_name example.net www.example.net test.example.net;

    access_log /var/log/nginx/examplenet.access.log;
    error_log /var/log/nginx/examplenet.error.log;

    root /var/www/example/net;
    index index.html index.htm index.php;

    # This order might seem weird - this is attempted to match last if rules below fail.
    location / {
        try_files $uri $uri/ /index.php?$args;
    }

    ........

    location ~ [^/]\.php(/|$) {
        ......
        fastcgi_index index.php;
        include fcgi.conf;
        fastcgi_pass unix:/var/run/php-fcgi-examplenet-php-fcgi-0.sock;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}
Meaning, when I pull up example.com, Chrome says connection refused, and there's no trace of an incoming connection on NGinx logs. test.example.com pulls up the default site(because I didn't configure test.example.com then) at the same time.
Well, your server is listening. Chances are you haven't configured your DNS records correctly, or there is DNS caching. Edit your hosts file to test this theory.
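Two general diagnostics that may also help from the nginx side (offered as a sketch, not a definitive fix): nginx -T prints every server block nginx actually loaded, which is a quick way to find where the www redirect in question 5 comes from, and an explicit catch-all makes it obvious when a request for example.com is falling through to the wrong block:

# Dump the fully merged configuration and inspect every server_name:
#   nginx -T | grep -n "server_name"
#
# If no server_name matches a request, nginx uses the first server
# block for that port unless one is marked default_server. An explicit
# catch-all makes stray matches easy to spot during testing:
server {
    listen 80 default_server;
    server_name _;
    return 444; # close the connection so fall-throughs stand out
}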
I usually don't post questions until I've researched them to death on the internet. I created a CSR using Laravel Forge, added the certificate, activated it, and edited the nginx config using these resources:
https://stackoverflow.com/questions/26192839/laravel-forge-ssl-certificate-not-working
^ curl https://domain.com returns data
server {
    listen 80;
    server_name example.com www.example.com;
    return 301 https://example.com$request_uri;
}

server {
    listen 443 ssl;
    server_name example.com www.example.com;
    root /home/forge/example.com/public;

    # FORGE SSL (DO NOT REMOVE!)
    ssl_certificate /etc/nginx/ssl/vgport.com/3042/server.crt;
    ssl_certificate_key /etc/nginx/ssl/vgport.com/3042/server.key;

    index index.html index.htm index.php;
    charset utf-8;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location = /favicon.ico { access_log off; log_not_found off; }
    location = /robots.txt { access_log off; log_not_found off; }

    access_log off;
    error_log /var/log/nginx/default-error.log error;

    error_page 404 /index.php;

    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_index index.php;
        include fastcgi_params;
    }

    location ~ /\.ht {
        deny all;
    }
}
I run 'service nginx restart' on the command line, then go to /var/log/nginx/error.log and see the following errors:
'conflicting server name "" on 0.0.0.0:80, ignored'
'conflicting server name "www.domain.com" on 0.0.0.0:80, ignored'
When I visit domain.com, it gets redirected to https://domain.com with 'This webpage has a redirect loop'. Clearly the nginx redirect isn't working somehow, but I've followed all the steps.
Please let me know what additional error logs and information I should post to troubleshoot this issue. Any help would be greatly appreciated; thanks in advance.
Okay, so the problem was simpler than I thought. I was using the free Cloudflare DNS pointing, which didn't support SSL. I switched to using the Namecheap DNS and it started working.
After spending some time researching nginx, I'd just like to add that when you add a cert, you have to manually press "Activate it" once it's downloaded.
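For readers who want to keep a proxying CDN such as Cloudflare in front of the site instead of switching DNS: the classic cause of this loop is the proxy fetching from the origin over plain HTTP while the origin blindly redirects all port-80 traffic to HTTPS. One common workaround is to redirect only when the original client request was not HTTPS. This sketch trusts the X-Forwarded-Proto header, so it is only safe when all traffic really does arrive through the proxy:

server {
    listen 80;
    server_name example.com www.example.com;

    # Redirect only requests whose original scheme was http; requests
    # the CDN already received over https are served normally, which
    # breaks the proxy -> origin -> redirect -> proxy loop.
    if ($http_x_forwarded_proto != "https") {
        return 301 https://example.com$request_uri;
    }

    # ... the rest mirrors the 443 server block above ...
}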