This is my first time using Nginx, but I am more than familiar with Apache and Linux. I am working with an existing project, and whenever I try to view index.php I get a 404 File not found.
Here is the error.log entry:
2013/06/19 16:23:23 [error] 2216#0: *1 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream, client: 127.0.0.1, server: localhost, request: "GET /index.php HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "www.ordercloud.lh"
And here is the sites-available file:
server {
# Listening on port 80 without an IP address is only recommended if you are not running multiple v-hosts
listen 80;
# Bind to the public IP bound to your domain
#listen 127.0.0.11:80;
# Specify this vhost's domain name
server_name www.ordercloud.lh;
root /home/willem/git/console/frontend/www;
index index.php index.html index.htm;
# Specify log locations for current site
access_log /var/log/access.log;
error_log /var/log/error.log warn;
# Typically I create a restrictions.conf file that I then include across all of my vhosts
#include conf.d/restrictions.conf;
# I've included the content of my restrictions.conf in-line for this example
# BEGIN restrictions.conf
# Disable logging for favicon
location = /favicon.ico {
log_not_found off;
access_log off;
}
# Disable logging for robots.txt
location = /robots.txt {
allow all;
log_not_found off;
access_log off;
}
# Deny all attempts to access hidden files such as .htaccess, .htpasswd, .DS_Store (Mac).
location ~ /\. {
deny all;
access_log off;
log_not_found off;
}
# END restrictions.conf
# Typically I create a yiiframework.conf file that I then include across all of my yii vhosts
#include conf.d/yiiframework.conf;
# I've included the content of my yiiframework.conf in-line for this example
# BEGIN yiiframework.conf
# Block access to protected, framework, and nbproject (artifact from Netbeans)
location ~ /(protected|framework|nbproject) {
deny all;
access_log off;
log_not_found off;
}
# Block access to theme-folder views directories
location ~ /themes/\w+/views {
deny all;
access_log off;
log_not_found off;
}
# Attempt the uri, uri+/, then fall back to yii's index.php with args included
# Note: old examples use IF statements, which nginx considers evil, this approach is more widely supported
location / {
try_files $uri $uri/ /index.php?$args;
}
# END yiiframework.conf
# Tell browser to cache image files for 24 hours, do not log missing images
# I typically keep this after the yii rules, so that there is no conflict with content served by Yii
location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
expires 24h;
log_not_found off;
}
# Block for processing PHP files
# Specifically matches URIs ending in .php
location ~ \.php$ {
try_files $uri =404;
fastcgi_intercept_errors on;
# Fix for server variables that behave differently under nginx/php-fpm than typically expected
#fastcgi_split_path_info ^(.+\.php)(/.+)$;
# Include the standard fastcgi_params file included with nginx
include fastcgi_params;
#fastcgi_param PATH_INFO $fastcgi_path_info;
#fastcgi_index index.php;
# Override the SCRIPT_FILENAME variable set by fastcgi_params
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
# Pass to upstream PHP-FPM; This must match whatever you name your upstream connection
fastcgi_pass 127.0.0.1:9000;
}
}
My /home/willem/git/console is owned by www-data:www-data (my web user that runs PHP, etc.) and I have given it 777 permissions out of frustration...
Can anybody advise?
That message from the FastCGI server usually means that the SCRIPT_FILENAME it was given was not found, or was inaccessible as a file, on its filesystem.
Check the file permissions on /home/willem/git/console/frontend/www/index.php
Is it 644?
And /home/willem/git/console/frontend/www/
Is it 755?
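A quick way to check both at once (paths taken from the question; namei is part of util-linux):
namei -l /home/willem/git/console/frontend/www/index.php   # shows owner and mode of every path component; php-fpm needs execute on every directory and read on the file
sudo -u www-data cat /home/willem/git/console/frontend/www/index.php > /dev/null && echo readable   # confirms the php-fpm user can actually read it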
OK, so here are the three things I found after a day of struggling:
1. For some reason I already had something running on port 9000, so I changed to 9001.
2. My default site was intercepting my new one; once again I don't understand why, since it shouldn't, but I just unlinked it.
3. Nginx doesn't automatically create the symlink from sites-available to sites-enabled.
Hope this saves someone some trouble!
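For reference, the commands involved might look something like this (the vhost file name is a guess; adjust it to your own):
sudo ss -lntp | grep ':9000'                                                          # see what was already listening on port 9000
sudo ln -s /etc/nginx/sites-available/ordercloud /etc/nginx/sites-enabled/ordercloud  # enable the vhost by hand
sudo rm /etc/nginx/sites-enabled/default                                              # stop the default site from intercepting requests
sudo nginx -t && sudo service nginx reload                                            # check syntax, then reload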
Here is a more detailed answer on Server Fault: https://serverfault.com/questions/517190/nginx-1-fastcgi-sent-in-stderr-primary-script-unknown/517207#517207
In case anyone has the same error: in my case the problem was a missing root directive inside the location block in nginx.conf, as explained in the Arch wiki.
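For example, something along these lines (the path here is illustrative, not taken from the original config):
server {
...
location ~ \.php$ {
    root /usr/share/nginx/html; # without a root here (or at server level), $document_root falls back to nginx's compiled-in default and php-fpm reports "Primary script unknown"
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    ...
}
}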
"Primary script unknown" is caused by SELinux security context.
client get the response
File not found.
nginx error.log has the following error message
*19 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream
So just change the security context type of the web root folder to httpd_sys_content_t:
chcon -R -t httpd_sys_content_t /var/www/show
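Note that chcon is not persistent across a filesystem relabel. Assuming the policycoreutils management tools are installed, the usual way to make the change stick is:
semanage fcontext -a -t httpd_sys_content_t "/var/www/show(/.*)?"   # record the context in the policy
restorecon -Rv /var/www/show                                        # apply it to the existing files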
There are three users involved in the nginx/php-fpm config.
/etc/nginx/nginx.conf
user nobody nobody; ### `user-1`, this is the user that runs the nginx worker processes
...
include servers/*.conf;
/etc/nginx/servers/www.conf
location ~ \.php$ {
# fastcgi_pass 127.0.0.1:9000; # tcp socket
fastcgi_pass unix:/var/run/php-fpm/fpm-www.sock; # unix socket
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
}
/etc/php-fpm.d/www.conf
[www]
user = apache ### `user-2`, this is the user that runs the php-fpm pool processes
group = apache
;listen = 127.0.0.1:9000 # tcp socket
listen = /var/run/php-fpm/fpm-www.sock # unix socket
listen.owner = nobody ### `user-3`, this is the owner of the unix socket, e.g. /var/run/php-fpm/fpm-www.sock
listen.group = nobody # for a tcp socket, these lines can be commented out
listen.mode = 0660
user-1 and user-2 do not need to be the same.
For a unix socket, user-1 needs to be the same as user-3, because the nginx worker performing the fastcgi_pass must have read/write permission on the socket.
Otherwise nginx will return 502 Bad Gateway, and the nginx error.log will contain the following error message:
*36 connect() to unix:/var/run/php-fpm/fpm-www.sock failed (13: Permission denied) while connecting to upstream
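Putting it together, a combination that satisfies this (a sketch only, reusing the names from the example above) would be:
; /etc/php-fpm.d/www.conf -- make the socket owner match the nginx worker user (user-1)
listen = /var/run/php-fpm/fpm-www.sock
listen.owner = nobody
listen.group = nobody
listen.mode = 0660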
I don't know how $document_root is calculated, but I resolved the issue by making really sure that my document root is at /usr/share/nginx/, right where the html folder exists. ($document_root is simply the value of the root directive in effect for the request.)
Related
I have set up a MeteorJs application with a MongoDB backend on Digital Ocean. I am now trying to set up adminer so I can query MongoDB without opening input ports on my droplet. Every time I try to reload the nginx settings, I get the error nginx: [emerg] no port in upstream "php" in /etc/nginx/sites-enabled/admin:32.
What am I missing?
/etc/nginx/sites-enabled/admin
server {
listen 80;
server_name 165.227.197.220;
return 301 https://$server_name$request_uri;
}
server {
server_name 165.227.197.220;
listen 443 ssl;
access_log /var/log/nginx/admin.access.log;
error_log /var/log/nginx/admin.error.log;
ssl_certificate /etc/nginx/ssl/admin.crt;
ssl_certificate_key /etc/nginx/ssl/admin.key;
root /var/www/admin;
index adminer.php;
# Get file here https://codex.wordpress.org/Nginx
include global/restrictions.conf;
auth_basic "Restricted";
auth_basic_user_file /etc/nginx/.htpasswd/admin;
location / {
try_files $uri $uri/ =404;
}
location ~ \.php$ {
try_files $uri =404;
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_pass php;
fastcgi_index adminer.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
}
}
/etc/nginx/global/restrictions.conf
# Global restrictions configuration file.
# Designed to be included in any server {} block.
location = /favicon.ico {
log_not_found off;
access_log off;
}
location = /robots.txt {
allow all;
log_not_found off;
access_log off;
}
# Deny all attempts to access hidden files such as .htaccess, .htpasswd, .DS_Store (Mac).
# Keep logging the requests to parse later (or to pass to firewall utilities such as fail2ban)
location ~ /\. {
deny all;
}
# Deny access to any files with a .php extension in the uploads directory
# Works in sub-directory installs and also in multisite network
# Keep logging the requests to parse later (or to pass to firewall utilities such as fail2ban)
location ~* /(?:uploads|files)/.*\.php$ {
deny all;
}
You need to set up an upstream. For example, using WordPress, this is what we have:
upstream index_php_upstream {
server 127.0.0.1:8090; # NGINX Unit backend address for index.php with
# 'script' parameter
}
For more about upstreams, you can view this article: Upstream - Nginx.
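Applied to the config above, the error means that fastcgi_pass php; refers to an upstream named php that was never defined. A minimal sketch (the socket path is an assumption; point it at whatever your php-fpm pool actually listens on):
upstream php {
    server unix:/var/run/php/php7.0-fpm.sock;   # assumed php-fpm socket
    # server 127.0.0.1:9000;                    # or a TCP backend instead
}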
I have a hybrid PHP/Rails app sitting on one AWS EC2 server. I am hosting a MediaWiki installation and using Rails as a frontend to it. For the Rails app, I am using Passenger as the server. I would like location / to serve the Rails app, and anything at location /w or any .php files to be served by MediaWiki (php5-fpm).
I used to have a working configuration, but it was hacked together and I would like to refactor it.
My current implementation gives me a 403 Forbidden error when I try to access the Rails app at /.
The error I get (from rails_error.log): 2017/10/24 20:08:31 [error] 14947#14947: *2 directory index of "/var/www/myapp/public/" is forbidden, client: xx.yy.zz.aa, server: myapp.amazonaws.com, request: "GET / HTTP/1.1", host: "myapp.amazonaws.com"
I would like to be able to access only the Rails app at / for now; I am not focused on the php5-fpm configurations yet.
Here are my .conf files:
sites-available/myapp.conf:
fastcgi_cache_path /var/cache/nginx levels=1:2 keys_zone=mw_cache:10m max_size=10g inactive=60m use_temp_path=off;
fastcgi_cache_key "$scheme$request_method$host$request_uri";
server {
listen 80;
listen [::]:80 ipv6only=on default_server;
server_name myapp.com;
charset utf-8;
location = /favicon.ico { access_log off; log_not_found off; }
location = /robots.txt { access_log off; log_not_found off; }
root /var/www/myapp/public;
passenger_enabled on;
location /w {
alias /var/www/mediawiki-1.28.0;
index index.php index.html index.htm;
charset utf-8;
try_files $uri $uri/ /index.php?$query_string;
}
location ~ \.php$ {
fastcgi_cache mw_cache;
fastcgi_cache_valid 200 60m;
try_files $uri /index.php =404;
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_pass 127.0.0.1:7777;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
error_log /var/log/nginx/mediawiki_error.log;
access_log /var/log/nginx/mediawiki_access.log;
}
error_log /var/log/nginx/rails_error.log;
access_log /var/log/nginx/rails_access.log;
}
nginx.conf:
user www-data;
worker_processes 4;
pid /run/nginx.pid;
events {
worker_connections 768;
# multi_accept on;
}
http {
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
include /etc/nginx/mime.types;
default_type application/octet-stream;
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
gzip on;
gzip_disable "msie6";
passenger_root /home/ubuntu/.rvm/gems/ruby-2.3.1#myapp/gems/passenger-5.1.1;
passenger_ruby /home/ubuntu/.rvm/gems/ruby-2.3.1#myapp/wrappers/ruby;
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
I have a suspicion it has to do with how Passenger is installed or running, or it could be that I am running Passenger not as www-data but as ubuntu.
/var/www/myapp/ is also owned by ubuntu, though I have tried chown -R www-data /var/www/myapp and chown -R ubuntu:www-data /var/www/myapp to no avail.
Does anyone have any pointers from here?
Thanks.
Your config works for me: the app starts successfully, at least if I start Nginx as root (as is usually done).
Note that the user directive in your config tells Nginx what user to run its workers as; it does not specify what user to run the Passenger core as (that is inherited from whatever Nginx was started as).
My pointers would be as follows:
Usually the first thing to do is to check the logs.
Your config declares logfiles, but doesn't set the top level error log, so you're missing the Passenger log output.
To solve this, move the error_log /var/log/nginx/error.log; to above the http { line in your nginx.conf.
If needed, you can also set passenger_log_level 7; (in the http block) to get very detailed logs.
By changing the log level and observing the result you can also ensure that the config you think is being used, is actually the one that is used, on the URL that you are querying (i.e. you can see requests coming in).
Passenger has some troubleshooting tools, e.g. passenger-status can be used to inspect if it's running successfully. Note that you haven't declared a passenger_pre_start url, so your app won't be started by Passenger until the first request is routed to it.
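As a sketch, the relevant pieces of nginx.conf would then look something like this (log level 7 is very verbose; turn it back down once things work):
error_log /var/log/nginx/error.log;   # top level, outside the http block, so Passenger's own output is captured
http {
    passenger_log_level 7;   # temporary, for troubleshooting only
    ...
}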
This is the first time I have used nginx with Symfony, and I used the configuration from the Symfony documentation. I have a basic configuration of the latest version of nginx.
However, for my URL 127.0.0.1:83/app_dev.php/fr/admin/organisations I get the following error in my nginx project_error.log file:
2017/06/01 09:16:14 [error] 20204#20204:
*1 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream,
client: 127.0.0.1, server: localhost,
request: "GET /app_dev.php/fr/admin/organisations/ HTTP/1.1",
upstream: "fastcgi://unix:/var/run/php/php7.0-fpm.sock:", host: "127.0.0.1:83",
referrer: "http://127.0.0.1:83/app_dev.php/fr/admin/organisations/1/edit"
My nginx server config (From Symfony documentation) :
server {
listen 83 default_server;
listen [::]:83 default_server;
server_name localhost;
root /var/www/SymfonySkeleton/web;
location / {
# try to serve file directly, fallback to app.php
try_files $uri /app.php$is_args$args;
}
# DEV
# This rule should only be placed on your development environment
# In production, don't include this and don't deploy app_dev.php or config.php
location ~ ^/(app_dev|config)\.php(/|$) {
fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
fastcgi_split_path_info ^(.+\.php)(/.*)$;
include fastcgi_params;
# When you are using symlinks to link the document root to the
# current version of your application, you should pass the real
# application path instead of the path to the symlink to PHP
# FPM.
# Otherwise, PHP's OPcache may not properly detect changes to
# your PHP files (see https://github.com/zendtech/ZendOptimizerPlus/issues/126
# for more information).
fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
fastcgi_param DOCUMENT_ROOT $realpath_root;
}
# PROD
location ~ ^/app\.php(/|$) {
fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
fastcgi_split_path_info ^(.+\.php)(/.*)$;
include fastcgi_params;
# When you are using symlinks to link the document root to the
# current version of your application, you should pass the real
# application path instead of the path to the symlink to PHP
# FPM.
# Otherwise, PHP's OPcache may not properly detect changes to
# your PHP files (see https://github.com/zendtech/ZendOptimizerPlus/issues/126
# for more information).
fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
fastcgi_param DOCUMENT_ROOT $realpath_root;
# Prevents URIs that include the front controller. This will 404:
# http://domain.tld/app.php/some-path
# Remove the internal directive to allow URIs like this
internal;
}
# return 404 for all other php files not matching the front controller
# this prevents access to other php files you don't want to be accessible.
location ~ \.php$ {
return 404;
}
error_log /var/log/nginx/project_error.log;
access_log /var/log/nginx/project_access.log;
}
Any ideas? I strongly suspect that my configuration is not good.
Thanks !
Remove the lines:
fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
Though this may impact the use of symlinks in your project, it will remove that error message.
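If removing it outright breaks PHP execution (depending on the distro, the stock fastcgi_params may or may not define SCRIPT_FILENAME at all), a hedged alternative is to keep the parameter but use $document_root instead of $realpath_root, accepting the OPcache/symlink caveat mentioned in the config comments:
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param DOCUMENT_ROOT $document_root;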
Please check the root path and the files under it.
I have 12 sites that I plan to run on a single server that has Nginx and php5-fpm on it. I set them all up using one server block per conf file, all included by the main nginx.conf file. It's a mix of WordPress, phpMyAdmin, and plain PHP sites. The WordPress and phpMyAdmin sites are working fine, but the plain PHP sites are not. Meaning, when I pull up example.com, Chrome says connection refused, and there's no trace of an incoming connection in the Nginx logs. test.example.com pulls up the default site (because I hadn't configured test.example.com then) at the same time.
I copied the nginx configs from the working sites to set up the sites that are not working, but no luck. The only difference in nginx config between the working and non-working sites is the server_name directive.
After checking and rechecking for over 2 hours, I found out that the sites that have a server_name like pqr.example.com work, but the ones with example.com don't. All of the working sites are configured to use subdomain URLs, and that's probably why they're working.
My questions are:
1. What am I missing in the config to make example.com work?
2. I have two sites, example.com and example.net, that I'm trying to run on the same server. Is that going to be a problem for Nginx?
3. Does Nginx have a problem differentiating between example.com, test.example.com, and example.net?
4. I also noticed that if www.example.net works, www.example.com doesn't, and vice versa, which means I have to give each site that shares the "example" name a different subdomain, like www.example.net and test.example.com. Is this standard/expected behavior of Nginx, or am I missing something?
5. All of my base URLs auto-redirect from http://example.com to http://www.example.com; how do I find out where that redirect is happening?
Below are the Nginx config files that I'm having problems with, truncated to include the important parts; please let me know if more info is needed.
Main nginx.conf file -
user www-data www-data;
pid /var/run/nginx.pid;
worker_processes 4;
worker_rlimit_nofile 100000;
events {
worker_connections 4096;
include /etc/nginx.custom.events.d/*.conf;
}
http {
default_type application/octet-stream;
access_log off;
error_log /var/log/nginx/error.log crit;
.......
server_tokens off;
include proxy.conf;
include fcgi.conf;
include conf.d/*.conf;
include /etc/nginx.custom.d/*.conf;
}
include /etc/nginx.custom.global.d/*.conf;
Here is the full .conf file for a blog that works. All the other sites have this same full config, since they are just basic PHP sites.
server {
listen *:80;
server_name blog.example.com;
access_log /var/log/nginx/blog-example.access.log;
error_log /var/log/nginx/blog-example.error.log;
root /var/www/example/blog;
index index.html index.htm index.php;
# This order might seem weird - this is attempted to match last if rules below fail.
location / {
try_files $uri $uri/ /index.php?$args;
}
# Add trailing slash to */wp-admin requests.
rewrite /wp-admin$ $scheme://$host$uri/ permanent;
# Directives to send expires headers and turn off 404 error logging.
location ~* ^.+\.(ogg|ogv|svg|svgz|eot|otf|woff|mp4|ttf|rss|atom|jpg|jpeg|gif|png|ico|zip|tgz|gz|rar|bz2|doc|xls|exe|ppt|tar|mid|midi|wav|bmp|rtf)$ {
access_log off; log_not_found off; expires max;
}
location = /favicon.ico {
log_not_found off;
access_log off;
}
location = /robots.txt {
allow all;
log_not_found off;
access_log off;
}
# Deny all attempts to access hidden files such as .htaccess, .htpasswd, .DS_Store (Mac).
# Keep logging the requests to parse later (or to pass to firewall utilities such as fail2ban)
location ~ /\. {
deny all;
}
# Deny access to any files with a .php extension in the uploads directory
# Works in sub-directory installs and also in multisite network
# Keep logging the requests to parse later (or to pass to firewall utilities such as fail2ban)
location ~* /(?:uploads|files)/.*\.php$ {
deny all;
}
location ~ [^/]\.php(/|$) {
# Zero-day exploit defense.
# http://forum.nginx.org/read.php?2,88845,page=3
# Won't work properly (404 error) if the file is not stored on this server, which is entirely possible with php-fpm/php-fcgi.
# Comment the 'try_files' line out if you set up php-fpm/php-fcgi on another machine. And then cross your fingers that you won't get hacked.
try_files $uri =404;
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_index index.php;
include fcgi.conf;
fastcgi_pass unix:/var/run/php-fcgi-blog-example-php-fcgi-0.sock;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}
}
Here's the truncated .conf file for example.com
server {
listen *:80;
server_name example.com www.example.com test.example.com;
access_log /var/log/nginx/examplecom.access.log;
error_log /var/log/nginx/examplecom.error.log;
root /var/www/example/com;
index index.html index.htm index.php;
# This order might seem weird - this is attempted to match last if rules below fail.
location / {
try_files $uri $uri/ /index.php?$args;
}
........
location ~ [^/]\.php(/|$) {
......
fastcgi_index index.php;
include fcgi.conf;
fastcgi_pass unix:/var/run/php-fcgi-examplecom-php-fcgi-0.sock;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}
}
Here's the truncated file for example.net
server {
listen *:80;
server_name example.net www.example.net test.example.net;
access_log /var/log/nginx/examplenet.access.log;
error_log /var/log/nginx/examplenet.error.log;
root /var/www/example/net;
index index.html index.htm index.php;
# This order might seem weird - this is attempted to match last if rules below fail.
location / {
try_files $uri $uri/ /index.php?$args;
}
........
location ~ [^/]\.php(/|$) {
......
fastcgi_index index.php;
include fcgi.conf;
fastcgi_pass unix:/var/run/php-fcgi-examplenet-php-fcgi-0.sock;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}
}
Meaning, when I pull up example.com, Chrome says connection refused, and there's no trace of an incoming connection in the Nginx logs. test.example.com pulls up the default site (because I hadn't configured test.example.com then) at the same time.
Well, your server is listening. Chances are you haven't configured your DNS records correctly, or there is DNS caching. Edit your hosts file to test this theory.
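For example (the IP below is a placeholder for your server's public address):
# on the client machine, in /etc/hosts
203.0.113.10 example.com www.example.com
# or bypass DNS entirely and talk to the server directly
curl -v -H "Host: example.com" http://203.0.113.10/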
I am deploying a Laravel application on NGINX.
I have a route specified in routes.php for handling non-blade PHP files as well as some of my blade PHP files:
Route::any('/{pagephp}.php',function($pagephp) {
return View::make($pagephp); //This will handle .PHP as well as blades
});
But I get a "No input file specified" error whenever I try to access files such as terms.blade.php.
Note that I do not get an error when I access explicitly specified routes. For example, I have signin.blade.php, for which I have:
Route::get('/signin',function() {
return View::make('signin');
});
When I look into the error log I see:
[error] 969#0: *91 FastCGI sent in stderr: "Unable to open primary script:
/var/www/xxxx/public/terms.php (No such file or directory)" while reading response
header from upstream, client: xxxxxxxx, server: xxxx, request: "GET /terms.php HTTP/1.1",
upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host:
As per the error, NGINX is looking for terms.php in the public directory instead of sending the request to routes.php.
Is there any way to fix this?
My NGINX config file is as follows
server {
# Port that the web server will listen on.
listen 80;
# Host that will serve this project.
server_name xxxx;
# Useful logs for debug.
access_log /var/www/xxxx/app/storage/logs/access.log;
error_log /var/www/xxxx/app/storage/logs/error.log;
rewrite_log on;
# The location of our projects public directory.
root /var/www/xxxx/public;
# Point index to the Laravel front controller.
index index.php;
location / {
# URLs to attempt, including pretty ones.
try_files $uri $uri/ /index.php?$query_string;
}
# Remove trailing slash to please routing system.
if (!-d $request_filename) {
rewrite ^/(.+)/$ /$1 permanent;
}
# PHP FPM configuration.
location ~* \.php$ {
fastcgi_pass unix:/var/run/php5-fpm.sock;
fastcgi_index index.php;
fastcgi_split_path_info ^(.+\.php)(.*)$;
include /etc/nginx/fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}
# We don't need .ht files with nginx.
location ~ /\.ht {
deny all;
}
# Set header expirations on per-project basis
location ~* \.(?:ico|css|js|jpe?g|JPG|png|svg|woff)$ {
expires 365d;
}
}
If these files do not belong to the Laravel framework, they should not be in that directory. But if you want to test a particular PHP script, you can put it in the public folder and it will be accessible via yourdomain.com/script.php.
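If the goal is instead to let Laravel's router handle URIs like /terms.php, one possible approach (a sketch against the config above, not part of the original answer) is to make the PHP location fall back to the front controller when the requested file does not exist on disk:
location ~* \.php$ {
    # If the .php file is not physically present under the document root,
    # hand the request to Laravel's index.php instead of passing it to php-fpm directly.
    try_files $uri /index.php$is_args$args;
    fastcgi_pass unix:/var/run/php5-fpm.sock;
    fastcgi_index index.php;
    include /etc/nginx/fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}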