Laravel 500 error when used with nginx - php

I have tried a lot of configuration settings to get nginx to work with Laravel, but none of them work. I keep getting a 500 error on my webpage:
example.com is currently unable to handle this request.
server {
listen _:80;
server_name example.com;
root "C:\Users\Administrator\Google Drive\projects\laravel";
index index.php index.html;
log_not_found off;
charset utf-8;
access_log logs/cdn.example.com-access.log ;
location ~ /\. {deny all;}
location ~ \.css {
add_header Content-Type text/css;
}
location ~ \.js {
add_header Content-Type application/x-javascript;
}
location / {
include mime.types;
}
location = /favicon.ico {
}
location = /robots.txt {
}
location ~ \.php$ {
fastcgi_pass 127.0.0.1:9054;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
}
}
I am brand new to Laravel and fairly new to nginx. I have all my Laravel files in the document root, which I just moved over from my http://localhost:8000 development server. How can I make it live and working with nginx?

I can see a misconfiguration in your post. You need to set up your web server properly: set the public directory as the document root, and give the web server write permissions on the storage folder.
So, change this:
root "C:\Users\Administrator\Google Drive\projects\laravel";
To this:
root "C:\Users\Administrator\Google Drive\projects\laravel\public";
And restart your web server.
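Putting it together, a minimal Laravel server block for this setup might look like the sketch below. This is only a sketch: it assumes your PHP FastCGI process is still listening on 127.0.0.1:9054 and keeps the path from your post; adjust as needed.

server {
    listen 80;
    server_name example.com;

    # The document root must be Laravel's public folder, not the project folder
    root "C:/Users/Administrator/Google Drive/projects/laravel/public";
    index index.php index.html;

    location / {
        # Route anything that is not an existing file to Laravel's front controller
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php$ {
        fastcgi_pass 127.0.0.1:9054;   # same FastCGI address as in your config
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}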

Related

Nginx: no error page is showing, no matter what configuration I use

I'm trying to configure an error page in nginx for the 500 and 502 error codes. I have tried many different configuration options and solutions, but none of them worked for me.
The issue is that no matter how I set up the configuration, I always get the generic nginx error page with 502 Bad Gateway.
The Docker stack is running with the following containers:
Nginx
MySQL
Composer
Azure CLI
PHP
A TYPO3 system is running behind the PHP/Composer container.
I'm using nginx instead of an Apache web server.
Below you can see my current nginx configuration.
server {
listen 80;
root /var/www/html/public;
index index.php index.htm index.html;
# Make site accessible from http://localhost/
server_name _;
# Disable sendfile as per https://docs.vagrantup.com/v2/synced-folders/virtualbox.html
sendfile off;
error_log /dev/stdout info;
access_log /var/log/nginx/access.log;
# NGINX - Provide error page
error_page 500 502 /error.html;
location = /error.html {
internal;
}
## provide a health check endpoint
location /healthcheck {
access_log off;
stub_status on;
keepalive_timeout 0; # Disable HTTP keepalive
return 200;
}
location / {
absolute_redirect off;
try_files $uri $uri/ /index.php$is_args$args;
}
# pass the PHP scripts to FastCGI server listening on socket
location ~ \.php$ {
try_files $uri =404;
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_pass ${PHP_DOMAIN}:9000;
fastcgi_buffers 16 128k;
fastcgi_buffer_size 128k;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param SCRIPT_NAME $fastcgi_script_name;
fastcgi_index index.php;
include fastcgi_params;
fastcgi_intercept_errors off;
# fastcgi_read_timeout should match max_execution_time in php.ini
fastcgi_read_timeout 600;
fastcgi_param SERVER_NAME $host;
fastcgi_cache_bypass $http_x_blackfire_query;
}
# Expire rules for static content
# Feed
location ~* \.(?:rss|atom)$ {
expires 1h;
}
# Media: images, icons, video, audio, HTC
location ~* \.(?:jpg|jpeg|gif|png|ico|cur|gz|svg|svgz|mp4|ogg|ogv|webm|htc)$ {
expires 1M;
access_log off;
add_header Cache-Control "public";
}
# Prevent clients from accessing hidden files (starting with a dot)
# This is particularly important if you store .htpasswd files in the site hierarchy
# Access to `/.well-known/` is allowed.
# https://www.mnot.net/blog/2010/04/07/well-known
# https://tools.ietf.org/html/rfc5785
location ~* /\.(?!well-known\/) {
deny all;
}
# Prevent clients from accessing to backup/config/source files
location ~* (?:\.(?:bak|conf|dist|fla|in[ci]|log|psd|sh|sql|sw[op])|~)$ {
deny all;
}
location = /favicon.ico {
log_not_found off;
access_log off;
}
# TYPO3 - Block access to composer files
location ~* composer\.(?:json|lock) {
deny all;
}
# TYPO3 - Block access to flexform files
location ~* flexform[^.]*\.xml {
deny all;
}
# TYPO3 - Block access to language files
location ~* locallang[^.]*\.(?:xml|xlf)$ {
deny all;
}
# TYPO3 - Block access to static typoscript files
location ~* ext_conf_template\.txt|ext_typoscript_constants\.(?:txt|typoscript)|ext_typoscript_setup\.(?:txt|typoscript) {
deny all;
}
# TYPO3 - Block access to miscellaneous protected files
location ~* /.*\.(?:bak|co?nf|cfg|ya?ml|ts|typoscript|dist|fla|in[ci]|log|sh|sql)$ {
deny all;
}
# TYPO3 - Block access to recycler and temporary directories
location ~ _(?:recycler|temp)_/ {
deny all;
}
# TYPO3 - Block access to configuration files stored in fileadmin
location ~ fileadmin/(?:templates)/.*\.(?:txt|ts|typoscript)$ {
deny all;
}
# TYPO3 - Block access to libaries, source and temporary compiled data
location ~ ^(?:vendor|typo3_src|typo3temp/var) {
deny all;
}
# TYPO3 - Block access to protected extension directories
location ~ (?:typo3conf/ext|typo3/sysext|typo3/ext)/[^/]+/(?:Configuration|Resources/Private|Tests?|Documentation|docs?)/ {
deny all;
}
if (!-e $request_filename) {
rewrite ^/(.+)\.(\d+)\.(php|js|css|png|jpg|gif|gzip)$ /$1.$3 last;
}
#Include development locations if needed
include /etc/nginx/conf.d/locations/*.conf;
}
I think the issue does not come from the configuration but from somewhere else, though I don't know where; I can't find the problem.
I hope you guys can help me. By the way, this is my first Stack Overflow question :D
EDIT:
I just added the configuration below to test the error codes, but unfortunately I still get a 502 Bad Gateway; maybe it's a problem with the local setup. To my surprise, the configured health-check location works, but the error page does not.
location /get_error {
return 500;
}
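Side note for anyone reading along: as far as I can tell, with fastcgi_intercept_errors off, error_page only replaces responses that nginx generates itself (such as the 502 it returns when the upstream is unreachable); error status codes coming back from PHP-FPM are passed through to the client unchanged. A minimal sketch of the combination that also intercepts backend errors, to be placed inside the server block (the upstream name is a placeholder):

error_page 500 502 /error.html;

location = /error.html {
    internal;                      # only reachable via error_page, served from the root above
}

location ~ \.php$ {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass php:9000;         # placeholder upstream name
    fastcgi_intercept_errors on;   # let error_page also replace 5xx returned by PHP-FPM
}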
UPDATE:
The configuration itself was correct. I just deployed the changes to the dev system and it worked! I don't know why or where the issue was, but it just wouldn't work in my local dev environment.

nginx error: openat() failed (20: not a directory) for images

I have a PHP project, a REST API. The nginx configuration works for the API, but not for uploaded images: images always return a 404 error.
The project's document root is the /public directory and the upload directory is inside public, so an image URL looks something like:
DOMAIN.COM/upload/201812/20181204133821.jpg
The current nginx configuration is:
server {
listen 80;
listen [::]:80;
set $root_path '/usr/share/nginx/html/api/public';
root $root_path;
index index.php index.html index.htm;
server_name api-eduplus.blanco-estudio.com;
#try_files $uri $uri/ #rewrite;
try_files $uri $uri/ /index.php?q=$uri&$args;
location ~* \.(jpg|jpeg|png|gif)$ {
root $root_path;
expires max;
add_header Pragma public;
add_header Cache-Control "public, must-revalidate, proxy-revalidate";
}
location ~ \.php$ {
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
}
location ~ /\.ht {
deny all;
}
#error_log logs/error.log warn;
}
So the API's document root is the /public directory and images are uploaded into /public/upload/.
Also, the nginx error log on the server says:
2018/12/04 16:35:54 [error] 17338: *1 openat() "/usr/share/nginx/html/api/public/upload/201812/20181204133821.jpg" failed (20: Not a directory), request: "GET /upload/201812/20181204133821.jpg HTTP/1.1"
Please help, I'm actually stuck
I just fixed this issue on my server. The cause of the error was a symlink.
I recently ran into this (not very illuminating) error message with my server's nginx config:
"/path/to/index.html" is not found (20: Not a directory)
After some trial and error, I determined that the actual cause was that nginx couldn't access the site's document root: part of the path to the root included a symlink, but I had disable_symlinks on; set in my main nginx.conf file.
Commenting out the disable_symlinks on; line, or changing it to disable_symlinks off;, fixes the issue.
See also:
https://nginx.org/en/docs/http/ngx_http_core_module.html#disable_symlinks
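For reference, the relevant lines in my nginx.conf looked roughly like this (only a sketch; the alternatives are shown commented out):

# nginx.conf, http context -- this line is what broke the symlinked document root:
# disable_symlinks on;

# Relaxed alternatives:
disable_symlinks off;              # the default: follow symlinks freely
# disable_symlinks if_not_owner;   # or: refuse only symlinks whose target has a different owner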

NGINX + Wordpress gives ERR_TOO_MANY_REDIRECTS

I have been trying to host a simple WordPress blog using nginx as the web server. The blog is hosted as a subdirectory under domain_name.com/blog.
The main blog opens correctly, but when I try to open wp-admin under domain_name.com/blog/wp-admin, my browser shows ERR_TOO_MANY_REDIRECTS.
I am not sure whether this is an issue with my nginx configuration or my WordPress configuration. The following is my nginx server block:
server {
listen 80;
server_name <domain_name.com>;
root /var/www/html;
index index.php;
location /blog {
try_files $uri $uri/ /blog/index.php?$args;
}
location ~ \.php$ {
include fastcgi.conf;
fastcgi_intercept_errors on;
fastcgi_pass php;
}
location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
expires max;
log_not_found off;
}
}
WordPress is installed under the /var/www/html/blog directory, and the "siteurl" and "home" wp_options values in the database both point to domain_name.com/blog.
What would be a good way to solve this problem?
Additional notes that might be helpful:
When I try to access static files under the wp-content directory, they open without any issues. No redirection errors there.
It is usual for WordPress to redirect an http session to https whenever accessing wp-admin. This may be controlled using the FORCE_SSL_LOGIN and FORCE_SSL_ADMIN settings in wp-config.php.
When a reverse proxy is terminating SSL, the fact that the originating connection is over https must be conveyed to WordPress to avoid a redirection loop.
Your reverse proxy should be setting headers such as X-Forwarded-Proto.
You need to change your nginx configuration so that the HTTPS flag is set correctly for WordPress.
For example:
map $http_x_forwarded_proto $https_flag {
default off;
https on;
}
server {
listen 80;
server_name example.com;
root /var/www/html;
index index.php;
location /blog {
try_files $uri $uri/ /blog/index.php?$args;
}
location ~ \.php$ {
include fastcgi.conf;
fastcgi_intercept_errors on;
fastcgi_param HTTPS $https_flag;
fastcgi_pass php;
}
location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
expires max;
log_not_found off;
}
}
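If nginx terminates TLS itself rather than sitting behind a proxy, you can pass the built-in $https variable instead of the mapped header. A sketch under that assumption (certificate paths are placeholders, and it reuses the php upstream from above):

server {
    listen 443 ssl;
    server_name example.com;
    ssl_certificate     /etc/nginx/ssl/example.com.crt;   # placeholder path
    ssl_certificate_key /etc/nginx/ssl/example.com.key;   # placeholder path
    root /var/www/html;
    index index.php;

    location /blog {
        try_files $uri $uri/ /blog/index.php?$args;
    }

    location ~ \.php$ {
        include fastcgi.conf;
        fastcgi_intercept_errors on;
        fastcgi_param HTTPS $https;   # "on" for TLS connections, empty otherwise
        fastcgi_pass php;
    }
}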

Base URLs like example.com not working in NGinx

I have 12 sites that I plan to run on a single server that has nginx and php5-fpm on it. I set them all up using one server block per conf file, all included by the main nginx.conf file. It's a mix of WordPress, phpMyAdmin, and plain PHP sites. The WordPress and phpMyAdmin sites are working fine, but the PHP sites are not. Meaning, when I pull up example.com, Chrome says connection refused, and there's no trace of an incoming connection in the nginx logs. test.example.com pulls up the default site (because I hadn't configured test.example.com at that point) at the same time.
I copied the nginx configs from the working sites to set up the sites that are not working, but no luck. The only difference in the nginx config between the working and non-working sites is the server_name directive.
After checking and rechecking for over 2 hours, I found out that sites with a server_name like pqr.example.com work, but the ones with example.com don't. All of the working sites are configured to use subdomain URLs, and that's probably why they're working.
My questions are:
1. What am I missing in the config to make example.com work?
2. I have two sites, example.com and example.net, that I'm trying to run on the same server. Is that going to be a problem for nginx?
3. Does nginx have a problem differentiating between example.com, test.example.com, and example.net?
4. I also noticed that if www.example.net works, www.example.com doesn't, and vice versa, which means I have to assign each site that shares the same base name a different subdomain, like www.example.net and test.example.com. Is this standard/expected behavior for nginx, or am I missing something?
5. All of my base URLs auto-redirect from http://example.com to http://www.example.com; how do I find out where that redirect is happening?
Below are the nginx config files that I'm having problems with, truncated to include the important parts. Please let me know if more info is needed.
Main nginx.conf file -
user www-data www-data;
pid /var/run/nginx.pid;
worker_processes 4;
worker_rlimit_nofile 100000;
events {
worker_connections 4096;
include /etc/nginx.custom.events.d/*.conf;
}
http {
default_type application/octet-stream;
access_log off;
error_log /var/log/nginx/error.log crit;
.......
server_tokens off;
include proxy.conf;
include fcgi.conf;
include conf.d/*.conf;
include /etc/nginx.custom.d/*.conf;
}
include /etc/nginx.custom.global.d/*.conf;
Here is the full .conf file for a blog that works. All the other sites have this same config, since they are just basic PHP sites.
server {
listen *:80;
server_name blog.example.com;
access_log /var/log/nginx/blog-example.access.log;
error_log /var/log/nginx/blog-example.error.log;
root /var/www/example/blog;
index index.html index.htm index.php;
# This order might seem weird - this is attempted to match last if rules below fail.
location / {
try_files $uri $uri/ /index.php?$args;
}
# Add trailing slash to */wp-admin requests.
rewrite /wp-admin$ $scheme://$host$uri/ permanent;
# Directives to send expires headers and turn off 404 error logging.
location ~* ^.+\.(ogg|ogv|svg|svgz|eot|otf|woff|mp4|ttf|rss|atom|jpg|jpeg|gif|png|ico|zip|tgz|gz|rar|bz2|doc|xls|exe|ppt|tar|mid|midi|wav|bmp|rtf)$ {
access_log off; log_not_found off; expires max;
}
location = /favicon.ico {
log_not_found off;
access_log off;
}
location = /robots.txt {
allow all;
log_not_found off;
access_log off;
}
# Deny all attempts to access hidden files such as .htaccess, .htpasswd, .DS_Store (Mac).
# Keep logging the requests to parse later (or to pass to firewall utilities such as fail2ban)
location ~ /\. {
deny all;
}
# Deny access to any files with a .php extension in the uploads directory
# Works in sub-directory installs and also in multisite network
# Keep logging the requests to parse later (or to pass to firewall utilities such as fail2ban)
location ~* /(?:uploads|files)/.*\.php$ {
deny all;
}
location ~ [^/]\.php(/|$) {
# Zero-day exploit defense.
# http://forum.nginx.org/read.php?2,88845,page=3
# Won't work properly (404 error) if the file is not stored on this server, which is entirely possible with php-fpm/php-fcgi.
# Comment the 'try_files' line out if you set up php-fpm/php-fcgi on another machine. And then cross your fingers that you won't get hacked.
try_files $uri =404;
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_index index.php;
include fcgi.conf;
fastcgi_pass unix:/var/run/php-fcgi-blog-example-php-fcgi-0.sock;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}
}
Here's the truncated .conf file for example.com
server {
listen *:80;
server_name example.com www.example.com test.example.com;
access_log /var/log/nginx/examplecom.access.log;
error_log /var/log/nginx/examplecom.error.log;
root /var/www/example/com;
index index.html index.htm index.php;
# This order might seem weird - this is attempted to match last if rules below fail.
location / {
try_files $uri $uri/ /index.php?$args;
}
........
location ~ [^/]\.php(/|$) {
......
fastcgi_index index.php;
include fcgi.conf;
fastcgi_pass unix:/var/run/php-fcgi-examplecom-php-fcgi-0.sock;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}
}
Here's the truncated file for example.net
server {
listen *:80;
server_name example.net www.example.net test.example.net;
access_log /var/log/nginx/examplenet.access.log;
error_log /var/log/nginx/examplenet.error.log;
root /var/www/example/net;
index index.html index.htm index.php;
# This order might seem weird - this is attempted to match last if rules below fail.
location / {
try_files $uri $uri/ /index.php?$args;
}
........
location ~ [^/]\.php(/|$) {
......
fastcgi_index index.php;
include fcgi.conf;
fastcgi_pass unix:/var/run/php-fcgi-examplenet-php-fcgi-0.sock;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}
}
Meaning, when I pull up example.com, Chrome says connection refused, and there's no trace of an incoming connection on NGinx logs. test.example.com pulls up the default site(because I didn't configure test.example.com then) at the same time.
Well, your server is listening. Chances are you haven't configured your DNS records correctly, or there is DNS caching in play. Add an entry to your hosts file to test this theory.
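One way to check this from the nginx side is a temporary catch-all block that logs anything reaching port 80, regardless of the Host header. This is only a sketch and assumes no other server block already declares default_server; if example.com still shows "connection refused" and nothing lands in this log, the name isn't resolving to this server at all.

# Temporary diagnostic: log any request that reaches this nginx but matches
# none of the configured server_names.
server {
    listen *:80 default_server;
    server_name _;
    access_log /var/log/nginx/catchall.access.log;
    return 444;   # close the connection without sending a response
}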

Multiple WordPress Installations with Nginx

I am trying to use a single site config file with nginx to serve an arbitrary number of WordPress installations in sub-folders, but I can't get pretty permalinks to work at all.
+ root
+ WordPressOne
+ WordPressTwo
+ WordPressThree
With permalinks disabled, /WordPressOne/?p=12 works fine. With permalinks enabled, /WordPressOne/MyPage/ results in a 404 delivered from the root folder.
Since the number of WP installations in subfolders is constantly varying (it's used for development) and I don't want to have to constantly modify/create/delete site configs, I'd like it to work such that I can just copy a new WordPress installation to a new subfolder, set WP up, and have permalinks working for that installation without restarting nginx or PHP-FPM.
This is the basic site.conf used in the /etc/nginx/sites-available/ folder (it reflects a number of rules used in production as well):
server {
listen 5000 default;
server_name dev wpdev;
root /vagrant/sites;
client_max_body_size 2m;
expires -1;
charset utf-8;
index index.html index.php;
location ~* ^.+\.(manifest|appcache)$ {
expires -1;
index index.html index.htm;
uwsgi_cache off;
}
location ~* \.(?:rss|atom)$ {
expires 1h;
add_header Cache-Control "public";
}
location ~* ^.+\.(css|js|jpg|jpeg|gif|png|ico|gz|svg|svgz|ttf|otf|woff|eot|mp4|ogg|ogv|webm|txt)$ {
expires max;
access_log off;
index index.html index.htm;
add_header Cache-Control "public";
uwsgi_cache off;
}
location / {
try_files $uri $uri/ /index.php?$args;
uwsgi_cache off;
}
location ~ \.php$ {
try_files $uri =404;
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
uwsgi_cache off;
include /etc/nginx/fastcgi_params;
fastcgi_param PATH_TRANSLATED $document_root$fastcgi_script_name;
}
location ~ /\.ht {
deny all;
}
add_header "X-UA-Compatible" "IE=Edge,chrome=1";
add_header "Vary" "Accept-Language";
add_header "Access-Control-Allow-Origin" "*";
add_header "Cache-Control" "no-transform";
}
This is also used as part of a virtual machine configuration (using Vagrant, as you can see), and I'd love for designers and WP devs to not have to worry about keeping their site files in nginx updated.
Thanks!
It looks like you've copied a basic WordPress setup, and I see no attempt to implement the logic you want. Each WordPress install has a router called index.php, which needs to be the catch-all for pretty URLs to work. The rewritten URLs will need to keep the first path component below the site root, which is probably a WordPress configuration setting.
Then you will need to set the root based on the first path component of the request URI and route non-existing files and directories to the appropriate index.php.
I'm not entirely sure this would work and I can't test it due to time constraints, so try this and report back:
set $wpinstall '';
location ~ /([^/]+)/(.*)$ {
set $wpinstall $1;
try_files $uri $uri/ @wp;
}
location @wp {
rewrite /$wpinstall/ /$wpinstall/index.php;
}
location ~ ^(.+\.php)(.*)$ {
root /vagrant/sites;
fastcgi_split_path_info ^(.*\.php)(.*)$;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root/$wpinstall$fastcgi_script_name;
fastcgi_param PATH_INFO $fastcgi_path_info;
#...rest of fastcgi parameters
}
If you report back, be sure to include the rewrite debugging info by setting the error_log level to debug. In all honesty, this may be a lot easier if you used virtual hosts based on this configuration; your developers would only need to edit their hosts files.
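For what it's worth, here is a rough sketch of that virtual-host approach. The names are hypothetical (a .wpdev suffix that developers map in their hosts files), and it assumes the sub-folder names are lower-case, since hostnames are normalized to lower case:

# Hypothetical wildcard vhost: wordpressone.wpdev -> /vagrant/sites/wordpressone
server {
    listen 5000;
    server_name ~^(?<site>[^.]+)\.wpdev$;
    root /vagrant/sites/$site;
    index index.php index.html;

    location / {
        try_files $uri $uri/ /index.php?$args;
    }

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        include /etc/nginx/fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}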
