I've run into an issue today after uploading updates from my local development environment (my laptop) to my production server.
When I try to access a page, it's completely blank. I didn't change anything about my process; I uploaded the new files the same way I always do, and that has always worked.
I can still access my assets (images, JS and CSS) perfectly fine, and my auth middleware is working: I'm using Steam login, and it redirects me to log in through Steam.
What I remember doing before it broke:
Uploaded my files
Ran composer dumpautoload
Ran php artisan cache:clear
What I have done so far:
Checked my NGINX/PHP/MySQL error logs. Nothing.
Checked the Laravel logs; nothing there either.
Made sure PHP, NGINX and MySQL are running.
Made sure PHP, NGINX and MySQL are up to date.
Generated a new application key.
Double-checked that my .env is valid.
Ran composer update.
Chmodded my directories (even though I had no permission problems before the blank page appeared).
At this point I don't understand how and why it suddenly stopped working, when it works perfectly fine locally on my laptop.
nginx config:
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name example.com;
    proxy_set_header X-Forwarded-For $remote_addr;
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    root /var/www/preview/example/public;

    add_header X-Frame-Options "SAMEORIGIN";
    add_header X-XSS-Protection "1; mode=block";
    add_header X-Content-Type-Options "nosniff";

    charset utf-8;
    index index.html index.htm index.php;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location = /favicon.ico { access_log off; log_not_found off; }
    location = /robots.txt  { access_log off; log_not_found off; }

    location ~ \.php$ {
        fastcgi_pass unix:/var/run/php/php7.2-fpm.sock;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
        include fastcgi_params;
        #auth_basic "Restricted Content";
        #auth_basic_user_file /etc/nginx/.htpasswd;
    }

    location ~ /\.(?!well-known).* {
        deny all;
    }

    server_name hsbuilds.com;
    ssl on;
    ssl_certificate /home/hsbuilds/src/hsbuilds/example.com.chained.crt;
    ssl_certificate_key /home/hsbuilds/src/example.com/example.com.key;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';

    #location / {
    #    proxy_pass http://example.com:8000;
    #    proxy_set_header X-Forwarded-For $remote_addr;
    #}
}
Edit: When checking the Network tab in Chrome, the page returns a 200 status code.
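For what it's worth, a completely blank page with a 200 status usually means PHP ran but bailed out before producing output, with display_errors hiding the message. A rough way to surface the real error on a standard Laravel setup, assuming you can temporarily tolerate debug output on the server, is:

# Temporarily enable debug output in .env (remember to turn it off again)
APP_DEBUG=true

# Clear every cached copy of the old configuration, routes, views and autoloader
php artisan config:clear
php artisan route:clear
php artisan view:clear
php artisan cache:clear
composer dump-autoload

# Tail the Laravel log while reloading the page
tail -f storage/logs/laravel.log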
First, check the .env file, especially the APP_KEY (php artisan key:generate) and APP_URL variables.
Then, just to be sure, run php artisan cache:clear && php artisan config:clear && php artisan config:cache.
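Spelled out as separate commands (run from the project root), that sequence would look something like this:

php artisan key:generate     # only if APP_KEY is missing or invalid
php artisan cache:clear
php artisan config:clear
php artisan config:cache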
Related
I have a Laravel web application I'm trying to deploy to Azure Web App.
I'm using PHP 8.1 and Laravel 9. I created a resource on Azure and deployed my web application.
I created an nginx config file in /home with this configuration (from the Laravel docs):
server {
    listen 80;
    listen [::]:80;
    server_name MY_WEBAPP_URL;
    root /home/site/wwwroot/public;

    add_header X-Frame-Options "SAMEORIGIN";
    add_header X-Content-Type-Options "nosniff";

    index index.php;
    charset utf-8;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location = /favicon.ico { access_log off; log_not_found off; }
    location = /robots.txt  { access_log off; log_not_found off; }

    error_page 404 /index.php;

    location ~ \.php$ {
        fastcgi_pass unix:/var/run/php/php8.1-fpm.sock;
        fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
        include fastcgi_params;
    }

    location ~ /\.(?!well-known).* {
        deny all;
    }
}
I then set the web app's startup command to the recommended command:
cp /home/default /etc/nginx/sites-enabled/default; service nginx restart
I restarted the web app, tried to access it, and got a 5XX error that says:
:( Application Error
I searched the web, but it seems like there is no clear solution for properly deploying a PHP 8.1 application to Azure.
I tried a few different nginx configurations I found, but none worked.
Deploying with Docker has its own set of errors (on Azure only; it works fine in a local container).
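For reference, the same recommended startup command can be kept in a small script so it is easy to reuse across restarts; the file name here is a hypothetical example and the paths are taken from the command above:

#!/bin/bash
# /home/startup.sh (hypothetical name), set as the App Service startup command
cp /home/default /etc/nginx/sites-enabled/default
service nginx restart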
I have deployed a Laravel Project on a Digitalocean droplet, and serving it using NGINX.
Everything works fine when accessing the site through the IP, https://<Droplet_IP>/ (HTTPS is enabled now, but the same problem was present before it was enabled).
But when I try to access the same site through the registered domain, https://<domain_name>/, I get a 404 error page, and it's the custom error page made by us. Very weird.
Accessing ANY link via the domain results in the 404 error page, while the same request via the droplet IP works like a charm.
Of course, I've set the DNS records in DigitalOcean, both the @ and www records.
I've also updated the nameservers at the domain registrar to the DigitalOcean nameservers.
I've gone through different NGINX configurations as well, trying various solutions from the internet; my latest NGINX config is below:
server {
    server_name domain www.domain;
    root /var/www/html/public;

    add_header X-Frame-Options "SAMEORIGIN";
    add_header X-XSS-Protection "1; mode=block";
    add_header X-Content-Type-Options "nosniff";

    index index.html index.htm index.php;
    charset utf-8;

    location / {
        #try_files $uri $uri/ /index.php?$query_string;
        try_files $uri $uri/ /index.php?$is_args$args;
        #try_files $uri $uri/ /index.php?$args;
    }

    location = /favicon.ico { access_log off; log_not_found off; }
    location = /robots.txt  { access_log off; log_not_found off; }

    #error_page 404 /index.php;

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass unix:/var/run/php/php7.4-fpm.sock;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
        include fastcgi_params;
    }

    location ~ /\.(?!well-known).* {
        deny all;
    }

    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/habitoz.in/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/habitoz.in/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}

server {
    if ($host = www.habitoz.in) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    if ($host = habitoz.in) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    listen 80;
    server_name domain www.domain;
    #return 404; # managed by Certbot
}
Right now, I'm at a loss as to what to do.
How is the website working perfectly through the IP, but not through the domain?
And if there was a problem with the domain in the first place, how is the custom 404 page even being shown?
That means the application is working, but something is amiss when it's accessed through the domain.
I spent the whole day and night on this and couldn't figure it out. When I woke up in the morning, the answer clicked.
It turned out to be a very simple, idiotic error!
When I deployed the app on the droplet, I modified the .env file and set APP_URL to the <Droplet_IP>. But when I later pointed the domain at the droplet, I skipped the step of updating APP_URL, and it's such a small thing that I didn't even remember skipping it.
All my error checking and trying out different solutions amounted to nothing, yet it never struck me to check the .env file.
Once APP_URL was updated to point to the right URL, the error was gone.
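In other words, the fix was a one-line .env change plus a config refresh; the domain name below is a placeholder:

# .env (before)
APP_URL=https://<Droplet_IP>

# .env (after)
APP_URL=https://yourdomain.example

# then refresh any cached configuration
php artisan config:clear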
I've just installed a Ghost blog on a new server running NGINX. The Ghost config.json file points at the correct subdirectory, /blog, and the blog loads fine when I visit it.
What isn't working: when I remove /blog from the URL, I get taken to a 404 page. I've checked my sites-enabled file, which looks like this:
server {
    listen 80;
    listen [::]:80;
    server_name *********;
    root /var/www/ghost/system/nginx-root;

    location ^~ /blog {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $http_host;
        proxy_pass http://*********:2368;
        proxy_redirect off;
    }

    location ~ /.well-known {
        allow all;
    }

    client_max_body_size 50m;
}
But I'm not entirely sure what I need to change so I don't get the 404 error. I have an example .php file which should be loading but isn't.
I've always used the DigitalOcean One-Click Ghost app, but I wanted to use the Ghost CLI this time around. I have a feeling I've missed something, though.
The following may remove some of your restrictions, but it will work:
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name _;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl default_server;
    listen [::]:443 ssl default_server;
    server_name _;

    ssl on;
    ssl_certificate /etc/letsencrypt/live/thedomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/thedomain.com/privkey.pem;

    access_log /var/log/nginx/thedomain.access.log;
    error_log /var/log/nginx/thedomain.error.log;

    root /var/www/thedomain;
    index index.html;

    gzip on;
    gzip_proxied any;
    gzip_types text/css text/javascript text/xml text/plain application/javascript application/x-javascript application/json;

    location / {
        try_files $uri $uri/ =404;
    }
}
You need to make sure all the SSL files are there and readable by the www-data user.
If you need to run certbot for the first time, just put the 443 block's contents in an 80 block without the ssl statements.
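A minimal sketch of that temporary port-80 block, reusing the paths from the example above (swap back to the 443 block once the certificates exist):

server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name _;

    root /var/www/thedomain;
    index index.html;

    location / {
        try_files $uri $uri/ =404;
    }
}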
The nginx configuration you've posted only deals with Ghost.
You've set up a server responding on port 80, set the root to Ghost's nginx-root, and created two location blocks: one for /blog that serves Ghost, and a second .well-known block that handles generating SSL certificates with Let's Encrypt.
I'm not an expert at configuring nginx for PHP, but this guide from DigitalOcean and this Stack Overflow question cover a lot of the details.
I think you have a couple of options:
Set the index to be index.php
Add a new location block for / which serves php files
Add a block to handle all php files
I believe adding a new location block like this will mean any .php files you have will be served whenever the URL path matches:
location ~ \.php$ {
    try_files $uri =404;
    fastcgi_split_path_info ^(.+\.php)(/.+)$;
    fastcgi_pass unix:/var/run/php5-fpm.sock;
    fastcgi_index index.php;
    include fastcgi_params;
}
But the value of fastcgi_pass will depend on how you have PHP set up on your system.
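One way to see what is actually available on the server (socket names vary by PHP version and distro, so treat these paths as examples):

ls /var/run/php*                                  # look for a *-fpm.sock file
grep '^listen' /etc/php*/fpm/pool.d/www.conf      # shows the socket or port PHP-FPM listens on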
With this, going to /index.php should work.
Setting the index to index.php will mean that / maps to /index.php. I'm not sure whether that will interfere with Ghost; if it does, you'd need a specific location / {} block instead of setting the index.
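If it does interfere, a sketch of that dedicated location / block might look like this; it leaves the existing location ^~ /blog block for Ghost untouched, and the try_files target assumes an index.php under your nginx root:

location / {
    index index.php;
    try_files $uri $uri/ /index.php?$query_string;
}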
Hi, I'm working on a Laravel 5.4 project that calls the Amazon API, and the script takes longer than 2 to 3 minutes. The script works fine on my local Apache server, but when I moved my code to Laravel Forge (on a DigitalOcean server) and ran the Amazon API call, I got this error: 504 Gateway Time-out
The server didn't respond in time.
I checked the logs, and the nginx error log file says:
upstream timed out (110: Connection timed out) while reading response
header from upstream
I have tried adding
proxy_read_timeout 1500;
and also
fastcgi_read_timeout 1500;
but the issue is still there.
Here is my nginx config file from Laravel Forge:
# FORGE CONFIG (DO NOT REMOVE!)
include forge-conf/datadrive.nl/before/*;

server {
    listen 80;
    listen [::]:80;
    server_name datadrive.nl;
    root /home/forge/datadrive.nl/public;

    # FORGE SSL (DO NOT REMOVE!)
    # ssl_certificate;
    # ssl_certificate_key;

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';
    ssl_prefer_server_ciphers on;
    ssl_dhparam /etc/nginx/dhparams.pem;

    add_header X-Frame-Options "SAMEORIGIN";
    add_header X-XSS-Protection "1; mode=block";
    add_header X-Content-Type-Options "nosniff";

    index index.html index.htm index.php;
    charset utf-8;

    # FORGE CONFIG (DO NOT REMOVE!)
    include forge-conf/datadrive.nl/server/*;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
        proxy_read_timeout 1500;
    }

    location = /favicon.ico { access_log off; log_not_found off; }
    location = /robots.txt  { access_log off; log_not_found off; }

    access_log off;
    error_log /var/log/nginx/datadrive.nl-error.log error;

    error_page 404 /index.php;

    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass unix:/var/run/php/php7.1-fpm.sock;
        fastcgi_index index.php;
        fastcgi_read_timeout 1500;
        include fastcgi_params;
    }

    location ~ /\.ht {
        deny all;
    }
}

# FORGE CONFIG (DO NOT REMOVE!)
include forge-conf/datadrive.nl/after/*;
What I want is for my PHP script not to time out, even if it takes 5 to 6 minutes to execute.
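In case it helps frame the problem: nginx's fastcgi_read_timeout is only one of the timeouts involved; PHP and PHP-FPM have their own limits (max_execution_time and request_terminate_timeout), and all of them have to be high enough for a 5-6 minute request to survive. A hedged sketch, with file paths assuming Ubuntu's PHP 7.1 packages as used in the config above:

; /etc/php/7.1/fpm/php.ini
max_execution_time = 600

; /etc/php/7.1/fpm/pool.d/www.conf
request_terminate_timeout = 600

# nginx, inside the "location ~ \.php$" block
fastcgi_read_timeout 600;

# then restart/reload both services
sudo service php7.1-fpm restart
sudo service nginx reload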
I'm using Laravel (5.4) Forge for a web app that uploads Vimeo and YouTube videos from S3. In the past, before moving to Forge, this script worked correctly, and it still works correctly with smaller files today.
Now that I'm trying to upload larger files (~1 GB), I'm receiving a 502 Bad Gateway after just over a minute on the PHP upload script. The rest of the application runs fine.
Specifically, here is the error:
2017/04/24 20:36:48 [error] 2111#2111: *1 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: X.X.X.X.X, server: myserver.com, request: "POST /recordings/vimeo/upload HTTP/1.1", upstream: "fastcgi://unix:/var/run/php/php7.1-fpm.sock:", host: "myserver.com", referrer: "http://myserver.com/recordings"
I have tried:
Adding/editing fastcgi directives in the nginx config
Upping output_buffering in PHP
Adding the proxy_* and client_max_body_size items shown below
Here's my NGINX config:
include forge-conf/myserver.com/before/*;

server {
    listen 80;
    listen [::]:80;
    server_name .myserver.com;
    root /home/forge/myserver.com/public;

    # FORGE SSL (DO NOT REMOVE!)
    # ssl_certificate;
    # ssl_certificate_key;

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers 'hidden for SO';
    ssl_prefer_server_ciphers on;
    ssl_dhparam /etc/nginx/dhparams.pem;

    add_header X-Frame-Options "SAMEORIGIN";
    add_header X-XSS-Protection "1; mode=block";
    add_header X-Content-Type-Options "nosniff";

    index index.html index.htm index.php;
    charset utf-8;

    include forge-conf/myserver.com/server/*;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location = /favicon.ico { access_log off; log_not_found off; }
    location = /robots.txt  { access_log off; log_not_found off; }

    access_log off;
    error_log /var/log/nginx/myserver.com-error.log error;

    error_page 404 /index.php;

    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass unix:/var/run/php/php7.1-fpm.sock;
        fastcgi_index index.php;
        fastcgi_read_timeout 3600;
        fastcgi_buffers 8 512k;
        fastcgi_buffer_size 512k;
        include fastcgi_params;
        client_max_body_size 128M;
        proxy_buffer_size 256k;
        proxy_buffers 4 512k;
        proxy_busy_buffers_size 512k;
    }

    location ~ /\.ht {
        deny all;
    }
}

include forge-conf/myserver.com/after/*;
What am I missing? I can't seem to figure this out at all. Thank you in advance for the help.
"request_terminate_timeout" turned out to be the issue:
https://laracasts.com/discuss/channels/forge/502-bad-gateway-with-large-file-uploads
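For anyone else hitting this: that setting lives in the PHP-FPM pool configuration, not in nginx. The path and value below are assumptions for a stock PHP 7.1 install like the one in the config above:

; /etc/php/7.1/fpm/pool.d/www.conf
; 0 disables the limit; a large value such as 3600 also works
request_terminate_timeout = 3600

After changing it, restart PHP-FPM (e.g. sudo service php7.1-fpm restart) for it to take effect.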
I had the same 502 problem and after some debugging discovered that I was hitting a limit inside nginx, not a problem in PHP.
I added the following to my site conf, and things seem to be working now:
server {
    fastcgi_temp_file_write_size 10m;
    fastcgi_busy_buffers_size 512k;
    fastcgi_buffer_size 512k;
    fastcgi_buffers 16 512k;

    # ... rest of our config
}
Typically you can find the nginx config file at /etc/nginx/sites-available/default or /etc/nginx/sites-available/your_domain.com.
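Whichever file you edit, it's worth validating the syntax and reloading nginx afterwards, roughly like this:

sudo nginx -t                  # check the configuration for syntax errors
sudo systemctl reload nginx    # apply it without dropping connections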