As I'm new to cloud and server hosting (I decided to take the jump from shared hosting), I can't pinpoint why this is happening.
Long story short, I'm trying to get Google Fonts to load, but neither Chrome nor Firefox will allow it, so I've started reading up on the relevant headers. I'm using PHP 7.2 and Nginx 1.1.14, and neither the default file nor my custom.conf (domain file) sets any CSP.
Any ideas how I can track this down?!
Refused to load the stylesheet 'https://fonts.googleapis.com/css?family=Averia+Serif+Libre' because it violates the following Content Security Policy directive: "style-src 'self' 'unsafe-inline'". Note that 'style-src-elem' was not explicitly set, so 'style-src' is used as a fallback.
But I don't have any CSP anywhere! So frustrated.
Here's my custom.conf:
server {
    listen 80;

    root /var/www/html/custom;
    index index.php index.html index.htm index.nginx-debian.html;
    server_name _;

    location / {
        try_files $uri $uri/ =404;
    }

    location ~* \.(eot|ttf|woff)$ {
        add_header Access-Control-Allow-Origin *;
    }

    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/var/run/php/php7.2-fpm.sock;
    }

    location ~ /\.ht {
        deny all;
    }
}
And here's my default:
server {
    listen 80 default_server;
    listen [::]:80 default_server;

    root /var/www/html;
    index index.php index.html index.htm index.nginx-debian.html;
    server_name _;

    location / {
        try_files $uri $uri/ =404;
    }

    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/var/run/php/php7.2-fpm.sock;
    }

    location ~ /\.ht {
        deny all;
    }
}
EDIT: If it helps, I chose the "LEMP" option on DigitalOcean to create this setup. I've opened a ticket over there as well, but it's been a couple of days now.
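For what it's worth, a CSP set on the nginx side would show up as an add_header line somewhere under /etc/nginx (or in an included snippet); since neither file above has one, the header is more likely being sent by the application itself (for example a header() call somewhere in the PHP code) or by something sitting in front of the droplet. Purely for illustration, a minimal sketch of what the nginx-side version would look like if you did want to set the policy there and allow Google Fonts:

# Sketch only (not taken from the question's config): this is the kind of
# directive to search the nginx tree and the PHP code for. fonts.googleapis.com
# serves the stylesheet and fonts.gstatic.com serves the font files themselves.
add_header Content-Security-Policy "default-src 'self'; style-src 'self' 'unsafe-inline' https://fonts.googleapis.com; font-src 'self' https://fonts.gstatic.com";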
Related
I want to set up PHP (WordPress) and a backend API (FastAPI) on the same local machine for development. I am using Docker Swarm mode on this single node, so nginx, PHP, and the backend are all Docker services.
When I use the following nginx.conf, WordPress takes over the whole address space. For example, when I open localhost/api/docs, WordPress shows me its main page, whereas I would like to see my FastAPI documentation. How can I fix this?
upstream php {
    server unix:/tmp/php-cgi.socket;
    server php:9000;
}

server {
    listen 80;
    server_name _;

    location /api {
        proxy_pass http://backend:8888/api;
    }

    location / {
        root /var/www/html;
        index index.php index.html;
        try_files $uri $uri/ /index.php?$args;
    }

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME /var/www/html$fastcgi_script_name;
        fastcgi_intercept_errors on;
        if ($uri !~ "^/images/") {
            fastcgi_pass php;
        }
    }

    location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
        root /var/www/html;
        expires max;
        log_not_found off;
    }
}
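Hedged aside: one variant sometimes tried here (assuming the FastAPI service really is reachable as backend:8888 inside the swarm) is to mark the API prefix with ^~ so it can never be shadowed by the regex locations below it, and to drop the URI part from proxy_pass so the request path is forwarded unchanged. Whether this cures the symptom depends on why WordPress is answering in the first place, so treat it as a sketch, not a fix:

# Sketch: ^~ makes this prefix final, so the \.php$ and static-asset regex
# blocks are never consulted for /api/... requests; proxy_pass without a URI
# part forwards /api/docs to the backend as /api/docs.
location ^~ /api {
    proxy_pass http://backend:8888;
    proxy_set_header Host $host;
}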
I would like to know how I can run several sites on Nginx and reach each of them via the same IP (without a domain name, since I am doing tests locally in a lab).
The server is on a separate PC and I access it remotely from my computer using its IP; both machines are on the same LAN.
In /var/www/ I have two sites, 'nextcloud' and 'phpmyadmin'. I would like to be able to reach both by entering (for example) 192.168.1.14/nextcloud and 192.168.1.14/phpmyadmin, or any other project placed in the www directory.
I have tried all the solutions I found, but none of them worked for me. When I open phpmyadmin, for example, the browser downloads the page instead of opening it.
Within /etc/nginx/sites-enabled I have the two files. The one for nextcloud:
server {
    listen 80 default_server;
    listen [::]:80 default_server;

    root /var/www/nextcloud/;
    index index.php index.html index.htm;
    server_name localhost;

    client_max_body_size 512M;
    fastcgi_buffers 64 4K;

    location / {
        root /var/www/nextcloud;
        rewrite ^ /index.php$request_uri;
    }

    location ~ ^/(?:build|tests|config|lib|3rdparty|templates|data)/ {
        deny all;
    }

    location ~ ^/(?:\.|autotest|occ|issue|indie|db_|console) {
        deny all;
    }

    location ~ ^/(?:index|remote|public|cron|core/ajax/update|status|ocs/v[12]|updater/.+|ocs-provider/.+)\.php(?:$|/) {
        fastcgi_split_path_info ^(.+?\.php)(/.*)$;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
        fastcgi_pass unix:/var/run/php/php7.2-fpm.sock;
        fastcgi_intercept_errors on;
        fastcgi_request_buffering off;
    }

    location ~ ^/(?:updater|ocs-provider)(?:$|/) {
        try_files $uri/ =404;
        index index.php;
    }

    location ~ \.(?:css|js|woff|svg|gif)$ {
        try_files $uri /index.php$request_uri;
        add_header Cache-Control "public, max-age=15778463";
        access_log off;
    }

    location ~ \.(?:png|html|ttf|ico|jpg|jpeg)$ {
        try_files $uri /index.php$request_uri;
        # Optional: Don't log access to other assets
        access_log off;
    }
}
And the one for phpmyadmin:
server {
    listen 80;
    listen [::]:80;

    root /var/www/phpmyadmin/;

    # Add index.php to the list if you are using PHP
    index index.php index.html index.htm index.nginx-debian.html;

    server_name localhost;

    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        try_files $uri $uri/ =404;
    }

    # pass PHP scripts to FastCGI server
    #
    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        #
        # # With php-fpm (or other unix sockets):
        fastcgi_pass unix:/var/run/php/php7.2-fpm.sock;
        # # With php-cgi (or other tcp sockets):
        # fastcgi_pass 127.0.0.1:9000;
    }
}
I also tried creating two test folders in /var/www/ (test1 and test2), each with an index.html file inside, and modifying the nginx default file, but that didn't work for me either:
server {
    listen 80 default_server;
    listen [::]:80 default_server;

    server_name localhost;
    index index.html;

    location / {
        return 410; # Default root of site won't exist.
    }

    location /test1/ {
        alias /var/www/test1/;
        try_files $uri $uri/ =404;
        # any additional configuration for non-static content
    }

    location /test2/ {
        alias /var/www/test2/;
        try_files $uri $uri/ =404;
        # any additional configuration for non-static content
    }
}
As I said, I tried different solutions. Another problem I had was that it only took me to nextcloud, even though I put phpmyadmin in the URL. And there is the issue I already mentioned: when I open the site, it downloads index.php instead of running it. Thank you.
Sorry for my English.
Simply add nextcloud.my and phpmyadmin.my to your hosts file and have Nginx listen for those domain names via server_name.
The path-based option you proposed can also be made to work, but it is error-prone, and difficulties can arise when you move it to a production server.
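A minimal sketch of that suggestion, using the made-up names nextcloud.my and phpmyadmin.my and assuming the server's LAN IP is the 192.168.1.14 from the question. On the machine you browse from, point the names at the server in the hosts file, then give each existing site file its own server_name instead of localhost/default_server:

# On the client PC, in /etc/hosts (C:\Windows\System32\drivers\etc\hosts on Windows):
#   192.168.1.14  nextcloud.my
#   192.168.1.14  phpmyadmin.my

# /etc/nginx/sites-enabled/phpmyadmin, reduced to the essentials:
server {
    listen 80;
    server_name phpmyadmin.my;   # instead of localhost

    root /var/www/phpmyadmin;
    index index.php index.html;

    location / {
        try_files $uri $uri/ =404;
    }

    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/var/run/php/php7.2-fpm.sock;
    }
}

The nextcloud file gets server_name nextcloud.my the same way. At most one block per listen address should keep default_server; requests that arrive by bare IP will land on that one.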
I'm using Ubuntu 16.04, Nginx 1.10.3 and PHP 7.0. The example PHP applications are CodeIgniter 3.1.5.
I am trying to run a static page at the root of my site, www.example.com (this works), with multiple other CodeIgniter applications each running in a subdirectory at www.example.com/client-a, www.example.com/client-b, etc.
The static page at the root runs fine; however, when I browse to the apps in the subdirectories, none of the stylesheets or scripts load, resulting in 404 errors. The routing itself works, though.
The application files are not nested within one another. The root application lives in /var/www/example/public_html, while the "nested" applications live in /var/www/client_a/public_html, /var/www/client_b/public_html, etc.
Here is my Nginx server block:
server {
    listen 80 default_server;
    listen [::]:80 default_server;

    server_name example.com www.example.com;
    root /var/www/example/public_html;
    index index.php index.html index.htm;

    location / {
        try_files $uri $uri/ /index.php$is_args$args;
    }

    location /client-a {
        alias /var/www/client_a/public_html;
        try_files $uri $uri/ @nested;

        location ~ \.php$ {
            include snippets/fastcgi-php.conf;
            fastcgi_param SCRIPT_FILENAME $request_filename;
            fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
        }
    }

    location @nested {
        rewrite /client-a/(.*)$ /client-a/index.php?/$1 last;
    }

    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/run/php/php7.0-fpm.sock;
    }

    location ~* \.(jpg|jpeg|png|gif|ico|css|js)$ {
        expires 30d;
    }

    location ~ /\.ht {
        deny all;
    }
}
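One thing worth checking here, in light of the precedence rule covered at the end of this page (regex locations win over prefix locations unless the prefix uses ^~): because location /client-a has no ^~ modifier, a request such as /client-a/css/style.css is most likely captured by the server-level location ~* \.(jpg|...|css|js)$ block instead, and is therefore resolved against /var/www/example/public_html, where it does not exist, hence the 404s. A hedged sketch of that change, keeping everything else from the question as-is:

# Sketch: ^~ makes the longest matching prefix final, so the server-level
# regex locations (static assets, \.php$) are never consulted for /client-a/...
location ^~ /client-a {
    alias /var/www/client_a/public_html;
    try_files $uri $uri/ @nested;

    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_param SCRIPT_FILENAME $request_filename;
        fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
    }
}

Note that the expires 30d header from the asset block would then no longer apply to assets under /client-a.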
I'm using Laravel Valet on my local machine, and I did not have this problem locally. My remote server runs Ubuntu 16.04.
I have an index.php like so in my website's root:
<?php
require __DIR__ . '/src/core/bootstrap.php';
require __DIR__ . '/src/controllers/index.php';
src/core/bootstrap.php is setup code for Composer and the database. This is what src/controllers/index.php looks like:
<?php
session_start();
use App\UserTools\User;
use App\Core\Router;
$page = Router::load()->fetchPage();
include "src/controllers/{$page}.controller.php";
include "src/views/{$page}.view.php";
So when users visit site.com, it goes to main since that's the home page. But if they visit, say, site.com/about, then $page would be about and voilà... routing. This was all taught to me on Laracasts, so excuse me if this seems rudimentary.
The problem is that when I visit site.com/api, it just shows me a blank page. Likewise, something like book?id=1 gives a blank page.
Here is the nginx block from valet which tells the server what to do with files not found:
location / {
    rewrite VALET_SERVER_PATH last;
}
How can I apply that to my site? I tried substituting VALET_SERVER_PATH with /var/www/html but I just got a Server Error 500.
Here is my current nginx block:
server {
    server_name www.site.org;
    return 301 $scheme://site.org$request_uri;
}

server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name site.org;
    return 301 https://$server_name$request_uri;
}

server {
    # SSL configuration
    listen 443 ssl default_server;
    listen [::]:443 ssl default_server;

    include snippets/ssl-site.org.conf;
    include snippets/ssl-params.conf;

    root /var/www/html;
    index index.php index.html index.htm index.nginx-debian.html;
    server_name site.org;

    location / {
        try_files $uri $uri/ /index.php;
    }

    location ~ /.well-known {
        allow all;
    }

    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/run/php/php7.0-fpm.sock;
    }
}
It works, but only for the front page. Yes, I have HTTPS enabled, and www traffic gets redirected to the non-www URL.
Make changes to the following directives:
location / {
    try_files $uri $uri/ /index.php$is_args$args;
}

location ~ \.php$ {
    include snippets/fastcgi-php.conf;
    fastcgi_pass unix:/run/php/php7.0-fpm.sock;
    fastcgi_split_path_info ^(.+\.php)(/.+)$;
}
I'm trying to make a download script, that catches all requests to /files/ and forces a download. The script fully works, and downloads any file I throw at it. The problem is that when I try to pass a file with a .php extension through try_files, the following nginx config messes up:
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    listen 443 ssl;
    listen [::]:443 ssl;

    server_name _;
    root /var/www/localhost/public_html/;
    index index.php index.html index.htm index.txt;

    location /files/ {
        try_files $uri $uri/ /.thedownloadscript.php?file=$uri;
    }

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_param SCRIPT_FILENAME /public_html$fastcgi_script_name;
        include fastcgi_params;
    }

    ssl_certificate /var/www/server.crt;
    ssl_certificate_key /var/www/server.key;
}
/files/file.txt downloads the file.
/files/script.php throws a 404.
Both paths should be passed to the download script, but aren't.
I have tried removing the try_files from the "location ~ \.php$" block, but that makes it output "No input file specified".
I hope somebody can help me out here.
Thanks in advance.
Any regex location block takes precedence over a prefix location block at the same level, unless the latter uses the ^~ modifier.
See this document for details.
Try:
location ^~ /files/ {
    try_files $uri $uri/ /.thedownloadscript.php?file=$uri;
}
Note that the location ^~ /files/ block is a prefix location (and not a regex location).
According to nginx best practices, you should isolate regex locations:
location / {
    location ~ \.php$ {
        ...
    }
}

location /files/ {
    try_files $uri $uri/ /.thedownloadscript.php?file=$uri;
}