Rewrite /ping to index.php/ping on nginx - php

I started off with a Docker image that already has a processor for PHP files in place.
Requesting the normal index.php file works without issue, but the routes I added are not accessible.
One of the routes, for example, is /time; with the PHP built-in webserver it is accessed as index.php/time and returns the time. The Docker image I use has PHP working perfectly, but I think one of the nginx rules is breaking this functionality for me. I have tried a lot of changes, but can't seem to find the issue.
$app->get('/time', function ($request, $response, $args) {
    // Current Unix timestamp in seconds
    $seconds = time();
    // Current Unix timestamp in milliseconds (microtime(true) returns seconds with microsecond precision)
    $milliseconds = microtime(true) * 1000;
    // Nanosecond precision
    // This is the default as per the InfluxDB standard
    $nanoseconds = $milliseconds * 1000000;
    // Return
    return $response->withJson(array(
        'seconds'      => $seconds,
        'milliseconds' => intval($milliseconds),
        'nanoseconds'  => intval($nanoseconds)
    ), 200);
});
This is the configuration of the nginx server I am running in the Docker container. One of the things I have tried is adding the location /ping { block, but to no avail: /ping and /time keep returning a 404 Not Found.
server {
    listen 80; ## listen for ipv4; this line is default and implied
    listen [::]:80 default ipv6only=on; ## listen for ipv6

    root /var/www/html/public;
    index index.php index.html index.htm;

    # Make site accessible from http://localhost/
    server_name _;

    # Disable sendfile as per https://docs.vagrantup.com/v2/synced-folders/virtualbox.html
    sendfile off;

    # Add stdout logging
    error_log /dev/stdout info;
    access_log /dev/stdout;

    # Add option for x-forward-for (real ip when behind elb)
    #real_ip_header X-Forwarded-For;
    #set_real_ip_from 172.16.0.0/12;

    location /ping {
        try_files $uri $uri/ /index.php/ping;
    }

    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to index.html
        try_files $uri $uri/ =404;
    }

    error_page 404 /404.html;
    location = /404.html {
        root /var/www/errors;
        internal;
    }

    location ^~ /sad.svg {
        alias /var/www/errors/sad.svg;
        access_log off;
    }

    location ^~ /twitter.svg {
        alias /var/www/errors/twitter.svg;
        access_log off;
    }

    location ^~ /gitlab.svg {
        alias /var/www/errors/gitlab.svg;
        access_log off;
    }

    # pass the PHP scripts to FastCGI server listening on socket
    #
    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass unix:/var/run/php-fpm.sock;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param SCRIPT_NAME $fastcgi_script_name;
        fastcgi_index index.php;
        include fastcgi_params;
    }

    location ~* \.(jpg|jpeg|gif|png|css|js|ico|webp|tiff|ttf|svg)$ {
        expires 5d;
    }

    # deny access to . files, for security
    #
    location ~ /\. {
        log_not_found off;
        deny all;
    }

    location ^~ /.well-known {
        allow all;
        auth_basic off;
    }
}
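For what it's worth, here is a minimal sketch of the kind of change that usually makes this PATH_INFO-style routing work with the config above (untested here, and assuming the Slim-style routing and the php-fpm socket shown in the question). The \.php$ location never matches a URI like /index.php/ping, so the internal redirect from the /ping block falls through to location / and returns 404. Matching .php followed by a path, and passing PATH_INFO to php-fpm, addresses that:
    location / {
        # Fall back to index.php/<route> for app routes such as /ping and /time
        try_files $uri $uri/ /index.php$uri$is_args$args;
    }

    # \.php(/|$) also matches /index.php/ping, which plain \.php$ does not
    location ~ \.php(/|$) {
        fastcgi_split_path_info ^(.+?\.php)(/.*)$;
        set $path_info $fastcgi_path_info;    # try_files can clear $fastcgi_path_info, so capture it first
        try_files $fastcgi_script_name =404;
        fastcgi_pass unix:/var/run/php-fpm.sock;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $path_info;
        fastcgi_index index.php;
        include fastcgi_params;
    }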

Related

Drupal site shows the content of the index.php file

I'm trying to move a Drupal site I started on my localhost to a server at home. The database has been exported from my localhost and imported on the server.
The content of the nginx.conf file is as follows:
events {
    worker_connections 768;
    # multi_accept on;
}

http {
    server {
        listen 443 ssl;

        ######## S S L CONFIGURATIONS ##################
        ssl_certificate /etc/ssl/Nov2021/STAR_site.edu.co.crt;
        ssl_certificate_key /etc/ssl/Nov2021/site.edu.co.key;

        access_log /var/log/nginx/KNH_nginx.vhost.access.log;
        error_log /var/log/nginx/KNH_nginx.vhost.error.log;

        root /var/www/html/arctic_kittiwake;
        index index.php index.html index.htm;
        ###################################################

        server_name site.edu.co

        location / {
            #try_files $uri $uri/ /index.php?q=$uri&$args;
            try_files $uri /index.php?q=$uri$args;
        }

        location /site/ {
            if (!-e $request_filename){
                rewrite ^/site/(.*)$ /site/index.php break;
            }
        }

        location ~ \.php$ {
            fastcgi_index index.php;
        }

        location = /favicon.ico {
            log_not_found off;
            access_log off;
        }

        location = /robots.txt {
            allow all;
            log_not_found off;
            access_log off;
        }

        location ~ /\. {
            deny all;
        }

        location ~* /(?:uploads|files)/.*\.php$ {
            deny all;
        }
    }
}
This file is stored in /etc/nginx, and the Drupal site is stored in the /var/www/html/arctic_kittiwake/ directory.
I also have php7.4-fpm and mariadb-10.3 installed.
You are missing the connection to php-fpm.
Example:
# In Drupal 8, we must also match new paths where the '.php' appears in
# the middle, such as update.php/selection. The rule we use is strict,
# and only allows this pattern with the update.php front controller.
# This allows legacy path aliases in the form of
# blog/index.php/legacy-path to continue to route to Drupal nodes. If
# you do not have any paths like that, then you might prefer to use a
# laxer rule, such as:
#   location ~ \.php(/|$) {
# The laxer rule will continue to work if Drupal uses this new URL
# pattern with front controllers other than update.php in a future
# release.
location ~ '\.php$|^/update.php' {
    fastcgi_split_path_info ^(.+?\.php)(|/.*)$;
    # Ensure the php file exists. Mitigates CVE-2019-11043
    try_files $fastcgi_script_name =404;
    # Security note: If you're running a version of PHP older than the
    # latest 5.3, you should have "cgi.fix_pathinfo = 0;" in php.ini.
    # See http://serverfault.com/q/627903/94922 for details.
    include fastcgi_params;
    # Block httpoxy attacks. See https://httpoxy.org/.
    fastcgi_param HTTP_PROXY "";
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_param PATH_INFO $fastcgi_path_info;
    fastcgi_param QUERY_STRING $query_string;
    fastcgi_intercept_errors on;
    # PHP 5 socket location.
    #fastcgi_pass unix:/var/run/php5-fpm.sock;
    # PHP 7 socket location.
    fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
}
The full example is here: https://www.nginx.com/resources/wiki/start/topics/recipes/drupal/
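One detail worth noting: the question mentions php7.4-fpm, while the recipe above passes to a php7.0 socket. The socket path has to match the php-fpm version and pool that is actually installed; on Debian/Ubuntu-style layouts that would typically be something like the following (an assumption, so verify the path on the server):
    # Adjust to the php-fpm socket that actually exists on the server (assumed path)
    fastcgi_pass unix:/var/run/php/php7.4-fpm.sock;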

Configuring adminer with NGINX: "no port in upstream php" error

I have set up a MeteorJs application with a MongoDB backend on Digital Ocean. I am now trying to set up Adminer so I can query MongoDB without opening inbound ports on my droplet. Every time I try to reload the nginx settings, I get the error nginx: [emerg] no port in upstream "php" in /etc/nginx/sites-enabled/admin:32.
What am I missing?
/etc/nginx/sites-enabled/admin
server {
    listen 80;
    server_name 165.227.197.220;
    return 301 https://$server_name$request_uri;
}

server {
    server_name 165.227.197.220;
    listen 443 ssl;

    access_log /var/log/nginx/admin.access.log;
    error_log /var/log/nginx/admin.error.log;

    ssl_certificate /etc/nginx/ssl/admin.crt;
    ssl_certificate_key /etc/nginx/ssl/admin.key;

    root /var/www/admin;
    index adminer.php;

    # Get file here https://codex.wordpress.org/Nginx
    include global/restrictions.conf;

    auth_basic "Restricted";
    auth_basic_user_file /etc/nginx/.htpasswd/admin;

    location / {
        try_files $uri $uri/ =404;
    }

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass php;
        fastcgi_index adminer.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}
/etc/nginx/global/restrictions.conf
# Global restrictions configuration file.
# Designed to be included in any server {} block.
location = /favicon.ico {
    log_not_found off;
    access_log off;
}

location = /robots.txt {
    allow all;
    log_not_found off;
    access_log off;
}

# Deny all attempts to access hidden files such as .htaccess, .htpasswd, .DS_Store (Mac).
# Keep logging the requests to parse later (or to pass to firewall utilities such as fail2ban)
location ~ /\. {
    deny all;
}

# Deny access to any files with a .php extension in the uploads directory
# Works in sub-directory installs and also in multisite network
# Keep logging the requests to parse later (or to pass to firewall utilities such as fail2ban)
location ~* /(?:uploads|files)/.*\.php$ {
    deny all;
}
You need to set up an upstream. For example, using WordPress, this is what we have:
upstream index_php_upstream {
    server 127.0.0.1:8090; # NGINX Unit backend address for index.php with
                           # 'script' parameter
}
For more about upstreams, see this article: Upstream - Nginx.
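In the posted config the error comes from fastcgi_pass php;: a bare name without a port is only valid if it refers to an upstream block (or resolves together with a port). Here is a minimal sketch of the two usual ways to satisfy that; the socket path is an assumption about how php-fpm is installed on the droplet:
    # Option 1: define the upstream that "fastcgi_pass php;" refers to
    upstream php {
        server unix:/var/run/php/php7.4-fpm.sock;  # assumed socket path; adjust to your php-fpm pool
    }

    # Option 2: skip the upstream and point fastcgi_pass at the backend directly,
    # e.g. inside the existing "location ~ \.php$" block:
    #     fastcgi_pass unix:/var/run/php/php7.4-fpm.sock;
    #     fastcgi_pass 127.0.0.1:9000;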

NGINX: access multiple sites on the same IP/URL

I would like to know how I can have several sites on nginx and be able to access each of them via the same IP (without a domain, since I am doing tests locally in a lab).
I have the server on a separate PC and I access it remotely from my computer using the IP. Both are on the same LAN.
In the directory /var/www/ I have two sites, 'nextcloud' and 'phpmyadmin'. I would like to be able to reach both by entering (for example) 192.168.1.14/nextcloud and 192.168.1.14/phpmyadmin, or any other project in the www directory.
I tried all the solutions I found, but none of them worked for me. When I go to phpmyadmin, for example, it offers to download the page instead of displaying it.
Within /etc/nginx/sites-enabled I have the two files. The one for nextcloud:
server {
    listen 80 default_server;
    listen [::]:80 default_server;

    root /var/www/nextcloud/;
    index index.php index.html index.htm;
    server_name localhost;

    client_max_body_size 512M;
    fastcgi_buffers 64 4K;

    location / {
        root /var/www/nextcloud;
        rewrite ^ /index.php$request_uri;
    }

    location ~ ^/(?:build|tests|config|lib|3rdparty|templates|data)/ {
        deny all;
    }

    location ~ ^/(?:\.|autotest|occ|issue|indie|db_|console) {
        deny all;
    }

    location ~ ^/(?:index|remote|public|cron|core/ajax/update|status|ocs/v[12]|updater/.+|ocs-provider/.+)\.php(?:$|/) {
        fastcgi_split_path_info ^(.+?\.php)(/.*)$;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
        fastcgi_pass unix:/var/run/php/php7.2-fpm.sock;
        fastcgi_intercept_errors on;
        fastcgi_request_buffering off;
    }

    location ~ ^/(?:updater|ocs-provider)(?:$|/) {
        try_files $uri/ =404;
        index index.php;
    }

    location ~ \.(?:css|js|woff|svg|gif)$ {
        try_files $uri /index.php$request_uri;
        add_header Cache-Control "public, max-age=15778463";
        access_log off;
    }

    location ~ \.(?:png|html|ttf|ico|jpg|jpeg)$ {
        try_files $uri /index.php$request_uri;
        # Optional: Don't log access to other assets
        access_log off;
    }
}
And the one for phpmyadmin:
server {
    listen 80;
    listen [::]:80;

    root /var/www/phpmyadmin/;

    # Add index.php to the list if you are using PHP
    index index.php index.html index.htm index.nginx-debian.html;
    server_name localhost;

    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        try_files $uri $uri/ =404;
    }

    # pass PHP scripts to FastCGI server
    #
    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        #
        # # With php-fpm (or other unix sockets):
        fastcgi_pass unix:/var/run/php/php7.2-fpm.sock;
        # # With php-cgi (or other tcp sockets):
        # fastcgi_pass 127.0.0.1:9000;
    }
}
I also tried creating two test folders in /var/www/ (test1 and test2), each with an index.html file inside, and modifying the nginx default file, but that didn't work for me either:
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name localhost;
    index index.html;

    location / {
        return 410; # Default root of site won't exist.
    }

    location /test1/ {
        alias /var/www/test1/;
        try_files $uri $uri/ =404;
        # any additional configuration for non-static content
    }

    location /test2/ {
        alias /var/www/test2/;
        try_files $uri $uri/ =404;
        # any additional configuration for non-static content
    }
}
As I said, I tried different solutions. Another problem I had was that it only redirected me to nextcloud, even though I put phpmyadmin in the URL. And there is the one I already mentioned: when I open the site, it downloads index.php instead of running it. Thank you.
Sorry for my English.
Simply add nextcloud.my and phpmyadmin.my to your hosts file and listen for those domain names in nginx.
The path-based option you proposed can also be made to work, but it is error-prone, and difficulties can occur when moving it to a production server.
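Here is a minimal sketch of what that suggestion looks like, using the 192.168.1.14 address from the question and the made-up .my names (the hosts entries go on the client machine; the contents of the two existing server blocks otherwise stay the same):
    # On the client: /etc/hosts (or C:\Windows\System32\drivers\etc\hosts)
    #   192.168.1.14  nextcloud.my phpmyadmin.my

    # Name-based virtual hosts: nginx picks the server block whose
    # server_name matches the Host header sent by the browser.
    server {
        listen 80;
        server_name nextcloud.my;
        root /var/www/nextcloud;
        # ... rest of the existing nextcloud configuration ...
    }

    server {
        listen 80;
        server_name phpmyadmin.my;
        root /var/www/phpmyadmin;
        # ... rest of the existing phpmyadmin configuration ...
    }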

Nginx - Redirecting all requests, including .php, to a single PHP script?

I've been Googling this for a while but can't seem to find a solution.
At the moment I have a config file set up in nginx to send all requests, regardless of file extension, to a single index.php file. However, it ignores requests ending with .php and will throw a 404 if the file is not there, or try to execute it if it is.
How can I configure nginx to send .php requests to the index.php file too, so I can use it to handle all file requests, not just non-PHP files?
My config file currently looks like the following:
server {
    listen 80;
    listen 443;
    ssl on;
    ssl_certificate /somecrt.crt;
    ssl_certificate_key /somekey.key;

    root /sites/;
    index index.php;
    server_name somesite.net;

    access_log /sites/logs/access.log;
    error_log /sites/logs/error.log;

    location ~ /\. { deny all; }

    location / {
        # First attempt to serve request as file, then
        # as directory then fall back to index.php
        try_files $uri $uri/ /index.php?$args;
        # Uncomment to enable naxsi on this location
        # include /etc/nginx/naxsi.rules
    }

    location ~ \.php$ {
        try_files $uri /index.php?$args =404;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        # fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        include fastcgi_params;
    }
}
After some more Googling, "nginx: Map single static URL to a PHP file" helped me figure out the solution. The new config is now:
server {
    listen 80;
    listen 443;
    ssl on;
    ssl_certificate /somecrt.crt;
    ssl_certificate_key /somekey.key;

    root /sites/;
    index index.php;
    server_name somesite.net;

    access_log /sites/logs/access.log;
    error_log /sites/logs/error.log;

    location ~ /\. { deny all; }

    location / {
        include fastcgi_params;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_param SCRIPT_FILENAME $document_root/index.php;
    }
}
With this config, all requests will be sent to the single index.php.
Of course, this will include static files such as image files, which will probably impact nginx server performance. In that case, you might want to add another location block before it to exclude certain kinds of requests.
For example, to exclude jpgs and gifs:
location ~ \.(jpg|gif) {
    try_files $uri =404;
}
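A slightly broader variant of the same idea, assuming the usual set of static extensions and anchoring the regex at the end of the URI (without the $ anchor, a path such as /photo.jpg.php would also match):
    location ~* \.(jpg|jpeg|gif|png|css|js|ico|svg)$ {
        try_files $uri =404;
        expires 7d;        # optional client-side caching for static assets
        access_log off;
    }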

Does nginx fastcgi_pass support variables?

I would like to use dynamic host resolution with nginx and fastcgi_pass.
When fastcgi_pass $wphost:9000; is set in the conf, nginx displays the error
[error] 7#7: *1 wordpress.docker could not be resolved (3: Host not found),
but when I set fastcgi_pass wordpress.docker:9000; it works, except that after a WordPress restart nginx still points to the old IP.
server {
    listen [::]:80;
    include /etc/nginx/ssl/ssl.conf;
    server_name app.domain.*;

    root /var/www/html;
    index index.php index.html index.htm;

    resolver 172.17.42.1 valid=60s;
    resolver_timeout 3s;

    location / {
        try_files $uri $uri/ /index.php?q=$uri&$args; ## First attempt to serve request as file, then as directory, then fall back to index.html
    }

    #error_page 404 /404.html;
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }

    set $wphost wordpress.docker;

    # pass the PHP scripts to FastCGI server listening on wordpress.docker
    location ~ \.php$ {
        client_max_body_size 25M;
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass $wphost:9000;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param SCRIPT_NAME $fastcgi_script_name;
        fastcgi_index index.php;
        include fastcgi_params;
    }

    location ~* \.(jpg|jpeg|gif|png|css|js|ico|xml)$ {
        add_header Pragma public;
        add_header Cache-Control "public, must-revalidate, proxy-revalidate";
        access_log off;
        log_not_found off;
        expires 168h;
    }

    # deny access to . files, for security
    location ~ /\. {
        access_log off;
        log_not_found off;
        deny all;
    }
}
I have a different virtual host configuration where I use proxy_pass http://$hostname;, and in that setup everything works as expected and the host is found.
After trying different options, I wonder whether fastcgi_pass supports variables at all.
It does work if you pass the upstream as a variable instead, for example:
upstream fastcgi_backend1 {
    server php_node1:9000;
}

upstream fastcgi_backend2 {
    server php_node2:9000;
}

server {
    listen 80;
    server_name _;

    set $FASTCGI_BACKEND1 fastcgi_backend1;
    set $FASTCGI_BACKEND2 fastcgi_backend2;

    location ~ ^/site1/index.php$ {
        fastcgi_pass $FASTCGI_BACKEND1;
    }

    location ~ ^/site2/index.php$ {
        fastcgi_pass $FASTCGI_BACKEND2;
    }
}
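For completeness, fastcgi_pass does accept variables directly as well: per the nginx documentation, when the value contains variables the name is first looked up among the defined upstreams and otherwise resolved at request time through the resolver directive. A minimal sketch, assuming the containers sit on a user-defined Docker network where Docker's embedded DNS listens on 127.0.0.11 (an assumption about the environment, not something stated in the question):
    resolver 127.0.0.11 valid=10s;   # Docker's embedded DNS on user-defined networks (assumed)

    location ~ \.php$ {
        set $wphost wordpress.docker;
        # Because the address is a variable, the name is re-resolved at request
        # time, so a restarted container with a new IP is picked up without
        # reloading nginx.
        fastcgi_pass $wphost:9000;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }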
