$_SERVER['REMOTE_ADDR'] doesn't work with php-fpm and nginx - php

I don't know why, but with nginx the variable $_SERVER['REMOTE_ADDR'] doesn't return an IP. On every other web server it works as it should.
Any suggestions?

I suspect it has something to do with the interface between nginx (the web server) and FastCGI, the API under which PHP is running.
According to the info you provided, the Server API is FPM/FastCGI.
I suggest you take a hard look at the details of how PHP is set up with nginx (you have not provided any of them).
If you do not require the performance of nginx, a pragmatic solution may be to just use Apache. I use nginx as a reverse proxy in front of Apache, but that introduces some additional issues with getting REMOTE_ADDR passed through to PHP (mod_rpaf handles this on the Apache side).
Good luck!
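For the reverse-proxy variant mentioned above, the usual approach is to forward the client address in a request header and let mod_rpaf restore REMOTE_ADDR on the Apache side. A minimal sketch of the nginx side, assuming Apache listens on 127.0.0.1:8080 (a hypothetical address):
location / {
    # pass the real client address on to the backend
    proxy_pass http://127.0.0.1:8080;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}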

@Michael, here is a project I maintain that provides the proper FastCGI parameters for interfacing nginx with PHP-FPM. Hope it helps.
fastcgi_params on GitHub
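For context: the REMOTE_ADDR that PHP sees is whatever nginx sends as a FastCGI parameter, so the include fastcgi_params; line (or an equivalent file such as the one linked above) has to be present in the PHP location. A stock fastcgi_params typically contains lines like these (exact contents vary by distribution):
fastcgi_param REMOTE_ADDR $remote_addr;
fastcgi_param REMOTE_PORT $remote_port;
fastcgi_param SERVER_ADDR $server_addr;
fastcgi_param SERVER_PORT $server_port;
fastcgi_param SERVER_NAME $server_name;
If REMOTE_ADDR still never reaches PHP with the include in place, adding the first line explicitly inside the location ~ \.php$ block is a quick way to confirm whether the parameter file is the problem.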

This is from my nginx conf file:
user http;
worker_processes 1;
error_log /var/log/nginx/error.log;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
# multi_accept on;
}
http {
include mime.types;
default_type application/octet-stream;
access_log /var/log/nginx/access.log;
sendfile on;
#tcp_nopush on;
#keepalive_timeout 0;
keepalive_timeout 65;
tcp_nodelay on;
gzip on;
gzip_disable "MSIE [1-6]\.(?!.*SV1)";
server {
listen 80;
server_name www.fireangel.ro fireangel.ro;
access_log /var/log/nginx/localhost.access.log;
# Default location
location / {
root /var/www/html/fireangel.ro/public_html;
index index.php;
}
# Images and static content are treated differently
location ~* ^.+\.(jpg|jpeg|gif|css|png|js|ico|xml)$ {
access_log off;
expires 30d;
root /var/www/html/fireangel.ro/public_html;
}
# Parse all .php files in the /srv/http directory
location ~ \.php$ {
fastcgi_split_path_info ^(.+\.php)(.*)$;
fastcgi_pass backend;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME /var/www/html/fireangel.ro/public_html$fastcgi_script_name;
include fastcgi_params;
fastcgi_intercept_errors on;
fastcgi_ignore_client_abort off;
fastcgi_connect_timeout 60;
fastcgi_send_timeout 180;
fastcgi_read_timeout 180;
fastcgi_buffer_size 128k;
fastcgi_buffers 4 256k;
fastcgi_busy_buffers_size 256k;
fastcgi_temp_file_write_size 256k;
}
# Disable viewing .htaccess & .htpassword
location ~ /\.ht {
deny all;
}
}
upstream backend {
server 127.0.0.1:9000;
}
}

Related

How to fix 502 Bad Gateway Error in nginx php

When a huge request hits the nginx server, it returns a 502 Bad Gateway error. I have tried multiple answers from Stack Overflow, including How to fix 502 Bad Gateway Error in production(Nginx)?, but nothing works for me. Can someone help?
worker_processes 1;
daemon off;
user root;
events {
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
sendfile on;
keepalive_timeout 330;
client_max_body_size 512M;
server_tokens off;
gzip on;
gzip_types application/json;
access_log /dev/stdout;
error_log /dev/stdout;
# Adding proxy timeout
proxy_read_timeout 330;
proxy_connect_timeout 330;
proxy_send_timeout 330;
server {
listen 80;
server_name _;
root /var/www/html/public;
index index.php index.html index.htm;
underscores_in_headers on;
access_log /dev/stdout;
error_log /dev/stdout;
location ~ \.php$ {
try_files $uri =404;
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param SCRIPT_NAME $fastcgi_script_name;
include /etc/nginx/fastcgi_params;
fastcgi_send_timeout 330;
fastcgi_read_timeout 330;
fastcgi_busy_buffers_size 16k;
fastcgi_buffer_size 16k;
fastcgi_buffers 4 16k;
}
location / {
# try to serve file directly, fallback to index.php
try_files $uri /index.php$is_args$args;
}
}
}
"When a huge request hits" probably means your script (which is behind a load balancer) runs longer than the LB timeout allows. While your script is still working (and hasn't answered yet), the LB decides it has crashed and drops the connection. BANG! Bad Gateway.
LB timeouts are usually 60-120 seconds, but in rare cases up to 5 minutes.
What you can try:
Reduce the script runtime so it stays below the LB timeout
Send output to the client while the script is working, so traffic keeps passing through the proxy and the connection stays alive (see the buffering sketch below)
Change the concept (e.g. push the work onto an offline queue)
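For the second option, keep in mind that nginx buffers FastCGI responses by default, so partial output from PHP may never reach the load balancer while the script is still running. A minimal sketch of turning buffering off inside the existing PHP location (directive available in nginx 1.5.6+):
location ~ \.php$ {
    # let partial output stream straight through to the client / load balancer
    fastcgi_buffering off;
    # ... existing fastcgi_pass and fastcgi_param lines stay unchanged ...
}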

Serving Laravel from a directory

I am currently using a JavaScript framework that is served by nginx, for example at the following URL:
www.myjsapp.com
I am also using Laravel 5.6 to build an API.
Instead of setting up two hosts, one for the JS app and one for Laravel, I want to be able to serve the Laravel API at the following URL.
www.jsapp.com/api
Is this possible, or do I always have to use two hosts?
The nginx server block for myjsapp.com is as follows:
server {
listen 80;
listen 443 ssl http2;
server_name .myjsapp.com;
root "/home/project/myjsapp";
index index.html index.htm index.php;
charset utf-8;
location / {
try_files $uri $uri/ /index.php?$query_string;
}
location = /favicon.ico { access_log off; log_not_found off; }
location = /robots.txt { access_log off; log_not_found off; }
access_log off;
error_log /var/log/nginx/myjsapp.com-error.log error;
sendfile off;
client_max_body_size 100m;
location ~ \.php$ {
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_intercept_errors off;
fastcgi_buffer_size 16k;
fastcgi_buffers 4 16k;
fastcgi_connect_timeout 300;
fastcgi_send_timeout 300;
fastcgi_read_timeout 300;
}
location ~ /\.ht {
deny all;
}
ssl_certificate /etc/nginx/ssl/myjsapp.com.crt;
ssl_certificate_key /etc/nginx/ssl/myjsapp.com.key;
}
You can separate the two apps using location blocks in your nginx config file:
different location blocks capture different URL prefixes and pass them to the respective backends (Node or Laravel).
server {
server_name mysjapp.com;
#other configurations like root, logs
location / {
#node server config
}
location /api {
#laravel server config
}
}
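To make that skeleton more concrete, here is a hedged sketch of the /api block, assuming Laravel is installed at /home/project/laravel (a hypothetical path) and PHP-FPM listens on 127.0.0.1:9000 as in your main block. Laravel 5.6 already prefixes the routes in routes/api.php with /api, so the front controller can simply handle everything under that prefix; treat this as a starting point rather than a drop-in config:
location /api {
    include fastcgi_params;
    fastcgi_pass 127.0.0.1:9000;
    # hand every /api request to Laravel's front controller (hypothetical path)
    fastcgi_param SCRIPT_FILENAME /home/project/laravel/public/index.php;
    # so the framework computes its base URL relative to the site root
    fastcgi_param SCRIPT_NAME /index.php;
}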

Nginx + PHP-FPM 7.1 - 504 Gateway Time-out

I'm running nginx 1.12 and PHP-FPM 7.1 as separate Docker containers on a Synology NAS, and I get a 504 Gateway Time-out error whenever the PHP script runs longer than 60 seconds. I've already tried several nginx configuration parameters, but the error persists.
Here is my actual nginx config:
#user www-data;
#group http
worker_processes 1;
error_log /opt/data/logs/nginx_error.log notice;
events {
worker_connections 1024;
}
http {
include mime.types;
default_type application/octet-stream;
#keepalive_timeout 30s;
sendfile on;
#tcp_nopush off;
tcp_nodelay on;
#gzip off;
send_timeout 300;
server {
listen 80;
server_name "";
root /opt/php;
index index.php;
location /data/ {
sendfile on;
root /opt;
}
location ~ \.php$ {
include fastcgi_params;
fastcgi_split_path_info ^(.+?\.php)(/.*)$;
if (!-f $document_root$fastcgi_script_name) {
return 404;
}
# Mitigate https://httpoxy.org/ vulnerabilities
fastcgi_param HTTP_PROXY "";
fastcgi_pass php:9000;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_read_timeout 300;
#fastcgi_buffering off;
#fastcgi_keep_conn on;
#fastcgi_intercept_errors on;
#fastcgi_cache off;
#fastcgi_ignore_client_abort on;
}
location ~ ^/(status|ping)$ {
access_log off;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_pass php:9000;
}
}
}
The PHP test script:
<?php
sleep(65);
echo "done!";
file_put_contents("/opt/data/timetest.txt", "\nEnd", FILE_APPEND);
After 60 seconds the browser shows the 504 Gateway Time-out. The PHP script keeps running and still writes the text to the file.
Nginx errorlog:
2017/07/22 08:16:32 [error] 8#8: *10 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 172.17.0.1, server: , request: "GET /timetest.php HTTP/1.1", upstream: "fastcgi://172.17.0.3:9000", host: "192.168.0.100:8081"
Does anyone have an idea?
The real question is probably: why does your backend take so long to respond? I'm not sure about your use case, but normally it's not user-friendly to make people wait that long for a response.
To answer your question:
I found this link: https://easyengine.io/tutorials/php/increase-script-execution-time/
Add in /etc/php5/fpm/php.ini
max_execution_time = 300
Set in /etc/php5/fpm/pool.d/www.conf
request_terminate_timeout = 300
Set in /etc/nginx/nginx.conf
http {
#...
fastcgi_read_timeout 300;
#...
}
And in your config:
location ~ \.php$ {
include /etc/nginx/fastcgi_params;
fastcgi_pass unix:/var/run/php5-fpm.sock;
fastcgi_read_timeout 300;
}
And reload services
service php5-fpm reload
service nginx reload
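Note that those paths are for PHP 5. With the PHP-FPM 7.1 container from the question, the usual equivalents (assuming a Debian/Ubuntu-style layout inside the image) are shown below, and the fastcgi_pass stays php:9000; rather than a Unix socket:
; /etc/php/7.1/fpm/php.ini
max_execution_time = 300
; /etc/php/7.1/fpm/pool.d/www.conf
request_terminate_timeout = 300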

Nginx only loads the index.php

I have a problem with my nginx configuration. We have a Vagrant box at the firm, and inside it we run LXC containers for the individual services: an nginx container, a php-fpm container, a memcached container, a mysql container, and so on. They connect to each other: nginx talks to php-fpm, and php-fpm talks to memcached and mysql. I can reach nginx from outside the Vagrant box over HTTPS. Here is my nginx configuration:
nginx.conf:
user nginx nginx;
worker_processes 4;
pid /var/run/nginx.pid;
worker_rlimit_nofile 1024;
events {
worker_connections 2048;
use epoll;
multi_accept on;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
access_log "/var/log/nginx/access.log";
error_log "/var/log/nginx/error.log";
keepalive_timeout 120;
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
default.conf:
server {
listen *:80;
server_name vagrant.ceg.com;
return 301 'https://$server_name$request_uri';
}
https.conf:
server {
listen *:443;
ssl on;
ssl_certificate ....crt;
ssl_certificate_key ....key;
server_name vagrant.ceg.com www.vagrant.ceg.com;
root "/srv/www";
index index.php;
location / {
autoindex on;
}
location ~ \.php$ {
include /etc/nginx/fastcgi_params;
fastcgi_param ENVIRONMENT dev;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_index index.php;
if (-f $request_filename) {
fastcgi_pass 192.168.42.114:9000;
}
}
}
When I open it in the browser I do get index.php, but very slowly, and I get errors like this on the console:
https://www.vagrant.ceg.com/cdn/util/scale/320/320/dev-employer-images/0636443e3076af9d24ba2b1711f57fb47b60f289.jpg Failed to load resource: net::ERR_CONNECTION_TIMED_OUT

How to create a virtual host in nginx? (and an AJAX call issue)

I am using the WT-NMP software stack, a combination of PHP, MySQL, and nginx.
worker_processes 1;
events {
worker_connections 1024;
}
http {
server_tokens off;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
ssi off;
#Timeouts
client_body_timeout 5;
client_header_timeout 5;
keepalive_timeout 25 25;
send_timeout 15s;
resolver_timeout 3s;
#Directive sets timeout period for connection with FastCGI-server. It should be noted that this value can't exceed 75 seconds.
fastcgi_connect_timeout 5s;
#Directive sets the amount of time for upstream to wait for a fastcgi process to send data. Change this directive if you have long running fastcgi processes that do not produce output until they have finished processing. If you are seeing an upstream timed out error in the error log, then increase this parameter to something more appropriate.
fastcgi_read_timeout 40s;
#Directive specifies request timeout to the server. The timeout is calculated between two write operations, not for the whole request. If no data have been written during this period then serve closes the connection.
fastcgi_send_timeout 15s;
fastcgi_buffers 8 32k;
fastcgi_buffer_size 32k;
#fastcgi_busy_buffers_size 256k;
#fastcgi_temp_file_write_size 256k;
open_file_cache off;
#php max upload limit cannot be larger than this
client_max_body_size 8m;
####client_body_buffer_size 1K;
client_header_buffer_size 1k;
large_client_header_buffers 2 1k;
types_hash_max_size 2048;
include nginx.mimetypes.conf;
default_type text/html;
##
# Logging Settings
##
access_log "c:/wt-nmp/log/nginx_access.log";
error_log "c:/wt-nmp/log/nginx_error.log" warn; #debug or warn
log_not_found on; #enables or disables messages in error_log about files not found on disk.
rewrite_log off;
#Leave this off
fastcgi_intercept_errors off;
gzip off;
index index.php index.htm index.html;
server {
listen 127.0.0.1:80 default_server;
listen 127.0.0.1:8080;
#listen [::1]:80 ipv6only=on;
server_name mylocalhost;
root "c:/wt-nmp/www/projectname";
autoindex on;
error_log "c:/wt-nmp/log/nginx_error.log";
allow 127.0.0.1;
#allow ::1;
deny all;
# deny access to .htaccess files, if Apache's document root
# concurs with nginx's one
#
location ~ /\.ht {
deny all;
}
#tools are now served from wt-nmp/include/tools/
location ~ ^/tools/.*\.php$ {
root "c:/wt-nmp/include";
try_files $uri =404;
include nginx.fastcgi.conf;
fastcgi_pass php_farm;
}
location ~ ^/tools/ {
root "c:/wt-nmp/include";
}
location ~ \.php$ {
try_files $uri =404;
fastcgi_pass php_farm;
include nginx.fastcgi.conf;
}
}
include domains.d/*.conf;
include nginx.phpfarm.conf;
}
When I access the site via "mylocalhost" it works fine, but when I fire an event that makes an AJAX call, it gives a "page not found" message.
The WT-NMP (portable Nginx MySQL PHP development stack for Windows) README.md states:
Starting only one PHP-CGI server with wt-nmp.exe --phpCgiServers=1 will result in slow ajax requests since Nginx will not be able to process PHP scripts simultaneous.
So, make sure you use the latest version of WT-NMP and choose at least 3 PHP-CGI servers.
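For example, assuming the same command-line switch quoted from the README applies (the count of 3 here is just an illustration):
wt-nmp.exe --phpCgiServers=3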
