I'm experiencing the weirdest problem. I've been working on a file upload issue for the last few days and have been using phpinfo() to track changes to INI settings. The last time I touched it was two days ago... it all worked then.
Today, phpinfo() is causing a 503 Service Temporarily Unavailable error. Here's the weird part: the website works fine! It's PHP + MySQL driven and I can move around in it just fine (well... other than the issues I'm working on). But as soon as I add phpinfo() as the first thing in the <body> block... I get the error.
I've even tried creating a one-line file: <?php phpinfo(); ?> This dies with a 503 error, too.
The error log contains this:
[Fri Sep 15 14:22:31.192593 2017] [proxy_fcgi:error] [pid 2695] (104)Connection reset by peer: [client 67.161.220.240:44230] AH01075: Error dispatching request to :
I've restarted Apache and Nginx with no errors. Does anyone have an idea about what service or some such might have died on my machine to cause this?
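Since AH01075 with "Connection reset by peer" means Apache's proxy_fcgi couldn't get a response back from the FastCGI backend, a reasonable first check (a sketch, not from the original post; service names and log paths vary by distribution and PHP version) is whether PHP-FPM is still up and listening:
$ sudo systemctl status php-fpm
$ sudo ss -lnp | grep -E 'php-fpm|:9000'
$ sudo tail -n 50 /var/log/php-fpm/error.log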
NGINX Config File
#user nginx;
worker_processes 1;
#error_log /var/log/nginx/error.log;
#error_log /var/log/nginx/error.log notice;
#error_log /var/log/nginx/error.log info;
#pid /var/run/nginx.pid;
include /etc/nginx/modules.conf.d/*.conf;
events {
worker_connections 1024;
}
http {
include mime.types;
default_type application/octet-stream;
#log_format main '$remote_addr - $remote_user [$time_local] "$request" '
# '$status $body_bytes_sent "$http_referer" '
# '"$http_user_agent" "$http_x_forwarded_for"';
#access_log /var/log/nginx/access.log main;
sendfile on;
#tcp_nopush on;
#keepalive_timeout 0;
keepalive_timeout 65;
#tcp_nodelay on;
#gzip on;
#gzip_disable "MSIE [1-6]\.(?!.*SV1)";
server_tokens off;
include /etc/nginx/conf.d/*.conf;
}
# override global parameters e.g. worker_rlimit_nofile
include /etc/nginx/*global_params;
Is it possible that this is the problem:
Apache, FastCGI - Error 503
(i.e. a FastCGI configuration issue?)
I solved this by logging into PLESK. I went to Tools & Settings -> Services Management.
Then I restarted the PHP-FPM 5.6.33 service (this may vary depending on the version of PHP you are running).
After a restart, phpinfo() worked fine. I'm not sure what the issue was. As you experienced, my website was working fine, but it may have been causing other issues I'm not aware of.
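The same restart can be done without the Plesk UI; a minimal sketch (the exact unit name depends on the distribution and PHP version, e.g. php-fpm or php5.6-fpm):
$ systemctl list-units --type=service | grep -i fpm
$ sudo systemctl restart php-fpm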
Related
I have a task that has two containers on Fargate: one Java container and one PHP/nginx container.
The Java container maps its internal port 80 to the external port 8080, and the PHP container maps its internal port 80 to the external port 8081.
The task is using the awsvpc network mode. According to the AWS documentation, they should be able to communicate with each other via localhost once they are on Fargate.
I tried testing this by having the Java container send a request to the PHP container using "localhost:8081":
public boolean isPHPReachable() {
    System.out.println("Attempting to make request to PHP");
    try {
        final URL url = new URL("http://localhost:8081/test");
        final HttpURLConnection con = (HttpURLConnection) url.openConnection();
        con.setRequestMethod("GET");
        final int status = con.getResponseCode();
        System.out.printf("Get request to PHP resulted in a %d: %s", status, con.getResponseMessage());
        return status == 200;
    } catch (final IOException err) {
        err.printStackTrace();
        return false;
    }
}
The PHP code it should be hitting is this:
<?php
if ($_GET['q'] === 'test') {
echo('healthy');
die;
}
The 'q' parameter is set by nginx (see the config below).
However, I see that the CloudWatch logs show an error when Java tries to send the request to PHP:
22:55:04 Attempting to make request to PHP
22:55:04 java.net.ConnectException: Connection refused (Connection refused)
I tried to contact the PHP container directly, but the request times out. This does not occur when I attempt to reach Java.
Potential Problem 1: Nginx is misconfigured
I suspect this might be an nginx issue. However, I am not able to reproduce the problem when I create the PHP container on my local machine -- it's reachable when I ping localhost:8081.
The problem only occurs in Fargate.
Here is my configuration:
# For more information on configuration, see:
# * Official English Documentation: http://nginx.org/en/docs/
# * Official Russian Documentation: http://nginx.org/ru/docs/
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
# Load dynamic modules. See /usr/share/doc/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;
events {
worker_connections 1024;
}
http {
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
include /etc/nginx/mime.types;
default_type application/octet-stream;
# Load modular configuration files from the /etc/nginx/conf.d directory.
# See http://nginx.org/en/docs/ngx_core_module.html#include
# for more information.
include /etc/nginx/conf.d/*.conf;
server {
# Reject requests not matching any server block
listen 80 default_server;
server_name "";
return 410;
}
server {
listen 80;
root /var/www/public_html;
server_name localhost;
index index.php;
set_real_ip_from 0.0.0.0/0; # we trust that the webserver is firewalled
real_ip_header X-Forwarded-For;
set $webroot $document_root;
error_page 404 410 /404_empty.html;
autoindex off;
# Disable server signature
# (i.e. remove "nginx 1.10" or similar from error pages and response headers)
server_tokens off;
#################
# PHP locations #
#################
location / {
rewrite ^/(.*)$ /sites/all/index.php?q=$1;
error_page 404 /sites/all/index.php;
}
}
Potential Problem 2: Misconfigured security groups
I do not believe this is the case. The security group attached to this container allows inbound TCP requests to any port from any destination. It also allows all outbound requests.
Anyone spot an immediate issue? Any debugging tips?
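One debugging idea (a suggestion, not something from the original post): if you can get a shell inside the task, e.g. with ECS Exec, probe both the port the Java code targets and nginx's own container port, since the containers in an awsvpc task talk to each other over localhost:
$ curl -v http://localhost:8081/test
$ curl -v http://localhost:80/test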
I have nginx running, as confirmed by loading an HTML page.
The difficulty is running a .php page in the same location; this gives a 404 instead.
php-fpm is shown to be installed okay using php-fpm -t.
Therefore I am fairly sure the issue is in the .conf or a server block.
My goal is to set everything up using server blocks (aka VirtualHosts), since that makes it so much easier to manage different projects, so I have attempted to strip nginx.conf down to a minimum:
#user nobody;
worker_processes 1;
error_log /error.log;
#error_log logs/error.log notice;
#error_log logs/error.log info;
#pid logs/nginx.pid;
events {
worker_connections 1024;
}
http {
include mime.types;
default_type application/octet-stream;
#log_format main '$remote_addr - $remote_user [$time_local] "$request" '
# '$status $body_bytes_sent "$http_referer" '
# '"$http_user_agent" "$http_x_forwarded_for"';
#access_log logs/access.log main;
sendfile on;
#tcp_nopush on;
#keepalive_timeout 0;
keepalive_timeout 65;
#gzip on;
include /usr/local/etc/nginx/sites-enabled/*;
}
and the server block:
server {
listen 80;
server_name localhost;
# pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
location ~ \.php$ {
root /var/www;
index index.php index.html index.htm;
try_files $uri /index.php?&query_string;
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
fastcgi_read_timeout 60;
fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
include fastcgi_params;
}
}
index.php fails simply with the message 'File not found', though not in the usual nginx error-page format - just those words.
I had assumed this would be simple on a Unix-based OS. It seems Apple have messed with it too much to make it viable.
I have expertise in this and have an Ubuntu-based script which makes everything ready to go on that Linux flavour - apparently Apple no longer supports developers.
Oh well, let's see how long it takes to install a VirtualBox solution that I can edit from OS X.
This is an answer of sorts, pending details from other people answering, as I have had this working before - I just wanted a non-proxy solution and was prepared to spend a few days finding it. Alas, Apple prevents it.
I'm trying to set up ruTorrent on an up-to-date Arch Linux home server, but I can't get it running, mostly, I think, from inexperience and outdated documentation.
The problem:
I can't get rtorrent to talk to nginx. When I've had ruTorrent showing in nginx, it hasn't been able to connect to rtorrent, and gives this error, among many other plugin errors:
Bad response from server: (200 [parsererror,getuisettings])
Nginx's error log:
2015/07/24 17:29:49 [error] 11837#0: *19 upstream prematurely closed connection while reading response header from upstream, client: 192.168.x.<my remote machine>, server: localhost, request: "GET / HTTP/1.1", upstream: "scgi://unix:/home/user/scgi.socket:", host: "192.168.x.<my local nginx server>"
When visiting nginx's URL index:
502 Bad Gateway
In rtorrent:
(17:50:40) Closed XMLRPC log.
Config Files:
Here are the relevant parts of my config files:
.rtorrent.rc:
#encoding_list utf-8
scgi_local = /home/user/scgi.socket
execute = chmod,ug=rw\,o=,/home/user/scgi.socket
execute = chgrp,http,/home/user/scgi.socket
#scgi_port = 127.0.0.1:5000
nginx.conf:
http {
include mime.types;
default_type application/octet-stream;
sendfile on;
#tcp_nopush on;
#keepalive_timeout 0;
keepalive_timeout 65;
#gzip on;
log_format main '$remote_addr - $remote_user [$time_local] $status '
'"$request" $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /etc/nginx/logs/access.log main;
error_log /etc/nginx/logs/error.log notice;
server {
listen 80;
server_name localhost;
#charset koi8-r;
location / {
root /usr/share/nginx/html/rutorrent;
index index.html index.htm;
include scgi_params;
scgi_pass unix:/home/user/scgi.socket;
}
full rutorrent config.php file:
http://pastebin.com/1tAZb6DM
Admittedly most of this is jargon to me. I'm familiar with rtorrent, somewhat familiar with nginx, and I know the basic theory of networking. XML, SCGI/Unix sockets, and PHP are all beyond me, however (I only really know Python), and I'm totally clueless as to where I would even start learning. I think this might have something to do with "RPC2", but I really have no idea. blog.abevoelker.com didn't have RPC2 set in nginx.conf.
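For reference, a typical rtorrent XML-RPC setup exposes a dedicated nginx location (conventionally named /RPC2) that hands requests to rtorrent's SCGI socket; a minimal sketch, with the socket path taken from the .rtorrent.rc above and everything else illustrative:
location /RPC2 {
include scgi_params;
scgi_pass unix:/home/user/scgi.socket;
}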
Basically be gentle with me, and I really appreciate any help you folks have to offer me. I know I'm a newbie stumbling through the dark, but I'm attempting to learn.
Guides used/mashed together:
http://linoxide.com/ubuntu-how-to/setup-rtorrent-rutorrent/
https://blog.abevoelker.com/rtorrent_xmlrpc_over_nginx_scgi/
https://github.com/rakshasa/rtorrent/wiki/RPC-Setup-XMLRPC
https://github.com/Novik/ruTorrent/issues/974
I've set up an Nginx PHP server on a Linux RHEL machine.
When accessing HTML files all goes well, but when trying to access a PHP file, the file is downloaded instead of being executed.
This is my nginx.conf:
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
#tcp_nopush on;
keepalive_timeout 65;
#gzip on;
include /etc/nginx/conf.d/*.conf;
}
...and this is the server block:
server {
listen 80;
server_name {mywebsitename};
#access_log logs/host.access.log main;
location / {
root /usr/share/nginx/html/{mywebsitename}/;
}
location /ngx_status_2462 {
stub_status on;
access_log off;
allow all;
}
location ~ \.php$ {
# fastcgi_pass unix:/var/run/php5-fpm.sock;
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME /usr/share/nginx/html/{mywebsitename}$fastcgi_script_name;
include fastcgi_params;
}
error_page 404 /404.html;
location = /404.html {
root /usr/share/nginx/html;
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
}
It might be because of the MIME type you're sending:
default_type application/octet-stream;
See: http://mimeapplication.net/octet-stream
I just had this exact same problem. I was using Ubuntu 12.04 and Linux Mint 14, so a different OS, but likely to have the same issues.
A couple of issues may be happening. Firstly, you need to have php5-fpm installed (the FastCGI Process Manager). I was trying to run it with my standard version of PHP, but it was not working - http://www.php.net/manual/en/install.fpm.php
I also had Apache installed, and even though it wasn't running, it must have caused some conflict, because once I uninstalled Apache I was able to execute the PHP files.
I would also look at this line
fastcgi_pass 127.0.0.1:9000;
And consider changing it to
fastcgi_pass unix:/var/run/php5-fpm.sock;
Here is a detailed guide to installing Nginx and PHP5-FPM on RHEL (and other OSes):
http://www.if-not-true-then-false.com/2011/install-nginx-php-fpm-on-fedora-centos-red-hat-rhel/
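If you do switch to the Unix socket, PHP-FPM also has to be listening on that same path in its pool config (the file location varies, e.g. /etc/php5/fpm/pool.d/www.conf on Debian/Ubuntu or /etc/php-fpm.d/www.conf on RHEL-style installs; treat the path and user below as assumptions for your system):
; in the php5-fpm pool config
listen = /var/run/php5-fpm.sock
; owner/group should match the user nginx runs as
listen.owner = nginx
listen.group = nginx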
You need to change the user to nginx instead of apache in this file: /etc/php-fpm.d/www.conf
; Unix user/group of processes
; Note: The user is mandatory. If the group is not set, the default user's group
; will be used.
; RPM: apache Choosed to be able to access some dir as httpd
;user = apache
user = nginx
; RPM: Keep a group allowed to write in log dir.
;group = apache
group = nginx
and of course restart the services: service php-fpm restart and service nginx restart
Comment out default_type application/octet-stream;
When I try to upload a file to my site, I'm getting the Nginx "413 Request Entity Too Large" error; however, in my nginx.conf file I've already explicitly set the max size to about 250MB for now, and changed the max file size in php.ini as well (and yes, I restarted the processes). The error log gives me this:
2010/12/06 04:15:06 [error] 20124#0: *11975 client intended to send too large body: 1144149 bytes, client: 60.228.229.238, server: www.x.com, request: "POST /upload HTTP/1.1", host: "x.com", referrer: "http://x.com/"
As far as I know, 1144149 bytes isn't 250MB...
Is there something I'm missing here?
Here's the base Nginx config:
user nginx;
worker_processes 8;
worker_rlimit_nofile 100000;
error_log /var/log/nginx/error.log;
#error_log /var/log/nginx/error.log notice;
#error_log /var/log/nginx/error.log info;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
use epoll;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
client_max_body_size 300M;
tcp_nopush on;
tcp_nodelay on;
server_tokens off;
gzip on;
gzip_static on;
gzip_comp_level 5;
gzip_min_length 1024;
keepalive_timeout 300;
limit_zone myzone $binary_remote_addr 10m;
# Load config files from the /etc/nginx/conf.d directory
include /etc/nginx/sites/*;
}
And the vhost for the site:
server {
listen 80;
server_name www.x.com x.com;
access_log /var/log/nginx/x.com-access.log;
location / {
index index.html index.htm index.php;
root /var/www/x.com;
if (!-e $request_filename) {
rewrite ^/([a-z,0-9]+)$ /$1.php last;
rewrite ^/file/(.*)$ /file.php?file=$1;
}
location ~ /engine/.*\.php$ {
return 404;
}
location ~ ^/([a-z,0-9]+)\.php$ {
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
}
}
}
Not knowing the version of your nginx build and what modules it was built with makes this tough, but try the following:
Copy your client_max_body_size 300M; line into the location / { } part of your vhost config (see the sketch below). I'm not sure if it's overriding the default (which is 1 MB) properly.
Are you using nginx_upload_module? If so, make sure you have the upload_max_file_size 300MB; line in your config as well.
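That first change would end up looking roughly like this in the vhost above (a sketch, not the poster's full config):
location / {
client_max_body_size 300M;
index index.html index.htm index.php;
root /var/www/x.com;
...
}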
My setup was:
php.ini
...
upload_max_filesize = 8M
...
nginx.conf
...
client_max_body_size 8m;
...
With that, nginx showed the 413 error when a file was uploaded.
Then I had an idea: I would not let nginx show the 413 error, by setting client_max_body_size to a value greater than upload_max_filesize, thus:
php.ini
...
upload_max_filesize = 8M
...
nginx.conf
...
client_max_body_size 80m;
...
What happened?
When you upload a file smaller than 80MB, nginx will not display the 413 error, but PHP will display its own error if the file is larger than 8MB.
This solved my problem, but if someone uploads a file larger than 80MB, the 413 error still happens - that's nginx's rule.
I would also add that you could define it in the *.php location handler:
location ~ ^/([a-z,0-9]+)\.php$ {
Being the "lowest" one in the cascade, it's an easy way to see whether the problem comes from your nginx config or its modules.
It certainly doesn't come from PHP, because the 413 "body too large" error is really an nginx error.
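Concretely, in the vhost from the question that could look something like this (sketch only):
location ~ ^/([a-z,0-9]+)\.php$ {
client_max_body_size 300M;
fastcgi_pass 127.0.0.1:9000;
...
}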
Try the following steps to resolve the error.
Open the Nginx configuration file (nginx.conf) in a text editor.
$ sudo nano /etc/nginx/nginx.conf
Add the directive client_max_body_size under the http block:
http {
# Basic Settings
client_max_body_size 16M;
...
}
Open nginx default file in a text editor
$ sudo nano /etc/nginx/sites-enabled/default
Add the directive client_max_body_size under the location block:
location / {
...
client_max_body_size 100M;
}
Restart Nginx using the following command.
$ sudo systemctl restart nginx
Optional:
If you have a time-consuming process running on the backend server, then you may have to adjust the server's timeout directives to avoid a 504 Gateway Timeout error.
Open the Nginx default file in a text editor
$ sudo nano /etc/nginx/sites-enabled/default
Add the directives proxy_connect_timeout, proxy_send_timeout, and proxy_read_timeout under the location block:
location /api {
client_max_body_size 100M;
proxy_connect_timeout 6000;
proxy_send_timeout 6000;
proxy_read_timeout 6000;
proxy_pass http://localhost:5001;
}
Restart Nginx using the following command.
$ sudo systemctl restart nginx
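It can also help to validate the edited configuration before restarting (an extra step, not part of the original list):
$ sudo nginx -t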