I'm trying to set up rutorrent on an up-to-date Arch Linux home server, but I can't get it running, mostly, I think, due to inexperience and outdated documentation.
The problem:
I can't get rtorrent to talk to nginx. When I did have rutorrent showing up through nginx, it wasn't able to connect to rtorrent and gave this error, among many other plugin errors:
Bad response from server: (200 [parsererror,getuisettings])
Nginx's error log:
2015/07/24 17:29:49 [error] 11837#0: *19 upstream prematurely closed connection while reading response header from upstream, client: 192.168.x.<my remote machine>, server: localhost, request: "GET / HTTP/1.1", upstream: "scgi://unix:/home/user/scgi.socket:", host: "192.168.x.<my local nginx server>"
When visiting nginx's URL index:
502 Bad Gateway
In rtorrent:
(17:50:40) Closed XMLRPC log.
Config Files:
Here are the relevant parts of my config files:
.rtorrent.rc:
#encoding_list utf-8
scgi_local = /home/user/scgi.socket
execute = chmod,ug=rw\,o=,/home/user/scgi.socket
execute = chgrp,http,/home/user/scgi.socket
#scgi_port = 127.0.0.1:5000
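For what it's worth, newer rtorrent releases spell the same socket option network.scgi.open_local; the scgi_local form above is kept as an alias, so either should work here:

```
# Modern equivalent of the scgi_local line above (rtorrent 0.9.x+);
# scgi_local is retained as a deprecated alias.
network.scgi.open_local = /home/user/scgi.socket
```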
nginx.conf:
http {
    include mime.types;
    default_type application/octet-stream;

    sendfile on;
    #tcp_nopush on;
    #keepalive_timeout 0;
    keepalive_timeout 65;
    #gzip on;

    log_format main '$remote_addr - $remote_user [$time_local] $status '
                    '"$request" $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /etc/nginx/logs/access.log main;
    error_log /etc/nginx/logs/error.log notice;

    server {
        listen 80;
        server_name localhost;
        #charset koi8-r;

        location / {
            root /usr/share/nginx/html/rutorrent;
            index index.html index.htm;
            include scgi_params;
            scgi_pass unix:/home/user/scgi.socket;
        }
    }
}
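One thing that stands out in the excerpt above: the location / block both serves the rutorrent files and passes every request to the SCGI socket, but only the XML-RPC requests should reach rtorrent. A sketch of the usual split (assuming the same socket and webroot paths; the /RPC2 path is the convention used by the XMLRPC-over-nginx guides linked below, not something this config already has):

```nginx
server {
    listen 80;
    server_name localhost;

    # Serve the ruTorrent web UI as ordinary files.
    location / {
        root  /usr/share/nginx/html/rutorrent;
        index index.html index.htm;
    }

    # Forward only XML-RPC requests to rtorrent's SCGI socket.
    # /RPC2 is the conventional endpoint from the XMLRPC guides.
    location /RPC2 {
        include   scgi_params;
        scgi_pass unix:/home/user/scgi.socket;
    }
}
```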
full rutorrent config.php file:
http://pastebin.com/1tAZb6DM
Admittedly, most of this is jargon to me. I'm familiar with rtorrent, somewhat familiar with nginx, and I know the basic theory of networking. XML, SCGI/Unix sockets, and PHP are all beyond me (I only really know Python), and I'm totally clueless as to where I would start learning. I think this might have something to do with "RPC2", but I really have no idea; blog.abevoelker.com didn't have RPC2 set in nginx.conf.
Basically, be gentle with me; I really appreciate any help you folks have to offer. I know I'm a newbie stumbling through the dark, but I'm attempting to learn.
Guides used/mashed together:
http://linoxide.com/ubuntu-how-to/setup-rtorrent-rutorrent/
https://blog.abevoelker.com/rtorrent_xmlrpc_over_nginx_scgi/
https://github.com/rakshasa/rtorrent/wiki/RPC-Setup-XMLRPC
https://github.com/Novik/ruTorrent/issues/974
Related
I have a task with two containers on Fargate: one Java and one PHP/nginx container.
The Java container maps its internal port 80 to the external port 8080, and the PHP container maps its internal port 80 to the external port 8081.
The task uses the awsvpc network mode. According to the AWS documentation, the containers should be able to communicate with each other via localhost once they are on Fargate.
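For reference, the relevant constraint in the AWS docs boils down to this (a sketch, not the actual task definition): in awsvpc mode the hostPort must either be omitted or equal the containerPort, so each container is reached from its task-mates at localhost:<containerPort>.

```json
"portMappings": [
    { "containerPort": 80, "protocol": "tcp" }
]
```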
I tried testing this by having the Java container send a request to the PHP container using localhost:8081:
public boolean isPHPReachable() {
    System.out.println("Attempting to make request to PHP");
    try {
        final URL url = new URL("http://localhost:8081/test");
        final HttpURLConnection con = (HttpURLConnection) url.openConnection();
        con.setRequestMethod("GET");
        final int status = con.getResponseCode();
        System.out.printf("Get request to PHP resulted in a %d: %s", status, con.getResponseMessage());
        return status == 200;
    } catch (final IOException err) {
        err.printStackTrace();
        return false;
    }
}
The PHP code it should be hitting is this:
<?php
if ($_GET['q'] === 'test') {
echo('healthy');
die;
}
The q parameter is set by nginx (see the config below).
However, I see that the CloudWatch logs show an error when Java tries to send PHP the request:
22:55:04
Attempting to make request to PHP
22:55:04
java.net.ConnectException: Connection refused (Connection refused)
I tried to contact the PHP container directly, but the request timed out. This does not occur when I attempt to reach Java.
Potential Problem 1: Nginx is misconfigured
I suspect this might be an nginx issue. However, I am not able to reproduce the problem when I create the PHP container on my local machine -- it's reachable at localhost:8081.
The problem only occurs on Fargate.
Here is my configuration:
# For more information on configuration, see:
# * Official English Documentation: http://nginx.org/en/docs/
# * Official Russian Documentation: http://nginx.org/ru/docs/
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
# Load dynamic modules. See /usr/share/doc/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;
events {
    worker_connections 1024;
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # Load modular configuration files from the /etc/nginx/conf.d directory.
    # See http://nginx.org/en/docs/ngx_core_module.html#include
    # for more information.
    include /etc/nginx/conf.d/*.conf;

    server {
        # Reject requests not matching any server block
        listen 80 default_server;
        server_name "";
        return 410;
    }

    server {
        listen 80;
        root /var/www/public_html;
        server_name localhost;
        index index.php;

        set_real_ip_from 0.0.0.0/0; # we trust that the webserver is firewalled
        real_ip_header X-Forwarded-For;
        set $webroot $document_root;
        error_page 404 410 /404_empty.html;
        autoindex off;

        # Disable server signature
        # (i.e. remove "nginx 1.10" or similar from error pages and response headers)
        server_tokens off;

        #################
        # PHP locations #
        #################
        location / {
            rewrite ^/(.*)$ /sites/all/index.php?q=$1;
            error_page 404 /sites/all/index.php;
        }
    }
}
Potential Problem 2: Misconfigured security groups
I do not believe this is the case. The security group attached to this container allows inbound TCP requests to any port from any destination. It also allows all outbound requests.
Anyone spot an immediate issue? Any debugging tips?
I'm experiencing the weirdest problem. I've been working on a file upload problem for the last few days and have been using phpinfo() to track changes to INI settings. The last time I touched it was two days ago... it all worked then.
Today, phpinfo() is causing a 503 Service Temporarily Unavailable error. Here's the weird part: the website works fine! It's PHP + MySQL driven and I can move around in it just fine (well... other than the issues I'm working on). But as soon as I add phpinfo() as the first thing in the <body> block... I get the error.
I've even tried creating a one-line file: <?php phpinfo(); ?> This dies with a 503 error, too.
The error log contains this:
[Fri Sep 15 14:22:31.192593 2017] [proxy_fcgi:error] [pid 2695] (104)Connection reset by peer: [client 67.161.220.240:44230] AH01075: Error dispatching request to :
I've restarted Apache and Nginx with no errors. Does anyone have an idea about what service or some such might have died on my machine to cause this?
NGINX Config File
#user nginx;
worker_processes 1;
#error_log /var/log/nginx/error.log;
#error_log /var/log/nginx/error.log notice;
#error_log /var/log/nginx/error.log info;
#pid /var/run/nginx.pid;
include /etc/nginx/modules.conf.d/*.conf;
events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;

    #log_format main '$remote_addr - $remote_user [$time_local] "$request" '
    #                '$status $body_bytes_sent "$http_referer" '
    #                '"$http_user_agent" "$http_x_forwarded_for"';
    #access_log /var/log/nginx/access.log main;

    sendfile on;
    #tcp_nopush on;
    #keepalive_timeout 0;
    keepalive_timeout 65;
    #tcp_nodelay on;
    #gzip on;
    #gzip_disable "MSIE [1-6]\.(?!.*SV1)";
    server_tokens off;

    include /etc/nginx/conf.d/*.conf;
}
# override global parameters e.g. worker_rlimit_nofile
include /etc/nginx/*global_params;
Is it possible that this is the problem:
Apache, FastCGI - Error 503
(i.e. a FastCGI configuration issue?)
I solved this by logging into PLESK. I went to Tools & Settings -> Services Management.
Then I restarted PHP-FPM 5.6.33 service (this may vary depending on the version of PHP you are running).
After a restart, phpinfo() worked fine. I'm not sure what the issue was. As you experienced, my website was working fine, but it may have been causing other issues I'm not aware of.
I have nginx running as confirmed by loading an html page.
The difficulty is running a .php page in the same location; however, this gives a 404.
php-fpm is shown to be installed okay using php-fpm -t.
Therefore I am fairly sure the problem is in the .conf or a server block.
My goal is to set everything up using server blocks (aka virtual hosts), as it is so much easier to manage differing projects, so I have attempted to strip nginx.conf to a minimum:
#user nobody;
worker_processes 1;
error_log /error.log;
#error_log logs/error.log notice;
#error_log logs/error.log info;
#pid logs/nginx.pid;
events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;

    #log_format main '$remote_addr - $remote_user [$time_local] "$request" '
    #                '$status $body_bytes_sent "$http_referer" '
    #                '"$http_user_agent" "$http_x_forwarded_for"';
    #access_log logs/access.log main;

    sendfile on;
    #tcp_nopush on;
    #keepalive_timeout 0;
    keepalive_timeout 65;
    #gzip on;

    include /usr/local/etc/nginx/sites-enabled/*;
}
and the server block:
server {
    listen 80;
    server_name localhost;

    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
    location ~ \.php$ {
        root /var/www;
        index index.php index.html index.htm;
        try_files $uri /index.php?&query_string;
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        fastcgi_read_timeout 60;
        fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
        include fastcgi_params;
    }
}
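For comparison, here is a sketch of the same server block with the two usual culprits adjusted (assuming the PHP files really do live under /var/www): root moved up to the server level, and SCRIPT_FILENAME derived from $document_root instead of the hard-coded /scripts prefix, which is what typically produces php-fpm's bare "File not found" message:

```nginx
server {
    listen 80;
    server_name localhost;
    root  /var/www;
    index index.php index.html index.htm;

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        fastcgi_read_timeout 60;
        # $document_root expands to the root above, so the path handed
        # to php-fpm matches where the files actually are.
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}
```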
index.php fails with the bare message 'File not found', though not in the usual nginx error format - just those words (that message comes from php-fpm rather than nginx).
I had assumed this would be simple on a Linux-based OS. It seems Apple have changed too much for it to remain viable.
I have expertise in this area and have an Ubuntu-based script which makes everything ready to go on that Linux flavour - apparently Apple no longer supports developers.
Oh well, let's see how long it takes to install a VirtualBox solution that I can edit from OS X.
This is an answer per se, pending details from other people answering, as I have had this working before - I just wanted a non-proxy solution and was prepared to spend a few days finding it. Alas, Apple prevents it.
I've been having some problems with my nginx installation. I'm not getting any errors; however, I get the classic "500 - Internal Server Error" when I try to go to my localhost address.
This is my config:
user nobody; ## Default: nobody
worker_processes 5; ## Default: 1
error_log logs/error.log;
pid logs/nginx.pid;
worker_rlimit_nofile 8192;
events {
    worker_connections 4096; ## Default: 1024
}

http {
    include mime.types;
    include fastcgi.conf;
    index index.html index.htm index.php;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] $status '
                    '"$request" $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log logs/access.log main;
    sendfile on;
    tcp_nopush on;
    server_names_hash_bucket_size 128; # this seems to be required for some vhosts

    server { # simple reverse-proxy
        listen 80;
        access_log logs/access.log main;

        # serve static files
        location ~ ^/(images|javascript|js|css|flash|media|static)/ {
            root /Library/Testing/public_html;
            expires 30d;
        }

        # pass requests for dynamic content to rails/turbogears/zope, et al
        location / {
            proxy_pass http://127.0.0.1:8080;
        }
    }

    upstream big_server_com {
        server 127.0.0.3:8000 weight=5;
        server 127.0.0.3:8001 weight=5;
        server 192.168.0.1:8000;
        server 192.168.0.1:8001;
    }

    server { # simple load balancing
        listen 80;
        server_name big.server.com;
        access_log logs/big.server.access.log main;

        location / {
            proxy_pass http://big_server_com;
        }
    }
}
What's the issue? I looked at other related SO questions, but none fixed my problem. Thank you.
EDIT: When I attempt to load the page localhost/index.php, my log now says:
2015/07/26 13:43:40 [error] 2494#0: *1 kevent() reported that connect() failed (61: Connection refused) while connecting to upstream, client: 127.0.0.1, server: , request: "GET /index.php HTTP/1.1", upstream: "http://127.0.0.1:8080/index.php", host: "localhost"
I fixed my issue. It turns out I forgot to start php-fpm: sudo php-fpm does the trick.
I am in the process of Dockerising my webserver/php workflow.
But because I am on Windows, I need to use a virtual machine. I chose boot2docker, which is Tiny Core Linux running in VirtualBox, adapted for Docker.
I selected three containers:
nginx: the official nginx container;
jprjr/php-fpm: a php-fpm container;
mysql: for databases.
In boot2docker, /www/ contains my web projects and conf/, which has the following tree:
conf
│
├───fig
│ fig.yml
│
└───nginx
nginx.conf
servers-global.conf
servers.conf
Because docker-compose is not available for boot2docker, I must use fig to automate everything. Here is my fig.yml:
mysql:
  image: mysql
  environment:
    - MYSQL_ROOT_PASSWORD=root
  ports:
    - 3306:3306

php:
  image: jprjr/php-fpm
  links:
    - mysql:mysql
  volumes:
    - /www:/srv/http:ro
  ports:
    - 9000:9000

nginx:
  image: nginx
  links:
    - php:php
  volumes:
    - /www:/www:ro
  ports:
    - 80:80
  command: nginx -c /www/conf/nginx/nginx.conf
Here is my nginx.conf:
daemon off;
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log debug;
pid /var/run/nginx.pid;
events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;

    sendfile off;
    keepalive_timeout 65;
    index index.php index.html index.htm;

    include /www/conf/nginx/servers.conf;
    autoindex on;
}
And the servers.conf:
server {
    server_name lab.dev;
    root /www/lab/;
    include /www/conf/nginx/servers-global.conf;
}
# Some other servers (vhosts)
And the servers-global.conf:
listen 80;

location ~* \.php$ {
    fastcgi_index index.php;
    fastcgi_pass php:9000;
    include /etc/nginx/fastcgi_params;
    fastcgi_param SCRIPT_FILENAME /srv/http$fastcgi_script_name;
}
So, the problem (sorry for all that configuration; I believe it was needed to explain the problem clearly): if I access lab.dev, there is no problem (which shows that the hosts entry is set up on Windows), but if I try to access lab.dev/test_autoload/, I get a File not found. I know this comes from php-fpm not being able to access the files, and the nginx logs confirm this:
nginx_1 | 2015/05/28 14:56:02 [error] 5#5: *3 FastCGI sent in stderr:
"Primary script unknown" while reading response header from upstream,
client: 192.168.59.3, server: lab.dev, request: "GET /test_autoload/ HTTP/1.1",
upstream: "fastcgi://172.17.0.120:9000", host: "lab.dev", referrer: "http://lab.dev/"
I know there is an index.php in lab/test_autoload/ in both containers; I have checked. From nginx's point of view it is located at /www/lab/test_autoload/index.php, and from php's at /srv/http/lab/test_autoload/index.php.
I believe the problem comes from root and/or fastcgi_param SCRIPT_FILENAME, but I do not know how to solve it.
I have tried many things, such as modifying the roots, using a rewrite rule, adding/removing some /s, etc, but nothing has made it change.
Again, sorry for all this config, but I think it was needed to describe the environment I am in.
I finally found the answer.
The variable $fastcgi_script_name does not take the root into account (logical, as it would have included www/ otherwise). This means that a single global file cannot work. An example:
# "generated" vhost
server {
    server_name lab.dev;
    root /www/lab/;
    listen 80;

    location ~* \.php$ {
        fastcgi_index index.php;
        fastcgi_pass php:9000;
        include /etc/nginx/fastcgi_params;
        fastcgi_param SCRIPT_FILENAME /srv/http$fastcgi_script_name;
        # I need to add /lab after /srv/http because otherwise PHP looks
        # at the root of the web files, not in the lab/ folder:
        # fastcgi_param SCRIPT_FILENAME /srv/http/lab$fastcgi_script_name;
    }
}
This meant I couldn't write lab/ in only one place; I needed it in two different places (root and fastcgi_param), so I wrote a small PHP script that takes a .json file and turns it into the servers.conf file. If anyone wants to have a look at it, just ask; it will be a pleasure to share it.
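Another way to avoid the duplication, for what it's worth (a sketch, assuming you can change the mount points): mount the projects at the same path in both containers, e.g. /www in nginx and /www in php-fpm instead of /srv/http. Then the shared location block can derive the script path from whatever root the including server block set:

```nginx
# Works only if nginx and php-fpm see the files at the same path.
location ~* \.php$ {
    fastcgi_index index.php;
    fastcgi_pass php:9000;
    include /etc/nginx/fastcgi_params;
    # $document_root picks up each vhost's own root, so lab/ is
    # written once, in the server block.
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}
```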
There's a mistake here:
fastcgi_param SCRIPT_FILENAME /srv/http/$fastcgi_script_name;
The correct line is:
fastcgi_param SCRIPT_FILENAME /srv/http$fastcgi_script_name;
This is not the same thing.
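The difference matters because nginx builds SCRIPT_FILENAME by plain string concatenation, and $fastcgi_script_name always begins with a slash. A quick illustration in Python, using the paths from the question:

```python
# nginx concatenates the prefix and $fastcgi_script_name literally;
# $fastcgi_script_name always starts with "/".
script_name = "/lab/test_autoload/index.php"  # example request from the question

print("/srv/http" + script_name)   # /srv/http/lab/test_autoload/index.php
print("/srv/http/" + script_name)  # /srv/http//lab/test_autoload/index.php (doubled slash)
```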
Your nginx.conf is almost empty: where are daemon off;, the user nginx; line, worker_processes, etc.? nginx needs some configuration before it can run the http block.
In the http block, the same thing: the MIME types, the default_type, the log configuration, and sendfile on for boot2docker are missing.
Your problem is clearly not a problem with Docker, but with the nginx configuration. Have you tested your application by running docker run before using fig? Did it work?