I have a task with two containers on Fargate: one Java and one PHP/nginx container.
The Java container maps its internal port 80 to the external port 8080, and the PHP container maps its internal port 80 to the external port 8081.
The task uses the awsvpc network mode. According to the AWS documentation, the containers should be able to communicate with each other via localhost once they are on Fargate.
I tried testing this by having the Java container send a request to the PHP container using "localhost:8081":
public boolean isPHPReachable() {
    System.out.println("Attempting to make request to PHP");
    try {
        final URL url = new URL("http://localhost:8081/test");
        final HttpURLConnection con = (HttpURLConnection) url.openConnection();
        con.setRequestMethod("GET");
        final int status = con.getResponseCode();
        System.out.printf("GET request to PHP resulted in a %d: %s%n", status, con.getResponseMessage());
        return status == 200;
    } catch (final IOException err) {
        err.printStackTrace();
        return false;
    }
}
The PHP code it should be hitting is this:
<?php
if ($_GET['q'] === 'test') {
    echo('healthy');
    die;
}
The 'q' is set by nginx (see below config).
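Concretely, the rewrite in the nginx config below maps the request path onto the q parameter. A quick sketch of that string transformation (shell, just to illustrate the mapping; this is not nginx itself):

```shell
# Mimic nginx's `rewrite ^/(.*)$ /sites/all/index.php?q=$1;`
# for a sample request path.
request_path="/test"
rewritten="/sites/all/index.php?q=${request_path#/}"
echo "$rewritten"
```

So a GET to /test should reach index.php with $_GET['q'] === 'test'.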
However, I see that the CloudWatch logs show an error when Java tries to send PHP the request:
22:55:04
Attempting to make request to PHP
22:55:04
java.net.ConnectException: Connection refused (Connection refused)
I tried to contact the PHP container directly, but the request timed out. This does not occur when I attempt to reach Java.
Potential Problem 1: Nginx is misconfigured
I suspect this might be an nginx issue. However, I am not able to reproduce the problem when I run the PHP container on my local machine -- it's reachable when I hit localhost:8081.
The problem occurs only in Fargate.
Here is my configuration:
# For more information on configuration, see:
# * Official English Documentation: http://nginx.org/en/docs/
# * Official Russian Documentation: http://nginx.org/ru/docs/
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
# Load dynamic modules. See /usr/share/doc/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;
events {
    worker_connections 1024;
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # Load modular configuration files from the /etc/nginx/conf.d directory.
    # See http://nginx.org/en/docs/ngx_core_module.html#include
    # for more information.
    include /etc/nginx/conf.d/*.conf;

    server {
        # Reject requests not matching any server block
        listen 80 default_server;
        server_name "";
        return 410;
    }

    server {
        listen 80;
        root /var/www/public_html;
        server_name localhost;
        index index.php;

        set_real_ip_from 0.0.0.0/0; # we trust that the webserver is firewalled
        real_ip_header X-Forwarded-For;
        set $webroot $document_root;
        error_page 404 410 /404_empty.html;
        autoindex off;

        # Disable server signature
        # (i.e. remove "nginx 1.10" or similar from error pages and response headers)
        server_tokens off;

        #################
        # PHP locations #
        #################
        location / {
            rewrite ^/(.*)$ /sites/all/index.php?q=$1;
            error_page 404 /sites/all/index.php;
        }
    }
}
Potential Problem 2: Misconfigured security groups
I do not believe this is the case. The security group attached to this container allows inbound TCP requests to any port from any destination. It also allows all outbound requests.
Anyone spot an immediate issue? Any debugging tips?
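One thing worth checking before nginx or security groups: in awsvpc mode, the containers in a task share a single network namespace, so host-port mappings do not apply between them (hostPort must equal containerPort in awsvpc). Each container is reached on its *container* port via localhost, and two containers cannot both bind port 80 in the shared namespace. A sketch of port mappings that avoids that clash; the container names and the choice of 8081 are assumptions, not the poster's actual task definition:

```json
"containerDefinitions": [
  { "name": "java", "portMappings": [{ "containerPort": 80,   "protocol": "tcp" }] },
  { "name": "php",  "portMappings": [{ "containerPort": 8081, "protocol": "tcp" }] }
]
```

With nginx in the PHP container changed to listen on 8081, the Java call to http://localhost:8081/test would line up; otherwise the request has to target the port nginx actually listens on inside the container.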
Related
I'm trying to set up ruTorrent on an up-to-date Arch Linux home server, but I can't get it running, mostly (I think) from inexperience and partly from outdated documentation.
The problem:
I can't get rtorrent to talk to nginx. When I've had ruTorrent showing on nginx, it wouldn't be able to connect to rtorrent, and would give this error, among many other plugin errors:
Bad response from server: (200 [parsererror,getuisettings])
Nginx's error log:
2015/07/24 17:29:49 [error] 11837#0: *19 upstream prematurely closed connection while reading response header from upstream, client: 192.168.x.<my remote machine>, server: localhost, request: "GET / HTTP/1.1", upstream: "scgi://unix:/home/user/scgi.socket:", host: "192.168.x.<my local nginx server>"
When visiting nginx's URL index:
502 Bad Gateway
In rtorrent:
(17:50:40) Closed XMLRPC log.
Config Files:
Here are the relevant parts of my config files:
.rtorrent.rc:
#encoding_list utf-8
scgi_local = /home/user/scgi.socket
execute = chmod,ug=rw\,o=,/home/user/scgi.socket
execute = chgrp,http,/home/user/scgi.socket
#scgi_port = 127.0.0.1:5000
nginx.conf:
http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    #tcp_nopush on;
    #keepalive_timeout 0;
    keepalive_timeout 65;
    #gzip on;

    log_format main '$remote_addr - $remote_user [$time_local] $status '
                    '"$request" $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /etc/nginx/logs/access.log main;
    error_log /etc/nginx/logs/error.log notice;

    server {
        listen 80;
        server_name localhost;
        #charset koi8-r;

        location / {
            root /usr/share/nginx/html/rutorrent;
            index index.html index.htm;
            include scgi_params;
            scgi_pass unix:/home/user/scgi.socket;
        }
full rutorrent config.php file:
http://pastebin.com/1tAZb6DM
Admittedly, most of this is jargon to me. I'm familiar with rtorrent, somewhat familiar with nginx, and I know the basic theory of networking. XML, SCGI/Unix sockets, and PHP are all beyond me (I only really know Python), and I'm totally clueless as to where I would start learning. I think this might have something to do with "RPC2", but I really have no idea. blog.abevoelker.com didn't have RPC2 set in nginx.conf.
Basically be gentle with me, and I really appreciate any help you folks have to offer me. I know I'm a newbie stumbling through the dark, but I'm attempting to learn.
Guides used/mashed together:
http://linoxide.com/ubuntu-how-to/setup-rtorrent-rutorrent/
https://blog.abevoelker.com/rtorrent_xmlrpc_over_nginx_scgi/
https://github.com/rakshasa/rtorrent/wiki/RPC-Setup-XMLRPC
https://github.com/Novik/ruTorrent/issues/974
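For what it's worth, the RPC-Setup-XMLRPC guide linked above puts the SCGI socket behind its own location (commonly /RPC2) rather than mixing scgi_pass into the location / that serves the ruTorrent files. A sketch, reusing the socket path from the poster's config:

```
location /RPC2 {
    include scgi_params;
    scgi_pass unix:/home/user/scgi.socket;
}
```

ruTorrent's config.php would then point its XMLRPC path at /RPC2 while location / keeps serving the web UI files.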
I've been having some problems with my nginx installation. I'm not getting any errors; however, I get the classic "500 - Internal Server Error" when I try to go to my localhost address.
This is my config:
user nobody; ## Default: nobody
worker_processes 5; ## Default: 1
error_log logs/error.log;
pid logs/nginx.pid;
worker_rlimit_nofile 8192;
events {
    worker_connections 4096; ## Default: 1024
}

http {
    include mime.types;
    include fastcgi.conf;
    index index.html index.htm index.php;
    default_type application/octet-stream;
    log_format main '$remote_addr - $remote_user [$time_local] $status '
                    '"$request" $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log logs/access.log main;
    sendfile on;
    tcp_nopush on;
    server_names_hash_bucket_size 128; # this seems to be required for some vhosts

    server { # simple reverse-proxy
        listen 80;
        access_log logs/access.log main;

        # serve static files
        location ~ ^/(images|javascript|js|css|flash|media|static)/ {
            root /Library/Testing/public_html;
            expires 30d;
        }

        # pass requests for dynamic content to rails/turbogears/zope, et al
        location / {
            proxy_pass http://127.0.0.1:8080;
        }
    }

    upstream big_server_com {
        server 127.0.0.3:8000 weight=5;
        server 127.0.0.3:8001 weight=5;
        server 192.168.0.1:8000;
        server 192.168.0.1:8001;
    }

    server { # simple load balancing
        listen 80;
        server_name big.server.com;
        access_log logs/big.server.access.log main;
        location / {
            proxy_pass http://big_server_com;
        }
    }
}
What's the issue? I looked at other related SOF questions, but none fixed my problem. Thank you.
EDIT: My log is now saying: 2015/07/26 13:43:40 [error] 2494#0: *1 kevent() reported that connect() failed (61: Connection refused) while connecting to upstream, client: 127.0.0.1, server: , request: "GET /index.php HTTP/1.1", upstream: "http://127.0.0.1:8080/index.php", host: "localhost"
When I attempt to load the page "localhost/index.php"
I fixed my issue. It turns out I forgot to start php-fpm: sudo php-fpm does the trick.
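A general tip hiding in that log line: nginx names the exact upstream it failed to reach, so you can extract it and probe that address directly before touching the config. A small sketch (the sample line mirrors the one from the edit above):

```shell
# Pull the upstream address out of an nginx "connect() failed" error line,
# so you know exactly which host:port to probe (e.g. with curl or ss).
line='2015/07/26 13:43:40 [error] 2494#0: *1 connect() failed (61: Connection refused) while connecting to upstream, client: 127.0.0.1, server: , request: "GET /index.php HTTP/1.1", upstream: "http://127.0.0.1:8080/index.php", host: "localhost"'
upstream=$(printf '%s\n' "$line" | sed -n 's/.*upstream: "\([^"]*\)".*/\1/p')
echo "$upstream"
```

Here that prints http://127.0.0.1:8080/index.php, pointing straight at the port where php-fpm was not listening.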
I have already set up nginx-RTMP on Ubuntu Linux hosted by DigitalOcean, and I am currently running my Laravel web application in localhost mode on my desktop. Everything seems to work fine for live streaming; I'm testing with my localhost JWPlayer and Open Broadcaster Software (OBS), and it works. But whenever I try to record the streamed video to a Linux directory (/var/www), nothing seems to happen and there is no error at all after I hit the stop-streaming button in OBS.
I don't know how the recording works. I tried recording manually and it does produce a file: I click start record, and out comes /var/rec/{mystream}.flv.
This is the manual recording configuration embedded for the Laravel website:
rtmp {
    server {
        listen 1935;
        chunk_size 4096;
        application live {
            live on;
            recorder rec1 {
                record all manual;
                record_suffix all.flv;
                record_path /var/rec;
                record_unique on;
            }
        }
    }
}
Start Recording:
Start rec1
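Since the recorder is declared `record all manual;`, nothing is written to disk until the recorder is started through the rtmp_control endpoint (the /control location in the http config below). A sketch of building that request; the host and stream name are placeholders for your own values:

```shell
# Build the nginx-rtmp control URL for the manual recorder "rec1".
host="localhost"
app="live"
stream="mystream"
url="http://$host/control/record/start?app=$app&name=$stream&rec=rec1"
echo "$url"
# Then: curl "$url"   (and .../record/stop later to finish writing the .flv)
```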
nginx config for http:
access_log logs/rtmp_access.log;
include mime.types;
default_type application/octet-stream;
#log_format main '$remote_addr - $remote_user [$time_local] "$request" '
# '$status $body_bytes_sent "$http_referer" '
# '"$http_user_agent" "$http_x_forwarded_for"';
#access_log logs/access.log main;
sendfile on;
#tcp_nopush on;
#keepalive_timeout 0;
keepalive_timeout 65;
#gzip on;
server {
    listen 80;
    server_name localhost;
    #charset koi8-r;
    #access_log logs/host.access.log main;

    location / {
        root html;
        index index.html index.htm;
    }

    location /stat {
        rtmp_stat all;
        rtmp_stat_stylesheet stat.xsl;
    }

    location /stat.xsl {
        root /var/www/;
    }

    location /control {
        rtmp_control all;
    }

    #error_page 404 /404.html;

    # redirect server error pages to the static page /50x.html
    #
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root html;
    }

    # proxy the PHP scripts to Apache listening on 127.0.0.1:80
    #
    #location ~ \.php$ {
    #    proxy_pass http://127.0.0.1;
    #}

    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
    #
    #location ~ \.php$ {
    #    root html;
    #    fastcgi_pass 127.0.0.1:9000;
    #    fastcgi_index index.php;
    #    fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
    #    include fastcgi_params;
    #}

    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    #location ~ /\.ht {
    #    deny all;
    #}
}
By the way:
Plan B: I plan to store my recorded stream files on Amazon S3. Does anyone know how to do this with nginx-RTMP instead of using Wowza on Amazon?
You can run a shell script from the nginx config (via exec_record_done, shown below). Check the permissions first:
chown -R nobody:nogroup foldername
chmod -R 700 foldername
Shell script (the arguments come from exec_record_done below: $1 is $path, $2 is $basname):
#!/bin/sh
# Transcode the recorded .flv to mp4, then upload it to S3.
ffmpeg -v error -y -i "$1" -vcodec libx264 -acodec aac -f mp4 -movflags +faststart "/tmp/recordings/$2.mp4"
aws s3 cp "/tmp/recordings/$2.mp4" "s3://bucketname/"
In nginx.conf:
exec_record_done bash -c "/home/ubuntu/script/record.sh $path $basname";
You should check the permissions on the directory you're trying to record to (/var/rec in your case). Nginx, even though started up with sudo, spawns worker processes as user "nobody" by default. You can also try changing the user that the worker processes spawn as: https://serverfault.com/a/534512/102045
When I did this with my partner, I would use
record_path /tmp/rec;
Then I would set a crontab that periodically sends new files (videos) to his NextCloud FTP (in your case this could be your Amazon AWS).
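A minimal sketch of that crontab idea, adapted to S3 (the bucket name and paths are assumptions; `aws s3 sync` only uploads files that are new or changed):

```
# m h dom mon dow  command
*/5 * * * * aws s3 sync /tmp/rec s3://your-bucket/recordings/
```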
It seems the answers from bhh1998 and akash jakhad are correct, although nowadays the nginx.conf file comes with the nginx user as default, so instead of nobody and nogroup, use nginx. The command mentioned in the previous answers becomes:
chown -R nginx:nginx foldername
To be sure of the correct username, check your configuration file and see which user is being specified.
In addition to the user permission settings mentioned in other answers, I also had to change the path to end with a trailing slash i.e. /var/rec/ instead of /var/rec.
Use case: I am working on a web application that lets users create HTML templates and publish them on Amazon S3. To publish the websites, I use nginx as a proxy server.
What the proxy server does: when a user enters a website URL, I want to check whether the request comes from my application, i.e. app.mysite.com (this won't change), and route it to Apache for regular access; if it comes from some other domain, like a regular URL www.mysite.com (this needs to be handled dynamically and can be random), it goes to the S3 bucket that hosts the template.
My current configuration is:
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log;
pid /var/run/nginx.pid;
events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;
    charset utf-8;
    keepalive_timeout 65;
    server_tokens off;
    sendfile on;
    tcp_nopush on;
    tcp_nodelay off;

    # Default server block to catch undefined host names
    server {
        listen 80;
        server_name app.mysite.com;
        access_log off;
        error_log off;
        location / {
            proxy_pass http://127.0.0.1:8080;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header Host $host;
            proxy_redirect off;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_connect_timeout 90;
            proxy_send_timeout 90;
            proxy_read_timeout 90;
            client_max_body_size 10m;
            client_body_buffer_size 128k;
            proxy_buffer_size 4k;
            proxy_buffers 4 32k;
            proxy_busy_buffers_size 64k;
        }
    }

    # Load all the sites
    include /etc/nginx/conf.d/*.conf;
}
Update, as I was not clear enough:
My question is how I can handle both domains in the config file. My nginx is a proxy server on port 80 on an EC2 instance. This instance also hosts my application, which runs on Apache on a different port. So any request for my application will come from the domain app.mysite.com, and I also want to proxy the templates hosted on S3, which sit inside a bucket, e.g. sites.mysite.com/coolsite.com/index.html. So if someone hits coolsite.com, I want to proxy it to the folder sites.mysite.com/coolsite.com/index.html and not to app.mysite.com. Hope I am clear.
The other server block:
# Server for S3
server {
    # Listen on port 80 for all IPs associated with your machine
    listen 80;

    # Catch all other server names
    server_name _; # I want it to handle domains other than app.mysite.com

    # This code gets the host without www. in front and places it inside
    # the $host_without_www variable.
    # If someone requests www.coolsite.com, then $host_without_www will have the value coolsite.com
    set $host_without_www $host;
    if ($host ~* www\.(.*)) {
        set $host_without_www $1;
    }

    location / {
        # This code rewrites the original request and adds the host without www in front.
        # E.g. if someone requests
        #   /directory/file.ext?param=value
        # from the coolsite.com site, the request is rewritten to
        #   /coolsite.com/directory/file.ext?param=value
        set $foo 'http://sites.mysite.com';
        # echo "$foo";
        rewrite ^(.*)$ $foo/$host_without_www$1 break;

        # The rewritten request is passed to S3
        proxy_pass http://sites.mysite.com;
        include /etc/nginx/proxy_params;
    }
}
Also, I understand I will have to make DNS changes in the CNAME of the domain. I guess I will have to add app.mysite.com under the CNAME of the template domain name? Please correct me if I'm wrong.
Thank you for your time.
Found this as part of the documentation, but it took a while to understand.
I had to add a default_server attribute in the second server block for the rest of the domains to work.
If we configure server_name, nginx will serve content based on the config block that matches the domain.
app.mysite.com
server {
    listen 80;
    server_name app.mysite.com;
    # config for app.mysite.com
}
other websites
server {
    listen 80 default_server; # to handle domains apart from the fixed domain (in my case app.mysite.com)
    server_name _;
    # config for coolsite.com, anotherdomain.com ...
}
I think you need to remove $foo from
rewrite ^(.*)$ $foo/$host_without_www$1 break;
With $foo in place you would pass
http://sites.mysite.com/coolsite.com/directory/file.ext?param=value
to
proxy_pass http://sites.mysite.com;
Just a guess
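A quick way to see the difference: build the URI both ways and compare what proxy_pass would receive. The host and path values are just examples (shell, string manipulation only):

```shell
# What nginx's rewrite produces with and without $foo in front.
host_without_www="coolsite.com"
request="/directory/file.ext?param=value"
with_foo="http://sites.mysite.com/$host_without_www$request"
without_foo="/$host_without_www$request"
echo "$with_foo"      # scheme+host get duplicated once proxy_pass adds its own
echo "$without_foo"   # just the path that proxy_pass needs
```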
When I try to upload a file to my site, I'm getting the Nginx "413 Request Entity Too Large" error, however in my nginx.conf file I've already explicitly stated the max size to be about 250MB at the moment, and changed the max file size in php.ini as well (and yes, I restarted the processes). The error log gives me this:
2010/12/06 04:15:06 [error] 20124#0:
*11975 client intended to send too large body: 1144149 bytes, client:
60.228.229.238, server: www.x.com, request: "POST
/upload HTTP/1.1", host:
"x.com", referrer:
"http://x.com/"
As far as I know, 1144149 bytes isn't 250MB...
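A quick sanity check on that number (shell arithmetic):

```shell
bytes=1144149
echo "$((bytes / 1024)) KiB"   # prints "1117 KiB", i.e. roughly 1.1 MB
```

So the rejected body is only about 1.1 MB, far below 250 MB, which suggests the limit actually in effect is nginx's 1 MB default at some more specific level rather than the configured value.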
Is there something I'm missing here?
Here's the base Nginx config:
user nginx;
worker_processes 8;
worker_rlimit_nofile 100000;
error_log /var/log/nginx/error.log;
#error_log /var/log/nginx/error.log notice;
#error_log /var/log/nginx/error.log info;
pid /var/run/nginx.pid;
events {
    worker_connections 1024;
    use epoll;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;
    sendfile on;
    client_max_body_size 300M;
    tcp_nopush on;
    tcp_nodelay on;
    server_tokens off;
    gzip on;
    gzip_static on;
    gzip_comp_level 5;
    gzip_min_length 1024;
    keepalive_timeout 300;
    limit_zone myzone $binary_remote_addr 10m;

    # Load config files from the /etc/nginx/conf.d directory
    include /etc/nginx/sites/*;
}
And the vhost for the site:
server {
    listen 80;
    server_name www.x.com x.com;
    access_log /var/log/nginx/x.com-access.log;

    location / {
        index index.html index.htm index.php;
        root /var/www/x.com;
        if (!-e $request_filename) {
            rewrite ^/([a-z,0-9]+)$ /$1.php last;
            rewrite ^/file/(.*)$ /file.php?file=$1;
        }
        location ~ /engine/.*\.php$ {
            return 404;
        }
        location ~ ^/([a-z,0-9]+)\.php$ {
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
        }
    }
}
Not knowing the version of your nginx build and what modules it was built with makes this tough, but try the following:
Copy your client_max_body_size 300M; line into the location / { } part of your vhost config. I'm not sure if it's overriding the default (which is 1 MB) properly.
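A sketch of that first suggestion applied to the vhost from the question (only the relevant lines shown; the `...` stands for the rest of the existing config):

```
server {
    listen 80;
    server_name www.x.com x.com;
    location / {
        client_max_body_size 300M;  # more specific levels override the http-level value
        ...
    }
}
```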
Are you using nginx_upload_module? If so, make sure you also have an upload_max_file_size 300m; line in your config.
My setup was:
php.ini
...
upload_max_filesize = 8M
...
nginx.conf
...
client_max_body_size 8m;
...
nginx showed the 413 error when a file was uploaded.
Then I had an idea: to keep nginx from showing the 413 error, set client_max_body_size to a value greater than upload_max_filesize, thus:
php.ini
...
upload_max_filesize = 8M
...
nginx.conf
...
client_max_body_size 80m;
...
What happens now?
For uploads under 80 MB, nginx no longer shows the 413 error, but PHP shows its own error if the file is larger than 8 MB.
This solved my problem; if someone uploads a file larger than 80 MB, though, the 413 error still happens -- nginx rules.
I'll also add that you could define it in the *.php location handler:
location ~ ^/([a-z,0-9]+)\.php$ {
Being the "lowest" one in the cascade, that is an easy way to see whether the problem comes from your nginx config or its modules.
It certainly doesn't come from PHP, because the 413 "Request Entity Too Large" error is really an nginx error.
Try the following steps to resolve the error.
Open the Nginx configuration file (nginx.conf) in a text editor.
$ sudo nano /etc/nginx/nginx.conf
Add the directive client_max_body_size under the http block:
http {
    # Basic Settings
    client_max_body_size 16M;
    ...
}
Open nginx default file in a text editor
$ sudo nano /etc/nginx/sites-enabled/default
Add the directive client_max_body_size under location block.
location / {
    ...
    client_max_body_size 100M;
}
Restart Nginx using the following command.
$ sudo systemctl restart nginx
Optional:
If you have a time-consuming process running on the backend server, you also have to adjust the timeout directives to avoid a 504 Gateway Timeout error.
Open the Nginx default file in a text editor
$ sudo nano /etc/nginx/sites-enabled/default
Add the directives proxy_connect_timeout, proxy_send_timeout proxy_read_timeout under the location block:
location /api {
    client_max_body_size 100M;
    proxy_connect_timeout 6000;
    proxy_send_timeout 6000;
    proxy_read_timeout 6000;
    proxy_pass http://localhost:5001;
}
Restart Nginx using the following command.
$ sudo systemctl restart nginx