Nginx, Wordpress and URL rewriting problems - php

Trying to get Nginx and WordPress to play nicely, but it seems they don't understand each other quite yet, especially when it comes to pretty URLs and rewriting.
I have the following snippet in my nginx config file, pasted at the bottom (I got it from Nginx's wiki page on WP), and I keep getting this error message in my error log, which makes me think it's not even trying to rewrite the location.
2011/04/11 09:02:29 [error] 1208#1256: *284 "c:/local/path/2011/04/10/hello-world/index.html" is not found (3: The system cannot find the path specified), client: 127.0.0.1, server: localhost, request: "GET /2011/04/10/hello-world/ HTTP/1.1", host: "dev.local:83"
If anyone can help give me direction or pointers or links or suggestions, that would be amazing because I'm seriously stuck. Thanks!
NGINX
worker_processes 1;
pid logs/nginx.pid;
events {
worker_connections 64;
}
http {
include mime.types;
default_type application/octet-stream;
sendfile on;
keepalive_timeout 65;
#gzip
gzip on;
gzip_http_version 1.0;
gzip_comp_level 2;
gzip_proxied any;
gzip_min_length 1100;
gzip_buffers 16 8k;
gzip_types text/plain text/html text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript;
# Some versions of IE 6 don't handle compression well on some MIME types, so just disable it for them
gzip_disable "MSIE [1-6].(?!.*SV1)";
# Set a vary header so downstream proxies don't send cached gzipped content to IE6
gzip_vary on;
server {
listen 83;
server_name localhost dev.local;
root c:/local/path;
index index.php;
location / {
try_files $uri $uri/ /index.php;
}
#pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
location ~ \.php$ {
fastcgi_pass 127.0.0.1:521;
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_intercept_errors on;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
}
}
}

Absolute paths like the one you specified get translated into a /cygdrive/c/ path even if you don't have Cygwin installed. On Windows I suggest you use relative paths if possible, relative to the nginx directory.
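For reference, a minimal Windows-friendly server block that usually gets WordPress permalinks working is sketched below. It assumes the site lives in html/wordpress under the nginx prefix directory, and that your PHP FastCGI process really listens where fastcgi_pass points (the posted config passes to port 521 while its comment says 9000, which is worth double-checking); adjust the paths and port to your setup.

server {
    listen 83;
    server_name localhost dev.local;
    root html/wordpress;   # relative to the nginx prefix, as suggested above
    index index.php;
    location / {
        # anything that is not a real file or directory goes to WordPress's front controller
        try_files $uri $uri/ /index.php?$args;
    }
    location ~ \.php$ {
        fastcgi_pass 127.0.0.1:9000;   # must match the address php-cgi/php-fpm actually listens on
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}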

Related

NGINX 403 Forbidden at Root

Setting up a site on AWS Lightsail using the Linux/NGINX install from Bitnami.
The root folder (/opt/bitnami/nginx/html) contains index.html by default, and everything runs just fine. However, swapping that index file out for index.php returns 403 in Chrome and logs the following error...
*42 directory index of "/opt/bitnami/nginx/html/" is forbidden
index.php contains just <?php phpinfo(); ?>.
index.php is accessible in the browser by pointing to its path directly (site.com/index.php).
The contents of my config file nginx.conf are unmodified and as follows...
user daemon daemon;
worker_processes auto;
error_log "/opt/bitnami/nginx/logs/error.log";
events {
use epoll;
worker_connections 1024;
multi_accept on;
}
http {
include mime.types;
default_type application/octet-stream;
client_body_temp_path "/opt/bitnami/nginx/tmp/client_body" 1 2;
proxy_temp_path "/opt/bitnami/nginx/tmp/proxy" 1 2;
fastcgi_temp_path "/opt/bitnami/nginx/tmp/fastcgi" 1 2;
scgi_temp_path "/opt/bitnami/nginx/tmp/scgi" 1 2;
uwsgi_temp_path "/opt/bitnami/nginx/tmp/uwsgi" 1 2;
access_log "/opt/bitnami/nginx/logs/access.log";
sendfile on;
keepalive_timeout 65;
gzip on;
gzip_http_version 1.1;
gzip_comp_level 2;
gzip_proxied any;
gzip_vary on;
gzip_types text/plain
text/xml
text/css
text/javascript
application/json
application/javascript
application/x-javascript
application/ecmascript
application/xml
application/rss+xml
application/atom+xml
application/rdf+xml
application/xml+rss
application/xhtml+xml
application/x-font-ttf
application/x-font-opentype
application/vnd.ms-fontobject
image/svg+xml
image/x-icon
application/atom_xml;
gzip_buffers 16 8k;
add_header X-Frame-Options SAMEORIGIN;
ssl_prefer_server_ciphers on;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:RSA+AESGCM:RSA+AES:!aNULL:!MD5:!DSS;
include "/opt/bitnami/nginx/conf/bitnami/bitnami.conf";
}
The contents of the included "/opt/bitnami/nginx/conf/bitnami/bitnami.conf" are as follows...
# HTTP server
server {
listen 80;
server_name localhost;
include "/opt/bitnami/nginx/conf/bitnami/phpfastcgi.conf";
}
The contents of the included "/opt/bitnami/nginx/conf/bitnami/phpfastcgi.conf" are as follows...
location ~ \.php$ {
root html;
fastcgi_read_timeout 300;
fastcgi_pass unix:/opt/bitnami/php/var/run/www.sock;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $request_filename;
include fastcgi_params;
}
Note: I also tried adding index index.php to the above.
Any ideas as to what might be going on here?
NOTE: While troubleshooting, I tried a stripped-down alternative to the nginx.conf file referenced above. It resolved the 403 error, but then visiting the root would do nothing other than download the index.php file (nothing in the stripped config passes .php requests to PHP-FPM, so nginx serves the file as a plain download)...
user daemon daemon;
worker_processes auto;
error_log "/opt/bitnami/nginx/logs/error.log";
events {
use epoll;
worker_connections 1024;
multi_accept on;
}
http {
include mime.types;
default_type application/octet-stream;
client_body_temp_path "/opt/bitnami/nginx/tmp/client_body" 1 2;
proxy_temp_path "/opt/bitnami/nginx/tmp/proxy" 1 2;
fastcgi_temp_path "/opt/bitnami/nginx/tmp/fastcgi" 1 2;
scgi_temp_path "/opt/bitnami/nginx/tmp/scgi" 1 2;
uwsgi_temp_path "/opt/bitnami/nginx/tmp/uwsgi" 1 2;
access_log "/opt/bitnami/nginx/logs/access.log";
sendfile on;
keepalive_timeout 65;
server {
listen 80;
server_name localhost;
location / {
root html;
index index.php;
}
}
}
Resolved by updating "/opt/bitnami/nginx/conf/bitnami/phpfastcgi.conf" as follows...
location / {
root html;
index index.php;
}
location ~ \.php$ {
root html;
fastcgi_read_timeout 300;
fastcgi_pass unix:/opt/bitnami/php/var/run/www.sock;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $request_filename;
include fastcgi_params;
}
Not my proudest moment, I'll admit.
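For context on why the 403 happened: the stock bitnami.conf has no index directive and no location / of its own, so a request for / falls back to nginx's built-in default index (index.html). With that file gone and autoindex off, nginx refuses to list the directory and returns 403. Adding a location / with index index.php, as above, is exactly the fix. If the end goal is WordPress-style pretty permalinks rather than a bare phpinfo() page, a try_files fallback is usually added as well; a sketch of that variant (same html root as the Bitnami layout, not Bitnami's shipped file):

location / {
    root html;
    index index.php;
    # anything that is not a real file or directory goes to the front controller
    try_files $uri $uri/ /index.php?$args;
}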

PHP5-FPM keeps crashing on new micro instance, log is blank

I'm new to all of this, but I can't keep my newly spun-up micro EC2 server (running WordPress) up and running. The PHP-FPM log only has this, with logging set to debug:
[17-Oct-2016 15:46:38] NOTICE: configuration file /etc/php5/fpm/php-fpm.conf test is successful
My nginx log is continuously filling with errors trying to connect to php5-fpm.sock (hundreds of entries per minute even though there is no one else accessing the site).
2016/10/17 16:32:16 [error] 26389#0: *7298 connect() to unix:/var/run/php5-fpm.sock failed (11: Resource temporarily unavailable) while connecting to upstream, client: 191.96.249.80, server: mysiteredacted.com, request: "POST /xmlrpc.php HTTP/1.0", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "removed"
After restarting nginx and PHP-FPM the site works for a few minutes before throwing 502 Bad Gateway errors until I restart them both again.
I don't know where to begin with this. Here is my nginx config file:
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
#tcp_nopush on;
keepalive_timeout 65;
#gzip on;
port_in_redirect off;
gzip on;
gzip_types text/css text/xml text/javascript application/x-javascript;
gzip_vary on;
include /etc/nginx/conf.d/*.conf;
}
Which also includes this file in the /conf.d folder:
server {
## Your website name goes here.
server_name mysiteredacted.com www.mysiteredacted.com;
## Your only path reference.
root /var/www/;
listen 80;
## This should be in your http block and if it is, it's not needed here.
index index.html index.htm index.php;
include conf.d/drop;
location / {
# This is cool because no php is touched for static content
try_files $uri $uri/ /index.php?q=$uri&$args;
}
location ~ \.php$ {
fastcgi_buffers 8 256k;
fastcgi_buffer_size 128k;
fastcgi_intercept_errors on;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
# fastcgi_pass unix:/dev/shm/php-fpm-www.sock;
fastcgi_pass unix:/var/run/php5-fpm.sock;
}
location ~* \.(css|js|png|jpg|jpeg|gif|ico)$ {
expires 1d;
}
}
The second file has this line:
fastcgi_pass unix:/var/run/php5-fpm.sock;
If that socket file does not exist, nginx will throw this error.
Check this previous question: How to find my php-fpm.sock?
After hours of searching I finally figured it out. It turns out it was some sort of brute-force attack on /xmlrpc.php, as indicated by the thousands of "POST /xmlrpc.php HTTP/1.0" requests.
It's a common WordPress attack. Thanks all.
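If you don't rely on XML-RPC (Jetpack, the WordPress mobile apps and some remote-publishing tools use it), a common mitigation is to stop nginx from handing those requests to PHP-FPM at all, which also takes the brute-force load off the php5-fpm socket. A sketch of a block that could sit inside the server block above:

location = /xmlrpc.php {
    # drop the brute-force traffic before it ever reaches PHP
    deny all;
    access_log off;
    log_not_found off;
}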

Trouble with Nginx Rewrite Rules

I'm fairly new to Nginx, and I'm working on converting an .htaccess file into something nginx can make sense of. Everything's working well (mostly) - I can pull up the homepage just fine. The problem is when I get to a post page.
Think WordPress-style URLs, like:
http://www.example.com/12/post-title-in-slug-form
Where 12 is the post id and that string is obviously the post slug. I'm trying to parse those as two separate arguments (id & slug) and pass them into index.php, as I was successfully doing in Apache. I'm getting a 404 page, though, and have confirmed it is because of the rewrite rule. Here's what the entire config file looks like, with only the website name changed (for privacy):
user www-data;
worker_processes 4;
pid /run/nginx.pid;
events {
worker_connections 768;
# multi_accept on;
}
http {
##
# Basic Settings
##
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
# server_tokens off;
# server_names_hash_bucket_size 64;
# server_name_in_redirect off;
include /etc/nginx/mime.types;
default_type application/octet-stream;
##
# Logging Settings
##
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
##
# Gzip Settings
##
gzip on;
gzip_disable "msie6";
# gzip_vary on;
# gzip_proxied any;
# gzip_comp_level 6;
# gzip_buffers 16 8k;
# gzip_http_version 1.1;
# gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
##
# nginx-naxsi config
##
# Uncomment it if you installed nginx-naxsi
##
#include /etc/nginx/naxsi_core.rules;
##
# nginx-passenger config
##
# Uncomment it if you installed nginx-passenger
##
#passenger_root /usr;
#passenger_ruby /usr/bin/ruby;
##
# Virtual Host Configs
##
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
server {
listen 80;
server_name example.com;
access_log off;
error_log on;
# deny access to .XYZ files
location ~ /\. {
return 403;
}
location ~ sitemap\.xml {
return 301 http://example.com/sitemap.php;
}
location ~ .php$ {
# Here you have to decide how to handle php
# Generic example configs below
# Uncomment and fix up one of the two options
# Option 1: Use FastCGI
fastcgi_index index.php;
include fastcgi_params;
fastcgi_pass unix:/var/run/php5-fpm.sock;
}
location / {
try_files $uri $uri/ @router;
}
location @router {
rewrite ^/([0-9]+)/?(.*)?/?$ /index.php?id=$1&slug=$2 last;
}
}
}
Please let me know if you can spot what's throwing it off when it comes to parsing the individual posts into ids and slugs and passing them. Thanks!
You should add a / at the beginning and a / before index.php, like this:
rewrite ^/([0-9]+)/?(.*)?/?$ /index.php?id=$1&slug=$2 last;
Note that I also used $1 and $2.
If what you posted is indeed the COMPLETE config file, then the setup is missing something to handle PHP files, as the regexp looks to be fine.
I actually think the config you posted cannot be the full one, or that there is something more fundamental going on: that config should have thrown errors and failed to load, and since you mentioned that your PHP was loading fine, it cannot be the posted config that is serving your website.
A better config is attached below.
FYI, try_files ABC XYZ last; is not valid syntax, and you need at least two options in try_files. Anyway, I fixed those in the posted config as well.
server {
listen 80;
server_name example.com;
access_log off;
error_log on;
# deny access to .XYZ files
location ~ /\. {
return 403;
}
location ~ sitemap\.xml {
return 301 http://example.com/sitemap.php;
}
location ~ .php$ {
# Here you have to decide how to handle php
# Generic example configs below
# Uncomment and fix up one of the two options
# Option 1: Use FastCGI
#fastcgi_index index.php;
#include fastcgi_params;
#fastcgi_pass unix:/var/run/php5-fpm.sock;
# Option 2: Pass to Apache
# Proxy_pass localhost:APACHE_PORT
}
location / {
try_files $uri $uri/ @router;
}
location @router {
rewrite ^/([0-9]+)/?(.*)/?$ /index.php?id=$1&slug=$2 last;
}
}
You will need to fix the PHP handling bit and choose which setup you want to implement.
I think though that you need to verify that you only have one instance of nginx running and that it is what is serving your site.
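One more detail worth checking in the posted config: location ~ .php$ uses an unescaped dot, so the regex matches any URI ending in any character followed by php (for example /fooXphp), not just real .php files. Escaping it is cheap insurance; a sketch of the corrected location, assuming you go with the FastCGI option:

location ~ \.php$ {
    fastcgi_index index.php;
    include fastcgi_params;
    fastcgi_pass unix:/var/run/php5-fpm.sock;
}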

Optimize nginx with minify for Wordpress

I recently rented a vServer and now I am playing around with nginx, FastCGI cache and my WordPress setup. It's running pretty fast right now, but in every speed test my JS and CSS files come up. Is there some kind of minifying implemented in nginx I could use? Also, my picture galleries have a lot of pictures; is there something else I could use to increase performance? (All pictures are already stripped down to a minimum file size.)
This is my nginx.conf:
user www-data;
worker_processes 1;
pid /var/run/nginx.pid;
events {
worker_connections 768;
# multi_accept on;
}
http {
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 10;
types_hash_max_size 2048;
include /etc/nginx/mime.types;
default_type application/octet-stream;
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
gzip on;
gzip_disable "msie6";
gzip_vary on;
gzip_proxied any;
gzip_comp_level 6;
gzip_buffers 16 8k;
gzip_http_version 1.1;
gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
and my host file:
fastcgi_cache_path /etc/nginx/cache levels=1:2 keys_zone=MYAPP:100m inactive=60m;
fastcgi_cache_key "$scheme$request_method$host$request_uri";
server {
listen 80;
root /var/www/blog;
index index.php;
server_name IPADRESS;
location / {
try_files $uri $uri/ /index.php?$args;
#Cache everything by default
set $no_cache 0;
#Don't cache POST requests
if ($request_method = POST)
{
set $no_cache 1;
}
#Don't cache if the URL contains a query string
if ($query_string != "")
{
set $no_cache 1;
}
#Don't cache the following URLs
if ($request_uri ~* "/(administrator/|login.php)")
{
set $no_cache 1;
}
#Don't cache if there is a cookie called PHPSESSID
if ($http_cookie = "PHPSESSID")
{
set $no_cache 1;
}
}
# pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
location ~ .php$ {
try_files $uri =404;
fastcgi_pass unix:/var/run/php5-fpm.sock;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
fastcgi_cache MYAPP;
fastcgi_cache_valid 200 60m;
}
location ~* \.(js|css|png|jpg|jpeg|gif|ico)(\?ver=[0-9.]+)?$ {
expires 365d;
}
}
I don't want to install a plugin just for this stuff (I want to keep WordPress as simple as possible), so I am looking for the best basic setup for WordPress.
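Two notes on the posted setup. Stock nginx does not ship a minifier, so the usual approach is to minify JS/CSS at deploy time and, if your nginx build includes the gzip_static module, drop pre-compressed .gz copies next to the originals so nginx can serve them directly. Separately, the $no_cache flag computed in location / is never applied to the PHP block, so the FastCGI cache currently ignores it. A sketch of both additions against the host file above (the directives are standard nginx; whether gzip_static is available depends on how your nginx was built):

# inside the existing location ~ .php$ block, next to fastcgi_cache MYAPP;
fastcgi_cache_bypass $no_cache;   # serve fresh content when $no_cache is set to 1
fastcgi_no_cache $no_cache;       # and do not store the response either

# a static-asset block that prefers pre-minified, pre-gzipped files
location ~* \.(js|css)$ {
    gzip_static on;               # serve foo.css.gz if it exists beside foo.css
    expires 365d;
}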

Reduce memory consumption in PHP while handling uploads by php input

I have nginx 1.0.5 + php-cgi (PHP 5.3.6) running.
I need to upload ~1GB files (1-5 parallel uploads must be supported).
I'm trying to implement uploading of big files through AJAX. Everything works, but PHP eats a lot of memory for each upload. I have set memory_limit = 200M, but that only works up to an uploaded file size of ~150MB. If the file is bigger, the upload fails. I could keep raising memory_limit, but I think that's the wrong approach, because PHP could eat all the memory.
I use this PHP code (it's simplified) to handle uploads on the server side:
$input = fopen('php://input', 'rb');
$file = fopen('/tmp/' . $_GET['file'] . microtime(), 'wb');
while (!feof($input)) {
fwrite($file, fread($input, 102400));
}
fclose($input);
fclose($file);
/etc/nginx/nginx.conf:
user www-data;
worker_processes 100;
pid /var/run/nginx.pid;
events {
worker_connections 768;
# multi_accept on;
}
http {
##
# Basic Settings
##
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
client_max_body_size 2g;
# server_tokens off;
server_names_hash_max_size 2048;
server_names_hash_bucket_size 128;
# server_names_hash_bucket_size 64;
# server_name_in_redirect off;
include /etc/nginx/mime.types;
default_type application/octet-stream;
##
# Logging Settings
##
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
##
# Gzip Settings
##
gzip on;
gzip_disable "msie6";
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
/etc/nginx/sites-enabled/srv.conf:
server {
listen 80;
server_name srv.project.loc;
# Define root
set $fs_webroot "/home/andser/public_html/project/srv";
root $fs_webroot;
index index.php;
# robots.txt
location = /robots.txt {
alias $fs_webroot/deny.robots.txt;
}
# Domain root
location / {
if ($request_method = OPTIONS ) {
add_header Access-Control-Allow-Origin "http://project.loc";
add_header Access-Control-Allow-Methods "GET, OPTIONS, POST";
add_header Access-Control-Allow-Headers "Authorization,X-Requested-With,X-File-Name,Content-Type";
#add_header Access-Control-Allow-Headers "*";
add_header Access-Control-Allow-Credentials "true";
add_header Access-Control-Max-Age "10000";
add_header Content-Length 0;
add_header Content-Type text/plain;
return 200;
}
try_files $uri $uri/ /index.php?$query_string;
}
#error_page 404 /404.htm
location ~ index.php {
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $fs_webroot/$fastcgi_script_name;
include fastcgi_params;
fastcgi_param REQUEST_METHOD $request_method;
fastcgi_param PATH_INFO $fastcgi_script_name;
add_header Pragma no-cache;
add_header Cache-Control no-cache,must-revalidate;
add_header Access-Control-Allow-Origin *;
#add_header Access-Control-Allow-Headers "Content-Type, X-Requested-With, X-File-Name";
}
}
Does anybody know a way to reduce memory consumption by PHP?
Thanks.
There's a hack, which is about faking the Content-Type header, turning it from application/octet-stream into multipart/form-data. That will stop PHP from populating $HTTP_RAW_POST_DATA. More details: https://github.com/valums/file-uploader/issues/61.
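Along similar lines, nginx itself can keep the request body out of PHP's memory entirely by spooling it to a temp file and handing only the file path to the backend. The directives below are standard nginx (client_body_in_file_only, client_body_temp_path, fastcgi_pass_request_body, the $request_body_file variable); the /upload location and the handle_upload.php script that reads the named file instead of php://input are hypothetical, so treat this as a sketch rather than a drop-in config:

location = /upload {
    client_body_in_file_only clean;               # nginx writes the body to a temp file, removed after the request
    client_body_temp_path /tmp/nginx_uploads;     # where those temp files live
    fastcgi_pass 127.0.0.1:9000;
    fastcgi_param SCRIPT_FILENAME $fs_webroot/handle_upload.php;  # hypothetical handler script
    fastcgi_param X_FILE $request_body_file;      # the handler copies/processes this file during the request
    fastcgi_pass_request_body off;                # do not stream the body to PHP at all
    include fastcgi_params;
}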
I have been in the same shoes before, and this is what I did: split the files into chunks during the upload process.
A good example is Plupload (http://www.plupload.com/index.php), or try using a Java applet such as JUpload (http://jupload.sourceforge.net), which also has resume capability when there are network issues, etc.
The most important thing is that if you want your files uploaded via a web browser, there is nothing stopping you from doing so in chunks.
Why don't you try using Flash to upload huge files? For example, you could try SWFUpload, which has good support for PHP.
