I have an nginx server running in Docker. When I make changes to a JS or CSS file, the changes only show up after 30-60 seconds, even when force-refreshing in the browser (yes, the browser cache is turned off). How can I make them appear immediately? My system is Ubuntu 17.
nginx.conf
user www-data;
worker_processes 4;
pid /run/nginx.pid;
events {
worker_connections 2048;
multi_accept on;
use epoll;
}
http {
server_tokens off;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 15;
types_hash_max_size 2048;
include /etc/nginx/mime.types;
default_type application/octet-stream;
access_log off;
error_log off;
gzip on;
gzip_disable "msie6";
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-available/*;
open_file_cache max=100;
client_max_body_size 4M;
}
daemon off;
And the server config:
server {
server_name l.site;
root /var/www/site;
index index.php;
location / {
try_files $uri @rewriteapp;
}
location @rewriteapp {
if (!-f $request_filename){
set $rule_0 1$rule_0;
}
if (!-d $request_filename){
set $rule_0 2$rule_0;
}
if ($request_filename !~ "-l"){
set $rule_0 3$rule_0;
}
if ($rule_0 = "321"){
rewrite ^/(.*)$ /index.php?url=$1 last;
}
}
# from UPDATE #1 ->
location ~* \.(?:css|js)$ {
expires off;
# don't cache it
proxy_no_cache 1;
# even if cached, don't try to use it
proxy_cache_bypass 1;
}
# <- from UPDATE #1
location ~ \.php(/|$) {
fastcgi_pass php-upstream;
fastcgi_split_path_info ^(.+\.php)(/.*)$;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param HTTPS off;
}
error_log /var/log/nginx/site_error.log;
access_log /var/log/nginx/site_access.log;
}
UPDATE #1
Added this to the server block, and it still does not show me the updated files in the browser right after a code change.
location ~* \.(?:css|js)$ {
expires off;
# don't cache it
proxy_no_cache 1;
# even if cached, don't try to use it
proxy_cache_bypass 1;
}
UPDATE #2
Used THIS configuration, and still... it didn't help.
location / {
add_header 'Cache-Control' 'no-store, no-cache, must-revalidate, proxy-revalidate, max-age=0';
expires off;
}
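(A side note on UPDATE #2: nginx gives regex locations precedence over the plain prefix location /, so with the location ~* \.(?:css|js)$ block from UPDATE #1 in place, these Cache-Control headers are never attached to CSS/JS responses. To test no-cache headers on the static files themselves, they would have to live in the regex location, roughly:

location ~* \.(?:css|js)$ {
    expires off;
    add_header Cache-Control 'no-store, no-cache, must-revalidate, proxy-revalidate, max-age=0';
}

Also note that proxy_no_cache and proxy_cache_bypass only affect nginx's own proxy cache, which is not involved when nginx serves these files straight from disk.)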
Link to the newest versions of the files:
https://gist.github.com/ktrzos/1bbf2fd0161ce0e20541ccb18fe066a5
Try disabling the cache using .htaccess. This is code from my live website; it should work.
<FilesMatch "\.(html|htm|js|css|php)$">
FileETag None
Header unset ETag
Header set Cache-Control "max-age=0, no-cache, no-store, must-revalidate"
Header set Pragma "no-cache"
Header set Expires "Wed, 11 Jan 1984 05:00:00 GMT"
</FilesMatch>
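Note that nginx itself never reads .htaccess files, so the snippet above only applies if Apache sits in front. Translated to nginx directives, the same idea would look roughly like this (leaving .php out of the pattern so the existing fastcgi location keeps handling PHP):

location ~* \.(?:html|htm|js|css)$ {
    etag off;
    add_header Cache-Control "max-age=0, no-cache, no-store, must-revalidate";
    add_header Pragma "no-cache";
    add_header Expires "Wed, 11 Jan 1984 05:00:00 GMT";
}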
When using Docker on a VM (VirtualBox or similar), change the nginx.conf property sendfile to off; sendfile does not play well with VirtualBox shared folders and keeps serving stale copies of changed files.
http {
server_tokens off;
sendfile off;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 15;
types_hash_max_size 2048;
include /etc/nginx/mime.types;
default_type application/octet-stream;
access_log off;
error_log off;
gzip on;
gzip_disable "msie6";
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-available/*;
open_file_cache max=100;
client_max_body_size 4M;
}
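Another suspect in the original nginx.conf is the open_file_cache max=100; line: with it, nginx caches file descriptors and metadata, and open_file_cache_valid defaults to 60 seconds, which lines up with the observed 30-60 second delay. For development it can simply be turned off:

open_file_cache off;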
Related
I am setting up a site on AWS Lightsail using the Linux/NGINX install from Bitnami.
The root folder (/opt/bitnami/nginx/html) contains index.html by default, and everything runs just fine. However, swapping that index file out for index.php returns 403 in Chrome and logs the following error...
*42 directory index of "/opt/bitnami/nginx/html/" is forbidden
index.php contains just <?php phpinfo(); ?>.
index.php is accessible in the browser by pointing to its path directly (site.com/index.php).
The contents of my nginx.conf config file are unmodified and are as follows...
user daemon daemon;
worker_processes auto;
error_log "/opt/bitnami/nginx/logs/error.log";
events {
use epoll;
worker_connections 1024;
multi_accept on;
}
http {
include mime.types;
default_type application/octet-stream;
client_body_temp_path "/opt/bitnami/nginx/tmp/client_body" 1 2;
proxy_temp_path "/opt/bitnami/nginx/tmp/proxy" 1 2;
fastcgi_temp_path "/opt/bitnami/nginx/tmp/fastcgi" 1 2;
scgi_temp_path "/opt/bitnami/nginx/tmp/scgi" 1 2;
uwsgi_temp_path "/opt/bitnami/nginx/tmp/uwsgi" 1 2;
access_log "/opt/bitnami/nginx/logs/access.log";
sendfile on;
keepalive_timeout 65;
gzip on;
gzip_http_version 1.1;
gzip_comp_level 2;
gzip_proxied any;
gzip_vary on;
gzip_types text/plain
text/xml
text/css
text/javascript
application/json
application/javascript
application/x-javascript
application/ecmascript
application/xml
application/rss+xml
application/atom+xml
application/rdf+xml
application/xml+rss
application/xhtml+xml
application/x-font-ttf
application/x-font-opentype
application/vnd.ms-fontobject
image/svg+xml
image/x-icon
application/atom_xml;
gzip_buffers 16 8k;
add_header X-Frame-Options SAMEORIGIN;
ssl_prefer_server_ciphers on;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:RSA+AESGCM:RSA+AES:!aNULL:!MD5:!DSS;
include "/opt/bitnami/nginx/conf/bitnami/bitnami.conf";
}
The contents of the included "/opt/bitnami/nginx/conf/bitnami/bitnami.conf" are as follows...
# HTTP server
server {
listen 80;
server_name localhost;
include "/opt/bitnami/nginx/conf/bitnami/phpfastcgi.conf";
}
The contents of the included "/opt/bitnami/nginx/conf/bitnami/phpfastcgi.conf" are as follows...
location ~ \.php$ {
root html;
fastcgi_read_timeout 300;
fastcgi_pass unix:/opt/bitnami/php/var/run/www.sock;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $request_filename;
include fastcgi_params;
}
Note: I also tried adding index index.php to the above.
Any ideas as to what might be going on here?
NOTE: While troubleshooting, I tried a stripped-down alternative to the nginx.conf referenced above, which resolved the 403 error but would do nothing other than download the index.php file when visiting the root...
user daemon daemon;
worker_processes auto;
error_log "/opt/bitnami/nginx/logs/error.log";
events {
use epoll;
worker_connections 1024;
multi_accept on;
}
http {
include mime.types;
default_type application/octet-stream;
client_body_temp_path "/opt/bitnami/nginx/tmp/client_body" 1 2;
proxy_temp_path "/opt/bitnami/nginx/tmp/proxy" 1 2;
fastcgi_temp_path "/opt/bitnami/nginx/tmp/fastcgi" 1 2;
scgi_temp_path "/opt/bitnami/nginx/tmp/scgi" 1 2;
uwsgi_temp_path "/opt/bitnami/nginx/tmp/uwsgi" 1 2;
access_log "/opt/bitnami/nginx/logs/access.log";
sendfile on;
keepalive_timeout 65;
server {
listen 80;
server_name localhost;
location / {
root html;
index index.php;
}
}
}
Resolved by updating "/opt/bitnami/nginx/conf/bitnami/phpfastcgi.conf" as follows...
location / {
root html;
index index.php;
}
location ~ \.php$ {
root html;
fastcgi_read_timeout 300;
fastcgi_pass unix:/opt/bitnami/php/var/run/www.sock;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $request_filename;
include fastcgi_params;
}
Not my proudest moment, I'll admit.
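For anyone hitting the same 403: nginx's default index is index.html, so with only index.php in the web root there was nothing to serve for /. An alternative minimal sketch (untested against the Bitnami layout) is to set the index at server level in bitnami.conf, where it applies to directory requests; adding index inside location ~ \.php$, as tried earlier, has no effect because a request for / never matches that location:

server {
    listen 80;
    server_name localhost;
    root html;
    index index.php;  # without this, nginx looks only for index.html and returns 403
    include "/opt/bitnami/nginx/conf/bitnami/phpfastcgi.conf";
}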
On jumpstarter.io I have a PHP project.
I want to handle files larger than the current limit over HTTP [PHP]; I need a file size limit of 30 MB for POSTed files.
I can't configure the http {} block of the nginx.conf file, normally located at /etc/nginx/nginx.conf, or php.ini.
Editable: nginx configuration
server {
listen 0.0.0.0;
root /var/www;
index index.php index.html index.htm;
location / {
try_files $uri $uri/ /index.php?$args;
}
location = /favicon.ico {
log_not_found off;
access_log off;
}
location = /robots.txt {
allow all;
log_not_found off;
access_log off;
}
# location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
# expires 2w;
# }
location ~* \.php$ {
fastcgi_index index.php;
fastcgi_pass unix:/var/run/php-fpm.sock;
include /etc/nginx/fastcgi_params;
fastcgi_param PHP_VALUE "
display_errors=On
display_startup_errors=On
error_reporting=30719
";
}
}
Default: /etc/nginx/nginx.conf (can't edit)
user http;
worker_processes 2;
pid /var/run/nginx.pid;
daemon off;
events {
worker_connections 768;
}
http {
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
client_max_body_size 20m;
client_body_buffer_size 128k;
include /etc/nginx/mime.types;
# logging
access_log /var/log/nginx-access.log;
error_log /var/log/nginx-error.log;
# gzip
gzip on;
gzip_disable "msie6";
# vhost
include /etc/nginx/sites-enabled/*;
}
Can anyone help me? Thanks.
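One detail that may help: client_max_body_size is valid not only in http {} but also at server and location level, and the most specific context wins, so the editable server block can override the locked-down 20m default. A sketch (untested on jumpstarter.io, and it assumes their PHP-FPM honors PHP_VALUE the same way the error-reporting block above does):

server {
    listen 0.0.0.0;
    root /var/www;
    index index.php index.html index.htm;
    client_max_body_size 30m;  # overrides the 20m set in the read-only http {} block

    # (other locations unchanged)

    location ~* \.php$ {
        fastcgi_index index.php;
        fastcgi_pass unix:/var/run/php-fpm.sock;
        include /etc/nginx/fastcgi_params;
        # raise PHP's own limits the same way display_errors is set above
        fastcgi_param PHP_VALUE "
            upload_max_filesize=30M
            post_max_size=30M
        ";
    }
}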
I recently rented a vServer and am now playing around with nginx, FastCGI cache and my WordPress setup. It's running pretty fast, but every speed test flags my JS and CSS files. Is there some kind of minifying implemented in nginx that I could use? Also, my picture galleries contain a lot of pictures; is there something else I could use to increase performance? (All pictures are already stripped down to a minimal file size.)
This is my nginx.conf:
user www-data;
worker_processes 1;
pid /var/run/nginx.pid;
events {
worker_connections 768;
# multi_accept on;
}
http {
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 10;
types_hash_max_size 2048;
include /etc/nginx/mime.types;
default_type application/octet-stream;
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
gzip on;
gzip_disable "msie6";
gzip_vary on;
gzip_proxied any;
gzip_comp_level 6;
gzip_buffers 16 8k;
gzip_http_version 1.1;
gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
and my virtual host file:
fastcgi_cache_path /etc/nginx/cache levels=1:2 keys_zone=MYAPP:100m inactive=60m;
fastcgi_cache_key "$scheme$request_method$host$request_uri";
server {
listen 80;
root /var/www/blog;
index index.php;
server_name IPADRESS;
location / {
try_files $uri $uri/ /index.php?$args;
#Cache everything by default
set $no_cache 0;
#Don't cache POST requests
if ($request_method = POST)
{
set $no_cache 1;
}
#Don't cache if the URL contains a query string
if ($query_string != "")
{
set $no_cache 1;
}
#Don't cache the following URLs
if ($request_uri ~* "/(administrator/|login.php)")
{
set $no_cache 1;
}
#Don't cache if there is a cookie called PHPSESSID
if ($http_cookie = "PHPSESSID")
{
set $no_cache 1;
}
}
# pass the PHP scripts to the FastCGI server listening on the php5-fpm socket
location ~ \.php$ {
try_files $uri =404;
fastcgi_pass unix:/var/run/php5-fpm.sock;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
fastcgi_cache MYAPP;
fastcgi_cache_valid 200 60m;
}
location ~* \.(js|css|png|jpg|jpeg|gif|ico)(\?ver=[0-9.]+)?$ {
expires 365d;
}
}
I don't want to install a plugin just for this (keeping WordPress as simple as possible), so I am looking for the best basic setup for WordPress.
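Two notes on the host file above. First, stock nginx has no built-in minifier; gzip (already enabled) compresses responses on the fly but does not minify them, so minification has to happen at build or deploy time. Second, $no_cache is computed in location / but never wired into the cache, so those if blocks currently do nothing; for the bypass logic to take effect, the PHP location needs to reference the variable, roughly like this (a sketch, untested against this exact setup):

location ~ \.php$ {
    try_files $uri =404;
    fastcgi_pass unix:/var/run/php5-fpm.sock;
    fastcgi_index index.php;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    include fastcgi_params;
    fastcgi_cache MYAPP;
    fastcgi_cache_valid 200 60m;
    fastcgi_cache_bypass $no_cache;  # don't serve from cache when flagged
    fastcgi_no_cache $no_cache;      # don't store in cache when flagged
}

Also, if ($http_cookie = "PHPSESSID") is an exact string comparison against the whole Cookie header; a substring match like if ($http_cookie ~* "PHPSESSID") is presumably what was intended.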
I am trying to set up nginx as a caching reverse proxy; however, it appears that every request is being sent to the backend server and nothing is being cached, i.e. the server logs on the backend show all the same file accesses.
Most of the files are either PHP with arguments passed on the URL, or images, all of which are fetched from the backend every time and never cached. Everything on this site can be cached.
My conf.d/default.conf
upstream xxxx {
server xxxx.com;
}
#
# The default server
#
server {
listen 80 default_server;
server_name _;
access_log /var/log/nginx/log/access.log main;
error_log /var/log/nginx/log/error.log;
root /usr/share/nginx/html;
index index.html index.htm;
location / {
## send request back to xxxx ##
proxy_pass http://xxxx;
proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
# expires 24h;
# add_header Cache-Control public;
proxy_ignore_headers Cache-Control Expires;
proxy_redirect off;
proxy_buffering off;
proxy_cache one;
proxy_cache_key backend$request_uri;
proxy_cache_valid 200 301 302 1440m;
proxy_cache_valid 404 1m;
proxy_cache_valid any 1440m;
proxy_cache_use_stale error timeout invalid_header updating;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
}
and my nginx.conf file
user nginx;
worker_processes 8;
worker_rlimit_nofile 8192;
error_log /var/log/nginx/error.log;
#error_log /var/log/nginx/error.log notice;
#error_log /var/log/nginx/error.log info;
pid /var/run/nginx.pid;
events {
worker_connections 2048;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
server_names_hash_bucket_size 64;
sendfile on;
tcp_nopush on;
tcp_nodelay off;
#keepalive_timeout 0;
keepalive_timeout 65;
gzip on;
gzip_comp_level 9;
gzip_proxied any;
proxy_buffering on;
proxy_cache_path /usr/local/nginx/proxy levels=1:2 keys_zone=one:1024m inactive=7d max_size=700g;
proxy_temp_path /tmp/nginx/proxy;
proxy_buffer_size 4k;
proxy_buffers 100 8k;
proxy_connect_timeout 60;
proxy_send_timeout 60;
proxy_read_timeout 60;
include /etc/nginx/conf.d/*.conf;
}
Can anybody tell me what I've got wrong?
I ran into this problem as well, and I found that
proxy_buffering off;
will cause nginx to bypass the cache and not save files to disk.
Remove that line and then it works.
Your upstream server's responses must be setting cookies; please see
https://stackoverflow.com/a/10995522/482926
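Putting those two answers together: proxy_cache only stores responses when buffering is enabled, and by default nginx refuses to cache any response that carries a Set-Cookie header. A sketch of the cache-relevant part of the location block with both issues addressed (ignoring Set-Cookie is only safe here because, as stated, everything on this site is cacheable):

location / {
    proxy_pass http://xxxx;
    proxy_buffering on;  # caching requires buffering; "off" disabled it entirely
    # don't let upstream headers (including cookies) veto caching
    proxy_ignore_headers Cache-Control Expires Set-Cookie;
    proxy_cache one;
    proxy_cache_key backend$request_uri;
    proxy_cache_valid 200 301 302 1440m;
    proxy_cache_valid 404 1m;
    proxy_set_header Host $host;
}

If responses really do set cookies, consider also proxy_hide_header Set-Cookie so one client's cookie is not cached and replayed to other clients.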
I have nginx 1.0.5 + php-cgi (PHP 5.3.6) running.
I need to upload ~1GB files (there must be 1-5 parallel uploads).
I am trying to implement uploading of big files through AJAX. Everything works, but PHP eats a lot of memory for each upload. I have set memory_limit = 200M, but it only works up to an uploaded file size of ~150MB; if the file is bigger, the upload fails. I could set memory_limit bigger and bigger, but I think that's the wrong way, because PHP could eat all the memory.
I use this PHP code (it's simplified) to handle uploads on the server side:
$input = fopen('php://input', 'rb'); // raw request body stream
$file = fopen('/tmp/' . $_GET['file'] . microtime(), 'wb'); // unique destination file
while (!feof($input)) {
fwrite($file, fread($input, 102400)); // copy in ~100 KB chunks
}
fclose($input);
fclose($file);
/etc/nginx/nginx.conf:
user www-data;
worker_processes 100;
pid /var/run/nginx.pid;
events {
worker_connections 768;
# multi_accept on;
}
http {
##
# Basic Settings
##
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
client_max_body_size 2g;
# server_tokens off;
server_names_hash_max_size 2048;
server_names_hash_bucket_size 128;
# server_names_hash_bucket_size 64;
# server_name_in_redirect off;
include /etc/nginx/mime.types;
default_type application/octet-stream;
##
# Logging Settings
##
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
##
# Gzip Settings
##
gzip on;
gzip_disable "msie6";
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
/etc/nginx/sites-enabled/srv.conf:
server {
listen 80;
server_name srv.project.loc;
# Define root
set $fs_webroot "/home/andser/public_html/project/srv";
root $fs_webroot;
index index.php;
# robots.txt
location = /robots.txt {
alias $fs_webroot/deny.robots.txt;
}
# Domain root
location / {
if ($request_method = OPTIONS ) {
add_header Access-Control-Allow-Origin "http://project.loc";
add_header Access-Control-Allow-Methods "GET, OPTIONS, POST";
add_header Access-Control-Allow-Headers "Authorization,X-Requested-With,X-File-Name,Content-Type";
#add_header Access-Control-Allow-Headers "*";
add_header Access-Control-Allow-Credentials "true";
add_header Access-Control-Max-Age "10000";
add_header Content-Length 0;
add_header Content-Type text/plain;
return 200;
}
try_files $uri $uri/ /index.php?$query_string;
}
#error_page 404 /404.htm
location ~ index.php {
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $fs_webroot/$fastcgi_script_name;
include fastcgi_params;
fastcgi_param REQUEST_METHOD $request_method;
fastcgi_param PATH_INFO $fastcgi_script_name;
add_header Pragma no-cache;
add_header Cache-Control no-cache,must-revalidate;
add_header Access-Control-Allow-Origin *;
#add_header Access-Control-Allow-Headers "Content-Type, X-Requested-With, X-File-Name";
}
}
Does anybody know a way to reduce memory consumption by PHP?
Thanks.
There's a hack: fake the content type header, turning it from application/octet-stream into multipart/form-data. That stops PHP from populating $HTTP_RAW_POST_DATA. More details: https://github.com/valums/file-uploader/issues/61.
I have been in the same shoes before, and this is what I did: split the files into chunks during the upload process.
A good example is Plupload (http://www.plupload.com/index.php), or try using a Java applet such as http://jupload.sourceforge.net, which also has resume capability for when there are network issues, etc.
The most important thing: if you want your files uploaded via a web browser, there is nothing stopping you from doing so in chunks.
Why don't you try using Flash to upload huge files? For example, you can try SWFUpload, which has good support for PHP.
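Another angle that avoids PHP's memory entirely: nginx can spool the request body to a file itself and hand PHP just the path, so PHP never buffers the upload. A sketch (the directives are standard nginx; the /upload path, the REQUEST_BODY_FILE param name and handle_upload.php are made up for illustration):

location /upload {
    client_body_temp_path /tmp/nginx_upload;
    client_body_in_file_only on;   # always write the body to a temp file
    fastcgi_pass_request_body off; # don't stream the body to PHP at all
    fastcgi_param REQUEST_BODY_FILE $request_body_file;  # PHP reads the file from this path
    fastcgi_param SCRIPT_FILENAME $fs_webroot/handle_upload.php;  # hypothetical handler
    fastcgi_pass 127.0.0.1:9000;
    include fastcgi_params;
}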