I have nginx 1.0.5 + php-cgi (PHP 5.3.6) running.
I need to upload ~1 GB files (there may be 1-5 parallel uploads).
I'm trying to implement big-file uploads via AJAX. Everything works, but PHP eats a lot of memory for each upload. I have set memory_limit = 200M, yet it only works for uploads up to ~150 MB; if the file is bigger, the upload fails. I could keep raising memory_limit, but that feels like the wrong way, since PHP could end up eating all available memory.
I use this (simplified) PHP code to handle uploads on the server side:
$input = fopen('php://input', 'rb');                           // raw request body as a stream
$file  = fopen('/tmp/' . $_GET['file'] . microtime(), 'wb');   // destination file

// Copy in 100 KB blocks so the script itself never holds the whole file in memory.
while (!feof($input)) {
    fwrite($file, fread($input, 102400));
}

fclose($input);
fclose($file);
/etc/nginx/nginx.conf:
user www-data;
worker_processes 100;
pid /var/run/nginx.pid;
events {
worker_connections 768;
# multi_accept on;
}
http {
##
# Basic Settings
##
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
client_max_body_size 2g;
# server_tokens off;
server_names_hash_max_size 2048;
server_names_hash_bucket_size 128;
# server_names_hash_bucket_size 64;
# server_name_in_redirect off;
include /etc/nginx/mime.types;
default_type application/octet-stream;
##
# Logging Settings
##
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
##
# Gzip Settings
##
gzip on;
gzip_disable "msie6";
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
/etc/nginx/sites-enabled/srv.conf:
server {
listen 80;
server_name srv.project.loc;
# Define root
set $fs_webroot "/home/andser/public_html/project/srv";
root $fs_webroot;
index index.php;
# robots.txt
location = /robots.txt {
alias $fs_webroot/deny.robots.txt;
}
# Domain root
location / {
if ($request_method = OPTIONS ) {
add_header Access-Control-Allow-Origin "http://project.loc";
add_header Access-Control-Allow-Methods "GET, OPTIONS, POST";
add_header Access-Control-Allow-Headers "Authorization,X-Requested-With,X-File-Name,Content-Type";
#add_header Access-Control-Allow-Headers "*";
add_header Access-Control-Allow-Credentials "true";
add_header Access-Control-Max-Age "10000";
add_header Content-Length 0;
add_header Content-Type text/plain;
return 200;
}
try_files $uri $uri/ /index.php?$query_string;
}
#error_page 404 /404.htm
location ~ index.php {
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $fs_webroot/$fastcgi_script_name;
include fastcgi_params;
fastcgi_param REQUEST_METHOD $request_method;
fastcgi_param PATH_INFO $fastcgi_script_name;
add_header Pragma no-cache;
add_header Cache-Control no-cache,must-revalidate;
add_header Access-Control-Allow-Origin *;
#add_header Access-Control-Allow-Headers "Content-Type, X-Requested-With, X-File-Name";
}
}
Does anybody know a way to reduce PHP's memory consumption here?
Thanks.
There's a hack: fake the Content-Type header, changing it from application/octet-stream to multipart/form-data. That stops PHP from populating $HTTP_RAW_POST_DATA (which is what eats the memory). More details: https://github.com/valums/file-uploader/issues/61.
I have been in the same shoes before, and this is what I did: split the files into chunks during the upload process.
A good example is plupload (http://www.plupload.com/index.php), or try a Java applet such as http://jupload.sourceforge.net, which also supports resuming when there are network issues, etc.
The most important point: even though you want your files uploaded via a web browser, there is nothing stopping you from doing so in chunks.
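A minimal sketch of the receiving side (my own illustration, not taken from either library; it assumes the client sends the raw chunk bytes in the request body and the target name in ?file=..., and simply appends each chunk to a partial file so memory use stays constant):
<?php
$name = basename($_GET['file']);        // basename() guards against path traversal
$dest = '/tmp/' . $name . '.part';

$in  = fopen('php://input', 'rb');      // stream the chunk from the request body
$out = fopen($dest, 'ab');              // append this chunk to the partial file

// Copy in small blocks so memory stays constant regardless of chunk size.
while (!feof($in)) {
    fwrite($out, fread($in, 8192));
}

fclose($in);
fclose($out);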
Why don't you try using Flash to upload huge files? For example, you can try SWFUpload, which has good PHP support.
I've got an nginx server created with Docker. When I make changes to a JS or CSS file, they only show up after 30-60 seconds, even with force-refreshing in the browser (yes, the browser cache is turned off). How can I make them appear immediately? My system is Ubuntu 17.
nginx.conf
user www-data;
worker_processes 4;
pid /run/nginx.pid;
events {
worker_connections 2048;
multi_accept on;
use epoll;
}
http {
server_tokens off;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 15;
types_hash_max_size 2048;
include /etc/nginx/mime.types;
default_type application/octet-stream;
access_log off;
error_log off;
gzip on;
gzip_disable "msie6";
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-available/*;
open_file_cache max=100;
client_max_body_size 4M;
}
daemon off;
And the server config:
server {
server_name l.site;
root /var/www/site;
index index.php;
location / {
try_files $uri @rewriteapp;
}
location @rewriteapp {
if (!-f $request_filename){
set $rule_0 1$rule_0;
}
if (!-d $request_filename){
set $rule_0 2$rule_0;
}
if ($request_filename !~ "-l"){
set $rule_0 3$rule_0;
}
if ($rule_0 = "321"){
rewrite ^/(.*)$ /index.php?url=$1 last;
}
}
# from UPDATE #1 ->
location ~* \.(?:css|js)$ {
expires off;
# don't cache it
proxy_no_cache 1;
# even if cached, don't try to use it
proxy_cache_bypass 1;
}
# <- from UPDATE #1
location ~ \.php(/|$) {
fastcgi_pass php-upstream;
fastcgi_split_path_info ^(.+\.php)(/.*)$;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param HTTPS off;
}
error_log /var/log/nginx/site_error.log;
access_log /var/log/nginx/site_access.log;
}
UPDATE #1
Added this to the server block, and it still does not show me updated files in the browser right after a code change.
location ~* \.(?:css|js)$ {
expires off;
# don't cache it
proxy_no_cache 1;
# even if cached, don't try to use it
proxy_cache_bypass 1;
}
UPDATE #2
Used THIS configuration, and still... it didn't help.
location / {
add_header 'Cache-Control' 'no-store, no-cache, must-revalidate, proxy-revalidate, max-age=0';
expires off;
}
Link to newest versions of files:
https://gist.github.com/ktrzos/1bbf2fd0161ce0e20541ccb18fe066a5
Try to disable the cache using .htaccess. This is code from my live website; it should work.
<FilesMatch "\.(html|htm|js|css|php)$">
FileETag None
Header unset ETag
Header set Cache-Control "max-age=0, no-cache, no-store, must-revalidate"
Header set Pragma "no-cache"
Header set Expires "Wed, 11 Jan 1984 05:00:00 GMT"
</FilesMatch>
When using Docker on a VM (VirtualBox or similar), change the sendfile property in nginx.conf to off; with shared folders, sendfile on can keep serving stale file contents.
http {
server_tokens off;
sendfile off;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 15;
types_hash_max_size 2048;
include /etc/nginx/mime.types;
default_type application/octet-stream;
access_log off;
error_log off;
gzip on;
gzip_disable "msie6";
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-available/*;
open_file_cache max=100;
client_max_body_size 4M;
}
I'm fairly new to Nginx, and I'm working on converting an .htaccess file into something nginx can make sense of. Everything's working well (mostly) - I can pull up the homepage just fine. The problem is when I get to a post page.
Think WordPress-style URLs, like:
http://www.example.com/12/post-title-in-slug-form
Here, 12 is the post ID, and the string after it is obviously the post slug. I'm trying to parse those as two separate arguments (id & slug) and pass them into index.php, as I was successfully doing in Apache. I'm getting a 404 page, though, and have confirmed it is because of the rewrite rule. Here's what the entire config file looks like, with only the website name changed (for privacy):
user www-data;
worker_processes 4;
pid /run/nginx.pid;
events {
worker_connections 768;
# multi_accept on;
}
http {
##
# Basic Settings
##
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
# server_tokens off;
# server_names_hash_bucket_size 64;
# server_name_in_redirect off;
include /etc/nginx/mime.types;
default_type application/octet-stream;
##
# Logging Settings
##
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
##
# Gzip Settings
##
gzip on;
gzip_disable "msie6";
# gzip_vary on;
# gzip_proxied any;
# gzip_comp_level 6;
# gzip_buffers 16 8k;
# gzip_http_version 1.1;
# gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
##
# nginx-naxsi config
##
# Uncomment it if you installed nginx-naxsi
##
#include /etc/nginx/naxsi_core.rules;
##
# nginx-passenger config
##
# Uncomment it if you installed nginx-passenger
##
#passenger_root /usr;
#passenger_ruby /usr/bin/ruby;
##
# Virtual Host Configs
##
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
server {
listen 80;
server_name example.com;
access_log off;
error_log on;
# deny access to .XYZ files
location ~ /\. {
return 403;
}
location ~ sitemap\.xml {
return 301 http://example.com/sitemap.php;
}
location ~ .php$ {
# Here you have to decide how to handle php
# Generic example configs below
# Uncomment and fix up one of the two options
# Option 1: Use FastCGI
fastcgi_index index.php;
include fastcgi_params;
fastcgi_pass unix:/var/run/php5-fpm.sock;
}
location / {
try_files $uri $uri/ @router;
}
location @router {
rewrite ^/([0-9]+)/?(.*)?/?$ /index.php?id=$1&slug=$2 last;
}
}
}
Please let me know if you can spot what's throwing it off when it comes to parsing the individual posts into ids and slugs and passing them. Thanks!
You should add a / at the beginning and a / before index.php, like this:
rewrite ^/([0-9]+)/?(.*)?/?$ /index.php?id=$1&slug=$2 last;
Note that I also used $1 and $2. With that pattern, a request for /12/post-title-in-slug-form is rewritten to /index.php?id=12&slug=post-title-in-slug-form.
If what you posted is indeed the COMPLETE config file, then the setup is missing something to handle PHP files, as the regexp looks fine.
I actually think the config you posted cannot be the full one, or there is something more fundamental going on: that config should have thrown errors and failed to load, and since you mentioned that your PHP was loading fine, it cannot be the posted config that is serving your website.
A better config is attached below:
FYI, try_files ABC XYZ last; is not valid syntax, and you need at least two options in try_files. Anyway, I fixed those in the posted config as well.
server {
listen 80;
server_name example.com;
access_log off;
error_log on;
# deny access to .XYZ files
location ~ /\. {
return 403;
}
location ~ sitemap\.xml {
return 301 http://example.com/sitemap.php;
}
location ~ .php$ {
# Here you have to decide how to handle php
# Generic example configs below
# Uncomment and fix up one of the two options
# Option 1: Use FastCGI
#fastcgi_index index.php;
#include fastcgi_params;
#fastcgi_pass unix:/var/run/php5-fpm.sock;
# Option 2: Pass to Apache
# Proxy_pass localhost:APACHE_PORT
}
location / {
try_files $uri $uri/ @router;
}
location @router {
rewrite ^/([0-9]+)/?(.*)/?$ /index.php?id=$1&slug=$2 last;
}
}
You will need to fix the PHP handling bit and choose which setup you want to implement.
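For example, option 1 filled in might look like this (a sketch only; the php5-fpm socket path is taken from the original question and may differ on your system):
location ~ \.php$ {
    try_files $uri =404;
    fastcgi_index index.php;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass unix:/var/run/php5-fpm.sock;
}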
I think though that you need to verify that you only have one instance of nginx running and that it is what is serving your site.
I am using the WT-NMP stack, which combines PHP, MySQL, and an nginx server.
worker_processes 1;
events {
worker_connections 1024;
}
http {
server_tokens off;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
ssi off;
#Timeouts
client_body_timeout 5;
client_header_timeout 5;
keepalive_timeout 25 25;
send_timeout 15s;
resolver_timeout 3s;
#Directive sets timeout period for connection with FastCGI-server. It should be noted that this value can't exceed 75 seconds.
fastcgi_connect_timeout 5s;
#Directive sets the amount of time for upstream to wait for a fastcgi process to send data. Change this directive if you have long running fastcgi processes that do not produce output until they have finished processing. If you are seeing an upstream timed out error in the error log, then increase this parameter to something more appropriate.
fastcgi_read_timeout 40s;
#Directive specifies the request timeout to the server. The timeout is calculated between two write operations, not for the whole request. If no data has been written during this period, the server closes the connection.
fastcgi_send_timeout 15s;
fastcgi_buffers 8 32k;
fastcgi_buffer_size 32k;
#fastcgi_busy_buffers_size 256k;
#fastcgi_temp_file_write_size 256k;
open_file_cache off;
#php max upload limit cannot be larger than this
client_max_body_size 8m;
####client_body_buffer_size 1K;
client_header_buffer_size 1k;
large_client_header_buffers 2 1k;
types_hash_max_size 2048;
include nginx.mimetypes.conf;
default_type text/html;
##
# Logging Settings
##
access_log "c:/wt-nmp/log/nginx_access.log";
error_log "c:/wt-nmp/log/nginx_error.log" warn; #debug or warn
log_not_found on; #enables or disables messages in error_log about files not found on disk.
rewrite_log off;
#Leave this off
fastcgi_intercept_errors off;
gzip off;
index index.php index.htm index.html;
server {
listen 127.0.0.1:80 default_server;
listen 127.0.0.1:8080;
#listen [::1]:80 ipv6only=on;
server_name mylocalhost;
root "c:/wt-nmp/www/projectname";
autoindex on;
error_log "c:/wt-nmp/log/nginx_error.log";
allow 127.0.0.1;
#allow ::1;
deny all;
# deny access to .htaccess files, if Apache's document root
# concurs with nginx's one
#
location ~ /\.ht {
deny all;
}
#tools are now served from wt-nmp/include/tools/
location ~ ^/tools/.*\.php$ {
root "c:/wt-nmp/include";
try_files $uri =404;
include nginx.fastcgi.conf;
fastcgi_pass php_farm;
}
location ~ ^/tools/ {
root "c:/wt-nmp/include";
}
location ~ \.php$ {
try_files $uri =404;
fastcgi_pass php_farm;
include nginx.fastcgi.conf;
}
}
include domains.d/*.conf;
include nginx.phpfarm.conf;
}
When I access the site via "mylocalhost", it works fine; but when I fire an event and call an AJAX method, it gives a "page not found" message.
The WT-NMP (portable Nginx MySQL PHP development stack for Windows) README.md states:
Starting only one PHP-CGI server with wt-nmp.exe --phpCgiServers=1 will result in slow ajax requests since Nginx will not be able to process PHP scripts simultaneous.
So, make sure you use the latest version of WT-NMP and choose at least 3 PHP-CGI servers.
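For example, based on the flag quoted above (exact invocation may vary between WT-NMP versions), starting the stack with three PHP-CGI workers would look like:
wt-nmp.exe --phpCgiServers=3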
I recently rented a vServer and now I am playing around with nginx, FastCGI cache, and my WordPress setup. It's running pretty fast right now, but in every speed test my JS and CSS files come up as a problem. Is there some kind of minification built into nginx that I could use? Also, my picture galleries contain a lot of pictures; is there something else I could use to increase performance? (All pictures are already stripped down to a minimal file size.)
This is my nginx.conf:
user www-data;
worker_processes 1;
pid /var/run/nginx.pid;
events {
worker_connections 768;
# multi_accept on;
}
http {
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 10;
types_hash_max_size 2048;
include /etc/nginx/mime.types;
default_type application/octet-stream;
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
gzip on;
gzip_disable "msie6";
gzip_vary on;
gzip_proxied any;
gzip_comp_level 6;
gzip_buffers 16 8k;
gzip_http_version 1.1;
gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
and my virtual host config:
fastcgi_cache_path /etc/nginx/cache levels=1:2 keys_zone=MYAPP:100m inactive=60m;
fastcgi_cache_key "$scheme$request_method$host$request_uri";
server {
listen 80;
root /var/www/blog;
index index.php;
server_name IPADRESS;
location / {
try_files $uri $uri/ /index.php?$args;
#Cache everything by default
set $no_cache 0;
#Don't cache POST requests
if ($request_method = POST)
{
set $no_cache 1;
}
#Don't cache if the URL contains a query string
if ($query_string != "")
{
set $no_cache 1;
}
#Don't cache the following URLs
if ($request_uri ~* "/(administrator/|login.php)")
{
set $no_cache 1;
}
#Don't cache if there is a cookie called PHPSESSID
if ($http_cookie = "PHPSESSID")
{
set $no_cache 1;
}
}
# pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
location ~ .php$ {
try_files $uri =404;
fastcgi_pass unix:/var/run/php5-fpm.sock;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
fastcgi_cache MYAPP;
fastcgi_cache_valid 200 60m;
}
location ~* \.(js|css|png|jpg|jpeg|gif|ico)(\?ver=[0-9.]+)?$ {
expires 365d;
}
}
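To make the $no_cache flag above actually take effect, my understanding is that it also has to be wired into the PHP location, roughly like this (a sketch, assuming the MYAPP zone defined earlier):
location ~ \.php$ {
    # ... existing fastcgi settings from above ...
    fastcgi_cache MYAPP;
    fastcgi_cache_valid 200 60m;
    fastcgi_cache_bypass $no_cache;   # skip the cache lookup when flagged
    fastcgi_no_cache $no_cache;       # do not store the response when flagged
}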
I don't want to install a plugin just for this (keeping WordPress as simple as possible), so I am looking for the best basic setup for WordPress.
I don't know why, with nginx, the variable $_SERVER['REMOTE_ADDR'] doesn't echo an IP. On every other web server it works as it should.
Any suggestions?
I suspect it has something to do with the interface between nginx (the webserver) and fastcgi, which is the API in which PHP is running.
According to the info you provided, the Server API is FPM/FastCGI.
I suggest you take a hard look at the details of how PHP is installed with nginx (you have not provided any).
If you do not require the performance of nginx, then you may find a pragmatic solution is to just use apache. I use nginx as a reverse proxy in front of apache, but that introduces some additional issues with getting the REMOTE_ADDR passed to PHP (notably, mod_rpaf).
Good luck!
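To illustrate the reverse-proxy variant (a sketch of the nginx side only; the Apache port is an example, and mod_rpaf on the Apache side then needs to trust these headers):
location / {
    proxy_pass http://127.0.0.1:8080;                             # Apache behind nginx (example port)
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;                      # real client address for mod_rpaf
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}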
@Michael, here is a project I maintain which provides the proper fastcgi parameters for interfacing Nginx with FPM. Hope it helps.
fastcgi_params on GitHub
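For reference, the standard fastcgi_params includes lines like these; if REMOTE_ADDR comes up empty in PHP, check that they are present and that the file is actually included in your PHP location (excerpt only):
fastcgi_param  REMOTE_ADDR        $remote_addr;
fastcgi_param  REMOTE_PORT        $remote_port;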
This is from my nginx conf file:
user http;
worker_processes 1;
error_log /var/log/nginx/error.log;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
# multi_accept on;
}
http {
include mime.types;
default_type application/octet-stream;
access_log /var/log/nginx/access.log;
sendfile on;
#tcp_nopush on;
#keepalive_timeout 0;
keepalive_timeout 65;
tcp_nodelay on;
gzip on;
gzip_disable "MSIE [1-6]\.(?!.*SV1)";
server {
listen 80;
server_name www.fireangel.ro fireangel.ro;
access_log /var/log/nginx/localhost.access.log;
# Default location
location / {
root /var/www/html/fireangel.ro/public_html;
index index.php;
}
# Images and static content are treated differently
location ~* ^.+.(jpg|jpeg|gif|css|png|js|ico|xml)$ {
access_log off;
expires 30d;
root /var/www/html/fireangel.ro/public_html;
}
# Parse all .php files in the /srv/http directory
location ~ .php$ {
fastcgi_split_path_info ^(.+\.php)(.*)$;
fastcgi_pass backend;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME /var/www/html/fireangel.ro/public_html$fastcgi_script_name;
include fastcgi_params;
fastcgi_intercept_errors on;
fastcgi_ignore_client_abort off;
fastcgi_connect_timeout 60;
fastcgi_send_timeout 180;
fastcgi_read_timeout 180;
fastcgi_buffer_size 128k;
fastcgi_buffers 4 256k;
fastcgi_busy_buffers_size 256k;
fastcgi_temp_file_write_size 256k;
}
# Disable viewing .htaccess & .htpasswd
location ~ /\.ht {
deny all;
}
}
upstream backend {
server 127.0.0.1:9000;
}
}