Running CGI scripts on NGINX - php

I know this question has already been asked, but there were no clear answers on that question (How to run CGI scripts on Nginx) that would help me. In my case, I have installed NGINX from source and have fixed my config file so that it serves .php files through FastCGI successfully. However, I am having quite a few issues when it comes to running CGI scripts. I know I have FastCGI installed and set up, so am I supposed to be naming these .cgi files .fcgi instead? Or am I supposed to signal somehow to the .cgi file that it is working with FastCGI? I tried toying around with the nginx.conf file to include .fcgi, and it looks something like this right now:
worker_processes 2;
pid logs/nginx.pid;
error_log syslog:server=unix:/dev/log,facility=local7,tag=nginx,severity=error;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;
    access_log syslog:server=unix:/dev/log,facility=local7,tag=nginx,severity=info combined;
    sendfile on;
    keepalive_timeout 65;

    server {
        listen 80;
        server_name localhost;
        root /home/parallels/Downloads/user_name/nginx/html;

        location / {
            index index.html index.htm new.html;
            autoindex on;
        }

        location ~ \.php$ {
            fastcgi_split_path_info ^(.+\.php)(/.+)$;
            fastcgi_pass 127.0.0.1:9000;
            #fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_param HTTPS off;
            include fastcgi_params;
            fastcgi_buffer_size 16k;
            fastcgi_buffers 4 16k;
        }

        location ~ \.(pl|fcgi)$ {
            try_files $uri =404;
            gzip off;
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index index.pl;
            #fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
        }

        #error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }
}
However, whenever I run a .fcgi script such as
#!/usr/bin/perl
print "Content-type: text/html\n\n";
print "<html><body>Hello, world.</body></html>";
I am greeted with a screen that shows the source code of the script instead of its output. I'm pretty sure this is not normal; I should just be seeing Hello, world. on my screen, not all the code as well. Please let me know if my thinking is actually wrong and this is the expected output.
Additionally, on a side note, if I had this as my files.fcgi file:
#!/usr/bin/perl
my $output = `ls`;
print $output;
Running something like this from the command line returns a list of all files in the directory the .fcgi file is located in. Is there any way I could display this in the web browser? Looking at examples online, it seems people have been able to just open file.fcgi in their browser and see the output of the shell command (which led me to believe I'm doing something wrong, because when I run it on the command line it lists all the files, but in the browser it just prints out my code). Does anyone know what I could possibly be doing wrong, assuming I am doing something wrong? If you need any more information, please let me know!
Thank you, and have a good day!

nginx does not support CGI scripts, and cannot launch FastCGI scripts on its own — it can only connect to FastCGI processes that are already running.
If you want to run CGI scripts, use a web server that supports them, such as Apache. While there are some workarounds, they will just confuse you at this stage.

Search for "fastcgi wrapper" to find various programs designed to bridge the gap between "modern" webservers that don't like spawning processes to handle requests and traditional CGI programs.
[nginx       ]    one socket     [wrapper     ]  ,-- forks a single subprocess
[runs        ] == connection ==> [also runs   ] ---- running your CGI program
[continuously]   per request     [continuously]  `-- for each request
While the standard CGI API is "for each request, the server calls your program with env vars describing the request and with the body (if any) on stdin, and your program is expected to emit a response on stdout and exit", the FastCGI API expects your program to be running constantly and to handle requests handed to it on a socket -- in that way, it's really more like a server. See http://en.wikipedia.org/wiki/FastCGI
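To make the difference concrete, here is a minimal sketch of what the same hello-world looks like as a persistent FastCGI program, assuming the FCGI module from CPAN is installed. This illustrates the shape of the API, not a drop-in fix for the setup above:

#!/usr/bin/perl
use strict;
use warnings;
use FCGI;  # CPAN module; assumed to be installed

# One Request object is reused: the process stays alive, and
# Accept() blocks until the web server hands it a new request.
my $request = FCGI::Request();
while ($request->Accept() >= 0) {
    print "Content-type: text/html\r\n\r\n";
    print "<html><body>Hello, world.</body></html>";
}

The key design difference: the program owns the loop and serves many requests from one process, which is exactly why nginx can only connect to it rather than launch it per request.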

Related

Nginx fastcgi reverse proxy for a PHP API, without need to serve static files

I'm trying to set up an nginx reverse proxy, deployed as a networked image in ECS. Behind it is a PHP API, running in a separate php-fpm container listening on port 9000. The nginx docs and examples I can find seem to suggest that it has to be configured with a root path within the server block in the config.
The following is an example (from here) of the nginx config that seems to be commonly used in this scenario:
server {
    listen 80;
    root /var/www/html/public; # is this really needed if proxying all requests to php?
    server_name _;
    index index.php;

    location / {
        try_files $uri $uri/ /index.php$is_args$args;
    }

    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass php-fpm:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
    }
}
Looking at it from a performance perspective: when a request comes in to an API endpoint such as /api/foobar, in this particular scenario there is no point in nginx checking the filesystem for an index file inside that folder, since we know for sure there is no such folder. Ideally it should bypass this unnecessary step and simply proxy the request straight to the container running php-fpm, letting it deal with route matching and 404s if an unexpected route is requested.
So, my question is: given a requirement to only serve API requests via a PHP application with "virtual routes" served up via a single index.php entry point (such as Slim), with no need to serve any static files from a "root" path, is there a way to configure nginx to just proxy all incoming requests using fastcgi_pass?
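For what it's worth, a minimal sketch of that idea looks like the following. The hard-coded script path is an assumption (it must be the front controller's path as seen inside the php-fpm container), and because there is no try_files, nginx never touches its own filesystem:

server {
    listen 80;
    server_name _;

    location / {
        include fastcgi_params;
        fastcgi_pass php-fpm:9000;
        # Always hand the request to the single entry point; the path
        # below is assumed and must exist inside the php-fpm container.
        fastcgi_param SCRIPT_FILENAME /var/www/html/public/index.php;
        fastcgi_param SCRIPT_NAME /index.php;
    }
}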

Run multiple nginx on one dedicated server ubuntu

Is it possible to run multiple NGINX instances on a single dedicated server?
I have a dedicated server with 256 GB of RAM, and I am running multiple PHP scripts on it, but it hangs because of the memory used by PHP.
When I check
free -m
it's not even using 1% of memory.
So I am guessing it has something to do with NGINX.
Can I install multiple NGINX instances on this server and use them like
5.5.5.5:8080, 5.5.5.5:8081, 5.5.5.5:8082
I have already allocated 20 GB of memory to PHP, but it's still not working properly.
Reason: NGINX gives a 504 Gateway Time-out.
Either PHP or NGINX is misconfigured
You may run multiple instances of nginx on the same server, provided that some conditions are met. But this is not the solution you should be looking for (and it may not solve your problem at all).
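For completeness, a minimal sketch of what a second instance needs; all paths and the port here are assumptions, and every pid file, log path, and listen port must differ from the first instance's:

# /etc/nginx/nginx-second.conf -- assumed path; start it with:
#   nginx -c /etc/nginx/nginx-second.conf
pid /var/run/nginx-second.pid;              # must not collide with the first instance
error_log /var/log/nginx/second-error.log;

events {
    worker_connections 1024;
}

http {
    server {
        listen 8081;                        # the first instance keeps port 80
        root /var/www/second;
    }
}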
I have my Ubuntu / PHP / nginx server set up this way (it actually also runs some Node.js servers in parallel). Here is a configuration example which works fine on an AWS EC2 medium instance (m3).
upstream xxx {
    # server unix:/var/run/php5-fpm.sock;
    server 127.0.0.1:9000 max_fails=0 fail_timeout=10s weight=1;
    ip_hash;
    keepalive 512;
}

server {
    listen 80;
    listen 8080;
    listen 443 ssl;
    #listen [::]:80 ipv6only=on;
    server_name xxx.mydomain.io yyy.mydomain.io;

    if ( $http_x_forwarded_proto = 'http' ) {
        return 301 https://$server_name$request_uri;
    }

    root /home/ubuntu/www/xxxroot;
    index index.php;

    location / {
        try_files $uri $uri/ /index.php;
    }

    location ~ ^/(status|ping)$ {
        access_log off;
        allow 127.0.0.1;
        #allow 1.2.3.4; # your IP
        #deny all;
        include fastcgi_params;
        fastcgi_pass 127.0.0.1:9000;
    }

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass xxx; # the upstream defined above
        #fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_index index.php;
        #fastcgi_param SCRIPT_FILENAME /xxxroot/$fastcgi_script_name;
        fastcgi_param SCRIPT_FILENAME $request_filename;
        #fastcgi_param DOCUMENT_ROOT /home/ubuntu/www/xxxroot;
        # send bad requests to 404
        #fastcgi_intercept_errors on;
        include fastcgi_params;
    }

    location ~ /\.ht {
        deny all;
    }
}
Hope it helps.
I think you are running into a timeout; your PHP scripts seem to run for too long.
Check the following:
max_execution_time in your php.ini
request_terminate_timeout in the www.conf of your PHP-FPM configuration
fastcgi_read_timeout in the http or location section of your nginx configuration
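For example, to allow scripts up to five minutes (300 seconds is an assumed value; tune it to your workload), the three settings would look roughly like this:

; php.ini
max_execution_time = 300

; www.conf (PHP-FPM pool configuration)
request_terminate_timeout = 300

# nginx, inside the http block or the location ~ \.php$ block
fastcgi_read_timeout 300;

All three limits apply at once, so the request is killed by whichever one is lowest.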
Nginx is designed more to be used as a reverse proxy or load balancer than to control application logic and run PHP scripts. Running multiple instances of nginx that each execute PHP isn't really playing to the server application's strengths. As an alternative, I'd recommend using nginx to proxy to one or more Apache instances, which are better suited to executing heavy PHP scripts. http://kbeezie.com/apache-with-nginx/ contains information on getting Apache and nginx to play nicely together.
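A minimal sketch of that arrangement, assuming Apache is listening on 127.0.0.1:8080 (the port and headers here are illustrative):

location / {
    proxy_pass http://127.0.0.1:8080;        # Apache backend, assumed port
    proxy_set_header Host $host;             # preserve the original Host header
    proxy_set_header X-Real-IP $remote_addr; # pass the client IP to Apache logs
}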

magento not able to login in admin panel

I am new to Magento. When I try to log in to the admin panel, it gives me the error below. It works when I turn session auto-start off, but when I do that, my other application on the server stops working. I am using Magento version 1.9. I should add that I am running nginx, not Apache.
Fatal error: Mage_Admin_Model_Observer::actionPreDispatchAdmin(): The script tried to execute a method or access a property of an incomplete object. Please ensure that the class definition "Mage_Admin_Model_User" of the object you are trying to operate on was loaded before unserialize() gets called or provide a __autoload() function to load the class definition in /var/www/html/magento/magento/app/code/core/Mage/Admin/Model/Observer.php on line 62
OK, so for your information, nginx does not parse .htaccess at all, and Magento relies heavily on .htaccess for security.
Before even considering your problem, please know that if your server has anything other than local access, you are at risk: the app/etc/local.xml file is accessible to everyone, so you are giving the world your database access.
Please read this document completely: http://info.magento.com/rs/magentocommerce/images/MagentoECG-PoweringMagentowithNgnixandPHP-FPM.pdf where you can find a basic nginx configuration for Magento:
server {
    listen 80 default;
    server_name magento.lan www.magento.lan; # like ServerName in Apache
    root /var/www/magento; # document root, path to directory with files
    index index.html index.php;
    autoindex off; # we don't want users to see files in directories

    location ~ (^/(app/|includes/|lib/|/pkginfo/|var/|report/config.xml)|/\.svn/|/\.git/|/.hta.+) {
        deny all; # ensure sensitive files are not accessible
    }

    location / {
        try_files $uri $uri/ /index.php?$args; # make index.php handle requests for /
        access_log off; # do not log access to static files
        expires max; # cache static files aggressively
    }

    location ~* \.(jpeg|jpg|gif|png|css|js|ico|swf)$ {
        try_files $uri $uri/ @proxy; # look for static files in root directory and
                                     # ask backend if not successful
        expires max;
        access_log off;
    }

    location @proxy {
        fastcgi_pass fpm_backend; # proxy everything from this location to backend
    }

    location ~ \.php$ {
        try_files $uri =404; # if reference to php executable is invalid return 404
        expires off; # no need to cache php executable files
        fastcgi_read_timeout 600;
        fastcgi_pass fpm_backend; # proxy all requests for dynamic content to
                                  # backend configured in upstream.conf
        fastcgi_keep_conn on; # use persistent connections to backend
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root${fastcgi_script_name};
        fastcgi_param MAGE_RUN_CODE default; # store code is defined in
                                             # Administration > Configuration > Manage Stores
        fastcgi_param MAGE_RUN_TYPE store;
    }
}
Then, when and only when access to the app/etc/local.xml file is denied, please consider adding the nginx tag to your question, so a user with more nginx knowledge can maybe help you further than I can (since this is more a sysadmin job than a "coder" job like mine).
All I can say is: it looks like you should add fastcgi_param PHP_VALUE "session.auto_start=0"; under the section
location ~ \.php$ {
    fastcgi_param PHP_VALUE "session.auto_start=0";
    # ... more comes here, but I'm shortening it to the specific problem
}
That should do the trick.
Can you please clear your cache, restart mysqld, and clear your browser cache as well?
Can you please share your website link?
Magento will not work with session.auto_start enabled, because some actions would take place before the session starts.
A workaround, if you really don't want to disable it for your other app, is to edit the .htaccess of your Magento installation and add php_flag session.auto_start 0 in it.

Vagrant+Ubuntu 14.04+Nginx+HHVM = slow + crashing

As per my last question, I'm trying to speed up Laravel by running it under HHVM.
This required me to update my server to 64-bit, so I'm running Trusty64 now. I installed HHVM and Nginx via deb packages. I'm not entirely sure my nginx configuration is right; I scraped this off the net:
server {
    listen 80 default_server;
    root /vagrant/public;
    index index.php index.html index.htm;
    server_name localhost;

    access_log /var/log/nginx/localhost.laravel-access.log;
    error_log /var/log/nginx/localhost.laravel-error.log error;

    charset utf-8;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location = /favicon.ico { log_not_found off; access_log off; }
    location = /robots.txt { log_not_found off; access_log off; }

    error_page 404 /index.php;

    include /etc/nginx/hhvm.conf; # The HHVM magic here
}
And my site does load the first few times I hit it. However, it now takes more than twice as long to load as with the built-in PHP server. After several refreshes, the page stops loading altogether: nginx gives a 504 Gateway Timeout, I can no longer SSH in to my server, and Vagrant takes several minutes just to shut down. Whatever it's doing, it's completely killing my server.
I heard HHVM uses some kind of JIT and requires warming up, then kicks in after several loads? Could this be what's destroying my server? How do I fix that?
Post update: I must eat my words!!!!
I moved the Laravel code from a shared VirtualBox folder to a non-shared directory, and now HHVM loads the Laravel welcome screen in less than 10 ms once the JIT kicks in.
There is a startup config setting that indicates the number of requests needed before the HHVM JIT kicks in.
Here's the ini file I use for my HHVM:
pid = /var/run/hhvm/pid
hhvm.server.file_socket=/var/run/hhvm/hhvm.sock
hhvm.server.type = fastcgi
hhvm.server.default_document = index.php
hhvm.log.level = Warning
hhvm.log.always_log_unhandled_exceptions = true
hhvm.log.runtime_error_reporting_level = 8191
hhvm.log.use_log_file = true
hhvm.log.file = /var/log/hhvm/error.log
hhvm.repo.central.path = /var/run/hhvm/hhvm.hhbc
hhvm.mysql.typed_results = false
hhvm.eval.jit_warmup_requests = 1
The jit_warmup_requests = 1 setting indicates one request before optimization, but you can set it to 0 for the JIT to kick in immediately. I think if it's not specified, the default is something like 10 requests.
Regardless, I have the same setup as you: nginx, HHVM 3.0.1, Laravel, Ubuntu 14, on a VirtualBox image using a shared folder. I use the same shared folder with a second image running PHP-FPM 5.5. PHP 5.5 loads the Laravel 'you have arrived' page in 50 ms or so, while the HHVM version is more on the order of 250 ms -- much less performance than expected. I'm going to test running the code in a non-shared directory, but I doubt this is the issue.
I suspect it has to do with the amount of code that must be evaluated at runtime vs. compile time. I know that if I run code with lots of magic methods and variable variables, HHVM doesn't exactly shine. Laravel has some funky autoloading going on to achieve its hiding of namespaces and such, and this might have some impact, but I'd need to go deeper to take a strong stand on this.
I have this in my nginx config file to pass scripts to HHVM (note I use sockets, not TCP, and don't use Hack yet):
location ~ \.php$ {
    fastcgi_pass unix:/var/run/hhvm/hhvm.sock;
    fastcgi_index index.php;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    include fastcgi_params;
}

Nginx and FastCGI downloads PHP files instead of processing them

I'm running on Windows 7 (64-bit), with PHP 5.4.12 and Nginx 1.5.8.
I have read many tutorials on setting this up and on troubleshooting this issue, which is that when I request a PHP file from my localhost, the browser downloads it as a file instead of displaying the rendered page. Below is my nginx.conf file:
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    #tcp_nopush on;

    #keepalive_timeout 0;
    keepalive_timeout 65;

    #gzip on;

    server {
        listen 8081;
        server_name localhost;
        access_log C:/nginx/logs/access.log;
        error_log C:/nginx/logs/error.log;
        root C:/nginx/html;
        fastcgi_param REDIRECT_STATUS 200;

        location ~ \.php$ {
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
        }
    }
}
I'm running nginx.exe manually through the command prompt.
I've also tried starting php-cgi.exe manually first at a separate command prompt, like so:
C:\php5.4.12\php-cgi.exe -b 127.0.0.1:9000
The php file I'm requesting is within C:/nginx/html, and I'm requesting it as:
http://localhost:8081/info.php
And it downloads it. The contents of this PHP file are:
<?php
phpinfo();
?>
How can I get my PHP scripts to run in this environment? Does anyone have experience with this?
Try changing default_type application/octet-stream; to default_type text/html;.
Maybe your PHP script does not set a content MIME type, so the default comes from nginx.
It was http2 enabled on port 80 for me too. Disabling it solved the issue.
Try placing a * here, which makes the pattern match case-insensitive:
location ~* \.php$ {
There is something wrong with your paths: nginx does not know that the path accessed via the URL is one it should hand to fastcgi_pass, so it serves the file for download instead.
Check your error log at:
C:/nginx/logs/error.log
Do you have a C:/nginx/html/info.php?
I found this happens if you have the http2 directive on the port 80 listener.
http2 works only over https, so removing http2 from the plain-HTTP listener should solve your issue.
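A sketch of the distinction (the https listener is illustrative and assumes certificates are already configured):

listen 80;              # plain HTTP: no http2 flag here
listen 443 ssl http2;   # http2 is only meaningful together with ssl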
