I've been fiddling with this for quite some time now, and I can't really get a grasp on how nginx + HHVM maps my requests.
Basically, I have an API on api.example.com that I would like to call with the header Accept: application/vnd.com.example.api.v1+json for version 1 and application/vnd.com.example.api.v2+json for version 2. The API itself is a PHP application that I'll be running on a fresh install of HHVM. All requests will be handled by index.php.
The folder structure looks like this:
api.example.com/
    index.php   (content: fail)
    v1/
        index.php   (content: v1)
    v2/
        index.php   (content: v2)
Whenever I use my REST client to access api.example.com/test with the v1 Accept header, I get the v1 response back. When I use the Accept header for v2, it shows v2. So everything is correct. If I don't supply any Accept header, I get redirected to example.com.
The NGINX configuration looks like this
map $http_accept $api_version {
default 0;
"application/vnd.com.example.api.v1+json" 1;
"application/vnd.com.example.api.v2+json" 2;
}
server {
# listen to :80 is already implied.
# root directory
root /var/www/api.example.com/;
index index.html;
server_name api.example.com;
include hhvm.conf;
location / {
if ($api_version = 0) {
# redirect to example.com if applicable
# Accept-header is missing
return 307 http://example.com;
}
try_files /v$api_version/$uri /v$api_version/$uri/ /v$api_version/index.php?$args;
}
# Prevent access to hidden files
location ~ /\. {
deny all;
}
}
The hhvm.conf file is included below. It is derived from, and nearly identical to, the default hhvm.conf shipped with HHVM.
location ~ \.(hh|php)$ {
fastcgi_keep_conn on;
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
}
This is my problem
If I try to access api.example.com/index.php I get the "fail" response, even though I am expecting v1 for the v1 Accept header and v2 for the v2 Accept header. Everything else seems to work fine; even index.html maps correctly to its subdirectory.
What I've tried
I've tried using
root /var/www/api.example.com/v$api_version/;
in the configuration, but that only gives me 404 errors from nginx. I believe what I'm looking for is actually changing the root path, but I haven't got my head around how to make it work. I've also tried removing the index parameters in both the nginx configuration and hhvm.conf, but that does not seem to help. I've tried a ton of different configurations, with at least 20-30 Stack Overflow tabs open to solve this, but I'm clearly missing something (probably rather simple) here. I've also tried moving the hhvm include inside the location block.
The setup
Debian 7,
nginx/1.2.1,
hhvm 3.2.0
Oh, and it's my first time actually asking a question here. :) hope I've formatted everything correctly.
What are the contents of hhvm.conf?
I'm assuming FastCGI is being used to proxy requests to the HHVM server, so your hhvm.conf might look something like this:
root /var/www/api.example.com;
index index.php;
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
Which should be wrapped by a location directive.
So based on your config, what I think is happening is that PHP scripts are matching the HHVM location directive, which is fine, but in doing so your try_files setting, which seems to be in charge of mapping the API version to the filesystem, is not being processed.
Without your hhvm.conf it's hard to say what to do next, but I suspect you need to focus on the root value inside the location directive that contains the HHVM fastcgi settings.
UPDATE
So I have the concept of an API version derived from the header mapping to a filesystem working for me on nginx + HHVM. Here is my nginx config for HHVM:
location / {
root /var/www/html/hh/v$api_version;
index index.php;
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
}
Really, though, my location / is no better than yours; in fact, yours is probably better, since you probably don't want HHVM serving static files. But this is working for me: combined with the map from your original post, when I curl -H 'Accept: application/vnd.com.example.api.v2+json' localhost, I get the expected response from the index.php file inside the version's directory.
I think what you need to do is update your HHVM nginx config with a dynamically generated root declaration like mine above. If you're still getting a 404, try this: in /etc/init.d/hhvm, find the ADDITIONAL_ARGS= variable and set it to ADDITIONAL_ARGS="-vServer.FixPathInfo=true". I'm not sure exactly what it does, but I've come across it before, and it fixed a weird 404 problem I had in the past (where the 404 was coming from HHVM, not Apache/nginx).
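Putting the pieces together, here is a rough sketch of how the question's server block might look with a dynamic root; this is an untested outline that assumes the map block from the original post is already in place, and the paths are taken from the question:

```nginx
server {
    server_name api.example.com;

    location / {
        # No recognized Accept header: send the client away
        if ($api_version = 0) {
            return 307 http://example.com;
        }
        # Root is chosen per request from the mapped version
        root /var/www/api.example.com/v$api_version;
        index index.php;
        try_files $uri $uri/ /index.php?$args;
    }

    location ~ \.(hh|php)$ {
        # Repeat the dynamic root so SCRIPT_FILENAME resolves correctly
        root /var/www/api.example.com/v$api_version;
        fastcgi_keep_conn on;
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}
```

The key point is that the PHP location block needs the same versioned root as location /, otherwise requests that reach FastCGI fall back to the top-level index.php.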
Related
I've looked at dozens of other questions and references on the web - and by all my calculations, my setup should work, but it doesn't.
I have an nginx installation with php-fpm. If I access a .php file directly, it runs correctly and I get the correct results. I have this in my config file:
location ~ \.php$ {
try_files $uri =404;
fastcgi_pass unix:/var/run/php5-fpm.sock;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}
Now, I want to set up my web app so that /somedir/file automatically executes /somedir/file.php while still displaying /somedir/file in the browser's address bar. So I modified my config to contain the following:
location / {
try_files $uri $uri/ $uri.php?$query_string;
}
This kind of works; that is, the server does access the .php file. Yet, instead of executing it through the existing location ~ \.php$ block above, it simply sends the PHP source code to the browser as a download. If I append .php manually to the requested URL, the PHP is executed.
It feels as if, once the server matches try_files to $uri.php, it does not do another pass over the locations to see what it needs to do with PHP files. I tried putting the PHP block above and below location /, but it makes no difference.
How can I get the php to be executed?
You want file.php to be treated as an index file, so that domain.com/dir/ serves domain.com/dir/file.php?
Why not just rename it to index.php?
You can do this by adding the index directive to your location block:
index index.html file.php index.php;
If not, you might want to look into writing an nginx rewrite rule to map domain.com/dir/ to domain.com/dir/file.php (but you would have to do it for each dir where you need it to work).
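If renaming isn't an option, a minimal sketch of that rewrite approach could look like this (the directory and file names are just examples, repeated per directory as needed):

```nginx
# Map /somedir/ to /somedir/file.php
location = /somedir/ {
    rewrite ^ /somedir/file.php last;
}
```

The last flag makes nginx re-run location matching on the rewritten URI, so it is picked up by the location ~ \.php$ block and executed rather than served as a static file.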
I'm working on building a website using MAMP + nginx + Phalcon. I have everything set up and Phalcon is working, but it only sees the IndexController and the indexAction. If I add more controllers and try to navigate to them, I get a 404 error. Also, if I add another action to the IndexController, I cannot reach it and get a 404 error again. Here is my nginx config file:
server {
listen 7888 default_server;
root "/dev/testing/public";
location / {
index index.html index.php;
}
location ~ \.php$ {
try_files $uri =404;
fastcgi_pass unix:/Applications/MAMP/Library/logs/fastcgi/nginxFastCGI.sock;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
}
}
My index.php is the same as the given index in the phalcon documentation.
Any advice/insights would be greatly appreciated.
Your nginx conf should point all requests to your bootstrap file (usually index.php in your public folder), passing the _url param containing the original path requested.
That's how the Phalcon router works out of box.
The right configuration block may vary depending on your project's file structure. So you'll need to read Phalcon's Nginx Installation Notes to see which configurations are needed to your particular situation.
Check the Phalcon nginx documentation here: https://docs.phalconphp.com/en/latest/reference/nginx.html. I was able to use that kind of configuration because it passes the _url param.
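For reference, a minimal sketch along the lines of the Phalcon docs, adapted to the paths from the question (the port, root, and socket path are taken from the original config and may need adjusting):

```nginx
server {
    listen 7888;
    root "/dev/testing/public";
    index index.php;

    location / {
        # Send everything to the bootstrap, passing the original
        # path in the _url parameter that Phalcon's router reads
        try_files $uri $uri/ /index.php?_url=$uri&$args;
    }

    location ~ \.php$ {
        fastcgi_pass unix:/Applications/MAMP/Library/logs/fastcgi/nginxFastCGI.sock;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}
```

Without the _url fallback, only requests that literally map to index.php reach the router, which matches the symptom of everything except the index action returning 404.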
I'm trying to setup a dev server with nginx for two projects, one in rails and one in PHP. I want a base URL (dev.example.com) for both projects and a sub location for each one (dev.example.com/rails_proj and dev.example.com/php_proj). My nginx conf is like the following:
server {
listen 80;
server_name dev.example.com;
passenger_enabled on;
passenger_app_env development;
passenger_buffer_response off;
root /var/www/dev;
location ~ ^/rails_proj {
root /public;
passenger_base_uri /rails_proj;
passenger_app_root /var/www/dev/rails_proj;
passenger_document_root /var/www/dev/rails_proj/public;
}
location ~ ^/php_proj {
root /web;
try_files $uri /app_dev.php$is_args$args;
location ~ \.php(/|$) {
fastcgi_pass unix:/var/run/php5-fpm.sock;
fastcgi_split_path_info ^(.+\.php)(/.*)$;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}
}
}
The rails project works fine, but the PHP project gives me a "File not found" when I try to access dev.example.com/php_proj/app_dev.php, and the log says: FastCGI sent in stderr: "Primary script unknown". I found issues related to this and tried a number of fixes, but I can't come up with something that works for both projects. How can I fix this?
It might be easier to manage this as two server blocks with an extra subdomain for each. It also reduces the potential confusion introduced by multiple regex locations:
server {
    server_name rails.dev.example.com;
    return 200 "Hello from rails app.
";
}
server {
    server_name php.dev.example.com;
    return 200 "Hello from php app.
";
}
I'm sure you've moved on or figured this out by now, but for posterity's sake: the root directive in your location block overrides your server root directive. It's not relative to it. The way your conf is written, the project files would need to be located at the absolute path /web/php_proj on your web server.
I can't quite determine the way your local paths are set up, but if /var/www/dev/rails_proj/public is the directory that contains your app root, you may instead need to do something like this:
location /rails_proj {
alias /var/www/dev/rails_proj/public;
}
Using alias instead of root will strip the /rails_proj from the beginning of the request path and serve files relative to the alias path. For example, if you request http://dev.example.com/rails_proj/php/test.php, it will serve the file /var/www/dev/rails_proj/public/php/test.php
Also, I would definitely recommend changing your top level location paths from regex to standard prefix paths.
location ~ ^/php_proj is effectively identical to location /php_proj, except that using regex changes the way nginx decides which location to serve. Regex paths are more performance-intensive, act on a first-match basis rather than best match, and take priority over all prefix-path locations.
Another thing to note: using $document_root$fastcgi_script_name may not always work as expected. Especially if using the alias directive instead of root. Using the variable $request_filename instead is preferable in most cases.
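As a rough sketch of how those two suggestions combine (the paths here are assumptions based on the question, and this is untested):

```nginx
# Prefix location plus alias; /php_proj is stripped from the path
location /php_proj {
    alias /var/www/dev/php_proj/web;
    try_files $uri /php_proj/app_dev.php$is_args$args;

    location ~ \.php(/|$) {
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_split_path_info ^(.+\.php)(/.*)$;
        include fastcgi_params;
        # $request_filename resolves against the alias path, unlike
        # $document_root$fastcgi_script_name, which would point at
        # the wrong file here
        fastcgi_param SCRIPT_FILENAME $request_filename;
    }
}
```

The "Primary script unknown" error is exactly what php-fpm reports when SCRIPT_FILENAME points at a file that does not exist, which is why getting this variable right matters.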
I am quite new to Magento and I have a case where a client has a running ecommerce website built with Magento and is running on Amazon AWS EC2 server.
If I can locate the web root (/var/www/) and download everything from it (say, via FTP), I should be able to boot up my virtual machine with LAMP installed, put every file in there, and it should run, right?
So I did just that, and Magento is giving me an error. I am guessing there are configs somewhere that need to be changed in order to make it work, or even a lot of paths that need to be changed, etc. Also, databases need to be replicated. What are the typical things I should get right? The database is one, and the EC2 instance uses nginx while I use plain Apache, so I guess that won't work straight out of the box, and I probably will have to install nginx and a lot of other stuff.
But basically, if I setup the environment right, it should run. Is my assumption correct?
Thanks for answering!
Two or three config changes are usually enough to move Magento to another server.
1. For the database settings, modify the app/etc/local.xml file.
2. If your URL changed, you need to edit the database in the core_config_data table: change the values for web/unsecure/base_url and web/secure/base_url to your current base URL.
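For example, the base URL rows could be updated with a query like this (the URL is a placeholder; run it against your Magento database):

```sql
-- Replace the placeholder with your actual base URL
UPDATE core_config_data
SET value = 'http://www.example.com/'
WHERE path IN ('web/unsecure/base_url', 'web/secure/base_url');
```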
For an Apache server:
3. Sometimes you need to modify the .htaccess file, located in the root directory. Change your rewrite engine settings (if needed) to match your current root directory, like this:
RewriteBase /magento/
For an NGINX server:
If your web server is NGINX, your config should look like this:
server {
root /home/magento/web/;
index index.php;
server_name magento.example.com;
location / {
index index.html index.php;
try_files $uri $uri/ @handler;
expires 30d;
}
location ~ ^/(app|includes|lib|media/downloadable|pkginfo|report/config.xml|var)/ { internal; }
location /var/export/ { internal; }
location ~ /\. { return 404; }
location @handler { rewrite / /index.php; }
location ~* \.php/ { rewrite ^(.*\.php)/ $1 last; }
location ~* \.php$ {
if (!-e $request_filename) { rewrite / /index.php last; }
expires off;
fastcgi_pass 127.0.0.1:9000;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param MAGE_RUN_CODE default;
fastcgi_param MAGE_RUN_TYPE store;
include fastcgi_params;
}
}
Here's a detailed guide on how to move Magento to a new server: http://www.magentocommerce.com/wiki/groups/227/moving_magento_to_another_server (I used it once and it worked perfectly).
Based on your text, I assume you forgot to copy the database (it would be great if you could post the error message here). Copying only the files isn't enough; the database has to be copied too.
In the database you also have to adjust the server url. In the config file (app/etc/local.xml) you only have to update the database settings.
/edit: MySQLDumper is a tool to back up and restore MySQL databases with PHP, so you don't need phpMyAdmin (but you still need web access). Back up on the old server, restore on the new one. The database settings can be found in app/etc/local.xml.
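For reference, the database credentials in app/etc/local.xml live under the default_setup connection; a sketch with placeholder values:

```xml
<!-- app/etc/local.xml (placeholder credentials, replace with your own) -->
<default_setup>
    <connection>
        <host><![CDATA[localhost]]></host>
        <username><![CDATA[magento_user]]></username>
        <password><![CDATA[secret]]></password>
        <dbname><![CDATA[magento_db]]></dbname>
    </connection>
</default_setup>
```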
Situation
Hello, I'm confused as to PHP's expected/default behavior regarding extensionless PHP files, and/or URL requests that "go past" the actual file that (I want to) processes the request (i.e., PHP's default "fallback" actions before it resorts to completely 404-ing). Here's my situation:
My directory structure on my local server (running nginx 1.5.3 with very basic PHP 5.5.1 setup) looks like the following:
/index
/index.php
/other
/other.php
/rootdir/index
/rootdir/index.php
/rootdir/other
/rootdir/other.php
The contents of all eight files are the same:
<?php
echo $_SERVER['PHP_SELF'] . ', ' . $_SERVER['REQUEST_URI'];
?>
BUT, hitting the respective endpoint produces some strange (to me) results.
Research
GET /index.php
'/index.php, /index.php' # Makes sense...
GET /index.php/something_else
'/index.php, /index.php/something_else' # Also makes sense...
GET /index/something_else
'/index.php, /index/something_else' # Let's call this ANOMALY 1... (see below)
GET /something_else
'/index.php, /something_else' # ANOMALY 2
GET /other.php
'/other.php, /other.php' # Expected...
GET /other.php/something_else
'/index.php, /other.php/something_else' # ANOMALY 3
GET /rootdir/index.php
'/rootdir/index.php, /rootdir/index.php' # Expected...
GET /rootdir/index.php/something_else
'/index.php, /rootdir/index.php/something_else' # ANOMALY 4
GET /rootdir/other.php
'/rootdir/other.php, /rootdir/other.php' # Expected...
GET /rootdir/other.php/something_else
'/index.php, /rootdir/other.php/something_else' # ANOMALY 5
My understanding is that the server redirects to /index.php when it is unable to find what the user is looking for at the request URI; that much makes sense... what I don't understand is:
Why it will do this despite my not having a dedicated 404 page set up (I didn't tell it to try /index.php before 404-ing; I want it to display a legit, non-custom 404 page if something isn't found and/or can't be processed. I figured it should display the default server 404 page when it couldn't find something... apparently that's not always the case...?)
Why it doesn't try /rootdir/index.php when it can't find something within the /rootdir/ subdirectory.
Questions
Would somebody be able to shed some light on what PHP's logic is (or maybe it's nginx's doing; I haven't been able to figure that out yet) with regard to addresses that are not found? Why am I seeing what I am seeing? (Specifically with respect to Anomalies #4 and #5. I expected it to use /rootdir/index.php for handling its "404," or I expected a real 404 page; the fallback to /index.php was unexpected.)
As a direct corollary (corollical?) question, how can I go about simulating extensionless PHP files that will handle hits that occur "below" them (e.g. in the Anomaly #1 case; that's actually exactly what I want, though it wasn't quite what I expected) without relying on .htaccess, mod_rewriting, or redirects? Or is that a silly question? :-)
References
I'm trying to roll out a custom implementation for handling requests like /some_dir/index.php/fake_subdir and /some_other_dir/index.php/fake_subdir (i.e., different "fallback handlers") without relying on Apache, but the logic behind PHP's (or nginx's?) default fallback behavior is eluding me. These pages are primarily where this question stems from:
Pretty URLs without mod_rewrite, without .htaccess
https://stackoverflow.com/a/975343/2420847
GET /other.php/something_else
This is called PATH_INFO in Apache. As Apache scans down a URL's "directories", it will return the first file (or execute the first script) that actually is a file/script, e.g.
GET /foo/bar/baz/index.php/a/b/c/
    ^^^^------------------------- dir
        ^^^^--------------------- dir
            ^^^^----------------- dir
                ^^^^^^^^^^------- script
                          ^^^^^^^ path_info
In real terms, the real request is for
GET /foo/bar/baz/index.php
and then Apache will take the unused trailing portion of the URL and turn it into path info. In PHP, you'll have:
$_SERVER['SCRIPT_NAME'] = '/foo/bar/baz/index.php';
$_SERVER['PATH_INFO'] = '/a/b/c/';
Well, I figured out what was going on: I had a misconfigured nginx server.
Marc B's answer, though related to Apache, prompted me to check out my PATH_INFO value — lo and behold, PATH_INFO was nonexistent, even for requests like GET /other.php/something_else.
Some more searching turned up the answer. Using nginx's fastcgi_split_path_info to split the URI into a) the path to the script and b) the path that appears following the script name will fail if done after a use of the try_files directive, due to its rewriting the URI to point to a file on the disk. This obliterates any possibility of obtaining a PATH_INFO string.
As a solution, I did three things (some of these are probably redundant, but I did them all, just in case):
1. In the server's location block, use fastcgi_split_path_info before setting up PATH_INFO via fastcgi_param PATH_INFO $fastcgi_path_info; ...and do both before using try_files.
2. Store $fastcgi_path_info in a local variable in case it changes later (e.g. set $path_info $fastcgi_path_info;).
3. When using try_files, don't use $uri... use $fastcgi_script_name. (Pretty sure this was my biggest mistake.)
Sample Configuration
server {
# ... *snip* ...
location ~ \.php {
# NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
fastcgi_split_path_info ^(.+\.php)(/.*)$;
include fastcgi_params;
fastcgi_index index.php;
fastcgi_intercept_errors on;
fastcgi_pass 127.0.0.1:9000;
try_files $fastcgi_script_name =404;
}
# ... *snip* ...
}
Where fastcgi_params contains:
set $path_info $fastcgi_path_info;
fastcgi_param PATH_INFO $path_info;
fastcgi_param PATH_TRANSLATED $document_root$path_info;
fastcgi_param SCRIPT_NAME $fastcgi_script_name;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
# ... *snip* ...
Reference
The following links explain the problem pretty well and provide an alternate nginx configuration:
http://trac.nginx.org/nginx/ticket/321
http://kbeezie.com/php-self-path-nginx/