I've been developing a website with Codeigniter on my local machine, running an Apache server. I did not realise the production server would be running Nginx. When attempting to run CI, I now run into the problem that those pretty URLs with segments in them do not work. I keep getting a 404 page.
I have no experience with Nginx, but I found a few code snippets via Google that I tried.
I'm in a shared hosting situation, meaning I have limited configuration options. This results in the configuration interface rejecting most of the configuration snippets I've copy-pasted into it in an attempt to get it to work.
So far, I've found out it rejects the keywords server_name, root and include, which seem to appear in every single solution I've found.
As I have little knowledge on the subject, I'm not sure if what I'm trying to do (i.e. get Codeigniter up and running with slash-separated URL parts rather than a query string) is even possible when I'm not able to use the aforementioned keywords.
Is there a 'default' piece of Nginx configuration available for Codeigniter that might help me out here? Or is my situation too limited to even allow for a solution? Should I just ask my host for help?
EDIT: Note that I'm not trying to remove index.php from my URLs to make them more appealing - I'm not at that point yet. This is about URL segments in general - you know, the default behaviour of Codeigniter.
When you say "slash-separated URLs", you must understand that such a URL simply points to a non-existent file (for example, site.com/controller/action/param1/value1 points to the "folders" /controller/action/param1 and the "file" value1). In Apache this situation is solved with mod_rewrite, which rewrites any URL pointing to a non-existent file into a URL pointing to index.php. In nginx you need just the same.
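For reference, the Apache counterpart is usually an .htaccess rule along these lines (a typical CodeIgniter rewrite; treat it as a sketch, since the exact RewriteBase and flags depend on your setup):

```apache
RewriteEngine On
# If the request does not match an existing file or directory...
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
# ...send it to the front controller, keeping the URL segments
RewriteRule ^(.*)$ index.php/$1 [L]
```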
In your nginx configuration you have to add these locations:
# for rewriting non-existing url-s to index.php
location / {
try_files $uri $uri/ /index.php?$args;
}
and
# location that processes *.php files through php-fpm (fastcgi)
# it's possible you already have this block configured and don't need to change it
location ~ \.php$ {
try_files $uri =404;
fastcgi_pass 127.0.0.1:9999;
fastcgi_index index.php;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_ignore_client_abort off;
}
Also, if that doesn't help, just google "configure nginx for codeigniter", or ask your web host's support: "please change the config so URL rewriting works".
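On the CodeIgniter side it is also worth checking application/config/config.php; with the try_files rule above appending ?$args, the REQUEST_URI protocol generally works under nginx (the values below are the usual ones, adjust to your project):

```php
// application/config/config.php
// Keep index_page set until rewriting works; set it to '' afterwards
// if you want index.php out of your URLs.
$config['index_page'] = 'index.php';
// REQUEST_URI is the most reliable uri_protocol under nginx/FastCGI.
$config['uri_protocol'] = 'REQUEST_URI';
```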
Related
I am building a Probe for Faveo helpdesk which is an open source ticket system to manage customer support in real time and is developed using laravel framework for PHP.
The aim of the probe is to check the minimum server requirements needed to install and run Faveo on the server. To run Faveo, the URL redirection module (e.g. 'mod_rewrite' on Apache) must be enabled on the server. I have to check whether this module is enabled on different servers such as Nginx, Apache and IIS.
Currently I am able to check 'mod_rewrite' on an Apache server that uses the Apache handler, via the PHP function "apache_get_modules". But this does not work on servers that use handlers other than the Apache handler to run PHP (for example CGI/FCGI/suPHP).
Can anyone tell me how I can check the server's 'mod_rewrite' module irrespective of the handler used to run PHP? Also, how can I check the same on other servers such as Nginx and IIS?
Neither Nginx nor IIS provides any way of checking this, because in both cases PHP runs as FastCGI and has no knowledge of what is terminating the HTTP connection.
Nginx fundamentally has to have its "rewrite module" to perform basic things (try_files is part of the ngx_http_core module).
On Laravel, the nginx configuration for "pretty URLs" is very simple:
server {
listen 80;
root /var/www/public;
try_files $uri /index.php$is_args$args;
location ~ \.php$ {
include fastcgi_params;
fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_index index.php;
fastcgi_keep_conn on;
}
}
On Windows with IIS there's no way to determine whether the Rewrite module is installed (at least not to my knowledge); here you'll probably just have to document the fact that IIS requires a special module for rewriting to work: https://www.iis.net/downloads/microsoft/url-rewrite
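Since no API exposes this under FastCGI, one practical fallback is an empirical probe: deploy a tiny script, request a URL that only resolves when rewriting is active, and interpret the status code. A minimal sketch of that idea (the probe URL and its deployment are assumptions, not part of Faveo):

```python
import urllib.error
import urllib.request

def rewrite_enabled(status_code: int) -> bool:
    # 2xx: the probe script answered, so the rewrite happened.
    # 404 (or other errors): the server looked for a literal file and failed.
    return 200 <= status_code < 300

def check_rewrite(url: str) -> bool:
    # 'url' is a hypothetical probe URL, e.g. http://example.com/probe/anything,
    # where /probe/anything only resolves if rewriting forwards it to a script.
    try:
        with urllib.request.urlopen(url) as resp:
            return rewrite_enabled(resp.status)
    except urllib.error.HTTPError as e:
        return rewrite_enabled(e.code)
```

The same status-code interpretation works regardless of whether the server is Apache, nginx or IIS, which sidesteps the handler problem entirely.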
I'm trying to setup a dev server with nginx for two projects, one in rails and one in PHP. I want a base URL (dev.example.com) for both projects and a sub location for each one (dev.example.com/rails_proj and dev.example.com/php_proj). My nginx conf is like the following:
server {
listen 80;
server_name dev.example.com;
passenger_enabled on;
passenger_app_env development;
passenger_buffer_response off;
root /var/www/dev;
location ~ ^/rails_proj {
root /public;
passenger_base_uri /rails_proj;
passenger_app_root /var/www/dev/rails_proj;
passenger_document_root /var/www/dev/rails_proj/public;
}
location ~ ^/php_proj {
root /web;
try_files $uri /app_dev.php$is_args$args;
location ~ \.php(/|$) {
fastcgi_pass unix:/var/run/php5-fpm.sock;
fastcgi_split_path_info ^(.+\.php)(/.*)$;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}
}
}
The rails project works fine, but the PHP project gives me a "file not found" when I try to access dev.example.com/php_proj/app_dev.php, and the log says: FastCGI sent in stderr: "Primary script unknown". I found related issues and tried a number of fixes, but I can't come up with something that works for both projects. How can I fix this?
It might be easier to manage as two server blocks with an extra subdomain for each. It also reduces potential confusion introduced by multiple regex locations:
server {
server_name rails.dev.example.com;
return 200 "Hello from rails app.
";
}
server {
server_name php.dev.example.com;
return 200 "Hello from php app.
";
}
I'm sure you've moved on or figured this out by now, but for posterity's sake: the root directive in your location block overrides your server root directive. It's not relative to it. The way you have your conf, the project files would need to be located in the absolute path /web/php_proj on your web server.
I can't quite determine the way your local paths are set up, but if /var/www/dev/rails_proj/public is the directory that contains your app root, you may instead need to do something like this:
location /rails_proj {
alias /var/www/dev/rails_proj/public;
}
Using alias instead of root will strip the /rails_proj from the beginning of the request path and serve files relative to the alias path. For example, if you request http://dev.example.com/rails_proj/php/test.php, it will serve the file /var/www/dev/rails_proj/public/php/test.php
Also, I would definitely recommend changing your top level location paths from regex to standard prefix paths.
location ~ ^/php_proj is effectively identical to location /php_proj, except that using regex changes how nginx decides which location to serve. Regex locations are more performance-intensive, are matched on a first-match basis rather than best match, and take priority over all prefix locations.
Another thing to note: using $document_root$fastcgi_script_name may not always work as expected. Especially if using the alias directive instead of root. Using the variable $request_filename instead is preferable in most cases.
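For example, a PHP location that stays correct under both root and alias mappings might look like this (the socket path is taken from the config above):

```nginx
location ~ \.php$ {
    include fastcgi_params;
    # $request_filename is computed after root/alias resolution,
    # unlike $document_root$fastcgi_script_name
    fastcgi_param SCRIPT_FILENAME $request_filename;
    fastcgi_pass unix:/var/run/php5-fpm.sock;
}
```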
I've been fiddling around with this for quite some time now, and I can't really get a grasp on how nginx+hhvm maps my requests.
Basically, I have an API on api.example.com that I would like to call with the header Accept: application/vnd.com.example.api.v1+json for version 1 and application/vnd.com.example.api.v2+json for version 2. The API itself is a PHP application that I'll be running on a fresh install of HHVM. All requests will be handled by index.php.
The folder structure looks like this:
api.example.com/
index.php (content: fail)
v1/
index.php (content: v1)
v2/
index.php (content: v2)
Whenever I use my REST client to access api.example.com/test with the v1 Accept header, I get the v1 response back. When I then use the Accept header for v2, it shows v2. So everything is correct. If I don't supply any Accept header, I get redirected to example.com.
The NGINX configuration looks like this
map $http_accept $api_version {
default 0;
"application/vnd.com.example.api.v1+json" 1;
"application/vnd.com.example.api.v2+json" 2;
}
server {
# listen to :80 is already implied.
# root directory
root /var/www/api.example.com/;
index index.html;
server_name api.example.com;
include hhvm.conf;
location / {
if ($api_version = 0) {
# redirect to example.com if applicable
# Accept-header is missing
return 307 http://example.com;
}
try_files /v$api_version/$uri /v$api_version/$uri/ /v$api_version/index.php?$args;
}
# Prevent access to hidden files
location ~ /\. {
deny all;
}
}
The hhvm.conf file is included below. It is derived from, and nearly identical to, the default hhvm.conf shipped with HHVM.
location ~ \.(hh|php)$ {
fastcgi_keep_conn on;
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
}
This is my problem
If I try to access api.example.com/index.php I get the "fail" response, even though I am expecting v1 for the v1 Accept header and v2 for the v2 Accept header. Everything else seems to work fine; even index.html maps correctly to its subdirectory.
What I've tried
I've tried using
root /var/www/api.example.com/v$api_version/;
in the configuration, but that only gives me 404 errors from nginx. I believe what I'm looking for is actually changing the root path, but I haven't got my head around how to make it work. I've also tried removing the index parameters in both the nginx configuration and hhvm.conf, but that does not seem to help. I've tried a ton of different configurations, with at least 20-30 Stack Overflow tabs open to solve this, but I'm clearly missing something (probably rather simple) here. I've also tried moving the hhvm include inside the location block.
The setup
Debian 7,
nginx/1.2.1,
hhvm 3.2.0
Oh, and it's my first time actually asking a question here. :) Hope I've formatted everything correctly.
What are the contents of hhvm.conf ?
I'm assuming FastCGI is being used to proxy requests to the HHVM server. So your hhvm.conf might look something like this:
root /var/www/api.example.com;
index index.php;
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
Which should be wrapped by a location directive.
So, based on the config you've shown, what I think is happening is that PHP scripts are being matched by the HHVM location directive, which is fine, but in doing so your try_files setting (which seems to be in charge of mapping the API version to the file system) is never processed.
Without your hhvm.conf it's hard to say what to do next, but I suspect you need to focus on the root value inside the location directive that contains the HHVM fastcgi settings.
UPDATE
So I have the concept of an API version derived from the header mapping to a filesystem working for me on nginx + HHVM. Here is my nginx config for HHVM:
location / {
root /var/www/html/hh/v$api_version;
index index.php;
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
}
Really though, my location / is no better than yours; in fact, yours is probably better, as you probably don't want HHVM serving static files and so on. But this is working for me: combined with the map from your original post, when I curl -H 'Accept: application/vnd.com.example.api.v2+json' localhost, I get the expected response from an index.php file inside the version's directory.
I think what you need to do is update your HHVM nginx config with a dynamically generated root declaration like mine above. If you're still getting a 404, try this: in /etc/init.d/hhvm, find the ADDITIONAL_ARGS= variable and make it ADDITIONAL_ARGS="-vServer.FixPathInfo=true". I'm not sure exactly what it does, but I've come across it before, and it fixed a weird 404 problem I had in the past (where the 404 was coming from HHVM, not Apache/nginx).
I am quite new to Magento and I have a case where a client has a running ecommerce website built with Magento and is running on Amazon AWS EC2 server.
If I can locate the web root (/var/www/) and download everything from it (say via FTP), I should be able to boot up my virtual machine with LAMP installed, put every single files in there and it should run, right?
So I did just that, and Magento is giving me an error. I am guessing there are configs somewhere that need to be changed to make it work, or perhaps a lot of paths, etc. The databases also need to be replicated. What are the typical things I should get right? The database is one, and the EC2 instance uses nginx while I use plain Apache; I guess that won't work straight out of the box, and I will probably have to install nginx and a lot of other stuff.
But basically, if I setup the environment right, it should run. Is my assumption correct?
Thanks for answering!
Just two or three config changes are enough to move Magento to another server.
For the database settings, modify the app/etc/local.xml file.
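The relevant fragment of app/etc/local.xml looks roughly like this (the values are placeholders for the new server's credentials):

```xml
<connection>
    <host><![CDATA[localhost]]></host>
    <username><![CDATA[db_user]]></username>
    <password><![CDATA[db_password]]></password>
    <dbname><![CDATA[magento]]></dbname>
    <active>1</active>
</connection>
```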
If you changed your URL, you need to edit the database in the core_config_data table: change the values of web/unsecure/base_url and web/secure/base_url to your current base URL.
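For example, with the mysql client or phpMyAdmin (this assumes the default table name with no table prefix):

```sql
UPDATE core_config_data
   SET value = 'http://www.your-new-domain.com/'
 WHERE path IN ('web/unsecure/base_url', 'web/secure/base_url');
```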
For an Apache server:
Sometimes you also need to modify the .htaccess file, located in the root directory. Change the rewrite engine settings (if needed) to match your current root directory, like this:
RewriteBase /magento/
For an NGINX server:
If your web server is NGINX, your config should look like this:
server {
root /home/magento/web/;
index index.php;
server_name magento.example.com;
location / {
index index.html index.php;
try_files $uri $uri/ #handler;
expires 30d;
}
location ~ ^/(app|includes|lib|media/downloadable|pkginfo|report/config.xml|var)/ { internal; }
location /var/export/ { internal; }
location /. { return 404; }
location #handler { rewrite / /index.php; }
location ~* \.php/ { rewrite ^(.*\.php)/ $1 last; }
location ~* \.php$ {
if (!-e $request_filename) { rewrite / /index.php last; }
expires off;
fastcgi_pass 127.0.0.1:9000;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param MAGE_RUN_CODE default;
fastcgi_param MAGE_RUN_TYPE store;
include fastcgi_params;
}
}
Here's a detailed guide how to move Magento to a new server: http://www.magentocommerce.com/wiki/groups/227/moving_magento_to_another_server (I used it once and it worked perfectly)
Based on your text I assume you forgot to copy the database (it would be great if you could post the error message here). Only copying the files isn't enough; the database has to be copied too.
In the database you also have to adjust the server URL. In the config file (app/etc/local.xml) you only have to update the database settings.
/edit: MySQLDumper is a tool to back up and restore MySQL databases with PHP, so you don't need phpMyAdmin (but you still need web access). Back up on the old server, restore on the new. The database settings can be found in app/etc/local.xml.
Situation
Hello, I'm confused about PHP's expected/default behavior regarding extensionless PHP files, and URL requests that "go past" the file meant to process the request (i.e., PHP's default "fallback" actions before it resorts to completely 404-ing). Here's my situation:
My directory structure on my local server (running nginx 1.5.3 with very basic PHP 5.5.1 setup) looks like the following:
/index
/index.php
/other
/other.php
/rootdir/index
/rootdir/index.php
/rootdir/other
/rootdir/other.php
The contents of all eight files are the same:
<?php
echo $_SERVER['PHP_SELF'] . ', ' . $_SERVER['REQUEST_URI'];
?>
BUT, hitting the respective endpoint produces some strange (to me) results.
Research
GET /index.php
'/index.php, /index.php' # Makes sense...
GET /index.php/something_else
'/index.php, /index.php/something_else' # Also makes sense...
GET /index/something_else
'/index.php, /index/something_else' # Let's call this ANOMALY 1... (see below)
GET /something_else
'/index.php, /something_else' # ANOMALY 2
GET /other.php
'/other.php, /other.php' # Expected...
GET /other.php/something_else
'/index.php, /other.php/something_else' # ANOMALY 3
GET /rootdir/index.php
'/rootdir/index.php, /rootdir/index.php' # Expected...
GET /rootdir/index.php/something_else
'/index.php, /rootdir/index.php/something_else' # ANOMALY 4
GET /rootdir/other.php
'/rootdir/other.php, /rootdir/other.php' # Expected...
GET /rootdir/other.php/something_else
'/index.php, /rootdir/other.php/something_else' # ANOMALY 5
My understanding is that the server redirects to /index.php when it is unable to find what the user is looking for at the request URI; that much makes sense... what I don't understand is:
Why it will do this despite my not having a dedicated 404 page set up (I didn't tell it to try /index.php before 404-ing; I want it to display a legit, non-custom 404 page if something isn't found and/or can't be processed. I figured it should display the default server 404 page when it couldn't find something... apparently that's not always the case...?)
Why it doesn't try /rootdir/index.php when it can't find something within the /rootdir/ subdirectory.
Questions
Would somebody be able to shed some light on the logic PHP (or maybe nginx; I haven't been able to figure that out yet) applies to addresses that are not found? Why am I seeing what I am seeing? (Specifically with respect to Anomalies #4 and #5: I expected it to use /rootdir/index.php for handling its "404", or I expected a real 404 page; the fallback to /index.php was unexpected.)
As a direct corollary (corollical?) question, how can I go about simulating extensionless PHP files that will handle hits that occur "below" them (e.g. in the Anomaly #1 case; that's actually exactly what I want, though it wasn't quite what I expected) without relying on .htaccess, mod_rewriting, or redirects? Or is that a silly question? :-)
References
I'm trying to roll out a custom implementation for handling requests like /some_dir/index.php/fake_subdir and /some_other_dir/index.php/fake_subdir (i.e., different "fallback handlers") without relying on Apache, but the logic behind PHP's (or nginx's?) default fallback behavior is eluding me. These pages are primarily where this question stems from:
Pretty URLs without mod_rewrite, without .htaccess
https://stackoverflow.com/a/975343/2420847
GET /other.php/something_else
This is called PATH_INFO in Apache. As Apache scans down a URL's "directories", it will return the first file (or execute the first script) that is actually a file/script, e.g.
GET /foo/bar/baz/index.php/a/b/c/
^--dir
^--dir
^---dir
^---script
^^^^^^^--- path_info
In real terms, the real request is for
GET /foo/bar/baz/index.php
and then Apache will take the unused trailing portion of the directory structure of the URL and turn it into path info. In PHP, you'll have
$_SERVER['SCRIPT_NAME'] = '/foo/bar/baz/index.php';
$_SERVER['PATH_INFO'] = '/a/b/c/';
Well, I figured out what was going on: I had a misconfigured nginx server.
Marc B's answer, though related to Apache, prompted me to check out my PATH_INFO value — lo and behold, PATH_INFO was nonexistent, even for requests like GET /other.php/something_else.
Some more searching turned up the answer: using nginx's fastcgi_split_path_info to split the URI into (a) the path to the script and (b) the path following the script name will fail if done after try_files, because try_files rewrites the URI to point to a file on disk. This obliterates any possibility of obtaining a PATH_INFO string.
As a solution, I did three things (some of these are probably redundant, but I did them all, just in case):
In a server's location block, make use of fastcgi_split_path_info before setting up PATH_INFO via
fastcgi_param PATH_INFO $fastcgi_path_info;
...and do both before making use of try_files.
Store $fastcgi_path_info as a local variable in case it ever changes later (e.g. with set $path_info $fastcgi_path_info;)
When using try_files, don't use $uri... use $fastcgi_script_name. (Pretty sure this was my biggest mistake.)
Sample Configuration
server {
# ... *snip* ...
location ~ \.php {
# NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
fastcgi_split_path_info ^(.+\.php)(/.*)$;
include fastcgi_params;
fastcgi_index index.php;
fastcgi_intercept_errors on;
fastcgi_pass 127.0.0.1:9000;
try_files $fastcgi_script_name =404;
}
# ... *snip* ...
}
Where fastcgi_params contains:
set $path_info $fastcgi_path_info;
fastcgi_param PATH_INFO $path_info;
fastcgi_param PATH_TRANSLATED $document_root$path_info;
fastcgi_param SCRIPT_NAME $fastcgi_script_name;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
# ... *snip* ...
Reference
The following links explain the problem pretty well and provide an alternate nginx configuration:
http://trac.nginx.org/nginx/ticket/321
http://kbeezie.com/php-self-path-nginx/