Background
Last month I designed my first REST API as part of my effort to learn PHP. It was a chat messenger. To provide access to the different APIs I relied on the folder and file structure, i.e. each URL pointed to an actual file on the server path, and that file was a complete script that did its job. So all the app using the API had to do was call the specific URL and provide the proper input.
Question
Lately I have been reading about routers, and this question came to my mind: why should I use routers, and do I really need them?
My points so far:
- Without using routers, API design and access is easy; Apache handles the routing (if I am not wrong).
- With routers the code is more complex, but I can have some code run before the actual script is engaged, so I have more control over API access and the like.
While the answer Yassine pointed out elsewhere contains a lot of information, it is a bit misleading (and wrong) in some regards.
A router is commonly used as part of a front controller architecture pattern. This funnels all incoming requests to a specific handler which will normally then pass the request on to a specific bit of functionality via the router.
The advantage of doing this is that it avoids duplication of code - specifically to handle things like session management, authentication, authorization, and templating.
Consider, for example:
<?php
// Front controller: every request passes through this single script.
$sitedown = $_SERVER['DOCUMENT_ROOT'] . "/sitedown.php";
if (file_exists($sitedown)) {
    require_once($sitedown);
    exit;
}

require_once("session/use_memcache_sessions.inc");
require_once("session/authorization.inc");
require_once("router.inc");
require_once("template/pagelayout.inc");

session_start();
begin_page_template($_SESSION['user_prefs']);
if (is_authorized($_SERVER['PHP_SELF'], $_SESSION['user_groups'])) {
    route_request($_SERVER['PHP_SELF']);
} else {
    show_unauth_message();
}
end_page_template();
While you could just drop lots of self-contained scripts onto your filesystem, with each one acting as an entry point, each of them would need to implement the logic above. If you decided to start using Redis for your session management, or to change the templating, you would need to change every script to accommodate the new behaviour.
Related
There are a number of answers about how to do it, but I can't find a reason, or a set of reasons, why it's a nice thing to do.
This is called the Front Controller pattern. There are several benefits, including:
- It makes sure that all common resources for all pages are included.
- Website resources are managed centrally, and access can be more easily restricted (e.g. to admins only).
- It makes the web application a complete, whole package, where common things such as the session, the session cookie, and page access control are shared.
- You have one central entry point for your application. There is usually one application behind a website, so it feels quite uncomfortable to access it through many different single scripts.
- Comfort: you use the same bootstrapping code for all your pages without the danger of forgetting to include something in some of your files.
Bootstrapping is the code you run at the beginning of each page: session_start, the DB connection, ACL checks, and so on.
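For illustration, a minimal sketch of such a bootstrap file; the file names and the check_acl() helper are hypothetical:
<?php
// bootstrap.php - a minimal sketch of shared bootstrapping (hypothetical names).
session_start();                                      // shared session handling

$db = new PDO('mysql:host=localhost;dbname=app',      // shared DB connection
              'user', 'password');

require __DIR__ . '/acl.php';                         // hypothetical ACL helper
if (!check_acl($_SESSION['user_id'] ?? null, $_SERVER['REQUEST_URI'])) {
    http_response_code(403);                          // deny unauthorized pages
    exit;
}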
I agree with the other users' answers; in particular, thephpdeveloper drew attention to an important detail: the Front Controller pattern.
Generally, when you use a framework like Zend Framework, all requests are received by an index.php file which is responsible for initializing the environment and analyzing the requested URL (extracting the module, controller, action, and so on). In such a case the index.php file can be viewed as the web application's start point.
I see that many MVC implementations for websites have a single entry point, such as an index.php file, which then parses the URL to determine which controller to run. This seems rather odd to me because it involves having to rewrite the URL using Apache rewrites, and with enough pages that single file will become bloated.
Why not instead just have the individual pages be the controllers? What I mean is, if you have a page on your site that lists all the registered members, then the members.php page users navigate to will be the controller for the members. This PHP file will query the members model for the list of members from the database and pass it to the members view.
I might be missing something because I have only recently discovered MVC, but this one issue has been bugging me. Wouldn't this kind of design be preferable because, instead of having one bloated entry file that all pages unintuitively go through, the models and views for a specific page are contained, encapsulated, and called from their respective page?
From my experience, having a single entry point has a couple of notable advantages:
It eases centralized tasks such as resource loading (connecting to the DB or to a memcache server, logging execution times, session handling, etc.). If you want to add or remove a centralized task, you just have to change a single file: the index.php.
Parsing the URL in PHP decouples the "virtual URL" from the physical file layout on your web server. That means you can easily change your URL scheme (for example, for SEO purposes, or for site internationalization) without having to actually change the location of your scripts on the server.
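As a rough sketch of that decoupling (the route table and file names are made up), the index.php might do something like:
<?php
// index.php - a rough sketch; the route table and file names are made up.
$path = parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH);

$routes = [
    '/'        => 'pages/home.php',
    '/members' => 'pages/members.php',
    '/blog'    => 'pages/blog.php',
];

// Changing the public URL scheme only means editing this table,
// not moving scripts around on the server.
require $routes[$path] ?? 'pages/404.php';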
However, sometimes having a single entry point can be a waste of server resources. That obviously applies to static content, but also when you have a set of requests that have a very specific purpose and need only a very small subset of your resources (maybe they don't need DB access, for instance). Then you should consider having more than one entry point. I have done that for the site I am working on: it has one entry point for all the "standard" dynamic content and another one for calls to the public API, which need far fewer resources and have a completely different URL scheme.
And a final note: if the site is well implemented, your index.php doesn't necessarily have to become bloated :)
It is all about being DRY: if you have many PHP files handling requests, you will have duplicated code. That just makes for a maintenance nightmare.
Have a look at the 'main' index page for CakePHP, https://github.com/cakephp/cakephp/blob/master/app/webroot/index.php
No matter how big the app gets, I have never needed to modify that file. So how can it get bloated?
Deep-linking directly into the controllers of an MVC framework eliminates the possibility of implementing controller plugins or filters, depending on the framework you are using. Having a single point of entry standardizes the bootstrapping of the application and its modules, and lets the previously mentioned plugins execute before a controller is accessed.
Also, Zend Framework uses its own URL rewriting in the form of routing. The Zend Framework applications I work on have an .htaccess file of maybe six lines of rewrite rules and conditions.
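For reference, that rewrite block looks roughly like this (reproduced from memory from the Zend Framework 1 skeleton, so treat the details as approximate); it serves real files directly and sends everything else to index.php:
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} -s [OR]
RewriteCond %{REQUEST_FILENAME} -l [OR]
RewriteCond %{REQUEST_FILENAME} -d
RewriteRule ^.*$ - [NC,L]
RewriteRule ^.*$ index.php [NC,L]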
A single entry point certainly has its advantages, but you can get pretty much the same benefit from a central required file at the top of every single page that handles database connections, sessions, etc. It's not bloated, it conforms to DRY principles (except for that one require line), it separates logic and presentation, and if you change file locations, a simple search and replace will fix it.
I've used both and I can't say one is drastically better or worse for my purposes.
Software engineering principles are what drive the single-point-of-entry paradigm. "Why not instead just have the individual pages be the controllers?"
Individual pages are already Controllers, in a sense.
In PHP, there is going to be some boilerplate code that loads for every HTTP request: the autoloader require statement (PSR-4), error-handler code, sessions, and, if you are wise, wrapping the core of your code in a try/catch with Throwable as the top exception to catch. By centralizing code, you only need to make changes in one place!
True, the centralized PHP will use at least one require statement (to load the autoloader code), but even if you have many require statements they will all be in one file, the index.php (not spread out over a galaxy of files under the document root).
If you are writing code with security in mind, again, you may have certain encoding checks/sanitizing/validating that happen with every request. Using values in $_SERVER or filter_input_array()? Then you might as well centralize that.
The bottom line is this. The more you do on every page, the more you have a good reason to centralize that code.
Note that this way of thinking leads one down the path of looking at your website as a web application. From the web application perspective, a single point of entry is justifiable because a problem solved once should only need to be modified in one place.
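To make that concrete, a minimal sketch of such a centralized index.php (the Composer autoloader path and the dispatch() helper are assumptions):
<?php
// index.php - a minimal sketch of the centralized boilerplate described above.
// vendor/autoload.php assumes Composer; dispatch() is a hypothetical router.
require __DIR__ . '/../vendor/autoload.php';   // PSR-4 autoloader

session_start();

try {
    dispatch($_SERVER['REQUEST_URI']);         // hand the request to the app
} catch (Throwable $e) {                       // top-level catch-all
    error_log($e->getMessage());
    http_response_code(500);
    echo 'Something went wrong.';
}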
I am working on a big website (a social network in PHP) and I've decided to create only one PHP page, index.php; this page will contain if conditions and statements on the $_GET value and will display the requested page (but within the same index.php page).
This means that the code (JavaScript + XHTML + PHP) will be very large (nearly the whole project in one page).
I will also use .htaccess to rewrite the URLs of those pages to avoid any malicious requests (so it will appear just like a normal website).
But before doing so, I just want to know the advantages and downsides of this technique, seen from all sides (security, server resources, etc.).
I think what you're trying to do is organize your code properly and effectively, which I commend.
However, if I understand correctly, you're going to put all of your JavaScript, HTML, and PHP in one file, which is really bad. You want your code to be modular, not lumped together in a single file.
I think you should look into using a framework (e.g. Zend) - PHP frameworks are specifically designed to help your code remain organized, modular, and secure. Your intent (organizing your code effectively) is great, but your idea for how to organize it isn't very good. If you're absolutely adamant about not using a framework (for example, if this is a learning/school project), you should at least make sure you're following best practices.
This approach is not good in terms of server resource usage. In order to get access to, say, jQuery.js, your web server is going to:
1. Determine that jQuery.js actually passes through index.php.
2. Pass index.php through the PHP parser.
3. Wait for PHP to generate a response.
4. Serve that response.
Or, you could serve it like this:
1. Determine that jQuery.js exists at /var/www/mysite/jQuery.js.
2. Serve it as the response.
Likewise for anything that's "static", i.e. not generated by PHP directly. And the bigger the number of ifs in the PHP script, the more tests will need to be done to find your file.
You do not need to pass your static content through some form of URL routing; only your dynamic content. For real speed, it's better to have responses generated ahead of time, which is called caching, particularly if the dynamic content is expensive in CPU cycles to generate. Other caching techniques include keeping frequently accessed database data in memory, which is what memcached does.
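As a small illustration using PHP's Memcached extension (the key, the TTL, and the load_members_from_db() helper are made up):
<?php
// A caching sketch: the key, the TTL and load_members_from_db() are made up.
$cache = new Memcached();
$cache->addServer('127.0.0.1', 11211);

$members = $cache->get('members_list');
if ($members === false) {                        // cache miss: hit the database
    $members = load_members_from_db();
    $cache->set('members_list', $members, 300);  // keep in memory for 5 minutes
}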
If you're developing a social network, these things really do matter. Heck, Facebook wrote a PHP-to-C++ compiler to save clock cycles.
I second the framework recommendation because it really will make code organisation easier and might integrate with a caching-based solution.
In terms of PHP frameworks, there are many. Here's a list of many web application frameworks in many languages and, from the same page, the PHP ones. Take a look and decide which you like best. That's what I did, and I ended up learning Python to use Django.
I came across this question while searching, so since the best answer is old, here is a more modern one, from this question:
Why use a single index.php page for entire site?
A front controller (index.php) ensures that everything that is common to the whole site (e.g. authentication) is always correctly handled, regardless of which page you request. If you have 50 different PHP files scattered all over the place, it's difficult to manage that. And what if you decide to change the order in which the common library files get loaded? If you have just one file, you can change it in one place. If you have 50 different entry points, you need to change all of them.
Someone might say that loading all the common stuff all the time is a waste of resources and you should only load the files that are needed for this particular page. True. But today's PHP frameworks make heavy use of OOP and autoloading, so this "waste" doesn't exist anymore.
A front controller also makes it very easy for you to have pretty URLs in your site, because you are absolutely free to use whatever URL you feel like and send it to whatever controller/method you need. Otherwise you're stuck with every URL ending in .php followed by an ugly list of query strings, and the only way to avoid this is to use even uglier rewrite rules in your .htaccess file. Even WordPress, which has dozens of different entry points (especially in the admin section), forces most common requests to go through index.php so that you can have a flexible permalink format.
Almost all web frameworks in other languages use single points of entry -- or more accurately, a single script is called to bootstrap a process which then communicates with the web server. Django works like that. CherryPy works like that. It's very natural to do it this way in Python. The only widely used language that allows web applications to be written any other way (except when used as an old-style CGI script) is PHP. In PHP, you can give any file a .php extension and it'll be executed by the web server. This is very powerful, and it makes PHP easy to learn. But once you go past a certain level of complexity, the single-point-of-entry approach begins to look a lot more attractive.
It will be a hell of a mess.
You also won't be able to upgrade parts of the website, or work on them, without messing with the whole thing.
You will not be able to apply a programming architecture like MVC.
It could theoretically be faster, because only one file needs to be fetched from disk, but only under the assumption that all, or at least almost all, of the code is going to be executed.
Since you will have to load and compile the whole file for every single request, including the parts that are not needed, it will actually slow you down.
What you CAN do, however, is have a single point of entry where all requests originate. That helps a lot with control and is called a bootstrap file.
But most importantly:
Why would you want that?
From what I know, most CMSes (and probably all modern ones) are made so that the requested page is always the same index.php, but that file is just a dispatcher to other sections. The code is properly split across different files that are pulled together with includes.
Edit: if you're afraid your included scripts are vulnerable, the solution is trivial: put them outside of the web root.
Simplistic example:
<?php
/* This folder shouldn't even be in the site root;
   it should be in a totally different place on the server
   so there is no way someone could request something from it. */
$safeRoot = '/path/to/safe/folder/';

include $safeRoot . 'all_pages_need_this.php'; // aka The Bootstrap //

// Dispatch on a whitelist of known pages; anything else gets the 404 module.
switch ($_GET['page'] ?? '') {
    case 'home':
        include $safeRoot . 'home.module.php';
        break;
    case 'blog':
        include $safeRoot . 'blog.module.php';
        break;
    case 'store':
        include $safeRoot . 'store.module.php';
        break;
    default:
        include $safeRoot . '404.module.php';
}
This means that the code (JavaScript + XHTML + PHP) will be very large (nearly the whole project in one page).
Yes, and it'll be slow.
So you're not going to have any HTML caching?
It's all purely in one file, hard to update and slow to interpret? Geesh, good luck.
What you are referring to is called a single point of entry, and it is something many web applications (most notably the ones built following the MVC pattern) use.
The code of your entry-point file doesn't have to be huge, as you can simply include() other files as needed. For example:
<?php
$module = $_GET['module'] ?? '';  // avoid a notice when the parameter is absent
if ($module == 'messages') {
    include('inbox.php');
}
if ($module == 'profile') {
    include('profile.php');
}
// ...and so on for the other modules.
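A slightly safer variant of the same idea (just a sketch) is to whitelist the includable files in an array, so a request can never pull in an unexpected script:
<?php
// Whitelist variant (a sketch): only files listed here can ever be included.
$modules = [
    'messages' => 'inbox.php',
    'profile'  => 'profile.php',
];
$module = $_GET['module'] ?? '';
include $modules[$module] ?? '404.php';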
I'm looking for a thin layer on top of HTTP request handling that can easily route to different backends, based on the URI / REST verb / actual service location / etc. This layer should also handle encoding into whatever the requested format is (XML / JSON / returning binary data / etc.).
The most important point, though, is to make it pluggable into some backend - whether that's a message queue, a job dispatcher, an external process, or something completely different. They should be handled with a minimal wrapper for the needed message translation.
So basically, that would be a customisable request dispatcher with some magic on top. Does something like that exist as a separate application now?
Edit: Almost forgot - it would be great if it was written in PHP... but if something else matches the description, I'd have a look too.
Don't know about PHP, but if Java and/or Python are acceptable options for you, you should take a look at RESTx, which was designed for the simple and fast creation of RESTful services. RESTx is fully open source, GPLv3 licensed.
I agree that many frameworks are all about object creation and mapping, which often can be very annoying and get in the way. RESTx, however, is about the data, the automatic conversion of content types and so on. With RESTx you can write custom components in either Java or Python. These components can take care of access to databases, custom APIs, legacy data, cloud services, etc. RESTx examines the code and automatically produces a self documented, discoverable, RESTful API. It's all links you can follow. Take a look at how to take a tour of the server with a web browser.
The key is that you can POST parameter sets to those components which are then stored and accessible under a new URI. You access the URI, the parameters get applied to the component and you get the output back. Thus, you can rapidly create new RESTful web services and resources. You can access other resources easily from within your component's code and it doesn't cause an additional HTTP request.
I'm the lead developer for RESTx, so if you have any questions about it, please contact me on the forums (links to those are on our web site).
Zed Shaw of Mongrel fame is attempting to do just this. He's creating Mongrel2 (still in development), essentially a universal frontend for web application backends. It allows you to plug in any program that can send and receive 0MQ or HTTP messages like a reverse proxy.
It also uses a sane configuration file system: SQLite. No more messing around with Apache config files with weird syntax.
It's written in C, so it may not be as easy to deploy as a language like PHP, but it certainly scales very well.
If you're not satisfied with Mongrel2, it's relatively easy to roll your own. I've used nodejitsu's node-http-proxy for one of my own projects. It's simple and fast. Plus, you can write your routing rules using regular old if statements.
I have been in web programming for two years (self-taught; a biology researcher by profession). I designed a small wiki with the needed functionality and a scientific RTE - of course, a lot more is expected. I used the MooTools framework and AJAX extensively.
I was always curious whenever I saw query strings passed in a URL: a long encoded query string getting passed directly to the server. Google's design in particular is like this. I think this is the start of providing a Web Service to a client - I guess.
Now, my question is: is this a special, highly professional, efficient / advanced web design technique to communicate queries via the URL?
I always felt that direct URL-based communication is faster. I tried it myself and could send a query through the URL directly; here is the link: http://sgwiki.sdsc.edu/getSGMPage.php?8
This way, the client can link directly to the desired page instead of searching, and/or can automate the process. There are many possibilities.
My next request: can I be pointed to such techniques of web programming?
I think this is the start of providing a Web Service to a client - I guess.
No, not really, although it can be. It's used to have a central entry point to the entire application. It's a common practice and has all kinds of benefits, but it's obviously not required. Often these days, even a normal-looking URL may not map to an actual physical page in the application: each part of the path may actually be mapped to a variable through rewriting and routing on the server side. For example, the URL of this question:
http://stackoverflow.com/questions/2557535/general-web-programming-designing-question
might map to something like:
http://stackoverflow.com/index.php?module=questions&action=view&question=2557535&title=general-web-programming-designing-question
is this a special, highly professional, efficient / advanced web design technique to communicate queries via the URL?
Having a centralized page through which all functions within an application are accessed is part of the Front Controller pattern - a common pattern in applications, generally used as part of the overall Model-View-Controller (MVC) pattern. In MVC, the concerns of the application are divided into models, which hold the business logic. These models are then used by the controller to perform a set of tasks which can produce output. This output is then rendered to the client (browser, window manager, etc.) via the view layer.
I think that essentially what you are asking about is query strings. In a URL, after the page there may be a question mark, after which there may be URL parameters (generally called GET request parameters).
http://www.google.com/search?q=URL+parameter
Generally, processing this would be done on the server-side. For example, in PHP, one could use the following:
$_GET['q']
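That expression evaluates to the value of the q parameter. For instance, a tiny hypothetical search.php might use it like this:
<?php
// search.php - a tiny hypothetical example of reading a query-string value.
$q = $_GET['q'] ?? '';                        // '' if the parameter is absent
echo 'You searched for: ' . htmlspecialchars($q, ENT_QUOTES, 'UTF-8');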
Alternatively, to handle this on the client side, one can use anchors: replace the question mark with a hash sign (#).
Since the hash is used for anchors, when a URL is changed to include an anchor, the page is not refreshed. This allows a completely AJAX-driven page to manipulate the URL without refreshing. This method is also often used to enable back-button support for AJAX pages.
In JavaScript, one can use the onload handler as an opportunity to read the URL of the page and get the hash part. The page could then make a request back to the server to read any necessary data.
It's a consequence of using a front controller architecture. This fits neatly with the idea of a wiki where the same code is used to render multiple different wiki pages - the content is defined by the data.
Using the query part of the URL for the page selection criteria is not the only solution. For example, if you are using Apache, then you could implement:
http://sgwiki.sdsc.edu/getSGMPage.php?8
as
http://sgwiki.sdsc.edu/getSGMPage.php/8
(You'll need to add your own parsing to get the value out.)
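A sketch of that parsing, assuming Apache makes the trailing path segment available in $_SERVER['PATH_INFO']:
<?php
// getSGMPage.php - a sketch of pulling the page id out of the extra path part.
// Assumes Apache exposes the trailing "/8" as $_SERVER['PATH_INFO'].
$pathInfo = $_SERVER['PATH_INFO'] ?? '';      // e.g. "/8"
$pageId   = (int) ltrim($pathInfo, '/');      // e.g. 8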
Alternatively, you can use mod_rewrite to map components out of the path back into the query.
There's no particular functional or performance reason for adopting any of these strategies, although it is recommended, where the URL is idempotent, that each page be addressable via a GET operation (which is also useful for SEO).