http://www.php.net/manual/en/features.remote-files.php
The only time I could ever think of doing include("http://someotherserver/foo.php") would be as some sort of weird inter-server service interface, but even then I can think of a million safer ways to accomplish the same thing. Still, my specific question is: has anyone seen remote includes in a production environment, and did it make any sense to do so?
Edit:
To clear something up, I would cause physical injury to befall anyone who ever tried to use remote includes in a production environment I worked on... So yes, I know this is a nightmarish security hole. I'm just trying to figure out why it's still there, versus other weird ideas like magic quotes and global variables.
While I've never seen this in real life, I could imagine a farm of separate physical servers with no shared file system. You could possibly have one server with all the code, i.e. api.domain.com, and have the other servers include from it. It would make deployments easier if you have tens or hundreds of separate sites. But as alex said, it's asking to be hacked.
Remote file execution is extremely dangerous... I've never used it on my servers, and I can't imagine a valid reason to put your, ahem, balls into the basket that someone else controls. That's just asking to be hacked.
No, I haven't. It's like walking straight into the bear's mouth.
I suppose the possibility to include/require remote files is a consequence of allow_url_fopen -- which was introduced in PHP 4.0.x.
Though, considering the security risks of remote inclusion, a new directive, allow_url_include, was introduced in PHP 5.2: this one determines whether you can remotely include/require, while the former only impacts fopen and the like -- which is nice: it allows an admin to disable remote inclusion while keeping remote opening.
Like others, I have never seen remote require/include used in a real-world scenario, while I, of course, often see situations where remote opening is used -- the bad thing is that I sometimes see servers with allow_url_fopen disabled for security reasons that no longer exist :-(
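For what it's worth, you can check how a given server is configured at runtime; a quick sketch (the output format is just illustrative):

<?php
// allow_url_fopen governs fopen()/file_get_contents() on URLs;
// allow_url_include (PHP 5.2+) additionally governs include/require of URLs.
echo 'allow_url_fopen:   ', ini_get('allow_url_fopen') ? 'On' : 'Off', "\n";
echo 'allow_url_include: ', ini_get('allow_url_include') ? 'On' : 'Off', "\n";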
I have been working on a project that was split over several servers, so PHP scripts were run through a URL interface. E.g., to resize an image I would call a script on one server, either from the same server or from one of the other servers, as
file_get_contents('http://mysite.com/resizeimg.php?img=file.jpg&x=320&y=480');
Now, this works, but we are upgrading to a new server structure where the code can live on the same machine. So instead of all these wrapper functions I could just include the code and call a function directly. My question is: is it worth the overhead of rewriting the code to do this?
I do care about speed, but I don't worry about security -- I already have a password system, and certain scripts only accept requests from certain IPs. I also care about the overhead of rewriting code, but cleaner, more understandable code is also important. What are the trade-offs that people see here, and ultimately is it worth it to rewrite?
EDIT: I think that I am going to rewrite it, then, to include the functions. Does anyone know if it is possible to include between several servers in the same domain? Like, if there is a server farm where I have 2-3 servers, can I have some basic functionality on one of them that the others can access but that no one could access from the outside?
is it worth the overhead of rewriting the code to do this?
Most likely yes - an HTTP call will always be slower (and more memory-intensive) than directly including the library that does the work.
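To make the trade-off concrete, here is a rough sketch of the two approaches; resize_image() and the lib/ path are hypothetical stand-ins for your own code:

<?php
// Old approach: every resize is a full HTTP round trip to another server.
$result = file_get_contents(
    'http://mysite.com/resizeimg.php?img=file.jpg&x=320&y=480'
);

// New approach: the code lives on the same machine, so it can be included
// once and the function called directly, avoiding the HTTP overhead.
require_once __DIR__ . '/lib/imaging.php';   // assumed location of the library
$result = resize_image('file.jpg', 320, 480);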
I am using WampServer to develop a website using PHP, MySQL, PDO, HTML and CSS.
My WAMP server is running PHP 5.3.5, MySQL 5.5.8 and Apache 2.2.17, and I am also using InnoDB for transactions.
Considering that my hosting provider has at least these versions of PHP, MySQL and Apache, and supports InnoDB, will the website I build behave in exactly the same way?
Is it possible to build a website in WAMP and then run into errors when going live? And if so, how is this overcome?
Thanks.
As others note, there are many potential hiccups (but I view them as learning opportunities.) But I've done it this way for over five years and have yet to find a difference that wasn't easily overcome. Just stick to the middle of the road, use defaults as much as makes sense, and have fun. It's a great way to explore the platform.
There are many things that can go wrong, most of them having to do with how the web server and PHP are built and configured.
The simplest example is PHP's safe mode: there are many things that safe mode does not allow, and turning it off may not be an option if you are on a shared host. Another example is which extensions are enabled in PHP (your app may require one which the host does not have).
Of course this is all moot if you rent the whole server (or a VM), as in this case you will be able to do whatever you please.
For completeness, I have to mention that there might be platform-specific differences in behavior resulting from the same library (which PHP uses to provide some functionality) compiling into different behavior on different platforms (think platform sniffing in C with #ifdef). I have been bitten by this in the past, but the possibility is not large enough to worry about it beforehand.
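If you want to catch the most common of these differences early, a rough pre-flight script run on both WAMP and the live host can help; the list of extensions below is an assumption, so adjust it to what your site actually uses:

<?php
// Compare the essentials mentioned above between environments.
$required = array('pdo', 'pdo_mysql');

foreach ($required as $ext) {
    echo $ext, ': ', extension_loaded($ext) ? 'loaded' : 'MISSING', "\n";
}

echo 'safe_mode:   ', ini_get('safe_mode') ? 'On' : 'Off', "\n";
echo 'PHP version: ', PHP_VERSION, "\n";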
A lot of the issues can be worked around by moving constants into config files, like Jon says. Some issues will be less in your control and harder to diagnose. For instance, the output buffer control may be configured differently outside the DocumentRoot you have access to. This can cause confusing problems when you try to write headers out when other content has already been sent out. Similar issues with the timeout numbers, etc.
If a phpinfo() dump is shown to an end user, what is the worst that a malicious user could do with that information? Which fields are the most insecure? That is, if your phpinfo() were publicly displayed, after taking it down, where should you watch/focus for malicious exploits?
Knowing the structure of your filesystem might allow hackers to execute directory traversal attacks if your site is vulnerable to them.
I think exposing phpinfo() on its own isn't necessarily a risk, but in combination with another vulnerability could lead to your site becoming compromised.
Obviously, the less specific info hackers have about your system, the better. Disabling phpinfo() won't make your site secure, but will make it slightly more difficult for them.
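If you do want to keep a phpinfo() page around for debugging, one common compromise is to gate it behind an IP whitelist; a minimal sketch (the address below is an assumption):

<?php
$allowed = array('203.0.113.10');   // your own IP(s)

if (in_array($_SERVER['REMOTE_ADDR'], $allowed, true)) {
    phpinfo();
} else {
    header('HTTP/1.0 404 Not Found');
    exit;
}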
Besides the obvious, like being able to see whether register_globals is On and where files might be located via your include_path, there's all the $_SERVER information ($_SERVER["DOCUMENT_ROOT"] can give clues for building a relative pathname to /etc/passwd) and the $_ENV information (it's amazing what people store in $_ENV, such as encryption keys).
The biggest problem is that many versions make XSS attacks simple by printing the contents of the URL and other data used to access it.
http://www.php-security.org/MOPB/MOPB-08-2007.html
A well-configured, up-to-date system can afford to expose phpinfo() without risk.
Still, it is possible to get hold of so much detailed information - especially module versions, which could make a cracker's life easier when newly-discovered exploits come up - that I think it's good practice not to leave them up. Especially on shared hosting, where you have no influence on everyday server administration.
Hackers can use this information to find vulnerabilities and hack your site.
Honestly, not much. Personally, I frequently leave phpinfo() pages up.
If you have some serious misconfigurations (e.g. PHP is running as root), or you're using old and vulnerable versions of some extensions or of PHP itself, this information will be more exposed. On the other hand, not exposing phpinfo() wouldn't protect you either; you should instead take care of keeping your server up-to-date and correctly configured.
I just noticed a PHP config parameter called allow_url_include, which allows you to include a PHP file hosted elsewhere as you would a local one. This seems like a bad idea, but "why is this bad" is too easy a question.
So, my question: When would this actually be a good option? When it would actually be the best solution to some problem?
Contrary to the other responders here, I'm going to go with "No". I can't think of any situation where this would be a good idea.
Some quick responses to the other ideas:
Licensing: would be very easy to circumvent.
Single library for multiple servers: I'm sorry, but this is a very dumb solution to something that should be solved by syncing files from, for example, a
source control system
packaging / distribution system
build system
or a remote filesystem (NFS was mentioned).
Remote library from Google: nobody benefits from a slow, non-caching PHP library loaded over HTTP. This is not (asynchronous) JavaScript.
I think I covered all of them.
Now...
Your question was about 'including a file hosted elsewhere', which I think you should never attempt. However, there are uses for allow_url_include. This setting covers more than just http://; it also covers user-defined protocol handlers, and I believe even phar://. For those there are quite a few valid uses.
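To illustrate the user-defined-handler case, here is a minimal sketch of a stream wrapper whose URLs can be include()d; the wrapper name and the generated code are made up, and my understanding is that including through a wrapper registered as a URL wrapper only works when allow_url_include is On:

<?php
class GeneratedCodeStream
{
    private $data;
    private $pos = 0;

    public function stream_open($path, $mode, $options, &$opened_path)
    {
        // Build PHP source on the fly instead of reading it from disk.
        $this->data = "<?php return 'generated at ' . date('c');";
        return true;
    }

    public function stream_read($count)
    {
        $chunk = substr($this->data, $this->pos, $count);
        $this->pos += strlen($chunk);
        return $chunk;
    }

    public function stream_eof()
    {
        return $this->pos >= strlen($this->data);
    }

    public function stream_stat()
    {
        return array();
    }
}

stream_wrapper_register('gen', 'GeneratedCodeStream', STREAM_IS_URL);

// Should only work when allow_url_include is enabled.
echo include 'gen://anything';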
The only things I can think of are:
For a remote library, for example the Google APIs.
Alternatively, if you are something like Facebook, with devs in different locations, and the devs use includes from different stage servers (DB, dev, staging, production).
Again during dev, for a third-party program that is going through a lot of beta churn, so you always get the most recent build without having to fetch it yourself (for example, using a remote TinyMCE beta that you are building against and that will be finished before you reach production).
However, if the remote server goes down, it kills the app, so for most people it's not a good idea for production usage.
Here is one example that I can think of.
At my organization my division is responsible for both the intranet and internet sites. Because we are using two different servers, and in our case two different subdomains, I could see a case for having a single library that is used by both servers. This would allow both servers to use the same class. It wouldn't be a security problem because you have complete control over both servers, and it would be better than trying to maintain two versions of the same class.
Since you have control over the servers, and because having an external-facing server and an internal server requires separation (because of the firewall), this would be a better solution than trying to keep a copy of the same class in two locations.
Hmm...
[insert barrel scraping noise here]
...you could use this as a means of licensing software - in that the license key, etc. could be stored on the remote system (managed by the seller). By doing this, the seller would retain control of all the systems attempting to access the key.
However, as you say, the list of reasons this is a horrifying idea outweighs any positives in my mind.
I've been reading up on this subject for a while. Suddenly the day has come where this solution is a necessity, not just a dream.
Through my reading, I've seen the popular options being file-based, memcached, shared memory (mm), SQL table, and custom.
The original idea we thought of was using a ZFS or AFS share mounted on each of the application servers (LAMP boxes) and pointing the session.save_path php.ini setting at a directory on that mount.
I'd like to hear of success stories.
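In case it helps picture it, the shared-mount idea above boils down to something like this (the path is an assumption, and it could equally be set in php.ini rather than at runtime):

<?php
ini_set('session.save_path', '/mnt/shared-sessions');   // directory on the ZFS/AFS mount
session_start();

$_SESSION['seen'] = isset($_SESSION['seen']) ? $_SESSION['seen'] + 1 : 1;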
John Campbell's answer here should help
What is the best way to handle sessions for a PHP site on multiple hosts?
The point he makes about NOT using only Memcached is important.
Also, as I mentioned in that question, you may want to consider the session clustering that comes with Zend Platform - but there are significant licensing costs associated with that solution.
I think storing your sessions in a database (like MySQL or PostgreSQL) will involve the least headaches, especially if you already have a DB for whatever it is your app does.
Memcached may also help, since it can store data across multiple machines, but I don't have any experience with it.
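If you go the database route, a minimal sketch with session_set_save_handler() might look like this; the table layout, DSN and column names are assumptions, and a real handler would also need to think about locking and garbage-collection tuning:

<?php
// Requires PHP 5.3+ for the closures.
$pdo = new PDO('mysql:host=db.example.com;dbname=app', 'user', 'pass');

session_set_save_handler(
    function () { return true; },                       // open
    function () { return true; },                       // close
    function ($id) use ($pdo) {                         // read
        $stmt = $pdo->prepare('SELECT data FROM sessions WHERE id = ?');
        $stmt->execute(array($id));
        $row = $stmt->fetch(PDO::FETCH_ASSOC);
        return $row ? $row['data'] : '';
    },
    function ($id, $data) use ($pdo) {                  // write
        $stmt = $pdo->prepare(
            'REPLACE INTO sessions (id, data, updated_at) VALUES (?, ?, NOW())'
        );
        return $stmt->execute(array($id, $data));
    },
    function ($id) use ($pdo) {                         // destroy
        return $pdo->prepare('DELETE FROM sessions WHERE id = ?')
                   ->execute(array($id));
    },
    function ($maxlifetime) use ($pdo) {                // garbage collection
        return $pdo->prepare(
            'DELETE FROM sessions WHERE updated_at < NOW() - INTERVAL ? SECOND'
        )->execute(array($maxlifetime));
    }
);

session_start();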
I have been using file-based sessions on shared servers for over 5 years with no problems. We have some sessions that can become quite large (>10MB), and file-based works very well. Typically our shared servers store the session files for each site in chrooted directories, so only root can access them all. We have found this to be very reliable and have had no problems. Although you lose some of the functionality of database or memcached, there is a reason why it's the PHP default.
If you're looking into a Memcached solution for sessions - maybe you should check out Repcached. Should reduce any problems with losing sessions if servers get rebooted, etc.
about repcached
"repcached" is patch set which adds data replication feature to memcached 1.2.x.
Note: I haven't actually tried repcached yet, but thought it was worth looking into.
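For reference, pointing PHP's session handling at a memcached pool with the PECL memcache extension looks roughly like this (host names are assumptions; repcached would sit behind these endpoints to replicate data between the nodes):

<?php
ini_set('session.save_handler', 'memcache');
ini_set('session.save_path', 'tcp://cache1.example.com:11211, tcp://cache2.example.com:11211');
session_start();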
I'm a bit biased, but I'd recommend HTTP_Session2. (I'm working on this package.) While we support traditional session handling through files, we also support databases (MySQL, PostgreSQL, SQLite etc. through PEAR::MDB2) and also memcached.
Personally, we use the database handler and we serve up to 100,000 users/day with no major issues. Optimization-wise, I'd go memcached next, but the database is great for an intermediate fix which doesn't require you to bend over backwards. :-)
By the way, for more info on memcached, please check my answer on How to manage session variables in a web cluster?.
EDIT
Since you asked, here is an example (more in the API docs):
$options = array('memcache' => $memcache);
Where $memcache is an instance of PECL::Memcache, which is required. I know we lack a full example, and we'll improve on that. In the meantime, our source code has pretty good documentation inline, so check out the API documentation, for example.
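For context, the $memcache instance mentioned above would typically be created with the PECL Memcache extension along these lines (host and port are illustrative):

<?php
$memcache = new Memcache();
$memcache->addServer('127.0.0.1', 11211);

$options = array('memcache' => $memcache);   // then passed to HTTP_Session2 as shown above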