I'm trying to understand the actual risks of allow_url_include, or find some practical alternative for this scenario:
Server A has a php-based web page, which fetches data from a number of remote servers (Servers B to J) about their current status. Server A then parses the data returned and displays a summary. The code to get the data and send it back is a PHP script which resides on Servers B to J, and as more servers are added, is becoming a pain to keep up to date - whenever a new feature is required on the summary page, that file must be updated on every server to match what the summary code expects to be sent back.
One obvious solution is to include the code to get data, so that the code on Servers B to J looks like:
include("http://ServerA/stats/getData.php.source");
echo base64_encode(serialize(getData()));
But pretty much every SO question regarding allow_url_include just says "Don't do it". I've struggled to find specific risks associated with this, and how to mitigate them.
The goal here is to have all the code on Server A, so that maintenance / feature additions become much easier to handle. An NFS mount might be practical, but seems a little excessive for a single file. Writing a script on Server A that uses ssh to push new code to each server is also a possibility, but it would slow down the development cycle.
There are no other developers on Servers B to J, so is allow_url_include really such a risk? What else could it do?
If the application layer and the system layer are BOTH secured to an extent that you believe no one can get in, then there is nothing wrong with enabling something like allow_url_include.
This can be extremely complicated, as you will need a layer on the outside of your application monitoring incoming requests. However, it is not impossible!
Other things to help:
PhpSecInfo
theDevShed
Unless you are 100% sure you can secure your server to a more than reasonable level, I would suggest using an alternative such as cURL instead.
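A minimal sketch of that cURL alternative for the setup in the question, inverting the flow so Server A pulls raw data and all the parsing/feature logic stays in one place; the URLs, token header, and JSON payload are assumptions, not anything from the original posts:

<?php
// Minimal sketch: Server A polls each remote server for raw status data,
// so allow_url_include can stay off everywhere. URLs/token are hypothetical.
$sharedSecret = 'change-me';
$servers = ['https://serverB/stats.php', 'https://serverC/stats.php'];
$summary = [];

foreach ($servers as $url) {
    $ch = curl_init($url);
    curl_setopt_array($ch, [
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_TIMEOUT        => 5,
        // A shared secret so only Server A can query the endpoint.
        CURLOPT_HTTPHEADER     => ['X-Auth-Token: ' . $sharedSecret],
    ]);
    $body = curl_exec($ch);
    curl_close($ch);

    if ($body !== false) {
        // JSON rather than serialize(): unserialize() of untrusted data
        // is itself a well-known object-injection risk.
        $summary[$url] = json_decode($body, true);
    }
}

The remote endpoint then only ever needs to emit its current status, so it should rarely need updating.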
I've a rather odd requirement: I want users to be able to verify the live source code of a web app before they input data or extract data from it.
Or, at a higher level, the users need to be reasonably assured of what is being done (and not done) in the back end. Of course, if you inspect the stream from a process external to the web server, this becomes a useless exercise. But I only need a reasonable level of assurance.
What are the options? I'm willing to use pretty much any server side language/platform, provided it serves the purpose better than the alternatives. It cannot be a method that can be used to easily spoof the source code -- there has to be some assurance that the code is live and not a separate copy (something equivalent to making /var/www/app and apache conf world-readable, but not exactly).
Update: this should be read-only
Giving them access to your Git sources is simple and straightforward. If you cannot convince them that you deploy what you show, you lose anyway. There is no way to prove that with a more convoluted system either (short of giving them write access!)
No server-side solution will do. If the users don't trust the server to begin with then showing them some code will not convince them that the code is actually what processes their input, or that no one is listening in on the traffic or on the server-side process.
If the server is not a trusted platform as far as the users are concerned, then you will have to execute the code somewhere the users do trust. On a trusted 3rd party, or even better on the user's machine itself. Be that as a downloadable module they can inspect and run themselves (something interpreted, most likely, like Python or node) or even better: in their browser.
Let's say your writing a PHP application that will be hosted in a load-balanced/multi-server setup. What are the things you need to know in order to ensure smooth operation? Right now the only thing I think will be an issue is PHP sessions (i.e., you must use a custom database handler for it). Anything else?
Let's turn this into an answer:
In my experience, the overwhelming majority of PHP applications are not, or not only, constrained by PHP horsepower on the web server, but at least as much by the backing store, i.e. the database and/or files.
So load balancing a PHP application without careful analysis bears the potential to make things worse: you hit the weakest link in the chain with more and more load.
So the first, and IMHO most important, "thing to know when writing a web app hosted in a load-balanced server" is the load pattern and its potential for balancing. If your app performs badly, you load-balance it onto more servers, and then find out you now have more servers waiting for the DB, you are in trouble.
Here is an out-of-the-blue checklist; please regard it as a brainstorm (or a brainfart) only:
First: Are you really CPU-bound?
Which pages are hit most (see your log)
For the top N of these (with a suitable N) check the processing pattern: Where do the CPU cycles go?
What would be the side effects of making sessions, uploads, file storage (add whatever you use) shared and would it be offset by the load balancing?
Comments welcome; I am sure I have not even scratched the surface!
Edit
Just thought of something that bit me once in this context: resource locking. Brace yourself for a higher degree of concurrency if you go multi-server.
File uploads/downloads could also be an issue; you would probably need them to be visible on all servers. The same goes for sessions, as sketched below.
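As a concrete illustration of the shared-session point, here is a minimal sketch of a database-backed session handler, assuming PHP 8 and a hypothetical MySQL sessions table (id CHAR(32) PRIMARY KEY, data TEXT, updated INT); with this, any balanced node can serve any user:

<?php
// Shared session store: every web server reads/writes the same table.
class DbSessionHandler implements SessionHandlerInterface
{
    public function __construct(private PDO $pdo) {}

    public function open(string $path, string $name): bool { return true; }
    public function close(): bool { return true; }

    public function read(string $id): string
    {
        $stmt = $this->pdo->prepare('SELECT data FROM sessions WHERE id = ?');
        $stmt->execute([$id]);
        return (string) $stmt->fetchColumn();
    }

    public function write(string $id, string $data): bool
    {
        // REPLACE INTO is MySQL-flavoured; adjust for other databases.
        $stmt = $this->pdo->prepare(
            'REPLACE INTO sessions (id, data, updated) VALUES (?, ?, ?)');
        return $stmt->execute([$id, $data, time()]);
    }

    public function destroy(string $id): bool
    {
        return $this->pdo->prepare('DELETE FROM sessions WHERE id = ?')
                         ->execute([$id]);
    }

    public function gc(int $maxLifetime): int|false
    {
        $stmt = $this->pdo->prepare('DELETE FROM sessions WHERE updated < ?');
        $stmt->execute([time() - $maxLifetime]);
        return $stmt->rowCount();
    }
}

$pdo = new PDO('mysql:host=db;dbname=app', 'user', 'pass'); // hypothetical DSN
session_set_save_handler(new DbSessionHandler($pdo), true);
session_start();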
I'm currently building a website, using PHP, and looking into securing the website fully. Currently, and in the future, I don't plan on using SQL, or even user-submitted input anywhere on the website - the PHP pages are basically in place for ease of piecing together several different HTML fragments into a full page, using requires.
What type of vulnerabilities, if any, should I be aware of to protect against?
There are no vulnerabilities in the situation you've outlined.
If you are using any query string variables to load pages, they may need to be secured. For example: article.php?page=php-page-security.
Otherwise just make sure that your server software is updated regularly to the latest versions, and access to the web server is properly secured. It sounds like your code is pretty basic and you aren't doing any processing in PHP, so you should be fine.
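One hedged sketch of what "secured" could mean for that article.php?page= case: map the parameter onto a fixed whitelist so it never touches the filesystem directly (the page names here are hypothetical).

<?php
// A whitelist keeps ?page= from ever reaching the filesystem directly.
$pages = [
    'home'              => 'fragments/home.html',
    'php-page-security' => 'fragments/php-page-security.html',
];

$page = $_GET['page'] ?? 'home';

if (!isset($pages[$page])) {
    http_response_code(404);
    exit('Page not found');
}

require $pages[$page];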
This is a huge topic that can't be answered in a single post. Some tips:
Secure physical access to your web server (your hosting provider should handle this)
Minimize remote access to the server. Set up a firewall, use proper passwords, and run updates regularly
Secure your code (PHP and JavaScript). Make sure to "clean" any query strings you might process (see the sketch after this list) and never use eval. Consider using a framework to simplify this step.
Keep server logs and review them regularly for mischief.
This is just a jumping-off point. A Google search for "web application security" will turn up troves of information. Good luck!
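As a small sketch of the "clean your query strings" item above, PHP's filter extension can validate input before it is used (the id parameter is hypothetical):

<?php
// Validate-then-use: reject anything that is not the expected shape.
$id = filter_input(INPUT_GET, 'id', FILTER_VALIDATE_INT, [
    'options' => ['min_range' => 1],
]);

if ($id === false || $id === null) {
    http_response_code(400);
    exit('Bad request');
}

// $id is now guaranteed to be a positive integer.
echo "Showing article $id";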
Possible exploits lie in the overall server security.
As you use PHP in such a simple manner, there's a risk that you do not know it well enough and might overlook some hole: a forgotten user-input option, or file access rights that would allow a bad guy to upload his own PHP to the server.
PHP offers too much for the simple task of including files. More capabilities == more risk.
I'd use server-side includes for the sake of assembling several files into one web page, and just disable PHP entirely: faster and more secure.
You should be sure that your software (e.g. web server, operating system, PHP) is up to date, with the latest security patches and updates. You can also hide PHP (read the official guide or [search Google](http://www.google.com/search?q=hiding+php)).
By combining all the advice you get from the answers here, your application will be something more than perfectly safe.
As @Toast said, you had better block incoming traffic and allow only port 80 by using a firewall (Netfilter/iptables on Linux), unless you want to enable additional services such as FTP.
And if you want the data travelling between the server and the client to be safe, HTTPS is the best solution.
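On the "hide PHP" point, a tiny hedged sketch: the X-Powered-By fingerprint header can be dropped at runtime, though setting expose_php = Off in php.ini is the more thorough fix:

<?php
// Drop PHP's fingerprint header before any output is sent.
// (expose_php = Off in php.ini removes it globally instead.)
header_remove('X-Powered-By');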
If you're not basing the "piecing together" on any kind of user-provided data, and not including any user-provided data into the page, then you're about as vulnerable as a plain static .html file.
That means you're not doing:
include($_GET['pageName']); // hello total server compromise
or
echo "Hello, ", $_GET['username']; // hello cross-site-scripting!
and the like.
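For contrast, hedged safe counterparts to those two snippets (the whitelist entries are hypothetical):

<?php
// Whitelisted include: the request can only select from known files.
$allowed = ['home' => 'home.php', 'about' => 'about.php'];
$file = $allowed[$_GET['pageName'] ?? ''] ?? $allowed['home'];
include $file;

// Escaped output: user data is rendered inert before echoing.
echo 'Hello, ', htmlspecialchars($_GET['username'] ?? '', ENT_QUOTES, 'UTF-8');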
I asked a recent question regarding the use of readfile() for remotely executing PHP, but maybe I'd be better off setting out the problem to see if I'm thinking the wrong way about things, so here goes:
I have a PHP website that requires users to login, includes lots of forms, database connections and makes use of $_SESSION variables to keep track of various things
I have a potential client who would like to use the functionality of my website, but on their own server, controlled by them. They would probably want to restyle the website using content and CSS files local to their server, but that's a problem for later
I don't want to show them my PHP code, since that's the value of what I'd be providing.
I had thought to do this with calls to include() from the client's server to mine, which at least keeps variable scope intact, but many sites (and the PHP docs) seem to recommend readfile(), file_get_contents() or similar. Ideally I'd like to have a simple wrapper file on the client's server for each "real" one on my server.
Any suggestions as to how I might accomplish what I need?
Thanks,
ColmF
As suggested, comment posted as an answer & modified a touch
PHP is an interpreted language and as such 'reads' the files and parses them. Yes, it can store cached bytecode in certain cases, but it's not like the higher-level languages that compile and work in bytecode. Which means that the PHP 'compiler' requires your actual source code to work. Check out zend.com/en/products/guard, which might do what you want, though I believe it means your client has to use Zend Server.
Failing that, sign a contract with the company that includes clauses about not reusing your code, etc. That's your best protection in this case. You should also be careful though: if you're using anything under an 'open source' license, your entire app may be considered open source, and thus this is all moot.
This is not an unusual practice for many companies. I have produced software I'm particularly proud of, and a company wants to use it. As they believe in their own information security, for either 'personal' reasons or because they have to comply with a standard such as PCI, there are times my application must run in their environments. I have offered my products as 'web services' where they query my servers with data and receive responses. In that case my source is completely protected, as this is no different from any other closed API. In every case I have licensed the copy to the client with provisions that they are not allowed to modify nor distribute it. This is a legally binding contract and completely expected from the client's side of things. Of course there were provisions that I would provide support, etc., but that's neither here nor there.
Short answers:
Legal agreement, likely your best bet from everyone's point of view
Zend guard like product, never used it so I can't vouch for it
Private API but this won't really work for you as the client needs to host it
Good luck!
If they want it wholly contained on their server then your best bet is a legal solution not a technical one.
You license the software to them and you make sure the contract states the intellectual property belongs to you and it cannot be copied/distributed etc without prior permission (obviously you'll need some better legalese than that, but you get the idea).
Rather than remote execution, I suggest you use a PHP source protection system, such as Zend Guard, ionCube or sourceguardian.
http://www.zend.com/en/products/guard/
http://www.ioncube.com/
http://www.sourceguardian.com/
Basically, you're looking for a way to proxy your application out to a remote server (i.e. your clients'). To use something like readfile() on the client's site is fine, but you're still going to need multiple scripts on their end. In essence, readfile scrapes what's available at a particular file path or URL and pipes it to the end user. So if I were to do readfile('google.com'), it would output the HTML source of Google's homepage.
Assuming you don't just want to have a dummy form on your clients' sites, you're going to need to have some code hanging out on their end. The code is going to have to intercept the form submissions (so you'll need a URL parameter on the page you're scraping with readfile to tell your code that the form submission URL is your client's site and not your own). This page (the form submission handler page) will need to make calls back to your own site. Think something like this:
readfile("https://your.site/whatever?{$_SERVER['QUERY_STRING']}");
Your site is then going to process the response and then pass everything back to your clients' sites.
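To make that concrete, here is a hedged sketch of such a wrapper on the client's server; the upstream URL is the same placeholder as above, and the cookie relaying is an assumption about how you would keep $_SESSION working across the hop:

<?php
// Thin proxy on the client's server: forward the user's POST to the real
// application and relay the response, including session cookies.
$ch = curl_init('https://your.site/whatever?' . ($_SERVER['QUERY_STRING'] ?? ''));
curl_setopt_array($ch, [
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_POST           => true,
    CURLOPT_POSTFIELDS     => http_build_query($_POST),
    // Pass the browser's cookies through so $_SESSION works upstream.
    CURLOPT_HTTPHEADER     => ['Cookie: ' . ($_SERVER['HTTP_COOKIE'] ?? '')],
    CURLOPT_HEADER         => true, // capture Set-Cookie from upstream
]);
$response = curl_exec($ch);

if ($response === false) {
    http_response_code(502);
    exit('Upstream unavailable');
}

$headerSize = curl_getinfo($ch, CURLINFO_HEADER_SIZE);
curl_close($ch);

// Relay any Set-Cookie headers, then the body.
foreach (explode("\r\n", substr($response, 0, $headerSize)) as $line) {
    if (stripos($line, 'Set-Cookie:') === 0) {
        header($line, false);
    }
}
echo substr($response, $headerSize);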
Hopefully I've gotten you on the right path. Let me know if I was unclear; I realize this is a lot of info.
I think you're going to have a hard time with this unless you want some kind of funny wrapper that does cURL-type requests to your server, especially when it comes to handling things like sessions and cookies.
Are you sure a PHP obfuscator wouldn't be sufficient for what you are doing?
Instead of hosting it yourself, why not do what most php applications do and simply distribute the program to your client with an auto-update feature? Hosting it yourself is complicated, from management of websites to who is paying for the hosting.
If you don't want it to be distributed, then find a pre-written license that allows you to do this. If you can't find one then it's time to talk to a lawyer.
You can't stop them from seeing your code. You can make it very hard for them to understand your code, which is a good second best. See our SD PHP Obfuscator for a tool that will scramble the identifiers and the whitespacing in the code, making it much more difficult to understand.
I'm trying to find an efficient way to watch the server log on a web page. I don't mind building an app; I just can't work out the best way to do it.
Is there a way to keep a stream open to a file with PHP and to the browser, or will it have to be done by polling the file every x seconds?
Thanks in advance,
Shadi
The best solution is definitely AJAX in some capacity. The only way to have the server "push" to you the way you describe (maintaining an open stream) would require the HTTP connection to remain open, which would ultimately trigger timeouts and consume a lot of resources. I would look into the CometD library. The downside to this is that I believe it depends on Java, although the site does mention Perl, Python and "other languages." In the worst case, you could use a specific Jetty implementation just for log monitoring on a specific port. Regardless, that framework would most likely be your best bet.
Any web-based chat mechanism essentially uses a push architecture and would be good to look at for some inspiration. In this case, instead of users creating messages that are fired to other users, the server creates the events (when a log message is generated). Check out this article on Facebook chat for some insight into how they do it. Google chat might be worth looking into if you can find some stuff on the architecture.
For the actual logging, I'm not sure if you are in need of help for that, but log4php which is currently under incubation might be a good place to start as it provides you with a configuration that can simultaneously log to an arbitrary number of "loggers" like database, file, socket, etc. You could likely find one that would allow you to tie it into whatever push framework you elect to use.
Good luck!
Remember that the web model is essentially stateless (disconnected). With that in mind, when a client submits a request, the server processes the request and then sends a response accordingly. You can keep track of the client's actions using cookies and/or sessions, but the resources reserved for a request are released after the response is sent back.
I think that the best way to meet your goal is to develop a web service that checks the status of the log and fetches the diff (if any). Your app may consist of a web page with a div that will display the diff from the web service.
A script with a timer will trigger the call to the web service.
I will try to do something like this in a few weeks, and I will post the entire solution on the moropo blog (Spanish). You can ask for a post translation in the comments.
The best way to do it is to use AJAX to pull the file content every x seconds, giving the illusion of real time.
If you do want real time, you can use an XMPP server, but from what I can see, the first solution is perfectly sufficient and doesn't require a lot of work.
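A minimal sketch of the endpoint such an AJAX poll could hit, where the client remembers the byte offset it last saw (the log path and parameter name are hypothetical):

<?php
// tail.php (hypothetical): return anything appended to the log since the
// offset the client last saw, plus the new offset for its next poll.
$logFile = '/var/log/myapp.log'; // hypothetical path
$offset  = max(0, (int) ($_GET['offset'] ?? 0));

$size = filesize($logFile);
$new  = '';

if ($size > $offset) {
    $fh = fopen($logFile, 'rb');
    fseek($fh, $offset);
    $new = stream_get_contents($fh);
    fclose($fh);
}

header('Content-Type: application/json');
echo json_encode(['offset' => $size, 'lines' => $new]);

The browser side then calls tail.php?offset=N every x seconds and appends whatever comes back to the page.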
Try wonlog.
https://www.npmjs.com/package/wonlog
You can stream multiple log files to a web browser.