I have an Amazon EC2 instance (Linux).
I'd like you (yes, you) to be able to upload a PHP file and then serve it live on www.mydomain.com/yourname. I'd also like to be able to do this for numerous other people (www.mydomain.com/theirname).
I'm worried that you (or they, let's not point fingers) could do malicious things (purposefully or accidentally). For example, an infinite loop, reading/writing outside of one's root directory, taking the server down, running system commands, etc. This is what I would try if I wanted to be malicious.
Is there any way to set up PHP/apache/user permissions, or maybe search through their code before serving it, so that being malicious would at least be much, much harder?
Among other things, you'll definitely want to adjust your PHP.ini to include this:
disable_functions = exec,passthru,shell_exec,system,proc_open,popen,curl_exec,curl_multi_exec,parse_ini_file,show_source
This will prevent the execution of those functions within any PHP files that utilize this .ini
I would also enable open_basedir support to lock down users to within their own directories so they can't use something like:
require_once '../../another_user/index.php';
or
$notMyFile = file_get_contents('../../another_user/config.php');
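In php.ini the directive looks roughly like this (the path is just a placeholder for each user's own directory; multiple paths are separated with a colon):
open_basedir = /var/www/users/alice/:/tmp/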
There's no bulletproof way of doing this.
First of all, no syscalls.
Secondly, timeout for each script.
And you'll probably also want to keep an outside "quit button" in your hands, so you can pull the plug if you see something going wrong.
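The timeout part is a php.ini setting; something like this is a reasonable starting point (the exact values are arbitrary):
max_execution_time = 10
memory_limit = 32M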
PHP is a very large language, and having others run code on your server is a very difficult thing to do safely.
Have a look at Runkit Sandbox
Instantiating the Runkit_Sandbox class creates a new thread with its own scope and program stack. Using a set of options passed to the constructor, this environment may be restricted to a subset of what the primary interpreter can do and provide a safer environment for executing user supplied code.
http://php.net/manual/en/runkit.sandbox.php
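A rough sketch of how the constructor options might be used (this assumes the runkit extension is installed; the path and function list are placeholders):
$options = array(
    'open_basedir'      => '/var/www/users/bob/',   // confine file access to this user's directory
    'allow_url_fopen'   => 'false',
    'disable_functions' => 'exec,passthru,shell_exec,system,proc_open,popen',
);
$sandbox = new Runkit_Sandbox($options);
$sandbox->eval(file_get_contents('/var/www/users/bob/index.php'));   // run the user's code inside the sandbox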
Keep in mind that any resources you provide to a sandboxed environment can and eventually will be abused. If users should not be able to affect each other's results, for example, and you do provide users with a database, give each a different database in their sandbox with different credentials.
Related
I would like to upload files (PHP sites/applications) to a given directory and run them there, within my web server. However, even a simple shell_exec call can cause serious consequences.
All I can think of is setting the pages' directory outside public_html and setting the permissions automatically so that the user running a page doesn't have any rights outside it.
The other mediocre solution I've found so far is runkit_sandbox, which looks like quite an insecure solution, especially as it seems to be abandoned.
Is there really no way? Not even with full shell access (shell scripts)?
No. There are a virtually unlimited number of malicious behaviors that user-uploaded code can engage in, many of which closely resemble legitimate behavior (e.g, sending mail vs. sending spam; accessing external APIs vs. perpetrating a DDoS; running a command-line utility vs. running an exploit; managing files vs. deleting everything). There is generally no "canned" way to do this, and definitely none within PHP.
http://www.suphp.org/Home.html
Use open_basedir to restrict each user to a certain folder. That way the user's files can still be executed under his own ownership, but his scripts won't be able to see anything outside his folder.
Look here for more information:
http://www.randombugs.com/php/apache-dynamic-virtual-hosting-php-security.html
http://www.php.net/manual/en/ini.core.php#ini.open-basedir
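Per user, that typically ends up as a php_admin_value in the vhost; a sketch, assuming PHP runs as an Apache module (paths and names are placeholders):
<VirtualHost *:80>
    ServerName alice.mydomain.com
    DocumentRoot /var/www/users/alice/public_html
    php_admin_value open_basedir "/var/www/users/alice/:/tmp/"
</VirtualHost>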
I'm attempting to build an application in PHP to help me configure new websites.
New sites will always be based on a specific "codebase", containing all necessary web files.
I want my PHP script to copy those web files from one domain's webspace to another domain's webspace.
When I click a button, an empty webspace is populated with files from another domain.
Both domains are on the same Linux/Apache server.
But I'm running into permission/ownership issues when copying across domains.
As an experiment, I tried using shell and exec commands in PHP to perform actions as "root".
(I know this can open major security holes, so it's not my ideal method.)
But I still had similar permission issues and couldn't get that method to work either.
Maybe a CGI script is a better idea, but I'm not sure how to approach it.
Any advice is appreciated.
Or, if you know of a better resource for this type of information, please point me toward it.
I'm sure this sort of "website setup" application has been built before.
Thanks!
I'm also doing something like this. The only difference is that I'm not making copies of the core files; the system has one core, and only specific files are copied.
If you want to copy files, then you have to take the following into consideration:
An easy (less secure) way is to use the same user for all websites.
Otherwise (in case you want to provide different levels of access), you must create a different owner for each website and set the owner/group on the copied files (this has to be done by root).
For the new website setup:
Either the main domain runs as root, in which case it can carry out the new website creation itself, or, if you don't want your main domain to run as root, you can do the following:
Create a cron job (or a PHP script that runs in a loop under the CLI) that is executed by root. It checks some database record, every 2 minutes for example, and from your main domain you add a record with the setup info for the new hosted website (or just execute some script that gains root access and does it without cron).
The script that does the creation can be written in PHP, or in any other language you wish; it doesn't really matter as long as it runs with the correct access.
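For example, a rough sketch of the cron approach in PHP (the pending_sites table, column names, credentials and paths are made up for illustration):
<?php
// run by root from cron every couple of minutes, e.g. */2 * * * * php /opt/provision/create_sites.php
$pdo  = new PDO('mysql:host=localhost;dbname=hosting', 'provision', 'secret');
$rows = $pdo->query('SELECT id, name FROM pending_sites WHERE created = 0');
foreach ($rows as $row) {
    // only accept simple names so nothing unexpected ever reaches the shell
    if (!preg_match('/^[a-z0-9]{3,20}$/', $row['name'])) {
        continue;
    }
    exec('useradd --create-home --home-dir /var/www/users/' . $row['name'] . ' ' . $row['name']);
    // ...copy the codebase into place, chown it to the new user, write the vhost, reload Apache...
    $pdo->prepare('UPDATE pending_sites SET created = 1 WHERE id = ?')->execute(array($row['id']));
}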
In my case I'm using the same user, since they are all my websites. The disadvantage is that the OS won't enforce restrictions, my PHP code will (I'm losing the advantage of user/group permissions between different websites).
Notice that open_basedir can cause you some hassle; make sure you include the correct paths (or disable it).
Also, there are some minor differences between FastCGI and suPHP (I believe they won't cause you too much trouble).
Can someone offer some basic advice on dealing with web applications that interact with configuration files like httpd.conf, BIND zone files, etc.?
I understand that it's bad practice, in fact very dangerous to allow arbitrary execution of code without fully validating it and so on. But say you are tasked to write a small app that allows one to add vhosts to an apache configuration.
Do you have your code execute with full privileges, do you write future variables into a database and have a cron job (with full privileges) execute a script that pulls the vars from the database and throws them into a template config file, etc.
Some thoughts & contributions on this issue would be appreciated.
tl;dr - how can you securely write a web app that updates/creates entries in a config file like Apache's httpd.conf, etc.?
I'm not a Unix security guru, but some basic things to think of:
Make sure your web app runs as a specific user, and make sure that user has write access only to the files it is supposed to modify.
Do not allow arbitrary input to be written to the files; have strict forms where each field is validated to contain only what it should, like a-z and 0-9 only, etc. (see the sketch below).
Use HTTPS to access the site.
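For example, a minimal sketch of that whitelist idea for a vhost name field (the field name is hypothetical):
$host = isset($_POST['vhost_name']) ? $_POST['vhost_name'] : '';
// accept only lowercase letters, digits, dots and dashes, nothing else
if (!preg_match('/^[a-z0-9][a-z0-9.-]{1,62}$/', $host)) {
    die('Invalid host name');
}
// only now does $host go anywhere near a template or config file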
I'm sure there is more to come from the real gurus.
I understand that it's bad practice, in fact very dangerous to allow arbitrary execution of code without fully validating it and so on.
True.
But say you are tasked to write a small app that allows one to add vhosts to an apache configuration.
Unrelated to the first point. Totally unrelated. Indeed, why is the first point even in there?
Adding vhosts is a simple script. You simply write the script and get it to work. It requires extraordinary privileges. But it's not "arbitrary execution of code". And it will be "fully validated". (Whatever that means. You write it. You validate it.)
This is not a good choice for a "web app". Nor is it a good choice for a daemon. Indeed, it's really hard to see the connection between "add vhosts to an apache configuration" and "web applications that control daemons."
It's just a script that just updates a file. Nothing special. It requires privileges, so only a select few people can run it. Nothing special there either. Use sudo.
Do you have your code execute with full privileges,
Obviously. The script can't update the vhosts without some privileges.
Unless by "your code" you don't mean the script that updates the vhosts. If you mean something else, like a web page which allows someone to runt he script which updates the vhosts. In which case, you've conflated the script with the web app that runs the script.
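If you do go the web-page-runs-the-script route, the usual pattern is to let the unprivileged web user run exactly that one script through sudo; a rough sketch (the script path, form field and sudoers line are assumptions, not a complete recipe):
// sudoers (edited via visudo): www-data ALL = NOPASSWD: /usr/local/sbin/add_vhost
$host = isset($_POST['host']) ? $_POST['host'] : '';
if (preg_match('/^[a-z0-9][a-z0-9.-]{1,62}$/', $host)) {
    exec('sudo /usr/local/sbin/add_vhost ' . escapeshellarg($host), $output, $status);
}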
do you write future variables into a database and have a cron job (with full privileges) execute a script that pulls the vars from the database and throws them into a template config file, etc.
Sure. People do that. It seems terribly complex.
Use celery instead of rolling your own background processor. http://ask.github.com/celery/getting-started/introduction.html
For a simple web application, I'd like to be able to take advantage of several UNIX features that are stable, have a long history, and are well proven in production, rather than having to write my own code. Take users, for example. Rather than having an entire user, group, permission infrastructure in my webapp, I'd like to be able to simply piggyback on top of the equivalent features of UNIX.
Is there a PHP library that will allow me to register users, log them in, manage permissions, etc. ?
It's really not a good idea to fumble around with the user and permission settings of the actual system that is hosting your site. If you want to protect individual directories of your site, you're better off using .htaccess files. If OTOH you're working with virtual URLs, you'll have a hard time mapping the UNIX directory permissions to them anyway.
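For the .htaccess route, classic HTTP basic auth on a directory looks something like this (the paths are placeholders; the password file is created with the htpasswd utility):
AuthType Basic
AuthName "Restricted area"
AuthUserFile /var/www/.htpasswd
Require valid-user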
Based on your comment to deceze's answer, are you looking for something like PHP's filesystem functions?
Then there is system() and its related functions, which give access to Unix commands, but I'd recommend other ways of doing things if possible.
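If it's the system's user database you want to read, PHP's POSIX extension covers simple look-ups, e.g. (the account name is just a placeholder):
// needs the posix extension; read-only look-ups, no root required
$info = posix_getpwnam('alice');
if ($info !== false) {
    $group = posix_getgrgid($info['gid']);   // group record for the user's primary group
    echo $info['dir'], ' ', $group['name'];  // home directory and group name
}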
Edit: In response to the comments about needing user and group functionality:
Firstly, in case your plan is to let web users have access to the whole file system (or even just their regular login directories), I just want to advise against that: there are lots of security concerns (e.g. if someone else gets into a user's account, they could delete everything that user has access to).
The little experience I have with handling users in PHP was in some beginner level training. In that class, we had our users in an SQL database and used sessions. We didn't use SSL, but I'd advise some sort of crypto when passing passwords around.
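A bare-bones sketch of that database-plus-sessions approach (the users table, column names and credentials are made up; password_verify() needs PHP 5.5+):
session_start();
$pdo  = new PDO('mysql:host=localhost;dbname=app', 'app', 'secret');   // placeholder credentials
$stmt = $pdo->prepare('SELECT id, pass_hash FROM users WHERE username = ?');
$stmt->execute(array($_POST['username']));
$user = $stmt->fetch(PDO::FETCH_ASSOC);
if ($user && password_verify($_POST['password'], $user['pass_hash'])) {
    $_SESSION['user_id'] = $user['id'];   // logged in for the rest of the session
} else {
    http_response_code(403);
}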
If you're using Apache, it can handle the authentication for you. Other server software can probably do the same, but Apache is all I've ever worked with. I can't address whether Apache can handle sessions itself and don't have the time to research it right now.
If PHP or your web server is running with root rights, it should be no problem to use these functions.
For security reasons, though, I would strongly recommend reimplementing these things or using an existing PHP library instead!
It seems there are standard functions for interfacing with Kerberos or RADIUS in PHP.
These both have a long history and are well proven in production, while being separate from the system users.
In ASP.NET, I grew to love the Application and Cache stores. They're awesome. For the uninitiated, you can just throw your data-logic objects into them and, hey presto, you only need to query the database once for a bit of data.
By far one of the best ASP.NET features, IMO.
I've since ditched Windows for Linux, and therefore moved to PHP, Python and Ruby for web dev. I use PHP most because I develop several open source projects, all using PHP.
Needless to say, I've explored what PHP has to offer in terms of caching data-objects. So far I've played with:
Serializing to file (a pretty slow/expensive process)
Writing the data to file as JSON/XML/plaintext/etc (even slower for read ops)
Writing the data to file as pure PHP (the fastest read, but quite a convoluted write op)
I should stress now that I'm looking for a solution that doesn't rely on a third-party app (e.g. memcached), as the apps are installed in all sorts of scenarios, most of which don't have install rights (e.g. a cheap shared hosting account).
So, back to what I'm doing now: is persisting to file secure? Rule 1 in production server security has always been to disable file writing, but I really don't see any way PHP could cache if it couldn't write. Are there any tips and/or tricks to boost the security?
Is there another persist-to-file method that I'm forgetting?
Are there any better methods of caching in "limited" environments?
Serializing is quite safe and commonly used. There is an alternative, however, and that is to cache to memory. Check out memcached and APC; they're both free and highly performant. This article on different caching techniques in PHP might also be of interest.
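With APC, for example, caching an object is only a couple of lines (the key, the TTL and the expensive function are placeholders):
$data = apc_fetch('front_page_data', $hit);
if (!$hit) {
    $data = build_front_page_data();           // stand-in for your expensive operation
    apc_store('front_page_data', $data, 300);  // keep it for 5 minutes
}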
Re: Is there another persist-to-file method that I'm forgetting?
It's of limited utility but if you have a particularly beefy database query you could write the serialized object back out to an indexed database table. You'd still have the overhead of a database query, but it would be a simple select as opposed to the beefy query.
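Roughly like this, assuming an existing PDO connection and a hypothetical query_cache table:
// $pdo is an existing PDO connection; query_cache(cache_key, payload, expires_at) is made up for the example
$stmt = $pdo->prepare('SELECT payload FROM query_cache WHERE cache_key = ? AND expires_at > NOW()');
$stmt->execute(array('monthly_report'));
$row = $stmt->fetch(PDO::FETCH_ASSOC);
if ($row) {
    $report = unserialize($row['payload']);
} else {
    $report = run_expensive_report_query();   // stand-in for the "beefy" query
    $pdo->prepare('INSERT INTO query_cache (cache_key, payload, expires_at) VALUES (?, ?, NOW() + INTERVAL 10 MINUTE)')
        ->execute(array('monthly_report', serialize($report)));
}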
Re: Is persisting to file secure? (and "a cheap shared hosting account")
The sad fact is cheap shared hosting isn't secure. How much do you trust the 100, 500, or 1000 other people who have access to your server? For historic and (ironically) security reasons, shared hosting environments have PHP/Apache running as an unprivileged user (with PHP running as an Apache module). The security rationale here is that if the world-facing Apache process gets compromised, the exploiters only have access to an unprivileged account that can't screw with important system files.
The bad part is that whenever you write to a file using PHP, the owner of that file is the same unprivileged Apache user. This is true for every user on the system, which means anyone's scripts (running as that same Apache user) can read and write your files. The theoretical hackers in the above scenario would also have access to them.
There's also a persistent bad practice in PHP of giving directories and files permissions of 777 to enable the unprivileged Apache user to write files out, and then leaving the directory or file in that state. That gives anyone on the system read/write access.
Finally, you may think obscurity saves you. "There's no way they can know where my secret cache files are", but you'd be wrong. Shared hosting sets up users in the same group, and most default umasks will give the other users in your group read permission on files you create. SSH into your shared hosting account sometime, navigate up a directory, and you can usually start browsing through other users' files on the system. This can be used to sniff out writable files.
The solutions aren't pretty. Some hosts will offer a CGI Wrapper that lets you run PHP as a CGI. The benefit here is PHP will run as the owner of the script, which means it will run as you instead of the unprivileged user. Problem averted! New Problem! Traditional CGI is slow as molasses in February.
There is FastCGI, but FastCGI is finicky and requires constant tuning. Not many shared hosts offer it. If you find one that does, chances are they'll have APC enabled, and may even be able to provide a mechanism for memcached.
I had a similar problem, and thus wrote a solution: a memory cache written in PHP. It only requires the PHP build to support sockets. Other than that, it is a pure PHP solution and should run just fine on shared hosting.
http://code.google.com/p/php-object-cache/
What I always do if I have to be able to write is to ensure I'm not writing anywhere I have PHP code. Typically my directory structure looks something like this (it's varied between projects, but this is the general idea):
project/
    app/
    html/
        index.php
    data/
    cache/
app is not writable by the web server (neither is index.php, preferably). cache is writable and used for caching things such as parsed templates and objects. data is possibly writable, depending on need. That is, if the users upload data, it goes into data.
The web server gets pointed to project/html and whatever method is convenient is used to set up index.php as the script to run for every page in the project. You can use mod_rewrite in Apache, or content negotiation (my preference but often not possible), or whatever other method you like.
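With mod_rewrite, that's the usual front-controller rule set in project/html/.htaccess (a sketch, assuming AllowOverride permits it):
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^ index.php [QSA,L]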
All your real code lives in app, which is not directly accessible by the web server, but should be added to the PHP path.
This has worked quite well for me for several projects. I've even been able to get, for instance, Wikimedia to work with a modified version of this structure.
Oh... and I'd use serialize()/unserialize() to do the caching, although generating PHP code has a certain appeal. All the templating engines I know of generate PHP code to execute, making post-parse very fast.
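A minimal sketch of that serialize()-to-file approach, using the cache/ directory from the layout above (the 10-minute TTL is arbitrary):
// cache/ sits next to app/ and html/ in the layout above and must be writable by the web server
function cache_get($key, $ttl = 600) {
    $file = __DIR__ . '/../cache/' . md5($key) . '.cache';
    if (is_file($file) && filemtime($file) > time() - $ttl) {
        return unserialize(file_get_contents($file));
    }
    return false;
}

function cache_set($key, $value) {
    file_put_contents(__DIR__ . '/../cache/' . md5($key) . '.cache', serialize($value), LOCK_EX);
}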
If you have access to the database query cache (i.e. MySQL's), you could go with serializing your objects and storing them in the DB. The database will take care of holding the query results in memory, so that should be pretty fast.
You don't spell out -why- you're trying to cache objects. Are you trying to speed up a slow database query, work around expensive object instantiation, avoid repeated generation of complex page, maintain application state or are you just compulsively storing away objects in case of a long winter?
The best solution, given the atrocious limitations of most low-cost shared hosting, is going to depend on what you're trying to accomplish. Going for bottom-of-the-barrel shared hosting means you have to accept that you won't be working with the best tools. The numbers are hard to quantify, but there's a trade-off between hosting costs, site performance and developer time (i.e. fast, cheap or easy).
It's in theory possible to store objects in sessions. That might get you past the file-writing-disabled problem. Additionally, you could store the session in a MySQL memory-backed table to speed up the query.
Some hosting places may have APC compiled in. That would allow you to store the objects in memory.