E.g. Is it more secure to use mod_php instead of php-cgi?
Or is it more secure to use mod_perl instead of traditional cgi-scripts?
I'm mainly interested in security concerns, but speed might be an issue if there are significant differences.
Security in what sense? Either way it really depends on what script is running and how well it is written. Too many scripts these days are half-assed and don't do proper input validation.
I personally prefer FastCGI to mod_php, since if a FastCGI process dies a new one gets spawned, whereas I have seen mod_php take down all of Apache.
As for security, with FastCGI you can technically run the PHP process under a different user from the web server's default user.
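For example, a tiny script like the following (a rough sketch only; it assumes the POSIX extension is available) shows which user the PHP process is actually running as, which is a quick way to confirm a FastCGI setup really is using a separate account:
<?php
// Report the user the PHP process runs as (requires the POSIX extension).
$uid  = posix_geteuid();
$user = posix_getpwuid($uid);
echo "PHP runs as: " . $user['name'] . " (uid $uid)\n";
// For comparison, get_current_user() reports the owner of the script file.
echo "Script owner: " . get_current_user() . "\n";
?>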
On a separate note, if you are using Apache's newer worker (threaded) MPM you will want to make sure that you are not using mod_php, as some of the extensions are not thread safe and will cause race conditions.
If you run your own server, go the module route; it's somewhat faster.
If you're on a shared server the decision has already been made for you, usually on the CGI side. The reason for this is filesystem permissions. PHP as a module runs with the permissions of the HTTP server (usually 'apache'), and unless you can chown your scripts to that user you have to chmod them to be world-readable (often the blanket 777). This means, alas, that your server neighbour can take a look at them; think of where you store the database access password. Most shared servers have solved this using things like phpsuexec, which run scripts with the permissions of the script owner, so you can (must) have your code chmoded to 644. Phpsuexec only works with PHP as CGI. That's more or less all; it's just a local-machine concern and makes no difference to the world at large.
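A quick way to check this from code (a minimal sketch; config.php is just a hypothetical file holding credentials) is to look at the permission bits:
<?php
// Warn if the credentials file is readable by everyone else on the box.
$file = __DIR__ . '/config.php';      // hypothetical file containing the DB password
$mode = fileperms($file) & 0777;      // keep only the permission bits
if ($mode & 0004) {
    echo "Warning: config.php is world-readable (mode " . decoct($mode) . ")\n";
} else {
    echo "config.php mode is " . decoct($mode) . ", not world-readable\n";
}
?>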
Most security holes occur due to lousy programming in the script itself, so it's really kind of moot whether they are run as CGI or in modules. That said, Apache modules can potentially crash the whole web server (especially when using a threaded MPM), and mod_php is kind of famous for it.
CGI will be slower, but nowadays there are solutions to that, mainly FastCGI and friends.
What is your threat model?
From the PHP install.txt doc for PHP 5.2.6:
Server modules provide significantly better performance and additional
functionality compared to the CGI binary.
For IIS/PWS:
Warning
By using the CGI setup, your server is open to several possible
attacks. Please read our CGI security section to learn how to defend
yourself from those attacks.
A module such as mod_php, or FastCGI, is far faster than plain CGI; just don't use plain CGI. As others have said, the PHP program itself is the greatest security threat, but ignoring that there is one other consideration on shared hosts.
If your script is on a shared host with other PHP programs and the host is not running in safe mode, then it is likely that all server processes are running as the same user. This could mean that any other PHP script can read your own, including database passwords. So be sure to investigate the server configuration to make sure your code is not readable to others.
Even if you control your own hosting, keep in mind that another hacked web application on the server could be a conduit into others.
Using a built-in module is definitely going to be faster than using CGI. The security implications depend on the configuration. In the default configuration they are pretty much the same, but CGI allows some more secure configurations that built-in modules can't provide, especially in the context of shared hosting. What exactly do you want to secure yourself against?
I have users upload PHP files to my server (I know this is a security risk, but it must be done).
I might have to execute the PHP scripts on the server.
So I was wondering, is there a way I can deny those PHP scripts access to any files and any directories outside of their current folder? This would make it secure enough for me to use.
Thanks.
Sigh... "but it must be done" - says whom?
Some options might exist: Looking at the comments on the documentation for the (now-removed) PHP Safe Mode, I found a link to suPHP which "is a tool for executing PHP scripts with the permissions of their owners". This would require local UNIX accounts for each user though - which I'm not sure is possible in your situation.
A real solution would need to go much deeper. I was once on a website that allowed you to compile and run applications in just about any language, as part of an exam. By compiling some "interesting" programs, I was able to determine that I was actually running in a QEMU VM "jail", and they were somehow funneling IO to/from the VM via my HTTP connection.
But the right answer is probably, of course, don't do it. With more information as to what exactly you're designing, we might be able to offer more sane alternatives.
You could set up a chrooted environment for these scripts to run in. Not absolutely waterproof, but a lot better than potentially giving access to your entire filesystem.
This article contains lots of info on how to get certain services running correctly inside your chrooted environment, it also contains a link to a best practices document concerning the correct usage of chroot.
Now, PHP also has a chroot() function; it might be possible to put together some kind of "sandbox" that's "good enough" for your purposes using that function.
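Something along these lines, for instance (a rough sketch only; chroot() requires root privileges and the CLI/CGI SAPI, and the jail directory and uploaded script path are hypothetical):
<?php
// Confine the rest of this process to a prepared jail directory before running user code.
$jail = '/var/sandbox';                  // hypothetical jail, populated in advance
if (!chroot($jail)) {
    die("could not chroot to $jail\n");
}
chdir('/');                              // make sure the working directory is inside the jail
// From here on, absolute paths resolve inside /var/sandbox only.
include '/uploads/script.php';           // hypothetical user-submitted file inside the jail
?>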
Anyway, although chroot can help tremendously to protect your system during the execution of foreign code, you should remain very careful, and the basic rule is to provide as few services and facilities inside the chrooted environment as possible. In that context, the SO article pointed to by Emilio Gort contains a (very long) list of exploitable functions; probably most or all of these should be blocked using the disable_functions setting in php.ini.
I use APC for opcode caching together with a 4-CPU licensed LiteSpeed.
What is the best PHP handler for this situation, in terms of performance first and security second?
Is it suPHP / DSO / FCGI / CGI? (I read that DSO can leave a hole if one of the scripts has a bug.)
myusername#mybox [~]# /usr/local/cpanel/bin/rebuild_phpconf --available
Available handlers: suphp dso fcgi cgi none
PHP4 SAPI: cgi
PHP5 SAPI: not installed
SUEXEC: available
RUID2: not available
Thank you.
There is a great article on this on Servint Blog: http://blog.servint.net/2011/10/28/the-tech-bench-all-about-php-handlers/
Be sure to visit their site; it also has comparison charts.
List of PHP handlers
DSO
Also known as mod_php. While this is an older configuration, its main benefit is speed. It is generally considered the fastest handler. It runs PHP directly from Apache, without having to pass it to a separate service for processing. This means that PHP scripts will run as the Apache user, which by default on our servers is the user ‘nobody’.
DSO has two things to consider before switching to it. First, any files that need to be written to by the webserver have to have write permissions for the ‘nobody’ user, and any file created by the webserver will be owned by ‘nobody’. Websites that need to upload files through PHP may run into permissions issues, since the settings aren’t as clear cut as the other handlers. This is common with WordPress users that upload files through the WordPress interface or utilize the auto-update feature.
A special note about the above: it is a common misconception that files need to have the 777 mode to be writable. This is not true, and generally a bad idea since it means the files are writable by anyone. To make a file writable by the webserver, the highest permissions needed should be 664 and owned by ‘user’ and group ‘nobody’. For directories this would be 775 and user:nobody. This should be enough to allow the webserver to write to the location without making it writable by everyone and introducing a potentially critical security vulnerability.
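As an illustration only (a minimal sketch run as the account owner; the paths are placeholders), setting those modes from PHP looks like this:
<?php
// Make an upload directory group-writable without opening it up to everyone (no 777 needed).
$dir  = __DIR__ . '/uploads';            // placeholder directory
$file = $dir . '/example.jpg';           // placeholder file
chmod($dir, 0775);   // user: rwx, group (e.g. 'nobody'): rwx, other: r-x
chmod($file, 0664);  // user: rw, group: rw, other: r
?>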
Also, know that DSO offers a different type of security as opposed to suPHP/FastCGI. Since the server runs it as ‘nobody’ anyone that would be able to exploit a file on your server to gain elevated privileges will have access to any other file that the webserver can directly access. What this simply means is that an intruder could have access to files across multiple accounts, but only those files that are owned by ‘nobody’. Please see the Security section below for more information.
The main advantage of DSO is speed and resource usage. With opcode caching extensions like eAccelerator or APC installed, DSO will run significantly faster and at a lower footprint than the other handlers. It is also the default setting on our servers.
A good rule of thumb is that DSO is best for servers that are running one or two large, high-performance websites where efficiency and speed are a concern.
CGI
CGI handler will run PHP as a CGI module as opposed to an Apache module. The CGI method is intended as a fallback handler for when DSO is not available. This method is neither fast nor secure, regardless of whether or not suEXEC is enabled. ServInt therefore does not recommend using CGI.
suPHP
suPHP runs PHP as a separate service that then passes the compiled code back to Apache for serving. It is technically a CGI module, however it is much different than the CGI handler mentioned above. The main difference, and the advantage of having suPHP, is that with suEXEC enabled it runs the PHP scripts as the user calling them, rather than as the ‘nobody’ user. For example, if an account is owned by the user Spock, all instances of Apache serving that user’s website will run as user Spock. The advantage here is that it makes tracking down websites using excessive resources easier.
Another advantage of running the process as the user is that it simplifies the overall permissions scheme. The webserver will be able to write to files that are owned by the user and not just ‘nobody’. What this means in the long run is that auto-update/install features in many CMS solutions will work more easily, and the general permissions of your file/directories is more clear-cut: 644 and owned by user and group user for files, and similarly 755 and user:user for directories.
The security difference between suPHP and DSO is that suPHP confines an intruder to the particular user that he/she has affected. The exploit can’t cross accounts, however it can affect every single file the user owns as opposed to just the files writable by the webserver. Please see the Security section below for more information.
The main disadvantage of suPHP is speed and CPU load. suPHP runs much slower than the other handlers, and you will see a significant increase in your overall CPU load when switched to it. suPHP also cannot work with an opcode caching extension such as eAccelerator or APC, which is part of the reason for the increase in CPU usage. Because of this, it is recommended that you implement a caching plugin if you use this with a CMS, such as W3 Total Cache for WordPress. This handler is recommended more for the smaller reseller client. This is because it locks down exploits to one affected account, and the CPU load increase will not be that significant with less busy sites or a small number of individual cPanel accounts.
FastCGI
FastCGI (aka mod_fcgid) is similar to suPHP in that it is a separate process that compiles the PHP, which is then sent back to Apache. It is also a CGI module, which means that with suEXEC enabled PHP runs the process as the user. This gives you the same permissions advantages as suPHP, mentioned above. The difference between the two, however, is how they control the PHP processes. suPHP starts a new process each time a PHP script needs to be compiled, whereas FastCGI keeps persistent connections open that can be recalled by the same PHP process. This allows you to use an opcode caching extension such as eAccelerator or APC with it to increase performance.
The drawback is that FastCGI has high memory usage. Because the persistent connections mentioned above are stored in memory, you are going to see much more of your available memory taken up by FastCGI.
A good analogy of FastCGI and suPHP is to think of a book with several chapters. With suPHP, this book will have no table of contents and no index, so each time you want to find something you have to look through the book to get it. This takes time (increased CPU usage) and is slow. With FastCGI, this book has an extensive index and table of contents, so you can quickly and easily find what you are looking for; however these additional pages make the book much thicker (increased memory usage).
While I do know that system calls and security don't go hand in hand, there is a project for which I do need it. I'm writing a small code checker and I need to compile and execute the user-submitted code to test it against my test cases.
Basically I want to run the code in a sandbox, so that it can't touch any files outside of the temporary directory and any files that it creates can't be accessed by the outside world.
Recently I came across an exploit with which a user could create a file, say shell.php, with the following contents.
<?php
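// Executes whatever shell command is passed in the 'x' query parameter and prints its output.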
echo system($_GET['x']);
?>
This gives the attacker a remote shell, and since the owner of the file is apache, the attacker can basically move around my entire /var/www, where MySQL passwords were stored along with other configuration information.
While I am aware of threats like SQL injection, and have sanitized the user input before any operations that involve the DB, I have no idea how to set up the sandbox. What techniques can I use to disable system calls (right now I'm searching for the word 'system' in the user-submitted code and not executing those snippets where it is found) and to restrict access to the files that the user-submitted code creates?
As of now my code checker only works for C, and I plan to add support for other languages like C++, Java, Ruby and Python once I can secure it. I'd also like to learn more about the problem I've encountered, so pointers to places where I could learn more about web security would be appreciated.
My development machine runs Mac OS X Lion and the deployment machine is a Linux server, so a cross-platform solution would be most appreciated, but one that deals with just the Linux machine would do too.
What you will probably want to do is set up a chroot to some temporary directory on your filesystem for the user running your scripts. Here is some reading on setting up a chroot, and some security things to know.
I would suggest you also install a security module such as suExec or MPM-iTK for Apache. Then, within your Apache's VirtualHost (if you are not running a virtual host, do so!), assign a specific UserID to handle requests for this specific VirtualHost. This separates the request from the Apache default user, and adds a little security.
AssignUserID nonprivilegeduser nonprivilegeduser
Then, harden PHP a little by setting the following PHP options so the user cannot access files outside of the specified directories, and move your tmp_dir and session_save_path within this directory. This will prevent the user's access outside of their base directory.
php_admin_value open_basedir /var/www/
php_admin_value upload_tmp_dir /var/www/tmp
php_admin_value session.save_path /var/www/tmp
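To confirm the restriction is actually in effect, a quick test script along these lines can be dropped into the vhost (a minimal sketch; the paths simply mirror the example values above):
<?php
// Show the active open_basedir value and try to read a file outside it.
echo "open_basedir = " . ini_get('open_basedir') . "\n";
$fh = @fopen('/etc/passwd', 'r');        // outside /var/www, so this should fail
echo $fh ? "NOT restricted: /etc/passwd was opened\n"
         : "restricted: /etc/passwd could not be opened\n";
?>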
Along the same lines, prevent access to specific PHP functions and classes, and read up on PHP's security write-up.
Also, for that user, I would look into disabling access to sudo and su, to prevent a script from attempting to gain root privileges. Learn more here.
All in all, you said it nicely and clearly yourself: there is no way to fully prevent a user from accessing your system if they have the will. The trick is just to make it as difficult and as confusing for them as possible.
There is no way to make this work on a cross-platform basis, period. Sandboxing is inherently highly system-specific.
On Mac OS X, there is the Sandbox facility. It is poorly documented, but quite effective (Google Chrome relies heavily on it). Some enterprising souls have documented parts of it. However, it's only available on Mac OS X, so that probably rules it out.
On Linux, your options are considerably less developed. Some kernels support the seccomp mechanism to prevent processes from using any except a small "safe" set of system calls; however, not all do. Moreover, that "safe" subset doesn't include some calls that you are likely to need in code that hasn't been specifically written to run under seccomp -- for instance, mmap and sbrk are not permitted, so you can't allocate memory. Helper tools like seccomp-nurse may get you somewhere, though.
Here are the things I suggest doing.
1.
Disable system classes and functions in php.ini (a quick check that they really end up disabled is sketched after this list) with
disable_functions="system,curl_init,fopen..."
disable_classes="DirectoryIterator,SplFileObject..."
2.
Run in a read-only environment with no important data stored on it. In case anyone ever gets into your server, you don't want them to be able to access anything. A good way to do this is to get an Amazon AWS EC2 instance and use a jailed user to run your server and PHP.
3.
Ask people to break it. The only way you can find flaws and loopholes that you are unaware of is to have people look for them. If necessary, get a temporary server with a "test" application that replicates the same type of application your "production" environment will run.
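As the quick check promised in step 1 (a minimal sketch; the function names are just common examples to test):
<?php
// Compare the configured disable_functions list against what is actually callable.
$disabled = array_map('trim', explode(',', (string) ini_get('disable_functions')));
echo "disable_functions: " . implode(', ', $disabled) . "\n";
foreach (array('system', 'exec', 'shell_exec', 'passthru') as $fn) {
    // function_exists() generally reports false for functions disabled this way.
    echo $fn . ': ' . (function_exists($fn) ? 'still callable' : 'disabled') . "\n";
}
?>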
Here are some helpful resources.
List of functions to disable. It can definitely be expanded upon but it's a good start.
Information on how to avoid security issues.
The source and README in Viper-7's Codepad
I think what you are looking for is Mandatory Access Control. In Linux it is available via SELinux. Using it you can limit who can execute what command. In your case, you can limit the PHP user (Apache) to executing only a restricted set of commands, like gcc, etc. Also look into AppArmor.
Also, look into the runkit PHP virtual environment.
You can try to run user-submitted code in containers (Docker), which are very lightweight VMs. They start in less than a second.
I'm in the process of setting up a webserver from scratch, mainly for writing webapps with Python. On looking at alternatives to Apache+mod_wsgi, it appears that PyPy plays very nicely indeed with pretty much everything I intend to use for my own apps. Not really having had a chance to play with PyPy properly, I feel this is a great opportunity to get to use it, since I don't need the server to be bulletproof.
However, there are some PHP apps that I would like to run on the webserver for administrative purposes (PHPPgAdmin, for example). Is there an elegant solution that allows me to use PyPy within a PHP-compatible webserver like Apache? Or am I going to have to run CherryPy/Paste or one of the other WSGI servers, with Apache and mod_wsgi on a separate port to provide administrative services?
You can run your PyPy apps behind mod_proxy and serve static content with Apache (or even better use nginx). In addition to CherryPy, gunicorn and tornado run great on PyPy.
I know that mod_wsgi doesn't work with mod_php
I strongly advise running the PHP and Python applications at the CGI level.
PHP 5.x runs on CGI; for Python there is flup, which makes it possible to run WSGI applications on CGI.
Tamer
I'm developing a web app for an Apache shared hosting server. I have already written some code in Perl, but I recently found out, to my surprise, that the shared hosting provider does not provide mod_perl or a way to install it.
I am a bit worried that running a Perl web app through CGI without mod_perl would make it very slow. Should I switch all of my code to PHP instead; would that be faster?
The reason I chose Perl in the first place is that I'm much more familiar with Perl than with PHP. I also wanted to be able to use my Perl libraries outside the realm of web development.
So if any of you are experienced with Apache web development, can you shed some light on which direction I should take?
For the sake of this question, let's say the web application will get 500+ hits a day.
Which would be faster PHP or Perl without mod_perl?
Thanks in advance for the help.
At only 500 hits a day, you could write your code in just about anything and not have to worry about slowdowns. 500 hits a day evens out to about 1 page every 3 minutes. Even assuming a non-normal distribution of hits, you shouldn't really worry about this with such small traffic numbers.
PHP would be faster.
However, with only 500 hits per day, using cgi would not be a problem. Not even with 500 hits an hour.
Much depends on your architecture. Modern Perl frameworks aren't well suited to running as plain CGI (long start-up times). If you use CGI, Catalyst is probably a bad idea. That said, with a classical architecture it should be quite manageable.
Unless your shared host is running PHP as a CGI application (not mod_php or FastCGI), PHP is almost always [1] going to be faster. While Perl, running as CGI, could probably handle your 500 hits a day, an application/page developed with CGI is going to be sluggish.
CGI works by spawning a new process to run your program for each request. Both mod_php and FastCGI applications mitigate this by spawning a set number of processes and then using these to run your application. In other words, a new process isn't spawned for each request. (This is an oversimplified explanation, please don't use it in a CS term paper. See the mod_php and FastCGI docs for more info.)
[1] You could come up with pathological examples where it wouldn't be, but then you'd be the kind of person who comes up with pathological examples of things, and no one wants that.
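One easy way to observe the process model described above is a one-liner like this (just an illustration): under plain CGI the PID changes on every request, while under mod_php or FastCGI the same worker PID shows up again and again.
<?php
// Print the PID of the process serving this request; reload a few times and compare.
echo "served by PID " . getmypid() . "\n";
?>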
Speed shouldn't be your concern. Both languages are suitable for web applications.
For the volume of traffic you're looking at, Perl with vanilla CGI shouldn't be an issue, although I would second the earlier recommendations to check out FastCGI as another option which your hosting service may provide.
Or another option would be to look for a different hosting company...
Expanding on what Alan Storm said, you might be able to use Perl with FCGI instead.
FCGI works by having a sort of stand-alone server, a daemon if you like, that connects with your web server via FCGI protocol and delegates/dispatches requests.
This is faster than normal CGI, as this emulates a sort of "servlet" model, the application is persistent, and there is no need for a new initialization on every call like there is with normal CGI.
I have not yet learned how to do this myself, but I believe Catalyst has this option, so it's just a matter of learning how to replicate it.
FastCGI/FCGI should be available on drastically more hosts than plain old mod_perl, as FCGI applications are not web-server specific, and some web servers implement PHP via an FCGI utility.
And I've experimented with FCGI web serving a little; preliminary tests say it can handle at least 500 req/s, far more than the concerns above about 500/day or 500/hour.
It's possible to hack fastcgi support into a hosting account that doesn't support it. I compiled the fastcgi library with the install prefix set to the same thing as the home directory on the hosting account. Then I synced it up and set up catalyst to use the small cgi-fcgi bridge. It worked well. Nice and fast, because the cgi bridge is just a tiny little executable. The catalyst process persisted in the background just fine.
The answer in everyone's mind is: who cares.
500 requests per day is nothing.
Just use what's fastest to implement/maintain and move on.
For lighter web frameworks that will work using CGI, have a look at:
Squatting
CGI::Application
CGI::Lazy
It depends mostly on how complex your code is and how it's put together; if you run it as CGI, Perl will compile your script and modules on each invocation, and will have to reconnect to your database for each request. If your code is complex enough, this may take a few seconds per page view, which may hamper the user experience.
If your codebase and the modules it uses aren't huge, though, there should be no problem at all.
You can run perl -c on your code to get a feel for how long Perl startup and compilation of your code take.