I use APC for opcode caching, together with a 4-CPU licensed LiteSpeed.
What is the best PHP handler for this situation, in terms of performance first and security second?
Is it suPHP / DSO / FCGI / CGI? (I read that DSO can leave a hole if one of the scripts has a bug.)
myusername@mybox [~]# /usr/local/cpanel/bin/rebuild_phpconf --available
Available handlers: suphp dso fcgi cgi none
PHP4 SAPI: cgi
PHP5 SAPI: not installed
SUEXEC: available
RUID2: not available
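(For completeness: if memory serves, on these older cPanel builds the handler is switched with a positional invocation like the one below. The argument order is my assumption, so verify with --help before running it.)

/usr/local/cpanel/bin/rebuild_phpconf 5 none suphp 1   # default PHP version, PHP4 handler, PHP5 handler, suexec on/off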
Thank you.
There is a great article on this on the ServInt blog: http://blog.servint.net/2011/10/28/the-tech-bench-all-about-php-handlers/
Be sure to visit their site; it also has comparison charts.
List of PHP handlers
DSO
Also known as mod_php. While this is an older configuration, its main benefit is speed. It is generally considered the fastest handler. It runs PHP directly from Apache, without having to pass it to a separate service for processing. This means that PHP scripts will run as the Apache user, which by default on our servers is the user ‘nobody’.
There are two things to consider before switching to DSO. First, any files that need to be written to by the webserver have to have write permissions for the 'nobody' user, and any file created by the webserver will be owned by 'nobody'. Websites that need to upload files through PHP may run into permissions issues, since the settings aren't as clear-cut as with the other handlers. This is common with WordPress users that upload files through the WordPress interface or utilize the auto-update feature.
A special note about the above: it is a common misconception that files need to have the 777 mode to be writable. This is not true, and generally a bad idea since it means the files are writable by anyone. To make a file writable by the webserver, the highest permissions needed should be 664 and owned by ‘user’ and group ‘nobody’. For directories this would be 775 and user:nobody. This should be enough to allow the webserver to write to the location without making it writable by everyone and introducing a potentially critical security vulnerability.
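For example, a sketch of setting that up for an uploads directory under DSO (paths and usernames are illustrative):

chown -R myuser:nobody /home/myuser/public_html/uploads
find /home/myuser/public_html/uploads -type d -exec chmod 775 {} +   # directories: 775
find /home/myuser/public_html/uploads -type f -exec chmod 664 {} +   # files: 664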
Also, know that DSO offers a different type of security compared to suPHP/FastCGI. Since the server runs PHP as 'nobody', anyone able to exploit a file on your server to gain elevated privileges will have access to any other file that the webserver can directly access. What this means is that an intruder could have access to files across multiple accounts, but only those files that are owned by 'nobody'. Please see the Security section below for more information.
The main advantage of DSO is speed and resource usage. With opcode caching extensions like eAccelerator or APC installed, DSO will run significantly faster and at a lower footprint than the other handlers. It is also the default setting on our servers.
A good rule of thumb is that DSO is best for servers that are running one or two large, high-performance websites where efficiency and speed are a concern.
CGI
The CGI handler will run PHP as a CGI module, as opposed to an Apache module. The CGI method is intended as a fallback handler for when DSO is not available. This method is neither fast nor secure, regardless of whether or not suEXEC is enabled. ServInt therefore does not recommend using CGI.
suPHP
suPHP runs PHP as a separate service that then passes the compiled code back to Apache for serving. It is technically a CGI module; however, it is quite different from the CGI handler mentioned above. The main difference, and the advantage of having suPHP, is that with suEXEC enabled it runs the PHP scripts as the user calling them, rather than as the 'nobody' user. For example, if an account is owned by the user Spock, all instances of Apache serving that user's website will run as user Spock. The advantage here is that it makes tracking down websites using excessive resources easier.
Another advantage of running the process as the user is that it simplifies the overall permissions scheme. The webserver will be able to write to files that are owned by the user and not just 'nobody'. What this means in the long run is that auto-update/install features in many CMS solutions will work more easily, and the general permissions of your files/directories are more clear-cut: 644 and owned by user and group user for files, and similarly 755 and user:user for directories.
The security difference between suPHP and DSO is that suPHP confines an intruder to the particular user that he/she has affected. The exploit can’t cross accounts, however it can affect every single file the user owns as opposed to just the files writable by the webserver. Please see the Security section below for more information.
The main disadvantage of suPHP is speed and CPU load. suPHP runs much slower than the other handlers, and you will see a significant increase in your overall CPU load when switched to it. suPHP also cannot work with an opcode caching extension such as eAccelerator or APC, which is part of the reason for the increase in CPU usage. Because of this, it is recommended that you implement a caching plugin if you use this with a CMS, such as W3 Total Cache for WordPress. This handler is recommended more for the smaller reseller client, because it locks down exploits to the one affected account, and the CPU load increase will not be that significant with less busy sites or a small number of individual cPanel accounts.
FastCGI
FastCGI (aka mod_fcgid) is similar to suPHP in that it is a separate process that compiles the PHP, which is then sent back to Apache. It is also a CGI module, which means that with suEXEC enabled, PHP runs the process as the user. This gives you the same permissions advantages as suPHP, mentioned above. The difference between the two, however, is how they control the PHP processes. suPHP spawns a new process each time a PHP script needs to be compiled, whereas FastCGI keeps persistent processes open that can be reused for subsequent PHP requests. This allows you to use an opcode caching extension such as eAccelerator or APC with it to increase performance.
The drawback is that FastCGI has high memory usage. Because the persistent processes mentioned above are kept in memory, you are going to see much more memory taken up by FastCGI.
A good analogy of FastCGI and suPHP is to think of a book with several chapters. With suPHP, this book will have no table of contents and no index, so each time you want to find something you have to look through the book to get it. This takes time (increased CPU usage) and is slow. With FastCGI, this book has an extensive index and table of contents, so you can quickly and easily find what you are looking for; however these additional pages make the book much thicker (increased memory usage).
Related
While I do know that system calls and security don't go hand in hand, there is a project for which I do need it. I'm writing a small code checker and I need to compile and execute the user submitted code to test against my test cases.
Basically I want to run the code in a sandbox, so that it can't touch any files outside of the temporary directory and any files that it creates can't be accessed by the outside world.
Recently I came across an exploit with which the user could create a file, say shell.php, with the following contents.
<?php
echo system($_GET['x']);
?>
This gives the attacker a remote shell, and since the owner of the file is apache, the attacker can basically move around my entire /var/www, where MySQL passwords are stored along with other configuration information.
While I am aware of threats like SQL injection and have sanitized the user input before any operations that involve the DB, I have no idea how to set up the sandbox. What are the techniques I can use to disable system calls (right now I'm searching for the word 'system' in the user-submitted code and not executing those snippets where it is found) and restrict access to the files that the user-submitted code creates?
As of now my code checker only works for C, and I plan to add support for other languages like C++, Java, Ruby and Python after I can secure it. Also, I'd like to learn more about this problem that I've encountered, so pointers to a place where I could learn more about web security would also be appreciated.
My development machine runs Mac OS X Lion and the deployment machine is a Linux server, so a cross-platform solution would be most appreciated, but one that deals with just the Linux machine would do too.
What you will probably want to do is set up a chroot to some random temp directory on your filesystem for the user running your scripts. There is plenty of reading available on setting up a chroot and the security things to know.
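A minimal chroot sketch, assuming a Linux host (paths are hypothetical, and a real jail needs more care, e.g. /dev and /proc handling):

JAIL=$(mktemp -d /tmp/sandbox.XXXXXX)
mkdir -p "$JAIL/bin"
cp /bin/sh "$JAIL/bin/"
# copy the shared libraries the binary links against into the jail
for lib in $(ldd /bin/sh | grep -o '/[^ ]*'); do
    mkdir -p "$JAIL$(dirname "$lib")"
    cp "$lib" "$JAIL$lib"
done
sudo chroot "$JAIL" /bin/sh -c 'echo inside the jail'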
I would suggest you also install a security module such as suEXEC or mpm-itk for Apache. Then, within your Apache VirtualHost (if you are not using virtual hosts, do so!), assign a specific user ID to handle requests for this specific VirtualHost. This separates the request from the Apache default user and adds a little security.
AssignUserID nonprivilegeduser nonprivilegeduser
Then, harden PHP a little by setting the following PHP options so the user cannot access files outside of the specified directories, and move your upload_tmp_dir and session.save_path inside this directory. This prevents the user from accessing anything outside of their base directory.
php_admin_value open_basedir /var/www/
php_admin_value upload_tmp_dir /var/www/tmp
php_admin_value session.save_path /var/www/tmp
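Putting those directives together, a hypothetical VirtualHost might look like the following (this assumes mpm-itk for AssignUserID; module names and paths will differ per system):

<VirtualHost *:80>
    ServerName sandbox.example.com
    DocumentRoot /var/www
    AssignUserID nonprivilegeduser nonprivilegeduser
    php_admin_value open_basedir /var/www/
    php_admin_value upload_tmp_dir /var/www/tmp
    php_admin_value session.save_path /var/www/tmp
</VirtualHost>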
Along the same lines for PHP, prevent access to specific functions and classes, and read up on PHP's security write-up.
Also, for that user, I would look into disabling access to sudo and su, to prevent a script from attempting to gain root privileges. Learn more here.
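A rough sketch of locking that down (the username is a placeholder, and group names vary: 'sudo' on Debian/Ubuntu, 'wheel' on RHEL-likes):

gpasswd -d sandboxuser sudo                  # remove the user from the sudo group
usermod -s /usr/sbin/nologin sandboxuser     # no interactive shell
# optionally restrict su to the wheel group via pam_wheel in /etc/pam.d/su:
#   auth required pam_wheel.so use_uid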
All in all, you said it nice and clear: there is no way to fully prevent a user from accessing your system if they have the will. The trick is just to make it as difficult and as confusing as possible for them.
There is no way to make this work on a cross-platform basis, period. Sandboxing is inherently highly system-specific.
On Mac OS X, there is the Sandbox facility. It is poorly documented, but quite effective (Google Chrome relies heavily on it). Some enterprising souls have documented parts of it. However, it's only available on Mac OS X, so that probably rules it out.
On Linux, your options are considerably less developed. Some kernels support the seccomp mechanism to prevent processes from using any except a small "safe" set of system calls; however, not all do. Moreover, that "safe" subset doesn't include some calls that you are likely to need in code that hasn't been specifically written to run under seccomp -- for instance, mmap and sbrk are not permitted, so you can't allocate memory. Helper tools like seccomp-nurse may get you somewhere, though.
Here are the things I suggest doing.
1. Disable system classes and functions in php.ini with
disable_functions="system,curl_init,fopen..."
disable_classes="DirectoryIterator,SplFileObject..."
2. Run in a read-only environment with no important data stored on it. In case anyone ever gets into your server, you don't want them to access anything. A good way to do this is to get an Amazon EC2 instance and use a jailed user to run your server and PHP.
3. Ask people to break it. The only way to find flaws and loopholes that you are unaware of is to have them found. If necessary, get a temporary server with a "test" application that replicates the same type of application your "production" environment will be.
Here are some helpful resources.
List of functions to disable. It can definitely be expanded upon but it's a good start.
Information on how to avoid security issues.
The source and README in Viper-7's Codepad
I think what you are looking for is Mandatory Access Control. On Linux, it is available via SELinux. Using it, you can limit who can execute what command. In your case, you can restrict the PHP user (Apache) to executing only a limited set of commands, like gcc, etc. Also look into AppArmor.
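As a starting point, here is a hedged sketch for checking and enforcing confinement (the profile name is an assumption and varies per distro; not every system ships an Apache profile):

sudo aa-status                                     # is AppArmor active?
sudo aa-enforce /etc/apparmor.d/usr.sbin.apache2   # enforce an Apache profile, if one is installed
getenforce                                         # SELinux mode, on RHEL-like systems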
Also, look into the runkit PHP virtual environment.
You can try running user-submitted code in a container (Docker); containers are very lightweight VMs that start in less than a second.
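A minimal sketch of that approach (the image name and resource limits are illustrative, and the flags assume a reasonably recent Docker):

docker run --rm --network none --read-only \
    --memory 128m --cpus 0.5 \
    -v "$PWD/submission:/code:ro" --tmpfs /tmp \
    gcc:latest sh -c 'gcc /code/main.c -o /tmp/a.out && /tmp/a.out'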
I'm looking for a free alternative to manage personal sites (php/apache/mysql support) with the ability to configure DNS.
It should be VERY lightweight and optimized.
I tried many panels, especially Kloxo, and I was disappointed: too many bugs and random crashes of the whole server.
Remember, I don't want any ticketing system, payment system, or one-click CMS installs. Most important is an up-to-date product with a strong community for regular updates and support.
I tried googling for hours and came up with a big list, so I am confused.
Virtualmin
The Good
It creates the web sites as I would create them. It puts them in the home directory, creates a user/group for them, and sets up FTP/MySQL/more. It allows extensive customization: for example, I set up the websites to use cronolog and shorten the amount of time it takes to logrotate.
The resources
After an install (which includes Apache, BIND, MySQL, SpamAssassin, ClamAV, Dovecot, and Postfix), the memory usage of the entire server is about 500MB of RAM (in an OpenVZ container after a reboot). The installation does not start any additional services, so in a memory-constrained environment you may want to disable them BEFORE restarting.
After disabling features in the setup, it still starts many unneeded services like SpamAssassin, mailman, PostgreSQL, and more at startup. You can disable these using either the distribution tools or the "Services and Startup" portion of the interface. After a little tweaking I usually get the memory usage down to ~200MB (in OpenVZ containers) before giving MySQL about 33% of the RAM (usually on at least 1GB containers).
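For example, on Debian/Ubuntu-era systems (the service names are illustrative; use your distribution's own tools):

sudo service spamassassin stop
sudo update-rc.d spamassassin disable
sudo service postgresql stop
sudo update-rc.d postgresql disable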
Usage below 200MB is certainly possible -- also note that OpenVZ is a little weird when it comes to memory.
The integration
Virtualmin/Webmin manage the configuration more than anything else. Every Virtualmin server I set up feels like it could run without Virtualmin (although I haven't tried it). In Ubuntu (maybe Debian as well), the Apache configurations are placed in /etc/apache2/sites-available and /etc/apache2/sites-enabled. Usually every option in the interface corresponds to a configuration file that Virtualmin just helps you generate. It doesn't blindly override most files (like the Apache configs). If you make a modification, it'll notice and try not to botch it.
Things to know
One of the first things you may do is set up the Directory Restriction feature so that users get chrooted to their home directory.
If using Ubuntu 10.04 and fastcgi, you'll need to pull the new apache2 fcgi package from the updates repository to avoid an upload bug.
The subaccount usernames could be better: cPanel uses user@domain.com for FTP/WebDAV and domain_user for MySQL usernames/databases. Virtualmin allows you to choose one or the other: not both. The users Virtualmin creates in MySQL end up being truncated (instead of "some-user@my-domain.com" you get "some-user@my-dom", with nothing in the Virtualmin interface telling you that it did this). You can just manage your MySQL users separately and have Virtualmin import them.
New account names now seem to default to the entire domain name. I'm not a fan of it, but at least it's configurable.
Virtualmin stores account passwords in plain text. It does this so that it can manage accounts in several different systems that don't have a unifying password format -- it's understandable. I still use it because all of the passwords are just randomly generated and internal only (no emails on Virtualmin boxes).
The webmin.pl file seemed to crash a bit last year. I haven't encountered it in a while, but it's non-critical compared to Apache and such. In fact, it'd be nice if it only started on demand.
Overall
It saves me time, even with all of the options I need to tweak. It works with more operating systems than most control panels. They have their own repository, so the update-system integrates well with the operating system.
Have you tried Webmin?
Directadmin is another one that is used by many. You can give it a look at http://www.directadmin.com.
I won't make a recommendation, since I have an obvious bias (I'm a Virtualmin developer, and it's how I make a living), but I do want to chime in with more detail about Virtualmin memory usage, since it's been alleged that it uses 500MB of RAM, which is way off.
Virtualmin, the control panel by itself, uses anywhere from 11MB to 150MB, depending on configuration, number of domains managed, amount of caching enabled, etc. The services that it manages, like Apache, BIND, databases, ClamAV, etc. could use hundreds of MB more, or even GB more RAM. That usage, however, happens in any system where you use those services and has no relation to Virtualmin. No control panel makes Apache smaller, assuming identical configuration. Likewise, if you are using ClamAV for virus scanning email, you will always have that memory usage no matter what control panel you use (or even if you don't use a control panel at all).
It is very easy to make Virtualmin use about 11-16MB (closer to 11 on a 32 bit system and closer to 16 on a 64 bit system) simply by turning off all library caching.
Memory usage is thoroughly documented, including how to configure it to use very little memory, in our "Virtualmin on Low Memory Systems" guide: http://www.virtualmin.com/documentation/system/low-memory
Virtualmin, by default, is configured for use in large deployments...hosting hundreds of domains on large servers. But that doesn't mean it's only for those kinds of deployments. We have tens of thousands of installations running on systems with 256 or 512MB of RAM, and even a few hundred running on 128MB systems (or even smaller; I know one guy that runs a static-websites-only configuration on 96MB VMs). I'm not sure how Virtualmin could get much smaller than 11MB, honestly, and still be useful. I doubt any other control panel is significantly smaller.
I am planning to use PHP in an embedded environment. Our current web server is thttpd. I am considering two options now: whether to run PHP as a CGI or as a SAPI module. I know CGI has an advantage in terms of security, but if we use PHP as a CGI, an instance of PHP has to be loaded into memory for each request.
I have tried compiling it as a SAPI module for thttpd, and I have observed that thttpd's memory usage, specifically RSS, does not grow as the number of requests increases.
Can anybody explain how thttpd loads PHP? Is it loaded just once, staying resident in memory as long as thttpd is running? If so, we may consider this as an alternative to CGI.
Does it perform multi-threading, i.e. handling multiple HTTP requests at the same time? Or does it process requests one at a time?
Is there good documentation discussing the behavior of PHP as a module of thttpd?
I have no experience with thttpd, but here are some pointers:
the PHP engine is thread-safe, but some extensions aren't, so usually people shy away from using it in a multi-threaded environment and rather go with the one-process-per-request method
yes, usually webserver modules (like the Apache mod_* stuff) work by staying resident, but the big speedbump for PHP is that it needs to parse the source file (or even multiple source files if you use include / require) for each request. You can cut down on this by using something like APC, which caches the parsed version of the files (see the config sketch after this list)
there is also a protocol called FastCGI which you might want to look at - it is basically a crossover between the module and CGI solutions - it spins up a couple of processes, each process hosts a single instance of the CGI program (PHP in this case), and uses them to process requests. Instances are recycled (i.e. they can process multiple requests, one after the other).
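To make the last two points concrete, here is a hedged sketch. First, enabling APC in php.ini (extension paths and sizes vary by install):

extension=apc.so
apc.enabled=1
apc.shm_size=64M    ; older APC builds expect a bare number of megabytes, e.g. 64

And a hypothetical Apache mod_fcgid setup (directive names per mod_fcgid 2.3+; adjust the wrapper path to your system):

AddHandler fcgid-script .php
FcgidWrapper /usr/bin/php-cgi .php
FcgidMaxProcessesPerClass 4    # cap the persistent PHP processes kept alive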
E.g. Is it more secure to use mod_php instead of php-cgi?
Or is it more secure to use mod_perl instead of traditional cgi-scripts?
I'm mainly interested in security concerns, but speed might be an issue if there are significant differences.
Security in what sense? Either way it really depends on what script is running and how well it is written. Too many scripts these days are half-assed and do not properly do input validation.
I personally prefer FastCGI to mod_php since if a FastCGI process dies a new one will get spawned, whereas I have seen mod_php kill the entirety of Apache.
As for security, with FastCGI you could technically run the PHP process under a different user from the default web server's user.
On a separate note, if you are using Apache's new worker threading support, you will want to make sure that you are not using mod_php, as some of the extensions are not thread-safe and will cause race conditions.
If you run your own server, go the module way; it's somewhat faster.
If you're on a shared server, the decision has already been taken for you, usually on the CGI side. The reason for this is filesystem permissions. PHP as a module runs with the permissions of the HTTP server (usually 'apache'), and unless you can chown your scripts to that user, you have to chmod them to 777 -- world-readable. This means, alas, that your server neighbour can take a look at them -- think of where you store the database access password. Most shared servers have solved this using stuff like phpsuexec and such, which run scripts with the permissions of the script owner, so you can (must) have your code chmoded to 644. phpsuexec runs only with PHP as CGI -- that's more or less all; it's just a local machine thing and makes no difference to the world at large.
Most security holes occur due to lousy programming in the script itself, so it's really kind of moot whether they are run as CGI or as modules. That said, Apache modules can potentially crash the whole webserver (especially if using a threaded MPM), and mod_php is kind of famous for it.
CGI will be slower, but nowadays there are solutions to that, mainly FastCGI and friends.
What is your threat model?
From the PHP install.txt doc for PHP 5.2.6:
Server modules provide significantly better performance and additional functionality compared to the CGI binary.
For IIS/PWS:
Warning: By using the CGI setup, your server is open to several possible attacks. Please read our CGI security section to learn how to defend yourself from those attacks.
A module such as mod_php or FastCGI is vastly faster than plain CGI... just don't do CGI. As others have said, the PHP program itself is the greatest security threat, but ignoring that, there is one other consideration on shared hosts.
If your script is on a shared host with other PHP programs and the host is not running in safe mode, then it is likely that all server processes are running as the same user. This could mean that any other PHP script can read your own, including database passwords. So be sure to investigate the server configuration to make sure your code is not readable to others.
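A quick sanity check along those lines (the paths are examples):

ls -l ~/public_html/config.php                 # who can read the DB password?
chmod 640 ~/public_html/config.php             # or 600, if PHP runs as your own user
find ~/public_html -perm -o+r -name '*.php'    # list world-readable PHP files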
Even if you control your own hosting, keep in mind that another hacked web application on the server could be a conduit into others.
Using a built-in module is definitely going to be faster than using CGI. The security implications depend on the configuration. In the default configuration they are pretty much the same, but CGI allows some more secure configurations that built-in modules can't provide, especially in the context of shared hosting. What exactly do you want to secure yourself against?
What do you recommend for setting up a shared server with PHP, from a security/performance point of view?
Apache mod_php (how do you secure that, other than safe_mode, given that it won't be in PHP 6?)
Apache CGI + suexec
Lighttpd and spawn a FastCGI per user
Later edit: I'm not interested in using an already-made control panel, as I'm trying to write my own, so I want to know the best way to set this up myself.
I was thinking of using Lighttpd and spawning a FastCGI for every hosted user, making the FCGI process run under his credentials (there is a tutorial for this on the lighttpd wiki).
This would be somewhat secure, but would it affect performance (lots of users / memory needed for every FCGI) so much that it's not a viable solution?
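A minimal sketch of that per-user setup, assuming spawn-fcgi is available (usernames, paths, and socket locations are hypothetical):

# spawn php-cgi under the hosted user's own credentials, one socket per user
spawn-fcgi -u user1 -g user1 -s /var/run/lighttpd/php-user1.sock -- /usr/bin/php-cgi
# then point that user's vhost at the socket in lighttpd.conf:
#   fastcgi.server = ( ".php" => (( "socket" => "/var/run/lighttpd/php-user1.sock" )))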
Personally, while Lighttpd is OK, I would go with Nginx + FastCGI if you end up going with a lightweight webserver + FastCGI solution. I've run benchmarks and read all the code, and Nginx is an order of magnitude faster/more stable under load -- it's very good.
But, that's not what you asked. Essentially, I would say there's a spectrum of security/scalability vs. speed tradeoffs in the three options you list, and you just need to decide where you want to be. If you're a shared hosting provider with untrusted users installing god-knows-what PHP apps, you'll lean more toward security; if this is shared amongst more trusted users, you might lean toward performance. Here are my thoughts:
CGI + suexec: This is by far the most secure, and the most efficient/scalable for you in terms of numbers of users/sites in a shared hosting environment. Processes are spawned and memory used only as requests come in. Of course, the CGI spawning makes this the slowest in execution time for individual scripts. How much slower? Well, you would have to benchmark, but generally if people are running long-running apps (i.e. something like WordPress, which takes 0.25-0.5 seconds just to load its libs and initialize on each request), then the CGI-spawning penalty starts to look pretty negligible in context.
FastCGI: The issue here (and it doesn't matter if your webserver is Apache, Lighttpd or Nginx) is figuring out how many FCGI child processes you let each user leave running, because each process eats memory equal to the size of the PHP interpreter (in Linux not all of it is wired of course, but I digress). And, unlike mod_php, these processes aren't shared among users so you have to limit per user. For instance, Dreamhost caps this at 3 for their customers -- now, for a customer running a website that gets bursts of more than 2-5 page views a second, that's actually pretty bad because those requests just stack up and the site hangs. Now, I like FastCGI with a lightweight webserver when I'm running apps on a dedicated server/cluster, when I can give the app hundreds of FCGI children (all with webserver privs of course, à la Apache/prefork + mod_php). But, I don't think it makes sense for shared hosting where you have to allocate/cap the FCGI children per user.
Apache + mod_php: Least secure, since everything runs with webserver privs, but your pool of live PHP processes is shared, so it's best on the performance end. From a developer perspective, I can't tolerate PHP safe_mode, and from a sysadmin perspective it's really only an illusion of security (it mitigates against stupid users but doesn't protect from an actual attack), so I would actually rather have CGI if my other option has to include safe_mode.
Dreamhost does sort of a hybrid: they do Apache CGI + suexec by default, but let the (small) percentage of their more sophisticated users elect to use FCGI if they want to, subject to a cap and Dreamhost's own monitoring of memory usage. That saves a ton of memory resources versus enabling FCGI for everyone by default.
Another issue, if you're talking about standard commercial shared hosting, is that Apache is full-featured, has modules for just about anything (including stuff like mod_security that you might want), and your users will like it because all their .htaccess configs will work, etc. -- you will run into support headaches with anything else when they go to install Drupal or WordPress or whatever (a lot less of an issue if we're talking internal users).
Personally, I would recommend just keeping it simple to start and going with CGI + suexec for the best security and scalability. If your users want FCGI or mod_php and you have a good channel open for suggestions/communication with them, they'll ask for it, but either of these is a much bigger headache for you with only marginal performance improvements for them, so my suggestion would be to not do either of them initially but be responsive if they clamor for it.
I do sympathize with the desire to do something "interesting" like Lighttpd + FCGI instead of the standard Apache + CGI + suexec, but deep down I really can't recommend it.
If you're running multiple servers, you could end up putting CGI on some and something else for the power users on the others. And be sure to have cron grep all the www dirs for things like old-ass versions of phpBB!
I recommend Suhosin
With regard to PHP + FastCGI and security, check this blog post.
The challenge with securing a shared hosting server is how to secure the website from attack both from the outside and from the inside. PHP has built-in features to help, but ultimately it's the wrong place to address the problem.
I've already written about a number of solutions that work, but one option I've been asked time and time again to look at is using PHP + FastCGI. The belief is that using FastCGI will overcome the performance issues of Apache's suexec or mod_suphp, because FastCGI processes persist between page views.
But before we can look at performance, the first question is: how exactly do we get PHP and FastCGI running as different users on the one web server in the first place?
I have been using InterWorx for about a year now and have been very impressed. It maintains a LAMP server and chroots your scripts for security.
I have also used Ensim, but I haven't found it as friendly or fast, and it doesn't have as many features. Plus, it costs a lot more.