I'm looking for a free alternative to manage personal sites (PHP/Apache/MySQL support) with the ability to configure DNS.
It should be VERY lightweight and optimized.
I tried many panels, especially Kloxo, and I was disappointed: too many bugs and random crashes of the whole server.
Remember, I don't want any ticketing system, payment system, or one-click CMS installs. Most important is an up-to-date product with a strong community for regular updates and support.
I tried googling for hours and came up with a big list, so I am confused.
Virtualmin
The Good
It creates the web sites as I would create them: it puts them in the home directory and creates a user/group for them. It sets up FTP, MySQL, and more, and allows extensive customization: for example, I set up the websites to use cronolog and shorten the amount of time it takes to logrotate.
The resources
After an install (which includes Apache, BIND, MySQL, SpamAssassin, ClamAV, Dovecot, and Postfix), the memory usage of the entire server is about 500MB RAM (in an OpenVZ container after a reboot). The installation does not start any additional services, so in a memory-constrained environment you may want to disable them BEFORE restarting.
After disabling features in the setup, it still starts many unneeded services like SpamAssassin, Mailman, PostgreSQL, and more at startup. You can disable these using either the distribution tools (see the sketch below) or the "Services and Startup" portion of the interface. After a little tweaking I usually get the memory usage down to ~200MB (in OpenVZ containers) before giving MySQL about 33% of the RAM (usually at least 1GB containers).
Usage below 200MB is certainly possible. Also note that OpenVZ is a little weird when it comes to memory.
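As a hedged example (the daemon names and init tooling differ between distributions, so treat these as placeholders), disabling the extra services looks roughly like this:

# CentOS/RHEL style (SysV init): stop the service now and keep it off after reboot
service spamassassin stop
chkconfig spamassassin off
chkconfig mailman off
chkconfig postgresql off

# Debian/Ubuntu equivalent
service spamassassin stop
update-rc.d -f spamassassin remove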
The integration
Virtualmin/Webmin manage the configuration more than anything else. Every Virtualmin server I set up feels like it could run without Virtualmin (although I haven't tried it). In Ubuntu (and probably Debian as well), the Apache configuration files are placed in /etc/apache2/sites-available and /etc/apache2/sites-enabled. Usually every option in the interface corresponds to a configuration file that Virtualmin just helps you generate. It doesn't blindly override most files (like the Apache config); if you make a modification, it'll notice and try not to botch it.
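For instance, on a Debian/Ubuntu box you can inspect and toggle the per-domain configs it generates with the stock Apache tools (the site name below is just a placeholder):

ls /etc/apache2/sites-available/     # one file per virtual server
a2ensite example.com                 # symlink it into sites-enabled
a2dissite example.com                # or take it back out
service apache2 reload               # apply without a full restart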
Things to know
One of the first things you may do is set up the Directory Restriction features so that users get chrooted to their home directories.
If using Ubuntu 10.04 and fastcgi, you'll need to pull the new apache2 fcgi package from the updates repository to avoid an upload bug.
The subaccount usernames could be better: cPanel uses user#domain.com for FTP/WebDAV and domain_user for MySQL usernames/databases, while Virtualmin lets you choose one scheme or the other, not both. The users Virtualmin creates in MySQL end up being truncated (instead of "some-user#my-domain.com" you get "some-user#my-dom", with nothing in the Virtualmin interface telling you that it did this). You can just manage your MySQL users separately and have Virtualmin import them.
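Part of the reason for the truncation is that MySQL itself capped user names at 16 characters in the versions of that era, so a long subaccount name simply cannot fit. A hedged sketch of managing the database and user yourself (the names and password are placeholders) and then letting Virtualmin import the database:

mysql -u root -p <<'SQL'
CREATE DATABASE mydomain_db;
-- keep the user name short enough for the 16-character limit
CREATE USER 'mydomain_user'@'localhost' IDENTIFIED BY 'use-a-random-password';
GRANT ALL PRIVILEGES ON mydomain_db.* TO 'mydomain_user'@'localhost';
FLUSH PRIVILEGES;
SQL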
New account names now seem to default to the entire domain name. I'm not a fan of it, but at least it's configurable.
Virtualmin stores account passwords in plain text. It does this so that it can manage accounts across several different systems that don't share a unifying password format. It's understandable, and I still use it because all of the passwords are randomly generated and internal only (no email on Virtualmin boxes).
The webmin.pl file seemed to crash a bit last year. I haven't encountered it in a while, and it's non-critical compared to Apache and such. In fact, it'd be nice if it only started on-demand.
Overall
It saves me time, even with all of the options I need to tweak. It works with more operating systems than most control panels. They have their own repository, so updates integrate well with the operating system's package manager.
Have you tried Webmin?
DirectAdmin is another one that is used by many. You can give it a look at http://www.directadmin.com.
I won't make a recommendation, since I have an obvious bias (I'm a Virtualmin developer, and it's how I make a living), but I do want to chime in with more detail about Virtualmin memory usage, since it's been alleged that it uses 500MB of RAM, which is way off.
Virtualmin, the control panel by itself, uses anywhere from 11MB to 150MB, depending on configuration, number of domains managed, amount of caching enabled, etc. The services that it manages, like Apache, BIND, databases, ClamAV, etc. could use hundreds of MB more, or even GB more RAM. That usage, however, happens in any system where you use those services and has no relation to Virtualmin. No control panel makes Apache smaller, assuming identical configuration. Likewise, if you are using ClamAV for virus scanning email, you will always have that memory usage no matter what control panel you use (or even if you don't use a control panel at all).
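If you want to see where the memory actually goes on your own box, a quick (hedged) way is to look at per-process resident size rather than the total; you will find Apache, ClamAV, MySQL and friends dominating, not the panel itself:

free -m                                  # overall picture
ps -eo rss,comm --sort=-rss | head -15   # biggest resident processes first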
It is very easy to make Virtualmin use about 11-16MB (closer to 11 on a 32 bit system and closer to 16 on a 64 bit system) simply by turning off all library caching.
Memory usage is thoroughly documented, including how to configure it to use very little memory, in our "Virtualmin on Low Memory Systems" guide: http://www.virtualmin.com/documentation/system/low-memory
Virtualmin, by default, is configured for use in large deployments, hosting hundreds of domains on large servers. But that doesn't mean it's only for those kinds of deployments. We have tens of thousands of installations running on systems with 256 or 512MB of RAM, and even a few hundred running on 128MB systems (or even smaller; I know one guy who runs a static-websites-only configuration on 96MB VMs). I'm not sure how Virtualmin could get much smaller than 11MB, honestly, and still be useful. I doubt any other control panel is significantly smaller.
We all know there are situations where you cannot go open source and freely distribute software, and I am in one of those situations.
I have an app that consists of a number of binaries (compiled from C sources) and Python code that wraps it all into a system. This app used to run as a cloud solution, so users had access to its functions over the network but no chance to touch the actual server where the binaries and code are stored.
Now we want to deliver a "local" version of our system. The app will be running on PCs that our users physically own. We know that any protection can eventually be broken, but we at least want to protect the app from copying and reverse engineering as much as possible.
I know that Docker is a wonderful deployment tool so I wonder: is it possible to create encrypted Docker containers where no one can see any data stored in the container's filesystem? Is there a known solution to this problem?
Also, maybe there are well known solutions not based on Docker?
The root user on the host machine (where the docker daemon runs) has full access to all the processes running on the host. That means the person who controls the host machine can always get access to the RAM of the application as well as the file system. That makes it impossible to hide a key for decrypting the file system or protecting RAM from debugging.
Using obfuscation on a standard Linux box, you can make it harder to read the file system and RAM, but you can't make it impossible or the container cannot run.
If you can control the hardware running the operating system, then you might want to look at the Trusted Platform Module which starts system verification as soon as the system boots. You could then theoretically do things before the root user has access to the system to hide keys and strongly encrypt file systems. Even then, given physical access to the machine, a determined attacker can always get the decrypted data.
What you are asking about is called obfuscation. It has nothing to do with Docker and is a very language-specific problem; for data you can always do whatever mangling you want, but while you can hope to discourage the attacker it will never be secure. Even state-of-the-art encryption schemes can't help since the program (which you provide) has to contain the key.
C is usually hard enough to reverse engineer; for Python you can try pyobfuscate and similar tools.
For data, I found this question (keywords: encrypting files game).
If you want a completely secure solution, you're searching for the 'holy grail' of confidentiality: homomorphic encryption. In short, you want to encrypt your application and data, send them to a PC, and have this PC run them without its owner, the OS, or anyone else being able to snoop on the data.
Doing so without a massive performance penalty is an active research area. There has been at least one project that managed this, but it still has limitations:
It's Windows-only.
The CPU has access to the key (i.e., you have to trust Intel).
It's optimised for cloud scenarios. If you want to install this on multiple PCs, you need to provide the key in a secure way (i.e., go there and type it in yourself) to one of the PCs you're installing your application on, and that PC should be able to securely propagate the key to the other PCs.
Andy's suggestion on using the TPM has similar implications to points 2 and 3.
Sounds like Docker is not the right tool, because it was never intended to be used as a full-blown sandbox (at least based on what I've been reading). Why aren't you using a more full-blown VirtualBox approach? At least then you're able to lock up the virtual machine behind logins (as much as a physical installation on someone else's computer can be locked up) and run it with isolation, encrypted filesystems, and the whole nine yards.
You can either go lightweight and open, or fat and closed. I don't know that there's a "lightweight and closed" option.
I have exactly the same problem. What I have been able to discover so far is below.
A. Asylo (https://asylo.dev)
Asylo requires programs/algorithms to be written in C++.
The Asylo library is integrated with Docker and it seems feasible to create a custom Docker image based on Asylo.
Asylo depends on several less common technologies like protocol buffers and Bazel, so the learning curve seems steep, i.e. the person creating the Docker images/programs will need a lot of time to understand how to do it.
Asylo is free of charge
Asylo is brand new, with all the advantages and disadvantages that brings.
Asylo is produced by Google but it is NOT an officially supported Google product according to the disclaimer on its page.
Asylo promises that data in the trusted environment can be protected even from a user with root privileges. However, there is a lack of documentation and it is currently not clear how this could be implemented.
B. Scone (https://sconedocs.github.io)
It is tied to Intel SGX technology, but there is also a simulation mode (for development).
It is not free; only a small set of features is available without paying.
It seems to support a lot of security functionality.
It is easy to use.
They seem to have more documentation and instructions on how to build your own Docker image with their technology.
For the Python part, you might consider using PyInstaller. With appropriate options, it can pack your whole Python app into a single executable file that end users can run without a Python installation. It effectively runs a Python interpreter on the packaged code, and it has a cipher option which allows you to encrypt the bytecode.
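A minimal sketch of that (the exact option names depend on your PyInstaller version, and the bytecode cipher needs an extra AES package such as pycrypto or tinyaes depending on that version; the script name is a placeholder):

pip install pyinstaller
# --onefile bundles everything into a single executable,
# --key enables the bytecode encryption mentioned above
pyinstaller --onefile --key "some-random-secret" yourapp.py
# the result lands in dist/yourapp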
Yes, the key will be somewhere around the executable, and a very savvy customer might have the means to extract it, thus unraveling a not-so-readable code. It's up to you to decide whether your code contains some big secret you need to hide at all costs. I would probably not do it if I wanted to charge big money for any bug fixing in the deployed product. I could use it if the client has good compliance standards and is not a potential competitor, nor is expected to pay for more licenses.
While I've done this once, I honestly would avoid doing it again.
Regarding the C code: if you compile it into executables and/or shared libraries, those can be included in the executable generated by PyInstaller.
I am running a single-core 512MB DO (DigitalOcean) droplet with CentOS 6, and I have configured PHP to use mod_suphp for security reasons. I will be running multiple sites off this box at some point and want to isolate them all from each other. The suPHP setup went perfectly; I was able to install WordPress and set up the databases, FTP, etc.

The issue I am having is that certain actions spike the php-cgi process up to 100% and eventually time out. The WordPress customizer hangs on save while accessing the admin-ajax.php file, and one of the themes I was using (the X theme) hung and timed out on line 30 of wp-includes/compat.php when trying to upload a JSON file. On cPanel servers I used suPHP without any issue, and the same actions and themes work fine. The only difference I notice is that the PHP process on cPanel machines is "php" whereas mine is "php-cgi". I have no idea if this is part of the issue, but any help identifying why and how only certain WordPress scripts are overloading the CPU would be appreciated.

An important note: the site is not under any traffic when this happens, as it is only in development. Also, just over 50% of the RAM is used while the CPU is spiking, so I am not running out of memory.
suPHP processes the file every single time it is called, and because of this it causes a lot of CPU usage. suPHP in general uses a lot of CPU, and adding WordPress to the mix makes the CPU usage even higher. I recommend using FastCGI as your PHP handler: it uses a low amount of CPU but a high amount of memory. In addition, you will be able to use opcode caching such as APC, which makes WordPress significantly faster.

In regards to your security concern, FastCGI has the same security model as suPHP, and you can upload things without problems. One small thing to note: you're going to need to tweak the settings quite a bit before you get them right, and there may be errors at first, the answers to all of which you can find courtesy of Google. Also, I am not sure how DO operates, but if you need to fix permissions and have cPanel, here is a nice article: http://boomshadow.net/tech/fixes/fixperms-script/
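A rough, hedged sketch of that switch on a non-cPanel CentOS 6 box (package names and paths are assumptions; mod_fcgid may come from EPEL on some setups, and pecl needs the PEAR/devel toolchain):

yum install mod_fcgid php-pear php-devel gcc   # FastCGI module for Apache + build tools for pecl
pecl install apc                               # opcode cache, usable once you're off suPHP
echo "extension=apc.so" > /etc/php.d/apc.ini
service httpd restart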
I use APC for opcode caching along with a 4-CPU licensed LiteSpeed.
What is the best PHP handler for this situation, in terms of performance first and security second?
Is it suphp / dso / fcgi / cgi? (I read that DSO can leave a hole if one of the scripts has a bug.)
myusername#mybox [~]# /usr/local/cpanel/bin/rebuild_phpconf --available
Available handlers: suphp dso fcgi cgi none
PHP4 SAPI: cgi
PHP5 SAPI: not installed
SUEXEC: available
RUID2: not available
Thank you.
There is a great article on this on the ServInt blog: http://blog.servint.net/2011/10/28/the-tech-bench-all-about-php-handlers/
Be sure to visit their site; it also has comparison charts.
List of PHP handlers
DSO
Also known as mod_php. While this is an older configuration, its main benefit is speed. It is generally considered the fastest handler. It runs PHP directly from Apache, without having to pass it to a separate service for processing. This means that PHP scripts will run as the Apache user, which by default on our servers is the user ‘nobody’.
DSO has two things to consider before switching to it. First, any files that need to be written to by the webserver have to have write permissions for the ‘nobody’ user, and any file created by the webserver will be owned by ‘nobody’. Websites that need to upload files through PHP may run into permissions issues, since the settings aren’t as clear cut as the other handlers. This is common with WordPress users that upload files through the WordPress interface or utilize the auto-update feature.
A special note about the above: it is a common misconception that files need to have the 777 mode to be writable. This is not true, and generally a bad idea since it means the files are writable by anyone. To make a file writable by the webserver, the highest permissions needed should be 664 and owned by ‘user’ and group ‘nobody’. For directories this would be 775 and user:nobody. This should be enough to allow the webserver to write to the location without making it writable by everyone and introducing a potentially critical security vulnerability.
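In shell terms, the suggested ownership and permissions for a file and a directory the webserver must write to under DSO would look roughly like this (replace 'user' and the file names with your own; they are placeholders):

chown user:nobody upload.php && chmod 664 upload.php   # file: rw for owner and group, read-only for others
chown user:nobody uploads/   && chmod 775 uploads/     # directory: needs the execute bit to be traversable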
Also, know that DSO offers a different type of security as opposed to suPHP/FastCGI. Since the server runs it as ‘nobody’ anyone that would be able to exploit a file on your server to gain elevated privileges will have access to any other file that the webserver can directly access. What this simply means is that an intruder could have access to files across multiple accounts, but only those files that are owned by ‘nobody’. Please see the Security section below for more information.
The main advantage of DSO is speed and resource usage. With opcode caching extensions like eAccelerator or APC installed, DSO will run significantly faster and at a lower footprint than the other handlers. It is also the default setting on our servers.
A good rule of thumb is that DSO is best for servers that are running one or two large, high-performance websites where efficiency and speed are a concern.
CGI
CGI handler will run PHP as a CGI module as opposed to an Apache module. The CGI method is intended as a fallback handler for when DSO is not available. This method is neither fast nor secure, regardless of whether or not suEXEC is enabled. ServInt therefore does not recommend using CGI.
suPHP
suPHP runs PHP as a separate service that then passes the compiled code back to Apache for serving. It is technically a CGI module, however it is much different than the CGI handler mentioned above. The main difference, and the advantage of having suPHP, is that with suEXEC enabled it runs the PHP scripts as the user calling them, rather than as the ‘nobody’ user. For example, if an account is owned by the user Spock, all instances of Apache serving that user’s website will run as user Spock. The advantage here is that it makes tracking down websites using excessive resources easier.
Another advantage of running the process as the user is that it simplifies the overall permissions scheme. The webserver will be able to write to files that are owned by the user and not just ‘nobody’. What this means in the long run is that auto-update/install features in many CMS solutions will work more easily, and the general permissions of your file/directories is more clear-cut: 644 and owned by user and group user for files, and similarly 755 and user:user for directories.
The security difference between suPHP and DSO is that suPHP confines an intruder to the particular user that he/she has affected. The exploit can’t cross accounts, however it can affect every single file the user owns as opposed to just the files writable by the webserver. Please see the Security section below for more information.
The main disadvantage of suPHP is speed and CPU load. suPHP runs much slower than the other handlers, and you will see a significant increase in your overall CPU load when switched to it. suPHP also cannot work with an opcode caching extension such as eAccelerator or APC, which is part of the reason for the increase in CPU usage. Because of this, it is recommended that you implement a caching plugin if you use this with a CMS, such as W3 Total Cache for WordPress. This handler is recommended more for the smaller reseller client, because it locks down exploits to one affected account, and the CPU load increase will not be that significant with less busy sites or a small number of individual cPanel accounts.
FastCGI
FastCGI (aka mod_fcgid) is similar to suPHP in that it is a separate process that compiles the PHP, which is then sent back to Apache. It is also a CGI module, which means that with suEXEC enabled PHP runs the process as the user. This gives you the same permissions advantages as suPHP mentioned above. The difference between the two, however, is how they control the PHP processes. suPHP runs each time a PHP process needs to be compiled, whereas FastCGI keeps persistent connections open that can be recalled by the same PHP process. This allows you to use an opcode caching extension such as eAccelerator or APC with it to increase performance.
The drawback is that FastCGI has high memory usage. Because the persistent connections mentioned above are kept in memory, you are going to see much more of your available memory taken up by FastCGI.
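As a hedged illustration of that trade-off (directive names follow mod_fcgid 2.3+; the config file path is an assumption, adjust for your distro), these settings control how many PHP processes are kept alive and for how long, which is exactly where the extra memory goes:

cat >> /etc/httpd/conf.d/fcgid.conf <<'EOF'
FcgidMaxProcesses 50            # upper bound on persistent PHP processes
FcgidMaxRequestsPerProcess 500  # recycle a process after this many requests
FcgidIdleTimeout 120            # kill processes idle longer than this (seconds)
EOF
service httpd reload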
A good analogy of FastCGI and suPHP is to think of a book with several chapters. With suPHP, this book will have no table of contents and no index, so each time you want to find something you have to look through the book to get it. This takes time (increased CPU usage) and is slow. With FastCGI, this book has an extensive index and table of contents, so you can quickly and easily find what you are looking for; however these additional pages make the book much thicker (increased memory usage).
I'm looking for advice on how to set up the default configuration of php.ini and my.cnf for a small site (100 pages) with very little traffic (300 visitors per day). All pages have a bit of text, some images, no video, no audio, no Flash/Silverlight, and very little JavaScript and jQuery. For tracking I'm using GA and Piwik. The main site database is around 50MB.
The site is hosted on a virtual server with 20GB RAM and 6 vCPUs so there's hopefully a lot of muscle to make it run very fast.
I don't know much about tweaking php and mysql settings and would appreciate it if your answers can be as detailed as possible.
Thanks
You don't need any special configuration. Your server is so severely oversized for the task that it really hurts. Any cheap webhosting offer with some PHP and database would suffice, given your access numbers are correct.
If you really grow into areas where your server shows signs of overload, your problems will be so special that any general advice on configuration given today is wrong.
Just follow the recommended default settings for production servers for PHP 5.4 and MySQL, unless you use software that needs them different and states so in its documentation.
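As a hedged example of doing exactly that for PHP 5.4: the source tarball ships a php.ini-production template with sane defaults (distro packages usually already use production-leaning settings; the paths below are assumptions):

cp /etc/php.ini /etc/php.ini.bak                  # keep whatever is there now
cp php-5.4.*/php.ini-production /etc/php.ini      # adjust the destination to match your install
php --ini                                         # confirm which php.ini is actually loaded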
I don't think you can tune php.ini and my.cnf that much. You can run the MySQL tuning primer script (see How can I optimize my MySQL server? and https://stackoverflow.com/questions/10820933/ive-run-mysql-tuning-primer-but-i-cant-understand-it), but its output can be difficult to understand. I would suggest enabling the slow query log and examining the slow queries. I also suggest installing nginx or lighttpd with FastCGI (php-cgi) and eAccelerator; it's much faster and easier to configure, and lighttpd has some interesting parameters. If you can get KVM virtualization, you also get access to kernel parameters. I also suggest compiling PHP yourself and configuring it to your needs; with PHP from the repository you don't get every configure switch available as a module. Also enable HTTP compression and HTTP cache headers. Since you have 20 GB of RAM, set up a ramdisk and move the temporary folders onto it.
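For the slow query log part, a minimal my.cnf sketch (the variable is slow_query_log on MySQL 5.5+; older 5.0/5.1 servers use log_slow_queries instead, and the service name varies by distro):

cat >> /etc/my.cnf <<'EOF'
[mysqld]
slow_query_log      = 1
slow_query_log_file = /var/log/mysql-slow.log
long_query_time     = 2
EOF
service mysqld restart   # or "service mysql restart" depending on the distro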
I have started having problems with my VPS: it fails to serve the pages on all the websites. It just shows a blank page, or offers to download the PHP file (luckily the code was not in the downloaded file :) ).
The server was still running, but this seemed to be a problem with PHP, since I could still log in to WHM.
If I restarted Apache, the sites would work again.
After some talks with the server support, they told me this is a problem with the APC extension, which they consider to be old and not recommended for production servers. So they removed it for now, to see if the same kind of failures would continue to appear.
I haven't read anywhere that APC could cause such problems or that it's not always recommended; quite the contrary, everywhere people say to always use it.
The APC extension was installed via SSH and is the latest version.
Edit:
They also don't recommend memcache and say that a more reliable extension would be eAccelerator.
Um, APC is current tech and almost a must for any performant PHP site.
Not only that but it will ship as standard in PHP 6 (rather than being an optional module like it is now).
I don't know what your issue is/was but it's not APC being outdated or old tech.
I run several servers myself and the only time I have ever had trouble with APC was when trying to run it concurrently with Zend Optimizer. They don't work together so if I must use Optimizer (like if some commercial, third-party code requires it) I run eAccelerator instead of APC. Effectively 6 of one, half-dozen of another when it comes to performance but I really doubt APC is the problem here.
Just to add, memcached is only going to benefit you greatly if you are running multiple servers which need to access a shared data cache. Memcached does not do opcode caching like APC/eAccelerator/Xcache/etc.
The problem is not down to APC. If APC had a problem, it would either show up in your PHP log file or you simply wouldn't be able to access your website until you adjusted APC. The problem is more likely with Apache itself. I've experienced the same blank-page problem before, and it was down to mod_security playing up and preventing pages from being sent that looked "suspicious". Also, memory usage in Apache is good at killing the server under load. I've also had experience with a web host that had compiled Apache with a memory leak, so every X requests (say 100,000) the server would crash! Most annoying.
Your web host doesn't sound the most competent out there as they are giving some bad advice, most likely based on ignorance.
APC should be used in production (with the stat check, apc.stat, turned off in production but on for development). You can get more stats about your APC setup while it's working by loading the apc.php status file that comes with it, and you get a nice page like this: http://drupal.org/files/images/APC%20Status-1.png
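For reference, a hedged way to drop that status page into a site: apc.php ships inside the APC source package, so one option is to fetch it via pecl (the paths below are assumptions, and the page should be password-protected since it exposes server internals):

pecl download apc && tar xzf APC-*.tgz
cp APC-*/apc.php /home/youruser/public_html/   # placeholder docroot; restrict access to this URL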
Memcache is very heavily used as it's also distributed! The use cases are as follows:
APC is the fastest as it works most closely with PHP, but it only works on the same server executing the PHP itself, so its use is limited to that scope. It is used primarily as an opcode cache.
Memcache is like a very fast database spread over many computers working as one unit. However, a power cut will wipe the lot! Hence why it is heavily used to take pressure off the persistent database. Facebook and many other sites have hundreds of servers running memcache.
My advice would be to find a web host that understands PHP. Fighting with web hosts over who's right and who's wrong is hard work... until you find a good one ;)
Sounds to me like they are pushing a product that they probably have referral kickbacks on.
I run my own servers (and have for a while) and I've never had this problem, nor any MAJOR problems with memcache.