Is NGINX safe for webapps that usually rely on htaccess? - php

First of all, you don't have to convince me that NGINX is good, I know that and want to use it for all web projects.
I was asked this question the other day and didn't have a solid answer, as I hadn't looked much into that specific situation.
The Question, elaborated
Web software old and new, such as Drupal and WordPress, ships with .htaccess files, and while there are plenty of example nginx configs for them available with just a simple search, two questions remain:
How can I assure clients that nginx is secure for Drupal and WordPress respectively?
How can the nginx configs out there cover such complex systems and keep them just as safe as Apache + .htaccess does?
I don't need complete answers here; links to articles that try to answer this or similar questions would be enough.
I want to convince some clients to move to nginx for the huge performance boost it would give them, but they need more proof than I have with regard to security for systems that usually rely on .htaccess.

What is an .htaccess file?
It's a partial web server configuration file that's dynamically parsed on each (applicable) request. It allows you to override certain aspects of the web server's behaviour on a per-directory basis by sprinkling configuration snippets in them.
Fundamentally it does nothing a central configuration file can't; it just decentralises that configuration and lets users override it without having direct access to the web server itself. That has proven very useful and popular in the shared-hosting model, where the host does not want to provide full access to the web server configuration, or where doing so would be impractical.
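As a concrete example, this is the standard front-controller rewrite block that WordPress writes into its .htaccess on a default single-site install, typical of the per-directory overrides described above:

    <IfModule mod_rewrite.c>
    RewriteEngine On
    RewriteBase /
    RewriteRule ^index\.php$ - [L]
    RewriteCond %{REQUEST_FILENAME} !-f
    RewriteCond %{REQUEST_FILENAME} !-d
    RewriteRule . /index.php [L]
    </IfModule>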
nginx fundamentally uses one configuration file only, but you can add to that by including other configuration files with the include directive, with which you could ultimately mimic a similar system. But nginx is also so ridiculously performant because it does not do dynamic configuration inclusion on a per-directory basis; that is a huge performance drain on every request in Apache. Any sane Apache configuration will disable .htaccess parsing to avoid that loss of performance, at which point there's no fundamental difference between it and nginx.
Even more fundamentally: what is a web server? It's a program that responds to HTTP requests with specific HTTP responses. Which response you get depends on the request you send. You can configure the web server to send specific responses to specific requests, based on a whole host of variables. Apache and nginx can equally be configured to respond to requests in the same way. As long as you do that, there's no difference in behaviour and thereby "security".
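To make that concrete, here is a minimal sketch of how the protections people usually expect from a WordPress-style .htaccess translate into an nginx server block; the domain, paths, and PHP-FPM socket are illustrative and would need adjusting for a real site:

    server {
        listen 80;
        server_name example.com;              # illustrative domain
        root /var/www/wordpress;              # illustrative docroot
        index index.php;

        # Front-controller routing, the nginx counterpart of the rewrite rules above
        location / {
            try_files $uri $uri/ /index.php?$args;
        }

        # Deny access to hidden files such as .htaccess itself
        location ~ /\. {
            deny all;
        }

        # Never execute PHP uploaded into the uploads directory
        location ~* /wp-content/uploads/.*\.php$ {
            deny all;
        }

        location ~ \.php$ {
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_pass unix:/run/php/php-fpm.sock;   # adjust to your PHP-FPM socket
        }
    }

The point is not that this particular snippet is complete, but that every rule an .htaccess can express has a central-configuration equivalent.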

Related

Websocket complications

This is complicated, and not necessarily one question. I'd appreciate any possible help.
I've read that it is possible to have WebSockets without server access, but I cannot seem to find any examples that show how. I've come to the conclusion that I need this based on the following two things:
I've been struggling for the past several hours trying to figure out how to even get WebSockets to work with the WAMP server on my machine, to which I have root access. I installed Composer, but I cannot figure out how to use the composer.phar file to install Ratchet. I have tried other PHP WebSocket implementations (I'd prefer it be in PHP), but still cannot get them to work.
The webhost I'm currently using to test things out is a free host and doesn't allow SSH access. So even if I could figure out how to get WebSockets working with root access, it's a moot point when it comes to the host.
I've also found free VPS hosts by googling (limited everything, of course, but with full root access), though I'd prefer to keep something that allows more bandwidth (my free host is currently unlimited). And I've read that you can (and should) host the WebSocket server on a different subdomain than the HTTP server, and that it can even run on a different domain entirely.
It might also eventually be cheaper to host my own site, though I have no real idea about that, and in that case I'd still need to figure out how to get WebSockets working on my machine.
So, to sum up the several questions here: Is it possible to use WebSockets without root access, and if so, how? If that isn't truly possible, how do I properly install Ratchet when I can't figure out composer.phar (I have a composer.json with the Ratchet requirement in it, but I'm not sure it's in the right directory)? And is it possible to have the WebSocket server on a VPS while the HTTP server sits on an entirely different domain, and if so, is there any documentation about it?
Of course, there is the option of using AJAX: forcing the browser to reload a JS file every so often and using jQuery to update a series of divs regardless of whether anything has changed. But that could get complicated, and I'm not even sure it's possible (I don't see why it wouldn't be). Then again, I'd prefer WebSockets over that, since I hear they are much less resource-hungry than that kind of polling would be.
A plain PHP file running under vanilla LAMP (i.e. mod_php under Apache) cannot handle WebSocket connections. It wouldn't be able to perform the protocol upgrade, let alone actually perform real-time communication, at least through Apache. In theory, you could have a very-long-running web request to a PHP file which runs a TCP server to serve WebSocket requests, but this is impractical and I doubt a shared host will actually allow PHP to do that.
There may be some shared hosts that make WebSocket hosting with PHP possible, but they can't offer it without either SSH/shell access or some other way to run PHP outside the web server. If they're just giving you a directory to upload PHP files to, and serving them with Apache, you're out of luck.
As for your trouble with Composer, I don't know if it's possible to run composer.phar on a shared host without some kind of shell access. Some hosts (e.g. Heroku) have specific support for Composer.
Regarding running a WebSocket server on an entirely different domain, you can indeed do that. Just point your JavaScript to connect to that domain, and make sure that the WebSocket server provides the necessary Cross-Origin Resource Sharing headers.
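For reference, here is a minimal sketch of what a Ratchet-based server script looks like, assuming Ratchet has been installed with Composer (the cboden/ratchet package) and that you can run PHP from a shell; the class name and port are illustrative:

    <?php
    // Run from the command line (php server.php), not through Apache/mod_php.
    require __DIR__ . '/vendor/autoload.php';

    use Ratchet\MessageComponentInterface;
    use Ratchet\ConnectionInterface;
    use Ratchet\Server\IoServer;
    use Ratchet\Http\HttpServer;
    use Ratchet\WebSocket\WsServer;

    class EchoServer implements MessageComponentInterface {
        public function onOpen(ConnectionInterface $conn) {}
        public function onMessage(ConnectionInterface $from, $msg) {
            $from->send($msg);                 // echo the message back
        }
        public function onClose(ConnectionInterface $conn) {}
        public function onError(ConnectionInterface $conn, \Exception $e) {
            $conn->close();
        }
    }

    // Listens on its own port as a long-lived process, which is exactly what an
    // upload-only shared host won't let you do.
    IoServer::factory(new HttpServer(new WsServer(new EchoServer())), 8080)->run();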
OK... you have a few questions, so I will try to answer them one by one.
1. What to use
You could use Socket.IO. It's a JavaScript library for developing real-time web applications. It consists of two parts: a client side (which runs in the visitor's browser) and a server side. Basic usage requires almost no background knowledge of Node.js. There is an example tutorial for a simple chat app on the official Socket.IO website.
2. Hosting
Most hosting providers have a control panel (often cPanel) with the ability to install or activate various Apache plugins and so on. First check whether Node.js is already available; if not, you could contact support and ask whether adding it would be an option.
If you don't have any luck with your current hosting provider you could always switch hosts quickly as there are a lot of good deals out there. Google will definitely help you here.
Here is a list containing a few of the (maybe) best options. Keep in mind that although some hosting deals are paid, there are plenty of low-cost options to choose from.
3. Bandwidth
As you are worried about "resource hungry" code, maybe you can try hosting some of your content on Amazon CloudFront. It's a widely used content delivery network that guarantees quick connections and fast resource loading, since files are served from the server closest to the client. The best part is that you only pay for what you actually use, so if you don't have much traffic it's really cheap to run and still reliable!
Hope this helps ;)

Technical aspects of running Node.js and apache in parallel

Earlier today, I asked a question on the Programmers StackExchange: Is it bad practice to run Node.js and apache in parallel?
My end application can be considered a social network in which I want to have a chat feature and a normal status update feature.
For the chat feature, I'd like to use Node.js because I want to push data from the server to the client instead of polling the server frequently. For the status update, I want a normal apache and PHP installation, because I'm way more familiar with that and don't see why I'd use Node.js for that.
However, that would mean I'd have to run Node.js and apache in parallel. Whilst that is possible and not considered bad practice according to the answer on Programmers.SE, I do see a few technical problems:
I'd need two ports open, which could be a problem on networks that don't have all ports open
I can't use my shared-server because I'm not allowed to open a port there, so I'd have to buy a VPS
I don't care too much about the second one, more about the first one. So are there really no solutions to combine both features on one port?
Or is there some workaround for the ports? Could I, for example, redirect subdomain.domain.com:80 to domain.com:x where x is the port of Node.js? Would that be possible and solve my problem? This solution was given in this Programmers.SE answer, but how would I go about implementing it?
You could proxy all requests to Node.js through Apache (using mod_proxy), so you won't have any trouble with multiple open ports. This also allows you to remap everything to subfolders or subdomains.
Performance-wise this is not the best solution, but if you are on a shared web space it doesn't really matter. (Shared servers are usually pretty slow, and if you get a larger user base you'll need to move to a separate server sooner or later anyway.)
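A sketch of what that looks like, assuming Node.js listens on 127.0.0.1:3000 and mod_proxy plus mod_proxy_http are enabled; the subdomain is illustrative:

    <VirtualHost *:80>
        ServerName chat.example.com            # illustrative subdomain

        ProxyPreserveHost On
        ProxyPass        / http://127.0.0.1:3000/
        ProxyPassReverse / http://127.0.0.1:3000/

        # For actual WebSocket upgrades you would additionally need
        # mod_proxy_wstunnel and a ws:// ProxyPass rule for the socket path.
    </VirtualHost>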
As @TheHippo said, you can do this with Apache's mod_proxy.
nginx, however, may be faster, especially if you're running PHP >= 5.4 with FastCGI.
nginx is also better at proxying requests to a backend than Apache, and its event-based model is in line with Node's event-based I/O. With the proper setup this could mean better overall performance.
If you're in a restricted environment (like a shared server, or no ability to change the web server), then you should go with Apache and mod_proxy.
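For comparison, here is a sketch of the nginx equivalent, again assuming Node.js on 127.0.0.1:3000; the Upgrade/Connection headers are what let WebSocket connections pass through the proxy:

    server {
        listen 80;
        server_name chat.example.com;          # illustrative subdomain

        location / {
            proxy_pass http://127.0.0.1:3000;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            proxy_set_header Host $host;
        }
    }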

Make Apache read PHP documents from RAM / Memory?

Is there any way to make Apache read PHP documents from RAM?
I'm thinking of creating a virtual disk in the memory and then modify httpd.conf to change the document root directory to the virtual disk in the memory.
Is this viable?
Basically, what I want to do is distribute my PHP code to my users' computers so they can run it. But I don't want them to be able to look at the PHP source code easily: the code can't be stored on the hard disk in plain text; instead, it is stored in a data file and then read by my program into memory, where Apache reads it.
Is this viable? Is it easy to create a virtual disk in memory in C++ such that the disk can't be accessed by any other means, such as My Computer?
Update:
Thank you all for the questions, which help me clarify my goals, but I think I know what I'm doing. Please just suggest any solutions you may have for my needs.
The hard part so far is getting Apache to read from somewhere other than a plain directory on the hard disk containing all of my project's source code. I would like it to be as concealed as possible. I know a little about Windows desktop development and thought a virtual disk might be a solution, but if you have better ones, please suggest them.
You can, in theory, have Apache serve files out of a Samba share. You would need to configure the server to mount a specific file share made by the user. This won't work if the user is behind a firewall or NAT gateway of any variety.
This will be:
Slower than molasses in January ... in Alaska. Apache does a lot of stat calls on each request by default. This is going to add a lot of overhead before even finding the file, transferring it over, and then executing it.
Hard to configure. Adding mounts is a non-trivial task at the server level and Samba can be rather finicky on both sides. Further, if you are using RHEL/CentOS or any other distro running SELinux, you're going to have to do the chcon/setsebool tapdance to even get it working. The default settings expressly prohibit Apache from touching any file that came to the system through a Samba share.
A security nightmare. You will be allowing Apache to serve up files to anyone from a computer that is not under your direct control. The malicious possibilities are endless. This is a horrible idea that you should not seriously consider.
A safer-but-still-insane alternative might be available. FastCGI. The remote systems can run a FastCGI process and actually host and execute the code directly. Apache can be configured to pass PHP requests to the remote FastCGI process. This will still break if the users are firewalled or NATted. This will only be an acceptable solution if the user can actually run a FastCGI process and you don't mind the code actually executing on their system instead of the server.
This has the distinct advantage of the files not executing in the context of the server.
Perhaps I've entirely misunderstood -- are you asking for code to be run live from user's systems? Because I wrote this answer under that interpretation.
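If that interpretation is right, the "pass PHP to a remote FastCGI process" idea would look roughly like this with mod_proxy_fcgi; the hostname, port, and path are illustrative, and all of the NAT/firewall and security caveats above still apply:

    # In the Apache vhost: hand every .php request to php-fpm running on the
    # user's machine (reachable, in this sketch, as user-host.example:9000).
    ProxyPassMatch "^/(.+\.php)$" "fcgi://user-host.example:9000/var/www/app/$1"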

Scaling File Systems

This could be a question for serverfault as well, but it also includes topics from here.
I am building a new web site that consists of 6 servers: 1 MySQL, 1 web, 2 file-processing servers, and 2 file servers. In short, the file-processing servers process files and copy them to the file servers. In this case I have two options:
I can set up a web server on each file server and serve files directly from there, like file1.domain.com/file.zip. Some files (not all of them) will need authentication, so I will authenticate users via memcache from those servers. 90% of the requests won't need any authentication.
Or I can set up NFS and serve files directly from the web server, like www.domain.com/fileserve.php?id=2323 (it's a basic example).
As the project is heavily file-based, the second option might not be as effective as the first, since it will consume more memory (even if I split files into chunks while serving).
The setup will stay same for a long time, so we won't be adding new file servers into the setup.
What are your ideas, which one is better? Or any different idea?
Thanks in advance,
Just me, but I would actually put a set of reverse proxy rules on the "web server" and then proxy HTTP requests (possibly load balanced if they have equal filesystems) back to a lightweight HTTP server on the file servers.
This gives you flexibility and the ability to implement future caching, logging, filter chains, rewrite rules, authentication &c, &c. I find having a fronting web server as a proxy layer a very effective solution.
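A sketch of that layout using nginx as the fronting proxy, assuming the two file servers hold identical content and run a lightweight HTTP server on port 8080; hostnames are illustrative:

    upstream file_servers {
        server file1.internal:8080;
        server file2.internal:8080;
    }

    server {
        listen 80;
        server_name files.domain.com;

        location / {
            proxy_pass http://file_servers;    # load-balanced across both file servers
            proxy_set_header Host $host;
        }
    }

Authentication for the protected 10% of requests could then be added at this proxy layer without touching the file servers themselves.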
I recommend your option #1: allow the file servers to act as web servers. I have personally found NFS to be a little flaky when used under high volume.
You can also use a content delivery network such as simplecdn.com; they can solve bandwidth and server-load issues.

Linux users and groups for a LAMP server

What is the best practice for setting up a LAMP server in terms of linux users and groups? If there are multiple sites hosted on the same server, is it best to have a single user that owns all site source files (and uploads) that is in the same group as apache - or to have a different user for each site (so that each site has its own crontab)? Or something else entirely?
For some reason, this question never seems to be addressed in PHP/MySQL/Linux books that I've encountered.
On our platform, each site's htdocs etc. has its own user. This means that if one site is compromised, the others should be fine.
If this is a small number of large sites, you may find that splitting your server into multiple VMs using something like Xen is a better option than simply segregating by user. This will improve the isolation of your sites, and make it easier to move a site to its own hardware if, in future, one starts to become much heavier on resource usage than the others.
I assume you don't want to go crazy and get WHM/cPanel, and may want to do this inexpensively.
I think it's best practice to have each user access their space under their own username and group, especially if unrelated users may be using the web server.
If you have over 10 domains and users and want to keep accounts segregated in their own space, I would consider using Webmin with VirtualMin installed on the server. It easily handles these types of issues in a nice, free install. Otherwise, you'll have to purchase a commercial product or handle everything manually, which is a real pain, but it can be done (not recommended for a commercial venture).
Also, Xen and VMs might be overkill, and are not as easy to manage as Webmin/VirtualMin for 10-100+ accounts.
The best choice is to create a VirtualHost for each domain using Apache with the suPHP module. This way, each site is owned by a user and runs with that user's permissions. The webroot of each site should be placed under the user's home directory to prevent local attacks.
If you use the same user for every website, a user from website A can read and write the files of website B.
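A sketch of one such VirtualHost, assuming mod_suphp is installed and compiled with a setid mode that supports suPHP_UserGroup (force or paranoid); the user, group, and paths are illustrative, and the handler/AddType wiring depends on your distribution's suPHP packaging:

    <VirtualHost *:80>
        ServerName sitea.example.com
        DocumentRoot /home/sitea/public_html   # webroot under the user's home dir

        # PHP in this vhost runs as the site's own user and group
        suPHP_Engine on
        suPHP_UserGroup sitea sitea
    </VirtualHost>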
I did some kind of small level hosting over several years and my answer is "It depends".
First of all, there is a difference between the Apache module (mod_php), CGI, and FastCGI.
A good list of all the pros and cons can be found here:
Apache php modes
When it comes to security all of the modes have pros and cons.
Since we only hosted a relatively small number of domains with moderate traffic, I decided to stay with mod_php and use a vhost configuration.
I also used different FTP users for each vhost root dir (of course).
Configuring vhosts (one per customer) allows you to switch off domains the easy way without digging your way through a ridiculously big httpd.conf and producing errors on the way.
