Using the PHP built-in server in production

I was recently curious about PHP 5.4's built-in web server. On the surface it seems that, while rather barebones, with enough work it could be used to distribute PHP applications that traditionally depend on a separate web server, like WordPress, as standalone scripts that you could just run with php -S localhost:80 app.php (or, more likely, ./wordpress.sh). They might even ship with their own PHP interpreter providing all the features the application needs, which would obviate the need to target many different versions of the language.
It's re-inventing the wheel somewhat, but it would certainly increase portability and reduce complexity for the end user.
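To illustrate, the built-in server accepts a router script, so a minimal sketch of such a standalone entry point might look like the following (app.php is just a hypothetical name, and the fallback to index.php is a placeholder):

    <?php
    // app.php - hypothetical router script, started with:
    //   php -S localhost:8080 app.php
    $path = parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH);

    // Returning false tells the built-in server to serve the
    // requested file as-is (static assets).
    if ($path !== '/' && is_file(__DIR__ . $path)) {
        return false;
    }

    // Everything else goes to the application's front controller.
    require __DIR__ . '/index.php';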
However, I saw the following on the documentation page:
This web server was designed to aid application development. It may also be useful for testing purposes or for application demonstrations that are run in controlled environments. It is not intended to be a full-featured web server. It should not be used on a public network.
This would obviously refer to issues like proper filesystem security and serving the correct HTTP headers, which can be worked through. However, is there more to it? Are there inherent security concerns and/or technical limitations with using PHP's built-in web server in a production environment that can't be worked around? If so, what are they?

I can think of plenty of operational reasons why you wouldn't want to do this:
Logging
Rewrites
Throttling
Efficiency (not tested, but I'm guessing Nginx is a lot faster than PHP's built-in non-optimized server)
Integration with anything else you have that extends Nginx, Apache, and IIS (things like New Relic)
However, there is a solution where you get most of the benefit of running PHP with its built-in web server while getting most of the benefit of running a web server out front. That is, you could use a server like Nginx as a reverse proxy to PHP's built-in web server. In this situation, HTTP becomes a replacement for FastCGI, analogous to common usages of the built-in HTTP server in Node.js applications.
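As a rough sketch of what that might look like (the server name and ports are placeholders, not a vetted production config), the nginx side could be as simple as:

    # nginx reverse proxy in front of the PHP built-in server,
    # which would be started separately with something like:
    #   php -S 127.0.0.1:8080 app.php
    server {
        listen 80;
        server_name example.com;

        location / {
            proxy_pass http://127.0.0.1:8080;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }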
Now, I can't speak to the specifics of the warning in the documentation, as I am not one of the PHP authors. If it were me, I'd not run PHP alone for the reasons above, but I might consider running it behind a real web server like Nginx. For me, though, setting up PHP with PHP-FPM and whatnot isn't that difficult, and I'll take that over guessing at the seaworthiness of a built-in server that is documented to be for testing only.

The problem with PHP's built-in web server is that it is single-threaded!
That has performance and security implications. The performance implication is obvious: only one user can be served at a time (until one request finishes, another cannot start).
The security implication is that it's pretty easy to DoS that server with a single open socket that sends tiny amounts of data (similar to a Slowloris attack).
It's useful for simple, one-page, non-interactive applications that have no risk of denial of service.
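You can see the blocking behavior with a trivial script (sleep.php is a made-up name) served via php -S; request it from two terminals at once and the second response arrives only after the first one's sleep completes:

    <?php
    // sleep.php - run with: php -S 127.0.0.1:8080 sleep.php
    // The built-in server processes one request at a time, so a
    // concurrent second request waits for this sleep to finish.
    sleep(5);
    echo "Done at " . date('H:i:s') . "\n";

(For what it's worth, PHP 7.4 and later can spawn several workers if the PHP_CLI_SERVER_WORKERS environment variable is set, but the default is still a single process.)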

PHP's built-in server only supports HTTP/1.0, which means clients have to open a new TCP/IP connection for every request. This is very slow.
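If you want to check this against your own PHP version, a small probe like the following (a sketch; it assumes a built-in server is already listening on the placeholder address 127.0.0.1:8080) can test whether the server honours keep-alive:

    <?php
    // keepalive_probe.php - does the server keep the socket open?
    // Assumes a server started with: php -S 127.0.0.1:8080
    $fp = stream_socket_client('tcp://127.0.0.1:8080', $errno, $errstr, 5);
    if (!$fp) {
        exit("Connection failed: $errstr ($errno)\n");
    }
    fwrite($fp, "GET / HTTP/1.1\r\nHost: localhost\r\nConnection: keep-alive\r\n\r\n");
    stream_set_timeout($fp, 2);
    // Drain the response; the loop ends at EOF or at the 2s timeout.
    while (($chunk = fread($fp, 8192)) !== false && $chunk !== '') {
        // keep reading
    }
    $meta = stream_get_meta_data($fp);
    // A timeout means the server left the connection open (keep-alive);
    // EOF means it closed the socket after a single response.
    echo $meta['timed_out'] ? "Connection kept alive\n" : "Server closed the connection\n";
    fclose($fp);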

It is not intended for production use and may not gracefully handle crashes and memory leaks, raising stability concerns. More importantly, PHP itself warns of this explicitly:
Warning This web server was designed to aid application development. It may also be useful for testing purposes or for application demonstrations that are run in controlled environments. It is not intended to be a full-featured web server. It should not be used on a public network.
http://php.net/manual/en/features.commandline.webserver.php

Related

Deploying Laravel site via Nginx Vs. PHP Artisan Serve

Locally, I only ran php artisan serve and it works fine. On my production VM, I'm not sure if I should just do the same (php artisan serve &) so I don't have to install Nginx, configure the document root, and so on.
Are there any disadvantages in doing that?
nginx
designed to solve the C10K problem
performs extremely well, even under huge load
is a reverse proxy
uses a state-of-the-art HTTP parser to check whether a request is even valid
uses an extremely powerful yet simple config syntax
comes with a plethora of modules to deal with HTTP traffic (auth module, mirror module)
can terminate SSL/TLS
can load balance between multiple PHP serving endpoints (or any other endpoints that speak HTTP)
can be reloaded to apply a new config without losing current connections
php artisan serve
designed for quickly fiddling with a Laravel-based website
written in PHP, and isn't designed to solve the C10K problem
will crash once available memory is exceeded (128 MB by default, which fills up quickly)
isn't a reverse proxy
doesn't use a state-of-the-art HTTP parser
isn't stress tested
can't scale out to other machines the way nginx does
doesn't terminate SSL. Even if it did, it would be painfully slow compared to a compiled solution
isn't event-based or threaded the way php-fpm/nginx are, so everything executes in the same process. There's no reactor pattern for offloading work to workers, scaling across CPU cores, or protecting the server from being brought down by a misbehaving piece of code. If you load too much data from MySQL, the process goes down, and the server with it.
Configuring nginx takes about 30 seconds on average for an experienced person. I'm speaking from experience, since it's my daily job. Using automation tools like Ansible makes this even easier; you can almost forget about it.
Using a web server designed for fiddling with and quickly testing a piece of your code comes with risks in production. Your site will be slower. Your site will be prone to crashing if any script kiddie decides to run a curl request in a foreach loop.
If you think installing and configuring nginx is a hassle and you want to go with php artisan serve anyway, make sure you run it supervised (supervisord is my go-to tool). If it crashes, it'll boot back up again.
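A minimal supervisord program entry for that could look like this sketch (all paths, hosts, and ports are placeholders):

    ; /etc/supervisor/conf.d/laravel-serve.conf (hypothetical path)
    [program:laravel-serve]
    command=php artisan serve --host=127.0.0.1 --port=8000
    ; placeholder project path
    directory=/var/www/myapp
    autostart=true
    ; boot the server back up if it crashes
    autorestart=true
    stdout_logfile=/var/log/laravel-serve.out.log
    stderr_logfile=/var/log/laravel-serve.err.log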
In my opinion, it's pointless to run a PHP-based server to serve your app. The amount of time spent configuring nginx / php-fpm isn't humongous, even if you're new to it.
Everything comes with risks and gains, but in this particular case the gain doesn't exist, while it's near-certain that something will go wrong.
TL;DR
Don't do it; spend those few minutes configuring nginx. The best software is the kind that does its job so well you can forget about it, and nginx is one of those tools. PHP excels in many areas, but its built-in web server is not something you should use in production. Go with battle-proven tools.
php artisan serve should never be used in a production environment, as it uses PHP's built-in server functionality, which is designed for development purposes only.
See this page
So, please avoid using it in production. Instead, use Apache or Nginx; both are good choices, depending on your needs. Nginx is usually (though not always) faster.

Run PHP as a server without Apache

I'm currently working with PHP, and I've found that I can serve a web page using just the PHP CLI, so I don't understand exactly why we have to install an additional server like Apache or Nginx.
I don't know why your question was voted down. I see it as an opportunity to focus on a slightly broader but highly related set of questions: Why should we be extremely careful about what software we allow onto public-facing infrastructure? More generally, what sort of software is okay to place onto public-facing infrastructure? And its corollary: what does good server software look like?
First off, there is no such thing as secure software. This means you should always hold a very skeptical view of anything that opens even a single port on a computer to enable network connections (in either direction). However, there is a very small set of software that has had enough eyeballs on it to guarantee a certain minimum level of assurance that things will probably not go horribly wrong. Apache is the most battle-tested web server out there, and Nginx comes in a close second as far as modern web servers are concerned. The built-in PHP HTTP server is not a good choice even for testing production software, let alone for a public-facing system, as it lacks the qualities of good network server design and may contain undiscovered security vulnerabilities. For those and other reasons, the developers include a warning against using the built-in PHP server. It was added because users kept asking for it, but that doesn't mean it should be used.
It is also a good idea not to trust network servers written by someone who doesn't know what they are doing. I frequently see ill-conceived network servers written in Node or Go, typically WebSocket-based solutions or servers built to work around some issue with another piece of software, that implicitly open security holes in the infrastructure even though the author didn't intend to do so. Just because someone can do something doesn't mean they should, and when it comes to writing network servers, they usually shouldn't.

Frequently those servers are proxied behind Apache or Nginx, which affords some defense against standard attacks. However, once an attacker gets past the defenses of Apache or Nginx, it's up to the software to provide its own defenses, which, sadly, are almost always significantly lacking. As a result, any time I see a proxied service running on a host, I brace myself for the inevitable security disaster that awaits, with Ruby, Node, and Go developers being the biggest offenders.

The moment a developer decides to write a network server is the moment they've probably chosen the wrong strategy, unless they have a very specific reason to do so AND are aware of and prepared to defend against a wide range of attack scenarios. A developer needs to be well-versed in a wide variety of disciplines before taking on the extremely difficult task of writing a network server, scalable or otherwise. It is my experience that few developers are actually capable of that task without introducing major security holes into their own or their users' infrastructure. While the PHP core developers generally know what they are doing elsewhere, I have personally found several critical bugs in their core networking logic, which shows that they are collectively lacking in that department. Their built-in web server should therefore be used sparingly, if at all.
Beyond security, Apache and Nginx are designed to handle "load" more so than the built-in PHP server. What load means is the answer to the question of, "How many requests per second can be serviced?" The answer is actually extremely complicated. Depending on code complexity, what is being hosted, what hardware is in use, and what is running at any point in time, a single host can handle anywhere from 20 to 20,000 requests per second and that number can vary greatly from moment to moment. Apache comes with a tool called Apache Bench (ab) that can be used to benchmark performance of a web server. However, benchmarks should always be taken with a grain of salt and viewed from the perspective of "Can we get this application to go any faster?" rather than "My application is faster than yours."
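For instance, a typical ab run looks like this (the URL and numbers are placeholders), issuing 1000 requests total with 10 in flight at a time:

    ab -n 1000 -c 10 http://127.0.0.1:8080/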
As far as developing software in PHP goes (since SO is a programming question site), I recommend mirroring your production environment as closely as possible. If Apache will be running remotely, then running Apache locally provides the best simulation of the real thing, so that there aren't a bunch of last-minute surprises. PHP code running under the Apache module may behave significantly differently from PHP code running under the built-in PHP server (e.g. $_SERVER differences)!
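One concrete difference is easy to guard against at runtime: the built-in server reports its own SAPI name, so dev-only code paths can be fenced off (a small sketch):

    <?php
    // php_sapi_name() returns 'cli-server' under the built-in web
    // server, versus e.g. 'apache2handler' or 'fpm-fcgi' elsewhere.
    if (php_sapi_name() === 'cli-server') {
        // Development-only behavior, e.g. verbose error output.
        ini_set('display_errors', '1');
        error_reporting(E_ALL);
    }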
If you are like me and don't like setting up Apache and PHP, and don't need Apache running all the time, I maintain a set of scripts for setting up portable versions of Apache, PHP, and MariaDB (roughly equivalent to MySQL) for Windows over here:
https://github.com/cubiclesoft/portable-apache-maria-db-php-for-windows/
If your software application is actually intended to be run using the built-in PHP server (e.g. a localhost only server), then I highly recommend introducing a buffer layer such as the CubicleSoft WebServer class:
https://github.com/cubiclesoft/ultimate-web-scraper/
By using a PHP userland class like that one, you can gain certain assurances that the built-in PHP server cannot provide while still having a pure PHP solution (i.e. no extra dependencies): there are fewer, if any, buffer overflow opportunities; the server is interpreted through the Zend Engine, resulting in fewer rogue code execution opportunities; and it has more features than the built-in server, including complete customization of the server request/response cycle itself. PHP itself can start such a server during OS boot by utilizing a tool similar to Service Manager:
https://github.com/cubiclesoft/service-manager/
Of course, that all means that a user has to trust your application's code that opened a port to run on their computer. For example, what happens if a website starts port scanning localhost ports via the user's web browser? And, if they do find the port that your software is running on, can that website start deleting files or run code that installs malware? It's the unusual exploits that will really trip you up. A "zero open ports" with "disconnected network cable/disabled WiFi" strategy is the only known way to truly secure a device. Every open port and established connection carries risk.
Good network-enabled software will have been battle-tested and hardened against a wide range of attacks. Writing such software is a responsibility that takes a lot of time to get right and it will generally show if it is done wrong. PHP's built-in server feels sloppy and lacks basic configuration options. I can't recommend its use for any reasonable purpose.
If you refer to the PHP documentation:
Warning: This web server was designed to aid application development. It may also be useful for testing purposes or for application demonstrations that are run in controlled environments. It is not intended to be a full-featured web server. It should not be used on a public network.
http://php.net/manual/en/features.commandline.webserver.php
So yes, as it states, this is a good tool for testing purposes. You can quickly start a server and test your scripts in your browser. But that does not mean it provides all of the features you get with a production-level server like Apache or Nginx :)
You can use the built-in server in your local development environment. But in your production environment, which demands much more in terms of security, handling large numbers of requests, and so on, you should use a more secure, feature-rich web server.

Is it more performant to use Proxygen or NGINX + FastCGI local socket with HHVM?

HHVM has a built in Server, Proxygen. You can run HHVM with the Proxygen server or run it in FastCGI mode, using another server such as nginx or apache to handle web requests.
I cannot find any benchmarks or authoritative source that gives any indication of which of the two options performs best. Obviously I could provision two systems, manually test various loads under different concurrency combinations, and put together a benchmark, but I'd rather avoid the work if someone has already done such a comparison.
Does anyone know in general which is the better option from a sheer performance standpoint?
I have not done any measurement. But in theory, the Proxygen server should be more performant because it runs in the same process as the PHP worker threads, thus avoiding some inter-process communication overhead. The Proxygen server is used at Facebook, and some efforts have been made to make it more reliable, e.g., protection mechanisms for when the JIT compiler isn't fully warmed up. However, these should not matter much for other users. If you already have a favorite apache/nginx setup and do not want to spend the time tuning settings for another HTTP server, use FastCGI.
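For reference, FastCGI mode is typically started along these lines (a sketch; the port is a placeholder), with nginx or apache then passing requests to that port via their FastCGI directives:

    # run HHVM as a FastCGI backend on a placeholder port
    hhvm --mode server -vServer.Type=fastcgi -vServer.Port=9000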

Technical aspects of running Node.js and apache in parallel

Earlier today, I asked a question on the Programmers StackExchange: Is it bad practice to run Node.js and apache in parallel?
My end application can be considered a social network in which I want to have a chat feature and a normal status update feature.
For the chat feature, I'd like to use Node.js because I want to push data from the server to the client instead of polling the server frequently. For the status update, I want a normal apache and PHP installation, because I'm way more familiar with that and don't see why I'd use Node.js for that.
However, that would mean I'd have to run Node.js and apache in parallel. Whilst that is possible and not considered bad practice according to the answer on Programmers.SE, I do see a few technical problems:
I'd need two ports open, which could be a problem on networks that don't have all ports open
I can't use my shared server because I'm not allowed to open a port there, so I'd have to buy a VPS
I don't care too much about the second one, more about the first one. So are there really no solutions to combine both features on one port?
Or is there some workaround for the ports? Could I, for example, redirect subdomain.domain.com:80 to domain.com:x where x is the port of Node.js? Would that be possible and solve my problem? This solution was given in this Programmers.SE answer, but how would I go about implementing it?
You could proxy all requests to Node.js through Apache (using mod_proxy), so you won't have any trouble with multiple open ports. This also allows you to remap everything to subfolders or subdomains.
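A minimal sketch of that vhost might look like this (the domain and port 3000 are placeholders, and mod_proxy plus mod_proxy_http must be enabled):

    # Apache forwards everything on this vhost to the Node.js app
    <VirtualHost *:80>
        ServerName subdomain.domain.com
        ProxyPreserveHost On
        ProxyPass        / http://127.0.0.1:3000/
        ProxyPassReverse / http://127.0.0.1:3000/
    </VirtualHost>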
Performance-wise this is not the best solution, but if you are on a shared web space it doesn't really matter. (Shared servers are usually pretty slow, and if you get a larger user base you'll need to move to a separate server sooner or later anyway.)
As #TheHippo said you can do this with Apache's mod_proxy.
Nginx, however, may be faster, especially if you're running PHP >= 5.4 with FastCGI.
Nginx is also a better forwarding proxy than Apache, and its event-based model is in line with Node's event-based I/O. With the proper setup this could mean better overall performance.
If you're in a restricted environment (like shared server or no ability to change the webserver) then you should go with Apache and mod_proxy.

What are the side-effects of enabling process control (PCNTL) in PHP in a web server environment?

Below is a quotation from http://www.php.net/manual/en/intro.pcntl.php:
Process Control should not be enabled within a web server environment and unexpected results may happen if any Process Control functions are used within a web server environment.
What are the side-effects of enabling it on my web server? What are the threats and security concerns involved? Thanks a lot for your help.
There's a big difference between just enabling the extension and using the functions. Just enabling the extension should have no side effects whatsoever.
On the other hand, the functions made available can allow for some mischief. Forks can be abused, signals can be sent to other processes, telling them to perform actions that you otherwise might not want, and priorities of processes with the same owner as the web server daemon can be modified.
In other words, it's not something you'd want to enable unless you control all of the PHP running on that machine, which rules out, for example, a shared hosting environment.
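To make that concrete, here is the kind of thing the extension permits once enabled (a deliberately tame sketch; don't run it under a web SAPI):

    <?php
    // With pcntl enabled, any script can fork and wait on processes
    // owned by the web server user.
    $pid = pcntl_fork();
    if ($pid === -1) {
        exit("fork failed\n");
    } elseif ($pid === 0) {
        // Child: a hostile script would fork again in a loop here
        // (a fork bomb), exhausting the process table.
        exit(0);
    }
    // Parent: reap the child. Inside a web server, wait() calls like
    // this can just as easily confuse the SAPI's process management.
    pcntl_waitpid($pid, $status);
    // posix_kill() could likewise send SIGSTOP to sibling workers.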
If you enable this, an untrusted PHP code author could fork-bomb your server, which is harder to protect against than you might think.
An untrusted PHP code author could kill or suspend the webserver, or any processes that run as the same user as the webserver. (If the webserver runs untrusted PHP code as root, then it can stop or suspend all processes on the server.) Or, if you're using FastCGI or similar tools, it could kill or suspend any other tasks run as the same user.
An untrusted PHP code author could call the wait(2) family of functions, which will desperately confuse the server or FastCGI interface. It might hang it, or it might cause it to crash, depending on the server.
Of course, the PHP process control setting is really just advisory: bugs in the PHP interpreter could allow a malicious code author to do all these things and more. This setting is simply there to keep honest programmers honest.
Any code you run in mod_php (or similar technologies for other servers) will have complete access to everything the web server can do.
Any code you run in FastCGI (or similar technologies) will have complete access to everything that the FastCGI system can do, based on the operating system's access controls.
If you really want to confine what untrusted PHP code can do, I suggest looking into different mandatory access control mechanisms, such as AppArmor, TOMOYO, SELinux, or SMACK.
