Locally, I only ran php artisan serve and it works fine.
On my production VM, I'm wondering if I should just do the same with php artisan serve &
so I don't have to install Nginx, configure the document root, and so on.
Are there any disadvantages in doing that?
nginx
designed to solve c10k problem
performs extremely well, even under huge load
is a reverse proxy
uses state of the art http parser to check whether request is even valid
uses extremely powerful yet simple config syntax
comes with a plethora of modules to deal with http traffic (auth module, mirror module)
can terminate ssl/tls
can load balance between multiple php serving endpoints (or any other endpoints that speak http)
can be reloaded to apply new config, without losing current connections
php artisan serve
designed to quickly fiddle with laravel based website
written in php, isn't designed to solve c10k problem
will crash once available memory is exceeded (128 MB by default, which fills up quickly)
isn't a reverse proxy
isn't using state of the art http parser
isn't stress tested
can't scale to other machines the way nginx does
doesn't terminate SSL. Even if it did, it would be painfully slow compared to a pure compiled solution
isn't event-based or threaded the way php-fpm/nginx are, so everything executes in a single process. There's no reactor pattern offloading work to workers, nothing to scale across CPU cores or to protect the server if a piece of code is messed up. This means if you load too much data from MySQL, the process goes down, and therefore the server does too.
Configuring nginx takes an experienced person about 30 seconds on average. I'm speaking from experience, since it's my daily job. Using automation tools like Ansible makes this even easier; you can almost forget about it.
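For reference, here's roughly what a minimal config for a Laravel app looks like. This is a sketch; the paths and the php-fpm socket location are assumptions, so adjust them for your distro:

    # /etc/nginx/sites-available/myapp -- minimal sketch, paths are placeholders
    server {
        listen 80;
        server_name example.com;
        root /var/www/myapp/public;    # Laravel's front controller lives here
        index index.php;

        location / {
            # serve the file if it exists, otherwise hand off to index.php
            try_files $uri $uri/ /index.php?$query_string;
        }

        location ~ \.php$ {
            include fastcgi_params;
            # socket path varies by distro and PHP version
            fastcgi_pass unix:/run/php/php-fpm.sock;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        }
    }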
Using a web server designed for quickly fiddling with and testing part of your code in production comes with risks. Your site will be slower, and it will be prone to crashing if any script kiddie decides to run a curl request in a foreach loop.
If you think installing and configuring nginx is a hassle and you want to go with php artisan serve, make sure you run it supervised (supervisord is my go-to tool). If it crashes, it'll be booted back up again.
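If you do go that route, a supervisord program entry could look roughly like this (a sketch; paths and names are placeholders):

    ; /etc/supervisor/conf.d/myapp.conf -- sketch, paths are placeholders
    [program:myapp]
    command=php /var/www/myapp/artisan serve --host=0.0.0.0 --port=8000
    directory=/var/www/myapp
    autostart=true
    autorestart=true                    ; boot the process back up if it crashes
    stdout_logfile=/var/log/myapp.out.log
    stderr_logfile=/var/log/myapp.err.log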
In my opinion, it's pointless to run a PHP-based server to serve your app. The amount of time spent configuring nginx / php-fpm isn't humongous, even if you're new to it.
Everything comes with risks and gains, but in this particular case the gain doesn't exist, while it's near certain that something will go wrong.
TL;DR
Don't do it; spend those few minutes configuring nginx. The best software is the kind that does its job so well you can forget about it, and nginx is one of those tools. PHP excels in many areas, but the built-in webserver is not something you should use in production. Go with battle-tested tools.
php artisan serve should never be used in a production environment, as it uses PHP's built-in server functionality, which is designed for development purposes only.
See this page
So please avoid using it in production. Instead, use Apache or Nginx; both are good choices, depending on your needs. Nginx is usually (though not always) faster.
Related
HHVM has a built-in server, Proxygen. You can run HHVM with the Proxygen server or run it in FastCGI mode, using another server such as nginx or apache to handle web requests.
I cannot find any benchmarks or authoritative source that gives any indication of which of the two options performs better. Obviously I could provision two systems, manually test various loads under different concurrency combinations, and put together a benchmark, but I'd rather avoid the work if someone has already done such a comparison.
Does anyone know in general which is the better option from a sheer performance standpoint?
I have not done any measurement. But in theory, the Proxygen server should be more performant because it runs in the same process as the PHP worker threads, thus avoiding some inter-process communication overhead. The Proxygen server is used at Facebook, and some effort has gone into making it more reliable, e.g. protection mechanisms for when the JIT compiler isn't fully warmed up. However, these should not matter much for other users. If you already have your favorite apache/nginx setup and don't want to spend time tuning settings for another HTTP server, use FastCGI.
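For reference, switching between the two modes is just a matter of server options, as I understand the HHVM docs (ports here are placeholders; check the options against your HHVM version):

    # Proxygen mode: HHVM answers HTTP requests itself
    hhvm -m server -vServer.Type=proxygen -vServer.Port=8080

    # FastCGI mode: HHVM sits behind nginx/apache
    hhvm -m server -vServer.Type=fastcgi -vServer.Port=9000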
I am currently tasked with finding a solution for a serious PHP bottleneck which is apparently caused by server-side minification of CSS and JS when our sites are under high load.
Some details and what I have found out so far
I inherited a web application running on WordPress which uses a complex constellation of Doctrine, Memcached, and W3 Total Cache for minification and caching. Under heavy load our application begins to slow down rapidly. So far we have narrowed part of the problem down to the server-side minification process. Preliminary analysis has shown that the number of PHP processes starts to stack up under load and, upon reaching the process limit of 500 processes, slows everything down. This is also mentioned by the author of the minify library.
Solutions I have evaluated so far
Pre-minification
The most logical solution would be to pre-minify any of the files before going live. Unfortunately our workflow demands that non-developers should be able to edit said files on our production servers (i.e. after the web app has gone live). Therefore I think that pre-processing is out of the question, as it limits the editability of minified files.
Serving unminified files
75% of our users access our web application from mobile devices, especially smartphones. Unminified JS and CSS amounts to 432 KB and is reduced by 60-80% in size when minified. So while serving unminified files would solve the performance and editability problems, it is out of the question for the sake of our mobile users.
I understand that this is as much a technical problem as it is a workflow problem, and I guess we are open to working on both as long as we end up with better overall performance.
My questions
Is there a reasonable compromise which solves the PHP bottleneck problem, allows non-devs to make changes to live CSS/JS, and still serves reasonably sized files to clients?
If there is no such one-size-fits-all solution, what can I do to better our workflow and/or server-side behaviour?
EDIT: Because there were some questions / comments regarding the server configuration: our servers run Debian and are equipped with 32 GB of RAM and 24-core CPUs.
You can run a css/javascript compilation service like Gulp or Grunt via Node.js that minifies all your js and css assets on change.
This service can run in production, but that is not recommended without some architectural setup (having multiple versioned compiled files and auto-checking them via gulp or another extension).
I emphasize that patching features into production and directly editing it is strongly discouraged, as it can present live issues to your visitors, reducing your credibility.
http://gulpjs.com/
Using Gulp/Grunt would require you to change how you write your css/javascript files.
I would solve this with two measures: first, remove the WP-CRON operation that runs every time a user hits the application and move it to an actual cron job on the server. Second, use load balancing so that a single server is not taking the whole load of the work. That is your real problem: even if you fix your perceived code issues, you are still faced with the load issue.
I don't believe you need to change your workflow at all or go down the road of major modification to your existing system.
The WP-CRON task that runs each time a page is loaded causes significant load and slowness. You can shift this work from visiting users' requests to the server itself by running it at the server level. This reduces load, and these processes are most likely what you believe is slowing down the site.
See this guide:
http://www.inmotionhosting.com/support/website/wordpress/disabling-the-wp-cronphp-in-wordpress
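In short (the guide walks through this), you disable the built-in trigger in wp-config.php and call wp-cron.php from a real cron job instead; the domain and interval below are placeholders:

    // In wp-config.php: stop WP-CRON from firing on every page load
    define('DISABLE_WP_CRON', true);

    # In the server's crontab (crontab -e): run it every 15 minutes instead
    */15 * * * * wget -q -O - "https://example.com/wp-cron.php?doing_wp_cron" >/dev/null 2>&1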
Next, load balancing. Having a single server supply all your users when you have a lot of traffic is a terrible idea. You need to split the web server load.
I'm not sure where or how you are hosted, but I would move things to AWS and set up the WordPress site for load balancing there: http://www.mornin.org/blog/scalable-wordpress-amazon-web-services/
This will involve:
Load Balancer
EC2 Instances running PHP/Apache
RDS for your database storage for all EC2 instances
S3 Storage for the site's media
For user sessions, I suggest you just set up stickiness on the load balancer so users are consistently served by the same node they arrived on.
You can get a detailed guide on how to do this here:
http://www.mornin.org/blog/scalable-wordpress-amazon-web-services/
Or at server fault another approach:
https://serverfault.com/questions/571658/load-balancing-wordpress-on-amazon-web-services-managing-changes
The assumption here is that if you have high traffic, you are making revenue from it, so any time your service responds slowly it will turn users away or discourage them from returning. Changing the software could help, but you're treating the symptom, not the illness. The illness is that your server comes under heavy load. This isn't uncommon with WordPress and high traffic, so you need to spread the load instead of trying to micro-optimize. The difference is that the optimizations will be small gains, while load balancing and spreading the load actually solves the problem.
Finally, consider using a CDN to serve all of your media. This loads media faster and removes load from your system by reducing the number of requests to the server and its output to the clients. It also makes pages load consistently faster for people wherever they visit from, by supplying media from the nodes closest to them. At AWS this is called CloudFront. WordPress also offers this service for free via Jetpack (I believe), but from my understanding it does not handle all media.
I like the idea of using GulpJS. One thing you might consider is to have a wp-cron or even just a system cron that runs every 5 minutes or so and then runs a gulp task to minify and concatenate your css and js files.
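Hypothetically, the system-cron variant could be as simple as this (the path and the gulp task name are assumptions):

    # crontab -e: rebuild minified/concatenated assets every 5 minutes
    */5 * * * * cd /var/www/site && ./node_modules/.bin/gulp build >/dev/null 2>&1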
Another option that doesn't require scheduling but is based off of watching the file system for changes and then triggering a Gulp build to happen is to use incron (inotify cron). Check out the incron man page. Incron is great in that it triggers actions based on file system events such as file changes. You could use this to trigger a gulp build when any css file changes on the file system.
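An incrontab entry for that might look like the following (the watched path and the rebuild script are assumptions; incron runs the command directly, not through a shell, so wrap the gulp invocation in a script):

    # incrontab -e: trigger a rebuild whenever a css file is written or moved in
    /var/www/site/css IN_CLOSE_WRITE,IN_MOVED_TO /usr/local/bin/rebuild-assets.sh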
One caveat is that this is a Linux solution so if you're hosting on Windows you might have to look for something similar.
Edit:
Incron Documentation
I was recently curious about PHP 5.4's built-in webserver. On the surface it seems as though, while rather barebones, with enough work it could be possible to distribute PHP applications that traditionally depend on a separate web server, like WordPress, as standalone scripts that you could just run with php -S localhost:80 app.php (or, more likely, './wordpress.sh'). They might even ship with their own PHP interpreter that has all the features the application needs, which would obviate the need for targeting many different versions of the language.
It's re-inventing the wheel somewhat, but it would certainly increase portability and reduce complexity for the end user.
However, I saw the following on the documentation page:
This web server was designed to aid application development. It may also be useful for testing purposes or for application demonstrations that are run in controlled environments. It is not intended to be a full-featured web server. It should not be used on a public network.
This would obviously refer to issues like proper filesystem security and serving the correct HTTP headers, which can be worked through. However, is there more to it? Are there inherent security concerns and/or technical limitations with using PHP's built-in web server in a production environment that can't be worked around? If so, what are they?
I can think of plenty of operational issues why you wouldn't want to do this:
Logging
Rewrites
Throttling
Efficiency (not tested, but I'm guessing Nginx is a lot faster than PHP's built-in non-optimized server)
Integration with anything else you have that extends Nginx, Apache, and IIS (things like New Relic)
However, there is a solution where you get most of the benefit of running PHP with its built-in web server while getting most of the benefit of running a web server out front. That is, you could use a server like Nginx as a reverse proxy to PHP's built-in web server. In this situation, HTTP becomes a replacement for FastCGI, analogous to common usages of the built-in HTTP server in Node.js applications.
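As a sketch, the proxy side of that setup is only a few lines (the port is whatever you passed to php -S; the names are placeholders):

    # nginx in front of PHP's built-in server -- sketch
    server {
        listen 80;
        server_name example.com;

        location / {
            proxy_pass http://127.0.0.1:8000;   # php -S 127.0.0.1:8000 app.php
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }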
Now, I can't speak to the specifics of the warning in the documentation as I am not one of the PHP authors. If it were me, I'd not run PHP alone for the reasons above, but I might consider running it behind a real web server like Nginx. For me though, setting up PHP with PHP-FPM and what not isn't that difficult, and I'll take that over guessing at the seaworthiness of a built-in server that is documented to be for testing only.
The problem with PHP's built-in web server is that it is single-threaded!
That has performance and security implications. The performance implication, obviously, is that only one request can be served at a time (until one request finishes, another cannot start).
The security implication is that it's pretty easy to DoS that server, using a simple open socket that sends tiny amounts of data (similar to Slowloris).
It's useful for simple, one-page, non-interactive applications that have no risk of denial of service.
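You can see the single-threaded behavior for yourself with a sleeping script (file names here are hypothetical):

    # slow.php contains just: <?php sleep(5);
    php -S 127.0.0.1:8000 &
    time curl -s http://127.0.0.1:8000/slow.php &   # finishes after ~5s
    time curl -s http://127.0.0.1:8000/slow.php     # queued behind the first, ~10s total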
PHP's built-in server only supports HTTP/1.0, which means clients have to make a new TCP/IP connection for every request. This is very slow.
It is not intended for production use and may not handle crashes and memory leaks gracefully, raising stability concerns. More importantly, PHP itself warns of this explicitly:
Warning This web server was designed to aid application development. It may also be useful for testing purposes or for application demonstrations that are run in controlled environments. It is not intended to be a full-featured web server. It should not be used on a public network.
http://php.net/manual/en/features.commandline.webserver.php
This has been bugging me for a while now.
In a deployed PHP web application one can upload a changed php script and have the updated file picked up by the web server without having to restart.
The problem? Ruby, Groovy, & Python, etc. are all "better" than PHP in terms of language expressiveness, concision, power, ...your-reason-here.
Currently I am really enjoying Groovy (via Grails), but the reality is that the JVM does not do well (at all) with dynamic reloading of application code in production. Basically, PermGen out-of-memory errors are a virtual guarantee, and that means the application can crash at any time -- not good.
Ruby frameworks seem to have this somewhat solved, from what I have read: Passenger has an option to dynamically reload changed files in polled directories on the next request (thus preventing connected users from being disconnected, sessions lost, etc.).
Standalone Python I am not sure about at all; it may, like PHP, allow dynamic reloading of Python scripts without a web server restart.
As far as our web work is concerned, invariably clients wind up wanting to make changes to a deployed application regardless of how detailed and well planned the spec was. Telling the client, "sure, we'll implement that [simple] change at 4AM tomorrow [so as to not wreak havoc with connected users]", won't go over too well.
As of 2011 where are we at in terms of dynamic reloading and scripting languages? Are we forever doomed, relegated to the convenience of PHP, or the joys of non-PHP and being forced to restart a deployed application?
BTW, I am not at all a fan of JSPs, GSPs, or the Ruby/Python templating equivalents, despite their reloadability. This is a cake-and-eat-it-too thread, where we can make a change to any aspect of the application and not have to restart.
You haven't specified a web server. If you're using Apache, mod_wsgi is your best bet for running Python web apps, and it has a reloading mechanism that doesn't require a server restart.
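With daemon mode, mod_wsgi watches the WSGI script file's timestamp, so picking up new code is as simple as this (the path is a placeholder):

    # mod_wsgi daemon mode: touching the wsgi file reloads the app,
    # no Apache restart required
    touch /var/www/myapp/app.wsgi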
I think you're making a bigger deal out of this than it really is.
Any application for which it is that important that it never be down for 1/2 a minute (which is all it takes to reboot a server to pick up a file change) really needs to have multiple application server instances in order to handle potential failures of individual instances. Once you have multiple application servers to handle failures, you can also safely restart individual instances for maintenance without causing a problem.
I have found a lot of articles on the Internet comparing Nginx and Apache. However, all these comparisons are based on stress tests of web servers running PHP code. I suppose this is mainly because Apache is generally deployed with PHP, as in the LAMP architecture.
In my understanding, Nginx was created to solve the C10K problem with an event-based architecture. That is, Nginx is supposed to serve M concurrent requests with N threads/processes, where N is supposed to be much less than M. This is a big difference from Apache, which needs M threads/processes to serve M concurrent requests.
For PHP code, the programming model is not asynchronous. Each web request occupies one thread/process for PHP to handle it. So I don't see the point of comparing Nginx and Apache on PHP code.
The event-based architecture of Nginx should excel over Apache especially when requests involve I/O operations, for example when requests need to merge results from multiple other web services. With Apache+PHP, each request might take seconds just waiting for I/O operations to complete, and that consumes a lot of threads/processes. For Nginx this is not a problem, if asynchronous programming is used.
Would it make more sense to deploy Nginx with language supporting asynchronous programming model?
I'm not sure which programming language could dig the most potential out of Nginx, but it is definitely not PHP.
First and foremost, nginx does not support any application execution directly. It can serve static files, proxy requests to any other webserver, and a few other small things.
Historically, nginx aimed to handle many network connections, true, but the rationale was this: until Apache responds to the request of someone on a slow connection, it can do nothing. Apache has a limit on workers, so when there are lots of slow clients, anyone new has to wait until a worker finishes a transfer and resumes accepting new requests.
So the classic setup is nginx accepting external requests and proxying them to the local Apache; Apache handles the requests and gives the responses back to nginx to transfer to the clients. Thus Apache is eliminated from dealing with clients.
Regarding the question and where nginx fits in the picture: it's not that hard to utilize the system event frameworks these days. That's epoll for Linux, kqueue for FreeBSD, and others. At the application level there are lots of choices, Twisted for Python for example. So all you have to do is write your application with these frameworks, which 1) usually puts you in the async world and 2) gives you a way to build an HTTP service ready to be a backend for nginx. That's probably what you are aiming at.
So C10K doesn't seem to be a problem for nginx, nor for applications built around these frameworks. An example at hand is FriendFeed's Tornado server: written in Python, it uses epoll or kqueue depending on the system, and handles up to 8k connections easily, as I recall. There were some benchmarks and afterthoughts about scaling it further.
Something must be brewing in the Ruby world around all this async trend too, so they may come up with something similar, if they haven't already. Ruby's Passenger and Mongrel, whatever in essence they are (I'm blanking on this), do work with nginx, and that required writing modules for nginx. So the community takes nginx into account and does the extra work when it needs to.
PHP, by the way, will stay relevant for push once websockets are massively deployed. Oh well.
The point is that potential doesn't matter. PHP is something of a standard for web development, and so it's what people usually care about with servers; the fact that Nginx or Apache is optimised to run some obscure programming language y times faster than the other is irrelevant unless that language is PHP.