Best practices for withstanding a launch-day traffic burst - PHP

We are working on a website for a client that (for once) is expected to get a fair amount of traffic on day one. There are press releases, people are blogging about it, etc. I am a little concerned that we're going to fall flat on our face on day one. What are the main things you would look at to ensure (in advance without real traffic data) that you can stay standing after a big launch?
Details: This is a L/A/M/PHP stack, using an internally developed MVC framework. This is currently being launched on one server, with Apache and MySQL both on it, but we can break that up if need be.
We are already installing Memcached and doing as much PHP-level caching as we can think of. Some of the pages are rather query intensive, and we are using Smarty as our template engine. Keep in mind there is no time to change any of these major aspects--this is just the setup. What sorts of things should we watch out for?

Measure first, and then optimize. Have you done any load testing? Where are the bottlenecks?
Once you know your bottlenecks then you can intelligently decide if you need additional database boxes or web boxes. Right now you'd just be guessing.
Also, how do your load-testing results compare against your expected traffic? Can you handle two times the expected traffic? Five times? How easily and quickly can you acquire and release extra hardware? I'm sure the business requirement is not to fail during launch, so make sure you have plenty of capacity available. You can always release it afterwards, once the load has stabilized and you know what you need.

I would at least factor out all static content. Set up another vhost somewhere else and load all the graphics, CSS, and JavaScript onto it. You can buy some extra cycles by offloading the serving of that type of content. If you're really concerned, you can sign up with a content distribution network; there are lots of Akamai-like services now, and they're quite cheap.
Another idea might be to use Apache mod_proxy to keep generated page output for a specific amount of time. APC would also be quite usable: capture the page with output buffering, key it by the last-modified time of the related data, and serve the APC-cached version. If the cached page is no longer valid, regenerate it and store it in APC again.
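To make the APC idea concrete, here is a minimal sketch of full-page caching in a front controller, assuming the APC extension is installed; the key scheme, the 60-second TTL and the render_page() call are placeholders for your own MVC dispatch:

```php
<?php
// Minimal full-page cache sketch using APC (assumes the apc extension).
// Key scheme, TTL and render_page() are illustrative placeholders.
$cacheKey = 'page:' . md5($_SERVER['REQUEST_URI']);
$ttl      = 60;                      // how long to keep a generated page

$cached = apc_fetch($cacheKey, $hit);
if ($hit) {
    echo $cached;                    // serve the stored copy, skip queries/Smarty
    exit;
}

ob_start();                          // capture everything the page outputs
render_page();                       // placeholder for your MVC dispatch / Smarty render
$html = ob_get_flush();              // send it to the client and keep a copy

apc_store($cacheKey, $html, $ttl);
```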
Good luck. It'll be a learning experience!

Have a beta period where you allow in as many users as you can handle, measure your site's performance, and work out bugs before you go live.
You can either control the number of users explicitly in a private beta, or a Google-style semi-public beta where each user has a number of referrals that they can offer to their friends.

To prepare for or handle a spike (or peak) in load, I would first determine whether you are ready through some simple performance testing with something like JMeter.
It is easy to set up and get started with, and it will give you early metrics on whether you can handle the expected peak load.
However, given your time constraints, other steps to take would be to prepare static versions of the content that will attract the most attention on launch day (such as the press releases). Also ensure that you are making the best use of client-side caching (one fewer request to your server can make all the difference). The web is already designed for extremely high scalability, and effective use of content caching is your best friend in these situations.
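As a rough illustration of the "static versions" idea (the helper name and output path are made up), you could pre-render the press-release page once and let Apache serve the file without touching PHP or MySQL:

```php
<?php
// One-off pre-render script: write a high-attention page to a static file.
// render_press_release() and the output path are assumptions for illustration.
ob_start();
render_press_release();              // your normal controller / Smarty render
$html = ob_get_clean();

file_put_contents(__DIR__ . '/static/press-release.html', $html, LOCK_EX);
// Point launch-day links (or an Apache rewrite rule) at /static/press-release.html.
```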
When things calm down, there is an excellent Software Engineering Radio podcast on high scalability covering the design of the new Guardian website.
Good luck on the launch.

Personally, I'd do a few things:
1) Put in some sort of load balancer/database replication system
This means you can have your service spread across multiple servers. Can't afford more than one server permanently? Use Amazon EC2 - it's good for situations like this (switch on a few more servers to handle the load).
2) Code in some "High Load" restrictions
For example, if your search is inefficient, switch it off when load reaches a certain level: "Sorry, we're busy, try searching again later." (A minimal sketch of this kind of kill switch follows at the end of this answer.)
3) Load test... Use something like ApacheBench to stress test your servers.
4) Personally, I think switching "Keep-Alive" connections off is better. It may slightly reduce overall performance, but it means that if load gets to that level, instead of the site working well for a few people while the others get timeouts, everyone gets at least some level of service.
Linux Format did a good article on "How to survive a slashdotting"... which I've found useful in the past. It's available online as a PDF
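As promised under point 2, here is a minimal sketch of that kind of load-based kill switch; the threshold of 8.0 is an arbitrary example you would replace with a value from your own load testing:

```php
<?php
// Hedged sketch of a "high load" restriction: disable expensive search when the
// 1-minute load average crosses a threshold picked from load testing.
function search_allowed($maxLoad = 8.0)
{
    $load = sys_getloadavg();        // [1min, 5min, 15min] averages (not on Windows)
    return $load !== false && $load[0] < $maxLoad;
}

if (!search_allowed()) {
    header('HTTP/1.1 503 Service Unavailable');
    header('Retry-After: 120');
    exit('Sorry, we are busy - please try searching again in a couple of minutes.');
}
```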

Basic first steps to harden your site for high traffic.
Use a low-cost tool like https://browsermob.com/ to load-test your site. At a minimum, you should be looking at 100K unique visitors per hour. If you get an ad off of the MSN home page, look to be able to handle 500K unique visitors per hour.
Move all static graphic/video content to a CDN. Edgecast and Amazon are two excellent choices.
Use Jet Profiler to profile your MySQL server to analyze any slow performing queries. Minor changes can have huge benefits.
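If you cannot run a full profiler before launch, even a crude query timer in your DB layer will surface the worst offenders; this sketch assumes an existing mysqli connection, and the 0.5 s threshold and log path are arbitrary examples:

```php
<?php
// Crude slow-query logger: wrap your query call and log anything over 0.5s.
// Assumes an existing mysqli connection; threshold and log path are examples.
function timed_query(mysqli $db, $sql)
{
    $start  = microtime(true);
    $result = $db->query($sql);
    $took   = microtime(true) - $start;

    if ($took > 0.5) {
        error_log(sprintf("SLOW (%.3fs): %s\n", $took, $sql), 3, '/tmp/slow-queries.log');
    }
    return $result;
}
```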

Look into using Varnish - it's a caching reverse proxy server (like Squid, but much more single purpose).
I've run some pretty big sites behind it, and it seemed to work really well.

Related

PHP Image Generation

So, for a simple test game, I'm working on generating user images based on their current in-game avatar. I got this idea from Club Penguin and GTA V. They both generate images of the current in-game avatar.
I created a script to simply put a few images together and print out the final image to the client. It's similar to how Club Penguin does it, I believe: http://cdn.avatar.clubpenguin.com/%7B13bcb2a5-2e21-442c-b8e4-10516be6abc6%7D/cp?size=300
As you can see, the penguin is wearing multiple clothing items. The items are each different images located at http://mobcdn.clubpenguin.com/game/items/images/paper/image/300/ (ex: http://mobcdn.clubpenguin.com/game/items/images/paper/image/300/210.png)
Anyway, I've already made the script and all, but I have a few questions.
When going to Club Penguin's or Grand Theft Auto's avatar generator, you'll notice it finishes the request very fast. Even for a new user (so before it has a chance to cache the image, since it hasn't been generated yet), it finishes in under a second.
How could I possibly speed up the image generation process? Right now I'm just using PHP, but I could definitely switch over to another language. I know a few others too and I'm willing to learn. Which language can provide the fastest web-image generator (it has to connect to a database first to grab the user avatar info)?
For server specs, how much RAM and all that fun stuff would be an okay amount? Right now I'm using an OVH cloud server (VPS Cloud 2) to test it and it's fine and all. But, if someone with experience with this could help: what might happen if I started getting a lot more traffic and there were 100+ image requests per client when they first log in (a relationship system that shows their friends' avatars)? I'll probably use Cloudflare and other caching tools so that most of them get cached for a maximum of 24 hours, but I can't completely rely on that.
tl;dr:
Two main questions:
What's the fastest way to generate avatars on the web (right now I'm using PHP)?
What are some good server specs for around 100+ daily unique clients (at minimum) using this server for generating these avatars?
Edit: Another question, which webserver could process more requests for this? Right now I'm using Apache for this server, but my other servers are using nginx for other API things (like logging users in, getting info, etc).
IMHO, the language is not the bottleneck. PHP is fast enough for real-time processing of small images; you just need the right algorithm. Also, check out bytecode caching engines such as APC, XCache, or even HHVM. They can significantly improve PHP performance.
I think any VPS can do the job until you have more than about 20 concurrent requests. The more clients use the service at the same time, the more RAM you need. You can easily determine your script's memory needs and other performance information by using a profiler such as XHProf.
Nginx or Lighttpd in FastCGI mode use less RAM than the Apache HTTP server, and they can handle more concurrent connections. But that isn't important until you have many concurrent connections.
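For what it's worth, the layering itself is cheap with GD; here is a rough sketch (the file paths and item list are made up, and in reality they would come from your avatar table):

```php
<?php
// Rough GD sketch: stack transparent PNG clothing items onto a 300x300 canvas.
// The layer list would come from the user's row in the database.
$layers = array('base.png', 'hat.png', 'scarf.png');       // bottom-to-top order

$canvas = imagecreatetruecolor(300, 300);
imagealphablending($canvas, false);
imagesavealpha($canvas, true);
imagefill($canvas, 0, 0, imagecolorallocatealpha($canvas, 0, 0, 0, 127));
imagealphablending($canvas, true);                          // blend layers normally

foreach ($layers as $file) {
    $layer = imagecreatefrompng('/path/to/items/' . $file); // hypothetical path
    imagecopy($canvas, $layer, 0, 0, 0, 0, imagesx($layer), imagesy($layer));
    imagedestroy($layer);
}

header('Content-Type: image/png');
header('Cache-Control: public, max-age=86400');             // let Cloudflare cache it
imagepng($canvas);
imagedestroy($canvas);
```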
Yes, PHP can do this job quickly and flexibly (for example, generate.php?size=32).
I only know German hosting providers, but they also have an English interface: www.nitrado.net

A proper way to build a robust API for a social application?

There is an existing social web app, and the client now wants to make mobile applications for it. We need to make an API that will be robust and will not fall over when 100k existing users start using these mobile apps.
Do you have suggestion about the proper way to build this API?
We have made hundreds of APIs in the past that also served 100k users. But this app is a bit specific, and we anticipate many, many more concurrent API requests: 20 times more concurrent requests than ever before.
That is why I am reluctant to use our "standard" API logic before I see whether anyone here can advise some good tactics.
On the high level, there will be a database on the server and an API layer which will handle all the communication.
In short: look at your hardware solution.
Try to minimize I/O where possible. If possible, pack as many of the resources the app needs into one payload on the server side, then push it to the client in one go. You really don't want hundreds of includes on the client side.
One idea is to monitor how many simultaneous requests are in flight and, if it exceeds the threshold (you can run simulations to find out where the threshold of pain is), start buffering the requests in a queue to ensure each user gets a smooth experience. Maybe they'll wait longer in the beginning, but once loading starts it will be smooth rather than bits of the application loading piecemeal. The most annoying thing for a user is not knowing whether the thing has finished loading, or having it load halfway and then stop.
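A very rough sketch of that threshold idea, using an APC counter as a cheap gauge of in-flight requests (the limit of 200 is an arbitrary example, and a real job/message queue would be more robust than simply answering 503):

```php
<?php
// Hedged sketch: count in-flight API requests in APC and shed load over a limit.
$limit = 200;                                  // example threshold from your sims

apc_add('api_in_flight', 0);                   // create the counter if missing
$inFlight = apc_inc('api_in_flight');

if ($inFlight !== false && $inFlight > $limit) {
    apc_dec('api_in_flight');
    header('HTTP/1.1 503 Service Unavailable');
    header('Retry-After: 5');
    exit(json_encode(array('error' => 'busy, please retry shortly')));
}

register_shutdown_function(function () {
    apc_dec('api_in_flight');                  // release the slot when the request ends
});
```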
Ultimately, though, you'll need to look at the hardware solution. Look at setting up hardware load balancers and servers with SSD drives if it's I/O intensive.

Optimizations to reduce website loading time

What are some important optimizations that can be made to a website to reduce the loading time?
Remove/minimize any bottlenecks on the server side. For this purpose, use a profiler like Xdebug or Zend Debugger to find out where your application is doing expensive and slow operations. Implement caching where possible, and use an opcode cache. If this still isn't fast enough, consider investing in more CPU, RAM, or SSDs (depending on whether you are CPU-, I/O-, or memory-bound).
For general server/client side optimizations, see the Yahoo YSlow! User Guide.
It basically boils down to the following (a hedged PHP sketch for the caching-header and gzip items follows the list):
Minimize HTTP Requests
Use a Content Delivery Network
Add an Expires or a Cache-Control Header
Gzip Components
Put StyleSheets at the Top
Put Scripts at the Bottom
Avoid CSS Expressions
Make JavaScript and CSS External
Reduce DNS Lookups
Minify JavaScript and CSS
Avoid Redirects
Remove Duplicate Scripts
Configure ETags
Make AJAX Cacheable
Use GET for AJAX Requests
Reduce the Number of DOM Elements
No 404s
Reduce Cookie Size
Use Cookie-Free Domains for Components
Avoid Filters
Do Not Scale Images in HTML
Make favicon.ico Small and Cacheable
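Picking out the Expires/Cache-Control and gzip items from the list above, here is a hedged PHP-side sketch for assets that have to pass through PHP (for plain static files, Apache's mod_expires and mod_deflate are the better tools; the file path is an example):

```php
<?php
// Illustrative only: far-future caching headers plus gzip for a PHP-served asset.
ob_start('ob_gzhandler');                               // gzip if the client accepts it

header('Content-Type: text/css');
header('Cache-Control: public, max-age=2592000');       // 30 days
header('Expires: ' . gmdate('D, d M Y H:i:s', time() + 2592000) . ' GMT');

readfile(__DIR__ . '/css/screen.css');                  // hypothetical stylesheet
```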
Before attempting any optimizations, you first need to be able to profile: get Firebug for Firefox. Then you can run some analysis with YSlow that will tell you exactly what to do. The fundamental things you should do are listed here.
You definitely want to look at caching, as round trips to the DB are expensive.
Also, minify your JS.
Here are a few "best practice" things:
Caching CSS, JavaScript, images, etc.
Minifying Javascript files.
gzip content.
Place links to JavaScript files and inline JavaScript code at the bottom of your page when possible, and links to CSS files at the top.
Load only what is necessary.
For an existing website, before you do any of this determine where your bottlenecks are with tools like Firebug and as someone else mentioned YSlow (I highly recommend this tool).
Install Firebug and the PageSpeed plugin, follow all the PageSpeed directives (where possible), and be happy.
http://code.google.com/intl/it/speed/page-speed/
Anyway, the most important optimization in my experience is to reduce the number of HTTP requests to a minimum...
The simple options I can think of are:
Gzip (x)html, so a compressed file should arrive more quickly to the user
minify the CSS
minify the JS
use caching where possible
use a content-delivery network
use a tool such as YSlow to identify bottlenecks and get further suggestions
There are two sides you can care about when optimizing:
The server side: what matters is generating the output faster.
The client side: what matters is getting everything that has to be displayed to the user faster.
Note: we, as developers, often think about optimizing the server side first... which in most cases represents less than 10% of the loading time of the page!
On the server side, you'll generally want to :
profile, to determine what's long
optimize your SQL queries, and reduce their number
use caching
For more information, you can take a look at the answer I gave some time ago to this question: Optimizing Kohana-based Websites for Speed and Scalability
On the client side, the biggest gains are generally achieved by :
Reducing the number of HTTP requests -- the easiest way being to reduce the number of JS/CSS/images files, by combining several files into one
Compressing CSS/JS/HTML, using for instance Apache's mod_deflate.
On that topic, there is a lot of great material on Yahoo's Exceptional Performance pages: they've released lots of good practices and tools, such as YSlow.
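As a small, hedged sketch of the "combine several files into one" point (the whitelist and paths are made up; combining at build time is usually preferable to doing it per request):

```php
<?php
// Hypothetical combine.php: serve a fixed whitelist of CSS files as one response
// to cut the number of HTTP requests.
$files = array('reset.css', 'layout.css', 'theme.css');  // example whitelist

header('Content-Type: text/css');
header('Cache-Control: public, max-age=86400');

foreach ($files as $file) {
    readfile(__DIR__ . '/css/' . $file);
    echo "\n";
}
```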
We recently did this on our web site. Here we outlined nine techniques that seemed to have the highest impact with the least difficulty: http://mentormate.com/blog/easy-ways-speed-website-load-time/
The first optimisation is: Decide if it is slow, and if not, don't bother.
This is trickier than it sounds, because it's not like testing a desktop app or game. A game is slow if when you play it on the target hardware, the frame rate is too low. This is very easy to measure.
A web site is trickier, because you, as the developer, are probably using a local test system with a very fast network. Even when you use your staging / system test servers, you're probably still on the local network. Even your production servers are, in all likelihood, on the same continent.
The same is possibly not true for quite a lot of your users.
Therefore the options which exist are:
Find out by asking your users, whether they find it to be slow
Simulate a high latency environment and test it yourself (or your QA team)
Guesswork
The latter is not recommended.
An option which the holier-than-thou Yahoo web performance book (which, yes, is a book you can buy) doesn't mention much is HTTPS. Most web applications that handle important data run mostly or entirely over HTTPS, which changes the rules of the game rather a lot. Remember to do all testing with it enabled.
I wrote some things about this; see:
Google page speed test optimization
As already mentioned, you can use the YSlow or PageSpeed Firefox extensions. But you can also use GTmetrix, an online service that scans your page with both tools.
Features I like / use:
soft, clean and usable presentation
comparison with another page - it's really interesting to see where your friends / competitors stand
(By the way, I'm not affiliated with GTmetrix!)
To reduce network traffic, you can minify static files, such as CSS and Javascript, and use gzip compression on generated content. You can also try using tools such as optipng to reduce the size of images.
However, the first step to take is to actually analyse what's taking all of the time -- whether it's sending the bits over the network, or actually generating the content to send. There's no point making your CSS files 10% smaller if it takes a minute to generate each HTML page.
Don't use whitespace in code.
Load balancing would help to reduce the loading time immensely.

Speedup a php web site

What's the best way (or ways?) to speed up a PHP web site, and how much faster can it get using this or that approach?
PHP isn't really the kind of language where you can do micro-optimizations, or just work on the code alone. There's really no point. Although PHP isn't particularly fast, PHP itself is rarely the bottleneck in a given web site.
You need to work out where that bottleneck is before you can fix it. There are a lot of common bottlenecks, with common solutions. It's difficult to generalize, given so few details, but there are a lot of performance hints that apply to most web sites.
The first good place to look is actually on the client side, rather than the server side. How large are your pages (including images, CSS, JavaScript and the like)? How many HTTP requests does a single page view require? Use something like Firebug (and the YSlow add-on for Firebug) to see how long your page actually takes to load, and which bits of your page cause the problem. Some general hints:
Work out ways to shrink the CSS and JavaScript - remove anything you don't need, and run the rest through a tool like YUI Compressor.
If you have multiple CSS and JavaScript files, try to combine them into a single file.
Optimize all of your images as much as possible, and see if you can combine any of those into a single file using CSS sprites or similar. PunyPNG is good for lossless images. A decent JPEG encoder (NOT Photoshop) is good for photos.
Move the CSS to the top of the page, and the JavaScript to the bottom, so the browser can render the page before the JavaScript has finished downloading.
Make sure that all of your CSS, JavaScript and HTML are being served compressed.
Make sure that you're using appropriate caching - if a file hasn't changed, there's no point in re-downloading it.
Once you've got the client side out of the way, you might have to turn your attention to the server side.
Install an opcode cache, like APC, XCache, or Zend Optimizer. It's very easy to do, and will always provide some improvement. Once you've done that, profile your pages, to find out where the time is actually being spent.
More likely than not, you'll be spending most of your time waiting for the database to return results. So, at a bare minimum:
Work out which queries are taking the longest, and work on them first. Use your head though - a query that takes five seconds on an admin page that nobody looks at is not as important as a query that takes one second on the front page.
Make sure that your query uses appropriate indexes. No common query should ever need to do a full table scan. Certain kinds of sorting or grouping may be unable to use indexes - try to avoid them, or modify the query so that it can use indexes.
Make sure that your queries aren't using temporary tables.
Use the EXPLAIN keyword - it's very useful.
Tune the database server itself; MySQL's default configuration is generally not optimized for performance.
Once you've done that, it's usually best to start working out how to use caching. The best way to speed PHP code up is to reduce the amount of work it has to do.
Make sure your database's query cache is working properly.
Use something like Memcached to store frequently used results, instead of getting them from the database.
If you have enough memory, try to keep everything in Memcached, resorting to the database only when something isn't present in the cache.
If you have chunks of pages that are dynamic, but the same for all users, try caching those chunks. For example, if two users are looking at an article, the article itself is going to be exactly the same for each user, even if the rest of the page isn't. Generate the HTML for the article, and chuck it in the cache.
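A minimal sketch of that fragment-caching idea with the Memcached extension (the key scheme, five-minute TTL and render_article() helper are assumptions):

```php
<?php
// Cache the rendered article chunk in Memcached so only the first viewer pays
// for the queries and templating. Key, TTL and render_article() are placeholders.
$memcached = new Memcached();
$memcached->addServer('127.0.0.1', 11211);

$key  = 'article_html:' . $articleId;
$html = $memcached->get($key);

if ($html === false) {                       // cache miss: build it once
    $html = render_article($articleId);      // expensive queries + templating
    $memcached->set($key, $html, 300);       // keep it for five minutes
}
echo $html;
```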
If you have lots of non-authenticated users, it's entirely possible that they'll all be seeing the exact same page. Two non-authenticated users looking at the above article won't just see an identical article - they'll see an identical page, right down to the login links. Set your PHP scripts up so you can use HTTP caching headers (check the last modified date, and return a 304 Not Modified if it's not been changed). Once you've done that, stick a Squid reverse-proxy in front of the webserver, and let Squid serve pages out of its cache.
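A hedged sketch of those HTTP caching headers (get_article_mtime() stands in for however you track the last change, e.g. an updated_at column):

```php
<?php
// Send Last-Modified and answer conditional requests with 304 so Squid and
// browsers can reuse their copy. get_article_mtime() is a hypothetical helper.
$lastModified = get_article_mtime($articleId);           // unix timestamp
header('Last-Modified: ' . gmdate('D, d M Y H:i:s', $lastModified) . ' GMT');
header('Cache-Control: public, max-age=60');

if (isset($_SERVER['HTTP_IF_MODIFIED_SINCE']) &&
    strtotime($_SERVER['HTTP_IF_MODIFIED_SINCE']) >= $lastModified) {
    header('HTTP/1.1 304 Not Modified');
    exit;                                                 // a 304 carries no body
}

// ...otherwise render the full page as normal.
```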
After that point, the general approach is to start using more servers, and the problem becomes one of scaling, rather than raw speed. The general plan is to make sure that your website has a shared-nothing architecture - all persistent data is stored in the database. Then, you install multiple webservers, move the database server to a separate machine, and run the entire thing behind a caching reverse proxy. To add more capacity, you add more machines.
One way: PHP accelerators, e.g. APC.
Another: read blog articles, e.g. a performance tuning overview.
A general question, I would say. Try looking for optimization tips online...
Several parameters are involved:
I/O access (using it a lot - file_exists, is_file overheads)
Database access (optimize queries, use stored procedures, check your db cache)
Using an opcode cache (like APC)
Compressing output
Serving js/css minified and compressed (and using subdomains to deliver them to the browser)
Using memcache to cache data into memory for faster access
You can use benchmarking tools to test your environment before and after the optimizations.
Try ApacheBench, for example.
Filesize.
A file of 500 KB takes longer to download than a file of 300 KB, so optimize and crop as much as you can.
Accelerators
Self-explanatory: List of PHP accelerators
Server upgrade
Though this costs money, when dealing with a lot of traffic it will have an impact on how fast the .php files get processed and how fast data will be sent to the user.
I don't recommend this though since there are other (free) ways to improve speed.
Don't use external resources
If you are linking images from other sites, the download speed will not be in your control. Instead, if you plan on using images from others, download them to your own server first (or upload them to your own provider) and serve them that way.
Review and improve your code
Find shortcuts, remove unnecessary code, delete unused variables, reuse others, etc.
There are other ways, but I believe the above information has the most impact on your speed.
You should probably do some search for existing answers to this question, however...
APC for opcode caching
Memcached for object storing (to reduce the number of database queries)
Check for / optimize slow SQL queries
Measure and find bottlenecks
Don't rely on (slow) web services on each page load, etc.
Yahoo has some good basic advice on speeding up web pages, much of it very easy to implement. You may also want to download YSlow + Firebug for Firefox; they will help indicate possible basic bottlenecks from a client-request perspective.
The rest of the advice here is good, so I won't add much else other than this: don't bother optimizing any code until you're 100% sure that you've found a bottleneck. I can't stress that enough. Don't waste time tweaking code or implementing new things (i.e. caching) because you "feel" they will make things quicker; act only on real evidence (i.e. performance profiling).

How to get Google like speeds with php?

I am using PHP with the Zend Framework, and database connects alone seem to take longer than the 0.02 seconds Google takes to do a query. The weird thing is, today I watched a video that said Google connects to 1000 servers for a single query. With latency, I would expect one server per query to be more efficient than having multiple servers in different datacenters handling things.
How do I get PHP, MySQL and the Zend Framework to work together and reach equal great speeds?
Is caching the only way? How do you optimize your code to take less time to "render"?
There are many techniques that Google uses to achieve the amount of throughput it delivers. MapReduce, Google File System, BigTable are a few of those.
There are a few very good Free & Open Source alternatives to these, namely Apache Hadoop, Apache HBase and Hypertable. Yahoo! is using and promoting the Hadoop projects quite a lot and thus they are quite actively maintained.
"I am using PHP with the Zend Framework and database connects alone seem to take longer than the 0.02 seconds Google takes to do a query."
Database connect operations are heavyweight no matter who you are: use a connection pool so that you don't have to initialise resources for every request.
Performance is about architecture, not language.
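PHP has no built-in connection pool, but persistent connections are the closest equivalent: the handle is kept open and reused by the same Apache/FPM worker across requests. A sketch with PDO (the DSN and credentials are placeholders):

```php
<?php
// Persistent MySQL connection: skip the TCP + auth handshake on repeat requests.
$pdo = new PDO(
    'mysql:host=127.0.0.1;dbname=app;charset=utf8',   // placeholder DSN
    'app_user',
    'secret',
    array(PDO::ATTR_PERSISTENT => true)                // reuse the connection
);
```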
A while ago, Google decided to put everything into RAM.
http://googlesystem.blogspot.com/2009/02/machines-search-results-google-query.html
If you never have to query the hard drive, your results will improve significantly. Caching helps because you don't query the hard drive as much, but you still do when there is a cache miss (Unless you mean caching with PHP, which means you only compile the PHP program when the source has been modified).
It really depends on what you are trying to do, but here are some examples:
Analyze your queries with EXPLAIN. In your dev environment you can output your queries and their execution times at the bottom of the page - reduce the number of queries and/or optimize those that are slow.
Use a caching layer. It looks like Zend can be memcache-enabled (see the hedged sketch at the end of this answer). This can potentially greatly speed up your application by sending requests to the ultra-fast caching layer instead of the db.
Look at your front-end loading time. Use Yahoo's YSlow add-on to Firebug. Limit http requests, set far-future headers to cache js, css and images. Etc.
You can get lightning speeds on your web app, probably not as fast as google, if you optimize each layer of your application. Your db connect times are probably not the slowest part of your app.
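As a hedged illustration of the caching-layer point above, here is a sketch using Zend Framework 1's Zend_Cache with the memcache backend; the option names follow the ZF1 manual as I remember it, and the query, cache ID and $db adapter are made up, so treat it as a starting point:

```php
<?php
// Cache an expensive query result in memcache via Zend_Cache (ZF1-style API).
$cache = Zend_Cache::factory(
    'Core',
    'Memcached',
    array('lifetime' => 300, 'automatic_serialization' => true),
    array('servers' => array(array('host' => '127.0.0.1', 'port' => 11211)))
);

$rows = $cache->load('front_page_articles');
if ($rows === false) {                                   // miss: hit the database once
    $rows = $db->fetchAll('SELECT id, title FROM articles ORDER BY id DESC LIMIT 10');
    $cache->save($rows, 'front_page_articles');
}
```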
Memcached is a recommended solution for optimizing storage/retrieval in memory on Linux.
PHP scripts by default are interpreted every time they are called by the http server, so every call initiates script parsing and probably compilation by the Zend Engine.
You can get rid of this bottleneck by using script caching, like APC. It keeps the once compiled PHP script in memory/on disk and uses it for all subsequent requests. Gains are often significant, especially in PHP apps created with sophisticated frameworks like ZF.
Every request by default opens up a connection to the database, so you should use some kind of database connection pooling or persistent connections (which don't always work, depending on http server/php configuration). I have never tried, but maybe there's a way to use memcache to keep database connection handles.
You could also use memcache for keeping session data, if they're used on every request. Their persistence is not that important and memcache helps make it very fast.
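For the session idea, a small sketch assuming the memcached extension is loaded (the same two settings can just as well live in php.ini):

```php
<?php
// Store session data in memcached instead of on disk.
ini_set('session.save_handler', 'memcached');
ini_set('session.save_path', '127.0.0.1:11211');   // memcached server (example)
session_start();
```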
The actual "problem" is that PHP works a bit different than other frameworks, because it works in a SSI (server-side includes) way - every request is handled by http server and if it requires running a PHP script, its interpreter is initialized and scripts loaded, parsed, compiled and run. This can be compared to getting into the car, starting the engine and going for 10 meters.
The other way is, let's say, an application-server way, in which the web application itself is handling the requests in its own loop, always sharing database connections and not initializing the runtime over and over. This solution gives much lower latency. This on the other hand can be compared to already being in a running car and using it to drive the same 10 meters. ;)
The above caching/precompiling and pooling solutions are the best in reducing the init overhead. PHP/MySQL is still a RDBMS-based solution though, and there's a good reason why BigTable is, well, just a big, sharded, massively distributed hashtable (a bit of oversimplification, I know) - read up on High Scalability.
If it's for a search engine, the bottleneck is the database, depending on its size.
In order to speed up full-text search on a large data set, you can use Sphinx. It can be configured on either one or multiple servers. However, you will have to adapt your existing query code, as Sphinx runs as a search daemon (libraries are available for most languages).
Google have a massive, highly distributed system that incorporates a lot of proprietary technology (including their own hardware, and operating, file and database systems).
The question is like asking: "How can I make my car be a truck?" and essentially meaningless.
According to the link supplied by #Coltin, Google response times are in the region of 0.2 seconds, not 0.02 seconds. As long as your application has an efficient design, I believe you should be able to achieve that on a lot of platforms. Although I do not know PHP, it would surprise me if 0.2 seconds were a problem.
APC code caching;
Zend_Cache with APC or Memcache backend;
CDN for the static files;
