I am using Amazon's RDS. I have a single database, and we are getting fairly heavy traffic. I have already scaled our EC2 instances without any issues and it's been working great, but now I want to reduce the database load by creating:
1 write database
2 read databases
Obviously, I will have to have multiple connections going on in my script, and reading from one and writing to one is easy enough, but what is the logic for load balancing multiple read databases?
Is there something in Amazon I can set up to do this, like the load balancing for EC2? Or is this something I have to handle within my scripts myself?
Technically, I may NOT need 2 read db instances at this time, but surely this is a common thing, right? I would assume this would need to be done, and I was curious about the architecture.
Unfortunately, there is no easy way of doing this. Due to the automagically managed nature of RDS, you are at the mercy of Amazon and the services it provides. You have a few options, though.
1. You stick with RDS and set up round-robin DNS.
This is most easily achieved through Route 53: under a single name, create one CNAME record per read replica, each pointing at that replica's endpoint, e.g. db.mydomain.com -> somename.23ui23asdad4r.region.rds.amazonaws.com.
Use the weighted routing policy, give every record the same weight, and give each record a unique set ID.
Rinse and repeat for each read replica.
http://note.io/1agsSMB
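If you'd rather script the record creation than click through the console, here's a minimal sketch using the AWS SDK for PHP (v3); the hosted zone ID, record name, and replica endpoints are all placeholders.

    <?php
    // Sketch: one weighted CNAME per read replica, all under the same name.
    // Requires the AWS SDK for PHP v3 (composer require aws/aws-sdk-php).
    require 'vendor/autoload.php';

    use Aws\Route53\Route53Client;

    $client = new Route53Client(['region' => 'us-east-1', 'version' => 'latest']);

    $replicas = [
        'replica-1.23ui23asdad4r.us-east-1.rds.amazonaws.com',
        'replica-2.23ui23asdad4r.us-east-1.rds.amazonaws.com',
    ];

    foreach ($replicas as $i => $endpoint) {
        $client->changeResourceRecordSets([
            'HostedZoneId' => 'Z_EXAMPLE', // placeholder zone ID
            'ChangeBatch'  => ['Changes' => [[
                'Action'            => 'UPSERT',
                'ResourceRecordSet' => [
                    'Name'            => 'db.mydomain.com',
                    'Type'            => 'CNAME',
                    'SetIdentifier'   => "read-replica-$i", // must be unique per record
                    'Weight'          => 10,                // equal weights = even spread
                    'TTL'             => 60,                // low TTL so changes take effect quickly
                    'ResourceRecords' => [['Value' => $endpoint]],
                ],
            ]]],
        ]);
    }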
Caveat 1: this is not a true load balancer. It simply rolls a die and points each request at one of your RDS instances.
Caveat 2: there is no way to health-check your RDS instances, and no way to auto-scale them either, unless you do some crazy things with CloudWatch-triggered scripts that manually add and remove read replicas and update Route 53.
2. Use a die roll in your application itself.
A really cheap and nasty approach you could try is to create a config entry for each of your read replicas in CodeIgniter and, when you connect to the database, randomly choose one (see the sketch below).
Caveats: same as above, but even worse, as you will need to update your CodeIgniter config each time you add or remove a read replica.
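For illustration, a minimal sketch of what that could look like in CodeIgniter; the group names and hostnames are made up.

    <?php
    // application/config/database.php (sketch; hostnames are placeholders)
    $db['master']['hostname'] = 'master.23ui23asdad4r.us-east-1.rds.amazonaws.com';
    $db['read1']['hostname']  = 'replica-1.23ui23asdad4r.us-east-1.rds.amazonaws.com';
    $db['read2']['hostname']  = 'replica-2.23ui23asdad4r.us-east-1.rds.amazonaws.com';
    // ...plus username, password, database, etc. for each group as usual.

    // In a controller/model: roll the die and load one of the read groups.
    $read_db = $this->load->database('read' . mt_rand(1, 2), TRUE); // TRUE returns the object
    $query   = $read_db->query('SELECT * FROM products');

    // Writes always go to the master group.
    $write_db = $this->load->database('master', TRUE);
    $write_db->query('UPDATE products SET views = views + 1 WHERE id = 1');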
3. Spend hours and hours porting your RDS to EC2 instances.
You move your database to EC2 instances. This is perhaps the most difficult solution, as you will need to manage ALL of your database tweaking and tuning yourself. On the plus side, you will be able to put the instances in an auto-scaling group and behind an internal load balancer in your VPC.
An RDS cluster provides you with two endpoints: read and write. If you send the read traffic to the read endpoint, AWS will load-balance it across all read replicas. You can also apply a scaling policy to the read replicas.
These options are available for AWS Aurora clusters.
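In practice that means your application only ever holds two connections, for example with PDO (the cluster hostnames below are placeholders; the -ro- reader endpoint is Aurora's naming convention):

    <?php
    // Sketch: one connection per Aurora endpoint; hostnames are placeholders.
    $writer = new PDO(
        'mysql:host=mycluster.cluster-abc123xyz.us-east-1.rds.amazonaws.com;dbname=app',
        'user', 'secret');
    $reader = new PDO(
        'mysql:host=mycluster.cluster-ro-abc123xyz.us-east-1.rds.amazonaws.com;dbname=app',
        'user', 'secret');

    $reader->query('SELECT COUNT(*) FROM orders');                // balanced across replicas by AWS
    $writer->exec("INSERT INTO orders (sku) VALUES ('ABC-123')"); // always hits the writer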
I plan to use Amazon EC2 server(s) for Magento, but I'm fairly new to AWS.
I know that I have to use an Elastic Load Balancer (ELB) to balance the load between two or more EC2 instances. That is important, because it's quite likely that my main instance will hit a load peak for 1-2 hours per day.
I know I can't attach one EBS volume to two EC2 instances, but I need the very same data on both (or more) instances. One possible solution is to take a snapshot of instance 1 and launch it as instance 2, but as the data can change really quickly (the cache, for example, or new products ...), that's probably not the best solution, I think.
I heard that I can mount my S3 storage on my instances and use it as "global" storage, but as far as I can tell from various articles, S3 is not fast enough for a server with high storage peaks.
Some facts, by the way: this server is going to have 200-300 visitors per hour, but it could be 500-1000 too.
Conclusion: I need storage that is fast enough to share a lot of data (images, JS, CSS, PHP) and can be mounted on more than one instance. How do I do this in a clever way?
Greetings
Bubble
The new EFS service (NFS share) can give you a simple solution to what you are seeking to do, but its cost is high compared to the alternatives.
When you are dealing with multiple instances, they should follow a shared-nothing architecture, meaning no unique application data is stored on the instance itself.
Application code can be stored on the instance; you should have a release process that updates it automatically on each instance when it changes.
Cache data is something that can be regenerated; ideally it should live in a memory cache like memcached (see the sketch after this list).
Application data (product images, etc.) should be stored on S3. You can also serve it from S3, which offloads some of the work from your web server. I believe there are plugins for Magento to store images on S3.
The database should be on a server outside the web server instance. You may be able to use RDS to set this up quickly.
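For the cache/session piece, one cheap way to get shared state across instances is to point PHP's session handler at a common memcached node. A minimal sketch, assuming the memcached PECL extension is installed; the hostname is a placeholder.

    <?php
    // Sketch: share PHP sessions across web instances via a central memcached node.
    // Assumes the 'memcached' PECL extension; the hostname is a placeholder.
    ini_set('session.save_handler', 'memcached');
    ini_set('session.save_path', 'cache.internal.example.com:11211');
    session_start();

    // Every instance behind the load balancer now sees the same session data.
    $_SESSION['cart_items'] = isset($_SESSION['cart_items']) ? $_SESSION['cart_items'] + 1 : 1;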
I am currently using an AWS micro instance as a web server for a website that allows users to upload photos. Two questions:
1) Looking at my CloudWatch metrics, I have recently noticed CPU spikes. The website receives very little traffic at the moment, but it becomes utterly unusable during these spikes, which can last several hours; restarting the server does not eliminate them.
2) Although seemingly unrelated, whenever I post a link to my website on Twitter, the server crashes (i.e., "Error Establishing a Database Connection"). Once I restart Apache and MySQL, the website returns to normal functionality.
My only guess would be that the issue is somehow the result of deficiencies with the micro instance. Unfortunately, when I upgraded to a small instance, the site was actually slower, due to the fact that micro instances can burst to two EC2 compute units.
Any suggestions?
If you want to stay in the free tier of AWS (micro instance), you should offload as much work as possible from your EC2 instance.
I would suggest uploading the images directly to S3 instead of going through your web server (see an example here: http://aws.amazon.com/articles/1434).
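One way to do this is with pre-signed URLs: your web server only signs the request, and the browser uploads the file straight to S3. A minimal sketch using the AWS SDK for PHP v3; the bucket name is a placeholder.

    <?php
    // Sketch: hand the browser a pre-signed URL so uploads bypass your server.
    // Requires the AWS SDK for PHP v3; the bucket name is a placeholder.
    require 'vendor/autoload.php';

    use Aws\S3\S3Client;

    $s3 = new S3Client(['region' => 'us-east-1', 'version' => 'latest']);

    $cmd = $s3->getCommand('PutObject', [
        'Bucket' => 'my-photo-uploads',
        'Key'    => 'uploads/' . uniqid() . '.jpg',
    ]);

    // The URL is valid for 15 minutes; the client PUTs the photo directly to S3.
    $request = $s3->createPresignedRequest($cmd, '+15 minutes');
    echo (string) $request->getUri();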
S3 can also be used to serve most of your static content (images, JS, CSS...) instead of your weak web server. You can also use these S3 files as the origin of an Amazon CloudFront (CDN) distribution to improve your application's performance.
Another service that can help you offload work is SQS (Simple Queue Service). Instead of handling everything in online requests from users, you can send some events ("upload done", for example) as messages to SQS and have a worker process them at its own pace. This is a good way to handle momentary load caused by several users working with your service simultaneously.
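A minimal sketch of that pattern with the AWS SDK for PHP v3 (the queue URL and message fields are made up): the web request just enqueues a message, and a worker script drains the queue.

    <?php
    // Sketch: enqueue work instead of doing it inside the web request.
    // Requires the AWS SDK for PHP v3; the queue URL is a placeholder.
    require 'vendor/autoload.php';

    use Aws\Sqs\SqsClient;

    $sqs      = new SqsClient(['region' => 'us-east-1', 'version' => 'latest']);
    $queueUrl = 'https://sqs.us-east-1.amazonaws.com/123456789012/photo-jobs';

    // Web request: record that an upload finished, then return immediately.
    $sqs->sendMessage([
        'QueueUrl'    => $queueUrl,
        'MessageBody' => json_encode(['event' => 'upload_done', 'key' => 'uploads/abc.jpg']),
    ]);

    // Worker (separate cron job/daemon): process messages at its own pace.
    $result = $sqs->receiveMessage(['QueueUrl' => $queueUrl, 'WaitTimeSeconds' => 10]);
    foreach ((array) $result->get('Messages') as $msg) {
        // ...generate thumbnails, update the DB, etc...
        $sqs->deleteMessage(['QueueUrl' => $queueUrl, 'ReceiptHandle' => $msg['ReceiptHandle']]);
    }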
Another service is DynamoDB (a managed NoSQL database service). You can move much of your current MySQL data and queries to DynamoDB. Amazon DynamoDB also has a free tier that you can enjoy.
With the combination of the above, you can have your micro instance handling the few remaining dynamic pages until you need to scale your service with your growing success.
Wait… I'm sorry. Did you say you were running both Apache and MySQL Server on a micro instance?
First of all, that's never a good idea. Secondly, as documented, micros have low I/O and can only burst to 2 ECUs.
If you want to continue using a resource-constrained micro instance, you need to (a) put MySQL somewhere else, and (b) use something like Nginx instead of Apache as it requires far fewer resources to run. Otherwise, you should seriously consider sizing up to something larger.
I had the same issue. As far as I understand it, the problem is that AWS throttles you when you reach a predefined usage level: they allow a small burst, but after that things become horribly slow.
You can test this by logging in and doing something. If you use the CPU for a couple of seconds, the whole box becomes extremely slow, and after that you have to wait, doing nothing at all, to get things back to "normal".
That was the main reason I went for a VPS instead of AWS.
I have been asked to redevelop an old PHP web app which currently uses mysql_query functions to access a replicated database (4 slaves, 1 master).
Part of this redevelopment will move some of the database into a MySQL Cluster. I usually use PDO to access databases these days, and I am trying to find out whether or not PDO will play nicely with a cluster, but I can't find much useful information on the web.
Does anyone have any experience with this? I have never worked with a cluster before ...
I've done this a couple of different ways, with different levels of success. The short answer is that your PDO connections should work fine. The options, as I see them, are as follows:
If you are using replication, then either write a class that handles connections to the various servers or use a proxy. The proxy may be hardware or software. MySQL Proxy (http://docs.oracle.com/cd/E17952_01/refman-5.5-en/mysql-proxy.html) is the software load balancer I used to use, and for the most part it did the trick. It automatically routes traffic between your readers and writers, and handles failover like a champ. Every now and then we'd write a query that would throw it off and have to tweak things, but that was years ago; it may be in better shape now.
Another option is to use a standard load balancer and create two connections - one for the writer and the other for the readers. Your app can decide which connection to use based on the function it's trying to perform.
Finally, you could consider using the MaxDB cluster available from MySQL. In this setup, the MySQL servers are all readers AND writers. You only need one connection, but you'll need a load balancer to route all of the traffic. A MaxDB cluster can be tricky if the indexes get out of sync, so tread lightly if you go with this option.
Clarification: When I refer to connections what I mean is an address and port to connect to MySQL on - not to be confused with concurrent connections running on the same port.
Good luck!
Have you considered hiding the cluster behind a hardware or software load balancer (e.g. HAProxy)? This way, the client code doesn't need to deal with the cluster at all; it sees the cluster as just one virtual server.
You still need to distinguish applications that write from those that read. In our system, we put the slave servers behind the load balancer, and read-only applications use this cluster, while writing applications access the master server directly. We don't try to make this happen automatically; applications that need to update the database simply use a different server hostname and username.
Write a wrapper class for the DB that has your connect and query functions in it (a sketch follows below)...
The query function needs to look at the very first word to detect whether it's a SELECT and use the slave DB connection; anything else (INSERT, UPDATE, RENAME, CREATE, etc.) needs to go to the MASTER server.
The connect() function would look at the array of slaves and pick a random one to use.
You should only connect to the master when you need to do an update (most web pages shouldn't be updating the DB, only reading data; make sure you don't waste time connecting to the MASTER DB when you won't use it).
You can also use a static variable in your class to hold your DB connections; that way connections are shared between instances of your DB class (i.e. you only have to open the DB connection once instead of every time you call '$db = new DB()').
Abstracting the database functions into a class like this also makes it easier to debug or add features.
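A minimal PDO sketch of such a wrapper (hostnames and credentials are placeholders); the static property is what keeps connections shared between instances:

    <?php
    // Sketch: route SELECTs to a random slave and everything else to the master.
    // Hostnames and credentials are placeholders.
    class DB
    {
        private static $connections = array(); // shared across all DB instances

        private static $master = 'master.db.internal';
        private static $slaves = array('slave1.db.internal', 'slave2.db.internal');

        public function query($sql, array $params = array())
        {
            // The first word decides the routing: SELECT -> slave, else -> master.
            $verb = strtoupper(strtok(ltrim($sql), " \t\r\n"));
            $pdo  = $this->connect($verb === 'SELECT' ? 'slave' : 'master');

            $stmt = $pdo->prepare($sql);
            $stmt->execute($params);
            return $stmt;
        }

        private function connect($role)
        {
            // Lazy-connect: the master connection is only opened on first write.
            if (!isset(self::$connections[$role])) {
                $host = ($role === 'master')
                    ? self::$master
                    : self::$slaves[array_rand(self::$slaves)]; // the die roll
                self::$connections[$role] = new PDO(
                    "mysql:host=$host;dbname=app", 'user', 'secret',
                    array(PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION));
            }
            return self::$connections[$role];
        }
    }

    // Usage:
    $db   = new DB();
    $rows = $db->query('SELECT * FROM users WHERE id = ?', array(7))->fetchAll();
    $db->query('UPDATE users SET last_seen = NOW() WHERE id = ?', array(7));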
I am building a web-application and have a couple of quick questions. From what I learnt, one should not worry about scalability when initially building the app and should only start worrying when the traffic increases. However, this being my first web-application, I am not quite sure if I should take an approach where I design things in an ad-hoc manner and later "fix" them. I have been reading stories about how people start off with an app that gets millions of users in a week or two. Not that I will face the same situation but I can't help but wonder, how do these people do it?
Currently, I bought a shared hosting account on Lunarpages and that got me started in building and testing the application. However, I am interested in learning how to build the same application in a scalable-manner using the cloud, for instance, Amazon's EC2. From my understanding, I can see a couple of components:
There is a load balancer that first receives requests and then decides where to route each request
This request is then handled by a server replica that then processes the request and updates (if required) the database and sends back the response to the client
If a similar request comes in, then a caching mechanism like memcached kicks in and returns objects from the cache
A black box that handles database replication
Specifically, I am trying to do the following:
Setting up a load balancer (my homework revealed that HAProxy is one such load balancer)
Setting up replication so that databases can be synchronized
Using memcached
Configuring Apache to work with multiple web servers
Partitioning the application to use Amazon EC2 and Amazon S3 (my application will need a great deal of storage)
Finally, how can I avoid burning myself when using Amazon services? Because this is just a learning phase, I can probably do with 2-3 servers, a simple load balancer, and replication, but in the meantime I want to avoid accidentally paying loads of money.
I am able to find resources on individual topics but am unable to find something that starts off from the big picture. Can someone please help me get started?
Personally, I think you should be considering how your app will scale from the start; otherwise you'll run into problems down the line.
I'm not saying you need to build it initially as a multi-server system, but if you think you'll need to do it later, be mindful of the concerns now.
In my experience, this includes things like:
Sessions. Unless you use "sticky" load balancing, you will need some way of sharing session state between servers. This probably means storing session data either on shared storage or in a DB (see the sketch after this list).
File uploads and replication. If you allow users to upload files, or you have a CMS that allows you to upload images/documents, your app needs to cater for the fact that these files will also need to find their way onto other nodes in your cluster. However, if you've gone down the shared storage route mentioned above, this should cover it.
DB scalability. If you're using traditional DB servers, you might want to think about how you'll implement scalability at that level. This may mean coding your app so it uses one connection string for reads and another for writes. You are then free to implement replication, with one master node handling the inserts/updates and cascading the changes to read-only nodes that handle the bulk of the read work.
Middleware. You might even want to go down the route of implementing some kind of message-oriented middleware solution to completely hand off business logic functions; this will give you a great level of flexibility in how you wish to scale the business logic layer in the future, although initially it will be a lot of complication and work for not a great deal of payoff.
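On the sessions point, PHP lets you swap the default file-based handler for a DB-backed one. A trimmed sketch using SessionHandlerInterface and PDO; the table layout and DSN are made up, and a production version would need locking and careful garbage collection.

    <?php
    // Sketch: store sessions in a shared DB so any web node can serve any user.
    // Assumes a table like:
    //   CREATE TABLE sessions (id VARCHAR(128) PRIMARY KEY, data BLOB, updated_at INT);
    class DbSessionHandler implements SessionHandlerInterface
    {
        private $pdo;

        public function __construct(PDO $pdo) { $this->pdo = $pdo; }

        public function open($savePath, $name) { return true; }
        public function close() { return true; }

        public function read($id)
        {
            $stmt = $this->pdo->prepare('SELECT data FROM sessions WHERE id = ?');
            $stmt->execute(array($id));
            $row = $stmt->fetch(PDO::FETCH_ASSOC);
            return $row ? $row['data'] : '';
        }

        public function write($id, $data)
        {
            $stmt = $this->pdo->prepare(
                'REPLACE INTO sessions (id, data, updated_at) VALUES (?, ?, ?)');
            return $stmt->execute(array($id, $data, time()));
        }

        public function destroy($id)
        {
            return $this->pdo->prepare('DELETE FROM sessions WHERE id = ?')
                             ->execute(array($id));
        }

        public function gc($maxlifetime)
        {
            $stmt = $this->pdo->prepare('DELETE FROM sessions WHERE updated_at < ?');
            return $stmt->execute(array(time() - $maxlifetime));
        }
    }

    $pdo = new PDO('mysql:host=db.internal;dbname=app', 'user', 'secret');
    session_set_save_handler(new DbSessionHandler($pdo), true);
    session_start();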
Have you considered playing around with VMs first? You can run 2-3 VMs on your local machine and set them up like you would actual servers, they just won't be able to handle real traffic levels. If all you're looking for is the learning experience, it might be an ideal way to go about it.
I recently experienced a flood of traffic on a Facebook app I created (mostly for the sake of education, not with any intention of marketing)
Needless to say, I did not think about scalability when I created the app. I'm now in a position where my meager virtual server hosted by MediaTemple isn't cutting it at all, and it really comes down to the raw I/O of the machine. Since this project has been so educational so far, I figured I'd take this as an opportunity to understand the Amazon EC2 platform.
The app itself is created in PHP (using Zend Framework) with a MySQL backend. I use application caching wherever possible with memcached. I've spent the weekend playing around with EC2, spinning up instances, installing the packages I want, and mounting an EBS volume to an instance.
But what's the next logical step that is going to yield good results for scalability? Do I fire up one AMI instance for MySQL and one for the Apache service? Or do I just replicate the instances out as many times as I need them and then do some sort of load balancing on the front end? Ideally, I'd like to have a centralized database because I aggregate statistics across all database rows; however, this is not a hard requirement (there are probably some application-specific solutions I could come up with to work around this).
I know there is probably not a straightforward answer to this, so opinions and suggestions are welcome.
So many questions, all of them good though.
In terms of scaling, you have a few options.
The first is to start with a single box. You can scale upwards with a more powerful box; EC2 has various instance sizes. This involves a server migration each time you want a bigger box.
It's easier to add servers. You can start with a single instance running Apache and MySQL. Then, when traffic increases, create a separate instance for MySQL and point your application to this new instance. This creates a nice layer between application and database. It sounds like this is a good starting point based on your traffic.
Next you'll probably need more application power (web servers) or more database power (a MySQL cluster, etc.). You can have your DNS records point to a couple of front boxes running some load-balancing software (try Pound). These load-balancing servers distribute requests to your web servers. EC2 has Elastic Load Balancing, which is an alternative to managing this yourself and is probably easier, though I haven't used it personally.
Something else to be aware of: EC2 instance storage is not persistent. You have to manage persistent data yourself using the Elastic Block Store. This guide is an excellent tutorial on how to do this, with automated backups.
I recommend that you purchase some reserved instances if you decide EC2 is the way forward. You'll save yourself about 50% over 3 years!
Finally, you may be interested in services like RightScale which offer management services at a cost. There are other providers available.
The first step is to separate concerns. I'd split off a separate MySQL server and possibly a dedicated memcached box, depending on how high your load is there. Then I'd monitor memory and CPU usage on each box and see where you can optimize. This can be done by spinning up new Media Temple boxes. I'd also suggest Slicehost as a cheaper, more developer-friendly alternative.
Some more low-budget PHP deployment optimizations:
Use a more efficient web server like nginx to handle static file serving, and reverse-proxy app requests to a separate Apache instance
Implement PHP with FastCGI on top of nginx using something like PHP-FPM, getting rid of Apache entirely. This may be a great alternative if your Apache needs don't extend far beyond mod_rewrite and simpler Apache modules.
If you prefer a more high-level, do-it-yourself approach, you may want to check out Scalr (code at Google Code). It's worth watching the video on their web site. It facilitates a scalable hosting environment using Amazon EC2. The technology is open source, so you can download it and implement it yourself on your own management server. (Your Media Temple box, perhaps?) Scalr has pre-built AMIs (EC2 appliances) available for some common use cases.
web: Utilizes nginx and its many capabilities: software load balancing, static file serving, etc. You'd probably only have one of these, and it would probably implement some sort of connection to Amazon's EBS persistent storage solution, as mentioned by dcaunt.
app: An application server with Apache and PHP. You'd probably have many of these, and they'd get created automatically if more load needed to be handled. This type of server would hold copies of your ZF app.
db: A database server with MySQL. Again, you'd probably have many of these, and more slave instances would get created automatically if more load needed to be handled.
memcached: A dedicated memcached server you can use to have centralized caching, session management, et cetera across all your app instances.
The Scalr option will probably take some more configuration changes, but if you feel your scaling needs will accelerate quickly, it may be worth the time and effort.