AWS RDS DB slow when deployed - php

I'm trying to implement a CMS on AWS using mostly free-tier services. I'm building it in PHP, and I have already implemented it in school using phpMyAdmin.
The problem:
I wanted to do this using RDS, but I've come up with a pretty strange issue.
When I upload my site to Elastic Beanstalk and try to connect to the DB from there, I face incredibly long wait times. Sometimes it returns error 504 from long wait times.
This is not an issue with the database's speed, however. I know this because I can run the same code on localhost and it works exactly as intended: the page takes only about half a second to load.
What I think is happening:
Something must be going on while it's being executed in Elastic Beanstalk. I don't know quite what, but it's taking way too long.
Extra info:
I connect to the DB using the following code (credentials are spoofed):
$conn = mysqli_connect("mydb.cebelvm3fa0n.ca-central-1.rds.amazonaws.com","USERNAME","PASSWORD","mydb");
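If it helps with diagnosing, here is a variant of that connection code that fails fast and logs the underlying error instead of hanging until the 504 (the 5-second timeout and the error handling are additions for illustration, not part of my actual code):

mysqli_report(MYSQLI_REPORT_ERROR | MYSQLI_REPORT_STRICT); // turn connection problems into exceptions
$conn = mysqli_init();
mysqli_options($conn, MYSQLI_OPT_CONNECT_TIMEOUT, 5);      // give up after 5 seconds instead of hanging
try {
    mysqli_real_connect($conn, "mydb.cebelvm3fa0n.ca-central-1.rds.amazonaws.com", "USERNAME", "PASSWORD", "mydb");
} catch (mysqli_sql_exception $e) {
    error_log("RDS connect failed: " . $e->getMessage()); // shows up in the Elastic Beanstalk logs
    exit("Database connection failed");
}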
Both my database and my Elastic Beanstalk instance run in ca-central-1. The connection is not failing; it is just taking an extreme amount of time.
The page that is failing is http://howardpearce.ca/posts.php
If you have any ideas on what it could be, I would appreciate it very much. Thanks!
NOTE:
I will post certain bits of my code, but creating an MCVE is not really feasible, as I would have to give out my DB password and you would need to re-create my AWS environment. Please don't ask for one; I can always give more detail.

After #colde mentioned checking the logs, I quickly found an error code which led me to the right resources.
My database was configured to allow connections only from my IP, so the firewall was blocking my Elastic Beanstalk instance. I just had to add a new security group rule.
Thanks!

Related

Joomla on Azure Web App - performance issues

I'm trying to get Joomla running on Azure Web Apps, but the performance is terrible: 3-5 second response times no matter what operation I'm doing.
I tried the Bitnami VM and it runs much smoother, so the problem is obviously in the web app. I have tried both ClearDB and the internal MySQL preview database, and both give the same laggy result. It doesn't matter what size machines I run on; they all lag.
The only thing I haven't tried is upgrading the ClearDB database from the free tier to a paid one, simply because my subscription doesn't allow it. Since I see the same performance issue with the internal MySQL database, this should not matter.
My guess is that this is an issue with PHP and IIS. Anyone with similar experience, and possibly a solution?

Comprehensive guide to setting up a data-driven website using Amazon Web Services and EC2

I have started making a website and was hosting it on HostGator, but I am going to move it to Amazon Web Services before launch. The small problem is that on HostGator I just uploaded my files to the relevant location and it all just worked; I have no experience in setting up a production-worthy server from scratch, and I need to know how.
I did set up a basic LAMP stack on an EC2 instance. However, I keep reading that when the EC2 instance goes down it will take all the data with it, and I cannot have that happen. I have also read that when it dies it won't do anything and you have to start the Apache server again yourself; it is not automatic. I need it to be reliable, with the data kept independent, so that everything does not crash, burn, and die if the server goes.
I have worked out that I will need S3 for static things such as my PDFs and images, as well as RDS for my MySQL database. My domain name is registered elsewhere, so I believe I need to use Route 53 as well.
I want to use AWS for a few reasons. Firstly, it can scale, which is really important, but I am not sure if this is built in or requires customization. I have also been told that EC2 is very secure. The last reason is that I can debug my PHP code: I have an error that only appears on the HostGator server, not on my local LAMP stack, and I can't debug it there, so I should be able to once I move to EC2.
I have done a lot of looking around online and I can't find anything comprehensive about what to set up. I have been reading (some of you may think otherwise), but I am overwhelmed by the amount of information: it is either far too complicated, discussing theory that I do not care about, or too basic, covering nothing beyond a generic install of a LAMP stack on EC2 without using the other services.
I have seen http://bitnami.com/stack/lamp/cloud/amazon but I do not think this is what I want, as again the EC2 instance has its own MySQL database and is not using RDS.
If someone can point me in the direction of a comprehensive guide to setting up a solid LAMP stack on AWS (maybe a book has even been written), that would be great. I found the Amazon docs did not go into much detail; they told me how to do things but not why I should do them or what purpose they served.
Thanks
I'll start by answering your questions. As you are a newbie, I would suggest you don't pressure yourself to learn all of AWS at once; you can keep migrating slowly and keep discovering the magic of the cloud.
Q. When the EC2 instance goes down, will it take all the data with it? I have also read that when it dies it won't do anything and you have to start up the Apache server again; is that not automatic?
A. When an EC2 instance goes down (whether that means you shut it down manually, the AWS network is down, or the instance is having some other issue), only the "ephemeral" data (think of it as data in RAM, such as sessions) is lost; whatever is on disk remains on disk, and the instance will be available again as soon as the problem is resolved.
Apache will start itself when the instance restarts, and it remains up until you shut it down manually or some other issue stops it.
Q. Will I need S3 for static things such as my PDFs and images, as well as RDS for my MySQL database?
A. It's good practice to keep static assets on S3, but it's not strictly necessary; you can set up FTP or manage your static content the way you are used to, such as keeping it in a folder of your website.
You don't necessarily need RDS to have a MySQL database. I have a process running on AWS with around 40 million transactions a day, and I run it on a normal MySQL install on an EC2 instance.
However, having RDS takes the daily backup and index-maintenance hassles off your hands.
Q. My domain name is registered elsewhere, so I believe I need to use Route 53 as well?
A. Again, not strictly necessary. You can give your EC2 instance (or Elastic Load Balancer) a static public IP, then go to your domain manager and point the A or CNAME records at it, and you'll be up and running in no time.
Q. I want to use AWS for a few reasons, firstly because it can scale, which is really important, but I'm not sure if this is built in or requires customization.
A. It can scale really well, but it depends on how you want it to scale, and it's highly customizable.
There are two kinds of scaling:
Vertical - you change your instance from one type to another to get better disk, CPU, RAM, or network performance. This requires you to stop your EC2 instance and change its type, which means a downtime of around 10 minutes while you do so.
Horizontal - you put your website (EC2-based) behind a load balancer (ELB, Elastic Load Balancer) and add or remove instances as and when you deem suitable, or you can have an auto scaling policy to do it automatically depending on the load at your web server.
Security? - You can rest assured it is very secure, so secure that I would bet my life on a properly secured EC2 instance; it works, and it works like a charm.
Debugging? - I suggest you debug by classic means: log your errors and so on, treat EC2 like a normal machine, and slowly learn the tricks of the trade.
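As a minimal sketch of that log-based approach in PHP, something like the following at the top of a script gets errors written somewhere you can actually read them (the log path is just an example):

// Route PHP errors to a log file on the instance (path is an example)
ini_set('log_errors', '1');
ini_set('error_log', '/var/log/php_errors.log');
error_reporting(E_ALL);

// Drop breadcrumbs around suspicious code, then tail the log file over SSH
error_log('checkpoint: about to connect to the database');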
Now let's set up a basic, solid LAMP stack for ourselves. I am assuming that you have an Ubuntu instance ready and can SSH into it; in case you haven't been able to make one, see this.
Basically:
1. Create security groups - this is your firewall; it controls which ports are open and which EC2 instances can talk amongst themselves.
2. Create an EC2 instance - any Ubuntu instance will do.
Then access your instance using SSH. SSH is basically a secure terminal connection to your EC2 machine, secured by a key file (a .pem file); whoever has it can access your machine's data, so keep it very secure - you can't afford to lose it.
3. Install LAMP using the tasksel utility.
4. Set up a public (Elastic) IP for yourself (costs a dollar per month) - you can point your www.example.com traffic at this IP using the domain manager of your DNS provider (GoDaddy or someone similar, I suppose).
I think this should be enough to get you started with AWS.
Just to be safe and ensure you have a copy of your data, make an AMI of your EC2 instance with all the data on it. An AMI is an image from which you can make a similar or better instance in 10 minutes flat (or even less).
You will pay for: the instance type you choose, the public IP, traffic beyond a certain level (usually very cheap), disk usage (8 GB is the default disk), and the AMI volume.
Have fun with AWS.
To retain data during downtime, make sure you use EBS storage; it's the default nowadays. In the past, before EBS, instance storage was the default and you would lose data once the server went down, but with EBS storage, data is retained through a shutdown.
You can go one of the following two routes, depending on your needs.
1. Use AWS Elastic Beanstalk (http://aws.amazon.com/elasticbeanstalk/) if you do not need to install anything additional. It's super easy, similar to Google App Engine, and you can deploy your app quickly. You do not manage the server yourself; you just get a server to deploy your app onto. You have to use RDS for the database and S3 for storage; you cannot store anything locally on the server where you are running.
2. Use an EC2 server with a static IP address. You can get pre-configured LAMP stacks from the Marketplace. I use Bitnami cloud stacks for AWS, which come pre-configured with LAMP and many other apps. Just use their free account to create a micro instance for your PHP app, select a server, and you are good to go: http://bitnami.com/cloud
You do not need to use Route 53 unless you need to manage DNS programmatically. You can just point your domain at the EC2 server by adding an entry in your DNS (GoDaddy or whoever your domain name provider is).
The Bitnami service also allows scheduled backups, but if you are not storing anything locally, you do not need frequent backups.
Make sure you use the Multi-AZ option in RDS, which is more reliable. When you provision a Multi-AZ DB instance, Amazon RDS automatically creates a primary DB instance and synchronously replicates the data to a standby instance in a different Availability Zone (AZ). Amazon RDS also automatically patches the database software and backs up your database, storing the backups for a user-defined retention period and enabling point-in-time recovery up to the last five minutes.
I hope this helps.
You should be using DynamoDB (http://aws.amazon.com/dynamodb/pricing/) with LAMP, without MySQL, for storage. A same-box database can almost never give you reliability; with DynamoDB you will not lose your data whatever your application box goes through. You can even read your application config from DynamoDB.
http://aws.amazon.com/documentation/dynamodb/
http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/SettingUpTestingSDKPHP.html
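As a rough idea of what that looks like from PHP with the current AWS SDK (the table and attribute names below are made up for illustration):

<?php
require 'vendor/autoload.php'; // AWS SDK for PHP installed via Composer

use Aws\DynamoDb\DynamoDbClient;

$dynamo = new DynamoDbClient([
    'region'  => 'us-east-1',   // pick your own region
    'version' => 'latest',
]);

// Read a single config item from a hypothetical "AppConfig" table
$result = $dynamo->getItem([
    'TableName' => 'AppConfig',
    'Key'       => ['ConfigKey' => ['S' => 'site_title']],
]);

echo isset($result['Item']) ? $result['Item']['ConfigValue']['S'] : 'not found';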
Do I need to use EC2 with DynamoDB?
You won't lose data when the server is down. Just make sure you select an EBS volume, not instance store.
You can get a ready-made server from the AWS Marketplace. I used the following for my projects, but there are many other pre-configured servers available.
https://aws.amazon.com/marketplace/pp/B007IN7GJA/ref=srh_res_product_title?ie=UTF8&sr=0-2&qid=1382655655469
This, together with an RDS server, is what you need. We use this all the time for production servers and have never had any issues.
Here are two guides that look good to me:
http://shout.setfive.com/2013/04/05/amazon-aws-ec2-lamp-quickstart-guide-5-steps-in-10-minutes/
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/install-LAMP.html
If learning the Linux command line isn't your thing, you should consider going "up the stack" to a PaaS (Platform as a Service). These are things like Heroku, Google App Engine, and Elastic Beanstalk.
The trade-offs between Infrastructure as a Service (IaaS, like EC2) and Platform as a Service (PaaS, like Heroku):
- PaaS is quicker to get started with and there is less to learn. IaaS requires you to know the entire stack from the start (or hire/rent a sysadmin).
- PaaS usually gets more expensive than IaaS as you get bigger (but it depends).
- PaaS gives you less control (you can't choose the language version, so you can't upgrade to get around a specific bug).
- IaaS can literally do anything (it's just a Linux box).
- IaaS allows for more tuning (upgrade libraries to get features, switch to a different instance type to trade off RAM for CPU, run HipHop for speed, add caching layers, etc.).
You have a few choices:
1. Use only EC2. Install Apache and MySQL plus your dynamic website on EC2. This will be very similar to setting it up on HostGator, except you are running a full server.
2. Use EC2 for "compute" (that is, the dynamic part of the site) and S3 for storage. This doesn't differ much from #1 above, except that you are using S3 for static file storage, which is great if you expect to host a lot of static content (multimedia, etc.).
3. Set up your website using AWS Elastic Beanstalk (which now supports PHP). However, if you go this route, you will need to host your database somewhere, which will likely be RDS.
I recommend going with #1. There is nothing wrong with that - yes, if EC2 goes down, it will take down your site with it, but to alleviate that, you can run two servers in two different regions (one in US East and one in US West) - I don't think two EC2 regions have ever gone down at the same time.
UPDATE: If you are concerned about backup/restore and making sure your data is safe, I recommend the following (I do this with a site in production on EC2):
Put your website code into Git/SVN source control, and deploy by pulling from there.
Back up your MySQL database to Amazon S3 regularly (at least once a day) using mysqldump.
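A rough sketch of what that backup step could look like as a small PHP cron script (the bucket name, paths, and credentials are placeholders; it assumes the AWS SDK for PHP and mysqldump are installed):

<?php
require 'vendor/autoload.php';

use Aws\S3\S3Client;

// Dump the database to a local file first (placeholder credentials and database name)
$dumpFile = '/tmp/backup-' . date('Y-m-d') . '.sql.gz';
exec("mysqldump -u backup_user -pSECRET mydb | gzip > " . escapeshellarg($dumpFile), $out, $status);
if ($status !== 0) {
    error_log('mysqldump failed');
    exit(1);
}

// Then push the dump to S3
$s3 = new S3Client(['region' => 'us-east-1', 'version' => 'latest']);
$s3->putObject([
    'Bucket'     => 'my-db-backups',   // hypothetical bucket name
    'Key'        => basename($dumpFile),
    'SourceFile' => $dumpFile,
]);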
I think you have some misconceptions.
If EC2 as a whole goes down (which is rare) then you do NOT lose your data. The site would simply be offline until Amazon restored services.
If your particular instance goes down due to a hardware issue, then you might lose data. This is no different than if your own server went belly up. The right answer is simply to make regular backups of your database and store them in S3 or some other location. Generally you will also want to create and attach a second EBS volume to your DB server to hold the DB files.
If you terminate your instance then, yes, you will lose everything on it. However, Amazon lets you make terminating instances difficult so you don't do it accidentally.
Stopping your instance is like turning the computer off, the difference being that you can remotely turn it back on when you want. You can only stop EBS-backed instances, which means that your data is safe while the instance is offline.
I would highly suggest that if you are uncomfortable with setting up and maintaining your own server, you investigate fully managed hosting instead. EC2 is awesome; we've been on it for 2 years. However, we have a strong tech team that understands what it takes to run and manage servers.

Usage of MySQL in an offline internet state

I'm using a self-made customer system written in PHP, running against a local MySQL database.
Now I have a second computer at a different location which has to use this database too, so I moved the MySQL database onto a server reachable through the internet.
My problem is that the first computer often has problems with its internet connection, and then the program will not work. But it has to work every time!
I do not know how I should handle this problem.
Should there be a local database and one on the internet? If so, how should I merge them? Should I make a local DB per computer and merge them together into one?
I also want to move the framework behind this system to Symfony2, so is there a way to solve this problem with that framework (e.g. with Doctrine)?
Thanks for your help!
Update:
My limitation is the internet connection on the first computer, which cannot be eliminated.
If you really are limited by (1) not being able to move the database off the machine with the bad connection and (2) not being able to fix the bad connection, you are going to have to keep some sort of local instance on the second machine.
I would try to set up master-master replication between the first machine with the bad connection and the second machine. I'm not sure how reliable this will be, considering the replication will fail often due to the first machine's bad connection. This problem may be exacerbated if one or both machines are using old versions of MySQL; MySQL 5.5, for example, can be configured to actively monitor replication connectivity.
If the majority of your application does READS instead of WRITES, perhaps you could install Memcached (or something similar) on the second machine so that the application can pull data from local memory without requiring a connection to the MySQL server.
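To illustrate the read-side idea, a minimal read-through cache in PHP might look like this (the server address, cache key, and query are made up; it assumes the pecl Memcached extension):

$cache = new Memcached();
$cache->addServer('127.0.0.1', 11211);           // Memcached running on the second machine itself

function get_customers(Memcached $cache, mysqli $db): array
{
    $cached = $cache->get('customers:all');
    if ($cached !== false) {
        return $cached;                          // served from local memory, no MySQL connection needed
    }

    // Cache miss: hit MySQL over the (unreliable) connection and remember the result
    $rows = $db->query('SELECT id, name FROM customers')->fetch_all(MYSQLI_ASSOC);
    $cache->set('customers:all', $rows, 300);    // keep for 5 minutes
    return $rows;
}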
There are a few ways to achieve what you want (although maybe not exactly how you described), but the best way is definitely to host the database on a server that doesn't have internet connectivity problems. Look for hosting that allows remote MySQL connections.

The right way? Automated synchronization between 2 MySQL DBs

I have already read a few threads here and I also went through the MySQL replication documentation (including cluster replication), but I think it's not what I really want, and it's probably too complicated, too.
I have a local and a remote DB that might both be accessed/manipulated by two different people at the same time. What I'd like to do is sync them as soon as possible (meaning instantly, or as soon as the local machine goes online). Both DBs are only manipulated from my own PHP scripts.
My approach is the following (see the rough sketch after this list):
If the local machine is online:
1. Let my PHP script on the local machine always send each SQL query to the remote DB too.
2. Let my PHP script on the remote machine always store its queries, and...
3. ...let the local machine ask the remote DB every x minutes for new queries and apply them locally.
If the local machine is offline:
Do step 2 on both machines, and send the stored queries to the remote DB as soon as the local machine goes online again. Also pull the queries from the remote machine, of course.
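To make step 2 a bit more concrete, here is a rough sketch of what I have in mind on the local side (the table and function names are invented for illustration):

// Apply a write locally, try the remote DB, and queue the query if the remote is unreachable
function run_write(mysqli $local, ?mysqli $remote, string $sql): void
{
    $local->query($sql);                          // the local DB is always available

    if ($remote !== null && @$remote->query($sql)) {
        return;                                   // remote applied immediately
    }

    // Remote unreachable: store the query for later replay
    $stmt = $local->prepare('INSERT INTO pending_queries (sql_text, created_at) VALUES (?, NOW())');
    $stmt->bind_param('s', $sql);
    $stmt->execute();
}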
My questions are:
Did I just misunderstand replication, or am I right that my way would be easier in my case? Or is there any other good solution for what I'm trying to accomplish?
Any idea how I could check whether my local machine is online/offline? I guess I'd have to use JavaScript, but I don't know how. The browser/my script would always be running on the local machine.
What you're describing is master-master or multi-master replication. There are plenty of tutorials on how to set this up across the web. Definitely do your research before putting a solution like this into production as replication in MySQL isn't exactly elegant -- you need to know how to recover if (when?) something goes wrong.

Connecting to SQL Server very slow

I have a standard PHP app that uses SQL Server as the back-end database. There is a serious delay in response for each page I access. This is my development server, so it's not an issue with the live setup, but it is really annoying to work with.
I have a 5 - 8 second delay on each page.
I am running SQL Server 2000 Developer Edition on a virtual machine (Virtual PC).
I have installed SQL Server on my development machine but get the same delay.
I have isolated the issue to the call to mssql_connect (calling mssql_pconnect has no effect).
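For what it's worth, a simple way to confirm the connect call is the culprit is to time it directly (the server name and credentials here are placeholders):

$start = microtime(true);
$link  = mssql_connect('DEVSQL\\SQLEXPRESS', 'sa', 'password');    // placeholder server/credentials
$elapsed = microtime(true) - $start;
error_log(sprintf('mssql_connect took %.2f seconds', $elapsed));   // this is where the 5 - 8 seconds goes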
It is a networking issue to do with how I have set up (or not set up, since I didn't really change the default config) SQL Server. It's not strictly a programming issue, but I thought I might get some valuable feedback here.
Can anyone tell me if there is a trick, a specific set of protocols, a registry setting, or something else that will kill this delay?
I was also experiencing a 5-10 second delay on every connect, using the official Microsoft SQL drivers for PHP (as suggested by #gaRex) - none of the answers posted here solved it for me.
As suggested by #ircmaxell, my problem was a DNS issue - and the solution was to edit the \windows\system32\drivers\etc\hosts file (your local hosts file) and add the name of my own machine to it.
In the "system properties" dialog, find the "computer name" of your machine - then add a line like 127.0.0.1 my-computer to your local hosts file.
For me, the delay occurred once more, on the following attempt to load the page - after that, it was super fast, no delay at all.
Note that this problem may occur even on a physical machine, not only on a VM.
I came across network issues when running Virtual PC; everything network-related is slow. Try adding this entry to your registry:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters
Create a new DWORD value named DisableTaskOffload and set its value to 1.
Restart the computer.
It worked for me, source.
Is it perhaps a DNS issue? I know that MySQL does a reverse DNS lookup on each login (not each connection). If you don't have a reverse DNS record for your server (or your DNS is slow), it can cause a major delay at login. There's an option in MySQL to disable that. I'm not sure about SQL Server, but I'd assume it may be doing something similar...
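(For the MySQL side, the option I'm thinking of is skip-name-resolve, set in the server config; I don't know whether SQL Server has a directly equivalent switch.)

# my.cnf - stop MySQL doing reverse DNS lookups on login
[mysqld]
skip-name-resolve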
I remember the same problem, but I have forgotten how we solved it.
To clarify, please specify your exact connection strings and your SQL Server versions, and also try starting the good old utility c:\WINDOWS\system32\cliconfg.exe, which can also shed some light.
Yes, I know, it's from SQL Server 2000, but the guys at M$ don't like to create client tools from scratch.
Also try to get the "right" MSSQL client DLLs for PHP.
