Multiple Azure Virtual Machines on same Cloud Service - php

I have a cloud service on Azure, and on that cloud service I have 2 VMs with Apache, PHP, MySQL, and phpMyAdmin installed on both. Both VMs are in the same availability set as well.
If my cloud service DNS is hello.cloudapp.net and I access phpMyAdmin by going to hello.cloudapp.net/phpmyadmin and create a database, which VM will the database be created on?

One of the two, depending on whichever one the load balancer decides to send you to.
The problem is having a separate instance of MySQL local to each VM. Instead, you need something that will create a shared data context between the two instances; MySQL Cluster looks promising.

An availability set just makes sure that your application is not affected by single points of failure, such as the network switch or the power unit of a rack of servers. Since you have 2 identical VMs in one cloud service, the load balancer may route requests to either one, which can create diverging data in the two local MySQL instances. So you have to handle synchronization yourself, for example with MySQL Cluster, as the answer above says.
As another approach, you can create a new VM with MySQL installed, in a new cloud service, to act as the data tier.
Of course, as an alternative approach, you can use a third-party service, i.e. ClearDB, which tends to take care of most of these things for you.
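To make the shared data tier concrete, here is a minimal PHP sketch, assuming both web VMs are reconfigured to point at a single shared MySQL host instead of their local servers (the host name, credentials, and database are placeholders):

<?php
// Both web VMs use this same connection, so phpMyAdmin and the application
// always hit one authoritative MySQL server rather than a VM-local copy.
// 'shared-db.cloudapp.net' and the credentials are placeholders.
$db = new mysqli('shared-db.cloudapp.net', 'appuser', 'secret', 'appdb');
if ($db->connect_error) {
    die('Connect failed: ' . $db->connect_error);
}

With this layout, it no longer matters which VM the load balancer sends you to; any database you create lands on the shared host.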


Improve response time when database is on a dedicated server

Overview
I have a Laravel 9 application which is hosted with Digital Ocean. I use Laravel Forge to handle provisioning of the servers, management, etc. I've created two separate servers for my production environment. One to host my Laravel application code and another for the database which runs MySQL 8. These two servers are networked together and communicate over their VPC assigned private IP addresses.
Problem
I initially provisioned one server to host my application. This single server hosted both the Laravel application code and database. I have an endpoint that I hit to measure the response time for my application.
With one server that hosts the codebase and database the average response time was: ~70ms
When I hit the same endpoint again but with my two dedicated servers the average response time was: ~135ms
Other endpoints in my application also have a significant increase in response time when the database lives on a dedicated server vs a single server that houses everything.
Things I have done
All database queries have been optimized. (n+1, etc.)
Both networked servers are in the same region.
Both networked servers' resources (CPU, RAM) are low and not capping out.
I've turned on Laravel's database config "sticky" option with no noticeable improvements.
I've enabled PHP OPcache for PHP 8.1.
Questions
How can I achieve a faster response time when my database is on a separate server from my codebase?
Am I sacrificing performance for scalability with dedicated servers?
TLDR
I'm experiencing slower response times in my Laravel application when the codebase and database run on separate dedicated servers vs hosting everything on one server.
Are your servers in the same data center and on the same VLAN?
Are you sure that you are connecting with your private VLAN IP address?
Some latency is expected if you need to connect to a database on another server. Have you tried to ping between the servers to see what the latency is?
Do you really need to have the web server and the database on separate servers? If so, I would probably try Digital Ocean's managed database. I have used that for several projects and it works great.
Q: How can I achieve a faster response time when my database is on a separate server from my codebase?
A: If hosted in the same data center, the connection latency should be 30ms or less. Tested between AWS RDS and EC2 instances. Your mileage could vary depending on host.
Q: Am I sacrificing performance for scalability with dedicated servers?
A: It's standard practice to host databases separately from your application. It would be unrealistic to do otherwise for bigger projects. You can soften the impact by selectively caching data that doesn't change regularly on the main server. Unfortunately, PHP is not particularly good at this kind of fine-tuning, so you might be out of luck.
I can tell you that I currently run a central MySQL RDS instance that many Ubuntu EC2 instances communicate with. While the queries take around 30ms, smart use of caching gives the majority of my web requests a 30ms response time in their own right. I do have the advantage of using NodeJS, which is always doing things in the background without needing a request before performing work.
You may unfortunately find that you're running into one of the limitations of PHP.
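To illustrate the selective-caching idea from the answer above, here is a minimal Laravel sketch using the framework's cache facade; the cache key, TTL, and query are placeholders:

<?php
// Serve a rarely-changing query from the web server's local cache so most
// requests skip the round trip to the remote database server. The closure
// only runs on a cache miss; the result is then stored for 10 minutes.
use Illuminate\Support\Facades\Cache;
use Illuminate\Support\Facades\DB;

$plans = Cache::remember('billing.plans', now()->addMinutes(10), function () {
    return DB::table('plans')->orderBy('price')->get();
});

With a file or Redis cache living on the application server itself, the cached path avoids the cross-server hop entirely.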

Using AWS for SQL and REST frontend

I'm trying to set up a PostgreSQL DB with a REST (PHP) gateway. I want to access this database the standard REST way (using GET and headers) from a C# application.
Currently I am using a standard web hoster that has a MySQL database, but I want to jump ship to PostgreSQL.
Am I right in thinking Amazon's EC2 servers are a good choice for this? I just need a server with PostgreSQL installed and a bunch of PHP scripts (for REST).
Thanks
EC2 would work fine, or you could use an RDS PostgreSQL instance, and just run the PHP scripts on a small EC2 instance.
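For a sense of scale, the "bunch of PHP scripts" can be very small. Here is a minimal sketch of one GET endpoint using PDO's pgsql driver; the host, credentials, and table are placeholders:

<?php
// REST-style GET endpoint backed by PostgreSQL via PDO.
$pdo = new PDO('pgsql:host=your-db-host;dbname=appdb', 'appuser', 'secret', [
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
]);

$id = (int) ($_GET['id'] ?? 0);
$stmt = $pdo->prepare('SELECT id, name FROM items WHERE id = :id');
$stmt->execute([':id' => $id]);
$row = $stmt->fetch(PDO::FETCH_ASSOC);

header('Content-Type: application/json');
echo json_encode($row ?: ['error' => 'not found']);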
Check out restSQL. It's an open-source, ultra-lightweight persistence layer, with out of the box PostgreSQL support.
It's written in Java, but you can use its HTTP API by deploying the service in a standard JEE container, e.g. Tomcat, or as a Docker container. The latest versions are bundled as images on Docker Hub.
You can take it for a spin using the sandboxed instance and explore the docs at http://restsql.org.

MySQL database multi-master replication on dynamic IP

Situation:
PHP application with a MySQL database running on 2 sites:
online - static IP X.X.X.X
local host (not online most of the time, with a dynamic IP)
Application traffic is usually low, <10 users.
What I need is that whenever a change is made to the online database, it is pushed to the local host (if it's online, or whenever it becomes available) and vice versa (any changes made locally are uploaded to the online database whenever there is a connection).
Is it possible to set up such replication with MySQL, or do I need to write a custom PHP script that pings the master server and syncs once it's available?
Thanks very much :).
Yes, you can do this with replication. Just pick which server you want to be the master, then have the second one send all of its changes to the main one, and the main one send its changes back.
Replication can be a bit daunting to set up, but once it's up and running it's great. http://dev.mysql.com/doc/refman/5.0/en/replication-howto.html
Let's first analyze your question:
The problem of accessing MySQL with a dynamic IP.
This is very easy. Once you've installed MySQL on a server with an ever-changing IP, go to No-IP, DynDNS, or any other dynamic DNS service and register with them for free. Once you've registered, you get a client for your operating system. Install that, and then you can access your MySQL server using a domain name.
Example:
Instead of having to access your server at 127.0.0.1, you can access it as mysql-server.easynet.net etc.
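In PHP terms, the only thing that changes is the host string in your connection; a minimal sketch (the host name, credentials, and database are placeholders):

<?php
// Connect through the dynamic-DNS name instead of a raw, ever-changing IP;
// the dynamic DNS client on the MySQL box keeps this name pointed at the
// server's current address.
$pdo = new PDO('mysql:host=mysql-server.easynet.net;dbname=appdb', 'appuser', 'secret');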
Now for the second, more complex part of your question: how to do this kind of available, lazy replication.
This is a bit more complex than the previous step. What actually happens is that you have to choose a replication scheme. What you are looking for here is MASTER-MASTER replication, since changes can happen at both MySQL servers, so the updates need to be bi-directional, and that is exactly what this scheme of replication does. How to do it? Here are the links I've found easiest to follow:
Master-Master Replication
Step-by-step MySQL Master Replication
I hope that would ease your plight and answer your question!
Cheers!
Sure, you can.
You need to set up both MySQL servers as master and slave at the same time:
Configure the online server as master and the localhost server as slave and, once replication is OK,
configure the localhost as master and the online server as slave.
I already did that on two servers.
As for the dynamic IP on the local host, you can simply use any dynamic DNS service, such as No-IP, and use the DNS name instead of the IP.
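As a rough sketch of what each node ends up running (MySQL 5.x replication statements, as in the linked how-to, driven here through PHP's mysqli to match the rest of this page; host names, credentials, and log coordinates are placeholders, and each server also needs a unique server-id with binary logging enabled in my.cnf):

<?php
// Run once on each server, pointing it at the other node as its master.
// 'peer.no-ip.example' is a placeholder dynamic-DNS name for the other box.
$db = new mysqli('localhost', 'root', 'rootpass');
$db->query("CHANGE MASTER TO
            MASTER_HOST = 'peer.no-ip.example',
            MASTER_USER = 'repl',
            MASTER_PASSWORD = 'replpass',
            MASTER_LOG_FILE = 'mysql-bin.000001',
            MASTER_LOG_POS = 4");
$db->query('START SLAVE');

// Quick health check: both threads should report 'Yes' once replication runs.
$status = $db->query('SHOW SLAVE STATUS')->fetch_assoc();
echo $status['Slave_IO_Running'] . ' / ' . $status['Slave_SQL_Running'];

Setting auto_increment_increment = 2 with a different auto_increment_offset on each node is the usual guard against the two masters generating the same primary key values.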
Here's a post I've written (in French, but you can get the configuration snippets from it) on setting up MASTER-MASTER replication with a load balancer (MySQL Proxy) that balances SQL queries between both nodes.

Comprehensive guide to setting up a data-driven website using Amazon Web Services and EC2

I have started making a website and was hosting on Hostgator, but I am going to move it to Amazon Web Services before launch. There is a small problem: I previously just uploaded my files to the relevant location on Hostgator and it all just worked. I have no experience in setting up a production-worthy server from scratch and I need to know how. I did set up the basic LAMP stack on the EC2 instance; however, I keep reading that when the EC2 instance goes down it will take all the data with it, and I cannot have that happen. I have also read that when it dies it won't do anything and you have to start up the Apache server again; it is not automatic. I need it to be reliable and to keep the data independent so it will not crash, burn, and die if the server goes. I have worked out that I will need S3 for static things such as my PDFs and images, as well as RDS for my MySQL database. My domain name is registered elsewhere, so I believe I need to use Route 53 as well. I want to use AWS for a few reasons: firstly, it can scale, which is really important, but I am not sure if this is built in or requires customization. I have been told that EC2 is very secure, and the last reason is that I can debug my PHP code. The debug reason is that I have an error that only appears on the Hostgator server, not my local LAMP stack, and I can't debug it there, so I should be able to when I move to EC2.
I have done a lot of looking around online and I can't find anything comprehensive about what to set up. I have been reading (some of you may think otherwise); however, I am overwhelmed by the amount of information, as it is either far too complicated, discussing theory that I do not care about, or too easy, not discussing how to use anything other than a generic install of a LAMP stack on EC2 without using the other services.
I have seen http://bitnami.com/stack/lamp/cloud/amazon but do not think this is what I want, as again the EC2 instance has a MySQL database and it is not using RDS.
If someone can point me in the direction of a comprehensive guide to setting up a solid LAMP stack on AWS (maybe a book has even been written), that would be great, as I found the Amazon docs did not go into much detail and told me how to do things but not why I should do them or what purpose they had.
Thanks
I'll start by answering your questions first. As you are a newbie, I suggest you don't pressure yourself into learning all of AWS at once; you can keep migrating slowly and keep discovering the magic of the cloud.
Q. When the EC2 instance goes down it will take all the data with it, and I cannot have that happen. I have also read that when it dies it won't do anything and you have to start up the Apache server again; it is not automatic?
A. When an EC2 instance goes down (which could mean a manual shutdown by you, an AWS network outage, or some other instance issue), only ephemeral data (data in RAM, such as sessions) is lost; whatever is on disk will remain on disk, and the instance will be available again as soon as the problem is resolved.
Apache will start itself when an instance restarts, and it remains up until you manually shut it down or some other issue brings it down.
Q. Will I need S3 for static things such as my PDFs and images, as well as RDS for my MySQL database?
A. It's good practice to keep static stuff on S3, but it's not a necessary thing to do; you can set up FTP or manage your static content the way you are used to, such as keeping it in a folder of your website.
You don't necessarily need RDS to have a MySQL database. I have a process running on AWS with around 40 million transactions a day, and I do it with a normal MySQL server on an EC2 instance.
However, having RDS takes the daily backup and index-maintenance hassles off your hands.
Q. My domain name is registered elsewhere, so I believe I need to use Route 53 as well?
A. Again, not a necessary thing. You can give your EC2 instance or Elastic Load Balancer a static public IP, then go to your domain manager and change the A or CNAME records to point at it, and you'll be up and running in no time.
Q. I want to use AWS for a few reasons: firstly, it can scale, which is really important, but I am not sure if this is built in or requires customization.
A. It can scale really well, but it depends on how you want it to scale, and it's highly customizable.
There are 2 kinds of scaling:
Vertical - you change your instance type to get better disk, CPU, RAM, or network performance, but this requires you to stop your EC2 instance and change its type, which means a downtime of around 10 minutes while you do so.
Horizontal - you can put your website (EC2-based) behind a load balancer (ELB, Elastic Load Balancer) and add/remove instances to/from it as and when you deem suitable, or you can have an auto scaling policy to do it automatically depending on the load at your web server.
Security? - You can rest assured it is very well secured, so much so that I would bet my life on a properly secured EC2 instance; I can swear by Linux Thor that it works, and it works like a charm.
Debugging? - I suggest you debug by classic means: keep logs of errors and so on, treat EC2 like a normal machine, and slowly learn the tricks of the trade.
Now let's set up a basic, solid LAMP stack for ourselves. I am assuming that you have a ready Ubuntu instance and can SSH to it; in case you haven't been able to make one, see this.
Basically:
1. Create security groups - this is your firewall; it determines which ports are open and also which EC2 instances can talk amongst themselves.
2. Create an EC2 instance - any Ubuntu instance will do.
Then access your instance using SSH - SSH is basically a secure terminal connection to your EC2 machine, secured by a key file (.pem file); whoever has it can access your machine's data, so keep it very, very safe. You can't afford to lose it.
3. Install LAMP using the tasksel utility.
4. Set up a public IP for yourself (costs a dollar per month) - you can redirect your www.example.com traffic to this IP using the domain manager of your DNS provider (GoDaddy or someone alike, I suppose).
I think this should be enough to get you started with AWS.
Just to be safe and have a copy of your data, make an AMI of your EC2 instance with all the data on it. An AMI is the image from which you can make a similar or better instance in 10 minutes flat (or even less).
You will pay for: the instance type you chose, the public IP, traffic if it goes beyond a certain level (usually very, very cheap), disk usage (8 GB is the default disk), and the AMI volume.
Have fun with AWS.
To retain data during downtime, make sure you use EBS storage. It's the default nowadays. In the past, before EBS, instance storage was the default and you would lose data once the server went down; with EBS storage, data is retained through a shutdown.
You can go one of the following two routes, depending upon your needs.
1. Use AWS Elastic Beanstalk (http://aws.amazon.com/elasticbeanstalk/) if you do not need to install anything additional. It's super easy, it's similar to Google Apps, and you can deploy your app quickly. You do not get a server, just a place to deploy your app. You have to use RDS for the database and S3 for storage; you cannot store locally on the server where you are running.
2. Use an EC2 server with a static IP address. You can get pre-configured LAMP stacks from the Marketplace. I use Bitnami cloud stacks for AWS, which come pre-configured with LAMP and many other apps. Just use their free account to create a micro instance for your PHP app, select a server, and you are good to go. http://bitnami.com/cloud
You do not need to use Route 53 unless you need to manage DNS programmatically. You can just point your domain at the EC2 server by adding an entry in your DNS (GoDaddy or whoever is your domain name provider).
The Bitnami service also allows scheduled backups, but if you are not storing anything locally, you do not need frequent backups.
Make sure you use the Multi-AZ option in RDS, which is more reliable. When you provision a Multi-AZ DB instance, Amazon RDS automatically creates a primary DB instance and synchronously replicates the data to a standby instance in a different Availability Zone (AZ). Amazon RDS also automatically patches the database software and backs up your database, storing the backups for a user-defined retention period and enabling point-in-time recovery up to the last 5 minutes.
I hope this helps.
You could use DynamoDB (http://aws.amazon.com/dynamodb/pricing/) with LAMP, without MySQL, for storage. Having the database on the same box can almost never give you reliability; this way you will not lose your data, whatever your application box goes through. You can even read your application config from DynamoDB.
http://aws.amazon.com/documentation/dynamodb/
http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/SettingUpTestingSDKPHP.html
Do I need to use EC2 with DynamoDB?
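If you go that route, reading config from DynamoDB in PHP looks roughly like this, assuming the aws/aws-sdk-php package; the region, table, and key names are placeholders:

<?php
// Fetch one item from DynamoDB. The table must exist already, and the
// instance needs credentials (e.g. an IAM role).
require 'vendor/autoload.php';

use Aws\DynamoDb\DynamoDbClient;

$ddb = new DynamoDbClient(['region' => 'us-east-1', 'version' => 'latest']);

$result = $ddb->getItem([
    'TableName' => 'app-config',
    'Key'       => ['name' => ['S' => 'feature_flags']],
]);

print_r($result['Item'] ?? null);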
You won't lose data when the server is down. Just make sure you select an EBS volume, and not instance store.
You can get a ready-made server from the AWS Marketplace. I used the following for my projects, but there are many other pre-configured servers available.
https://aws.amazon.com/marketplace/pp/B007IN7GJA/ref=srh_res_product_title?ie=UTF8&sr=0-2&qid=1382655655469
This with an RDS server is what you need. We use this all the time for production servers and have never had any issues.
Here are two guides that look good to me:
http://shout.setfive.com/2013/04/05/amazon-aws-ec2-lamp-quickstart-guide-5-steps-in-10-minutes/
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/install-LAMP.html
If learning the Linux command line isn't your thing, you should consider going "up the stack" to a PaaS (Platform as a Service). These are things like Heroku, Google App Engine, and Elastic Beanstalk.
The trade-off between Infrastructure as a Service (IaaS like EC2) and a Platform as a Service (PaaS like Heroku):
- PaaS is quicker to get started with and there is less to learn. IaaS requires you to know the entire stack from the start (or hire/rent a sysadmin).
- PaaS usually gets more expensive as you get bigger compared to IaaS (but it depends).
- PaaS has less control (you can't choose the language version, so you can't upgrade to get around a specific bug.)
- IaaS can literally do anything (it's just a Linux box)
- IaaS allows for more tuning (upgrade libraries to get features, switch to different instance type to trade off RAM for CPU, run HipHop for speed, add caching layers, etc)
You have a few choices:
Use only EC2. Install Apache+MySQL and your dynamic website on EC2. This will be very similar to setting it up on Hostgator except you are running a full server.
Use EC2 for "compute" (that is, the dynamic part of the site) and S3 for storage. This doesn't differ much from #1 above, except that you are using S3 for static file storage - which is great if you are expecting to host a lot of static content (multimedia, etc)
Set up your website using Amazon Elastic Beanstalk (which now supports PHP). However, if you go this route, you will need to host your database somewhere - which will likely be RDS.
I recommend going with #1. There is nothing wrong with that - yes, if EC2 goes down, it will take down your site with it, but to alleviate that, you can run two servers in two different regions (one in US East and one in US West) - I don't think two EC2 regions have ever gone down at the same time.
UPDATE: If you are concerned about backup/restore and making sure your data is safe, I recommend the following (I do this with a site in production on EC2):
Put your website code into Git/SVN source control, and pull from there
Backup your MySQL database to Amazon S3 regularly (at least once a day) using mysqldump
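A minimal sketch of that nightly backup job, assuming the aws/aws-sdk-php package; the bucket, region, database name, and credentials are placeholders:

<?php
// Dump the database, compress it, and upload the file to S3. Run from cron.
require 'vendor/autoload.php';

use Aws\S3\S3Client;

$file = '/tmp/backup-' . date('Y-m-d') . '.sql.gz';
exec('mysqldump -u backup -psecret appdb | gzip > ' . escapeshellarg($file));

$s3 = new S3Client(['region' => 'us-east-1', 'version' => 'latest']);
$s3->putObject([
    'Bucket'     => 'example-db-backups',
    'Key'        => basename($file),
    'SourceFile' => $file,
]);
unlink($file); // keep the instance disk clean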
I think you have some misconceptions.
If EC2 as a whole goes down (which is rare) then you do NOT lose your data. The site would simply be offline until Amazon restored services.
If your particular instance goes down due to a hardware issue, then you might lose data. This is no different than if your own server went belly up. The right answer is to simply make normal backups of your database and store them in S3 or some other location. Generally you will want to create and attach a second EBS volume to your DB server which holds the DB files as well.
If you Terminate your instance then, yes, you will lose everything on it. However, Amazon has the ability to make terminating instances difficult so you don't do it accidentally.
Stopping your instance is like turning the computer off. The difference being that you can remotely turn it back on when you want. You can only stop EBS backed instances - which means that your data is safe while it is offline.
I would highly suggest that if you are uncomfortable with setting up and maintaining your own server that you should investigate fully managed hosting instead. EC2 is awesome, we've been on it for 2 years. However, we have a strong tech team that understands what it takes to run and manage servers.

Can local intranet application (built on php) query mysql database stored in offsite location?

I have a local intranet application which runs off a basic WAMP server in our offices. Every morning, one of our team members manually syncs our internal mysql db with our external mysql db (where our online enrollments occur). If a change is made during the day on the intranet application, it is not reflected on the external db until the following day.
I am wondering if it is possible to (essentially) tunnel to an external MySQL connection from, say, a WAMP or XAMPP server within our offices and work in 'real time'.
Anybody had any luck or advice?
Yes
Replication enables data from one MySQL database server (the master) to be replicated to one or more MySQL database servers (the slaves). Replication is asynchronous - slaves need not be connected permanently to receive updates from the master. This means that updates can occur over long-distance connections and even over temporary or intermittent connections such as a dial-up service. Depending on the configuration, you can replicate all databases, selected databases, or even selected tables within a database.
If you use the external server directly, performance is likely to suffer. A Gigabit LAN might be a thousand times faster than your Internet connection - particularly the upload speed of an ADSL connection.
Just make your internal application use the database from the external one. You may need to add permission on the external server to allow connections from your internal server IP, but otherwise this is just like having a web server and separate DB server that need to access each other.
I can't really tell you how to do this here - it all depends on your specific configuration, something that I would think is a little too complicated (and too specialized) to figure out on SO.
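Assuming the external host allows connections from your office's IP, the intranet app only needs its connection string changed; a minimal PDO sketch (the host, credentials, and database are placeholders):

<?php
// Point the intranet WAMP app directly at the external MySQL server. The
// external server must first grant access to the office's public IP, e.g.
// GRANT ALL ON enrollments.* TO 'intranet'@'203.0.113.10';
$pdo = new PDO(
    'mysql:host=db.external-example.com;dbname=enrollments;charset=utf8mb4',
    'intranet',
    'secret',
    [PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION]
);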
