I’ve been working on a cloud-based (AWS EC2) PHP web application, and I’m struggling with one issue when it comes to working with multiple servers (all under an AWS Elastic Load Balancer). On one server, when I upload the latest files, they’re instantly in production across the entire application. But this isn’t true with multiple servers – you have to upload files to each of them, every time you commit a change. This could work alright if you don’t update anything very often, or if you just have one or two servers. But what if you update the system multiple times in one week, across ten servers?
What I’m looking for is a way to ‘commit’ changes from our dev or testing server and have them ‘pushed’ out to all of our production servers immediately. Ideally the update would be applied to only one server at a time (even though it just takes a second or two per server), so the ELB will not send traffic to a server while its files are changing and no production traffic is disrupted.
What is the best way of doing this? One of my thoughts would be to use SVN on the dev server, but that doesn’t really ‘push’ to the other servers. I’m looking for a process that takes just a few seconds to commit an update and then begins applying it to the servers. Also, for those of you familiar with AWS, what’s the best way to keep an AMI up to date so the auto-scaler always launches new instances with the latest version of the software?
There have to be good ways of doing this….can’t really picture sites like Facebook, Google, Apple, Amazon, Twitter, etc. going through and updating hundreds or thousands of servers manually and one by one when they make a change.
Thanks in advance for your help. I’m hoping we can find some solution to this problem; what has to be at least 100 Google searches by my business partner and me over the last day have been largely unsuccessful.
Alex
We use scalr.net to manage our web server and load balancer instances, and it has worked pretty well so far. We have a server farm for each of our environments (two production farms, staging, sandbox). We have pre-configured roles for our web servers, so it's super easy to launch new instances and scale when needed. The web servers pull code from GitHub when they boot up.
We haven't completed all the deployment changes we want to do, but basically here's how we deploy new versions into our production environment:
We use Phing to update the source code and run the deployment on each web server. We created a task that executes a git pull and runs database changes (the dbdeploy Phing task). http://www.phing.info/trac/
We wrote a shell script that executes Phing and added it to Scalr as a script. Scalr has a nice interface for managing scripts.
#!/bin/sh
cd /var/www
phing -f /var/www/build.xml -Denvironment=production deploy
Scalr has an option to execute scripts on all the instances in a specific farm, so for each release we just push to the master branch on GitHub and execute the Scalr script.
We want to create a GitHub hook that deploys automatically when we push to the master branch. Scalr has an API that can execute scripts, so it's possible.
Have a good look at KwateeSDCM. It enables you to deploy files and software on any number of servers and, if needed, to customize server-specific parameters along the way. There's a post about deploying a web application on multiple Tomcat instances, but it's language-agnostic and will work for PHP just as well, as long as you have SSH enabled on your AWS servers.
I want to deploy a Laravel app from Gitlab to a server with no downtime. On the server, I serve the app with php artisan serve. Currently, I'm thinking that I would first copy all the files, then stop the old php artisan serve process on the server and start a new one on the server in the directory with the new files. However, this introduces a small downtime. Is there a way to avoid this?
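Concretely, the workflow described above is something like the following sketch (the app path and port are placeholder assumptions); the gap between stopping the old process and the new serve coming up is the downtime in question:
#!/bin/sh
# sketch of the deploy described above; /var/www/app and port 8000 are assumptions
rsync -a --delete /tmp/new-release/ /var/www/app/
# stop the old dev server, then start a new one from the updated directory
pkill -f "php artisan serve" || true
cd /var/www/app && nohup php artisan serve --host=0.0.0.0 --port=8000 >/dev/null 2>&1 &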
If you are serving from a single server, you cannot achieve zero downtime. If avoiding downtime is crucial for your system, then use two servers and load balance between them intelligently. Remember, no hosting or VPS provider guarantees 100% availability, so even if the deployment process itself is zero-downtime, ironically, your site may still go down at some other time. What I'm saying is: if the tiny moment of restarting php artisan serve matters, then scale up to more than one server.
A workaround would be to use a third-party service (like CloudFlare) that can detect when the server is down and notify users when it is back; I personally use that.
If you really want full uptime, Docker with Kubernetes is the technology to look at.
After many hours of reading documentation and messing around with Amazon Web Services, I am unable to figure out how to host a PHP page.
Currently I am using the S3 service for a basic website, but I know that this service does not support dynamic pages. I was able to use Elastic Beanstalk to get the sample PHP application running, but I really have no idea how to use it. I read up on some other services, but they don't seem to do what I want or they are just way too confusing.
So what I want to be able to do is host a website with Amazon that has dynamic PHP pages. Is this possible, and which services do you use?
For a PHP app, you really have two choices in AWS.
Elastic Beanstalk is a service that takes your code and manages the runtime environment for you - once you've set it up, it's very easy to deploy, and you don't have to worry about managing servers - AWS does pretty much everything for you. You have less control over the environment, but if your application will run in EB then this is a pretty easy path.
EC2 is closer to conventional hosting. You need to decide how your servers are configured & deployed (what packages get installed, what version of linux, instance size, etc), your system architecture (do you have separate instances for cache or database, whether or not you need a load balancer, etc) and how you manage availability and scalability (multiple zones, multiple data centers, auto scaling rules, etc).
Now, those are all things that you can use - you don't have to. If you're just trying to learn about PHP in AWS, you can start with a single EC2 instance, deploy your code, and get it running in a few minutes without worrying about any of the stuff in the previous paragraph. Just create an instance from the Amazon Linux AMI, install Apache & PHP, open the appropriate ports in the firewall (AKA the EC2 security group), deploy your code, and you should be up & running.
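For example, on an Amazon Linux instance that sequence is roughly the following sketch (package names are the distro defaults; the app path and security-group name are placeholder assumptions):
# on the instance: install Apache + PHP and start the web server
sudo yum install -y httpd php
sudo service httpd start
sudo chkconfig httpd on
# copy your code into the default document root (path assumed)
sudo cp -R ~/myapp/* /var/www/html/
# from a machine with the AWS CLI: open port 80 in the instance's security group
aws ec2 authorize-security-group-ingress --group-name my-security-group --protocol tcp --port 80 --cidr 0.0.0.0/0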
Your PHP must run on EC2 machines.
Amazon provides great tools to make life easy (Beanstalk, ECS for Docker, ...), but in the end, you own EC2 machines.
There is no such thing as a place where you can put your PHP code without worrying about anything else ;-(
If you are having problems hosting PHP sites on AWS, then you can go with a service provider like Cloudways. They provide managed AWS servers with one-click installs of PHP frameworks and CMSes.
I am currently working with a startup that is in a transitional phase.
We have a PHP web application and use continuous integration, with the standard unit and regression tests (Selenium) run via Jenkins. We have a development server which hosts newly committed code and a staging server that holds the build ready for deployment to the production server. The way we deploy to the production server is through a rudimentary script that pulls the latest SVN copy and overwrites the changes in the htdocs directory. Any SQL changes are applied via the sync feature from MySQL Workbench.
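For reference, the rudimentary script mentioned above is essentially something along these lines (the repository URL and htdocs path are placeholder assumptions):
#!/bin/sh
# overwrite htdocs with the latest SVN copy; URL and path are assumptions
svn export --force https://svn.example.com/myapp/trunk /var/www/htdocs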
This setup works fine for a very basic environment but we are now in a transition from single server setups to clusters due to high traffic and I have come up against a conundrum.
My main concern is: how exactly do we switch deployment from a single server to a cluster of servers? Each server will have its own htdocs and SQL database, and under the current setup I would need to execute the script on every server, which sounds like an abhorrent thing to do. I was looking into Puppet, which can be used to automate sysadmin tasks, but I am not sure whether it is a suitable approach for deploying new builds to a cluster.
My second problem is to do with the database. My assumption is that the code changes will be applied immediately, but since we will have DB master/slave replication, my concern is that the database changes will take longer to propagate and thus introduce inconsistencies during deployment. How can the code AND the database be synchronised at the same time?
My third problem is related to the automation of database changes. Does anyone know of any way I can automate the process of updating a DB schema without manually having to run the synchronisation? At the moment I have to run the Workbench sync tool by hand, whereas I am really looking for a commit-and-forget approach: I commit, and the DB changes are automatically synchronised across the dev and QA setups.
I am running a similar scenario, but I use a cloud provider for my production environment so that I do not need to worry about DB replication, multiple server instances, etc. (I am using Pagoda Box, but AWS would also work perfectly fine).
I would recommend creating real database migrations so you can track them via SVN or something else. That way, you can also specify how to roll back. I am using https://github.com/doctrine/migrations, mainly because I use Doctrine as my ORM.
If you have a migration tool, you can easily add a command to your deployment script that runs the migrations after deployment.
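With doctrine/migrations, for example, that step could be as small as the following sketch (the binary name and options depend on the migrations version and your configuration):
# run any pending migrations right after the code has been updated
php vendor/bin/doctrine-migrations migrations:migrate --no-interaction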
I don't think database synchronisation is a big issue during deployment, though that may depend on the actual infrastructure you're using. Cloud providers like Pagoda Box or AWS take care of it for you.
I am attempting to figure out a good way to push out a new commit to a group of EC2 server instances behind an ELB (load balancer). Each instance is running Nginx and PHP-FPM.
I would like to perform the following workflow, but I am unsure of a good way to push out a new version to all instances behind the load balancer.
Dev is done on a local machine.
Once changes are ready, I perform a "git push origin master" to push the changes to Bitbucket (where I host all my git repos).
After being pushed to Bitbucket, I would like to have the new version pushed out to all EC2 instances simultaneously.
I would like to do this without having to SSH in to each instance (obviously).
Is there a way to configure the remote servers to accept a remote push? Is there a better way to do this?
Yes, I do this all of the time (with the same application stack, actually).
Use a base AMI from a trusted source, such as the default "Amazon Linux" ones, or roll your own.
As part of the launch configuration, use the "user data" field to bootstrap a provisioning process on boot. This can be as simple as a shell script that runs yum install nginx php-fpm -y and copies files down from an S3 bucket or does a pull from your repo. The Amazon-authored AMIs also include support for cloud-init scripts if you need a bit more flexibility. If you need even greater power, you can use a change management and orchestration tool like Puppet, Chef, or Salt (my personal favorite).
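A minimal user-data script along those lines might look like this sketch (the bucket name, repo URL, and paths are placeholder assumptions, and the S3 copy assumes the instance has an IAM role or credentials for the bucket):
#!/bin/bash
# bootstrap sketch for the user-data field; names and paths are assumptions
yum install -y nginx php-fpm git
# either pull a packaged release from S3...
aws s3 cp s3://my-deploy-bucket/releases/current.tar.gz /tmp/release.tar.gz
mkdir -p /var/www/html && tar -xzf /tmp/release.tar.gz -C /var/www/html
# ...or clone the repo directly
# git clone https://bitbucket.org/myteam/myapp.git /var/www/html
service php-fpm start
service nginx start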
As far as updating code on existing instances: there are two schools of thought:
Make full use of the cloud and just spin up an entirely new fleet of instances that grab the new code at boot. Then you flip the load balancer to point at the new fleet. It's instantaneous and gives you a really quick way to revert to the old fleet if something goes wrong. Hours (or days) later, you then spin down the old instances.
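With a classic ELB, one way to do that flip from the CLI is to register the new instances and then deregister the old ones (the load balancer name and instance IDs below are placeholders):
# attach the new fleet, then detach the old one
aws elb register-instances-with-load-balancer --load-balancer-name my-elb --instances i-new1 i-new2
aws elb deregister-instances-from-load-balancer --load-balancer-name my-elb --instances i-old1 i-old2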
You can use a tool like Fabric or Capistrano to do a parallel "push" deployment to all the instances at once. This is generally just re-executing the same script that the servers ran at boot. Salt and Puppet's MCollective also provide similar functionality that meshes with their basic "pull" provisioning.
Option one
Push it to one machine.
Create a git hook on it (http://git-scm.com/book/en/Customizing-Git-Git-Hooks).
Make the hook run a pull on the other machines (see the sketch after this list).
The only problem is that you'll have to maintain a list of machines to run the update on.
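A minimal post-receive hook along those lines, assuming a hard-coded host list and a deploy user whose SSH key is already on the other machines:
#!/bin/sh
# .git/hooks/post-receive on the first machine; hosts, user, and path are assumptions
for host in web1.example.com web2.example.com web3.example.com; do
    ssh "deploy@$host" "cd /var/www/html && git pull origin master"
done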
Another option
Have a cron job pull from your Bitbucket account on a regular basis.
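A crontab entry like this would do it (the five-minute interval and repo path are assumptions):
# pull from the repo every five minutes; schedule and path are assumptions
*/5 * * * * cd /var/www/html && git pull origin master >> /var/log/deploy.log 2>&1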
The tool for this job is Capistrano.
I use an awesome gem called capistrano-ec2group in order to map Capistrano roles to EC2 security groups.
This means that you only need to apply an EC2 security group (e.g. app-web or app-db) to your instances in order for Capistrano to know what to deploy to them.
This means you do not have to maintain a list of server IPs in your app.
The change to your workflow would be that instead of focusing on automating the deploy on pushing to Bitbucket, you would push and then execute:
cap deploy
If you really don't want to do two steps, make an alias :D
alias shipit='git push origin master && cap deploy'
This solution builds on E_p's idea. E_p says the problem is that you'd need to maintain a server list somewhere in order to tell each server to pull the new update. If it were me, I'd just use tags in EC2 to help identify a group of servers (like "Role=WebServer", for example). That way you can just use the EC2 command line interface to list the instances and run the pull command on each of them.
for i in $(ec2din --filter "tag-value=WebServer" --region us-east-1 | grep "running" | cut -f17); do
    ssh "$i" "cd /var/www/html && git pull origin"
done
Note: I've tested the code that fetches the ip addresses of all tagged instances and connects to them via ssh, but not the specific git pull command.
You need the Amazon CLI tools installed wherever you want this to run, as well as the SSH keys for the servers you're trying to update. Not sure what Bitbucket's capabilities are, but I'm guessing this code won't be able to run there. You'll either need to do as E_p suggests and push your updates to a separate management instance and include this code in your post-commit hook, or, if you want to save the headache, you could do as I've done and just install the CLI tools on your local machine and run it manually when you want to deploy the updates.
Credit to AdamK for his response to another question which made it easy to extract the ip address from the ec2din output and iterate over the results: How can I kill all my EC2 instances from the command line?
EC2 CLI Tools Reference: http://docs.aws.amazon.com/AWSEC2/latest/CommandLineReference/Welcome.html
Your best bet might be to actually use AMIs for deployments.
Personally, I typically have a staging instance where I can pull any repo changes into. Once I have confirmed it is operating the way I want, I create an AMI from that instance.
For deployment, I use an autoscaling group behind the load balancer (it doesn't need to be dynamically scaling or anything). In a simple setup where you have a fixed number of servers in the autoscale group (for example, 10 instances), I would change the AMI associated with the autoscale group to the new AMI, then start terminating a few instances at a time in the group. So, say I have 10 instances and I terminate two to take it down to 8 instances. The autoscale group is configured to have a minimum of 10 instances, so it will automatically start up two new instances with the new AMI. You can then keep removing instances at whatever rate makes sense for your level of load, so as not to impact the performance of your fleet.
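In AWS CLI terms, that rotation looks roughly like the following sketch (the launch configuration name, group name, AMI ID, and instance IDs are placeholders):
# create a launch configuration that uses the new AMI
aws autoscaling create-launch-configuration --launch-configuration-name myapp-v2 --image-id ami-12345678 --instance-type t2.micro
# point the autoscale group at it
aws autoscaling update-auto-scaling-group --auto-scaling-group-name myapp-asg --launch-configuration-name myapp-v2
# terminate old instances a few at a time; the group replaces them using the new AMI
aws autoscaling terminate-instance-in-auto-scaling-group --instance-id i-oldinstance1 --no-should-decrement-desired-capacity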
You can obviously do this manually, even without an autoscale group by directly adding/removing instances from the ELB as well.
If you are looking to make this fully automated (i.e. continuous deployment), then you might want to look at using a build system such as Jenkins, which would allow for a commit to kick off a build and then run the necessary AWS commands to make AMI's and deploy them.
I am looking for a solution to the same problem. I came across this post and thought it was an interesting approach
https://gist.github.com/Nilpo/8ed5e44be00d6cf21f22#pc
Go to "Pushing Changes To Multiple Servers"
Basically, the idea is to create another remote, call it "production" or whatever you want, and then add multiple URLs (the addresses of all of the servers) to that remote. This can be done by editing .git/config.
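A sketch of the equivalent from the command line (the server addresses are placeholders); these commands simply write the extra push URLs into .git/config for you. Note that the first server has to be re-added as a push URL, because once a remote has any push URL, its plain url is no longer used for pushes:
# one push URL per server; addresses and paths are placeholders
git remote add production ssh://deploy@server1.example.com/var/repo/site.git
git remote set-url --add --push production ssh://deploy@server1.example.com/var/repo/site.git
git remote set-url --add --push production ssh://deploy@server2.example.com/var/repo/site.git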
Then you can run git push production <branch> and it should push out to all of the URLs listed under "production".
One requirement of this approach is that the repos on the servers need to be bare repos, and you will need a post-receive hook to update the working tree.
Here is an example of how to do that: Setting up post-receive hook for bare repo
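The smallest version of such a hook is something like this (the work-tree path and branch are assumptions):
#!/bin/sh
# hooks/post-receive in the bare repo on each server; work-tree path is an assumption
GIT_WORK_TREE=/var/www/html git checkout -f master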
I have a website with an FTP server at BigRock.com. My issue is that whenever there is a deployment, I have to manually search for and find all the files that need to be changed. Now the project is getting larger and larger, and making these kinds of changes is taking a lot of my valuable time.
So are there any software tools available for syncing with the FTP server and updating files based on changes made locally? I am not sure about the FileZilla client, since I couldn't find many options in it. I am pretty sure there is some solution for this. My project is built with Zend Framework, Doctrine ORM, and many other libraries.
If you need a one-way synchronization from local files to server, you can check this free tool: http://sourceforge.net/p/phpfilesync/wiki/Home/
It could not be much easier to install or use.
Try Allway Sync. It uses innovative synchronization algorithms to synchronize your data between desktop PCs, laptops, USB drives, remote FTP/SFTP servers, different online data storages and more. Encrypted archives are supported. Allway Sync combines bulletproof reliability with an extremely easy-to-use interface.
URL: http://allwaysync.com/
I tried it personally and it works fine for FTP and local file sync, and it is also free.
Working with Assembla, I found the FTP tool in it. Here is the reference link:
http://blog.assembla.com/assemblablog/tabid/12618/bid/78103/Deploying-a-Web-site-is-easy-with-Assembla-s-FTP-tool-Video.aspx
It's easy to work with:
Add the FTP tool using the Tools section of your Admin tab.
Deploy code to any server with ftp or sftp installed.
Set the deploy frequency to manual (you push a button) or schedule automatic deployments for every commit, hourly, daily, or weekly.
Only changed files are deployed for fast and accurate deployments.
Easily roll back and deploy prior revisions.
Add multiple servers for staging and production releases or simply to deploy different repository directories to different locations.