How to create a Vagrantfile that matches Elastic Beanstalk?

We're putting together a PHP site that we plan on deploying to Elastic Beanstalk, and I'd like to create a virtualized dev environment to match production. I've got a few questions about it, though.
So it looks like I can use vagrant-aws and feed it a custom AMI - presumably one that would be created by Beanstalk. Will this actually work, though? It seems like it uses rsync to copy any new files up. Is that going to slow down development as I wait for it to sync up?
More importantly, it seems like this relies on a network connection if you want to do any development. Is it possible to take it offline so I can develop without a connection (e.g. on a plane, on a bus, etc.)?
As an alternative, has anyone put together a Vagrantfile that matches the packages and setup of Elastic Beanstalk? I couldn't find anything in my searches, but maybe I was looking in the wrong spot?
Finally, are there any recommendations for pulling off this sort of dev testing? Am I thinking about this the right way, or is there a better way to do this?

I'd say that the cleanest way is to use a tool like Packer to create an image for both EC2 and Vagrant. That way you control the image you're using and you know it's the same for both EC2 and Vagrant.
Elastic Beanstalk can be handed a custom AMI, and if your devs have the Vagrant box downloaded, they can work while disconnected.
It's not the easiest option since you have to configure all the packages on the image yourself but it's a good way to keep your dev boxes and production boxes in sync.
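A minimal sketch of that workflow, assuming a single Packer template (the file and box names below are made up) with an amazon-ebs builder for the AMI and a VirtualBox builder plus vagrant post-processor for the box:

    # Build both artifacts from one Packer template (template name is hypothetical)
    packer build webapp.json          # outputs an AMI ID and a .box file
    # Register the box locally so developers can work offline
    vagrant box add webapp ./webapp.box
    # Point the Vagrantfile's config.vm.box at "webapp", and hand the AMI ID
    # to Elastic Beanstalk as the custom AMI for the environment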

Related

Run same web application on multiple linux systems (AWS EC2) with version control

I'm wondering what the best approach is to run an application on multiple Linux servers.
We're using AWS EC2 Linux instances in different zones.
The web application is the same in each zone, but each one uses a different database, which is defined in a single config file.
At the moment, when I update the application I have to manually update every single server, which is a lot of work and carries a high risk of mistakes.
What service or approach could I use to easily update the application on all servers to the latest version, while excluding some files? I was thinking about GitHub, but I'm not sure if this is the right thing to use in a production environment.
Does AWS have a solution for this?
Thank you,
Roel.
What service or approach could I use to easily update the application on all servers to the latest version, while excluding some files?
The AWS services designed for this sort of thing are AWS CodePipeline/CodeBuild/CodeDeploy. Your code could be stored in a Git repository in AWS CodeCommit, or GitHub.
I was thinking about GitHub, but I'm not sure if this is the right thing to use in a production environment.
If you mean GitHub Actions, there are plenty of people using that in production. If you have specific concerns about using that in production then include those concerns in your question.
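For illustration, once a CodeDeploy application and deployment group exist (the names below are made up), a new revision can be rolled out to all instances with one CLI call:

    # Hypothetical names; the revision is a zipped bundle containing an appspec.yml
    aws deploy create-deployment \
        --application-name my-php-app \
        --deployment-group-name production \
        --s3-location bucket=my-deploy-bucket,key=releases/app-1.2.3.zip,bundleType=zip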

Creating docker images from puppet for jenkins

I'm currently trying to set up some continuous integration testing for the company I work for. The system is a LAMP-based web application, and I'm trying to get Jenkins set up to do some continuous integration testing.
The current stumbling block is finding a Docker image to run Jenkins with PHP; most of the images I've found are out of date (they haven't been updated in a year) and then don't work when you try to run through the Jenkins setup.
We currently have a Puppet configuration for our production servers, and my current line of thinking is to use puppet image build to create a Docker image based on our production setup, with the added bonus of like-for-like testing.
The problem here, though, is that we don't want to install Jenkins on all our production servers. Is there a way to add Jenkins to an already-built Docker image (preferably via the CLI)? And am I tackling this the right way, or am I putting the cart before the horse, i.e. should I be looking at trying to put PHP on a working Jenkins image?
Where I work, we have a... few lines of Puppet that we maintain, and we've been discussing internally how to leverage that existing Puppet code in Docker. I know nothing of the project you've referenced here, but it's a bit of a bad smell that it has sat dormant for nearly a year without a commit. We're taking a hard look at HashiCorp's Packer; for your purposes, the free and open-source version should do just fine!
But, to the second half of your question: the cart is probably before the horse right now. I'm confused as to what you're trying to accomplish by installing Jenkins inside the Docker image to conduct your testing. I think what you're looking for is a longer-running Jenkins server (with or without build agents) to run through setup, unit tests, integration tests, and so on. If you have the resources, GitLab CE with GitLab CI is a pretty simple way to get started in developing an on-premise CI/CD workflow.
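If you want to evaluate the GitLab CE route, the official Docker image is a quick way to stand up a throwaway instance (ports and volume mounts are trimmed here; this is not a production-ready setup):

    # Quick throwaway GitLab CE instance for evaluation
    docker run -d --name gitlab \
        -p 8080:80 -p 2222:22 \
        gitlab/gitlab-ce:latest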

How should I sort out versioning and offline testing on a website?

I am an avid web dev hobbyist and freelancer. Up until now I have simply edited websites live (putting a maintenance message up while the work is being done), and all my projects so far have been very small.
E.g. I make a site, show them, take the money and go; I've never had to work on a site after it's gone live.
Now my new project is pretty big, and I know I will have to edit it after it's gone live, and maybe have a small team of devs (at the moment it's just me).
So how do people handle this professionally? I know I will need some kind of *AMP stack, since I run an Apache server. I've also heard that people use GitHub for versioning, but I'm not really sure, because apparently it's not SVN?
Thanks
PS: I have a Windows 7 PC, so no Mac apps please.
Up until now I simply edit the website live
Terrible in my book ;)
So how do people handle this professionally?
First you need to set up a development server (it would be best to keep it as close as possible to the expected live environment). On this server you would install all the software you need.
You may also want to set up a staging server.
I know I will need some kind of *AMP stack
I hope you are not talking about those one-click installers. If you want to do it professionally, you should install everything yourself; that way you can set it up the way you need it.
I've also heard that people use GitHub for versioning, but I'm not really sure because apparently it's not SVN?
GitHub is just a website. What you are looking for is Git or SVN for versioning. You could also set up a Git or SVN server locally instead of using services like GitHub. Basically, versioning means that when somebody makes a change to the code, he/she commits that change. This way it is easy to keep track of changes in the codebase (what was changed, when it was changed, and by whom).
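For example, the day-to-day flow with Git is just a handful of commands (the path, file name and commit message below are placeholders):

    cd /path/to/your/site        # your working copy
    git init                     # one-time: turn the folder into a repository
    git add index.php            # stage a changed file
    git commit -m "Fix contact form validation"   # record the change
    git log --oneline            # see what was changed, and when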
A local *AMP stack (LAMP or WAMP) for development
An intranet system for testing and maybe staging
And of course the live system
Version control: I prefer Git. Of course you can use SVN too, but... let's just say: it's SVN.
Make changes locally, test those changes locally
Everything's fine? Push it to the "master" VCS repository
New version ready (or it's "Sunday-night release time")? Push all that stuff to test/stage
Everything's fine there too? Push it to the live system
That's very condensed, of course, but it should give you an idea.
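A minimal sketch of that flow, assuming Git and SSH access to the stage and live boxes (hostnames and paths are placeholders; pulling on the servers is just one simple way to move the code):

    # After local testing: record and share the change
    git commit -am "Describe the change"
    git push origin master
    # Release time: roll master out to the stage box, then (once verified) to live
    ssh deploy@stage.example.com 'cd /var/www/site && git pull origin master'
    ssh deploy@live.example.com  'cd /var/www/site && git pull origin master'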
The tool you use to manage your software versions is not that important. Use Git, or SVN, or whatever you like most. But use _one_.
Equally important is that you run the site in two environments, a test and a live system, kept strictly apart. Both systems have to be very close in their layout, and all changes must first be made in the test system, verified, and then made in the same manner in the live system. Do not allow changes to be made only on the live system ("because it's just a small change"). No exceptions.
Then think about deployment: how will you transfer changed files to the target system? You need routines for this that, once started, run through without forgetting a step in between.
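Even something as small as a wrapper around rsync counts as such a routine, as long as everyone uses it instead of copying files by hand; a sketch (host and paths are placeholders):

    # deploy-to-test.sh -- hypothetical example of a repeatable deploy routine
    rsync -avz --delete --exclude='.git' ./ deploy@test.example.com:/var/www/site/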
Firstly you need some kind of versioning system: either SVN or Git. GitHub is simply an online service that provides managed Git repositories. Secondly you need a development server.
If it were just you doing development, you could host both of these on your local desktop PC, but since other developers are going to be joining, you need a remote server. If you don't want to be running a server out of your home, the best option is a VPS (virtual private server) on which you can install Git, Apache, etc. and anything else you need.
As for development software, take your pick; there are loads of options. A common choice is the NetBeans IDE and TortoiseGit combo. You use NetBeans to develop your code, automatically uploading to your development server, then you use TortoiseGit to commit and sync changes.
Only when you're ready to go live do you copy the code from the dev server to the production server.
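Setting up the shared repository on such a VPS is only a couple of commands; a rough sketch, assuming a "deploy" user with write access to /srv/git (names and paths are placeholders):

    # On the VPS: create a bare repository to push to
    ssh deploy@vps.example.com 'git init --bare /srv/git/site.git'
    # On each developer machine: point the working copy at it
    git remote add origin deploy@vps.example.com:/srv/git/site.git
    git push -u origin master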

Deploying a PHP webapp to multiple EC2 instances behind a Elastic Load Balancer

My question is basically two questions, but since they are closely related I thought it made sense to ask them en bloc.
Case:
I am running a web application which is distributed over multiple AWS EC2 instances behind an AWS Elastic Load Balancer.
Intended goals:
a) When deploying new app code (PHP), it should be automatically distributed to all EC2 instances.
b) When new EC2 instances are added, they should automatically "bootstrap" themselves with the latest app code.
My thoughts so far:
ad a)
Phing (http://phing.info) is probably the answer for this part. I would probably add multiple targets, one for each EC2 instance, and when running a deploy it would deploy to all machines; unfortunately probably not in parallel. But that might even be beneficial when scripting it in such a way that each EC2 instance is "paused" in the load balancer, upgraded, "unpaused" again, and then it's on to the next instance.
ad b)
Not sure how I would achieve that. In a conventional "hardware-based setup" I would probably have had an "app code" volume on a network storage device, and when adding a new server I'd simply attach that volume. When deploying new app code I'd have just one deploy operation against this volume. So I need some "central storage" from which the newly bootstrapped machine/instance downloads its app code. I thought about Git, but after all Git is not a deploy tool and probably shouldn't be forced to be used as one.
I'd be happy to see your setups for such tasks and hear your hints and ideas for such a situation.
Thanks,
Joshua
This could be done using Phing. However, I'm not sure why you want new instances to automatically retrieve the app code. Do you get extra instances very often? And in order for a) to deploy code to several instances, it would still need to know about them?
This setup requires a master deploy server and uses a push strategy. The master server needs phing, any required phing packages, and optionally ssh keys for the EC2 instances.
Suggestion for a)
(This is just a general outline of the phing tasks required)
Retrieve the instance list (either from a config file or from supplied parameters)
Export the app code from the repository (e.g. Subversion) to the master server
Tar the app code
scp the tarball to all EC2 instances (into a deploy folder)
Unpack the tarball on the EC2 instances via rsh
Update the symbolic link on the EC2 instances via rsh so the webserver folder points at the new deploy folder
Clear any caches on the webserver
The above could be called after you have made a new release.
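A rough shell equivalent of those steps (hostnames, paths and the release name are placeholders; ssh is shown where the outline says rsh):

    RELEASE=app-2012-06-01.tar.gz                  # hypothetical release tarball
    DEST=/var/deploy/releases/${RELEASE%.tar.gz}
    for HOST in ec2-host-1 ec2-host-2; do          # instance list from step 1
        scp "build/$RELEASE" "deploy@$HOST:/var/deploy/"
        ssh "deploy@$HOST" "mkdir -p $DEST \
            && tar xzf /var/deploy/$RELEASE -C $DEST \
            && ln -sfn $DEST /var/www/current"     # webserver docroot points at 'current'
        # (clear application caches here if needed)
    done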
Suggestion for b)
This could be achieved by running the Phing script every couple of hours, having it log in to the EC2 instances and check for the app code. If it doesn't find it, it deploys the newest final release.
This, of course, requires the EC2 instances to be set up correctly with regard to webservers, config files, etc. (However, this could also be achieved via remote shell from Phing.)
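For instance, a crontab entry on the master server could drive that periodic check (the build file path and target name here are hypothetical):

    # Every two hours: check each instance and deploy if the app code is missing
    0 */2 * * * phing -f /opt/deploy/build.xml ensure-appcode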
I've previously used similar setups, but haven't tried it with services like EC2.
a) Take a look at Capistrano, and since you're not using Ruby (and Rails), use the railsless-deploy plugin. Capistrano can deploy to multiple servers.
b) I haven't got any experience with this, but I wouldn't be surprised if there's a plugin/gem for Capistrano that can pretty much do this for you.
I believe phing is a good tool for this. Capistrano might also help.
I deploy my application in a very similar environment. So far I've been using simple bash scripts, but I'll probably be moving towards a Phing-based solution, mostly because of the difficulty involved in developing shell scripts (you have to know another syntax that's not very flexible, not to mention not cross-platform) and because of the parallel processing available in Phing.
Phing is a great tool for managing your deployment tasks; that pretty much covers most of a).
We use Opscode Chef for infrastructure management. It has a deploy_revision resource that supports both SVN and Git repositories for application code deployment, and knife (the main Chef command-line tool) has an EC2 plug-in that lets you, with one command, spin up a new EC2 instance with whatever roles and environments you have defined in Chef and deploy your application code.
For managing this with multiple EC2 instances behind an ELB, we have used the Python boto library to write very simple scripts that connect to the ELB, get a list of all instances attached to it, and then, one by one, remove an instance from the ELB, run the Chef update on that instance (which deploys the new application code and any machine configuration changes), re-attach the instance to the ELB, and move on to the next instance.
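As an illustration only, the same rolling pattern can be sketched with the AWS CLI's classic-ELB commands instead of boto (the load balancer name, SSH user and the chef-client call are assumptions):

    LB=my-load-balancer                            # assumption: a classic ELB name
    IDS=$(aws elb describe-load-balancers --load-balancer-names "$LB" \
        --query 'LoadBalancerDescriptions[0].Instances[].InstanceId' --output text)
    for ID in $IDS; do
        aws elb deregister-instances-from-load-balancer --load-balancer-name "$LB" --instances "$ID"
        HOST=$(aws ec2 describe-instances --instance-ids "$ID" \
            --query 'Reservations[0].Instances[0].PublicDnsName' --output text)
        ssh "ec2-user@$HOST" 'sudo chef-client'    # "run the update on that instance"
        aws elb register-instances-with-load-balancer --load-balancer-name "$LB" --instances "$ID"
    done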

Is there a way to use SVN for web development in a Mac shop that uses Coda?

So we are pushing to create good processes in our office. I work in a web shop that has been doing websites for over a decade, and we don't use version control. I know! It's bad; not my fault. I'm the guy with a software engineering background pushing for this at a minimum.
The tech lead has been looking into it. We all use Mac workstations and mostly use Coda for editing, since it is a great IDE. It has SVN support built in, but it expects to be working with local files. We're exploring mounting the web directory as a local network drive with an SFTP tool.
We are a LAMP shop, BTW.
I am wondering what the model is here. I think we would typically check out the whole site to our local machines, where we have Apache running, and then test it there? This isn't how we work yet; we do everything on the server. We've looked at checking things in and out, but some files are owned by the apache user, and the ownership changes when I check them in, because I'm not apache.
I just want to know a way to do this that works given my circumstances. It would be nice not to have to run Apache locally.
You might want to check out the Coda mailing list and ask there. There are lots of Coda enthusiasts there with specific experience.
If you don't want to run locally, you could make Apache on your server host a copy of the site for every developer, on a different port per person, and then mount those web roots on the local Macs and make that the working directory. If you're a small shop, that's not hard to manage. I find it pretty easy to set up, and it saves a lot of resources on the local machines. The one-site-per-person approach helps to avoid conflicts when multiple people are working on files at the same time.
What I'd additionally recommend is to have a script that gets the latest changes from SVN and deploys the entire site to the production server when you're ready. You could have that script change permissions on appropriate files/folders as needed to be owned by Apache. The idea once you're using source control is to never manually edit the production files -- you should have something that deploys it from SVN for you.
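A bare-bones version of such a deploy script might look like this (the repository URL, web root and writable uploads directory are placeholders to adapt):

    # Hypothetical deploy-from-SVN routine
    svn export --force https://svn.example.com/site/trunk /tmp/site-release
    rsync -a --delete /tmp/site-release/ /var/www/site/
    chown -R apache:apache /var/www/site/uploads   # only the dirs Apache must write to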
A few notes:
Take a look at MacFUSE / Macfusion (the latter is the application, the former is the library behind it) to mount remote directories via SSH / FTP as local ones.
Allow your developers to check out into their local environment (with their own LAMP stack if they're savvy), or look into a shared dev environment with individual jails. This way your developers can run their own LAMP stack (which you could deploy for them on the machine) without interfering with others.
The idea being, let them use a workflow that works best for them, to minimize the pain in adapting to this change (if change management might be an issue!)
Just as an example, we have a shared dev server where jails are created with a single command for new developers. They have a full LAMP stack ready to go, and we can upgrade and re-deploy jails easily to keep software up to date. Developers have individual control to add custom settings / extensions if they need them for work, while the sysadmins have the ability to reset everything when someone accidentally breaks their environment :)
Those who prefer not to use jails, and are able to, manage their own local environments (typically through MacPorts or MAMP).
