I'm trying to figure out a way to deploy my company's intranet PHP apps automatically using GitLab, but so far I'm not sure whether the options I've found online will do the job.
Here's the scenario:
VM1: Remote Git server, which uses GitLab to administer the project repos.
VM2: Development server. Each project has its own development server.
VM3: Production server. Each project has its own as well.
Developers: Each developer uses a vagrant box, based on the project's development server.
What I want to do:
Whenever a developer pushes commits to the development branch, a hook on the remote server must update the development server's tree to the latest commit on that branch.
The same must occur when the master branch is updated, but it must update the production server tree.
However, since we use Laravel, we must run some extra console commands using Artisan, depending on each case.
We are following Vincent Driessen's Git Branching Model.
What I know so far:
I know that GitLab supports web hooks, but I'm not sure that will work here, since the hook has to hit a custom URL, which doesn't sound very safe; still, if it's the only solution, I can write a script to handle just that.
The other possible solution is Jenkins, but I'm no expert with it, and I don't know yet whether Jenkins is overkill for my case, since I'm not using unit tests yet.
Have you implemented a solution that could help in my case? Can anyone suggest an easy and elegant way to do this?
Thanks guys! Have a nice one
We do it the following way:
Developers check out whatever Git branches, and as many branches as they want/need, locally (Debian in VMware on Mac).
All branches get pulled to the dev server whenever a change is pushed to Git. They are available at feature-x.dev.domain.com, feature-y.dev.domain.com, etc., running against the dev database.
Release branches are manually checked out on the live system for testing and made available at release-x.test.domain.com etc., against the live database (if possible, depending on migrations).
We've semi-automated this with our own scripts (a sketch follows below).
Database changes are made manually, due to the sensitive nature of their content. However, we don't find that a hassle after getting used to migrations - we just remember to note the alterations. It also helps to clone databases locally for each branch that needs changes; an automated schema comparison then quickly catches it if changes to a migration file have been forgotten.
(The second point is the most productive one, making instant testing on the dev platform available to everyone as soon as the first commit of a new branch is pushed.)
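A minimal sketch of what that semi-automation could look like, as a post-receive hook in a bare repository on the dev server (the paths, the wildcard vhost idea, and the branch handling are illustrative assumptions, not our actual scripts):

#!/bin/bash
# post-receive hook: check out every pushed branch into its own docroot,
# served as <branch>.dev.domain.com through a wildcard vhost.
while read oldrev newrev refname; do
    branch=${refname#refs/heads/}
    dir="/var/www/dev/$branch"
    mkdir -p "$dir"
    # Populate the branch's docroot straight from the bare repo:
    git --work-tree="$dir" checkout -f "$branch"
done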
I would suggest keeping things simple and working with Git, hooks, and remote repositories.
Pulling out heavy guns like Jenkins or GitLab for this task could be a bit too much.
I see your request as the following: "git after push and/or git after merge: push to remote repo".
You could set up "bare" remote repositories - one for "dev-stage", one for "production-stage".
Their only purpose is to receive pushes.
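For example (the host names and paths are made up for illustration):

ssh dev-host 'git init --bare /srv/git/project-dev.git'
ssh prod-host 'git init --bare /srv/git/project-prod.git'
# Register them as remotes in the main repository:
git remote add dev-stage dev-host:/srv/git/project-dev.git
git remote add production-stage prod-host:/srv/git/project-prod.git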
Each developer works on his feature-branch, based on the development branch.
When the feature branch is ready, it is merged back into the main development branch.
Both events trigger a post-merge or post-receive hook, which executes a script.
The executed script can do whatever you want.
(Same approach for production: when the dev branch has enough new features, it is merged into the prod branch, which triggers the merge event and the scripts...)
Here you want two things:
You want to push a specific branch to a specific remote repo.
In order to do this, you have to find out the specific branch in your hook script.
That's tricky, but solvable; see https://stackoverflow.com/a/13057643/1163786 (writing a git post-receive hook to deal with a specific branch).
You want to execute additional steps for configuration/setup, like artisan, etc.
You might add these steps directly to the hook script, or have the hook trigger them; a combined sketch follows.
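A rough sketch of such a post-receive hook for the "dev-stage" repository - the branch name, paths, and the particular artisan commands are assumptions you would adapt:

#!/bin/bash
# post-receive hook in the bare dev-stage repository.
TARGET="/var/www/dev"   # working tree the dev server serves
while read oldrev newrev refname; do
    branch=${refname#refs/heads/}
    if [ "$branch" = "develop" ]; then
        # Update the dev tree to the pushed commit:
        git --work-tree="$TARGET" checkout -f develop
        # Extra Laravel console commands (a per-project choice):
        cd "$TARGET" || exit 1
        php artisan migrate --force
        php artisan cache:clear
    fi
done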
I think this request is related to internal and external deployment via git.
You might also search for tutorials, like "deployment with git", which might be helpful.
For example: http://ryanflorence.com/deploying-websites-with-a-tiny-git-hook/
http://git-scm.com/book/en/Git-Basics-Working-with-Remotes
http://githooks.com/ & https://www.kernel.org/pub/software/scm/git/docs/githooks.html
If you prefer to keep things straightforward and don't mind using paid third-party options, check out one of these:
http://deploybot.com/
https://www.deployhq.com/
https://envoyer.io/
Alternatively, if you fancy shifting to an integrated solution, I've not used anything better than Beanstalk.
We work in teams of developers on different projects, using the Git workflow below:
1. Receive ticket
2. git pull/checkout
3. Create feature/bugfix branch
4. Do changes
5. Commit to branch
6. Merge to test branch, git pull to test environment
7. Test on same environment as live
8. If test is successful, merge to proof, git pull to proof
9. Client sign off
10. Merge to live
11. git pull on live
However I can't decide on the best way for developers to pull their changes to live servers.
Currently, developers SSH to the live server (with individual user accounts) and perform a git pull - however, they need read/write access to the codebase on the server.
I dislike giving everyone that access, but restricting it would mean only one person (or a sysadmin) performs every deployment.
An alternative is to create a web-accessible git pull script, so when a developer wants to do a pull, the script literally executes git pull on the server and outputs the result to the browser.
The best alternative, in my opinion, is for a hook to be triggered when a repo is pushed to. We use GitLab, so I think this implementation would be relatively trivial: the script on the web server receives a POST payload containing information about the repository, so it could be scripted to only trigger a pull if the branch the hook fires for has been updated (if that makes sense). An email could also be sent to the user who did the push, with the output of the git pull, to confirm that everything went according to plan.
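As a sketch of that idea, a minimal CGI handler for GitLab's push webhook might look like this (the branch name, paths, and the use of jq for JSON parsing are my assumptions, not a tested implementation):

#!/bin/bash
# CGI script receiving GitLab's push webhook (JSON payload on stdin).
read -r -n "${CONTENT_LENGTH:-0}" payload
ref=$(printf '%s' "$payload" | jq -r '.ref')   # e.g. "refs/heads/live"
echo "Content-Type: text/plain"
echo
if [ "$ref" = "refs/heads/live" ]; then
    cd /var/www/site && git pull origin live
    echo "pulled $ref"
else
    echo "ignored push to $ref"
fi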
However, I am uncomfortable that developers could accidentally push to the wrong branch and see their commits go live prematurely - ideally there should be some kind of GitHub-style merge request feature.
Does anyone have any other recommendations or suggestions?
At work we use Capistrano + Webistrano. Although it's Ruby-based, and many features are Ruby-specific, it works perfectly for what we need.
Here is our workflow:
1-5: Same as yours
6: Go to dev webistrano -> select branch -> deploy
7: Client sign-off, etc
8: Go to live webistrano -> select branch -> deploy
It also supports deploy scripts, and a bunch of other stuff. With the deploy script, we use a shared folder and a current folder. The deploy script creates symlinks to the shared folder, which contains libraries and other things that are not in the Git repository.
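For reference, this is roughly what the shared/current layout and the symlinking look like (the paths are illustrative, not taken from our setup):

# On-disk layout:
#   /var/www/app/releases/<timestamp>/   - each deploy in its own folder
#   /var/www/app/shared/                 - libraries etc. kept between deploys
#   /var/www/app/current -> releases/... - symlink the web server points at
RELEASE="/var/www/app/releases/$(date +%Y%m%d%H%M%S)"
mkdir -p "$RELEASE"
tar -xzf build.tar.gz -C "$RELEASE"
ln -s /var/www/app/shared/vendor "$RELEASE/vendor"   # link in shared libraries
ln -sfn "$RELEASE" /var/www/app/current              # switch the site over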
Here is a sample deploy script for Magento.
Jenkins is another option for continuous integration. It supports some extra things compared to Capistrano (like automated test execution), so it may be worth checking out.
We like to use Gerrit for code review. A Gerrit instance sits on top of the central Git repository and lets the project leader(s) review stuff. Once a commit is reviewed and accepted, Jenkins CI kicks in, does automatic code style checks, unit tests, generates API documentation automatically etc and finally deploys (if all tests passed) using Apache Ant scripts. This is a fairly complex setup, but we've come to love it.
These great tutorials go into detail about the setup.
Aside from the issues you've identified with the process, I'd also be concerned about parallel changes colliding and creating ineffective testing in your "Proof" environment.
Say Bob pulls change #1 to Proof, then Dave pulls change #2 to Proof, and then the client tests change #1 (against a codebase that contains both #1 and #2). When you go to pull change #1 to Live, you're effectively pulling untested code: change #1 on its own, without #2, was never what the client saw.
I'd recommend thinking in terms of immutable builds and build artifacts. A colleague of mine wrote a good article on this topic. The main change to your process would be:
1-5: (Same)
6: Retrieve from Test branch; capture build artifact; deploy build artifact to Test
7: (Same)
8: Promote build to Proof; deploy build artifact
9: (Same)
10: Promote build to Live; deploy build artifact
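A minimal sketch of capturing such a build artifact with git archive (the branch and file names are placeholders):

# Capture one immutable artifact from the Test branch:
git archive --format=tar.gz -o "build-$(git rev-parse --short test).tar.gz" test
# Deploy that exact file to Test, then promote the *same file* to Proof and Live.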
You should also consider using a deployment/delivery automation tool like Inedo's BuildMaster. The free version should be more than what you need, and it will help you go from "log in and run a script" to "click a button after the appropriate approvals".
Disclaimer: I work for Inedo.
So - let's say I develop a PHP app in a Vagrant box identical to the production environment, and the end result is a *.tar.gz file with the code...
How would one organize deployment into a production environment where there are a lot of application servers? I mean - I'm confused about how to push code to all the production boxes synchronously, all at once.
More information:
On the server the code is stored like this:
project/
    current_revision -> revisions/v[n]   (symlink)
    revisions/
        v1/
        v2/
        v3/
        ...
    data/
So when I have to deploy changes, I usually run a deploy script that uploads the updated tar onto the server with SSH, untars it into a specific dir under revisions, symlinks that dir to current_revision, and restarts php-fpm... This way I can roll back anytime just by symlinking to an older revision.
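A simplified sketch of that per-server script (the host name, paths, and restart command are examples):

#!/bin/bash
set -e
HOST="app1.example.com"
REV="v$(date +%Y%m%d%H%M%S)"
scp code.tar.gz "$HOST:/tmp/code.tar.gz"
ssh "$HOST" "mkdir -p /project/revisions/$REV \
    && tar -xzf /tmp/code.tar.gz -C /project/revisions/$REV \
    && ln -sfn /project/revisions/$REV /project/current_revision \
    && sudo service php-fpm restart"
# Rolling back is just re-pointing the symlink at an older revision.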
With multiple servers, what bothers me is that not all boxes will be updated at once, i.e. technically some glitches might be possible.
If you're looking for a "ready-to-go" answer, you'll need to provide some more info about your setup. For example, if you plan to use Git for VCS, you could write a simple shell script that pulls the latest commit and rsyncs with the server(s) - a sketch follows below. Or if you're building on top of Symfony, capifony is a great tool. If you're using AWS, there's a provider plugin written by the author of Vagrant that's super easy to use, and you can specify a regex for which machines to bring up or provision.
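A minimal sketch of such a script, assuming SSH access and made-up host names:

#!/bin/bash
set -e
git pull origin master   # grab the latest commit
for host in web1.example.com web2.example.com; do
    rsync -az --delete --exclude='.git' ./ "deploy@$host:/var/www/app/"
done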
If instead you're looking for more of a "roadmap", then the considerations that you'll want to take are:
Make building of identical boxes in the remote and local environments as easy as possible, and try to make sure that your provisioning emphasizes idempotence.
Consider your versioning/release structure; what resources will rarely or never change? Include those in a setup function instead of a deploy function, and don't include them in your sync run.
Separate your development and system administration concerns; i.e. do not just package a vagrant box with a *.tar.gz and tie it through config.vm.box_url. The reason for this is that you'd have to repackage every production server with a new box every time you deploy, instead of just changing files on the server, or adding/removing some packages from the server.
Check out some config management tools like Chef and Puppet; even if you don't end up using them, they'll give you an idea of how sysadmin professionals approach this problem.
There are lots of ways. If you're starting from bare bones (no cloud infrastructure), I'm a fan of the SVN branch hook. Have an SVN repo for your code and set up a post-commit hook on it which checks whether anything in /branch/production/ has been changed.
If it has, let the post-commit hook fire your whole automated roll-out procedure - and in this case, an easy way to do that is to have every server known to it* run an svn export of the branch (see the sketch below). As simple as that!
(* that's the hard step)
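Sketched as an SVN post-commit hook (the host list, repository URL, and target path are placeholders):

#!/bin/bash
# post-commit hook: Subversion passes the repository path and revision.
REPOS="$1"
REV="$2"
# Did this commit touch the production branch?
if svnlook changed -r "$REV" "$REPOS" | grep -q 'branch/production/'; then
    for host in web1 web2; do   # the servers known to us
        ssh "$host" "svn export --force \
            http://svn.example.com/repo/branch/production /var/www/app"
    done
fi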
I am developing a PHP project using GitHub.
My editor of choice is Coda 2, which has a function for saving simultaneously to the local computer and to the FTP server.
Now I also need to commit the changes to Git, and therefore to GitHub as well, so every time I save (both locally and remotely) I commit to GitHub.
The problem arises now:
What if I need to revert changes? Those would only be reflected on GitHub and would probably lead to a mess. What I currently do to "revert" is manually rewrite the pieces of code that need to be restored.
You are using Git wrong. There is no reason to commit on every save. It's also fine to keep your local Git repo for development and only sync with GitHub from time to time.
So all you need to do is to change your workflow. Remove the commit on save.
Don't upload manually using FTP; instead, clone the repository on the server and pull there.
Of course this is only possible if you have shell access to the server. If you want to do serious development you should have a server with shell access.
You misunderstand the purpose of Git. It is not a backup tool, but a collaboration tool and a version control system. It was never meant to back up your code. That is of course possible, but the nature of Git (and any other VCS) is to provide a readable and trackable flow of the project's development.
You basically never commit code without explaining the changes you've made to the file(s). VCSs encourage you to describe each commit and its purpose, so that the project's team can look back and see why a certain change was made.
However, to still use Git's capabilities, I would suggest making a backup branch and committing files there, so that you know that specific branch is nothing more than a chaotic code flow. That at least will keep things clear.
Then, when a certain feature is ready and tested, you can squash-merge the backup branch into the dev branch (see the sketch below).
This way you get an organized repo structure, and you'll be able to revert to, or pull from the dev branch, exactly the state of the code that you need.
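A quick sketch of that squash-merge, using the branch names described above:

git checkout dev
git merge --squash backup
git commit -m "Feature X, squashed from the backup branch"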
EDIT:
I also suggest having a look at the "A successful Git branching model" blog post. In its time it made me understand how a Git project evolves and develops.
I apologize if this is obvious or easy; I have looked at a good number of Git/GitHub tutorials and read other articles, but I want to make sure what I'm doing is right.
I want to incorporate VC (for obvious reasons) into my development team and process.
Current development process (using Dreamweaver):
* Receive a ticket (or work order)
* Download file on Development server
* Make changes to the file
* Upload file back to development server
* Changes tested/verified
* Send to production server
I'm trying to figure out how to shape our new development process using Git.
I am switching over to PHPStorm (which is an actual PHP IDE with direct integration with Git).
Would it be something like
Receive a ticket (or work order)
Checkout/Update/Download file(s)
Change Files
Upload file (which I assume is also the current working directory...?)
At the end of the day, do a commit
Have build script send data to testing server (nightly build)
Or would it be better to do something like
Receive a ticket (or work order)
Checkout/Update/Download file(s)
Change Files
Upload file/commit
Have build script send data to testing server (nightly build)
Or is there another way? I'm having a bit of trouble understanding what the optimal flow would be.
Any help would be greatly appreciated.
Edit
I'm trying to see whether it is best for every developer to have a local version of the server, and if so, how that works when you have 7 or so branches.
If not, how do you deal with 7 or so branches on the web? Do you FTP files up, or use Git hooks to make them auto-update?
Update 07/26/2012
After working successfully with Git for quite a while now I've been following this branching model with great success:
A Successful Git Branching Model
The answer to the above was yes - you should definitely have a local version of the server.
Assuming you have a live server and a development server I would do something along these lines.
Before even starting with a development cycle I would at least have two branches:
Master - the development server runs on this branch
Stable - the live server runs on this branch.
So if a developer gets a ticket or a work order he/she will perform the following actions:
git pull origin master
git branch featureBranch (named as the ticket id or as a good description for the work order)
git checkout featureBranch
Make changes which will accomplish the desired change. Commit as often as is necessary. Do this because you will create valuable history. For instance you can try an approach to a problem and if it doesn't work, abandon it. If a day later you see the light and want to re-apply the solution, it is in your history!
When the feature is fully developed and tested locally, checkout master.
git merge featureBranch
git push origin master
Test the pushed changes on your development server. This is the moment to run every test you can think of.
If all is working out, merge the feature or fix into the stable branch; the commands mirror the master steps above and are sketched below. Now the change is live for your customers.
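Sketched as commands:

git checkout stable
git merge master
git push origin stable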
Getting the code on the server
Updating the servers shouldn't be a problem. Basically, I would set them up as users, just like your developers are. At my company we've set up the servers as read-only users: the servers can never push anything, but they can always pull. Setting this up isn't trivial though, so you could just as well build a simple web interface which only allows a git pull. If you can keep your developers from doing stuff on live implementations, you're safe :)
[EDIT]
In response to the last questions asked in the comments on this answer:
I don't know if I understand your question correctly, but basically (simplified a bit) this is how I would do it, were I in your shoes.
The testing machine (or the webroot which acts as the testing implementation) has its source code in a Git repository with the master branch checked out. While creating this repository you could even remove all references to the other branches, so you can be sure no one can check out a wrong branch in this repository. So basically the testing machine has a Git repository with only a master branch, which is checked out.
For the live servers I would do exactly the same, but this time with the stable branch checked out. Developers should have a local clone in which all branches exist, plus a local implementation of the software you're building. This software gets its source from the local Git repository - in other words, from the currently checked-out branch in that repository.
Actual coding
When a new feature is wanted, a local feature branch can be made based on the current master. When that branch is checked out, the changes can be made and checked locally by the developer (since the software is now running on the source of the feature branch).
If everything seems to be in order, the changes get merged from the feature branch to master and pushed to your "Git machine" - "your GitHub", so to speak. Testing can now pull the changes in, so QA can run every test necessary. If they decide everything is OK, the developer can merge the changes from master to stable and push again.
All that's left now is running a pull on your live machines.
I am wondering: what is your procedure/method for web development using Git?
1. When you finish coding, do you just overwrite the files on the live server via FTP?
2. How does Git handle multiple versions of the same project, like v1, v1.5, etc.?
3. Let's say 2 people are working on the project locally at work (same office); how do you work together? Do I have to keep asking them to give me their source when it's ready (saved on a USB stick?) for merging?
4. Can two people work on the same project on the same server? Wouldn't this be easier than question 3?
The idea behind git is that it actually takes care of all that for you.
1. When you write code, you commit it, and you can push it out to the server. Git tracks the changes, so it's easy to roll back to a previous version.
2. It tracks the versions of files as they change, so you can easily undo any changes that were made in the past; see tags for more details (a quick sketch of tagging follows after this list).
3. No. You can push your changes to the server, and the other person can pull them. Some merging will have to occur, but it's quite easy with Git; there's no need to transfer files from one dev to another. Branching and merging is discussed here.
4. Yes. That's the idea.
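For point 2, tagging versions looks like this (the version names are examples):

git tag -a v1.0 -m "version 1.0"
git push origin --tags   # publish tags to the shared repo
git checkout v1.0        # inspect an old version later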
To better understand the concepts behind a distributed version control system you can read this tutorial by Joel Spolsky. It is about Mercurial, but you will find the concepts very similar and this is probably the best tutorial written about this subject on the web.
This is how I would do it.
Each developer has their own Git repository to develop their code in. You, as the merger, hold a third repository, and this repository has separate branches for each developer, for your test system, and for your production site.
Your developers can push their changes to you, or you can pull their changes from them into the branches kept specifically for them. You hold a branch that you control, which contains the merged code in a state ready for testing. You either use git cherry-pick, or maybe just git merge, to pull their changes into your testing branch, where you test things (and possibly make your own changes - or fire bug reports off to the developers and re-incorporate their fixes). When you are happy, you merge off to a "production" branch. This is normally derived initially from the test branch, but with the changes necessary for the live system (I always find there is something, even if it's just the database name and password).
I normally use a git hook with some code which checks which branch I am on and then uses rsync over ssh to push the code to my production site.
#!/bin/bash
# Determine the currently checked-out branch:
branch=$(git branch | sed -n 's/^\* //p')
# Build a version string from the most recent tag:
version=$(git describe --tags)
# Move to the top of the working tree:
cd "./$(git rev-parse --show-cdup)"
if [ "$branch" == "production" ]; then
    echo "<?php echo '$version'; ?>" > web/version.inc
    rsync -axq --delete web/ site:public_html/
fi
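You could save this, for example, as .git/hooks/post-commit (made executable) in the repository you merge into; Git then runs it after every commit, and the rsync only fires when the production branch is checked out.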
google "git flow", it shows you a way of managing work and releasing when you want.
For deploying via a branch, see:
Deploy a project using Git push