we have a scenario as follows:
Multiple items of work are ready to test. We would like to place each 'branch' that is ready for testing into its own container, let each team responsible test its work in isolation, and then merge the work back in.
I think my first question is, is this possible? And if so, can anybody point me in the right direction?
The basic stack of the system is LAMP.
Any help would be much appreciated
It sure is possible.
I suppose that your different 'items' are features of your app, so you will not have to modify your Dockerfile, as all your features should run in the same environment.
Depending on your infrastructure, and if you are familiar with or have some CI/CD tools, you could automate the deployment per branch and so have every feature version of the app running in its own container.
If you are not, you will have to run the applications locally.
So create a single Dockerfile and put it in all your branches, then ask the devs to build and run the image on their branch and verify that everything is fine (you could automate some tests to avoid doing this manually) before submitting a pull/merge request.
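As a minimal sketch of that build-and-run step (the image name, port handling and branch-name sanitizing are assumptions, not a fixed convention):

# Build and run a disposable test container for the current branch.
# 'myapp' is an illustrative image name; the Dockerfile sits at the repo root.
branch=$(git rev-parse --abbrev-ref HEAD | tr '/' '-')
docker build -t "myapp:$branch" .
docker run -d --name "myapp-$branch" -p 80 "myapp:$branch"
docker port "myapp-$branch" 80   # shows which host port was mapped

Each branch then gets its own container on its own host port, so teams can test in isolation before merging.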
Related
I have seen Jenkins being used as CI for Docker containers. Is Dokku also a CI platform like Jenkins?
If so, what is the difference when I want to do CI with Docker containers for a PHP application?
Are you maybe confusing Drone with Dokku? Dokku is a platform for running Heroku-style apps; Drone is a Docker-based CI. I don't know much about Drone, but since Docker can't be run inside a Docker container without some hacking, you are better off sticking to a traditional CI like Jenkins, Bamboo, TeamCity or such.
Continuing from Usman Ismail's answer...
If you look at dokku-alt, the distinction is less clear. In particular, dokku-alt allows you to use a Dockerfile for the build rather than buildstep, so it's not specific to Heroku-like apps.
Dokku-alt is not in itself a CI system, but out of the box it does verify that the build completes without error before it's deployed, and using git hooks you could wire your test suite in to run on every git push and block deployment when it fails.
CI typically is a bit more than this. You'd normally have multiple deployments for test, staging and live, and to some extent it also encompasses a set of practices. Dokku-alt gives you some very useful parts of CI, and a fairly clear path to building more of it fairly easily, but it's not a complete CI system in itself.
You might well prefer to keep your main git repository elsewhere, and keep Jenkins in the picture for automating the transfer to dokku-alt.
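As a rough illustration of that git-hook idea (this is not dokku-alt's built-in mechanism; the test command is a placeholder), an update hook on the receiving repository could reject any push whose tests fail:

#!/bin/bash
# update hook: refuses a push when the test suite fails for the pushed revision.
# Arguments supplied by git: $1 = ref name, $2 = old sha, $3 = new sha
refname=$1
newrev=$3
tmpdir=$(mktemp -d)
# Export the pushed revision into a scratch directory and run the tests there.
git archive "$newrev" | tar -x -C "$tmpdir"
(cd "$tmpdir" && ./run-tests.sh)   # run-tests.sh stands in for your real suite
status=$?
rm -rf "$tmpdir"
exit $status   # any non-zero exit blocks the update of $refname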
I'm trying to figure out a way to deploy my company intranet PHP apps automatically, using GitLab, but so far, I'm not sure if the options that I've found on the internet will do the job.
Here's the scenario:
VM1: Remote Git server, that uses GitLab to administrate projects repos.
VM2: Development server. Each project has its own development server.
VM3: Production server. Each project has its own, as well.
Developers: Each developer uses a vagrant box, based on the project's development server.
What I want to do:
Whenever a developer pushes their commits to the development branch, a hook on the remote server must update the development server's tree with the latest commit on that branch.
The same must occur when the master branch is updated, but it must update the production server tree.
However, since we use Laravel, we must run some extra console commands using Artisan, depending on each case.
We are following Vincent Driessen's Git Branching Model.
What I know so far:
I know that GitLab supports web hooks, but I'm not sure that will work in this case, since it means exposing a custom URL, which doesn't sound very safe; but if it's the only solution, I can write a script to handle just that.
The other possible solution is to use Jenkins, but I'm not an expert and I don't know yet whether Jenkins is too much for my case, since I'm not using unit tests yet.
Have you implemented a solution that could help in my case? Can anyone suggest an easy and elegant way to do that?
Thanks guys! Have a nice one
We do it the following way:
Developers check out whatever Git branches, and as many branches as they want/need, locally (Debian in VMware on a Mac)
All branches get pulled to dev server whenever a change is pushed to Git. They are available in feature-x.dev.domain.com, feature-y.dev.domain.com etc., running against dev database.
Release branches are manually checked out on live system for testing, made available on release-x.test.domain.com etc. against the live database (if possible, depending on migrations).
We've semi-automated this using own scripts.
Database changes are made manually, due to their sensitive nature. However, we don't find that a hassle after getting used to migrations - we just remember to note the alterations. We find good support in cloning databases locally for each branch that needs changes. An automated schema comparison then quickly helps if changes to a migration file have been forgotten.
(The second point is the most productive one, making instant test on the dev platform available to everyone as soon as the first commit of a new branch is pushed)
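A minimal sketch of how such per-branch dev deployments can be scripted, as a post-receive hook on the dev server's bare repository (the directory layout is an assumption; a wildcard *.dev.domain.com vhost is presumed to point at /var/www/branches/<branch>):

#!/bin/bash
# post-receive hook: check out every pushed branch into its own web root,
# making it reachable as <branch>.dev.domain.com via a wildcard vhost.
while read oldrev newrev ref; do
    branch=${ref#refs/heads/}
    target="/var/www/branches/$branch"   # assumed wildcard-vhost document root
    mkdir -p "$target"
    git --work-tree="$target" checkout -f "$branch"
done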
I would suggest keeping things simple and working with git, hooks and remote repositories.
Pulling out heavy guns like Jenkins or GitLab for this task could be a bit too much.
I see your request as the following: "git after push and/or git after merge: push to remote repo".
You could setup "bare" remote repositories - one for "dev-stage", one for "production-stage".
Their only purpose is to receive pushes.
Each developer works on his feature-branch, based on the development branch.
When the feature-branch is ready, it is merged back to the main development branch.
Both trigger a "post-merge" or "post-receive" hook, which executes a script.
The executed script can do whatever you want.
(Same approach for production: When the dev-branch has enough new features, it is merged to prod branch - triggers merge event - scripts...)
Here you want two things:
You want to push a specific branch to a specific remote repo.
In order to do this, you have to find out the specific branch in your hook script.
That's tricky, but solvable; see https://stackoverflow.com/a/13057643/1163786 (writing a "git post-receive hook" to deal with a specific branch).
You want to execute additional steps for configuration/setup, like artisan, etc.
You might add these steps directly or as triggers to the hook script.
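Putting both parts together, a post-receive hook on the bare repository might look roughly like this (the paths, branch names and the Artisan step are assumptions to adapt):

#!/bin/bash
# post-receive hook on the bare repo: deploy dev and prod branches to their
# respective trees, then run the Laravel console commands mentioned above.
while read oldrev newrev ref; do
    branch=${ref#refs/heads/}
    case "$branch" in
        development) target=/var/www/dev  ;;   # assumed dev web root
        master)      target=/var/www/live ;;   # assumed production web root
        *)           continue ;;               # ignore feature branches
    esac
    git --work-tree="$target" checkout -f "$branch"
    (cd "$target" && php artisan migrate --force)   # example Artisan step
done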
I think this request is related to internal and external deployment via git.
You might also search for tutorials, like "deployment with git", which might be helpful.
For example: http://ryanflorence.com/deploying-websites-with-a-tiny-git-hook/
http://git-scm.com/book/en/Git-Basics-Working-with-Remotes
http://githooks.com/ & https://www.kernel.org/pub/software/scm/git/docs/githooks.html
If you prefer to keep things straightforward and don't mind using paid third-party options, check out one of these:
http://deploybot.com/
https://www.deployhq.com/
https://envoyer.io/
Alternatively, if you fancy shifting to an integrated solution, I've not used anything better than Beanstalk.
So - let's say I develop a PHP app in a Vagrant box identical to the production environment. As an end result I would have a *.tar.gz file with the code...
How would one organize deployment into a production environment where there are a lot of application servers? I mean - I'm confused about how to push code to all production servers synchronously, all at once.
More information:
on server code is stored like this:
project
+current_revision ->link to revisions/v[n]
+revisions
+v1
+v2
+v3
...
+data
So when I have to deploy changes I usually run a deploy script that uploads the updated tar onto the server over ssh, untars it into a specific dir under revisions, symlinks that to current_revision and restarts php-fpm. This way I can roll back anytime just by symlinking to an older revision.
With multiple servers, what bothers me is that not all boxes will be updated at once, i.e. technically some glitches might be possible.
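One common way to shrink that window of inconsistency is a two-phase rollout: stage the new revision on every server first, then flip the current_revision symlink everywhere in a quick second pass. A rough sketch (host list, paths and the php-fpm restart command are assumptions):

#!/bin/bash
# Two-phase deploy: stage the new revision on all servers, then switch fast.
REV=v42                          # new revision id (illustrative)
SERVERS="app1 app2 app3"         # assumed host names

for host in $SERVERS; do         # phase 1: the slow part, per server
    scp "app-$REV.tar.gz" "$host:/tmp/"
    ssh "$host" "mkdir -p /srv/project/revisions/$REV && tar -xzf /tmp/app-$REV.tar.gz -C /srv/project/revisions/$REV"
done

for host in $SERVERS; do         # phase 2: near-instant symlink switch
    ssh "$host" "ln -sfn /srv/project/revisions/$REV /srv/project/current_revision && sudo service php-fpm restart"
done

The servers still won't switch at the exact same instant, but the skew drops to seconds, and rollback is again just re-pointing the symlinks.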
If you're looking for a "ready-to-go" answer, you'll need to provide some more info about your setup. For example, if you plan to use git for VCS, you could write a simple shell script that pulls the latest commit and rsyncs with the server(s). Or if you're building on top of Symfony, capifony is a great tool. If you're using AWS, there's a provider plugin written by the author of Vagrant that's super easy to use, and you can specify a regex for which machines to bring up or provision.
If instead you're looking for more of a "roadmap", then the considerations that you'll want to take are:
Make building of identical boxes in the remote and local environments as easy as possible, and try to make sure that your provisioning emphasizes idempotence.
Consider your versioning/release structure; what resources will rarely or never change? Include those in a setup function instead of a deploy function, and don't include them in your sync run.
Separate your development and system administration concerns; i.e. do not just package a vagrant box with a *.tar.gz and tie it through config.vm.box_url. The reason for this is that you'd have to repackage every production server with a new box every time you deploy, instead of just changing files on the server, or adding/removing some packages from the server.
Check out some config management tools like Chef and Puppet; even if you don't end up using them, they'll give you an idea of how sysadmin professionals approach this problem.
Lots of ways. If starting from bare bones (no cloud infrastructure), I'm a fan of the SVN branch hook. Have an SVN repo for your code. Set up a post-commit hook on it, which checks whether anything in /branch/production/ has been changed.
If it has, let the post-commit hook fire your whole automated roll-out procedure - and in this case, an easy way to do so is to have all the servers known* to the hook svn export the branch. As simple as that!
(* that's the hard step)
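A bare-bones version of that post-commit hook might look like this (the repo URL, branch path and host names are assumptions):

#!/bin/bash
# SVN post-commit hook: if the production branch changed, re-export it on each server.
REPOS="$1"   # arguments supplied by SVN
REV="$2"
if svnlook changed -r "$REV" "$REPOS" | grep -q 'branch/production/'; then
    for host in app1 app2 app3; do   # the servers "known" to the hook
        ssh "$host" "svn export --force https://svn.example.com/repo/branch/production /var/www/site"
    done
fi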
I apologize if this is obvious or easy, I have looked at a good number of git/github tutorials and read other articles, but I want to make sure what I'm doing is right.
I want to incorporate VC (for obvious reasons) into my development team and process.
Current development process (using Dreamweaver):
* Receive a ticket (or work order)
* Download file on Development server
* Make changes to the file
* Upload file back to development server
* Changes tested/verified
* Send to production server
I'm trying to figure out how to make our new development process with using Git.
I am switching over to PHPStorm (which is an actual PHP IDE with direct integration with Git).
Would it be something like
Receive a ticket (or work order)
Checkout/Update/Download file(s)
Change Files
Upload file (which I assume is also the current working directory...?)
At the end of the day, do a commit
Have build script send data to testing server (nightly build)
Or would it be better to do something like
Receive a ticket (or work order)
Checkout/Update/Download file(s)
Change Files
Upload file/commit
Have build script send data to testing server (nightly build)
Or is there another way? I'm having a bit of trouble understanding what the optimal flow would be.
Any help would be greatly appreciated.
Edit
I'm trying to see if it is best to have a version of the server locally (for every developer), and if so, how does that work if you have 7 or so branches?
If not, how do you deal with 7 or so branches on the web? Do you FTP files up, or use Git hooks to make them auto-update?
Update 07/26/2012
After working successfully with Git for quite a while now I've been following this branching model with great success:
A Successful Git Branching Model
The answer to the above was yes -- you should definitely have a local version of the server.
Assuming you have a live server and a development server I would do something along these lines.
Before even starting with a development cycle I would at least have two branches:
Master - the development server runs on this branch
Stable - the live server runs on this branch.
So if a developer gets a ticket or a work order he/she will perform the following actions:
git pull origin master
git branch featureBranch (named as the ticket id or as a good description for the work order)
git checkout featureBranch
Make changes which will accomplish the desired change. Commit as often as is necessary. Do this because you will create valuable history. For instance you can try an approach to a problem and if it doesn't work, abandon it. If a day later you see the light and want to re-apply the solution, it is in your history!
When the feature is fully developed and tested locally, checkout master.
git merge featureBranch
git push origin master
Test the pushed changes on your development server. This is the moment to run every test you can think of.
If all is working out, merge the feature or fix into the stable branch. Now the change is live for your customers.
Getting the code on the server
Updating the servers shouldn't be a problem. Basically I would set them up as users just like your developers are. At my company we've set up the servers as read-only users: the servers can never push anything, but can always pull. Setting this up isn't trivial though, so you could just as well build a simple web interface which only allows a git pull. If you can keep your developers from doing stuff on live implementations, you're safe :)
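The server-side update can then be as small as a script that only ever pulls (the path and branch name are illustrative):

#!/bin/bash
# Runs on the live server, e.g. triggered by the web interface mentioned above.
# The server's git user has read-only access, so pulling is all it can do.
cd /var/www/site || exit 1
git pull origin stable   # live servers track the stable branch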
[EDIT]
In response to the last questions asked in the comments of this reaction:
I don't know if I understand your question correctly, but basically (simplified a bit) this is how I would do it, were I in your shoes.
The testing machine (or the webroot which acts as the testing implementation) has its source code based in a git repository with the master branch checked out. While creating this repository you could even remove all references to the other branches, so you can be sure no one can check out a wrong branch in this repository. So basically the testing machine has a Git repository with only a master branch, which is checked out.
For the live servers I would do exactly the same, but this time with the stable branch checked out. Developers should have a local repository cloned in which all branches exist, and a local implementation of the software you build. This software gets its source from the local git repository - in other words, from the currently checked-out branch in that repository.
Actual coding
When a new feature is wanted, a local feature branch can be made based on the current master. When the branch is checked out the changes can be made and checked locally by the developer (since the software is now running on the source of the feature branch).
If everything seems to be in order, the changes get merged from the feature branch to master and pushed to your "git machine" - "your GitHub", so to speak. Testing can now pull the changes in, so every necessary test can be done by QA. If they decide everything is OK, the developer can merge the changes from master to stable and push again.
All that's left now is pulling from your live machines.
I am wondering what your procedure is for web development using Git.
When you finish coding, do you just overwrite the files on the FTP to the live server?
How does git handle multiple versions of the same project, like v1, v1.5, etc.?
Let's say two people are working on the project locally at work (same office); how do you work together? Do I have to keep asking them to hand me their source (saved on a USB stick?) for merging?
Can two people work on the same project on the same server? Wouldn't this be easier than question 3?
The idea behind git is that it actually takes care of all that for you.
When you write code, you commit it and you can push it out to the server. Git tracks the changes, so it's easy to roll back to a previous version.
It tracks the versions of files as they change, so you can easily undo any changes that were made in the past; see tags for more details.
No. You can push your changes to the server and the other person can pull them. Some merging will have to occur, but it's quite easy with git. No need to transfer files from one dev to another. Branching and merging are discussed here.
Yes. That's the idea.
To better understand the concepts behind a distributed version control system you can read this tutorial by Joel Spolsky. It is about Mercurial, but you will find the concepts very similar and this is probably the best tutorial written about this subject on the web.
This is how I would do it.
Each developer has their own git repository to develop their code in. You, as the merger, hold a third repository, and this repository has separate branches for each developer, for your test system and for your production site.
Your developers can push their changes to you, or you can pull their changes from them into branches made specifically for them. You hold a branch that you control which contains the merged code in a state ready for testing. You use git cherry-pick, or maybe just git merge, to pull their changes into your testing branch, where you test things (and possibly make your own changes - or fire bug reports off to the developers and re-incorporate their fixes). When you are happy, you merge off to a "production" branch. This is normally initially derived from the test branch, but with the changes necessary for the live system (I always find there is something, even if it's just the database name and password).
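In git commands, that merger workflow could look roughly like this (the remote and branch names are illustrative):

# Pull one developer's work into their dedicated branch, fold it into testing,
# and promote to production once it has been verified.
git fetch alice                 # 'alice' = remote pointing at that developer's repo
git checkout dev-alice
git merge alice/master          # or: git cherry-pick <sha> for individual fixes
git checkout testing
git merge dev-alice             # test here, file bug reports, repeat as needed
git checkout production
git merge testing               # plus any live-only config tweaks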
I normally use a git hook with some code which checks which branch I am on and then uses rsync over ssh to push the code to my production site.
#!/bin/bash
# Deploys to the live site only when the production branch is checked out.
branch=$(git branch | sed -n 's/^\* //p')   # name of the current branch
version=$(git describe --tags)              # human-readable version from tags
cd "./$(git rev-parse --show-cdup)"         # move to the repository root
if [ "$branch" == "production" ]; then
    echo "<?php echo '$version'; ?>" > web/version.inc
    rsync -axq --delete web/ site:public_html/
fi
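If you install it as, say, .git/hooks/post-commit (and make it executable), it runs after every commit but only deploys when the production branch is the one checked out.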
Google "git flow"; it shows you a way of managing work and releasing when you want.
For deploying via a branch, see:
Deploy a project using Git push