I am in charge of launching web projects, and it currently takes a little too long from client sign-off to final launch. Everything is on a server which I have root access to, but it runs Plesk so that the boss can set up VirtualHosts, which means there are many sites running on it.
Each project has its own git repository so currently I have the following setup.
On my staging server there is a clone of the repo and I have two bare repositories. One is on the forge (powered by Indefero) and the other is on the live server.
Each release of a project is tagged with today's date, e.g. git tag -a deployed-2011-04-20.
So on the staging server I execute something similar to git push --tags live master, which targets the bare repo on the live server.
Then over SSH on the live server I execute a short bash script which basically clones the repository from the live bare repo to the folder Apache will serve.
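To illustrate, here is a minimal sketch of what that script does (the paths here are hypothetical, not my actual ones):

#!/bin/sh
# sketch: clone the live bare repo into a per-release folder, then repoint Apache
TAG=$1                                       # e.g. deployed-2011-04-20
RELEASES=/var/www/releases                   # hypothetical releases folder
git clone /var/git/project.git "$RELEASES/$TAG"
(cd "$RELEASES/$TAG" && git checkout "$TAG")
ln -sfn "$RELEASES/$TAG" /var/www/htdocs     # htdocs is a symlink to the current release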
So, if that all makes sense, would you be able to recommend a tool or anything to make my life easier that follows that workflow, or can be adapted to it?
It looks something like this:
Forge (authoritative source)
^
|
v
Staging/development server
|
v
Live server bare repo
|
v
Releases folder (symlinked to htdocs)
One solution that comes to mind is to add some post-receive hook on the live server bare repo in order to detect any deployed-2011-xx-yy tag coming from the staging repo, and to trigger the ssh script from there.
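For instance, a minimal sketch of such a hook (the deploy script path is an assumption):

#!/bin/sh
# post-receive on the live bare repo: git feeds "oldrev newrev refname" lines on stdin
while read oldrev newrev refname; do
    case "$refname" in
        refs/tags/deployed-*)
            tag=${refname#refs/tags/}
            /usr/local/bin/deploy.sh "$tag"   # hypothetical path to the existing ssh script
            ;;
    esac
done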
The other solution is to have a scheduler (like Hudson, mentioned in pderaaij's answer) in order to:
monitor the staging repo and, on the right tag, trigger the push to the live server
monitor the live bare repo, and trigger the ssh script.
The second solution has the advantage of keeping a trace of all releases in a Hudson job report: each time the job detects the right tag, it executes the release process.
Take a look at Capistrano, which happily does the symlink dance you describe here.
If you use Hudson as a continuous integration server, you can make use of the build pipeline plugin.
You have your normal build process, but add an extra job which contains the commands to deploy your application. The plugin gives you a nice button to execute that build.
The Hudson job could execute all the needed commands, or you can take a peek at Maven for PHP and use the available plugins to invoke the remote scripts.
Perhaps it is a bit out of scope considering the path you've already chosen, but it's worth researching.
We have a couple of Drupal sites we develop for; we are a team of 4 developers and about 20+ non-technical content managers.
Each developer has his own dev environment, and we all share a beta environment where we test code integration and performance, a staging environment where the content managers test features before we push to the live environment, a training environment we use to train people, and an environment specific to usability testing.
All that is set up with only one bare repo on a central server, where each environment is a branch. We use the post-receive hook with password-less SSH keys, doing the auto-pulling on the appropriate repo based on a case statement like the following:
#!/bin/sh
# post-receive hook: git feeds one "oldrev newrev refname" line per updated ref
while read line; do
  BRANCH=`echo $line | sed 's/.*\///g'`
  LOG="`date` - updating $BRANCH instance"
  case $BRANCH in
    "beta" )
      ssh www-data@beta "cd /var/www/beta.example.com; git pull"
      ;;
  esac
done
I have decided that it's time for me to start using Git on a PHP project that I have been developing casually for over a decade. (Please, no lectures from the version control police!) Due to the complex setup required on my VPS to do everything the project needs (esp. single-codebase-multiple-client structure and a Japanese-capable installation of TeX to create specialty PDFs), it is not possible to set up a development environment on my local Windows box. But I do have a testbed area on the server that I can play in, so it's my development area. Currently I use Filezilla to access the server and open files directly into Notepad++, and when I'm ready to see my edit in action, I just save and let Filezilla upload. When everything looks good on the testbed, I copy the files to the production codebase area. Yeah, that gives me no history of my changes other than my own comments, and I have to be careful not to mix bug fixes with half-finished new features. I can see the value of Git's branches for different upgrades in progress.
Yesterday I got my toes wet. First I created a Github account, and then (at the recommendation of a tutorial) installed Git For Windows (with its own Bash and tiny-looking GUI) and Kdiff3, and followed some instructions for configuring Git Bash. After all that, though, I ended up having to install something else in order to interface with my Github account (appropriately named Github for Windows), which seems to do all the stuff the other two programs were supposed to do for me. Anyway, then I did a simple task as my first foray into the Github world - I had added functionality to someone else's jQuery plugin and wanted to share it with the developer, so I forked his repo, cloned it to my machine, overwrote the file I had previously edited and tested, synced to my Github account, and sent a pull request. All the terminology in that last sentence was brand new to me, so I was pretty proud of myself that I got that far. ;) But I guess I only needed the Github software, not the Git software - it's hard to know which tutorials to believe.
Anyway, now I want to figure out a workflow for my own stuff, which is my actual question for you guys. From what I can tell, having the master repo anywhere but the public Github costs money, and I don't care if others see my code (I don't expect anyone else to work on my oddball project made of spaghetti code, but if they want to, that's great). Okay, but then what? Perhaps one of these scenarios, or something else:
Clone branches of the repo to my PC, do edits on the local files, and upload them in Filezilla for testing (a couple more clicks than my current workflow because Filezilla doesn't automatically see the relationship between the local file and the remote file, but not a big deal). Then when I'm happy with the code, commit locally, sync to Github, and copy the files (from somewhere - not sure on this point) to the production area.
Install the Linux flavor of Git on my VPS so that the "local" Git file location is the testbed, and use Git through PuTTY to do the local commits. Simpler for file structure (no need for a copy on my PC at all) but more cumbersome to use Git:
I'm not on PuTTY very frequently, and for some reason the connection often dies on me and I have to restart.
Even though the Linux command line is Git's native habitat, I am probably more comfortable with a GUI (because I forget command syntax quickly - old brain, I guess).
Also, since I never ended up using the Git program I installed here, I'm not sure whether it would be Git or Github I would be using on the server.
Some other scenario, since neither #1 nor #2 uses Git/Github to manage the production file area at all, which would probably be a good idea so that I don't forget to copy everything I need.
I tried to research the possibility of a PHP-based GUI to go with idea #2 (so I don't have to use PuTTY for day-to-day operations), but it seems that the discussions of such tools all assume either that you are trying to create your own Github service, or that the "local" cloned repo is physically on your local PC (with xAMP running on whatever OS it is). But maybe the Github software I used is enough to do all that - it's hard to tell. I don't yet understand the interplay between a master public repo on Github, branches somewhere (on Github also?), at least two sets of files on my web server (the testbed and the production area), Github software, Git software, and the keyboard/screen of the computer I'm sitting at.
So pardon my newbie ramblings, but if someone out there has a similar development situation: what's your workflow? Or what would you suggest for me?
Here's one way to approach the issue:
You will need three repositories:
a local repo to edit code. [1]
a bare remote repository on your server. This will be in a location that is not publicly viewable, but that you can SSH in to. [2]
The production environment. [3]
Here's the implementation:
workstation$ cd localWorkingDirectory/
workstation$ git init
workstation$ git add .
workstation$ git commit -m 'initial commit'
workstation$ ssh login@myserver
myserver$ mkdir myrepo.git
myserver$ cd myrepo.git
myserver$ git init --bare
myserver$ exit
workstation$ cd localWorkingDirectory/
workstation$ git remote add origin login@myserver:myrepo.git
workstation$ git push origin master
every time you make a commit on any branch, back it up with:
workstation$ git push origin BRANCH
When you are ready to move branch version2 into production, do this:
workstation$ git push origin version2
workstation$ ssh login@myserver
myserver$ git clone path/to/myrepo.git productionDirectory
myserver$ cd productionDirectory
myserver$ git checkout version2
Oh no! It doesn't work! Better switch back to version1!
workstation$ ssh login@myserver
myserver$ cd productionDirectory
myserver$ git checkout version1
You don't need github (or any other central store) to start using git. Especially since you're a lone developer. Git runs directly on your own machine, without any server component (unlike for example subversion). Just git init and start committing away.
I agree with the other commenters here that you should aim to get a local development environment up and running. Even if it takes some effort, it's certainly worth it. One of the side effects of doing so may be that you are forced to decouple some of your current hard dependencies, and thereby get a better overall application architecture out of it. The things that can't easily be replicated in your development environment could instead be replaced with mock services.
Once that is in place, look into a scripted deployment process. E.g. write a shell script that syncs your development machine's codebase with the production server. There are many ways to do this, but I suggest you start really simple, then revise your options (Capistrano is one option).
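For example, a bare-bones version of such a script might look like this (the host and paths are assumptions):

#!/bin/sh
# one-way sync of the local working copy to the production docroot
rsync -avz --delete --exclude '.git' \
    ./ login@myserver:/var/www/production/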
I'd certainly look at something like Capistrano for your current development setup.
I can understand why you might be reticent to use the terminal, but it would probably help your understanding of git in context. It doesn't take long to get to grips with the commands, and when tied into a system such as Capistrano you'll be rocking development code up to your environment in no time:
git commit -a
git push origin develop
cap deploy:dev
When I'm working on Windows I generally try to replicate the deployment environment locally with virtual machines, using something like Sun's VirtualBox. That way you can minimise potential environment issues while still developing locally. Then you can just use PuTTY to SSH to your local VM. Set up sharing between the VM and your host OS and all your standard IDEs/editors will work too. I find this preferable to having to set up a VPS remotely, but whatever works.
We're a web development shop using PHP or .Net. Here's the proposed path for development:
Developer -> SVN Server -> Jenkins Server -> Dev Server
A developer commits or creates a tag to the SVN Server
Either a commit or tag triggers a job on Jenkins asynchronously
The job on Jenkins will run an svn update (or clear the destination space, then svn export) on the Dev Server, bringing the code in from the SVN Server (sketched below)
For .Net, after having copied the code, I need to compile it using aspnet_compile
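For reference, a sketch of the shell step such a Jenkins job might run (the repository URL and paths are assumptions):

#!/bin/sh
# clear the destination, then export a clean copy from SVN into the dev docroot
rm -rf /var/www/dev/*
svn export --force http://svnserver/repo/trunk /var/www/dev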
My question is twofold: is this a proper use of Jenkins? And, if I were to compile .Net using aspnet_compile.exe, would installing a slave on the Microsoft OS be the way to go (to enable this)?
To further the answer:
Check the "Restrict where this project can be run" option and select the slave.
This sounds like a fairly standard build / deploy workflow and therefore a good use of Jenkins, but I'd suggest doing step 4 as part of your Jenkins build and then deploying the compiled artifacts to your dev server also using Jenkins. This way you'll be able to have Jenkins report compilation problems as build failures and have the ability to run tests against the compiled code as part of the Jenkins job, per @thescientist's comment.
If your Jenkins master is running on an operating system other than Windows, you can use a Windows slave to do the compilation. You could even have the slave process running on your dev server to make deployment easier; I wouldn't recommend this for a production setup, but it's OK for dev.
Here are some links which may help you:
Running Jenkins on Windows
Building ASP.NET code with Jenkins
Jenkins and PHP
Recently we have started exploring Git, with the goals of enabling our developers to work from anywhere and of automating the overall deployment process.
We have a central test server where we host all apps/sites for testing and/or demo purposes, and once development and testing are finalized we move each application to its respective live server.
What I have set up with Git is as follows:
1. Create a bare repo on the test server
2. Get a local clone for each involved developer; developers will push to the remote (test server) dev branch
3. Someone will merge all changes from the dev branch to the master branch and push it to the remote
4. The test server (bare repo) has a post-receive hook, which checks out the master branch to the public_html folder (using GIT_WORK_TREE and checkout -f)
As of now, everything works well and I am able to see the merge on the master branch on the hosted pages (on the test server, of course). Now my questions are...
1. Am I doing this right?
2. I guess the post-receive hook I have set executes on a push to the dev branch as well. How do I avoid this?
3. How can I ship these contents to my live server? As I have some projects with a large code base, checking out everything on the test server and then shipping it to live doesn't look good enough.
I've heard of CI servers, but as far as I know they check out locally and upload everything to live using 'rsync' (I don't know if it just syncs changes or uploads everything) or similar tools. I just want to avoid that "everything" part and keep an option open to roll back if anything goes wrong. I am fine with setting up git on live servers.
Yes. You can see other considerations at "reset hard on git push".
You can test the name of the branch when receiving the commits.
branch=$(git rev-parse --symbolic --abbrev-ref $refname)
See also "Writing a git post-receive hook to deal with a specific branch"
rsync is usually recommended for the live server (where git isn't necessary): it will update only what has changed.
If you have git on live server, then various approaches are described in "Git for Websites / post-receive / Separation of Test and Production Sites"
Regarding deployment, as seen in "Deploy with rsync (or svn, git, cvs) and ignore inconsistent state during deployment?", deploying (even everything) into a separate directory and symlinking the prod instance to that directory is a nice way of avoiding inconsistencies during deployment, and of facilitating rollback (symlink back to the previous live directory) in case of trouble.
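A sketch of that symlink dance (the directory layout is an assumption):

# each deploy goes into its own dated directory; "current" is what the web server sees
RELEASE=/var/www/releases/$(date +%Y-%m-%d_%H%M%S)
rsync -a --delete build/ "$RELEASE/"
ln -sfn "$RELEASE" /var/www/current
# rollback = repoint the symlink at the previous release directory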
Before I begin, I know there are a lot of questions similar to this one, but I am really having difficulty finding a concise, secure best practice, since the feedback on them has varied so widely.
What I want to do:
Complete work on my local machine on a development branch.
Push changes to GitHub. GitHub posts to a webhook URL and automatically has my remote server pull the changes on a development site.
Once QA'd and confirmed to be proper on the development site, push the master branch to the production site (on the same server as the development site).
Where I am at:
I have git installed on my local machine and the remote server. I can push mods to the development branch to git. On the remote server, I can pull the updates and it works like a charm. The problem is that I cannot get the remote server to automatically update when changes are pushed from my local machine.
My questions are:
For the remote server development site directory, should I use git init or git init --bare? I don't plan on having updates made on the server itself. I would like my dev team to work locally and push mods to the server. I believe I need to use git init, as the working tree is needed to set up a remote alias to the git repository, but I wanted to confirm.
I am pretty sure the issue with the webhook POST from GitHub is due to user privileges. How can I safely get around this? I have read many tutorials that suggest updating git hook files, but I feel as though that is a more drastic measure than I need to take. I would love to have the webhook hit a URL that safely pulls the files without adding a boatload of code (if that is possible).
I am a web developer by nature, so git and sysadmin tasks are generally the bane of my existence. Again, I know this question is similar to others, but I have yet to find a comprehensive, concise, secure, and most logical approach to resolving the issue. I am about 16 hours in and have officially hit the "going in circles with no progress" point.
You can do this quite easily with GitHub service hooks.
You'll need to create one more file that will handle the process of performing the git pull. Add a new file called github.php (or anything you wish), and add:
<?php `git pull`; // the backticks shell-execute a git pull in this directory
Save that file and upload it to the repository directory on your server. Then, in your GitHub repository settings, go to Service Hooks -> Post-Receive URL and paste in the URL of that file, e.g. http://demo.test.com/myfolder/github.php
So, when you push, GitHub will automatically visit this URL, thus causing your server to perform a git pull.
To see this in more detail, go to this tutorial.
I had the same exact issue, strong in code and development skills, weak in sysadmin skills. When I was finally ready to push code I had to ask a GitHub rep what their suggested method was, and they responded with Capistrano. It's a Ruby application that runs commands (such as git pull) on remote servers, along with pretty much any other command you can imagine.
Here are a few articles you can read to get more info:
GitHub - Deploy with Capistrano
How to compile the Capistrano stack on your *nix system
Another example of how to deploy code with Capistrano
Not going to lie, the learning curve was pretty steep, but once you start working with Capistrano, you will see that it works well for pushing code. I develop my applications in Symfony, and I have Capistrano set up to pull code, clear the cache, and clear log files, all in one command from my local machine.
We have a PHP project that we would like to version control. Right now there are three of us working on a development version of the project, which resides in an external folder to which all of our Eclipse IDEs are linked, and thus has no version control.
What is the right way and the best way to version control this?
We have an SVN setup, but we just need to find a good way to check in and out that allows us to test on the development server. Any ideas?
We were in a similar situation, and here's what we ended up doing:
Set up two branches -- the release and development branch.
For the development branch, include a post-commit hook that deploys the repository to the dev server, so you can test.
Once you're ready, you merge your changes into the release branch. I'd also suggest putting in a post-commit hook for deployment there.
You can also set up individual development servers for each of the team members, on their workstations. I find that it speeds things up a bit, although you do have some more setup time.
We had to use a single development server, because we were using a proprietary CMS and ran into licensing issues. So our post-commit hook was a simple FTP bot.
Here is what we do:
Each dev has a VM that is configured like our integration server
The integration server has space for Trunk, each user, and a few slots for branches
The production server
Hooks are in Subversion to e-mail when commits are made
At the beginning of a project, the user makes a branch and checks it out on their personal VM as well as grabs a clean copy of the database. They do their work, committing as they go.
Once they have finished everything in their own personal space they log into the integration server and check out their branch, run their tests, etc. When all that passes their branch is merged into Trunk.
Trunk is rebuilt, the full suite of tests is run, and if all is good it gets the big ol' stamp of approval, is tagged in SVN, and is promoted to Production at the end of the night.
If at any point a commit by someone else is made, we get an e-mail and can merge those changes into our individual branches.
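A sketch of the SVN commands behind that flow (the repository URL and paths are assumptions):

# start a project branch and check it out on the personal VM
svn copy http://svnserver/repo/trunk http://svnserver/repo/branches/my-feature -m "start my-feature"
svn checkout http://svnserver/repo/branches/my-feature
# after tests pass on the integration server, merge into a trunk working copy
cd /path/to/trunk-wc
svn merge --reintegrate http://svnserver/repo/branches/my-feature
svn commit -m "merge my-feature into trunk"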
Beanstalk has built-in post-commit hooks for deploying to development, staging, and production servers.
One way to use Subversion for PHP development is to set up a repository for one or all three developers, and use this repository more as a syncing tool than as true version control.
You could,
Make a repo
Add your entire PHP document structure of your project
Check out a copy of this repo into the correct spot on your dev server
Use an SVN hook that activates on commit
This hook will automatically update the contents of the dev server whenever anybody on the team checks in any code.
Hook resides in:
svn_dir/repo_name/hooks/post-commit
And could look like:
#!/bin/sh
/usr/bin/svn up /path_to/webroot --username svn_user --password svn_pass
That will update your working copy on the dev server to the latest check in.
What about something distributed? You can start for example with Mercurial, try different workflows, and see which one fits you the best.
Each of you could run it locally, or on your own dev server (or even the same one with a different port...).
One possible way (there are probably better ways):
Each of you should have your own checked out version of the project.
Have a local copy of the server on your computer and test it there throughout the day. Then at the end of each day (or whenever), you merge together whatever you are ready to test, and you check it out onto the dev server and test it.
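For instance, the end-of-day sync could look like this (the repo URL is an assumption; hg serve defaults to port 8000):

# pull teammates' changesets from the shared repo and combine with your own work
hg pull http://devserver:8000/project
hg merge
hg commit -m "merge"
# publish the merged result so the dev server copy can be updated and tested
hg push http://devserver:8000/project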
Another tool you can use for the builds is TeamCity, which is free for 20 build configurations (enough for most small companies/projects). This way you can run your tests as well as schedule builds.