Recently we started exploring Git, with two goals: enabling our developers to work from anywhere, and automating the overall deployment process.
We have a central test server where we host all apps/sites for testing and/or demo purposes, and once development and testing are finalized we move the application to its respective live server.
What I have set up with Git is as follows:
1. Create a bare repo on test server
2. Each involved developer gets a local clone; developers push to the remote (test server) dev branch
3. Someone merges all changes from the dev branch into the master branch and pushes it to the remote
4. The test server (bare repo) has a post-receive hook, which checks out the master branch to the public_html folder (using GIT_WORK_TREE and git checkout -f).
So far everything works well, and I can see merges to master reflected in the hosted pages (on the test server, of course). Now my questions are...
1. Am I doing this right?
2. I suspect the post-receive hook I have set up executes on pushes to the dev branch as well. How do I avoid this?
3. How can I ship these contents to my live server? Some of my projects have a large code base, so checking out everything on the test server and then shipping all of it to live doesn't look like a good approach.
I've heard of CI servers, but as far as I know they check out locally and upload everything to live using rsync (I don't know whether it syncs only changes or uploads everything) or similar tools. I just want to avoid that "everything" part and keep the option open to roll back if anything goes wrong. I am fine with setting up Git on the live servers.
Yes. You can see other considerations at "reset hard on git push".
You can test the name of the branch when receiving the commits.
branch=$(git rev-parse --symbolic --abbrev-ref $refname)
See also "Writing a git post-receive hook to deal with a specific branch"
An rsync is usually recommended for the live server (where Git isn't necessary): it will transfer only what has changed.
If you have Git on the live server, then various approaches are described in "Git for Websites / post-receive / Separation of Test and Production Sites".
Regarding deployment, as seen in "Deploy with rsync (or svn, git, cvs) and ignore inconsistent state during deployment?", deploying (even everything) into a separate directory and symlinking the production instance to that directory is a good way to avoid inconsistencies during deployment, and to facilitate rollback (symlink back to the previous live directory) in case of trouble.
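A minimal sketch of that symlink dance, assuming a bare repo and a web server whose docroot points at /var/www/current (all paths illustrative):
#!/bin/sh
# deploy each release into its own timestamped directory...
REL=/var/www/releases/$(date +%Y-%m-%d_%H%M%S)
git clone /path/to/myrepo.git "$REL"
# ...then atomically repoint the symlink the web server serves
ln -sfn "$REL" /var/www/current
# rollback = repoint the symlink at the previous release directory, e.g.:
# ln -sfn /var/www/releases/<previous> /var/www/current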
I have decided that it's time for me to start using Git on a PHP project that I have been developing casually for over a decade. (Please, no lectures from the version control police!) Due to the complex setup required on my VPS to do everything the project needs (esp. single-codebase-multiple-client structure and a Japanese-capable installation of TeX to create specialty PDFs), it is not possible to set up a development environment on my local Windows box. But I do have a testbed area on the server that I can play in, so it's my development area. Currently I use Filezilla to access the server and open files directly into Notepad++, and when I'm ready to see my edit in action, I just save and let Filezilla upload. When everything looks good on the testbed, I copy the files to the production codebase area. Yeah, that gives me no history of my changes other than my own comments, and I have to be careful not to mix bug fixes with half-finished new features. I can see the value of Git's branches for different upgrades in progress.
Yesterday I got my toes wet. First I created a Github account, and then (at the recommendation of a tutorial) installed Git for Windows (with its own Bash and tiny-looking GUI) and KDiff3, and followed some instructions for configuring Git Bash. After all that, though, I ended up having to install something else in order to interface with my Github account (appropriately named Github for Windows), which seems to do all the stuff the other two programs were supposed to do for me. Anyway, then I did a simple task as my first foray into the Github world - I had added functionality to someone else's jQuery plugin and wanted to share it with the developer, so I forked his repo, cloned it to my machine, overwrote the file I had previously edited and tested, synced to my Github account, and sent a pull request. All the terminology in that last sentence was brand new to me, so I was pretty proud of myself that I got that far. ;) But I guess I only needed the Github software, not the Git software - it's hard to know which tutorials to believe.
Anyway, now I want to figure out a workflow for my own stuff, which is my actual question for you guys. From what I can tell, having the master repo anywhere but on the public Github costs money, and I don't care if others see my code (I don't expect anyone else to work on my oddball project made of spaghetti code, but if they want to, that's great). Okay, but then what? Perhaps one of these scenarios, or something else:
Clone branches of the repo to my PC, do edits on the local files, and upload them in Filezilla for testing (a couple more clicks than my current workflow because Filezilla doesn't automatically see the relationship between the local file and the remote file, but not a big deal). Then when I'm happy with the code, commit locally, sync to Github, and copy the files (from somewhere - not sure on this point) to the production area.
Install the Linux flavor of Git on my VPS so that the "local" Git file location is the testbed, and use Git through PuTTY to do the local commits. Simpler file structure (no need for a copy on my PC at all) but more cumbersome for using Git:
I'm not on PuTTY very frequently, and for some reason the connection often dies on me and I have to restart.
Even though the Linux command line is Git's native habitat, I am probably more comfortable with a GUI (because I forget command syntax quickly - old brain, I guess).
Also, since I never ended up using the Git program I installed here, I'm not sure whether it would be Git or Github I would be using on the server.
Some other scenario, since neither #1 nor #2 uses Git/Github to manage the production file area at all, which would probably be a good idea so that I don't forget to copy everything I need.
I tried to research the possibility of a PHP-based GUI to go with idea #2 (so I don't have to use PuTTY for day-to-day operations), but it seems that the discussions of such tools all assume either that you are trying to create your own Github service, or that the "local" cloned repo is physically on your local PC (with xAMP running on whatever OS it is). But maybe the Github software I used is enough to do all that - it's hard to tell. I don't yet understand the interplay between a master public repo on Github, branches somewhere (on Github also?), at least two sets of files on my web server (the testbed and the production area), Github software, Git software, and the keyboard/screen of the computer I'm sitting at.
So pardon my newbie ramblings, but if someone out there has a similar development situation, what's your workflow? Or what would you suggest for me?
Here's one way to approach the issue:
You will need three repositories:
a local repo to edit code. [1]
a bare remote repository on your server. This will be in a location that is not publicly viewable, but that you can ssh into. [2]
The production environment. [3]
Here's the implementation:
workstation$ cd localWorkingDirectory/
workstation$ git init
workstation$ git add .
workstation$ git commit -m 'initial commit'
workstation$ ssh login@myserver
myserver$ mkdir myrepo.git
myserver$ cd myrepo.git
myserver$ git init --bare
myserver$ exit
workstation$ cd localWorkingDirectory/
workstation$ git remote add origin login@myserver:myrepo.git
workstation$ git push origin master
every time you make a commit on any branch, back it up with:
workstation$ git push origin BRANCH
When you are ready to move branch version2 into production, do this:
workstation$ git push origin version2
workstation$ ssh login@myserver
myserver$ git clone path/to/myrepo.git productionDirectory
myserver$ cd productionDirectory
myserver$ git checkout version2
Oh no! It doesn't work! Better switch back to version1!
workstation$ ssh login@myserver
myserver$ cd productionDirectory
myserver$ git checkout version1
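One detail worth adding to this walkthrough: the production clone only knows about the refs that existed when it was cloned, so for any branch pushed later, fetch before checking it out (version3 is a hypothetical next release):
workstation$ git push origin version3
workstation$ ssh login@myserver
myserver$ cd productionDirectory
myserver$ git fetch origin
myserver$ git checkout version3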
You don't need Github (or any other central store) to start using Git, especially since you're a lone developer. Git runs directly on your own machine, without any server component (unlike, for example, Subversion). Just git init and start committing away.
I agree with the other commenters here that you should aim to get a local development environment up and running. Even if it takes some effort, it's certainly worth it. One side effect of doing so may be that you are forced to decouple some of your current hard dependencies, which gives you a better overall application architecture. The things that can't easily be replicated in your development environment could instead be replaced with mock services.
Once that is in place, look into a scripted deployment process, e.g. a shell script that syncs your development machine's codebase with the production server. There are many ways to do this, but I suggest you start really simple, then revise your options (Capistrano is one option).
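For instance, a bare-bones sketch of such a script (the host, paths and excludes are assumptions, not a prescription):
#!/bin/sh
# deploy.sh: mirror the local codebase to production; rsync transfers only changes
# the trailing slash on the source means "sync the contents of this directory"
rsync -az --delete \
    --exclude='.git' \
    --exclude='config.local.php' \
    /home/me/project/ deploy@production.example.com:/var/www/project/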
I'd certainly look at something like Capistrano for your current development setup.
I can understand why you might be reluctant to use the terminal, but it would probably help your understanding of Git in context. It doesn't take long to get to grips with the commands, and when they're tied into a system such as Capistrano you'll be rocking development code up to your environment in no time:
git commit -a
git push origin develop
cap deploy:dev
When I'm working on Windows I generally try to replicate the deployment environment locally with virtual machines, using something like Sun's VirtualBox. That way you can minimise potential environment issues while still developing locally. Then you can just use PuTTY to ssh to your local VM. Set up sharing between the VM and your host OS and all your standard IDEs/editors will work too. I find this preferable to having to set up a VPS remotely, but whatever works.
I'm part of a team of 3 (2 developers and 1 designer) who sometimes work in the office and sometimes remotely, and I'm looking at a way of using Git to develop our websites seamlessly. I've got a managed account with Rackspace and have 3 servers set up on the account: development, staging and production.
I'm looking at the best way for our team to develop daily on our websites without having to FTP the files up to the server each time we make a change. I've used SVN in the past, but I'm looking to use Git for version control. The workflow I had in mind for an example website called 'test' was the following:
The development server would have a directory (called trunk, though I'm not sure if it should be called something else?) for each user, as well as a central directory, e.g. /var/www/test/jbloggs/, /var/www/test/asmith/, /var/www/test/rjohnson/ and /var/www/test/central/trunk/.
The central repository would live in /var/www/test/central/trunk/, and /asmith/, /rjohnson/ and /jbloggs/ would clone the trunk, so they would become /var/www/test/asmith/trunk/, /var/www/test/rjohnson/trunk/ and /var/www/test/jbloggs/trunk/.
Each user would then have a copy of /trunk/ containing all the website files, each with a subdomain configured (i.e. jbloggs.test.development, rjohnson.test.development, etc.), and would configure their IDE to automatically SFTP to the server so that they are working directly within their directory on the development server. The central directory's domain would be test.development. When they come to commit any changes to the central repository, they will SSH into the server and commit their changes, and when we want to update the central repository we will pull these changes to get the latest version, which can then be viewed at test.development.
Is this the right method of doing things, or should we all have a local LAMP stack installed (apart from the designer, who uses Windows) and keep our repositories locally? If so, should the central repo still be on the Rackspace server? The developers will be using PhpStorm and the designer Dreamweaver.
Hope the above makes sense.
Thanks
I strongly advise you to work locally and then push to the shared server. This is what Git is made for. Development will be more responsive and easier for everybody. Make sure all devs master Git so they can handle their internal soup however they want. If one dev destroys their database, the others can keep on working. But you'll also need a convenient way to sync databases, so developers work with an up-to-date local database.
The rest of your chain is OK. You can still have two test steps, like a dev server for the dev team and a test server for testers. This will let testers work on a more stable version, and it will also make you test the upgrade process when you copy changes from the dev server to the test server. A lot of errors arise because of untested upgrade procedures.
You can update changes on the test and production servers either by installing Git on them or by just using a simple script that will FTP the changed files. I don't like having Git on a production server, but that is a personal opinion.
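If the servers only speak FTP, one way to script "upload only the changed files" is lftp's mirror mode; a rough sketch with placeholder credentials and paths:
#!/bin/sh
# push only files that are newer than the remote copies, over plain FTP
lftp -u deployuser,secret ftp.example.com \
    -e "mirror -R --only-newer /var/www/test/central/trunk/ /htdocs/; quit"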
I'm struggling to find the best approach for testing PHP development code that depends on certain framework files being present. I think there are three possible scenarios with Git:
Create a copy of the live production directory and clone this 'dev' directory to the local workstation. The next step would be to edit code on the local workstation and commit/push every change. You can check your work via the 'dev' URL on the production server. If everything is all right, you can push the changes to the 'live' directory. This approach may result in a lot of commits while you are editing/fixing your code (syntax errors or other obvious mistakes), and it adds an extra step (commit/push) to see your result.
Create a 'dev' server which mirrors the production server. This server will contain all the framework files, and you'll be able to edit a copy of the 'live' directory directly and immediately see your changes. If you prefer, you can mount the remote 'dev' directory on your local workstation. This requires an extra server, which needs to be maintained, and you would need the resources to set it up.
Create a local 'dev' workstation environment and clone the repository on the 'live' or 'dev' server. This way you'll be able to test all the code on your local machine and only push out the commits which have been tested and approved. This reduces the number of commits compared to method one. To recreate the 'dev' environment locally you might have to install a lot of framework/dependency files on your local workstation, and even then it might not be 100% reliable when the code is ported to the actual live server.
Basically I want to find the best method for the 'write-test-revise-test-revise-test-commit' cycle when you are dependent on framework files (whatever framework that may be). Would you create a 'dev' server, or would you recreate the exact production environment on your local workstation? Ideally you would only commit the code once you have done some initial testing (catching obvious syntax errors etc.). A 'dev' server with a local Git repo would require committing every little change just to test your work, which may be tedious...
I hope I have made myself clear. I'm looking for the best way to integrate Git into the 'write-test-commit' cycle. Normally you would test on the local machine, but with web development you may need a web server + framework to be able to test your code. Editing directly on the 'live' server is what I want to avoid.
Thanks for your input!
There are definitely many ways to do this, but here are my 2 cents based on how I have been working lately.
First off, I would probably avoid a dev server per se, because with more than one developer, each developer may try to update the dev server with conflicting code, or, if they are working on similar areas, overwrite some of your test code, since you are both probably working from the same branch but have each modified the code and not yet pushed the changes.
That said, you may want a dev server that closely resembles your live server so that after you and some of your other developers have made a number of changes, you can test them on the dev server before updating the code on the live server.
In my environment, I develop on Linux with Apache/PHP running the same versions and configs as the live server. I clone my Git repo and set up my environment so that my document root is the "public" directory of my Git repo (e.g. htdocs). In our case the dev MySQL server is usually shared rather than on the local machine, but you can do whatever is easiest there. Our system depends on constantly updated data from the field, which is why the database is shared; we have a system which automatically adds a lot of the necessary "test" data to the database.
This way, I can pull the latest code from Git, work on it all I want (work-break-fix-work-work-work, etc. etc.), and when I have completed my task, I can push the changes back to Git for the other developers.
When you are ready for a release, you can do all of your testing and stuff on the dev server, verify it is good to go and then push to the live.
In my case, one person is responsible for updating the "live" server, and I use rsync to sync my local working directory to the live server. So when I am absolutely sure we are ready to deploy, I pull the most recent code from Git and run my script, which rsyncs my Git directory to the server.
I'd avoid your methods 1 and 2 and go with something like 3. That will probably be the sanest thing for you to do and the easiest to manage. Depending on what your team is like, you could create a dev VM pre-built with all the dependencies, the correct software, and the development tools you are all using, or leave it up to each developer to set themselves up.
So far this method has worked pretty well for me and the others on my team.
Call me opinionated, but every developer should have a local development AMP stack which they can develop against. If you don't know how to set up an exact mirror of your production server, solve that problem first.
Once you're there, it should be trivial to have each developer set up a virtual machine with a clean OS install, configure web/php/db servers and libraries/frameworks to match the production environment, check out your project, and get to work.
Developers commit against personal branches in their own local repos as they go, and after local testing, ship their code (via either a push, or a pull request, or whatever).
The exact rules about how to merge changes into master depend on your team and its preferences. But developers should almost always have a complete local dev environment. If it seems like it's hard to set one up, that's a big problem. Figure out how to make it easy and then document it.
I am in charge of launching web projects, and it currently takes a little too long from client sign-off to final launch. The projects live on a server to which I have root access, but it runs Plesk so that the boss can set up VirtualHosts, which means there are many sites running on it.
Each project has its own git repository so currently I have the following setup.
On my staging server there is a clone of the repo and I have two bare repositories. One is on the forge (powered by Indefero) and the other is on the live server.
Each release of a project is tagged with today's date, e.g. git tag -a deployed-2011-04-20.
So on the staging server I execute something similar to git push --tags live master, which targets the bare repo on the live server.
Then, over SSH on the live server, I execute a short bash script which basically clones the repository from the live bare repo to the folder Apache will serve.
So, if that all makes sense, would you be able to recommend a tool or anything to make my life easier that follows that workflow, or can be adapted to it?
It looks something like this:
Forge (authoritative source)
^
|
v
Staging/development server
|
v
Live server bare repo
|
v
Releases folder (symlinked to htdocs)
One solution that comes to mind is to add a post-receive hook on the live server bare repo in order to detect any deployed-2011-xx-yy tag coming from the staging repo, and to trigger the ssh script from there.
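A sketch of that first option (the release script name and path are illustrative):
#!/bin/bash
# hooks/post-receive on the live bare repo: run the release script when a deploy tag arrives
while read oldrev newrev refname; do
    case "$refname" in
        refs/tags/deployed-*)
            /path/to/release-script.sh "${refname#refs/tags/}"
            ;;
    esac
done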
The other solution is to have a scheduler (like the Hudson mentioned in pderaaij's answer), in order to:
monitor the staging repo and, on the right tag, trigger the push to the live server
monitor the live bare repo, and trigger the ssh script.
The second solution has the advantage of keeping a trace of all release instances in a Hudson job report: each time the job detects the right tags, it executes the release process.
Take a look at Capistrano, which happily does the symlink dance you describe here.
If you use Hudson as a continuous integration server, you can make use of the build pipeline plugin.
You have your normal build process, but add an extra job which contains the commands to deploy your application. The plugin gives you a nice button to execute that build.
The Hudson job could execute all the needed commands, or you could take a peek at Maven for PHP and use the available plugins to invoke the remote scripts.
Perhaps it is a bit out of scope considering the path you've already chosen, but it's worth researching.
We've got a couple of Drupal sites we develop for; we are a team of 4 developers and about 20+ non-technical content managers.
Each developer has his own dev environment; we all share a beta environment where we test code integration and performance, a staging environment where the content managers test features before we push to the live environment, a training environment we use to train people, and an environment specific to usability testing.
All that is set up with only one bare repo on a central server, where each environment is a branch. We use a post-receive hook with password-less SSH certs that does the auto-pulling on the appropriate repo, based on a case statement like the following:
#!/bin/bash
# post-receive receives one "<old-rev> <new-rev> <refname>" line per updated ref
while read line; do
    BRANCH=$(echo "$line" | sed 's/.*\///g')
    LOG="$(date) - updating $BRANCH instance"
    case $BRANCH in
        "beta" )
            ssh www-data@beta "cd /var/www/beta.example.com; git pull"
            ;;
    esac
done
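For the password-less SSH this relies on, the usual setup is a key pair for the user the hook runs as:
ssh-keygen -t rsa                 # run once as the hook's user, with an empty passphrase
ssh-copy-id www-data@beta         # repeat for each environment host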
We have a PHP project that we would like to put under version control. Right now there are three of us working on a development version of the project, which resides in an external folder to which all of our Eclipse IDEs are linked, and thus there is no version control.
What is the right way and the best way to version control this?
We have SVN set up, but we just need to find a good way to check code in and out that still allows us to test on the development server. Any ideas?
We were in a similar situation, and here's what we ended up doing:
Set up two branches -- the release and development branch.
For the development branch, include a post-commit hook that deploys the repository to the dev server, so you can test.
Once you're ready, you merge your changes into the release branch. I'd also suggest putting in a post-commit hook for deployment there.
You can also set up individual development servers for each of the team members, on their workstations. I find that it speeds things up a bit, although you do have some more setup time.
We had to use a single development server, because we were using a proprietary CMS and ran into licensing issues. So our post-commit hook was a simple FTP bot.
Here is what we do:
Each dev has a VM that is configured like our integration server
The integration server has space for Trunk, each user, and a few slots for branches
The production server
Hooks in Subversion send an e-mail when commits are made
At the beginning of a project, the user makes a branch and checks it out on their personal VM as well as grabs a clean copy of the database. They do their work, committing as they go.
Once they have finished everything in their own personal space, they log into the integration server, check out their branch, run their tests, etc. When all that passes, their branch is merged into Trunk.
Trunk is rebuilt, the full suite of tests is run, and if all is good it gets the big ol' stamp of approval, is tagged in SVN, and is promoted to Production at the end of the night.
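For reference, the tagging step in Subversion is just a cheap copy (repository layout and date are illustrative):
svn copy ^/trunk ^/tags/release-2011-04-20 -m "Nightly build approved for production"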
If at any point a commit by someone else is made, we get an e-mail and can merge those changes into our individual branches.
Beanstalk has built-in post-commit hooks for deploying to development, staging, and production servers.
One way to use Subversion for PHP development is to set up a repository for one or all three developers, and use this repository more as a syncing tool than as true version control.
You could,
Make a repo
Add your entire PHP document structure of your project
Check out a copy of this repo into the correct spot on your dev server
Use an SVN hook that activates on commit
This hook will automatically update the contents of the dev server whenever anybody on the team checks in any code.
The hook resides in:
svn_dir/repo_name/hooks/post-commit
And could look like:
#!/bin/sh
/usr/bin/svn up /path_to/webroot --username svn_user --password svn_pass
That will update your working copy on the dev server to the latest check in.
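One gotcha: Subversion silently skips hook scripts that are not executable, so remember:
chmod +x svn_dir/repo_name/hooks/post-commit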
What about something distributed? You could start with Mercurial, for example, try different workflows, and see which one fits you best.
Each of you could run it locally, or on your own dev server (or even the same one with a different port...).
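A quick sketch of how cheap it is to try (hostname, port and directory names are placeholders):
cd myproject && hg init            # turn the existing project folder into a repo
hg addremove && hg commit -m "initial import"
hg serve -p 8000                   # throwaway HTTP server teammates can pull from
# a teammate can then:
hg clone http://devbox:8000/ myproject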
One possible way (there are probably better ways):
Each of you should have your own checked out version of the project.
Have a local copy of the server on your computer and test there throughout the day. Then at the end of each day (or whenever), merge together whatever you are ready to test, check it out onto the dev server, and test it.
Another tool you can use for builds is TeamCity, which is free for 20 build configurations (enough for most small companies/projects). This way you can run your tests as well as schedule builds.