We have started using Hudson, and the current workflow is:
checkout locally > code > run tests > update > run tests > commit
Rather than polling, Hudson simply sits there until we manually trigger a build. It then:
checkout locally > run Phing script
The Phing script then:
svn export latest revision > run tests > (if successful) generate reports etc. > compress export > scp to production server > .. do magic to make site live...
That all works fine and dandy, however it doesn't really give us the ability to do any kind of "staging" QA, and every build builds the repo's head revision. Ideally we would like Hudson to poll (or use post-commit hooks) to build each commit and:
checkout locally > run Phing task to run tests and if successful, generates reports etc..
Then be able to manually trigger an automated deployment (via a Phing task) of each specific build to either the staging QA environment or production. Not every commit will be deployed to QA.
Is this workflow even possible from within Hudson, or are we going to need to manually run our deployment Phing tasks afterwards?
I separated the build/test job (job1) and the deploy job (job2). Job1 runs on trunk after every commit (Hudson polls, but a post-commit hook would work as well). It also archives the build artifacts. Job2 is started manually. It gets the build number from job1 as a build parameter (I like the run parameter) and downloads the artifacts from job1 into its own workspace. It then runs the deployment. In your case I would add another parameter (a choice parameter) to determine which environment you want to deploy to.
With the description setter plugin, you can print out the run number from job1 and the environment, so you can then easily see in the job history which build was deployed when, and to which environment.
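For illustration, a minimal sketch of what job2's shell build step might look like, assuming a run parameter named SOURCE_BUILD (Hudson exposes it as the URL of the selected job1 run) and a hypothetical archived artifact called build.tar.gz:
# Fetch the archived artifact from the job1 run selected via the run
# parameter; SOURCE_BUILD and build.tar.gz are assumed names.
wget -O build.tar.gz "${SOURCE_BUILD}artifact/build.tar.gz"
# Unpack it into job2's workspace before running the deployment.
tar xzf build.tar.gz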
I ended up doing something similar to Peter Schuetze's suggestion, using only one job, however. I use 3 build parameters: deploy (boolean), environment (choice) and revision (text). I then altered my Phing scripts to only run the deployment if the deploy parameter is true, in which case they deploy the specified revision to the specified environment. By default deploy is false, revision is head and environment is staging. Now when Hudson polls SVN, it sees the deploy parameter is false and bypasses the deployment tasks.
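As a rough sketch, the shell step of such a parameterized job could pass the parameters straight through to Phing as properties (parameter names as described above; the target name "build" is an assumption):
# Hudson substitutes the build parameters; the Phing script checks the
# deploy property and skips the deployment targets when it is false.
phing -Ddeploy=${deploy} -Denvironment=${environment} -Drevision=${revision} build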
I'm not entirely clear on what you want to achieve, but I'm wondering whether you are using the Phing plugin? Perhaps what you want is not currently possible through Hudson and may require changing your development process to make it possible.
Related
Is there any way to set up git such that it listens for updates from a remote repo and pulls whenever something changes? The use case is that I want to deploy a web app using git (so I get version control of the deployed application), but I want to put the "central" git repo on GitHub rather than on the web server (GitHub's interface is just soooo nice).
Has anyone gotten this working? How does Heroku do it? My Google-fu is failing to give me any relevant results.
Git has "hooks", actions that can be executed after other actions. What you seem to be looking for is the "post-receive hook". In the GitHub admin, you can set up a post-receive URL that will be hit (with a payload containing data about what was just pushed) every time somebody pushes to your repo.
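If the "central" repo lived on a server you control instead of GitHub, a minimal post-receive hook could do the deployment itself. A sketch, assuming a bare repo on the server and a hypothetical web root at /var/www/site:
#!/bin/bash
# hooks/post-receive in the server's bare repo: check out the pushed
# master branch into the web root (the path is an assumption).
GIT_WORK_TREE=/var/www/site git checkout -f master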
For what it's worth, I don't think auto-pull is a good idea -- what if something wrong was pushed to your branch? I'd use a tool like Capistrano (or an equivalent) for such things.
On unix-likes you can create a cron job that calls "git pull" (every day, every week, or whatever) on your machine. On Windows you could use Task Scheduler or the "AT" command to do the same thing.
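For example, a minimal crontab entry (added via crontab -e) that pulls every 15 minutes; the repository path is hypothetical:
# m h dom mon dow  command
*/15 * * * * cd /var/www/site && git pull --quiet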
There are continuous integration programs like Jenkins or Bamboo, which can detect commits and trigger operations like build, test, package and deploy. They do what you want, but they are heavy with dependencies and hard to configure, and in the end they may just check the git repository periodically, which has the same effect as calling git pull from cron every minute.
I know this question is a bit old, but you can use git to auto-pull your project with a webhook and PHP (assuming your project involves a web server).
See my gist here:
https://gist.github.com/upggr/a6d92e2808e9628ebe0d01fd93569f4a
As some have noticed after trying this, if you use PHP's exec(), it turns out that solving the permissions problem is not that simple.
The user that executes the command might not be your own, but www-data or apache.
If you have root/sudo access, I recommend you read Jonathan's blog post.
When you aren't allowed/can't solve permissions
My solution was a bit creative. I noticed I could create a script under my username with a loop, and git pull would work fine. But that, as others have pointed out, raises the issue of running a useless git pull every, say, 60 seconds.
So here are the steps to a more delicate solution using webhooks:
Deploy key: go to your server and type:
ssh-keygen -t rsa -b 4096 -C "deploy" to generate a new deploy key; no write permissions are needed (read-only is safer). Copy the public key to your GitHub repository settings, under "Deploy keys".
Webhook: go to your repository settings and create a webhook. Let's assume the payload address is http://example.com/gitpull.php
Payload: create a PHP file with the example code below in it. The purpose of the payload is not to git pull, but to warn the following script that a pull is necessary. Here is the simple code:
gitpull.php:
<?php
/* Deploy (C) by DrBeco 2021-06-08 */
echo("<br />\n");
/* move into the repository clone */
chdir('/home/user/www/example.com/repository');
/* create the flag file that tells gitpull.sh a pull is needed */
touch('GITPULLMASTER');
?>
Script: create a script in your preferred folder, say /home/user/gitpull.sh, with the following code:
gitpull.sh
#!/bin/bash
cd /home/user/www/example.com/repository
while true ; do
    if [[ -f GITPULLMASTER ]] ; then
        git pull > gitpull.log 2>&1
        mv GITPULLMASTER GITPULLMASTER.`date +"%Y%m%d%H%M%S"`
    fi
    sleep 10
done
Detach: the last step is to run the script in detached mode, so you can log out and keep the script running in the background.
There are 2 ways of doing that; the first is simpler and doesn't need the screen software installed:
disown:
run ./gitpull.sh & to put it in the background
then type disown -h %1 to detach, and you can log out
screen:
run screen
run ./gitpull.sh
type Ctrl+A then D to detach, and you can log out
Conclusion
This solution is simple: you avoid messing with keys, passwords, permissions, sudo, root, etc., and you also prevent the script from flooding your server with useless git pulls.
The way it works is that it checks whether the file GITPULLMASTER exists; if not, it goes back to sleep. Only if the file exists does it do a git pull.
You can change the line:
mv GITPULLMASTER GITPULLMASTER.`date +"%Y%m%d%H%M%S"`
to
rm GITPULLMASTER
if you prefer a cleaner directory. But I find it useful for debugging to leave the pull dates recorded (and untracked).
For our on-premises Windows test servers, we use Windows Task Scheduler tasks, set to run every 3 minutes, pulling from Bitbucket Cloud to repositories on those servers. While not instantaneous, it meets our needs, and has proven to be reliable.
I'm sure there are answers for this all over Stack Overflow, but I was unable to find anything specific.
I have a PHP project which I am revisiting. It's running on a RHEL5 box. I have SVN on the same box.
Out of curiosity I recently added Jenkins to the machine and have the Jenkins PHP template at...
http://jenkins-php.org/
There was a bit of playing around with the setup but I more or less have this all running and doing Continuous Inspection builds when something is committed to SVN.
What I want to do now is have Jenkins copy my updated files across to the server when the build completes.
I am running a simple LAMP setup and would ideally only like to copy across the files that have actually changed.
Should I just use Ant & rsync? Currently the files reside on the same box as the server, but this may change, whereby I will need to sync these files across to multiple remote boxes.
Thanks
Check these out - the Copy Artifact Plugin and the job's environment variables.
Now set up 2 jobs - one on the source machine and one on the destination server (make it a slave). Use the plugin to copy the required artifacts by using environment variables.
Do you have your project (not Jenkins, but the one with the LAMP setup) under SVN? If so, I'd recommend creating a standalone job in Jenkins that just does an svn up, and tying it to your main job, so that when the build is OK, Jenkins automatically runs the job that updates your project.
For copying to other servers, take a look at the Publish Over plugins.
It's very easy to set up servers and rules. The bad thing is that you can't set it to copy only the files that are new in the current build, which means the entire project is uploaded on every build.
If your project is too big, another solution is to use rsync as a post-build action.
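A minimal sketch of such an rsync post-build shell step, assuming SSH access to the target machine (the host and paths are hypothetical):
# Sync only changed files from the Jenkins workspace to the web root;
# --delete removes files that no longer exist in the build.
rsync -az --delete "$WORKSPACE/" deploy@webserver:/var/www/site/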
I have Jenkins setup to build and deploy my project using Phing.
Inside the Phing build.xml file, I create a set of targets for doing useful stuff, such as running tests, static code analyzer and such.
Phing also is responsible for deploying the application, creating a structure similar to this:
var/www
    current -> /var/www/develop/releases/20100512131539
    releases
        20100512131539
        20100509150741
        20100509145325
    features
    hotfixes
The thing is, Jenkins does a git clone of the project, and Phing also does one, inside the releases directory. With this, I end up with two clones in the same build and deploy process.
My question is: should the responsibility for cloning the repository lie with Phing or with Jenkins?
I leave to Phing the same sort of tasks you mentioned: static code analysis, running tests, linters, etc.
The reason for this is that, as a dev, I might want to run all these tests (or a subset of them) regularly during development, and I'm able to trigger them in my local environment without needing Jenkins.
Deployment, however, I leave in charge of Jenkins, so I would let Jenkins continue doing the clone.
But if you want to have everything in Phing, I think that would be OK too. I would split the tasks into a group of dev tests to be run from the console, and a Jenkins task to be run on deploy.
Definitely, though, only one of them should be doing the clone.
I am in charge of launching web projects, and it currently takes a little too long from client sign-off to final launch. The projects live on a server to which I have root access, but it runs Plesk so that the boss can set up VirtualHosts, which means there are many sites running on it.
Each project has its own git repository so currently I have the following setup.
On my staging server there is a clone of the repo and I have two bare repositories. One is on the forge (powered by Indefero) and the other is on the live server.
Each release of a project is tagged with today's date, e.g. git tag -a deployed-2011-04-20.
So on the staging server I execute something similar to git push --tags live master, which targets the bare repo on the live server.
Then over SSH on the live server I execute a short bash script which basically clones the repository from the live bare repo to the folder Apache will serve.
So, if that all makes sense, would you be able to recommend a tool that follows that workflow, or can be adapted to it, to make my life easier?
It looks something like this:
Forge (authoritative source)
^
|
v
Staging/development server
|
v
Live server bare repo
|
v
Releases folder (symlinked to htdocs)
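For reference, the short bash script on the live server might look roughly like this (a sketch only; the paths and the release.sh name are assumptions, not the asker's actual script):
#!/bin/bash
# /usr/local/bin/release.sh <tag>: clone the live bare repo into a dated
# release folder and point the htdocs symlink at it.
TAG=$1
RELEASE=/var/www/releases/$TAG
git clone /var/www/live.git "$RELEASE"
cd "$RELEASE" && git checkout "$TAG"
ln -sfn "$RELEASE" /var/www/htdocs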
One solution that comes to mind is to add a post-receive hook on the live server's bare repo in order to detect any deployed-2011-xx-yy tag coming from the staging repo, and to trigger the ssh script from there.
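A sketch of such a hook, assuming the release script above is reachable at the hypothetical path /usr/local/bin/release.sh:
#!/bin/bash
# hooks/post-receive on the live bare repo: git feeds one
# "<oldrev> <newrev> <refname>" line per pushed ref on stdin.
while read oldrev newrev refname ; do
    case "$refname" in
        refs/tags/deployed-*)
            # strip the refs/tags/ prefix and hand the tag to the release script
            /usr/local/bin/release.sh "${refname#refs/tags/}"
            ;;
    esac
done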
The other solution is to have a scheduler (like Hudson, mentioned in pderaaij's answer) in order to:
monitor the staging repo and, on the right tag, trigger the push to the live server
monitor the live bare repo, and trigger the ssh script.
The second solution has the advantage of keeping a trace of all releases in a Hudson job report: each time said job detects the right tag, it executes the release process.
Take a look at Capistrano, which happily does the symlink dance you describe here.
If you use Hudson as a continuous integration server, you can make use of the build pipeline plugin.
You have your normal build process, but add an extra job which contains the commands to deploy your application. The plugin gives you a nice button to execute that build.
The Hudson job could execute all the needed commands, or you can take a peek at Maven for PHP and use the available plugins to invoke the remote scripts.
Perhaps it is a bit out of range considering the path you've already chosen, but it's worth the research.
We have a couple of Drupal sites we develop for; we are a team of 4 developers and about 20+ non-technical content managers.
Each developer has his own dev environment; we all have a beta environment where we test code integration and performance, a staging environment where the content managers test features before we push to the live environment, a training environment we use to train people, and an environment specifically for usability testing.
All that is set up with only one bare repo on a central server, where each environment is a branch. We use the post-receive hook, with passwordless SSH keys, to do the auto-pulling into the appropriate working copy, based on a case statement like the following:
#!/bin/bash
# post-receive hook: each input line is "<oldrev> <newrev> <refname>"
while read line ; do
    BRANCH=`echo $line | sed 's/.*\///g'`
    LOG="`date` - updating $BRANCH instance"
    case $BRANCH in
        "beta" )
            ssh www-data@beta "cd /var/www/beta.example.com; git pull"
            ;;
    esac
done
We have a PHP project that we would like to version control. Right now there are three of us working on a development version of the project, which resides in an external folder to which all of our Eclipse IDEs are linked, and thus it is not under version control.
What is the right way and the best way to version control this?
We have an SVN set up, but we just need to find a good way to check in and out that allows us to test on the development server. Any ideas?
We were in a similar situation, and here's what we ended up doing:
Set up two branches -- the release and development branch.
For the development branch, include a post-commit hook that deploys the repository to the dev server so you can test (see the sketch below).
Once you're ready, you merge your changes into the release branch. I'd suggest putting a post-commit hook for deployment there as well.
You can also set up individual development servers for each of the team members, on their workstations. I find that it speeds things up a bit, although you do have some more setup time.
We had to use a single development server, because we were using a proprietary CMS and ran into licensing issues. So our post-commit hook was a simple FTP bot.
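A minimal sketch of the development-branch post-commit hook mentioned above, assuming the repository and the dev server's working copy live on the same machine and that the working copy sits at a hypothetical /var/www/dev:
#!/bin/sh
# hooks/post-commit: Subversion passes the repository path and revision.
REPOS="$1"
REV="$2"
# Update the dev server's working copy to the revision just committed.
/usr/bin/svn update /var/www/dev --non-interactive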
Here is what we do:
Each dev has a VM that is configured like our integration server
The integration server has space for Trunk, each user, and a few slots for branches
The production server
Hooks are set up in Subversion to send e-mail when commits are made
At the beginning of a project, the user makes a branch and checks it out on their personal VM, as well as grabbing a clean copy of the database. They do their work, committing as they go.
Once they have finished everything in their own personal space, they log into the integration server, check out their branch, run their tests, etc. When all that passes, their branch is merged into trunk.
Trunk is rebuilt, the full suite of tests is run, and if all is good it gets the big ol' stamp of approval, is tagged in SVN, and is promoted to production at the end of the night.
If at any point a commit by someone else is made, we get an e-mail and can merge those changes into our individual branches.
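For illustration, pulling someone else's trunk changes into a personal branch might look like this (a sketch; the working-copy path is an assumption, and the ^/ syntax needs SVN 1.6+):
# In the branch working copy: merge the latest trunk changes in,
# then commit the merge.
cd ~/branches/my-feature
svn merge ^/trunk .
svn commit -m "merge latest trunk changes into branch"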
Beanstalk has built-in post-commit hooks for deploying to development, staging, and production servers.
One way to use Subversion for PHP development is to set up a repository for one (or all three) developers, and use this repository more as a syncing tool than as true version control.
You could:
Make a repo
Add your project's entire PHP document structure
Check out a copy of this repo into the correct spot on your dev server
Use an SVN hook that activates on commit
This hook will automatically update the contents of the dev server whenever anybody on the team checks in any code.
Hook resides in:
svn_dir/repo_name/hooks/post-commit
And could look like:
#!/bin/sh
/usr/bin/svn up /path_to/webroot --username svn_user --password svn_pass
That will update your working copy on the dev server to the latest check in.
What about something distributed? You could start with Mercurial, for example, try different workflows, and see which one fits you best.
Each of you could run it locally, or on your own dev server (or even on the same one with a different port...).
One possible way (there are probably better ways):
Each of you should have your own checked out version of the project.
Have a local copy of the server on your computer and test there throughout the day. Then at the end of each day (or whenever), merge together whatever you are ready to test, check it out onto the dev server, and test it there.
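A rough sketch of that end-of-day routine with Subversion (the host and paths are assumptions):
# Commit your day's work, then update the shared dev server's working copy.
svn commit -m "end-of-day: ready for integration testing"
ssh devserver "cd /var/www/devsite && svn update"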
Another tool you can use for builds is TeamCity, which is free for up to 20 build configurations (enough for most small companies/projects). This way you can run your tests as well as schedule builds.