I have Jenkins set up to build and deploy my project using Phing.
Inside the Phing build.xml file, I create a set of targets for doing useful stuff, such as running tests, static code analysis and so on.
Phing also is responsible for deploying the application, creating a structure similar to this:
var/www
    current -> /var/www/develop/releases/20100512131539
    releases
        20100512131539
        20100509150741
        20100509145325
    features
    hotfixes
The thing is, Jenkins does a git clone of the project, and Phing does another clone inside the releases directory. As a result, I end up with two clones in the same build-and-deploy process.
My question is: should the responsibility of cloning the repository belong to Phing or to Jenkins?
I leave to Phing the same sort of tasks you mentioned: static code analysis, running tests, linters, etc.
The reason for this is that, as a dev, I might want to run all these tests (or a subset of them) regularly during development, and I'm able to trigger them in my local environment without needing Jenkins.
Deployment, on the other hand, I leave in Jenkins' charge, so I would let Jenkins continue doing it.
But if you want to have everything in Phing, I think that would be OK too. I would split the tasks into a group of dev tests to be run on the console and a Jenkins task to be run on deploy.
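On the command line, that split could look something like this (the target names here are hypothetical):

phing lint test        # run by a developer on the console
phing build deploy     # run by Jenkins as part of the deployment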
Definitely, only one of them should be doing the cloning.
Basically what I'm trying to do is create a simple multi-node environment with varnish + nginx + mariadb + memcached. So far I've managed to launch the environment and attach a git project to it. The problem is that we work with PHP and Symfony2, which require Composer to be run at least once in order to properly deploy the application.
Outside of Jelastic we use Jenkins + Ant (but we don't scale horizontally automatically on the projects where this setup is used, so it's not a problem to add nodes manually).
So the question is: How can I run composer or ant with build.xml on each deploy?
I see that Java environments have a build server option; is there something like this for PHP environments?
PHP projects do not have a "standard" build server in the way that many Java projects do - requirements for PHP build tools are more varied depending on the particular project.
For example one customer may ask for grunt, another for ant, and another for phing.
If you want to perform a sophisticated build, you can create your own build node for your PHP project using an Elastic VPS or separate Docker environment. To deploy the built project to your servers you can use SSH connections, or simply git push and set the runtime environment to auto-update (e.g. via ZDT feature) from that git repo / branch.
If your needs are simpler, you can install Composer directly onto your PHP runtime node in the normal way via SSH.
E.g.
$ curl -sS https://getcomposer.org/installer | php
There are more detailed tips about how to tidy that up (add to your PATH etc.) at http://kb.layershift.com/jelastic-install-composer
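The linked article covers the details, but the usual tidy-up is to move the phar somewhere on your PATH (assuming /usr/local/bin is on it):

$ mv composer.phar /usr/local/bin/composer
$ composer --version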
Is Behat compatible with Jenkins and Maven? Can it be easily integrated? If not, which BDD frameworks integrate well with Jenkins?
Thanks
The simplest way to get Jenkins running your Behat suite is to use a shell script. Provided you've got a PHP environment set up on your Jenkins host and the project dependencies installed through Composer, you can specify a simple command such as bin/behat. This will run your Behat feature suite, returning an error code if the suite fails. Jenkins should pick this up and fail your job too.
In my experience, having a specific profile in behat.yml for specifying formatters and parameters is useful. As a result, your command may look something like: bin/behat --profile ci
You can of course wrap these commands in something like Maven or Rake, and instead of running a shell script in your Jenkins job, invoke a rake/maven etc. task. I use Rake for my projects, as it's much easier to maintain your build once it starts doing more than just running Behat.
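For example, a minimal "Execute shell" build step for such a job might be nothing more than (the profile name is taken from above):

composer install --no-interaction
bin/behat --profile ci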
Currently using LAMP stack for my web app. My dev and prod are in the same cloud instance. Now I am getting a new instance and would like to move the dev/test environment to the new instance, separating it from the prod environment.
It used to be a simple Phing script that would do an SVN export into the prod directory (pointed to by my vhost.conf). How do I make a good build process now that the environments are separated?
I'm thinking of transferring the SVN repository to the dev server and then doing an ssh+svn push (is this possible with Phing?).
What's the best/common practice for this type of setup?
More Info:
I'm currently using CodeIgniter for MVC framework, Phing for automated builds for localhost deployment. The web app is also supported by a few CRON scripts written in Java.
Update:
Ended up using Phing + Jenkins. Working well so far!
We use Phing for doing deployments similar to what you have described. We also use the Symfony framework for our projects (which is not that important here, but Symfony supports the concept of different environments, so it's a plus).
However we still need to produce different configuration files for database, front controllers etc.
So we ended up having a folder of build.properties files that define the configuration for different environments (and in our case also for the different clients we ship our product to). This folder is linked into the file structure using svn:externals (again, not necessary).
The Phing build.xml file then accepts a property file as a parameter on the command line, takes the values from it, and produces all the necessary configuration files, controllers and other environment-specific files.
We store the configuration in template files and then use copy/filter feature in Phing to replace the placeholders in the templates with the specific values.
The whole task of configuring the given environment can then be as simple as something like this:
phing configure-environment -DpropertyFile=./build_properties/build.properties.prod
In your build file you check whether the propertyFile property that specifies the properties file is defined, and load the file using <property file="${propertyFile}" override="true" />. Then you just do any magic with the values as you need.
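A minimal, hypothetical build.properties.prod might contain nothing more than key/value pairs (the names here are invented for illustration):

db.host=db.example.com
db.user=myapp
webroot=/var/www/prod

The template files then carry matching ${db.host}-style placeholders for the copy/filter step to replace.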
You can still use your svn checkout/update and add all the generated configuration files to svn:ignore (you will have them generated by Phing). We actually use additional steps in Phing that in the end produce a self-deploying Linux shell installation package. This is produced automatically in Jenkins. We then send the package to our clients, or the support team can grab the package from Jenkins, and they can do the whole deployment just by executing it (we still prefer manual deployments to production servers), or Jenkins can deploy it automatically (for example to test servers).
I'll be happy to write more info if needed.
I recommend using Capistrano (looks like they haven't updated the docs since they moved the site) and railsless-deploy for doing deployment. Eventually you are probably going to need to add more app boxes and run other tasks as part of your deployment, so choosing a framework that will support this can save you a lot of time in the future. I have used Capistrano for two PHP deployments (one small and one large) and, although it's not perfect, it works well. It also handles all of the code checkout/update, moving symlinks into place, and rolling back if something goes wrong.
Once you have capistrano configured, all you have to do is something like:
cap dev deploy
cap prod deploy
Another option that I have explored for doing this is fabric. Although I haven't used it, if I had to deploy a complex app again, I would consider it. The interface is simple and straightforward.
A third option you might take a look at, though it's still in the early stages of development, is gantry (pardon the self-promotion). This is something I have been working on out of frustration with using Capistrano to deploy a PHP application in an environment with a lot of moving pieces. Capistrano is great and works well for non-PHP application deployments, but you still have to do some poking around in the code to understand what is happening and tweak it to suit your needs. This is also why I suggest giving fabric a good look.
I use a similar setup now: LAMP + SVN + CodeIgniter + prd and dev servers.
I run the SVN repos on dev. I check out the repos into the root folder of the dev domain, then use a post-commit hook to update the root folder every time any developer commits.
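A minimal post-commit hook for that could look like this (paths hypothetical; the working copy must be writable by the user the hook runs as):

#!/bin/sh
# hooks/post-commit on the SVN server: refresh the dev working copy
svn update /var/www/dev.example.com >> /var/log/svn-autoupdate.log 2>&1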
When we are happy and have fully tested the code, I SSH into the prd server and rsync the dev root to the prd root.
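Run on the prd server, that step could be as simple as (paths and hostname hypothetical):

rsync -avz --delete dev:/var/www/dev.example.com/ /var/www/example.com/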
Here's my solution for the different configs. Outside the root folder I have a config.ini file. I parse the file in my CodeIgniter constants.php script. This means that the prd and dev servers can have separate settings without them ever being in the repos.
If you want help with post-commit, rsync and ini code let me know.
I am in charge of launching web projects, and it currently takes a little too long from client sign-off to final launch. The projects live on a server to which I have root access, but it runs Plesk so that the boss can set up VirtualHosts, which means there are many sites running on it.
Each project has its own git repository so currently I have the following setup.
On my staging server there is a clone of the repo and I have two bare repositories. One is on the forge (powered by Indefero) and the other is on the live server.
Each release of a project is tagged with today's date, e.g. git tag -a deployed-2011-04-20.
So on the staging server I execute something similar to git push --tags live master, which targets the bare repo on the live server.
Then over SSH on the live server I execute a short bash script which basically clones the repository from the live bare repo to the folder Apache will serve.
So, if that all makes sense, can you recommend a tool that follows that workflow (or can be adapted to it) to make my life easier?
It looks something like this:
Forge (authoritative source)
^
|
v
Staging/development server
|
v
Live server bare repo
|
v
Releases folder (symlinked to htdocs)
One solution that comes to mind is to add some post-receive hook on the live server bare repo in order to detect any deployed-2011-xx-yy tag coming from the staging repo, and to trigger the ssh script from there.
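A sketch of such a hook (the deploy script path is hypothetical, standing in for the existing SSH script):

#!/bin/bash
# post-receive on the live bare repo: each pushed ref arrives on stdin as "oldrev newrev refname"
while read oldrev newrev refname; do
    case "$refname" in
        refs/tags/deployed-*)
            /usr/local/bin/make-live.sh "$refname"
            ;;
    esac
done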
The other solution is to have a scheduler (like the Hudson mentioned in pderaaij's answer) in order to:
monitor the staging repo and, on the right tag, trigger the push to the live server
monitor the live bare repo, and trigger the ssh script.
The second solution has the advantage of keeping a trace of all releases in a Hudson job report: each time the job detects the right tag, it executes the release process.
Take a look at Capistrano, which happily does the symlink dance you describe here.
If you use Hudson as a continuous integration server, you can make use of the build pipeline plugin.
You have your normal build process, but add an extra job which contains the commands to deploy your application. The plugin gives you a nice button to execute that build.
The Hudson job could execute all the needed commands, or you can take a peek at Maven for PHP and use the available plugins to invoke the remote scripts.
Perhaps it is a bit out of range considering the path you've chosen already, but it's worth the research.
We've got a couple of Drupal sites we develop for; we are a team of 4 developers and about 20+ non-technical content managers.
Each developer has his own dev environment, and we all share a beta environment where we test code integration and performance, a staging environment where the content managers test features before we push to the live environment, a training environment we use to train people, and an environment specific to usability testing.
All that is set up with only one bare repo on a central server, where each environment is a branch. We use a post-receive hook with passwordless SSH keys to auto-pull into the appropriate working copy, based on a case statement like the following:
# post-receive hook: each pushed ref arrives on stdin as "oldrev newrev refname"
while read line; do
    BRANCH=`echo $line | sed 's/.*\///g'`
    LOG="`date` - updating $BRANCH instance"
    case $BRANCH in
        "beta" )
            ssh www-data@beta "cd /var/www/beta.example.com; git pull"
            ;;
        # ...one case per environment branch (staging, training, etc.)
    esac
done
We have started using Hudson, and the current workflow is:
checkout locally > code > run tests > update > run tests > commit
Rather than polling, Hudson simply sits there until we instantiate a build. It then:
checkout locally > run Phing script
The Phing script then:
svn export latest revision > run tests (if successful) > generates reports etc.. > compresses export > scp to production server > .. do magic to make site live...
That all works fine and dandy; however, it doesn't really give us the ability to do any kind of "staging" QA, and every build builds the repo head revision. Ideally we would like Hudson to poll, or to use post-commit hooks, to build each commit and:
checkout locally > run Phing task to run tests and if successful, generates reports etc..
Then we'd like to be able to manually instantiate an automated deployment (via a Phing task) to either the staging QA environment or production, from within each specific build. Not every commit will be deployed to QA.
Is this workflow even possible from within Hudson, or are we going to need to run our deployment Phing tasks manually afterwards?
I separated the build/test job (job1) and the deploy job (job2). Job1 runs on trunk after every commit (Hudson polls, but a post-commit hook would work as well). It also archives the build artifacts. Job2 is started manually. It gets the build number from job1 as a build parameter (I like the run parameter) and downloads the artifacts from job1 into its own workspace. It then runs the deployment. In your case I would add another parameter (a choice parameter) to determine which environment you want to deploy to.
With the description setter plugin, you can print out the run number from job1 and the environment, and you can then easily see in the job history which build was deployed when and to what environment.
I ended up doing something similar to Peter Schuetze's suggestion, except that I used only one job. I use 3 build parameters: deploy (bool), environment (choice) and revision (text). I then altered my Phing scripts to only do deployments if the deploy parameter is true, in which case they deploy the specified revision to the specified environment. By default, deploy is false, revision is head and environment is staging. Now when Hudson polls svn, it sees that the deploy parameter is false and bypasses the deployment tasks.
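With that setup, a polled build and a manual deployment differ only in the parameters passed to Phing, along the lines of (the target name here is hypothetical):

phing build                                                        # polled build: deploy=false, tests only
phing build -Ddeploy=true -Denvironment=staging -Drevision=1234   # manual deployment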
I'm not entirely clear on what you want to achieve, but I'm wondering whether you are using the Phing plugin? Perhaps what you want is not currently possible through Hudson and may require changing your development process to make it possible.