Let's say you have a project running on Apache. I use Capistrano to deploy new code and update httpd.conf and other configuration files; I then reload all of my services (reloading the configs).
How is rollback managed? I wouldn't assume cap rollback would put the old configs in place and reload. Is this possible? Can you show me an example?
Is there a better way of managing configuration?
Capistrano comes with built-in recipes to manage Rails application rollbacks. They may work for your PHP/Apache deployment, but if they don't, you can easily write your own Cap recipes in Ruby. You'll have to try it out on a test server to see how it works.
I ended up making my own hooks into deploy_code and on_rollback that copied the Apache conf from the repository and reloaded Apache.
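For reference, a minimal sketch of that kind of recipe in Capistrano 2 (the paths, service commands, and the idea of keeping httpd.conf in the repository are assumptions, not a drop-in solution):

namespace :deploy do
  task :update_apache_config, :roles => :web do
    # Rollback callbacks only fire inside the deploy transaction.
    on_rollback do
      run "cp #{previous_release}/config/httpd.conf /etc/httpd/conf/httpd.conf"
      run "sudo /etc/init.d/httpd reload"
    end
    # Copy the config that ships with the new release and reload Apache.
    run "cp #{release_path}/config/httpd.conf /etc/httpd/conf/httpd.conf"
    run "sudo /etc/init.d/httpd reload"
  end
end

after "deploy:update_code", "deploy:update_apache_config"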
I have developed a symfony application and it's done.
It's been a couple of days and I can't figure out how to deploy it onto a real Apache server; when I copy it to public_html it doesn't work and the assets don't load properly.
Can someone give me a step-by-step description of how to deploy it, so that when I navigate to the example.com URL I see my symfony application.
Thank you
To deploy your application on a server, you have to configure Apache. The best way is to create a virtual host.
Here is the documentation:
http://httpd.apache.org/docs/current/en/vhosts/examples.html
http://symfony.com/doc/current/cookbook/configuration/web_server_configuration.html
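As a rough sketch, the virtual host points the document root at your project's web/ directory, something like this (the paths and domain are placeholders; the Symfony cookbook page above has the full version):

<VirtualHost *:80>
    ServerName example.com
    DocumentRoot /var/www/myproject/web
    <Directory /var/www/myproject/web>
        # Let symfony's .htaccess rewrite requests to the front controller.
        AllowOverride All
        Allow from All
    </Directory>
</VirtualHost>

The assets most likely break under public_html because the document root must be the web/ folder itself, not the project root.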
I would suggest using Capifony, which provides a specialized set of tools on top of Capistrano, tailored specifically to symfony and Symfony2 projects (according to the documentation).
The advantages of using Capifony are:
Deploying is as simple as running cap deploy from your project root directory.
It stores multiple releases.
It allows you to use your SCM to pull your application code down to the server.
You can configure it to run any batch command during the deployment.
It performs a transactional-like deployment process (if any step fails, the deployment is rolled back and the current directory points to your last release).
You'll also need to trawl through this part of the documentation to figure out how to get your application running under Apache.
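If you go that route, the basic workflow is roughly this (assuming the capifony gem; the paths are placeholders):

gem install capifony
cd /path/to/your/project
capifony .            # generates a Capfile and a deploy.rb to fill in
cap deploy:setup      # first run only: creates the directory structure on the server
cap deploy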
Currently using a LAMP stack for my web app. My dev and prod are in the same cloud instance. Now I am getting a new instance and would like to move the dev/test environment to the new instance, separating it from the prod environment.
It used to be a simple Phing script that would do an SVN export into the prod directory (pointed to by my vhost.conf). How do I make a good build process now with the environments separated?
I'm thinking of transferring the SVN repository to the dev server and then doing an ssh+svn push (is this possible with Phing?).
What's the best/common practice for this type of setup?
More Info:
I'm currently using CodeIgniter for MVC framework, Phing for automated builds for localhost deployment. The web app is also supported by a few CRON scripts written in Java.
Update:
Ended up using Phing + Jenkins. Working well so far!
We use Phing for deployments similar to what you have described. We also use the Symfony framework for our projects (not that important here, but Symfony supports the concept of different environments, so it's a plus).
However, we still need to produce different configuration files for the database, front controllers, etc.
So we ended up with a folder of build.properties files that define the configuration for different environments (and, in our case, also for the different clients we ship our product to). This folder is linked into the file structure using svn externals (again, not strictly necessary).
The Phing build.xml file then accepts a property file as a parameter on the command line, takes the values from it, and produces all the necessary configuration files, controllers, and other environment-specific files.
We store the configuration in template files and then use the copy/filter feature in Phing to replace the placeholders in the templates with the specific values.
The whole task of configuring the given environment can then be as simple as something like this:
phing configure-environment -DpropertyFile=./build_properties/build.properties.prod
In your build file you check whether the propertyFile property that specifies the properties file is defined, and load the file using <property file="${propertyFile}" override="true" />. Then you just do whatever magic with the values you need.
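As a rough sketch of such a target (the file names and @@TOKEN@@ syntax are our choices, not a drop-in build file):

<target name="configure-environment">
  <!-- propertyFile is passed on the command line via -DpropertyFile=... -->
  <property file="${propertyFile}" override="true"/>
  <!-- Replace @@TOKENS@@ in the template with values from the property file. -->
  <copy file="config/databases.yml.template" tofile="config/databases.yml" overwrite="true">
    <filterchain>
      <replacetokens begintoken="@@" endtoken="@@">
        <token key="DB_HOST" value="${db.host}"/>
        <token key="DB_NAME" value="${db.name}"/>
      </replacetokens>
    </filterchain>
  </copy>
</target>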
You can still use your svn checkout/update and put all the resulting configuration files into svn:ignore (you will have them generated by Phing).

We actually use additional steps in Phing that in the end produce a self-deploying Linux shell installation package. This is built automatically in Jenkins. We then send the package to our clients, or the support team can grab it from Jenkins, and they can do the whole deployment just by executing it (we still prefer manual deployments to production servers), or Jenkins can deploy it automatically (for example, to test servers).
I'll be happy to write more info if needed.
I recommend using Capistrano (it looks like they haven't updated the docs since they moved the site) and railsless-deploy for deployment. Eventually you are probably going to need to add more app boxes and run other tasks as part of your deployment, so choosing a framework that will support this can save you a lot of time in the future. I have used Capistrano for two PHP deployments (one small and one large) and although it's not perfect, it works well. It also handles all of the code checkout/update, moving symlinks into place, and rolling back if something goes wrong.
Once you have capistrano configured, all you have to do is something like:
cap dev deploy
cap prod deploy
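A minimal sketch of the configuration behind those two commands, assuming the capistrano-ext (multistage) and railsless-deploy gems; the application name, repository URL, and hostnames are invented:

# Capfile: railsless-deploy strips the Rails-specific tasks from cap deploy
require 'rubygems'
require 'railsless-deploy'
load 'config/deploy'

# config/deploy.rb
require 'capistrano/ext/multistage'
set :stages, %w(dev prod)
set :application, "myapp"
set :scm, :subversion
set :repository, "http://svn.example.com/myapp/trunk"

# config/deploy/dev.rb (prod.rb looks the same with other hosts)
server "dev.example.com", :app, :web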
Another option that I have explored for doing this is fabric. Although I haven't used it, if I had to deploy a complex app again, I would consider it. The interface is simple and straightforward.
A third option you might take a look at, though it's still in the early stages of development, is gantry (pardon the self-promotion). It's something I have been working on out of frustration with using Capistrano to deploy a PHP application in an environment with a lot of moving pieces. Capistrano is great and works well for non-PHP application deployments, but you still have to do some poking around in the code to understand what is happening and tweak it to suit your needs. This is also why I suggest giving fabric a good look.
I use a similar config now: LAMP + SVN + CodeIgniter + prd and dev servers.
I run the svn repository on dev. I check out the repository into the root folder of the dev domain, then use a post-commit hook to update the root folder every time any developer commits.
When we are happy and have fully tested the code I ssh into the prd server and rsync the dev root to the prd root.
Here's my solution for the different configs: outside the root folder I have a config.ini file, which I parse in my CodeIgniter constants.php script. This means that the prd and dev servers can have separate settings without those settings ever being in the repository.
If you want help with post-commit, rsync and ini code let me know.
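For anyone else reading, the moving parts look roughly like this (paths and hostnames are made up). The post-commit hook on the dev server is a short shell script:

#!/bin/sh
# hooks/post-commit: bring the dev working copy up to date after every commit
/usr/bin/svn update /var/www/dev.example.com/public_html >> /var/log/svn-update.log 2>&1

and the push to prd is a single command once ssh keys are in place:

rsync -avz --delete /var/www/dev.example.com/public_html/ user@prd.example.com:/var/www/public_html/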
I work a lot with the WindowsAzure4E(clipse) IDE, and it's always a pain to wait for the local test deployment.
Isn't there a way to develop against the deployed PHP files, which must be stored somewhere under inetpub or somewhere similar?
Thanks for your ideas.
Yes! In fact, I just got this working myself yesterday.
After installing PHP 5.3 with CGI support for IIS (making the necessary php.ini modifications of course), I simply created a new site in IIS that mapped to a role in the workspace for my Eclipse project.
Keep in mind that there's one hiccup: the php_azure.dll file, used to access the service configuration and mount Azure drives, was built to run in the Azure fabric (either development or hosted). In my case I don't NEED these features, so I removed references to things like getconfig and, poof, the project loads in IIS just fine. I only need to make sure I start Azure Storage before launching the application.
I've been told that some folks are able to update their system's PATH environment variable with the location of the Azure diagnostics dll (diagnostics.dll) and have it work without this modification, but this route didn't work for me. :(
I'll actually be blogging on this more this weekend as it took me a week of evenings to get things sorted out.
I found out that after the deployment the project files are copied to the ServiceDefinition.csx folder.
If you edit the source code there, you can see the changes directly, without another deployment.
Does anyone have a good Drupal upgrade strategy for an install that is in production? No one talks about this in books and it's hard to find a definitive answer in forums and email lists.
Ex:
Lock down prod, don't allow updates to data
copy prod
copy prod database to dev
turn off all modules in dev
upgrade core Drupal in dev (update db if necessary)
upgrade modules in dev (update db if necessary)
turn on modules
test
migrate code and db to prod
turn site back on
Your strategy sounds good, but it would require the site to be in “read only” mode for quite a while, which is not always feasible. Also, I am not quite sure why you would turn all of the modules off and on?
May I propose a slightly different approach:
copy prod database to dev
replicate prod code in dev
upgrade core Drupal in dev
run update.php
test
For each module:
  - upgrade the module in dev
  - run update.php
  - test
Put the site into maintenance mode
Backup the database
Migrate code to production
Run update.php
Put the site back online and test
This way there is a lot more testing but less downtime, and you will be able to work out which module breaks things if there is an error. It also doesn't rely on you uploading the DB from dev to live.
Don't think there's any need to turn off modules before running update.php anymore (except maybe between major versions). And I definitely wouldn't run update.php once per module - that doesn't make sense with the way update hooks work.
If you're at all comfortable with the command-line (and running on a Linux server) then definitely have a look at Drush. It can simplify the process and allow parts of it to be scripted.
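For example, with Drush (Drupal 7 era commands) the core of the process above collapses to something like:

drush vset maintenance_mode 1   # take the site offline
drush sql-dump > backup.sql     # snapshot the database first
drush pm-update                 # fetch and apply core/module updates
drush updatedb                  # run the update hooks (update.php)
drush cc all                    # clear caches
drush vset maintenance_mode 0   # back online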
In addition, if you're looking for a formal update process to move stuff from your dev server to production for a large site, you should also be getting up to speed on the hooks that run during install and update.
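Those update hooks are just numbered functions in a module's .install file that update.php runs in order, once each; a hypothetical Drupal 7 example:

// In mymodule.install: add a column to one of the module's tables.
function mymodule_update_7001() {
  db_add_field('mymodule_items', 'weight', array(
    'type' => 'int',
    'not null' => TRUE,
    'default' => 0,
  ));
}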
I have to deploy my php/html/css/etc code to multiple servers, and I am looking at my options for software that allows easy and secure deployment to multiple servers.
It would also help if it could be tied into my SVN.
Any suggestions?
Capistrano is pretty handy for that. There are a few people using it (1, 2, 3) for deploying PHP code, as evidenced by a quick search.
Setting up passwordless public-key authentication with ssh would allow you to scp your files to any of your servers very quickly (or have a shell script automate it).
Here's a simple tutorial: http://rcsg-gsir.imsb-dsgi.nrc-cnrc.gc.ca/documents/internet/node31.html
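The setup itself is short (the hostname and paths are placeholders):

ssh-keygen -t rsa                     # accept the defaults for a passwordless key
ssh-copy-id user@server.example.com   # append your public key to the server's authorized_keys
scp -r build/ user@server.example.com:/var/www/site/   # no more password prompts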
If you're running on Red Hat or Debian, consider packaging your code into RPMs or debs. Then build a yum or apt repository and put your packages there. You can then use your system's package management to do upgrades/rollbacks, etc. You can even use Puppet to automate the process.
If you want to tie it into Subversion, you can create a branch for each new version, and use commit hook scripts to build the RPMs when a new branch shows up in a directory.
I'll second Capistrano. It's incredibly powerful and flexible. Our current project uses Capistrano for deploying to different environments as well as to multiple servers. We pass two arguments to the cap command:
1) the name of the set of machine-specific config options to use, and
2) the name of the action to run.
It ends up looking like this:
cap -f deploy.rb live deploy
or
cap -f deploy.rb dev deploy
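A stripped-down deploy.rb for that pattern might look like this (the hostnames are invented; the real file also sets the repository, deploy path, and so on):

# Each environment task just declares the target machines; the shared
# deploy task then runs against whichever set was selected.
task :dev do
  role :web, "dev.example.com"
end

task :live do
  role :web, "www1.example.com", "www2.example.com"
end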
Of course the default use case - deploy to lots of machines at once - is a doddle with Capistrano AND you don't need to have Capistrano on the machines you are deploying to. All in all, tasty technology.
I've used Automated Build Studio before for a similar task. It gives you a lot of flexibility in what you can do.
I concur: set up your svn tree, and use rsync over ssh to copy the tree out to the remote locations. rsync will make it fast and efficient, copying only changes rather than full files.
You want to export your svn tree to some directory, then rsync from there to the remote host's directory tree.
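In practice that's two commands (the repository URL, paths, and host are invented):

svn export --force http://svn.example.com/myapp/trunk /tmp/myapp-export
rsync -avz -e ssh --delete /tmp/myapp-export/ user@web1.example.com:/var/www/myapp/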
I also forgot to mention that you can set rsync up to run over ssh, and since it only transfers the files that have changed, it saves time and bandwidth.
You can also use kwateeSDCM, which is free and allows remote installation via ssh. It also lets you manage server-specific configuration from a central location and makes upgrades seamless.
I had bookmarked a post on how to deploy your websites using Subversion: http://blog.lavablast.com/post/2008/02/I2c-for-one2c-welcome-our-new-revision-control-overlords!.aspx
I found Capistrano to be very easy to use once it's set up. The configuration file can be a bit confusing at first for more complicated environments, but it soon becomes worthwhile. I deploy to 14 servers in production, and I also use multiple environments for deployment to a staging server. One quirk: there's a bug in Ruby that breaks parallel deployment, but deploying serially isn't too bad with svn exports.
Capistrano setup is just too complicated. We found KwateeSDCM very straightforward to use, with a simple web interface and no scripting. We got our deployment configuration done in no time for our Dev and QA environments on Windows and Linux servers.