I have a website, say www.livesite.com, which is currently running. I have been developing a new version of the website on my local machine under http://localhost, then committing my changes with SVN to www.testsite.com, where I test the site on the livesite.com server but under another domain (it's the same environment as the live site, just a different domain).
Now I am ready to release the new version to livesite.com. Doing it the first time is easy: I could just copy everything from testsite.com over to livesite.com (though I'm not sure that's the best way to do it).
I want to keep testsite.com as a testing site where I push updates, test them, and once satisfied move them to livesite.com, but I am not sure how to do that after the new site is launched. I don't think copying the whole directory across is the right way to do it, and it would disrupt users currently active on livesite.com.
I also want to keep my SVN history on testsite.com. What is the correct way of doing this with SVN? Thank you so much!
Other answers mentioning Hudson or Weploy are good. They cover more issues than what follows. That said, the following may be sufficient.
If you feel that's overkill, here's the poor-man's way of doing it with SVN and a little creative sysadminning.
Make your production document root a symlink, not an actual directory, so that you have something like this:
/var/www/myproject-1-0-0
/var/www/myproject-1-1-0
/var/www/myproject-1-1-1
/var/www/html -> myproject-1-1-1
This means you can check out code onto production (say, myproject-1-1-2) without overwriting stuff being served. Then you can switch codebases near-instantly by doing something like:
$ rm html && ln -s myproject-1-1-2 html
I'd further recommend not doing an svn checkout/export of trunk on the production box. Instead, create a branch ahead of time (named something like myproject-X-Y-Z). That way, if you need to do some very stressful tweaking of production code, you can commit it back to the branch and merge it into trunk once the fire is extinguished.
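As a rough sketch of that flow (the repository URL and version number here are assumptions, not from the original setup):

$ svn copy http://svn.example.com/myproject/trunk \
      http://svn.example.com/myproject/branches/myproject-1-1-2 \
      -m "Create release branch 1.1.2"

# then, on the production box, check the branch out alongside the live code:
$ svn checkout http://svn.example.com/myproject/branches/myproject-1-1-2 \
      /var/www/myproject-1-1-2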
I do this a lot, and it works quite well. However, it has some major drawbacks:
Mainly, you have to handle database migrations, or other upgrade scripts, all by yourself. If you have scripts (plain old SQL, or something more involved), you need to think about how best to execute them. Downtime of hopefully just a minute might not be a bad idea. You could keep a "maintenance site" around (/var/www/maintenance), and point the symlink there for a few moments if you need to.
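A small sketch of that maintenance flip, assuming GNU ln (the -n flag makes it replace the link itself rather than creating a link inside the target directory):

$ ln -sfn maintenance /var/www/html       # show the maintenance site
$ # ... run the migrations / upgrade scripts ...
$ ln -sfn myproject-1-1-2 /var/www/html   # switch to the new release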
This method is not nearly as cool as Weploy, for example, but for relatively small projects (running on a single server, with not-huge databases), it's often good enough, and dead simple.
My answer will complicate things a little bit, but here goes:
For this type of scenario I would use Hudson.
Hudson will let you set up an automated deploy process: clean out the current directory and pull the new code from SVN. You can then spend your time on development and less on juggling files and deploying from one place to another.
The caveat is that you need to learn a little bit about how to set up Hudson and how to make it work for you.
How to get started with PHP for Hudson
I think that should get you on the right track. A bit of work, like I said, but it pays off later on.
If only the server-side code changes, you may be able to simply copy the code across and things will be okay, but even then you have to think about the possibility of people in mid-interaction. If the client-side code changes, especially if you are heavily using Ajax, you will have to get current users to reload their pages. If the database also changes, then you have to ensure that no database transactions happen while you are applying the database change scripts.
In all cases, and irrespective of whether you are using any continuous integration tool, I believe it is safest to go for downtime to apply these changes. One of the reasons why people have the "beta" sticker on their sites is so that they can log everyone off and shut them all out to apply changes without notice. As long as they don't do it very frequently they can get away with it too. Once you are out of beta, applying changes becomes a ceremony where you start announcing downtime weeks in advance, then get a window of 30 minutes to a few hours to apply all changes.
For underlying things like patching security flaws in the OS or system software, adding hardware, etc., downtime can be avoided if there is load balancing and the patches are applied one server at a time.
I'm working at a web agency with a small team (5 developers, 2 designers). We primarily work with PHP/MySQL web applications, including Magento, ExpressionEngine and CakePHP. We use a combination of Windows 7 PCs (developers) and Macs running OS X (designers).
I've been looking into using GitHub for our projects, with 3 main goals:
To see who has edited files, and to allow people to comment on files.
To avoid overwriting each other's work, since multiple people sometimes try to work on the same file at the same time.
To allow rollbacks to previous versions of a file.
This is our current workflow, and I don't understand how GitHub fits into it at all. I realise that our workflow will need changing, but I haven't been able to find a process which seems to fit it in any way:
All our work is done on a remote web server which is specifically for development (nothing "live" sits on here). The server is running Apache, PHP, MySQL, etc. Nobody has a local development environment set up on their machine and we don't want that if possible.
We all have FTP access to the development server mentioned above. We generally edit files directly on this dev server as it gives a very quick way to test things out (literally edit a file, upload it and run it in the browser). There are problems with conflicts, e.g. multiple people trying to edit the same file, which is why I'm looking into using git.
When everything has been approved on the development server it is made live by copying it to a different server. The live server can be anywhere - we use some servers we manage ourselves, sometimes we use third party hosting companies - it varies.
I've been looking into this for the last couple of days and all the approaches I'm finding seem impossible for us to use. Does anyone have any insight into the best way to achieve this? Or am I looking into something which isn't even applicable for the issues I'm trying to resolve?
I would appreciate any useful advice people can offer.
Thanks.
We have a very similar setup at the company where I work. We have separate sandboxes on the dev server; in other words, we clone the repo into a different sandbox for each developer/designer. For example, if there are 3 developers, there will be 3 sandbox directories + 1 staging directory.
So developer john gets /home/john/example.com, viewed at john.example.com (by setting up vhosts),
mike gets /home/mike/example.com viewed at mike.example.com
tracy gets /home/tracy/example.com viewed at tracy.example.com
And there is one additional staging directory: /home/staging/example.com, viewed at staging.example.com.
Staging merges all the changes together so they can be tested. All of these directories are accessible only from internal IPs.
We deploy the changes to production using rsync. More information about rsync here: http://www.cyberciti.biz/tips/linux-use-rsync-transfer-mirror-files-directories.html
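As a loose sketch (the paths, hostname, and user are assumptions), that rsync push might look like:

$ rsync -avz --delete --exclude '.git' \
      /home/staging/example.com/ \
      deploy@www.example.com:/var/www/example.com/
# -a preserves permissions and timestamps; --delete removes files
# that no longer exist in staging, so try it with --dry-run first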
You could create a Git repository on your testing machine and have everyone on your team use Git to push their changes to that repository. This way they will get notified when their changes conflict with those of other people.
A typical workflow could look like this (a command-level sketch follows the list):
Developer1 changes something on his machine.
Designer1 changes some files on his machine.
Designer1 commits those changes to his local Git repository.
Developer1 commits those changes to his local Git repository.
Developer1 pushes his changes to the development machine.
Designer1 pushes his changes to the development machine.
In case of conflicts with the changes of Developer1, he will now be prompted about what those conflicts are and will have to resolve them.
Then the resolved changes will be pushed to the development machine.
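Here is a minimal command-level sketch of that setup (the repository path and host name are assumptions):

# on the testing machine, a bare repository everyone pushes to:
$ git init --bare /srv/git/example.git

# on each developer's or designer's machine:
$ git clone ssh://devserver/srv/git/example.git
$ cd example
$ git add -A                       # stage the edited files
$ git commit -m "Describe the change"
$ git push origin master           # rejected on conflicts: pull, resolve, push again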
This should fix your problems 1.) and 3.), and will make 2.) an explicit action, which means that your developers and designers will see what they are overwriting. If changes happen in different parts of a file at the same time, Git may be able to keep both changes without needing further interaction.
But beware that this still has the problem that nobody gets to test his own changes without interference from other people, since they may change things at any time while someone is trying to test something. With only 1 development machine you cannot prevent this with Git alone. As your team is rather small and your current approach doesn't fix this either, it might be of little significance to you.
I've built a CMS (using the Codeigniter PHP framework) that we use for all our clients. I'm constantly tweaking it, and it gets hard to keep track of which clients have which version. We really want everyone to always have the latest version.
I've written it in a way such that updates and upgrades generally only involve uploading the new version via FTP and deleting the old one - I just don't touch the /uploads or /themes directories (everything specific to the site is either there or in the database). Everything is a module, and each module has its own version number (as does the core CMS), as well as an install and uninstall script for each version, but I have to manually FTP the files first, then run the module's install script from the control panel. I wrote and will continue to write everything personally, so I have complete control over the code.
What I'd like is to be able to upgrade the core CMS and individual modules from the control panel of the CMS itself. This is a "CMS for Dummies", so asking people to FTP or do anything remotely technical is out of the question. I'm envisioning something like a message popping up on login, or in the list of installed modules, like "New version available".
I'm confident that I can sort out most of the technical details once I get this going, but I'm not sure which direction to take. I can think of ways to attempt this with cURL (to authenticate and pull source files from somewhere on our server) and PHP's native filesystem functions like unlink(), file_put_contents(), etc. to perform the actual updates to files, or stuff the "old" CMS into a backup directory and set up the new one, but even as I'm writing this post it sounds like a recipe for disaster.
I don't use Git/GitHub or anything like that, but I have the feeling something of the sort could help. How should (or shouldn't) I approach this?
There are a bunch of ways to do this, but the least complicated is just to have Git installed on your client servers and set up a cron job that runs a git pull origin master every now and then. If your application uses migrations, it should be easy as hell to do.
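A minimal sketch of such a cron job (the install path and schedule are assumptions):

# crontab on each client server: pull every 30 minutes
*/30 * * * * cd /var/www/cms && git pull origin master >> /var/log/cms-update.log 2>&1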
You can do this since it sounds like you are in full control of your clients. For something like PyroCMS or PancakeApp that doesn't work, because anyone can have it on any server, so we have to be a little smarter. We just download a ZIP which contains all changed files and a list of deleted files, which means the file system is updated cleanly.
We have a list of installations which we can ping with an HTTP request so the system knows to run the download, or the client can hit "Upgrade" when they log in.
You can use Git from your CMS via Glip. The cron job would then be a URL on your own system, without installing Git.
@Obsidian Wouldn't a DNS poisoning attack also compromise most of the methods mentioned in this thread?
Additionally, SSH could be compromised by a man-in-the-middle attack as well.
While total paranoia is a good thing when dealing with security, WordPress being a GPL codebase would make it easy to detect an unauthorized change in your code if such an attack did occur, so resolution would be easy.
SSH and Git do sound like a good solution, but what is the intended audience?
Have you taken a look at how WordPress does it?
That would seem to do what you want.
Check this page for a description of how it works.
http://tech.ipstenu.org/2011/how-the-wordpress-upgrade-works/
I've just gotten my production site up and running. I have a lot more work to do, and I'm realizing the need now for a development server before pushing changes live onto the production site (with users) - obviously...
This thread (and a lot more on Stack) describe me:
Best/Better/Optimal way to setup a Staging/Development server
Anyhow... reading these threads is outright confusing at times, with all of the terminology thrown around, and given my limited CentOS/Apache knowledge.
My goal is to:
Make some changes to files as needed.
Test the changes on the same server, to ensure settings are identical.
If all is OK, I can now save a version of this somewhere; perhaps locally is enough for now (Bazaar seems like a possibility?).
Finally, replace all of the changed files via SSH or SFTP or something...
My worries are:
Uploading changes while users are in the system.
How to upload the files that have changed, only.
I'd love somebody to either link to a great guide for what I'm thinking about (one that leaves nothing to the imagination, I'd hope) or offer some kind of suggestion... I'm running in circles trying out different SVN setups and programs to manage them, etc...
I'm the only one developing, and just want a repeatable, trust-worthy solution that can work for me without making my life too miserable trying to get it set up (and keep it set up).
Thanks much.
If you have the ability to create a staging subdomain on the production server, here is how I would (and do) handle it:
Develop on your development machine, storing your code in a VCS. I use Subversion, but you may find another you prefer. After making changes, you check in your code.
On your production server, you create a subdomain in an Apache VirtualHost which is identical to, but isolated from, your production VirtualHost. Check out your code from the VCS to the staging subdomain area. After making changes, you then run an update from your VCS, which pulls down only the changed files. Staging and production can share the same data set, or you may have a separate database for each.
The reason for using a subdomain instead of just a different directory is that it gives staging its own DocumentRoot with the same internal layout as production, so DocumentRoot-relative paths behave identically in both. It also makes it easy to tell where you are if you use something like staging.example.com.
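A bare-bones sketch of the two VirtualHosts (server names and paths are assumptions):

<VirtualHost *:80>
    ServerName www.example.com
    DocumentRoot /var/www/example.com/production
</VirtualHost>

<VirtualHost *:80>
    ServerName staging.example.com
    DocumentRoot /var/www/example.com/staging
</VirtualHost>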
When you're certain everything works as it's supposed to you can run a VCS update on the production side to bring the code up to date.
It is important to be sure that you have instructed Apache to forbid access to the VCS metadata directories (.svn, .git, whatever).
Addendum
To forbid access to .svn directories use a rewrite rule like:
RewriteEngine on
RewriteRule .*\.svn/.* - [F]
This will send a 403 for those paths. You could also redirect them to the homepage to make it less obvious that they're even present.
In terms of worry #1, remember that even StackOverflow periodically goes down for maintenance when people are using it. Just provide a good mechanism for you to switch the site to maintenance mode (and back out) and you'll be fine.
Thank you everyone for the tips/hints/etc...
I've finally found the perfect solution for my needs, SpringLoops... http://www.springloops.com/v2/
It manages my SVN repository (which I use TortoiseSVN with) - and will actually deploy the changes onto my two sites: staging and production.
I highly recommend it, it's working wonderfully so far.
Thanks!
You need a version control system, like Subversion, Git or Mercurial.
When you change files, you commit those changes to a repository. This keeps track of all the changes you've made and lets you undo them if it turns out you did something stupid. It also lets you restore files you accidentally delete or lose.
On top of that, it makes deploying updates as simple as typing 'git pull' or 'svn update' on the production server. Only the files that changed get transferred and put in place.
You will get this advice no matter how many times the question is re-asked. You need version control.
I am developing (as a solo web developer) a rather large web-based system which needs to run at various different locations. Unfortunately, because some clients are on dialup, we have had to do this rather than have a central server for them all. Each client is part of our VPN, and those on dialup/ISDN get dialed on demand from our Cisco router. All clients are accessible within a matter of seconds.
I was wondering what the best way would be to release an update to all these clients at once. Automation would be great, as there are 23+ locations to deploy the system to, each of which is used very regularly. Because of this, when deploying, I need to display an 'updating' page so that clients don't try to access the system while the update is partially complete.
Any thoughts on what would be the best solution?
EDIT: Found FileSyncTask which allows me to rsync with Phing. Going to use that.
There's also a case here for maintaining a "master" code repository (in SVN, CVS or maybe Git). This isn't your standard "keep editions of your code in the repo and allow rollbacks"... this repo holds your current production code (only). Once an update is ready, you check the working updated code into the master repo. All of your servers check the repo on a scheduled basis to see if it has changed, downloading new code if a change is found. That check process could even include turning on the maintenance.php file (that symcbean suggested) before starting the repo download and removing the file once the download is complete.
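A rough sketch of that scheduled check, run from cron on each server (the paths and repository URL are assumptions; svn info --show-item needs Subversion 1.9+):

#!/bin/sh
# deploy only if the repository has moved past our working copy
LOCAL=$(svn info --show-item revision /var/www/site)
REMOTE=$(svn info --show-item revision http://svn.example.com/site/trunk)
if [ "$LOCAL" != "$REMOTE" ]; then
    touch /var/www/site/maintenance.php    # turn the maintenance page on
    svn update /var/www/site
    rm -f /var/www/site/maintenance.php    # and back off again
fi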
At the company I work for, we work with huge web-based systems in both Java and PHP. For all systems we have development environments and production environments.
This company has over 200 developers, so I guess you can imagine the size of the products we develop.
What we have done is use Ant and RPM archives for creating deployment packages. This is done quite easily. I haven't done this myself, but it might be worth looking into.
Because we use Linux systems we can easily deploy RPM packages, and the setup scripts within an RPM package can make sure everything gets to the correct place. You also get more proper version handling and a release process.
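I haven't seen their scripts, but the general shape of an Ant + RPM pipeline might be (the build target, spec file, and package name are all assumptions):

$ ant dist                           # build the application artifacts
$ rpmbuild -bb webapp.spec           # package them as an RPM
$ rpm -Uvh webapp-1.2-1.noarch.rpm   # upgrade in place; %post scripts handle setup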
Hope this helped you.
Br,
Paul
There are 2 parts to this; let's deal with the simple one first:
I need to display a 'updating' page
If you need to disable the entire site while maintaining transactional integrity, and publish a message to users from the server being updated, then the only practical way to do this is via an auto-prepend. This needs to be configured in advance (note - I believe this can be done using a .htaccess file without having to restart the webserver for a new PHP config):
<?php
// Auto-prepended before every request: if the flag file exists,
// show the maintenance page instead of the requested script.
if (file_exists($_SERVER['DOCUMENT_ROOT'] . '/maintenance.php')) {
    include_once($_SERVER['DOCUMENT_ROOT'] . '/maintenance.php');
    exit;
}
Then just drop maintenance.php into your webroot and that file will be displayed instead of the expected one. Note that it should probably include a session_start() and an auto-refresh to ensure the session does not expire. You might want to extend the above to allow a grace period where POSTs will still be processed, e.g. by adding a second PHP file.
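For reference, the .htaccess wiring might look like this (assuming mod_php; the path is an assumption):

# .htaccess: run the maintenance check before every PHP script
php_value auto_prepend_file /var/www/prepend/maintenance-check.php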
In terms of deploying to remote sites, I'd recommend using rsync over SSH for copying content files, invoked via a controlling script (sketched after the list) which:
Applies the lock file(s) as shown above
Runs rsync to replicate files
Runs any database deployment script
Removes the lock file(s)
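A bare-bones sketch of such a controlling script (the hostnames, paths, and database script are assumptions):

#!/bin/sh
set -e
for HOST in site1.example.net site2.example.net; do
    ssh deploy@$HOST 'touch /var/www/html/maintenance.php'        # apply the lock
    rsync -az --delete --exclude 'maintenance.php' \
          ./build/ deploy@$HOST:/var/www/html/                    # replicate files
    ssh deploy@$HOST 'php /var/www/html/scripts/db-upgrade.php'   # database deployment
    ssh deploy@$HOST 'rm -f /var/www/html/maintenance.php'        # remove the lock
done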
If each site has a different setup, then I'd recommend either managing the site-specific stuff via a hierarchy of include paths, or even maintaining a complete image of each site locally.
C.
At my company we have a group of 8 web developers for our business web site (entirely written in PHP, but that shouldn't matter). Everyone in the group works on different projects at the same time, and whenever they're done with a task, they immediately deploy it (because business moves fast these days).
Currently, development happens on one shared server with all developers working on the same code base (using RCS to "lock" files away from others). When deployment is due, the changed files are copied over to a "staging" server, and then a sync script uploads the files to our main webserver, from where they are distributed to the other 9 servers.
Quite happily, the web dev team asked us for help in order to improve the process (after we had complained for a while), and our idea for setting up their dev environment is now as follows:
A dev server with virtual directories, so that everybody has their own codebase,
SVN (or any other VCS) to keep track of changes
a central server for testing, holding the latest checked-in code
The question now is: how do we manage to deploy the changed files to the server without accidentally uploading bugs from other projects? My first idea was to simply export the latest revision from the repository, but that would not give full control over the files.
How do you manage such a situation? What kind of deployment scripts do you have in action?
(As a special challenge: the website has organically grown over the last 10 years, so the projects are not split up in small chunks, but files for one specific feature are spread all over the directory tree.)
Cassy - you obviously have a long way to go before you'll get your source code management entirely in order, but it sounds like you are on your way!
Having individual sandboxes will definitely help. Next, make sure that the website is ALWAYS just a clean checkout of a particular revision, tag or branch from Subversion.
We use Git, but we have a similar setup. We tag a particular version with a version number (in Git we also get to add a description to the tag - good for release notes!), and then we have a script that anyone with access to "do a release" can run. It takes two parameters: which system is going to be updated (the datacenter, and whether we're updating the test or the production server), and the version number (the tag).
The script then uses sudo to run the release script under a shared account. It does a checkout of the relevant version, minimizes JavaScript and CSS¹, pushes the code to the relevant servers for the environment, and then restarts whatever needs to be restarted. The last line of the release script connects to one of the webservers and tails the error log.
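A thin sketch of what such a release script might look like (the repository URL, account name, and hostnames are assumptions):

#!/bin/sh
# usage: release <test|production> <tag>
ENV=$1; TAG=$2
sudo -u release git clone --branch "$TAG" --depth 1 \
    git@git.example.com:site.git "/tmp/release-$TAG"
# ... minify JS/CSS, push to the $ENV servers, restart services ...
ssh "web1.$ENV.example.com" tail -f /var/log/httpd/error_log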
On our websites we include an HTML comment at the bottom of each page with the current server name and the version - it makes it easy to see "what's running right now?"
¹ ...and a bunch of other housekeeping tasks like that.
You should consider using branching and merging for individual projects (on the same codebase), if they make huge changes to the shared codebase.
We usually have a local dev environment (meaning a webserver running locally) for testing the uncommitted code (you don't want to commit non-functioning code at all), but that dev environment could even be on a separate server using shared folders.
However, committed code should be deployed to a staging server for testing before putting it in production.
You can probably use Capistrano; even though it's more for Ruby, there are some articles that describe how to use it for PHP.
I think Phing can be used with CVS but not with SVN (at least that's what I last read).
There are also some projects around that mimic Capistrano but are written in PHP.
Otherwise, there is also a custom-made solution (sketched below):
Tag the files you want to deploy.
Check out the files using the tag into a specific directory.
Symlink that directory to the current document root (easy to roll back to the previous version).
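A loose sketch of that flow (the repository URL and paths are assumptions):

$ svn copy http://svn.example.com/site/trunk \
      http://svn.example.com/site/tags/release-42 -m "Tag release 42"
$ svn checkout http://svn.example.com/site/tags/release-42 \
      /var/www/releases/release-42
$ ln -sfn /var/www/releases/release-42 /var/www/current  # rollback = repoint the link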
Naturally check out SVN for the repository, Trac to track things, and Apache Ant to deploy.
The basic process is managing the code in Subversion, tracking the repository and developers in Trac, and using Ant deployment scripts to push your site out with the settings needed. Ant allows you to easily deploy a project to a specific location (dev/test/prod, etc.).
You need to look at:
Continuous Integration
Running unit tests on check-in of code to check it is bug free
Potentially rejecting code if it contains a bug
Having nightly builds
Releasing only the last build that was bug free
You may not get to a perfect solution, especially not at first, but the more you use your chosen solution, the more comfortable everyone will get and be able to make suggestions on improving it.
We check the stability with Ant every night, and use an Ant script to deploy. It is very easy to configure and use.
I gave a similar answer yesterday to another question. Basically you can work in branches and integrate before going live.
The biggest thing you will have to get your head round is that you are dealing with changes to files, rather than individual files. Once you have branches there isn't really a current version; there are just versions with different changes in them.