In my past projects I have failed to invest time in setting up my workspace correctly.
For backups and version control I simply copy my web directory files to a separate folder on my hard disk. If I find I have made a mistake somewhere, I reload a previous backup and start again from that point, often wasting precious time repeating work I have already done.
My IDE has no FTP functionality, so I have to manually copy the files from my desktop to my web server, constantly overwriting and duplicating files.
I am certain there is a better, more efficient way of doing the above. I have read about Git for version control and know I should be using it.
What is the suggested way to work efficiently (OS is Windows) with an IDE, version control and FTP that will save me sweat, tears and data loss?
EDIT: I am currently using the NetBeans IDE.
Version control - when used properly - is much better than simply copying files around. In a single-user environment, version control gives you fine-grained control over your versioning, often in a way that is more space-efficient than full file copies because the old versions are stored as diffs.
In a multi-developer environment, version control provides the same benefits, but you also have to consider the case where multiple people edit the same file at the same time. In the simplest case, two people edit the same file in different places and you can safely take both modifications in sequence. In more complex cases, two or more developers change the same region of code, and the result needs to be merged manually.
Git is different from traditional revision control systems in that it was designed to be used in a distributed fashion. That is, each developer has their own repository, and merges happen when they need to happen. You can have an authoritative central server if you want one, but you don't need to push every commit to that server. This makes Git particularly suitable for individual or remote development. Git doesn't require a heavy server on your desktop, just one small binary.
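To make that concrete, here is a minimal sketch of the day-to-day cycle; the remote URL is only an example:

    git init                              # turn the current directory into a repository
    git add .                             # stage your files
    git commit -m "Describe the change"   # record a version locally; no server involved
    git remote add origin git@example.com:myproject.git   # optional central server
    git push origin master                # publish when you choose to, not on every commit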
There are a lot of tutorials out there on git. Some of them are:
Getting into SCM with git
GitHub's Introduction to Git
A Visual Git Reference
Git Tutorial
Here's how my setup works - this may or may not be feasible for you, but I hope it helps somehow. I no longer use FTP for anything.
You should get a DVCS set up, and which one you choose is entirely up to you. Any of them will be better than manually copying, or than having nothing at all. I suggest taking a look at both Git and Mercurial and making a decision from there. In my opinion, if you're using Windows primarily, Mercurial might be a better choice. If not, I'd say go for Git. You could always try both!
I set up a gitolite server which acts as a central repository for all of my Git projects. A remote central repository is great to have, because your entire codebase is backed up in case your workstation fails - and on top of that, you can use it to coordinate code and move your files around (and stop using FTP).
Once that is set up, I start the process of pulling and pushing to it. You mention your IDE here, and there are a lot of Git IDE options, but I just use the command line - I simply find it faster. Again, it's up to you how to incorporate that.
In terms of web development, I set my gitolite server up to use Git hooks to propagate changes to my servers. They all have the Git client installed, so the codebase usually lives in the webroot. When a change is pushed from my workstation to the gitolite server, it fires off some commands that automatically update the production server. Not only is this convenient, but it also puts a copy of the codebase and its history on your servers. Be careful with this, though; you need to make sure you aren't serving your .git directory to the public.
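A minimal sketch of such a hook, assuming the central server can SSH to the web server (host names and paths here are hypothetical, not the actual setup):

    #!/bin/sh
    # post-receive hook on the central repository:
    # update the production checkout whenever a push arrives
    ssh deploy@www.example.com 'cd /var/www/site && git pull origin master'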
The basic idea is to improve your development ecosystem. My Git setup is perfect for that. You might need to assess your entire workflow and make adjustments based on your needs.
Here's the git plugin for NetBeans. I suggest using the command line when you get started, though.
Related
Our current deployment process moves only the differences between the current SVN revision and the SVN revision of the last deployment, and it works flawlessly in my project.
Other projects complain about this method and want to deploy everything: move every file from the development environment to the other environments, such as testing, staging, or live.
The Java team lead and the PHP team lead both agree with this. I am a PHP developer and find this approach inefficient and pointless. We shouldn't have to spend that much time and bandwidth copying everything whenever we decide to deploy to live.
When we deploy using SVN differences, the server admins save a compressed file containing all of the modified files related to the current deployment, so it's easy to revert when we want to.
I just want some good reasons to present to the manager of the company, who is technically aware of the problems with the deployment process. I want him to understand that when something gets messed up after it's deployed, it's because the developers didn't do it right, not because we have to deploy everything in order for things to work; and I want to convince him that deploying with SVN differences is far better than deploying everything (primitive copy/paste) without relying on SVN at all.
(I had to use an answer because there was not enough space in the comments.)
Interesting question.
I guess that Java developers (as I am) are just used to deploying the whole application each time (and the same probably goes for any language that doesn't run from source, as PHP does).
In a former company where I was employed, that was the way to release an update, and since the application WAR was more than one hundred megabytes, the whole process always took a couple of hours, even when just a couple of classes had changed.
In the company where I'm employed right now, by contrast, they put together a system that works with differences, similar to what you described (although Java class files have to be replaced whole, of course).
I think that's a far better approach: much easier and more lightweight to cope with.
Since PHP relies on source files even at runtime, I think that a difference-based approach like what you already have is better. So +1 for your current approach.
So, I think that faster deployment, easier backup, and the other things you mention in your question are good enough reasons for keeping the current approach.
Of course, it is important that a fully functional version can be produced and deployed from SVN at any time and that it can replace the corresponding delta-based version on the server without any fault (but I'm sure you already have that).
About the people that have opinions against yours: ask them to prove (with real world examples) where your approach is faulty.
(Maybe this would find a better fit on programmers.stackexchange.com?)
A sketch of our deployment script follows; a shell version of it is shown after the list. (We use Git; using Subversion doesn't change the algorithm at all, only the actual commands.) We use a working copy (a local repository with Git) and another directory (named export) where the next version of the live code is prepared (a kind of staging area, if you prefer):
update the local copy of the code (it's git pull for Git or svn update for Subversion);
clean up the export directory, then copy the code into it; we use rsync instead of cp because it's easier to give it a list of directories and files it should ignore (.git, .svn, and so on);
apply any needed configuration settings to the files in the export directory; e.g., we don't keep sensitive data (users, passwords) in the code stored in the repository, only placeholder values, and this step replaces the placeholders with the actual users, passwords, keys, and so on;
do any other needed fixups; e.g., we use symlinks that point to directories containing data uploaded by the users. In the code repository we have empty directories for them; in the fixup phase these directories are removed from export and symlinks with the same names are created, pointing to permanent directories outside the web root where the data is stored. We also use symlinks to third-party libraries: they are not stored in the repo and their deployment follows a different pattern (they are usually frozen at the version they had when the project started, to avoid incompatibilities);
use rsync with the appropriate parameters (--archive and others) to make the live version of the code identical to the version just prepared in the local export directory.
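Putting those steps together, a minimal shell sketch might look like this; every path, placeholder name, and credential below is hypothetical:

    #!/bin/sh
    set -e

    WC=/home/deploy/working-copy    # local repository
    EXPORT=/home/deploy/export      # where the next live version is prepared

    # 1. update the local copy of the code
    cd "$WC" && git pull            # or: svn update

    # 2. clean the export directory, then copy the code, skipping VCS metadata
    rsync -a --delete --exclude='.git' --exclude='.svn' "$WC/" "$EXPORT/"

    # 3. replace placeholder values with the real configuration
    sed -i 's/@DB_PASSWORD@/real-password/' "$EXPORT/config.php"

    # 4. fixups: swap empty upload directories for symlinks to permanent storage
    rmdir "$EXPORT/uploads" && ln -s /srv/data/uploads "$EXPORT/uploads"

    # 5. make the live code identical to the prepared export
    rsync --archive --delete "$EXPORT/" /var/www/live/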
The problem you are most likely experiencing is not caused by "copying everything every time", but by releasing individual file fixes as a release instead of an entire build as a release. The best practice is to capture build artifacts for the entire application or application component. Once you've gotten to that point, it is irrelevant whether you are copying all the files or just the files that have changed, since that is now an implementation detail of your deployment software (whether that's a rudimentary file copy, FTP, rsync, or an enterprise-level tool like my company's product BuildMaster).
In the past, I have been developing in a very amateurish fashion, meaning I had a local machine where I developed and tested code and a production machine to which I copied the code when I was done. Recently I modified this slightly: I developed locally, checked the code into SVN, and then updated the production machine through SVN.
Now I would like to start a new project and improve my workflow. Ideally I had the following in mind:
Have one or more local dev environments
Develop and test on local machine(s)
Use SVN (or Git) as code repository
Use a build tool to set up new environments (either dev, staging or production) and deploy code
Since I am not very familiar with this process, I am looking for suggestions on how best to set this idea up and which tools to use, especially when it comes to the build tools. I was looking into Ant and Phing (possibly make), but I am so new to this that I would really appreciate some guidance. Are there any good tutorials or books about PHP deployment, especially for beginners? The following topics interest me most:
Deployment to different types of servers with different settings (e.g. dev uses different db, db passwords, PHP error reporting than production or staging).
Deployment that automatically pulls code from SVN.
Deployment that temporarily sets a "Maintenance" page for production environment.
Once I mastered the above, maybe even do some testing in the build process.
I know my question might sound quite confused... I admit, I am new to this and might be a little off the target in what I really need. That's why any help is greatly appreciated.
I would suggest making your testing deployment strategy a production-ready install-script -- since you're going to need one of those anyway eventually.
A few tips that may seem obvious to some, but are worth pointing out:
Your config file saved in your VCS should be a template, and should be named differently from the file that will eventually contain the actual settings. E.g. config-dist.php or config-sample.conf or sample/config-mysql.php or something along those lines. Otherwise you will end up accidentally checking in a server-specific configuration file over your template.
For PHP deployment, anticipate that some users will not be able to run server-side scripts through any mechanism other than the web server itself. A PHP-based installer is almost non-negotiable.
You should include a consumer-friendly update mechanism, and for that, WordPress is a great example of a project to emulate. A PHP script can (a) download the latest build, (b) use the FTP functions to update your application's files, and (c) execute an update script which makes the appropriate changes to the database, etc.
For heaven's sake don't do like [redacted] and make your users download and install separate patches for each point release. Have them download the latest (final) release, which contains all the updates to date and applies the correct ALTER TABLE statements in sequence.
Whether the files are deployed via SVN or through FTP, the install/update mechanism should be the same: get the latest files, run the update script. The updater compares the version listed in the PHP script with the version listed in the DB, and applies the appropriate DB patches in order. As for how to generate those patches, there are other questions here that you can refer to for more info.
As for the "Maintenance" page, just use the version trick mentioned above to trigger it (compare the version in the DB against the version in the PHP code). It's also useful to be able to mark a site as "down" to the public but make it visible to admins (like Joomla does), which you can trigger through database or filesystem flags.
As for automatically pulling code from SVN, I'd say you're better off with either a cron script or with commit triggers than with working that into your application, since it wouldn't be relevant to end users.
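For example, the cron variant can be as small as a single crontab entry on the server (the path and interval are arbitrary):

    # quietly update the server's checkout every five minutes
    */5 * * * * cd /var/www/myapp && svn update -q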
This isn't exactly part of your question, but it's relevant:
If you get into distributing code intended for a wide audience, I would advise you to build and distribute OpenSSL-signed PHAR packages. You can distribute them over HTTP without a problem, and because they're OpenSSL-signed, you also mitigate the risk of man-in-the-middle attacks and protect end-users/customers/clients from someone injecting code if you want to set up an automatic or one-click update.
There's a set of tools I've contributed to in the past that work great for this, but you'll either need PHP 5.3, or you'll need PHP 5.2 with PHAR installed via PECL. https://github.com/koto/phar-util
As far as testing goes, PHPUnit is the de facto standard.
If you are interested in using Git, then you should check out this build system from CodeMeme. From what you described, it sounds like it would be a good fit. You can add it to any project as a submodule, and with the included code you can tailor a build script that will deploy to multiple servers in multiple environments. It uses Git to build the code for deployment, but unfortunately SVN is not supported.
https://github.com/CodeMeme/Phingistrano
I would like to have some input on how a professional development setup with the following requirements might look.
several PHP developers
each developer belongs to one group
each group has one team-leader who delegates tasks
each developer works on one Windows 7 machine
and develops with either NetBeans or Eclipse
each developer 'owns' one virtual test-server where he can run the code
the VCS in use is SVN
there is a staging server where the product is ultimately tested before it gets released/deployed
I named specific technologies so as not to be too abstract, and because I would also be interested in concrete suggestions for plug-ins etc.
There are several questions coming to my mind in that setup.
1) So every developer will work on a personal branch.
2) This branch is checked out in a working copy.
Now ... this working copy is edited locally on the PC with the dev's IDE and executed/tested on the server.
What would be in that case the best/usual way to do that? I mean - how do you get your edited code on the server without causing too much overhead?
Would the dev have the code on his local disk at all? Or would it be better to have the IDE write on the remote virtual server through a tunnel or via a specific protocol?
3) Every day a dev will commit his work into his personal branch which resides in a central repository.
Is there a best practice on where the repository is supposed to be located? A separate server?
4) Then after a dev finished his task either s/he or the team-leader merges the new code into the respective main-branch or trunk.
The most confusing part is what I wrote between 2) and 3), because so far I have only worked with a local server - for example, a VM running code located in a shared folder, so that I am able to edit it directly. I'm not sure how to bridge the gap efficiently when the server is actually remote. Efficiently means not having to upload manually via FTP, for example.
External sources or book recommendations are also welcome.
edit
My questions are aiming at a quasi-standard / best practice. I think this is a pretty standard development scenario, so there must be a 'usual' solution.
edit 2
Okay ... so let's try with a picture:
V is the virtual test-server for one or more developers D. C and C' are the two code versions. They should be kept as identical as possible.
Two solutions come to my mind:
1 : Edit C, then upload it to C', then execute C', then commit C.
2 : No C exists; just C', which is edited through some tunnel technology, executed, and committed.
My gut tells me both solutions are semi-optimal. So what would be "professional" / most efficient / fastest / most convenient / most friction-less / least error-prone / best practice / industry standard?
Any questions?
Maybe it's not of great help, but Git sounds like a perfect fit for your problems; I recommend taking a look at Git's features. And if you have time, check out Linus Torvalds himself talking about Git. http://www.youtube.com/watch?v=4XpnKHJAok8
The standard procedure you describe is more or less the same as ours. I also use this approach for my team. It can also be called staged application development.
Here is how I am doing it: I use a remote SVN host (e.g., assembla.com, unfuddle.com) to store all my code, and my team members store their work on these remote SVN servers too. You can also buy a VPS, set up SVN there, and use the same approach.
Best practice is to test locally and commit as often as you can, but every commit must solve a problem or add a significant piece of a new feature.
Once everyone has committed, the lead developer can log in to the staging server via SSH using a tool like PuTTY. The first time, the lead developer has to check out the code into the folder where it is to be located. During this phase, file conflicts may arise if multiple developers have edited the same segment of a file; the lead developer should resolve them first and then proceed with the checkout. From then on, the lead developer only needs to run svn update on the staging server to bring the code up to date.
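In commands, that amounts to something like the following (the repository URL and path are examples):

    # first time only: check the code out into the web root
    svn checkout http://svn.example.com/myproject/trunk /var/www/myproject

    # every later deployment: bring the checkout up to date
    svn update /var/www/myproject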
The basic idea is to get the code working on the local setup, then commit and update the staging server to test the application in a simulated scenario, and then commit it to the live site.
There are a lot of ifs and buts here that would need me to write a chapter on :) but in short, this is the gist.
Tools (you can use to work under this setup):
- TortoiseSVN
- PuTTY
- NetBeans
hope it helps :)
I don't like working with personal branches. I worked with ClearCase for almost 15 years, and even though ClearCase probably handles personal branching better than most, it was still a big pain. Even worse, personal branches encourage people not to commit their work until the last minute - usually a day or two before a major release.
For that reason, and to force developers to stay on track with each other, I highly recommend everyone working together on a single branch (or on the trunk) as much as possible. I keep telling developers to take small bites when they make changes.
What you sound like you need is a way to automate the deployment. That is, I make changes on my local machine, and with a single command, I make sure that the server has a duplicate copy of the code. You also want the deployment to be efficient: if you change a single 2-kilobyte file in a 2-gigabyte, 10,000-file deployment, you only want to copy over that one file, not all 10,000. For that, I would recommend you write a deployment script in Ant.
Your developers can modify files, then deploy those files via an Ant script. The developers don't have to remember which files they have updated, because Ant handles that automatically. In fact, Ant can even modify files to make sure they contain the right environment information as they get copied over. And, of course, Ant can rearrange the files if the setup on the server is different from the setup in the source repository. Both NetBeans and Eclipse can execute Ant scripts right in the IDE.
So:
Have your developers modify code on their local machine.
Run an Ant script to make sure the server and the local machine are in sync.
Test on the server.
Then, check in their changes once they're happy with the results on the server.
Someone mentioned a continuous build system like Jenkins. That would actually be a good idea anyway, even though it doesn't solve this particular issue. Jenkins could have its own server and database. Then, when you commit your code, Jenkins would update its server, run automated tests, and create a report, all displayed on the Jenkins web page. Plus, you can archive your deployments on Jenkins, so if you tell someone to test "Build #20", they can simply pull it off of Jenkins, where it's easy to find.
I'm sure everyone has different ways of doing things but here are my thoughts.
"Best Practice" is probably "Continous Integration" ie each developer doesn't have their own branch but checks in to a common development branch. This forces them to handle conflicts and coordinate with each other early and often to avoid the lead developer from managing a huge train wreck merge later down the road. Take a look at cruisecontrol if you really want to go that route.
The best way is for each developer to have a local Apache web server and a full PHP stack. You can use the Zend Server Community Edition to get up and running on Windows fast. Most standard PHP code will run just fine on both Windows and Linux, but if you are doing lots of file manipulation, cron jobs, CLI stuff, or need memcached, etc., you'll run into incompatibilities. If that's the case and the Linux-only stuff is going to bite you, use VMware or VirtualBox to run local Linux instances, install the IDE inside those, and just make sure they have gobs of RAM to deal with it.
Each developer then runs a synchronize inside Eclipse - basically an svn update - deals with any conflicts with the other developers right then and there, does local testing, and commits their changes.
I set up a post-commit hook on the SVN server that calls an /autobuild.php script on my web server. autobuild.php runs svn update to get the latest code changes, does any chown or chmod file-permission work, and resets any server-specific config files such as config.php. It's a little tricky to set up so that the Apache user can run svn update, but once you do, your beta/testing server always has the latest committed code. CruiseControl and several other tools can also help you do this sort of thing and add unit testing, etc.
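A minimal sketch of such a hook; the URL is an example, and autobuild.php is the server-side script described above:

    #!/bin/sh
    # post-commit hook on the SVN server; Subversion passes in the repo path and revision
    REPOS="$1"
    REV="$2"
    # ping the web server's autobuild.php, which runs svn update and fixes permissions
    wget -q -O /dev/null http://beta.example.com/autobuild.php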
Your lead developer still has a job to do: merging the development branch into the production one, testing on the dev server, reviewing the commits of the others, and deciding how and when to push out a release. But you're not putting the burden on him of resolving every conflict and merging every change.
Your developers are not FTPing files or SSHing into servers; they just work locally in their IDE and interact with each other through SVN (and email, phone, chat, etc.), updating to get the new code and committing as they finish things.
I don't see any good coming out of having a separate branch for each developer using SVN. Merging those branches might work in Git, but with SVN your lead developer will be hating life very quickly with that type of setup.
I'm beginning a new project in PHP and I'd love to get some feedback from other developers on their preferred strategy for PHP deployment. I'd love to automate things a bit so that once changes are committed they can be quickly migrated to a development or production server.
I have experience with deployments using Capistrano with Ruby as well as some basic shell scripting.
Before I dive head first on my own it would be great to hear how others have approached this in their projects.
Further information
Currently developers work on local installations of the site and commit changes to a subversion repository. Initial deployments are made by exporting a tagged release from svn and uploading that to the server.
Additional changes are typically made piecemeal by manually uploading changed files.
For PHP, SVN with Phing build scripts are the way to go. Phing is similar to ANT but is written in PHP, which makes it much easier for PHP developers to modify for their needs.
Our deployment routine is as follows:
Everyone develops on the same local server at work; every developer also has a checkout on his machine at home.
Commits trigger a post-commit hook which updates a staging server.
Tests are run on the staging server; if they pass, continue.
The Phing build script is run:
Takes down the production server, switching the domain to an "Under construction" page
Runs SVN update on production checkout
Runs schema deltas script
Runs tests
If tests fail - run rollback script
If tests pass, server routes back to production checkout
There's also phpUnderControl, which is a Continuous Integration server. I didn't find it very useful for web projects to be honest.
I'm currently deploying PHP using Git. A simple git push production is all that's needed to update my production server with the latest copy from Git. It's easy and fast because Git is smart enough to send only the diffs, not the whole project over again. It also helps keep a redundant copy of the repository on the web server in case of hardware failure on my end (though I also push to GitHub to be safe).
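The one-time setup behind a workflow like this is small. A common sketch (repository and path names here are assumptions, not the answerer's actual setup) is a bare repository on the server with a post-receive hook that checks the pushed code out into the web root:

    # on the server: create a bare repository and a deployment hook
    git init --bare ~/site.git
    cat > ~/site.git/hooks/post-receive <<'EOF'
    #!/bin/sh
    GIT_WORK_TREE=/var/www/site git checkout -f master
    EOF
    chmod +x ~/site.git/hooks/post-receive

    # on the workstation: add the remote once, then deploy with a single push
    git remote add production user@server:site.git
    git push production master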
We use Webistrano, a web frontend for Capistrano, and are very happy with it.
Webistrano allows multi-stage, multi-environment deployments from SVN, Git, and others. It has built-in rollback support, supports separate server roles such as web, db, and app, and deploys in parallel. It allows you to override config parameters at multiple levels, such as per stage, and logs the results of every deploy, optionally mailing them.
Even though Capistrano and Webistrano are Ruby applications, the syntax of the deployment 'recipes' is easy and powerful enough to understand for any PHP programmer. Originally Capistrano was built for Ruby on Rails projects, but easily accommodates PHP projects.
Once configured it is even easy enough to be used by non-programmers, such as testers deploying a staging version.
To provide the fastest deploy possible, we installed the fast_remote_cache method, which updates an SVN working-copy cache on the remote server and then hardlinks the result.
I use Apache Ant to deploy to different targets (dev, QA and live). Ant is designed to work for Java deployment, but it provides a pretty useful general case solution for deploying arbitrary files.
The syntax of the build.xml file is pretty easy to learn - you define different targets and their dependencies which run when you call the ant program on the command line.
For example, I have targets for dev, QA, and live, each of which depends on the cvsbuild target. That target checks out the latest head revision from our CVS server, copies the appropriate files to the build directory (using the fileset tag), and then rsyncs the build directory to the appropriate server. There are a few quirks to learn and the learning curve is not totally flat, but I've been doing it this way for years with no trouble, so I'd recommend it for your situation, though I'm curious what other answers I'll see on this thread.
I do stuff manually using Git. One repository for development, which gets git push --mirror'ed to a public repo, and the live server is a third repo pulled from that. This part I suppose is the same as your own setup.
The big difference is that I use branches for nearly every change I'm working on (I've got about 5 right now), and tend to flip back and forth between them. The master branch doesn't get changed directly except for merging other branches.
I run the live server directly from the master branch, and when I'm finished with another branch and ready to merge it, I flip the server to that branch for a while. If it breaks, putting it back to master takes seconds. If it works, the branch gets merged into master and the live code gets updated. I suppose an analogy of this in SVN would be having two working copies and pointing to the live one via a symlink.
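On the server, the flip itself is just a checkout, which is why reverting is so quick. A sketch, with a hypothetical branch name:

    # on the live server: flip to the branch being tested
    git fetch && git checkout new-feature

    # if it breaks, putting it back to master takes seconds
    git checkout master

    # if it works: merge in the development repository and publish...
    git checkout master && git merge new-feature
    git push
    # ...then update the live server's master with git pull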
I know Phing has been mentioned a few times now, but I've had great luck with phpUnderControl. For us, the process is:
Check out individual copies of branches to local machines
Branches are tested and then merged into Trunk
Commits to Trunk are automatically built by phpUnderControl, which runs tests, builds all documentation, and applies database deltas
Trunk gets run through quality testing and then merged into our Stable branch
Again, phpUnderControl automatically builds Stable, runs tests, generates documentation, and updates the database
When we're ready to push to production we run a rsync script that backs up Production, updates the database, and then pushes the files up. The rsync command is invoked by hand so that we make sure someone is watching the promotion.
An alternative to home-made deployment scripts is to deploy to a platform-as-a-service, which abstracts away a lot of that work for you. A PaaS will typically offer its own code deployment tool, as well as scaling, fault tolerance (e.g., not going down when hardware fails), and usually a great toolkit for monitoring, log checking, etc. There's also the benefit of deploying to a known-good configuration which will be kept up to date over time (one less headache for you).
The PaaS I would recommend is dotCloud, in addition to PHP (see their PHP quickstart) it can also deploy MySQL, MongoDB and a whole bunch of additional services. It also has nice goodies like zero-downtime deployment, instant rollback, full support for SSL and websocket, etc. And there's a free tier which is always nice :)
Of course I'm slightly biased since I work there! Other options worth checking out in addition to dotCloud are Pagodabox and Orchestra (now part of Engine Yard).
Hope this helps!
Solomon
Automatically and blindly taking changes from a repository to production servers sounds dangerous. What if your committed code contains a regression bug and your production application gets glitchy?
But if you want a continuous integration system for PHP, I guess Phing is the best choice. I haven't tested it myself, though, as I do things the manual way, e.g. with scp.
I am way late to the party, but I thought I would share our methods. We use Phing with Phingistrano, which provides Capistrano-like functionality to Phing via pre-built build files. It is very cool, but only works if you use Git at the moment.
I have a working copy of an SVN release branch on the server. Updating the site (when there aren't schema changes) is as easy as issuing an SVN update command. I don't even have to take the site offline.
Phing is probably your best bet, if you can stand the pain of xml configuration files. The Symfony framework has its own port of rake (pake), which works quite well, but is rather tightly coupled to the rest of Symfony (Though you could probably separate them).
Another option is to use Capistrano. Obviously it doesn't integrate as well with PHP, as it does with Ruby, but you can still use it for a lot of stuff.
Lastly, you can always write shell scripts. So far, that's what I have done.
http://controltier.org/wiki/Main_Page
We are going to use it for multi-server deployments and maintenance.
One year late but...
In my case, deployment is not automatic. I find it dangerous to deploy code and run database-migration scripts automatically.
Instead, Subversion hooks are used to deploy only to the testing/staging server. Code is deployed to production at the end of an iteration, after the tests have been run and we've made sure things will work. For the deployment itself, I use a custom-made Makefile that uses rsync to transfer files. The Makefile may also run the migration scripts on the remote server and pause/resume the web and database servers.
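A rough shell equivalent of what such a Makefile's deploy target might run; hosts, paths, and the migration script name are all placeholders:

    # transfer only changed files, skipping VCS metadata
    rsync -az --delete --exclude='.svn' ./ deploy@www.example.com:/var/www/site/

    # optionally run migrations and bounce services on the remote side
    ssh deploy@www.example.com \
        '/var/www/site/scripts/migrate.sh && sudo service apache2 restart'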
At work, my team and I have developed a Phing-oriented replacement for Capistrano's deploy, and we've also incorporated some of the goodies available in Phing, like PHPUnit testing, phpcs, and PHPDocumentor. We've made it a Git repo that can be added to a project as a submodule, and it works very well. I've attached it to a handful of projects, and it's modular enough that it's easy to make it work with any project on any of our several environments (staging, testing, production, etc.).
You can run the Phing build scripts manually from the command line, and I've also had success automating the build/deploy routines with Hudson and now Jenkins CI.
I can't post any links now because the repo isn't public yet, but I've been told we're going to open-source it sometime soon, so please feel free to contact me if you're interested or if you have any questions about automating your deployment with Phing and Git.
I guess the SVN deploy approach is not very good, because:
You need to open up SVN access to the whole world
You end up with many .svn directories on the production web servers
I think using Phing to produce a branch, combine all the JS/CSS, replace the staging config, and SSH-upload to all the www servers is a better way.
SSHing to 10 www servers and running svn up is also a hassle.
I have heard that uploading your website with FTP is now for n00bs, but it's the only way I've known how for the 8 or so years I've been building websites. Apparently all the buzz now is using a version control system, like SVN or Git, and somehow using SSH to upload only the files that have changed (if I understand correctly). I'm wondering if someone could explain or point me to a "comprehensive" guide. What do I need to install, and will my shared host (Dreamhost) be compatible? I currently develop on a WAMP environment. I have never used version control and wouldn't know where to start. I usually develop with a framework such as CakePHP or Zend.
You've got things mixed up a bit. A version control system is used internally to keep track of your code during development. With centralized systems like SVN, you regularly upload your code to an SVN server, which keeps track of what has changed, makes sure conflicting changes are merged correctly, and keeps a history so you can roll back changes.
Decentralized or distributed version control systems eliminate the one central server, instead allowing every single copy of the code to track its own change history, and then letting you merge and combine these separate branches at will.
But once you have a complete product, you push it out to the production server any way you like. FTP is certainly one option for doing that.
For the file uploads, what you are looking for is rsync. There is a Windows wrapper for this called DeltaCopy and the DreamHost wiki has instructions.
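For reference, the underlying rsync invocation looks something like this (host and paths are examples):

    # copy only the changed files to the host over SSH, deleting removed files too
    rsync -avz --delete -e ssh ./mysite/ user@example.com:~/mysite/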
First you'll want to decide what you want to use for version control. I hear great things about Git, but am still an SVN user myself.
DreamHost actually lets you create SVN repositories from their web panel, which is very handy, and if I remember correctly they have some additional nice features to help as well.
I would suggest reading, or at least skimming, http://www.svnbook.org - it is very comprehensive if you plan to actually use SVN over Git.
Everyone is completely missing the point. Development using a version control system is a great thing and has massive upside even for developers working on their own. The question here is about deployment using version control systems.
This is a newer and great idea. Consider something like Magento, which has 6,744 files in the base install, not to mention that adding your own skins usually adds around 500 more. Using version control to DEPLOY something like this saves major time over uploading that many tiny files via FTP, since only the modified ones are sent.
Aside from this, I've never actually tried deploying like this, so I can't offer any real-world experience; however, there are several good articles on how to get this set up, and a good one can be seen here.
Here's a wiki that should give you all the information you need on adding Subversion to DreamHost.
http://wiki.dreamhost.com/index.php/Subversion
I've used Subversion now for my sites, and it does make it much easier. I use Aptana on my Windows machine and upload everything through that program. It allows me to compare old versions, revert to them, branch off new versions, etc...
It's a huge timesaver!
Eric Sink's articles about source control are a great place to learn about the basic concepts.
http://www.ericsink.com/scm/source_control.html
I also develop with Zend Framework, and here is how I use FTP and version control.
On my local machine I have Subversion and TortoiseSVN installed.
If I start a new project, I set up an SVN repository the way I like it (I use the trunk/branches/tags system).
I checkout an initial working copy from the trunk to a project folder in my local webroot.
I create a new project in Aptana and set the project path to my project folder on the localhost.
Aptana understands that this project is versioned and shows appropriate icons on each file. I can do many of the version control functions directly in my file tree in Aptana; there's no need for a shell or even Tortoise.
Once I have a stable, deployable version of my app, I create a version control tag. Then I do an export of that (unversioning it).
The exported app is then uploaded via FTP.
That's how I do it at the moment, anyway; maybe it clarifies some things. Tips on improving the procedure are welcome!
As others have said, you can set up version control locally ... or on your host. I recommend you do whatever works best for you.
You mention using Dreamhost. I support one small site there, and know that they do allow uploading via scp and sftp. This would allow you to upload your files with your password encrypted. (And you don't have to adopt a version control method if you don't want to! ;-) Scroll down the sftp page I linked to and you'll find some suggestions for scp & sftp clients.
FWIW, if you're using Windows, I've used WinSCP for years and liked it. Also, if you want full login access, I suggest PuTTY; its full download also includes command line based clients for sftp and scp.
There is no problem using ftp to upload. The only disadvantage is that the password is transferred as plain text.
It would be good to have a local version control system, that would allow you to easily see changes between versions, and quickly revert to an older version, and much more...
I don't think there is a need to install a version control system on your shared host. It is only handy if you want to access the version control system from different places (at home, at work, while traveling).
There is an awesome plugin for bzr called bzr-upload, designed exactly for your kind of use case. bzr is very lightweight (no need to set up any repository) and super easy to start using, even if you haven't used any kind of source control before. Every time you make a commit on your local machine, the plugin will SFTP/FTP the changed files up to your web host. It doesn't push up all the version control info, just the files themselves.