How to get started deploying PHP applications from a Subversion repository?

I've heard the phrase "deploying applications" which sounds much better/easier/more reliable than uploading individual changed files to a server, but I don't know where to begin.
I have a Zend Framework application that is under version control (in a Subversion repository). How do I go about "deploying" my application? What should I do if I have an "uploads" directory that I don't want to overwrite?
I host my application through a third party, so I don't know much other than FTP. If any of this involves logging into my server, please explain the process.

Automatic deployment plus a test run on a staging server is known as continuous integration. The idea is that if you check in something that breaks the tests, you get notified right away. For PHP, you might want to look into Xinc or phpUnderControl.
You'd generally not want to deploy to production automatically, though. The normal thing to do is to write some scripts that automate the task, but that you still initiate manually. You can use frameworks such as Phing or other build tools for this (a popular choice is Capistrano), but you can also just whisk a few shell scripts together. Personally I prefer the latter.
The scripts themselves could do different things, depending on your application and setup, but a typical process would be:
ssh to production server. The rest of the commands are run at the production server, through ssh.
run svn export svn://path/to/repository/tags/RELEASE_VERSION /usr/local/application/releases/TIMESTAMP
stop services (Apache, daemons)
run unlink /usr/local/application/current && ln -s /usr/local/application/releases/TIMESTAMP /usr/local/application/current
run ln -s /usr/local/application/var /usr/local/application/releases/TIMESTAMP/var
run /usr/local/application/current/scripts/migrate.php
start services
(Assuming you have your application in /usr/local/application/current)
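The steps above can be sketched as a single script. In this sketch the svn export and the service stop/start are replaced with local placeholders so it is self-contained and readable end to end; all paths and names are illustrative, not taken from any real setup:

```shell
#!/bin/sh
# A local sketch of the release-directory + symlink pattern described above.
set -e
APP=./application
TS=$(date +%Y%m%d%H%M%S)

mkdir -p "$APP/releases/$TS" "$APP/var"
# real deploy: svn export svn://path/to/repository/tags/RELEASE_VERSION "$APP/releases/$TS"
echo "release payload" > "$APP/releases/$TS/index.php"

# stop services here (apachectl stop, daemons, etc.)

# swap the 'current' symlink: unlink the old one, link the new release
rm -f "$APP/current"
ln -s "releases/$TS" "$APP/current"

# share persistent state (uploads, logs) between releases
ln -s ../../var "$APP/releases/$TS/var"

# real deploy: php "$APP/current/scripts/migrate.php"
# start services here (apachectl start, etc.)
```

Because each release lives in its own timestamped directory, switching versions is just relinking `current`, and the old release stays around for rollback.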

I wouldn't recommend automatic updating. Just because your unit tests pass doesn't mean your application is 100% working. What if someone checks in a random new feature without any new unit tests, and the feature doesn't work? Your existing unit tests might pass, but the feature could be broken anyway. Your users might see something that's half-done. With automatic deployment from a check-in, you might not notice for a few hours if something made it live that shouldn't have.
Anyhow, it wouldn't be that difficult to get automatic deployment going if you really wanted. You'd need a post-commit hook, and really the steps would be:
1) Do an export from the latest check-in
2) Upload export to production server
3) Unpack/config the newly uploaded export
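The pack/unpack half of those steps can be sketched like this; the svn export and the upload are stubbed out with local stand-ins so the example is self-contained, and all file and directory names are hypothetical:

```shell
#!/bin/sh
# Steps 1-3 sketched locally: export, pack, "upload", unpack.
set -e
mkdir -p export
echo '<?php echo "hello";' > export/index.php   # 1) stand-in for `svn export`
tar -czf export.tar.gz export                   # pack the export for upload
# 2) in a real deploy, scp export.tar.gz to the production server here
mkdir -p webroot
tar -xzf export.tar.gz -C webroot               # 3) unpack on the server
```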
I've always performed the last steps manually. Generally it's as simple as SVN export, zip, upload, unzip, configure, and for the last two steps I just alias a couple of bash commands together. Then I swap out the root app directory with the new one, ensuring I keep the old one around as a backup, and it's good to go.
If you're confident in your ability to catch errors before they'd automatically go live, then you could look at automating that procedure. It gives me the jibbly-jibblies though.

At my webdev company we recently started using Webistrano, which is a Web GUI to the popular Capistrano tool.
We wanted an easy-to-use, fast deployment tool with a centralized interface, accountability (who deployed which version), rollback to previous versions, and preferably free. Capistrano is well known as a deployment tool for Ruby on Rails applications, but it is not centralized and is targeted mainly at Rails apps. Webistrano enhances it with a GUI and accountability, and adds basic support for PHP deployment (use the 'pure file' project type).
Webistrano is itself a Ruby on Rails app that you install on a development or staging server. You add a project for each of your websites. To each project you add stages, such as Prod and Dev.
Each stage can have different servers to deploy to, and different settings. Write (or modify) a 'recipe', which is a Ruby script that tells Capistrano what to do. In our case I just used the supplied recipe and added a command to create a symlink to a shared uploads dir, just like you mentioned.
When you click Deploy, Webistrano SSHs into your remote server(s), does an svn checkout of the code, and runs any other tasks that you require, such as database migrations, symlinking or cleanup of previous versions. All this can be tweaked of course; after all, it's simply scripted.
We're very happy with it, but it took me a few days to learn and set up, especially since I wasn't familiar with Ruby and Rails. Still, I can highly recommend it for production use in small and medium companies, since it's proven very reliable, flexible and has saved us many times the initial investment. Not only by speeding up deployments, but also by reducing mistakes/accidents.

This sort of thing is what you would call "continuous integration". Atlassian Bamboo (commercial), Sun's Hudson (free) and CruiseControl (free) are all popular options (in order of my preference) and have support for handling PHPUnit output (because PHPUnit supports JUnit output).
The deployment stuff can be done with a post-build trigger. Like some other people in this thread, I would exercise great caution before doing automated deployments on check-in (even with tests passing).

Check out Fredistrano, a Capistrano clone.
It works great (installation is a little confusing, but after that it runs great).
http://code.google.com/p/fredistrano/

To handle uploads, the classic solution is to move the actual uploads directory out of the main webspace, leaving the webspace free for a fresh version to be checked out (as I do in the script below), and then use Apache to 'Alias' the directory back into place as part of the website.
Alias /uploads /home/user/uploads/
You have fewer choices if you don't have that much control over the server, however.
I've got a script I use to deploy a given revision to the dev/live sites (they both run on the same server).
#!/bin/sh
REV=2410
REVDIR=$REV.20090602-1027
REPOSITORY=svn+ssh://topbit@svn.example.com/var/svn/website.com/trunk
IMAGES=$REVDIR/php/i
STATIC1=$REVDIR/anothersite.co.uk
svn export --revision $REV $REPOSITORY $REVDIR
mkdir -p $REVDIR/tmp/templates_c
chown -R username: $REVDIR
chmod -R 777 $REVDIR/tmp $REVDIR/php/cache/
chown -R nobody: $REVDIR/tmp $REVDIR/php/cache/ $IMAGES
dos2unix $REVDIR/bin/*sh $REVDIR/bin/*php
chmod 755 $REVDIR/bin/*sh $REVDIR/bin/*php
# chmod -x all the non-directories in images (match any file with an execute bit set)
find $IMAGES -type f -perm /a+x | xargs -r chmod --quiet -x
find $STATIC1 -type f -perm /a+x | xargs -r chmod --quiet -x
ls -l $IMAGES/* | grep -- "-x"
rm dev && ln -s $REVDIR dev
I put in the revision number, and the date/time, which is used for the checked-out directory name. The chmods in the middle also make sure the permissions on the images are OK, as they are also symlinked to our dedicated image server.
The last thing that happens is an old symlink .../website/dev/ is relinked to the newly checked out directory. The Apache config then has a doc-root of .../website/dev/htdocs/
There's also a matching .../website/live/htdocs/ docroot, and again, 'live' is another symlink. This is my other script that will remove the live symlink, and replace it with whatever dev points to.
#!/bin/sh
# remove live, and copy the dir pointed to by dev, to be the live symlink
rm live && cp -d dev live
I'm only pushing a new version of the site every few days, so you might not want to use this several times a day (my APC cache wouldn't like more than a few versions of the site around), but for me, I find this to be very much problem-free for my own deployment.

After 3 years, I've learned a bit about deployment best practices. I currently use a tool called Capistrano because it's easy to set up and use, and it nicely handles a lot of defaults.
The basics of an automated deployment process goes like this:
Your code is ready for production, so it is tagged with the version of the release: v1.0.0
Assuming you've already configured your deployment script, you run your script, specifying the tag you just created.
The script SSH's over to your production server which has the following directory structure:
/your-application
    /shared/
        /logs
        /uploads
    /releases/
        /20120917120000
        /20120918120000   <-- latest release of your app
            /app
            /config
            /public
            ...etc
    /current --> symlink to latest release
Your Apache document root should be set to /your-application/current/public
The script creates a new directory in the releases directory with the current datetime. Inside that directory, your code is updated to the tag you specified.
Then the original symlink is removed and a new symlink is created, pointing to the latest release.
Things that need to be kept between releases go in the shared directory, and symlinks are created to those shared directories.
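Under that layout, rolling back is just repointing the symlink at the previous release. A minimal sketch, using the directory names from the example above (the release discovery via `ls | sort` is an illustrative shortcut, not a hardened approach):

```shell
#!/bin/sh
# Roll 'current' back to the previous timestamped release.
set -e
APP=./your-application
mkdir -p "$APP/releases/20120917120000" "$APP/releases/20120918120000"
rm -f "$APP/current"
ln -s releases/20120918120000 "$APP/current"    # the normal deploy step

# rollback: pick the second-newest release and relink
PREV=$(ls "$APP/releases" | sort | tail -n 2 | head -n 1)
rm -f "$APP/current"
ln -s "releases/$PREV" "$APP/current"
```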

It depends on your application and how solid the tests are.
Where I work everything gets checked into the repository for review and is then released.
Auto-updating out of a repository wouldn't be smart for us, as sometimes we check in just so that other developers can pull a later version and merge their changes in.
To do what you are talking about, you would need some sort of secondary check-in and check-out area to allow for collaboration between developers outside the primary check-in area. Although I don't know anything about that, or whether it's even possible.
There are also issues with branching and other such features that would need to be handled.

Related

Jenkins build strategy with a PHP project from Git

I have Jenkins set up on a remote server which hosts my PHP project. Jenkins creates a build from a git repository and I would then like to move the files to the server's document root.
However, the problem is that all the files have the Jenkins user as their owner. So if I moved the workspace to my document root I wouldn't have permission to access them.
I'm wondering what people do to automate this? Maybe execute shell commands after the build is completed? I'm very new to Jenkins so I may be on the completely wrong track.
The short answer is yes, you need a post-build step that runs the chown (change ownership) and/or chmod (change permissions) Linux commands. This will allow you to switch the owning user (if the jenkins user has enough rights for it) and set read/write permissions for files, up to 777 (free-for-all).
However, it's more than that. You've stumbled upon the deployment phase of a project, and it's much more complex than that. Usually it's handled by external tools called via bash (I'm not very familiar with those, but I certainly know Capistrano is an industry standard in Ruby), and those tools usually (every project may require a special scenario) update project files, update dependencies, deploy migrations to the database, manage permissions, clean caches, etc., etc.
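A post-build step along those lines might look like the following sketch. The directory layout and the web user are assumptions; the demo runs against a scratch directory owned by the current user so it can be tried without root:

```shell
#!/bin/sh
# fix_perms: hypothetical post-build helper that hands ownership of the
# deployed tree to the web user and normalizes permissions.
set -e
fix_perms() {
    dir=$1
    user=$2
    chown -R "$user" "$dir"                   # in a real job: www-data
    find "$dir" -type d -exec chmod 755 {} +  # directories: rwxr-xr-x
    find "$dir" -type f -exec chmod 644 {} +  # files: rw-r--r--
}

# Demo against a scratch directory; a Jenkins job would instead call
# something like: fix_perms /var/www/myproject www-data
mkdir -p demo/sub
touch demo/sub/index.php
fix_perms demo "$(id -un)"
```

Note that chown to another user requires root, so in practice the jenkins user either needs sudo rights for exactly this command, or the job should hand the files to a group that both jenkins and the web server belong to.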

Git workflow for lone developer (total git newbie)

I have decided that it's time for me to start using Git on a PHP project that I have been developing casually for over a decade. (Please, no lectures from the version control police!) Due to the complex setup required on my VPS to do everything the project needs (esp. single-codebase-multiple-client structure and a Japanese-capable installation of TeX to create specialty PDFs), it is not possible to set up a development environment on my local Windows box. But I do have a testbed area on the server that I can play in, so it's my development area. Currently I use Filezilla to access the server and open files directly into Notepad++, and when I'm ready to see my edit in action, I just save and let Filezilla upload. When everything looks good on the testbed, I copy the files to the production codebase area. Yeah, that gives me no history of my changes other than my own comments, and I have to be careful not to mix bug fixes with half-finished new features. I can see the value of Git's branches for different upgrades in progress.
Yesterday I got my toes wet. First I created a Github account, and then (at the recommendation of a tutorial) installed Git for Windows (with its own Bash and tiny-looking GUI) and KDiff3, and followed some instructions for configuring Git Bash. After all that, though, I ended up having to install something else in order to interface with my Github account (appropriately named Github for Windows), which seems to do all the stuff the other two programs were supposed to do for me. Anyway, then I did a simple task as my first foray into the Github world - I had added functionality to someone else's jQuery plugin and wanted to share it with the developer, so I forked his repo, cloned it to my machine, overwrote the file I had previously edited and tested, synced to my Github account, and sent a pull request. All the terminology in that last sentence was brand new to me, so I was pretty proud of myself that I got that far. ;) But I guess I only needed the Github software, not the Git software - it's hard to know which tutorials to believe.
Anyway, now I want to figure out a workflow for my own stuff, which is my actual question for you guys. From what I can tell, having the master repo anywhere but the public Github costs money, and I don't care if others see my code (I don't expect anyone else to work on my oddball project made of spaghetti code, but if they want to, that's great). Okay, but then what? Perhaps one of these scenarios, or something else:
Clone branches of the repo to my PC, do edits on the local files, and upload them in Filezilla for testing (a couple more clicks than my current workflow because Filezilla doesn't automatically see the relationship between the local file and the remote file, but not a big deal). Then when I'm happy with the code, commit locally, sync to Github, and copy the files (from somewhere - not sure on this point) to the production area.
Install the Linux flavor of Git on my VPS so that the "local" Git file location is the testbed, and use Git through PuTTY to do the local commits. Simpler for file structure (no need for a copy on my PC at all) but more cumbersome to use Git:
I'm not on PuTTY very frequently, and for some reason the connection often dies on me and I have to restart.
Even though the Linux command line is Git's native habitat, I am probably more comfortable with a GUI (because I forget command syntax quickly - old brain, I guess).
Also, since I never ended up using the Git program I installed here, I'm not sure whether it would be Git or Github I would be using on the server.
Some other scenario, since neither #1 nor #2 uses Git/Github to manage the production file area at all, which would probably be a good idea so that I don't forget to copy everything I need.
I tried to research the possibility of a PHP-based GUI to go with idea #2 (so I don't have to use PuTTY for day-to-day operations), but it seems that the discussions of such tools all assume either that you are trying to create your own Github service, or that the "local" cloned repo is physically on your local PC (with xAMP running on whatever OS it is). But maybe the Github software I used is enough to do all that - it's hard to tell. I don't yet understand the interplay between a master public repo on Github, branches somewhere (on Github also?), at least two sets of files on my web server (the testbed and the production area), Github software, Git software, and the keyboard/screen of the computer I'm sitting at.
So pardon my newbie ramblings, but if someone out there has a similar development situation, what's your workflow? Or what would you suggest for me?
Here's one way to approach the issue:
You will need three repositories:
a local repo to edit code. [1]
a bare remote repository on your server. This will be in a location that is not publicly viewable, but that you can ssh into. [2]
The production environment. [3]
Here's the implementation:
workstation$ cd localWorkingDirectory/
workstation$ git init
workstation$ git add .
workstation$ git commit -m 'initial commit'
workstation$ ssh login@myserver
myserver$ mkdir myrepo.git
myserver$ cd myrepo.git
myserver$ git init --bare
myserver$ exit
workstation$ cd localWorkingDirectory/
workstation$ git remote add origin login@myserver:myrepo.git
workstation$ git push origin master
every time you make a commit on any branch, back it up with:
workstation$ git push origin BRANCH
When you are ready to move branch version2 into production, do this:
workstation$ git push origin version2
workstation$ ssh login@myserver
myserver$ git clone path/to/myrepo.git productionDirectory
myserver$ cd productionDirectory
myserver$ git checkout version2
Oh no! It doesn't work! Better switch back to version1!
workstation$ ssh login@myserver
myserver$ cd productionDirectory
myserver$ git checkout version1
You don't need github (or any other central store) to start using git. Especially since you're a lone developer. Git runs directly on your own machine, without any server component (unlike for example subversion). Just git init and start committing away.
I agree with the other commenters here that you should aim to get a local development environment up and running. Even if it takes some effort, it's certainly worth it. One of the side effects of doing so may be that you are forced to decouple some of your current hard dependencies and thereby get a better overall application architecture out of it. The things that can't easily be replicated in your development environment could instead be replaced with mock services.
Once that is in place, look into a scripted deployment process. E.g. write a shell script that syncs your development machine's codebase with the production server. There are many ways to do this, but I suggest you start really simple, then revise your options (Capistrano is one option).
I'd certainly look at something like Capistrano for your current development setup.
I can understand why you might be reluctant to use the terminal, but it would probably help your understanding of git in context. It doesn't take long to get to grips with the commands, and when tied into a system such as Capistrano you'll be rocking development code up to your environment in no time:
git commit -a
git push origin develop
cap deploy:dev
When I'm working on Windows I generally try to replicate the deployment environment locally with virtual machines, using something like Sun's VirtualBox. That way you can minimise potential environment issues while still developing locally. Then you can just use PuTTY to ssh to your local VM. Set up sharing between the VM and your host OS and all your standard IDEs/editors will work too. I find this preferable to having to set up a VPS remotely, but whatever works.

Set up my DEV environment for my PHP site with SVN

I am creating my first website that I am taking live soon. It is a big application that I hope goes somewhere eventually but I will be working on it while it is actually up on the web. I want to do it the right way and set up everything efficiently. It is very difficult to actually gain knowledge on how to go about doing this from a book or even the web.
Firstly, I've been doing a lot of research about SVN, and although I am currently the only one working on the site I would like the ability to add more people to the project eventually. I know that you have to create a repository where your project will actually reside, and that there are some places that offer this for free, like Codesion. I have a good understanding of how it works.
I would like a dev subdomain for the current website where I can test and work on everything I am developing, and that SVN would commit to. How does your SVN repository communicate with and upload files to that subdomain? Is this even how you would go about doing this? How would I go about managing all the different databases for each domain?
When I am done with several changes for my new rollout, do I just manually upload all the files to the live site? I am very noob on the process for all this stuff and it is difficult to find information on it. What is the best practice for rolling out new updates? Are there any other mechanisms or applications that I would need to know about for maintaining my website as it grows?
Please, any help in the right direction would be greatly appreciated.
It depends on how much control you have over your Web server.
Most people (IMO) push their files up to the server through something like FTP/SFTP. Many SVN hosting services offer this as a 'deployment' service. You should absolutely use this option if you don't have shell access or anything of that nature.
Personally, I prefer to have finer levels of control, and let the Web server do the pulling. Using a system with a full SSH access, you can just have the server do a checkout of the code:
svn co http://server/project/trunk --username foo --password bar
If you want ease of use (mixed with potential control levels), you can also create a PHP administration class to do this for you.
<?php
echo `svn co http://server/project/trunk --username foo --password bar --non-interactive`;
And often enough, I build a bridged solution. You can create post-commit hooks and set things up such that the SVN server reaches out to a specified URL with the commit comment. The client server then checks the comment and performs actions based on it (it might ignore anything without the tag 'AUTODEPLOY', for instance).
There is also Git (which you should be able to do similar things with), but I haven't wanted to blow up my current processes (and hundreds of SVN repos) in order to go there yet.
That is a lot of concepts to cover in one question. First, I host my repo directly on my webserver, but I can cover what happens if it's on a remote server as well.
When you setup an SVN repo you should have a post-commit(.tmpl) file - rename that to post-commit - this is what will run with every commit.
Inside of that:
#!/bin/sh
# POST-COMMIT HOOK
# [1] REPOS-PATH (the path to this repository)
# [2] REV (the number of the revision just committed)
REPO="$1"
REV="$2"
if ( svnlook log -r $REV $REPO | grep "*DEPLOY-DEV*" )
then
rm -rf ~/tmp/path/to/repo
svn export -r $REV "file://$REPO" ~/tmp/path/to/repo
rsync -az ~/tmp/path/to/repo/trunk/ ~/path/to/webroot
fi
if ( svnlook log -r $REV $REPO | grep "*DEPLOY-LIVE*" )
then
rm -rf ~/tmp/path/to/repo
svn export -r $REV "file://$REPO" ~/tmp/path/to/repo
rsync -avz -e "ssh -i /path/to/.ssh/rsync" ~/tmp/path/to/repo/trunk/ root@website.com:/path/to/webroot
ssh -i /path/to/.ssh/rsync root@website.com chown -R apache:apache /path/to/webroot/
fi
Now to go through the important parts: include *DEPLOY-DEV* or *DEPLOY-LIVE* in your commit message to deploy your site to the path you specify.
The DEPLOY-DEV part of the example works nicely when it's on the same server; you just need to apply the correct paths.
If the server lives somewhere else it may require some ssh keys and other trickery that is a little deeper than I'll get into - but that should give you the direction you need to set it up if you need.
If you have ssh access to your webserver, check out two working copies on the server (in the dev and production document roots).
That way you can easily deploy any changes.
You can use a stable branch you merge your changes into for production.
Define the database connection in a config ini (or something similar) file that is not under version control. That way you can have different values for dev and prod.
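For example, a per-environment file along these lines, with the dev and prod copies each living outside version control (all keys and values here are hypothetical):

```ini
; config.ini -- not committed; differs between dev and prod
[database]
host = localhost
name = myapp_dev
user = myapp
pass = s3cret
```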
If you don't have ssh access and you want to do it "efficiently" look for a different server with ssh access.

can you make my life easier working this way? (large code base and patching issue)

So I am working on a really large code base: more than 3000 files, more than 1 million lines of code and more than 500 tables.
Though that is not really the issue. The issue here is, when a new feature is required, I work on it locally on my machine and when the time comes to update/patch our live production:
I ssh to our prod server
I navigate to the directory, and open the file to patch
I copy and paste??? OMG
Anyway, here is my take; please suggest if you guys have alternatives or a more comfortable way of doing this:
First, we migrate to Git. (We're on SVN.)
Every time we make a release, we branch out in our git repo, and then clone a new copy on our prod server (right now we make a branch in svn, do an svn export, then copy it to the target dir).
When patching the server with a new feature, I can simply go to the target repo/release and do a git pull?? Or should I go with a git patch?
This is how I envision a simpler life.
Would you guys come up with anything much easier than this?
I think you are on the right track. I did something similar.
I have two branches.
Master -> holds latest in dev
Production -> holds latest in production
When I need to make a change to prod, I branch the production branch, make my changes and then merge back into the production branch. This gives me the option to work on multiple features at the same time.
Then I log on to the box and do a git pull.
Of course the bigger problem here is that with PHP there is no such thing as a package like there is in Java (WAR packages). :( So I am sure this is a pain.
I wish I could help you more but I can't think of anything else to make your life easier.
Install svn on your prod server and check out a specific branch, let's say production:
( cd /var/www && svn co svn://domain/path/to/branch )
Then once you are ready to push your code, just log in to prod and run a script that will svn up that specific folder and maybe do some owner updates on the files:
( cd /var/www && svn up )
( cd /var/www && chown -R www-data:www-data . )
( cd /var/www && svn info )
If you want to do it remotely without logging in to the server, you can run remote commands over ssh (assuming the script above is in your home directory and called updateRepo.sh):
ssh user@server ./updateRepo.sh
If you have your ssh key pair set up, you won't even need to type your password.
Note: the above will write its output to your terminal, so you might want to redirect all of it with 2>&1.
is this helpful?
It may be time to consider using a continuous integration tool like Jenkins (formerly Hudson).
If not, at the very least consider having different branches or repositories for development and production code, as per @Amir Raminfar's suggestion.

Best practices for (php/mysql) deployment to shared hosting?

I have worked at a web development company where we had our local machines, a staging server and a number of production servers. We worked on Macs in Perl and used svn to commit to stage, and Perl scripts to load to the production servers. Now I am working on my own project and would like to find good practices for web development when using shared web hosting and not working from a unix-based environment (with all the magic I could do with perl/bash scripting, cron jobs, etc.).
So my question is given my conditions, which are:
I am using a single standard shared web hosting account from an external provider (with ssh access)
I am working with at least one other person and intend to use SVN for source control
I am developing php/mysql under Windows (but using linux is a possibility)
What setup do you suggest for testing, deployment, and migration of code/data? I have a XAMPP server installed on my local machine, but was unsure which methods to use to migrate data etc. under Windows.
I have some personal PHP projects on shared hosting; here are a couple of thoughts from what I'm doing on one of those (the one that is the most active, and needs at least a semi-automated way of synchronizing):
A few words about my setup:
Some time ago, I had everything in SVN; now I'm using Bazaar, but the idea is exactly the same (except that with Bazaar I have local history and all that)
I have ssh access to the production server, like you do
I work on Linux exclusively (so what I do might not be as easy on Windows)
Now, how I work:
Everything that has to be on the production server (source code, images, ...) is committed to SVN/Bazaar/whatever
I work locally, with Apache/PHP/MySQL (I use a dump of the production DB that I import locally once in a while)
I am the only one working on that project; it would probably be OK for a small team of 2-3 developers, but not more.
What I did before:
I had a PHP script that checked the SVN server for modifications between "last revision pushed to production" and HEAD
I'm guessing this homemade PHP script looks like the Perl script you are currently using ^^
That script built a list of directories/files to upload to production
And uploaded those via FTP access
This was not very satisfying (there were bugs in my script, I suppose; I never took the time to correct those), and it forced me to remember the revision number from the last time I pushed to production (well, it was automatically stored in a file by the script, so not that hard ^^ )
What I do now:
When switching to bazaar, I didn't want to rewrite that script, which didn't work very well anyway
I have dropped the script totally
As I have ssh access to the production server, I use rsync to synchronise from my development machine to the production server, when what I have locally is considered stable/production-ready.
A couple of notes about that way of doing things:
I don't have a staging server: my local setup is close enough to the production one
Not having a staging server is OK for a simple project with one or two developers
If I had a staging server, I'd probably go with:
do an "svn update" on it when you want to stage
when it is OK, launch the rsync command from the staging server (which will be at the latest "stable" revision, so OK to be pushed to production)
With a bigger project, with more developers, I would probably not go with that kind of setup; but I find it quite OK for a (not too big) personal project.
The only thing "special" here, which might be "linux-oriented", is using rsync; a quick search seems to indicate there is an rsync executable that can be installed on Windows: http://www.itefix.no/i2/node/10650
I've never tried it, though.
As a sidenote, here's what my rsync command looks like :
rsync --checksum \
--ignore-times \
--human-readable \
--progress \
--itemize-changes \
--archive \
--recursive \
--update \
--verbose \
--executability \
--delay-updates \
--compress --skip-compress=gz/zip/z/rpm/deb/iso/bz2/t[gb]z/7z/mp[34]/mov/avi/ogg/jpg/jpeg/png/gif \
--exclude-from=/SOME_LOCAL_PATH/ignore-rsync.txt \
/LOCAL_PATH/ \
USER@HOST:/REMOTE_PATH/
I'm using a private/public key mechanism, so rsync doesn't ask for a password, btw.
And, of course, I generally run the same command in "dry-run" mode first, to see what is going to be synchronized, with the "--dry-run" option.
And the ignore-rsync.txt contains a list of files that I don't want to be pushed to production:
.svn
cache/cbfeed/*
cache/cbtpl/*
cache/dcstaticcache/*
cache/delicious.cache.html
cache/versions/*
Here, I just prevent cache directories from being pushed to production; it seems logical not to send those, as production data is not the same as development data.
(I'm just noticing there's still the ".svn" in this file... I could remove it, as I don't use SVN anymore for that project ^^ )
Hope this helps a bit...
Regarding SVN, I would suggest you go with a dedicated SVN host like Beanstalk, or use the same server machine to run an SVN server so both developers can work off it.
In the latter case, your deployment script would simply move the bits to a staging web folder (accessible via beta.mysite.com) and then another deployment script could move that to the live web directory. Deploying directly to the live site is obviously not a good idea.
If you decide to go with a dedicated host, or want to deploy from your machine to the server, use rsync. This is also my current setup. rsync does differential syncs (over SSH), so it's fast, and it was built for just this sort of stuff.
As you grow you can start using build tools with unit tests and whatnot. This leaves only the data sync issue.
I only sync data from remote -> local, using a DOS batch file that does this over SSH using mysqldump. Cygwin is useful for Windows machines, but you can skip it. The SQL import script also runs a one-line query to update some cells, such as hostname and web root, for local deployment.
Once you have this setup, you can focus on just writing code, and remote deployment or local sync and deployment becomes a one-click process.
One option is to use a dedicated framework for the task. Capistrano fits very well with scripting languages such as PHP. It's based on Ruby, but if you do a search, you should be able to find instructions on how to use it for deploying PHP applications.
