Set up my DEV environment for my PHP site with SVN

I am creating my first website that I am taking live soon. It is a big application that I hope goes somewhere eventually but I will be working on it while it is actually up on the web. I want to do it the right way and set up everything efficiently. It is very difficult to actually gain knowledge on how to go about doing this from a book or even the web.
Firstly, I've been doing a lot of research about SVN and although I am currently the only one working on the site I would like the ability to add more people to the project eventually. I know that you have to create a repository where your project will actually reside, and that there are some places that offer hosting for free, like Codesion. I have a good understanding of how it works.
I would like a dev subdomain for the current website so that I can test and work on everything I'm developing, and that SVN would commit to. How does your SVN repository communicate with and upload files to that subdomain? Is this even how you would go about doing this? And how would I go about managing the different databases for each domain?
When I am done with several changes for a new rollout, do I just manually upload all the files to the live site? I am very new to this whole process and it is difficult to find information on it. What is the best practice for rolling out new updates? Are there any other mechanisms or applications that I would need to know about for maintaining my website as it grows?
Please, any help in the right direction would be greatly appreciated.

It depends on how much control you have over your Web server.
Most (IMO) people push their files up to the server through something like FTP/SFTP. Many SVN hosting services offer this as a 'deployment' service. You should absolutely use this option if you don't have shell access or anything of that nature.
Personally, I prefer to have finer levels of control, and let the Web server do the pulling. On a system with full SSH access, you can just have the server do a checkout of the code:
svn co http://server/project/trunk --username foo --password bar
If you want ease of use (mixed with potential control levels), you can also create a PHP administration class to do this for you.
<?php
// run the checkout on the server and print its output (backticks are PHP's shell-exec operator)
echo `svn co http://server/project/trunk --username foo --password bar --non-interactive`;
And often enough, I build a bridged solution. You can create post-commit hooks and set things up so that the SVN server reaches out to a specified URL with the commit comment. The client server then checks the comment and performs actions based on it (it might ignore anything without the tag 'AUTODEPLOY', for instance).
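A minimal sketch of the SVN side of that bridge might look like this (the endpoint URL and the 'AUTODEPLOY' convention are just assumptions for illustration):
#!/bin/sh
# post-commit hook (sketch): forward the revision and its log message
# to a deploy endpoint; the receiving script decides whether to act
REPO="$1"
REV="$2"
MSG=$(svnlook log -r "$REV" "$REPO")
curl -s --data-urlencode "rev=$REV" --data-urlencode "comment=$MSG" \
    http://example.com/deploy-hook.php
The receiving PHP script would then check the comment for 'AUTODEPLOY' and run the checkout shown above.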
There is also Git (which you should be able to do similar things with), but I haven't had a reason to blow up my current processes (and hundreds of SVN repos) in order to go there yet.

That is a lot of concepts to cover all in one question. First - I host my repo directly on my webserver - but I can cover what happens if it's on a remote server as well.
When you set up an SVN repo you should have a post-commit.tmpl file - rename that to post-commit - this is what will run with every commit.
Inside of that:
#!/bin/sh
# POST-COMMIT HOOK
# [1] REPOS-PATH (the path to this repository)
# [2] REV (the number of the revision just committed)
REPO="$1"
REV="$2"
if ( svnlook log -r "$REV" "$REPO" | grep "*DEPLOY-DEV*" )
then
    rm -rf ~/tmp/path/to/repo
    svn export -r "$REV" "file://$REPO" ~/tmp/path/to/repo
    rsync -az ~/tmp/path/to/repo/trunk/ ~/path/to/webroot
fi
if ( svnlook log -r "$REV" "$REPO" | grep "*DEPLOY-LIVE*" )
then
    rm -rf ~/tmp/path/to/repo
    svn export -r "$REV" "file://$REPO" ~/tmp/path/to/repo
    rsync -avz -e "ssh -i /path/to/.ssh/rsync" ~/tmp/path/to/repo/trunk/ root@website.com:/path/to/webroot
    ssh -i /path/to/.ssh/rsync root@website.com chown -R apache:apache /path/to/webroot/
fi
Now to go through the important parts - include *DEPLOY-DEV* or *DEPLOY-LIVE* in your commit message to deploy your site to the path you specify.
The DEPLOY-DEV part of the example works nicely when it's on the same server - you just need to apply the correct paths.
If the server lives somewhere else it may require some ssh keys and other trickery that is a little deeper than I'll get into - but that should give you the direction you need to set it up if you need to.
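For reference, the usual key setup for that remote case looks roughly like this (key path and user@host are placeholders, matching the rsync key used in the script above):
# on the server hosting the SVN repo: generate a passwordless key for the hook
ssh-keygen -t rsa -f ~/.ssh/rsync -N ""
# install the public key on the live web server so rsync/ssh can connect unattended
ssh-copy-id -i ~/.ssh/rsync.pub root@website.com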

If you have ssh access to your webserver, check out two working copies on the server (in the dev and production document roots).
That way you can easily deploy any changes.
You can use a stable branch you merge your changes into for production.
Database connection details you define in a config ini file (or something similar) that is not under version control. That way you can have different values for dev and prod.
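As a sketch, the dev copy of such a file could be set up like this (filename, keys and paths are just examples); prod gets its own copy with different values:
# create the per-environment config and keep it out of version control
cat > /var/www/dev/config.ini <<'EOF'
[database]
host = localhost
name = mysite_dev
user = dev_user
pass = dev_secret
EOF
svn propset svn:ignore 'config.ini' /var/www/dev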
If you don't have ssh access and you want to do it "efficiently" look for a different server with ssh access.


GITHUB : I want to add project from local to live server through github [duplicate]

Is there any way to set up git such that it listens for updates from a remote repo and will pull whenever something changes? The use case is I want to deploy a web app using git (so I get version control of the deployed application) but want to put the "central" git repo on Github rather than on the web server (Github's interface is just soooo nice).
Has anyone gotten this working? How does Heroku do it? My Google-fu is failing to give me any relevant results.
Git has "hooks", actions that can be executed after other actions. What you seem to be looking for is "post-receive hook". In the github admin, you can set up a post-receive url that will be hit (with a payload containing data about what was just pushed) everytime somebody pushes to your repo.
For what it's worth, I don't think auto-pull is a good idea -- what if something wrong was pushed to your branch? I'd use a tool like Capistrano (or an equivalent) for such things.
On unix-likes you can create a cron job that calls "git pull" (every day or every week or whatever) on your machine. On Windows you could use Task Scheduler or the "AT" command to do the same thing.
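On the unix side, that's a one-line crontab entry, e.g. (schedule, path and branch are hypothetical):
# crontab -e, then: pull every night at 02:00 and log the output
0 2 * * * cd /var/www/myapp && git pull origin master >> /var/log/gitpull.log 2>&1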
There are continuous integration programs like Jenkins or Bamboo, which can detect commits and trigger operations like build, test, package and deploy. They do what you want, but they are heavy with dependencies, hard to configure, and in the end they may use periodic checks against the git repository, which has the same effect as calling git pull from cron every minute.
I know this question is a bit old, but you can use git to auto-pull your project using a webhook and PHP (assuming your project involves a webserver).
See my gist here :
https://gist.github.com/upggr/a6d92e2808e9628ebe0d01fd93569f4a
As some have noticed after trying this, if you use php exec(), it turns out that solving for permissions is not that simple.
The user that will execute the command might not be your own, but www-data or apache.
If you have root/sudo access, I recommend you read Jonathan's blog post on this.
When you aren't allowed/can't solve permissions
My solution was a bit creative. I noticed I could create a script under my username with a loop, and git pull would work fine. But that, as pointed out by others, brings up the problem of running a lot of useless git pulls every, say, 60 seconds.
So here are the steps to a more delicate solution using webhooks:
deploy key: Go to your server and type:
ssh-keygen -t rsa -b 4096 -C "deploy" to generate a new deploy key; no need for write permissions (read-only is safer). Copy the public key to your GitHub repository settings, under "Deploy keys".
Webhook: Go to your repository settings and create a webhook. Let's assume the payload address is http://example.com/gitpull.php
Payload: create a PHP file with the code example below in it. The purpose of the payload is not to git pull, but to warn the following script that a pull is necessary. Here's the simple code:
gitpull.php:
<?php
/* Deploy (C) by DrBeco 2021-06-08 */
echo("<br />\n");
chdir('/home/user/www/example.com/repository');
touch('GITPULLMASTER'); // flag file; the loop script below watches for it
?>
Script: create a script in your preferred folder, say, /home/user/gitpull.sh with the following code:
gitpull.sh
#!/bin/bash
cd /home/user/www/example.com/repository
while true ; do
    if [[ -f GITPULLMASTER ]] ; then
        git pull > gitpull.log 2>&1
        mv GITPULLMASTER GITPULLMASTER.`date +"%Y%m%d%H%M%S"`
    fi
    sleep 10
done
Detach: the last step is to run the script in detached mode, so you can log out and keep the script running in the background.
There are two ways of doing that; the first is simpler and doesn't need the screen software installed:
disown:
run ./gitpull.sh & to put it in background
then type disown -h %1 to detach and you can log out
screen:
run screen
run ./gitpull.sh
type control+a d to detach and you can log out
Conclusion
This solution is simple, you avoid messing with keys, passwords, permissions, sudo, root, etc., and you also prevent the script from flooding your server with useless git pulls.
The way it works is that it checks whether the file GITPULLMASTER exists; if not, it goes back to sleep. Only if it exists does it do a git pull.
You can change the line:
mv GITPULLMASTER GITPULLMASTER.`date +"%Y%m%d%H%M%S"`
to
rm GITPULLMASTER
if you prefer a cleaner directory. But I find it useful for debugging to keep the pull dates registered (and untracked).
For our on-premises Windows test servers, we use Windows Task Scheduler tasks, set to run every 3 minutes, pulling from Bitbucket Cloud to repositories on those servers. While not instantaneous, it meets our needs, and has proven to be reliable.

How to upgrade a website with little or no downtime?

I want to upgrade my website without downtime. I did some research and didn't find a way to upgrade it without downtime (a few seconds are fine). I was thinking of an approach as follows, but I am not sure whether there is a better professional way. Please help me out on how to improve this.
1. Add new tables/CFs to the database (the database is Cassandra; we are not supposed to make any changes to existing tables/CFs).
2. Deploy the project on the online server in a different directory, so that users can still use the existing site.
3. Point the uploaded project at a different port and check whether everything is working properly.
4. If everything is working, change the symlink to the uploaded directory.
Please let me know any other good methodology if you have one.
I am using SVN on my local server.
UPDATING THE SERVER ALMOST INSTANTANEOUSLY
OK, for this one of the best and easiest methods is using Git. So, what you should do is host your code in Git, and then whenever you want to update the site, just SSH into the server and do a git pull. It's instantaneous. You can also use Git hooks to further solve your problem, as shown below.
There are a few ways you can go about it, but one of the easier methods would be this.
Let me explain it in detail on how to do it:
The source for your web site should live in a Git repository on the local workstation. I shall describe how I set things up so that I can make changes live by running just "git push online".
The local repository
It doesn't really matter how the local repository is set up, but for the sake of argument, let's suppose you're starting one from scratch.
$ mkdir somesite && cd somesite
$ git init
Initialized empty Git repository in /home/sankalpsingha/somesite/.git/
$ echo 'Test!' > index.html
$ git add index.html
$ git commit -q -m "This is the first push."
The remote repository
I assume that the web site will live on a server to which you have ssh access, and that things are set up so that you can ssh to it without having to type a password (i.e., that your public key is in ~/.ssh/authorized_keys and you are running ssh-agent locally).
On the server, we create a new repository to mirror the local one.
$ mkdir coolsite.git && cd coolsite.git
$ git init --bare
Initialized empty Git repository in /home/sankalpsingha/coolsite.git/
Now let's define a post-receive hook that checks out the latest tree into the web server's DocumentRoot (this directory must exist; Git will not create it for you):
$ mkdir /var/www/www.somesite.org
$ cat > hooks/post-receive
#!/bin/sh
GIT_WORK_TREE=/var/www/www.somesite.org git checkout -f
$ chmod +x hooks/post-receive
Back on the workstation, we define a name for the remote mirror, and then mirror to it, creating a new "master" branch there.
$ git remote add online ssh://server.somesite.org/home/sankalpsingha/coolsite.git
$ git push online +master:refs/heads/master
On the server, /var/www/www.somesite.org should now contain a copy of your files, independent of any .git metadata.
The update process
Just run :
$ git push online
This will transfer any new commits to the remote repository, where the post-receive hook will immediately update the DocumentRoot for you.
(This is more convenient than defining your workstation as a remote on the server, and running "git pull" by hand or from a cron job, and it doesn't require your workstation to be accessible by ssh.)
You can use some CI or deployment tools; I'm doing this manually. Here is the way I'm doing it:
I have a working-copy directory from git, and a git hook to sync it on push.
Then I sync the working-copy directory with the site using rsync.
All database changes are done with migrations, so a single command, yiic migrate, is enough to create new tables or alter existing ones.
With this methodology there is no actual downtime; some pages can be unavailable for 5-10 seconds max, but the whole system keeps working during that time.
"Point the uploaded project at a different port and check whether everything is working properly. If everything is working, change the symlink to the uploaded directory."
That's a bad way to test; for testing you need a test server, or test it on a local machine with the same configs and params as the server. For example, Vagrant can reproduce any environment on your local machine for testing.
If you need to run tests (e.g. unit tests or functional tests), then look at CI tools.

Git workflow for lone developer (total git newbie)

I have decided that it's time for me to start using Git on a PHP project that I have been developing casually for over a decade. (Please, no lectures from the version control police!) Due to the complex setup required on my VPS to do everything the project needs (esp. single-codebase-multiple-client structure and a Japanese-capable installation of TeX to create specialty PDFs), it is not possible to set up a development environment on my local Windows box. But I do have a testbed area on the server that I can play in, so it's my development area. Currently I use Filezilla to access the server and open files directly into Notepad++, and when I'm ready to see my edit in action, I just save and let Filezilla upload. When everything looks good on the testbed, I copy the files to the production codebase area. Yeah, that gives me no history of my changes other than my own comments, and I have to be careful not to mix bug fixes with half-finished new features. I can see the value of Git's branches for different upgrades in progress.
Yesterday I got my toes wet. First I created a Github account, and then (at the recommendation of a tutorial) installed Git For Windows (with its own Bash and tiny-looking GUI) and Kdiff3, and followed some instructions for configuring Git Bash. After all that, though, I ended up having to install something else in order to interface with my Github account (appropriately named Github for Windows), which seems to do all the stuff the other two programs were supposed to do for me. Anyway, then I did a simple task as my first foray into the Github world - I had added functionality to someone else's jQuery plugin and wanted to share it with the developer, so I forked his repo, cloned it to my machine, overwrote the file I had previously edited and tested, synced to my Github account, and sent a pull request. All the terminology in that last sentence was brand new to me, so I was pretty proud of myself that I got that far. ;) But I guess I only needed the Github software, not the Git software - it's hard to know what tutorials to believe.
Anyway, now I want to figure out a workflow for my own stuff, which is my actual question for you guys. From what I can tell, having the master repo anywhere but the public Github costs money, and I don't care if others see my code (I don't expect anyone else to work on my oddball project made of spaghetti code, but if they want to, that's great). Okay, but then what? Perhaps one of these scenarios, or something else:
Clone branches of the repo to my PC, do edits on the local files, and upload them in Filezilla for testing (a couple more clicks than my current workflow because Filezilla doesn't automatically see the relationship between the local file and the remote file, but not a big deal). Then when I'm happy with the code, commit locally, sync to Github, and copy the files (from somewhere - not sure on this point) to the production area.
Install the Linux flavor of Git on my VPS so that the "local" Git file location is the testbed, and use Git through PuTTY to do the local commits. Simpler for file structure (no need for a copy on my PC at all) but more cumbersome to use Git:
I'm not on PuTTY very frequently, and for some reason the connection often dies on me and I have to restart.
Even though the Linux command line is Git's native habitat, I am probably more comfortable with a GUI (because I forget command syntax quickly - old brain, I guess).
Also, since I never ended up using the Git program I installed here, I'm not sure whether it would be Git or Github I would be using on the server.
Some other scenario, since neither #1 nor #2 uses Git/Github to manage the production file area at all, which would probably be a good idea so that I don't forget to copy everything I need.
I tried to research the possibility of a PHP-based GUI to go with idea #2 (so I don't have to use PuTTY for day-to-day operations), but it seems that the discussions of such tools all assume either that you are trying to create your own Github service, or that the "local" cloned repo is physically on your local PC (with xAMP running on whatever OS it is). But maybe the Github software I used is enough to do all that - it's hard to tell. I don't yet understand the interplay between a master public repo on Github, branches somewhere (on Github also?), at least two sets of files on my web server (the testbed and the production area), Github software, Git software, and the keyboard/screen of the computer I'm sitting at.
So pardon my newbie ramblings, but if someone out there has a similar development situation, What's your workflow? Or what would you suggest for me?
Here's one way to approach the issue:
You will need three repositories:
a local repo to edit code. [1]
a bare remote repository on your server. This will be in a location that is not publicly viewable, but that you can ssh in to. [2]
The production environment. [3]
Here's the implementation:
workstation$ cd localWorkingDirectory/
workstation$ git init
workstation$ git add .
workstation$ git commit -m 'initial commit'
workstation$ ssh login@myserver
myserver$ mkdir myrepo.git
myserver$ cd myrepo.git
myserver$ git init --bare
myserver$ exit
workstation$ cd localWorkingDirectory/
workstation$ git remote add origin login@myserver:myrepo.git
workstation$ git push origin master
Every time you make a commit on any branch, back it up with:
workstation$ git push origin BRANCH
When you are ready to move branch version2 into production, do this:
workstation$ git push origin version2
workstation$ ssh login@myserver
myserver$ git clone path/to/myrepo.git productionDirectory
myserver$ cd productionDirectory
myserver$ git checkout version2
Oh no! It doesn't work! Better switch back to version1!
workstation$ ssh login@myserver
myserver$ cd productionDirectory
myserver$ git checkout version1
You don't need github (or any other central store) to start using git. Especially since you're a lone developer. Git runs directly on your own machine, without any server component (unlike for example subversion). Just git init and start committing away.
I agree with the other commenters here, that you should aim to get a local development environment up and running. Even if it takes some effort, it's certainly worth it. One of the side effects of doing so may be that you are forced to decouple some of your current hard dependencies and thereby getting a better overall application architecture out of it. The things that can't easily be replicated in your development environment could instead be replaced with mock services.
Once that is in place, look into a scripted deployment process. E.g. write a shell script that syncs your development machine's codebase with the production server. There are many ways to do this, but I suggest you start really simple, then revise your options (Capistrano is one option).
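A really simple version of such a sync script might be (host and paths are placeholders; the excludes depend on your setup):
#!/bin/sh
# deploy.sh (sketch): push the local codebase to the production docroot,
# skipping VCS metadata and the per-environment config file
rsync -avz --delete \
    --exclude='.git' --exclude='.svn' --exclude='config.ini' \
    ./ user@production.example.com:/var/www/site/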
I'd certainly look at something like capistrano for your current development setup.
I can understand why you might be reluctant to use the terminal, but it would probably help your understanding of git in context. It doesn't take long to get to grips with the commands, and when tied into a system such as capistrano you'll be rocking development code up to your environment in no time:
git commit -a
git push origin develop
cap deploy:dev
When I'm working on Windows I generally try to replicate the deployment environment I have with local virtual machines, using something like Sun's VirtualBox. That way you can minimise potential environment issues while still developing locally. Then you can just use PuTTY to ssh to your local VM. Set up sharing between the VM and your host OS and all your standard IDEs/editors will work too. I find this preferable to having to set up a VPS remotely, but whatever works.

can you make my life easier working this way? (large code base and patching issue)

So I am working on a really large code base: more than 3000 files, more than 1 million lines of code, and 500+ tables.
Though that is not really the issue. The issue here is: when a new feature is required, I work on it locally on my machine, and when the time comes to update/patch our live production:
I ssh to our prod server
I navigate to the directory, and open the file to patch
I copy and paste??? OMG
Anyway, here is my take; please suggest alternatives if you guys have more comfortable ways of doing this.
First, we migrate to Git (we're on SVN).
Every time we make a release, we branch in our git repo, and then clone a new copy on our prod server (right now we make a branch in svn, do an svn export, then copy it to the target dir).
When patching the server with a new feature, I can simply go to the target repo/release and do a git pull? Or should I go with a git patch?
This is how I envision a simpler life.
Can you guys come up with anything much easier than this?
I think you are on the right track. I did something similar.
I have two branches.
Master -> holds latest in dev
Production -> holds latest in production
When I need to make a change to prod, I branch the production branch, make my changes, and then merge back into the production branch. This gives me the option to work on multiple features at the same time.
Then I log on to the box and do git pull.
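In git terms, the workflow described above looks roughly like this (branch and host names are hypothetical):
git checkout production
git checkout -b fix-login-bug        # branch off production for one change
# ...edit, commit, test...
git checkout production
git merge fix-login-bug
git push origin production
# then on the production box:
ssh user@prodbox 'cd /var/www/app && git pull origin production'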
Of course the bigger problem here is that with PHP there is no such thing as a package like there is in Java (WAR packages). :( So I am sure this is a pain.
I wish I could help you more but I can't think of anything else to make your life easier.
Install svn on your prod server and check out a specific branch, let's say production:
( cd /var/www && svn co svn://domain/path/to/branch )
then once you are ready to push your code, just log in to prod and run a script that will svn up that specific folder and maybe do some owner updates on the files:
( cd /var/www && svn up )
( cd /var/www && chown -R www-data:www-data . )
( cd /var/www && svn info )
If you want to do it remotely without logging in to the server, you can run remote commands over ssh
(if the script above is in your home directory and called updateRepo.sh):
ssh user@server updateRepo.sh
If you have your ssh key pair set up, you won't even need to enter your password.
Note: the above will output to bash, so you might want to redirect all of this with 2>&1.
Is this helpful?
It may be time to consider using a continuous integration tool like Hudson/Jenkins.
If not, at the very least consider having different branches or repositories for development and production code as per @Amir Raminfar's suggestion.

How to get started deploying PHP applications from a subversion repository?

I've heard the phrase "deploying applications" which sounds much better/easier/more reliable than uploading individual changed files to a server, but I don't know where to begin.
I have a Zend Framework application that is under version control (in a Subversion repository). How do I go about "deploying" my application? What should I do if I have an "uploads" directory that I don't want to overwrite?
I host my application through a third party, so I don't know much other than FTP. If any of this involves logging into my server, please explain the process.
Automatic deploy + run of tests to a staging server is known as continuous integration. The idea is that if you check in something that breaks the tests, you would get notified right away. For PHP, you might want to look into Xinc or phpUnderControl
You'd generally not want to automatically deploy to production though. The normal thing to do is to write some scripts that automate the task, but that you still need to initiate manually. You can use frameworks such as Phing or other build tools for this (a popular choice is Capistrano), but you can also just whisk a few shell scripts together. Personally I prefer the latter.
The scripts themselves could do different things, depending on your application and setup, but a typical process would be (collected into a single script sketch after the list):
ssh to production server. The rest of the commands are run at the production server, through ssh.
run svn export svn://path/to/repository/tags/RELEASE_VERSION /usr/local/application/releases/TIMESTAMP
stop services (Apache, daemons)
run unlink /usr/local/application/current && ln -s /usr/local/application/releases/TIMESTAMP /usr/local/application/current
run ln -s /usr/local/application/var /usr/local/application/releases/TIMESTAMP/var
run /usr/local/application/current/scripts/migrate.php
start services
(Assuming you have your application in /usr/local/application/current)
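Here's that process collected into one rough sketch (service commands and the tag placeholder are assumptions; adapt to your setup):
#!/bin/sh
# deploy.sh (sketch): run on the production server
set -e
APP=/usr/local/application
TS=$(date +%Y%m%d%H%M%S)
svn export svn://path/to/repository/tags/RELEASE_VERSION "$APP/releases/$TS"
apachectl stop                                # stop services (plus any daemons)
unlink "$APP/current" && ln -s "$APP/releases/$TS" "$APP/current"
ln -s "$APP/var" "$APP/releases/$TS/var"      # shared data survives releases
php "$APP/current/scripts/migrate.php"        # schema migrations
apachectl start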
I wouldn't recommend automatic updating. Just because your unit tests pass doesn't mean your application is 100% working. What if someone checks in a random new feature without any new unit tests, and the feature doesn't work? Your existing unit tests might pass, but the feature could be broken anyway. Your users might see something that's half-done. With automatic deployment from a check-in, you might not notice for a few hours if something made it live that shouldn't have.
Anyhow, it wouldn't be that difficult to get an automatic deployment going if you really wanted. You'd need a post-check-in hook, and really the steps would be:
1) Do an export from the latest check-in
2) Upload export to production server
3) Unpack/config the newly uploaded export
I've always performed the last steps manually. Generally it's as simple as SVN export, zip, upload, unzip, configure, and the last two steps I just alias a couple of bash commands together to perform. Then I swap out the root app directory with the new one, ensuring I keep the old one around as a backup, and it's good to go.
If you're confident in your ability to catch errors before they'd automatically go live, then you could look at automating that procedure. It gives me the jibbly-jibblies though.
At my webdev company we recently started using Webistrano, which is a Web GUI to the popular Capistrano tool.
We wanted an easy to use, fast deployment tool with a centralized interface, accountability (who deployed which version), rollback to previous versions and preferably free. Capistrano is well-known as a deployment tool for Ruby on Rails applications, but not centralized and targeted mainly to Rails apps. Webistrano enhances it with a GUI, accountability, and adds basic support for PHP deployment (use the 'pure file' project type).
Webistrano is itself a Ruby on Rails app, that you install on a development or staging server. You add a project for each of your websites. To each project you add stages, such as Prod and Dev.
Each stage can have different servers to deploy to, and different settings. Write (or modify) a 'recipe', which is a ruby script that tells capistrano what to do. In our case I just used the supplied recipe and added a command to create a symlink to a shared uploads dir, just like you mentioned.
When you click Deploy, Webistrano SSHs into your remote server(s), does an svn checkout of the code, and any other tasks that you require such as database migrations, symlinking or cleanup of previous versions. All this can be tweaked of course, after all, it's simply scripted.
We're very happy with it, but it took me a few days to learn and set up, especially since I wasn't familiar with Ruby and Rails. Still, I can highly recommend it for production use in small and medium companies, since it's proven very reliable, flexible and has saved us many times the initial investment. Not only by speeding up deployments, but also by reducing mistakes/accidents.
This sort of thing is what you would call "Continuous Integration". Atlassian Bamboo (paid), Sun Hudson (free) and CruiseControl (free) are all popular options (in order of my preference) and have support for handling PHPUnit output (because PHPUnit supports JUnit output).
The deployment stuff can be done with a post build trigger. Like some other people on this thread, I would exercise great caution before doing automated deployments on checkin (and test passing).
Check out fredistrano, it's a Capistrano clone.
Works great (a little bit confusing to install, but after that it runs great).
http://code.google.com/p/fredistrano/
To handle uploads, the classic solution is to move the actual directory out of the main webspace, leaving it only for a fresh version to be checked out (as I do in the script below) and then using Apache to 'Alias' it back into place as part of the website.
Alias /uploads /home/user/uploads/
There are fewer choices available to you if you don't have as much control of the server, however.
I've got a script I use to deploy a given script to the dev/live sites (they both run on the same server).
#!/bin/sh
REV=2410
REVDIR=$REV.20090602-1027
REPOSITORY=svn+ssh://topbit@svn.example.com/var/svn/website.com/trunk
IMAGES=$REVDIR/php/i
STATIC1=$REVDIR/anothersite.co.uk
svn export --revision $REV $REPOSITORY $REVDIR
mkdir -p $REVDIR/tmp/templates_c
chown -R username: $REVDIR
chmod -R 777 $REVDIR/tmp $REVDIR/php/cache/
chown -R nobody: $REVDIR/tmp $REVDIR/php/cache/ $IMAGES
dos2unix $REVDIR/bin/*sh $REVDIR/bin/*php
chmod 755 $REVDIR/bin/*sh $REVDIR/bin/*php
# chmod -x all the non-directories in images
find $IMAGES -type f -perm -a+x | xargs -r chmod --quiet -x
find $STATIC1 -type f -perm -a+x | xargs -r chmod --quiet -x
ls -l $IMAGES/* | grep -- "-x"
rm dev && ln -s $REVDIR dev
I put the revision number and date/time into the checked-out directory name. The chmods in the middle also make sure the permissions on the images are OK, as they are also symlinked to our dedicated image server.
The last thing that happens is an old symlink .../website/dev/ is relinked to the newly checked out directory. The Apache config then has a doc-root of .../website/dev/htdocs/
There's also a matching .../website/live/htdocs/ docroot, and again, 'live' is another symlink. This is my other script that will remove the live symlink, and replace it with whatever dev points to.
#!/bin/sh
# remove live, and copy the dir pointed to by dev, to be the live symlink
rm live && cp -d dev live
I'm only pushing a new version of the site every few days, so you might not want to be using this several times a day (my APC cache wouldn't like more than a few versions of the site around), but for me, I find this to be very much problem-free for my own deployment.
After 3 years, I've learned a bit about deployment best practices. I currently use a tool called Capistrano because it's easy to set up and use, and it nicely handles a lot of defaults.
The basics of an automated deployment process goes like this:
Your code is ready for production, so it is tagged with the version of the release: v1.0.0
Assuming you've already configured your deployment script, you run your script, specifying the tag you just created.
The script SSHes over to your production server, which has the following directory structure:
/your-application
    /shared/
        /logs
        /uploads
    /releases/
        /20120917120000
        /20120918120000 <-- latest release of your app
            /app
            /config
            /public
            ...etc
    /current --> symlink to latest release
Your Apache document root should be set to /your-application/current/public
The script creates a new directory in the releases directory with the current datetime. Inside that directory, your code is updated to the tag you specified.
Then the original symlink is removed and a new symlink is created, pointing to the latest release.
Things that need to be kept between releases go in the shared directory, and symlinks are created from each release to those shared directories.
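The symlink swap is what keeps the cutover nearly instant; a minimal sketch of that step, assuming the layout above (the export step is elided and the uploads path is an example):
# publish a new release under the layout above, then swap 'current'
TS=$(date +%Y%m%d%H%M%S)
mkdir -p /your-application/releases/$TS
# ...export/checkout the tagged code into that directory...
ln -s /your-application/shared/uploads /your-application/releases/$TS/public/uploads
ln -sfn /your-application/releases/$TS /your-application/current  # -n replaces the old link in one step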
It depends on your application and how solid the tests are.
Where I work everything gets checked into the repository for review and is then released.
Auto-updating out of a repository wouldn't be smart for us, as sometimes we just check in so that other developers can pull a later version and merge their changes in.
To do what you are talking about would need some sort of secondary check-in and check-out to allow for collaboration between developers in the primary check-in area. Although I don't know anything about that, or if it's even possible.
There are also issues with branching and other such features that would need to be handled.
