With our CD process, we have configured the following drush commands to be executed after code sync on the servers -
drush @hostname rr
drush @hostname cc all
drush @hostname fra -y
drush @hostname updb -y
Now I want to know whether executing the above commands can cause an outage.
Thanks
This depends largely on what code you push exactly. The more custom code you have, the more likely something is to break. I've seen a lot of sites running similar commands as part of their deployment routine without a problem. Most likely it's drush cc all that may abort due to memory limit exhaustion. But this won't break your site.
To ensure your commands will run successfully in your live environment, I'd recommend implementing some sort of continuous integration. For example CircleCI (1,500 build minutes free per month) or TravisCI (free for open-source projects). Here is an example: https://github.com/leymannx/drupal-circleci-behat. Though it's for Drupal 8, I guess you'll get the idea.
With that, you basically set up your site from scratch inside some temporary and configurable server (Docker), import a dummy database, run your commands, maybe run some tests (Behat), and then ONLY when everything went fine does the site get deployed to the live server, where your deployment commands run again.
Depending on how often those commands run and how big the site is, they can put a strain on the server and cause an outage. Even if this happens only on deployment, it can still cause an outage depending on a range of factors, but that can be controlled more easily, for example by scheduling the deployment for a time when there isn't much traffic.
Check out a list of drush commands at drupalreference.com
Related
Is there any way to set up git such that it listens for updates from a remote repo and will pull whenever something changes? The use case is that I want to deploy a web app using git (so I get version control of the deployed application) but want to put the "central" git repo on GitHub rather than on the web server (GitHub's interface is just soooo nice).
Has anyone gotten this working? How does Heroku do it? My Google-fu is failing to give me any relevant results.
Git has "hooks", actions that can be executed after other actions. What you seem to be looking for is the "post-receive hook". In the GitHub admin, you can set up a post-receive URL that will be hit (with a payload containing data about what was just pushed) every time somebody pushes to your repo.
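If the repo lives on a server you control rather than GitHub, the same idea works with a plain server-side hook. A minimal sketch (the working-copy path is hypothetical, and the hook file must be executable):
#!/bin/sh
# .git/hooks/post-receive in the repo that receives the push
unset GIT_DIR    # the hook runs with GIT_DIR pointing at the receiving repo
cd /var/www/myapp || exit 1
git pull origin master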
For what it's worth, I don't think auto-pull is a good idea -- what if something wrong was pushed to your branch? I'd use a tool like Capistrano (or an equivalent) for such things.
On unix-likes you can create a cron job that calls "git pull" (every day or every week or whatever) on your machine. On Windows you could use Task Scheduler or the "AT" command to do the same thing.
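For instance, a crontab entry along these lines would pull once a day at 3am (the paths are hypothetical):
# m h dom mon dow  command
0 3 * * * cd /home/user/myapp && git pull >> /home/user/gitpull.log 2>&1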
There are continuous integration programs like Jenkins or Bamboo which can detect commits and trigger operations like build, test, package and deploy. They do what you want, but they are heavy with dependencies, hard to configure, and in the end they may use periodic checks against the git repository, which would have the same effect as calling git pull from cron every minute.
I know this question is a bit old, but you can use the windows log and git to auto-pull your project using a webhook and PHP (assuming your project involves a webserver).
See my gist here:
https://gist.github.com/upggr/a6d92e2808e9628ebe0d01fd93569f4a
As some have noticed after trying this, if you use PHP exec(), it turns out that solving for permissions is not that simple.
The user that will execute the command might not be your own, but www-data or apache.
If you have root/sudo access, I recommend you read Jonathan's blog post.
When you aren't allowed/can't solve permissions
My solution was a bit creative. I noticed I could create a script under my username with a loop, and git pull would work fine. But that, as pointed out by others, brings up the problem of running a lot of useless git pulls every, say, 60 seconds.
So here are the steps to a more delicate solution using webhooks:
deploy key: Go to your server and type:
ssh-keygen -t rsa -b 4096 -C "deploy" to generate a new deploy key; no write permissions needed (read-only is safer). Copy the public key to your GitHub repository settings, under "deploy key".
Webhook: Go to your repository settings and create a webhook. Let's assume the payload address is http://example.com/gitpull.php
Payload: create a PHP file with the code example below in it. The purpose of the payload is not to git pull, but to warn the following script that a pull is necessary. Here's the simple code:
gitpull.php:
<?php
/* Deploy (C) by DrBeco 2021-06-08 */
/* Webhook payload: just flag that a pull is needed. */
echo("<br />\n");
chdir('/home/user/www/example.com/repository');
touch('GITPULLMASTER');
?>
Script: create a script in your preferred folder, say, /home/user/gitpull.sh with the following code:
gitpull.sh
#!/bin/bash
# Poll for the flag file; pull only when the webhook has created it.
cd /home/user/www/example.com/repository
while true ; do
    if [[ -f GITPULLMASTER ]] ; then
        git pull > gitpull.log 2>&1
        # keep a timestamped record of each pull (or just rm the flag)
        mv GITPULLMASTER GITPULLMASTER.`date +"%Y%m%d%H%M%S"`
    fi
    sleep 10
done
Detach: the last step is to run the script in detached mode, so you can log out and keep the script running in the background.
There are 2 ways of doing that; the first is simpler and doesn't need the screen software installed:
disown:
run ./gitpull.sh & to put it in background
then type disown -h %1 to detach and you can log out
screen:
run screen
run ./gitpull.sh
type control+a d to detach and you can log out
Conclusion
This solution is simple; you avoid messing with keys, passwords, permissions, sudo, root, etc., and you also prevent the script from flooding your server with useless git pulls.
The way it works is that it checks whether the file GITPULLMASTER exists; if not, it goes back to sleep. Only if it exists does it do a git pull.
You can change the line:
mv GITPULLMASTER GITPULLMASTER.`date +"%Y%m%d%H%M%S"`
to
rm GITPULLMASTER
if you prefer a cleaner directory. But I find it useful for debugging to keep the pull dates recorded (and untracked).
For our on-premises Windows test servers, we use Windows Task Scheduler tasks, set to run every 3 minutes, pulling from Bitbucket Cloud to repositories on those servers. While not instantaneous, it meets our needs, and has proven to be reliable.
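As a rough sketch, such a task can be registered from an elevated command prompt like this (the task name and repository path are hypothetical):
rem pull every 3 minutes via the Task Scheduler
schtasks /create /tn "GitPull" /sc minute /mo 3 /tr "cmd /c cd /d C:\sites\myrepo && git pull"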
I'm using LAMP with CodeIgniter for one of my projects; version controlled by SVN. Every time I execute svn export file:///svnrepo/project/trunk/www . --force when in the www directory and then reload the web page, it goes blank.
The website only shows up after I do a service httpd restart (Using CentOS 5).
I want to be able to execute the svn export using a Phing build script in the future and I don't want to have to get root privileges and restart apache every time when I do a build.
Is what I'm experiencing a common problem? How do I solve it without restarting Apache?
Edit:
It seems someone has had this problem before: http://codeigniter.com/forums/viewthread/181642/
OK, I got it. SVN maintains a file's last-modified time, which throws off the APC cache. So to solve it, we update the last-modified time of all the files after we run an SVN export. Here is my final script:
#!/bin/sh
svn export --force file:///home/steve/repo/example/trunk \
/home/steve/public_html/example.com/public/
find /home/steve/public_html/example.com/public | xargs touch
You can find more details here.
An alternative solution would be to set apc.stat=0 (reference) in the apc.ini, and then use apc_clear_cache() (reference) to force the removal of the opcode cache.
What's awesome about this solution is that when apc.stat is set to 0, it disables the check on each request to determine if the file has been modified. This results in a huge performance boost.
Additionally, using apc_clear_cache() to clear the APC cache tends to result in a cleaner build. I've run into wonky race conditions where certain files will get built out that have dependencies on others that have not yet been built out. This results in a spate of FATAL errors. The only caveat here is that apc_clear_cache() needs to be run via Apache, so you'll need to implement a wget or something similar for this.
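For example, the build could end with a step like the following, assuming a hypothetical clear-apc.php on the server whose only job is to call apc_clear_cache():
# clear-apc.php contains just: <?php apc_clear_cache(); ?>
# Requesting it through Apache (not the CLI) clears the opcode cache.
wget -qO- http://example.com/clear-apc.php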
I am in charge of launching web projects, and it currently takes a little too long from client sign-off to final launch. It is on a server which I have root access to, but it runs Plesk so that the boss can set up VirtualHosts, which means there are many sites running on it.
Each project has its own git repository so currently I have the following setup.
On my staging server there is a clone of the repo and I have two bare repositories. One is on the forge (powered by Indefero) and the other is on the live server.
Each release of a project is tagged with today's date, e.g. git tag -a deployed-2011-04-20.
So on the staging server I execute something similar to git push --tags live master, which targets the bare repo on the live server.
Then over SSH on the live server I execute a short bash script which basically clones the repository from the live bare repo to the folder Apache will serve.
So, if that all makes sense, would you be able to recommend a tool to make my life easier that follows that workflow, or one that can be adapted to it?
It looks something like this:
Forge (authoritative source)
^
|
v
Staging/development server
|
v
Live server bare repo
|
v
Releases folder (symlinked to htdocs)
One solution that comes to mind is to add a post-receive hook on the live server bare repo in order to detect any deployed-2011-xx-yy tag coming from the staging repo, and to trigger the ssh script from there.
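A minimal sketch of such a hook, assuming a hypothetical deploy.sh that performs the clone-to-Apache-folder step described in the question:
#!/bin/sh
# hooks/post-receive on the live bare repo
while read oldrev newrev refname ; do
    case "$refname" in
        refs/tags/deployed-*)
            # strip the prefix and hand the tag name to the deploy script
            /home/user/bin/deploy.sh "${refname#refs/tags/}"
            ;;
    esac
done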
The other solution is to have a scheduler (like Hudson, mentioned in pderaaij's answer) in order to:
monitor the staging repo and, on the right tag, trigger the push to the live server
monitor the live bare repo, and trigger the ssh script.
The second solution has the advantage of keeping a trace of all release instances in a Hudson job report, each time said job detects the right tag and executes the release process.
Take a look at Capistrano, which happily does the symlink dance you describe here.
If you use Hudson as a continuous integration server, you can make use of the build pipeline plugin.
You have your normal build process, but add an extra job which contains the commands to deploy your application. The plugin gives you a nice button to execute that build.
The Hudson job could execute all the needed commands, or you can take a peek at Maven for PHP and use the available plugins to invoke the remote scripts.
Perhaps it is a bit out of range considering the path you've chosen already, but it's worth the research.
We've got a couple of Drupal sites we develop for; we are a team of 4 developers and about 20+ non-technical content managers.
Each developer has his own dev environment, we all have a beta environment where we test code integration and performance, a staging environment where the content managers test features before we push to the live environment, a training environment we use to train people, and an environment specific to usability testing.
All that is set up with only 1 bare repo on a central server where each environment is a branch. We use the post-receive hook with password-less SSH keys doing the auto-pulling on the appropriate repo, based on a case statement like the following:
#!/bin/sh
# hooks/post-receive: update the instance matching the pushed branch ref
while read line ; do
    BRANCH=`echo $line | sed 's/.*\///g'`
    LOG="`date` - updating $BRANCH instance"
    case $BRANCH in
        "beta" )
            ssh www-data@beta "cd /var/www/beta.example.com; git pull"
            ;;
    esac
done
I have worked within a web development company where we had our local machines, a staging server and a number of production servers. We worked on Macs in Perl and used SVN to commit to stage, and Perl scripts to load to the production servers. Now I am working on my own project and would like to find good practices for web development when using shared web hosting and not working from a unix-based environment (with all the magic I could do with Perl/bash scripting, cron jobs, etc.).
So my question is given my conditions, which are:
I am using a single standard shared web hosting plan from an external provider (with ssh access)
I am working with at least one other person and intend to use SVN for source control
I am developing PHP/MySQL under Windows (but using Linux is a possibility)
What setup do you suggest for testing, deployment, and migration of code/data? I have a XAMPP server installed on my local machine, but was unsure which methods to use to migrate data etc. under Windows.
I have some personal PHP projects on shared hosting; here are a couple of thoughts, from what I'm doing on one of those (the one that is the most active, and needs at least some semi-automated way of synchronizing):
A few words about my setup:
Some time ago, I had everything on SVN; now I'm using Bazaar; but the idea is exactly the same (except that, with Bazaar, I have local history and all that)
I have an ssh access to the production server, like you do
I work on Linux exclusively (so, what I do might not be as easy with Windows)
Now, how I work:
Everything that has to be on the production server (source code, images, ...) is committed to SVN/Bazaar/whatever
I work locally, with Apache/PHP/MySQL (I use a dump of the production DB that I import locally once in a while)
I am the only one working on that project; it would probably be OK for a small team of 2-3 developers, but not more.
What I did before:
I had some PHP script that checked the SVN server for modifications between "last revision pushed to production" and HEAD
I'm guessing this homemade PHP script looked like the Perl script you are currently using ^^
That script built a list of directories/files to upload to production
And uploaded those via FTP access
This was not very satisfying (there were bugs in my script, I suppose; I never took the time to correct those), and it forced me to remember the revision number of the time I last pushed to production (well, it was automatically stored in a file by the script, so not that hard ^^)
What I do now:
When switching to Bazaar, I didn't want to rewrite that script, which didn't work very well anyway
I have dropped the script totally
As I have ssh access to the production server, I use rsync to synchronize from my development machine to the production server, once what I have locally is considered stable/production-ready.
A couple of notes about that way of doing things:
I don't have a staging server: my local setup is close enough to the production one
Not having a staging server is OK for a simple project with one or two developers
If I had a staging server, I'd probably go with:
do an "svn update" on it when you want to stage
when it is OK, launch the rsync command from the staging server (which will be at the latest "stable" revision, so OK to be pushed to production)
With a bigger project, with more developers, I would probably not go with that kind of setup; but I find it quite OK for a (not too big) personal project.
The only thing "special" here, which might be "linux-oriented", is using rsync; a quick search seems to indicate there is an rsync executable that can be installed on Windows: http://www.itefix.no/i2/node/10650
I've never tried it, though.
As a side note, here's what my rsync command looks like:
rsync --checksum \
--ignore-times \
--human-readable \
--progress \
--itemize-changes \
--archive \
--recursive \
--update \
--verbose \
--executability \
--delay-updates \
--compress --skip-compress=gz/zip/z/rpm/deb/iso/bz2/t[gb]z/7z/mp[34]/mov/avi/ogg/jpg/jpeg/png/gif \
--exclude-from=/SOME_LOCAL_PATH/ignore-rsync.txt \
/LOCAL_PATH/ \
USER@HOST:/REMOTE_PATH/
I'm using a private/public key mechanism, so rsync doesn't ask for a password, btw.
And, of course, I generally run the same command in "dry-run" mode first, with the option "--dry-run", to see what is going to be synchronized.
And the ignore-rsync.txt contains a list of files that I don't want pushed to production:
.svn
cache/cbfeed/*
cache/cbtpl/*
cache/dcstaticcache/*
cache/delicious.cache.html
cache/versions/*
Here, I just prevent cache directories from being pushed to production -- it seems logical not to send those, as production data is not the same as development data.
(I'm just noticing there's still the ".svn" in this file... I could remove it, as I don't use SVN anymore for that project ^^ )
Hope this helps a bit...
Regarding SVN, I would suggest you go with a dedicated SVN host like Beanstalk, or use the same server machine to run an SVN server so both developers can work off it.
In the latter case, your deployment script would simply move the bits to a staging web folder (accessible via beta.mysite.com) and then another deployment script could move that to the live web directory. Deploying directly to the live site is obviously not a good idea.
If you decide to go with a dedicated host, or want to deploy from your machine to the server, use rsync. This is also my current setup. rsync does differential syncs (over SSH), so it's fast, and it was built for just this sort of stuff.
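A minimal sketch of such a push, with hypothetical paths:
# mirror the local working copy to the shared host, skipping VCS metadata
rsync -az --delete --exclude='.svn' ./ user@host:/var/www/mysite/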
As you grow you can start using build tools with unit tests and whatnot. This leaves only the data sync issue.
I only sync data from remote -> local, and use a DOS batch file that does this over SSH using mysqldump. Cygwin is useful for Windows machines, but you can skip it. The SQL import script also runs a one-line query to update some cells, such as hostname and web root, for local deployment.
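The core of that sync boils down to a one-liner along these lines (user, host and database names are hypothetical):
# dump the remote database over SSH and load it into the local copy
# (assumes credentials live in ~/.my.cnf on both machines)
ssh user@host "mysqldump mydb" | mysql mydb_local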
Once you have this set up, you can focus on just writing code, and remote deployment or local sync and deployment becomes a one-click process.
One option is to use a dedicated framework for the task. Capistrano fits very well with scripting languages such as PHP. It's based on Ruby, but if you do a search, you should be able to find instructions on how to use it for deploying PHP applications.
I've heard the phrase "deploying applications" which sounds much better/easier/more reliable than uploading individual changed files to a server, but I don't know where to begin.
I have a Zend Framework application that is under version control (in a Subversion repository). How do I go about "deploying" my application? What should I do if I have an "uploads" directory that I don't want to overwrite?
I host my application through a third party, so I don't know much other than FTP. If any of this involves logging into my server, please explain the process.
Automatic deploy + run of tests on a staging server is known as continuous integration. The idea is that if you check in something that breaks the tests, you get notified right away. For PHP, you might want to look into Xinc or phpUnderControl.
You'd generally not want to automatically deploy to production, though. The normal thing to do is to write some scripts that automate the task, but that you still need to initiate manually. You can use frameworks such as Phing or other build tools for this (a popular choice is Capistrano), but you can also just whisk a few shell scripts together. Personally I prefer the latter.
The scripts themselves could do different things, depending on your application and setup, but a typical process would be:
ssh to production server. The rest of the commands are run at the production server, through ssh.
run svn export svn://path/to/repository/tags/RELEASE_VERSION /usr/local/application/releases/TIMESTAMP
stop services (Apache, daemons)
run unlink /usr/local/application/current && ln -s /usr/local/application/releases/TIMESTAMP /usr/local/application/current
run ln -s /usr/local/application/var /usr/local/application/releases/TIMESTAMP/var
run /usr/local/application/current/scripts/migrate.php
start services
(Assuming you have your application in /usr/local/application/current)
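Pulled together, a bare-bones sketch of such a script might look like this (the repository URL and paths come from the steps above; how you stop/start services is an assumption, apachectl here):
#!/bin/sh
# run on the production server; $1 is the release tag, e.g. RELEASE_1_0
TIMESTAMP=`date +%Y%m%d%H%M%S`
APP=/usr/local/application
svn export svn://path/to/repository/tags/$1 $APP/releases/$TIMESTAMP
apachectl stop
unlink $APP/current && ln -s $APP/releases/$TIMESTAMP $APP/current
ln -s $APP/var $APP/releases/$TIMESTAMP/var
$APP/current/scripts/migrate.php
apachectl start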
I wouldn't recommend automatic updating. Just because your unit tests pass doesn't mean your application is 100% working. What if someone checks in a random new feature without any new unit tests, and the feature doesn't work? Your existing unit tests might pass, but the feature could be broken anyway. Your users might see something that's half-done. With automatic deployment from a check-in, you might not notice for a few hours if something made it live that shouldn't have.
Anyhow, it wouldn't be that difficult to get an automatic deployment going if you really wanted. You'd need a post-check-in hook, and really the steps would be:
1) Do an export from the latest check-in
2) Upload export to production server
3) Unpack/config the newly uploaded export
I've always performed the last steps manually. Generally it's as simple as SVN export, zip, upload, unzip, configure; for the last two steps I just alias a couple of bash commands together. Then I swap out the root app directory with the new one, ensuring I keep the old one around as a backup, and it's good to go.
If you're confident in your ability to catch errors before they'd automatically go live, then you could look at automating that procedure. It gives me the jibbly-jibblies though.
At my webdev company we recently started using Webistrano, which is a Web GUI to the popular Capistrano tool.
We wanted an easy-to-use, fast deployment tool with a centralized interface, accountability (who deployed which version), rollback to previous versions, and preferably free. Capistrano is well known as a deployment tool for Ruby on Rails applications, but it is not centralized and is targeted mainly at Rails apps. Webistrano enhances it with a GUI and accountability, and adds basic support for PHP deployment (use the 'pure file' project type).
Webistrano is itself a Ruby on Rails app that you install on a development or staging server. You add a project for each of your websites. To each project you add stages, such as Prod and Dev.
Each stage can have different servers to deploy to, and different settings. Write (or modify) a 'recipe', which is a Ruby script that tells Capistrano what to do. In our case I just used the supplied recipe and added a command to create a symlink to a shared uploads dir, just like you mentioned.
When you click Deploy, Webistrano SSHs into your remote server(s), does an svn checkout of the code, and runs any other tasks that you require, such as database migrations, symlinking or cleanup of previous versions. All this can be tweaked, of course; after all, it's simply scripted.
We're very happy with it, but it took me a few days to learn and set up, especially since I wasn't familiar with Ruby and Rails. Still, I can highly recommend it for production use in small and medium companies, since it's proven very reliable, flexible and has saved us many times the initial investment. Not only by speeding up deployments, but also by reducing mistakes/accidents.
This sort of thing is what you would call "Continuous Integration". Atlassian Bamboo (paid), Sun Hudson (free) and CruiseControl (free) are all popular options (in order of my preference) and have support for handling PHPUnit output (because PHPUnit supports JUnit output).
The deployment stuff can be done with a post-build trigger. Like some other people in this thread, I would exercise great caution before doing automated deployments on check-in (and tests passing).
Check out Fredistrano, it's a Capistrano clone.
Works great (a little bit confusing to install, but after that it runs great).
http://code.google.com/p/fredistrano/
To handle uploads, the classic solution is to move the actual directory out of the main webspace, leaving that space free for a fresh version to be checked out (as I do in the script below), and then using Apache to 'Alias' the uploads directory back into place as part of the website.
Alias /uploads /home/user/uploads/
There are fewer choices available to you if you don't have as much control of the server, however.
I've got a script I use to deploy a given script to the dev/live sites (they both run on the same server).
#!/bin/sh
REV=2410
REVDIR=$REV.20090602-1027
REPOSITORY=svn+ssh://topbit@svn.example.com/var/svn/website.com/trunk
IMAGES=$REVDIR/php/i
STATIC1=$REVDIR/anothersite.co.uk
svn export --revision $REV $REPOSITORY $REVDIR
mkdir -p $REVDIR/tmp/templates_c
chown -R username: $REVDIR
chmod -R 777 $REVDIR/tmp $REVDIR/php/cache/
chown -R nobody: $REVDIR/tmp $REVDIR/php/cache/ $IMAGES
dos2unix $REVDIR/bin/*sh $REVDIR/bin/*php
chmod 755 $REVDIR/bin/*sh $REVDIR/bin/*php
# chmod -x all the non-directories in images
find $IMAGES -type f -perm -a+x | xargs -r chmod --quiet -x
find $STATIC1 -type f -perm -a+x | xargs -r chmod --quiet -x
ls -l $IMAGES/* | grep -- "-x"
rm dev && ln -s $REVDIR dev
I put in the revision number, and the date/time, which is used for the checked-out directory name. The chmods in the middle also make sure the permissions on the images are OK, as they are also symlinked to our dedicated image server.
The last thing that happens is an old symlink .../website/dev/ is relinked to the newly checked out directory. The Apache config then has a doc-root of .../website/dev/htdocs/
There's also a matching .../website/live/htdocs/ docroot, and again, 'live' is another symlink. This is my other script that will remove the live symlink, and replace it with whatever dev points to.
#!/bin/sh
# remove live, and copy the dir pointed to by dev, to be the live symlink
rm live && cp -d dev live
I'm only pushing a new version of the site every few days, so you might not want to be using this several times a day (my APC cache wouldn't like more than a few versions of the site around), but for me, I find this to be very much problem-free for my own deployment.
After 3 years, I've learned a bit about deployment best practices. I currently use a tool called Capistrano because it's easy to set up and use, and it nicely handles a lot of defaults.
The basics of an automated deployment process goes like this:
Your code is ready for production, so it is tagged with the version of the release: v1.0.0
Assuming you've already configured your deployment script, you run your script, specifying the tag you just created.
The script SSH's over to your production server which has the following directory structure:
/your-application
/shared/
/logs
/uploads
/releases/
/20120917120000
/20120918120000 <-- latest release of your app
/app
/config
/public
...etc
/current --> symlink to latest release
Your Apache document root should be set to /your-application/current/public
The script creates a new directory in the releases directory with the current datetime. Inside that directory, your code is updated to the tag you specified.
Then the original symlink is removed and a new symlink is created, pointing to the latest release.
Things that need to be kept between releases goes in the shared directory, and symlinks are created to those shared directories.
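For illustration, the same dance done by hand might look roughly like this (the tag, paths and shared uploads dir are hypothetical):
# export the tagged release into a timestamped directory
RELEASE=/your-application/releases/`date +%Y%m%d%H%M%S`
svn export svn://path/to/repository/tags/v1.0.0 $RELEASE
# link shared state into the release, then switch the live symlink atomically
ln -s /your-application/shared/uploads $RELEASE/public/uploads
ln -sfn $RELEASE /your-application/current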
It depends on your application and how solid the tests are.
Where I work everything gets checked into the repository for review and is then released.
Auto-updating out of a repository wouldn't be smart for us, as sometimes we check in just so that other developers can pull a later version and merge their changes in.
To do what you are talking about would need some sort of secondary check-in and check-out to allow for collaboration between developers in the primary check-in area. Although I don't know anything about that, or whether it's even possible.
There are also issues with branching and other such features that would need to be handled.