We have a production site powered by WordPress. In my experience, WordPress updates tend to go pretty smoothly, but every now and then something goes wrong, so we always run the updates locally or on our dev site first to make sure nothing breaks.
My question is this: is it good practice to commit those changes (from the upgrade) locally and then push them to production, effectively updating the production site? This seems to work, but I know that updates sometimes include modifications to the database. So my fear is that the update will modify my local DB, but NOT the production DB, and then cause problems when the newer code runs (expecting the DB to have been modified).
Is this a valid concern?
Will well-written plugins account for this issue somehow?
Is there an entirely different and better way to do this?
UPDATE:
I think the purpose of this question was initially unclear. I know very well that I can run the update locally, test it, commit, then run the update in production, commit, then merge. That's what we currently do, but it sucks and I'm not sure it's even necessary. The point of this question is to figure that out, or to learn a better way. For example, if someone knows something definitive about the nature of WP updates and how they handle DB modifications, that would pretty much answer this question.
If you are able to successfully execute the update in a test environment, you should be able to then execute the same update in your production environment. It might be a bit more work, but it's going to give you the most information on whether or not an update will work.
If you are in a virtualized environment, you should be able to copy your production virtual machine to test the upgrade.
Even though it takes a few minutes extra, always stick to best practices: complete updates locally, then push to the dev site. On occasion, a plugin will make a database change that is not properly documented.
Best Practice:
1. Wait a day after the update is released, then read the plugin's issue queue. If other people had problems with the update, you will know ahead of time.
2. Back up the database (a command sketch follows this list).
3. Run git status and make sure the branch is clean; commit anything outstanding.
4. Complete all necessary updates.
5. Clear all caches (twice).
6. Check that everything is running smoothly locally.
7. If the update changed the database, make a new database backup.
8. Push the code changes to the dev site.
9. If the update changed the database, restore the backup from #7 there.
Edit: please make sure your local database and code are identical to the dev site before the backup.
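For reference, here is a minimal command sketch of steps 2-4 and 7 above, assuming WP-CLI is installed; the database and file names are hypothetical:

mysqldump -u dbuser -p wp_local > backup-before-update.sql   # step 2: database backup
git status                                                   # step 3: confirm the branch is clean
git commit -am "Snapshot before updates"                     # ...or commit what's outstanding
wp core update && wp core update-db                          # step 4: core update, plus its DB changes
wp plugin update --all
mysqldump -u dbuser -p wp_local > backup-after-update.sql    # step 7: fresh backup if the DB changed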
I prefer a tool like WP Staging to create a test site with a few clicks. Then I update all plugins, and if everything is fine, I do the same on my production site. You can find WP Staging on wordpress.org.
Just make sure you save a copy of your work before doing anything. Also, always check what the update actually is; sometimes it's just a language add-on which you probably don't need.
I have a Moodle database which I exported a few months ago before our server went down. Now I want to generate reports from my old database. I have tried to import it into a new Moodle site, but the moodledata folder is missing, so I'm looking for another way to generate reports from my database. I have tried writing MySQL queries, but I think that would take a lot of time for now. I need help: is there any tool or API I can use to generate reports from my database? I have tried Seal Report for this, but found there is a lot of manual work to be done; I don't mean the tool can't do it, I'm just looking for another tool that can simplify my task.
NB: I know some will say this is not a programming question. Please feel free to suggest any good way to query using any language.
You should be able to set up a local copy of a Moodle site with a copy of the database and with a blank Moodle data folder (I've done this regularly in order to investigate issues on a customer's site).
Once you've done that, you will have access to any reporting tools you would normally have inside Moodle.
You may find it easiest to set up a fresh install of Moodle, pointed at a blank database, then, once the install is finished, edit the config.php file to point at the restored copy of the original site. You may have to purge caches (php admin/cli/purge_caches.php) and you may have to reset the admin password (php admin/cli/reset_password.php). It is also wise to turn off email (edit config.php and add $CFG->noemailever = true; ).
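A rough sketch of that process, assuming you have a MySQL dump of the old site (database and file names here are hypothetical):

mysql -u root -p -e "CREATE DATABASE moodle_old"
mysql -u root -p moodle_old < moodle_backup.sql
# edit config.php so $CFG->dbname points at moodle_old, then:
php admin/cli/purge_caches.php
php admin/cli/reset_password.php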
I'm working in a web agency with a small team (5 developers, 2 designers). We primarily work on PHP/MySQL web applications, including Magento, ExpressionEngine and CakePHP. We use a combination of Windows 7 PCs (developers) and Macs running OS X (designers).
I've been looking into using GitHub for our projects, with 3 main goals:
To see who has edited files, and to allow people to comment on files.
To avoid overwriting each other's work, as multiple people sometimes try to work on the same file at the same time.
To allow rollbacks to previous versions of a file.
This is our current workflow, and I don't understand how GitHub fits in with it at all. I realise that our workflow will need changing, but I haven't been able to find a process which in any way seems to fit:
All our work is done on a remote web server which is specifically for development (nothing "live" sits on here). The server is running Apache, PHP, MySQL, etc. Nobody has a local development environment set up on their machine and we don't want that if possible.
We all have FTP access to the development server mentioned above. We generally edit files directly on this dev server as it gives a very quick way to test things out (literally edit a file, upload it and run it in the browser). There are problems with conflicts, e.g. multiple people trying to edit the same file, which is why I'm looking into using git.
When everything has been approved on the development server it is made live by copying it to a different server. The live server can be anywhere - we use some servers we manage ourselves, sometimes we use third party hosting companies - it varies.
I've been looking into this for the last couple of days and all the approaches I'm finding seem impossible for us to use. Does anyone have any insight into the best way to achieve this? Or am I looking into something which isn't even applicable for the issues I'm trying to resolve?
I would appreciate any useful advice people can offer.
Thanks.
We have a very similar setup at the company where I work. We actually have different sandboxes on the dev server; in other words, we clone the repo into separate sandboxes. Each developer/designer gets a sandbox. For example, if there are 3 developers, there will be 3 sandbox directories + 1 staging directory.
So, developer john gets /home/john/example.com, and it can be viewed at john.example.com (by setting up vhosts).
mike gets /home/mike/example.com viewed at mike.example.com
tracy gets /home/tracy/example.com viewed at tracy.example.com
And there will be one additional staging directory: /home/staging/example.com, viewed at staging.example.com.
Staging merges all the changes together so they can be tested. All of these directories are accessible only from internal IPs.
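Each sandbox is just another clone of the repository. For example (repo URL and paths are hypothetical):

git clone git@git.example.com:example.com.git /home/john/example.com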
We deploy these changes to production using rsync. More information about rsync here: http://www.cyberciti.biz/tips/linux-use-rsync-transfer-mirror-files-directories.html
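A deploy from the staging sandbox can then be a single rsync call; here is a sketch with hypothetical paths, user and host:

# mirror the approved staging tree to production, leaving repo metadata behind
rsync -avz --delete --exclude='.git' /home/staging/example.com/ deploy@www.example.com:/var/www/example.com/

Note that --delete removes files from production that were deleted in staging, so try it with -n (dry run) first.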
You could create a git repository on your testing machine and have everyone on your team use git to push their changes to that repository. This way they will be notified when their changes conflict with those of other people.
A typical workflow could look like this:
Developer1 changes something on his machine.
Designer1 changes some files on his machine.
Designer1 commits his changes to his local git repository.
Developer1 commits his changes to his local git repository.
Developer1 pushes his changes to the development machine.
Designer1 pushes his changes to the development machine.
In case of conflicts with Developer1's changes, he will now be told what those conflicts are and will have to resolve them.
Then the resolved changes will be pushed to the development machine.
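In commands, Designer1's part of the sequence might look like this (a bare repository on the development machine and a master branch are assumptions, as are the names):

git commit -am "Tweak stylesheet"   # commit locally
git push origin master              # rejected if Developer1 pushed first
git pull origin master              # fetch and merge; git lists any conflicts
# resolve the conflicts, git add the fixed files, commit, then:
git push origin master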
This should fix your problems 1.) and 3.), and will make 2.) an explicit action, which means that your developers and designers will see what they are overriding. If changes happen to different parts of a file at the same time, then git may be able to keep both changes without needing further interaction.
But beware that this still has the problem that no one gets to test his own changes without interference from other people, as they may change things at any time while someone is trying to test something. With only 1 development machine you cannot prevent this from happening with git alone. As your team is rather small and your current approach doesn't fix this either, it might be of little significance to you.
I've just gotten my production site up and running. I have a lot more work to do, and I'm realizing the need now for a development server before pushing changes live onto the production site (with users) - obviously...
This thread (and a lot more on Stack) describe me:
Best/Better/Optimal way to setup a Staging/Development server
Anyhow... Reading these threads is outright confusing at times, with all of the terminology thrown around and my limited CentOS/Apache knowledge.
My goal is to:
Make some changes to files as needed.
Test the changes on the same server, to ensure settings are identical.
If all is ok, I can now save a version of this somewhere, perhaps locally is enough for now (Bazaar seems like a possibility?)
Finally, replace all of the changed files via SSH or SFTP or something...
My worries are:
Uploading changes while users are in the system.
How to upload the files that have changed, only.
I'd love somebody to either link to a great guide for what I'm thinking about (that leaves nothing to the imagination, I'd hope) - or offer some kind of suggestion. I'm running in circles trying out different SVNs and programs to manage them, etc...
I'm the only one developing, and I just want a repeatable, trustworthy solution that can work for me without making my life too miserable trying to get it set up (and keep it set up).
Thanks much.
If you have the ability to create a staging subdomain on the production server, here is how I would (and do) handle it:
Develop on your development machine, storing your code in a VCS. I use subversion, but you may find another you prefer. After making changes, you check in your code.
On your production server, you create a subdomain in an Apache VirtualHost which is identical to, but isolated from, your production VirtualHost. Check out your code from the VCS to the staging subdomain area. After making changes, you then run an update from your VCS, which pulls only changed files down. Staging and production may share the same data set, or you may have a separate database for each.
The reason for using a subdomain instead of just a different directory is that it enables you to use the same DocumentRoot for both staging and production. It's also easy to identify where you are if you use something like staging.example.com.
When you're certain everything works as it's supposed to you can run a VCS update on the production side to bring the code up to date.
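Concretely, the cycle might look like this (the repository URL and paths are made up):

# one-time setup of the staging area
svn checkout http://svn.example.com/myproject/trunk /var/www/staging
# each release: pull changed files into staging, test, then update production
svn update /var/www/staging
svn update /var/www/production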
It is important to be sure that you have instructed Apache to forbid access to the VCS metadata directories (.svn, .git, whatever).
Addendum
To forbid access to .svn directories use a rewrite rule like:
RewriteEngine on
RewriteRule .*\.svn/.* - [F]
This will return a 403 for those requests. You could also redirect them to the homepage to make it less obvious they're even present.
In terms of worry #1, remember that even Stack Overflow periodically goes down for maintenance while people are using it. Just provide a good mechanism for switching the site into maintenance mode (and back out) and you'll be fine.
Thank you everyone for the tips/hints/etc...
I've finally found the perfect solution for my needs, SpringLoops... http://www.springloops.com/v2/
It manages my SVN repository (which I use TortoiseSVN with) - and will actually deploy the changes onto my two sites: staging and production.
I highly recommend it, it's working wonderfully so far.
Thanks!
You need a version control system, like Subversion, Git or Mercurial.
When you change files, you commit those changes to a repository. This keeps track of all the changes you've made and lets you undo them if it turns out you did something stupid. It also lets you restore files you accidentally delete or lose.
On top of that, it makes deploying updates as simple as typing 'git pull' or 'svn update' on the production server. Only the files that changed get transferred and put in place.
You will get this advice no matter how many times the question is re-asked. You need version control.
I have a website, say www.livesite.com, which is currently running. I have been developing a new version of the website on my local machine with http://localhost, then committing my changes with svn to www.testsite.com, where I test the site on the livesite.com server but under another domain (it's the same environment as the live site, just under a different domain).
Now I am ready to release the new version to livesite.com. Doing it the first time is easy: I could just copy everything from testsite.com to livesite.com (though I'm not sure that's the best way to do it).
I want to keep testsite.com as a testing site where I push updates, test them, and once satisfied move them to livesite.com, but I am not sure how to do that after the new site is launched. I don't think copying the whole directory over is the right way of doing it, and it would break the operations of current users on livesite.com.
I also want to keep my svn history on testsite.com. What is the correct way of doing this with SVN ? Thank you so much!
Other answers mentioning Hudson or Weploy are good. They cover more issues than what follows. That said, the following may be sufficient.
If you feel that's overkill, here's the poor-man's way of doing it with SVN and a little creative sysadminning.
Make your production document root a symlink, not an actual directory. Meaning you have something like this:
/var/www/myproject-1-0-0
/var/www/myproject-1-1-0
/var/www/myproject-1-1-1
/var/www/html -> myproject-1-1-1
This means you can check out code onto production (say, myproject-1-1-2) without overwriting stuff being served. Then you can switch codebases near-instantly by doing something like:
$ rm html && ln -s myproject-1-1-2 html
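One caveat I'd add: rm followed by ln leaves a brief window with no html symlink at all. On GNU systems you can make the swap effectively atomic instead:

$ ln -s myproject-1-1-2 html.tmp && mv -T html.tmp html

(mv -T renames over the old symlink in a single step.)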
I'd further recommend not doing an svn checkout/svn export of your trunk on the production box. Instead, create a branch ahead of time (named something like myproject-X-Y-Z). That way, if you need to do some very stressful tweaking of production code, you can commit it back to the branch and merge it back to trunk once the fire is extinguished.
I do this a lot, and it works quite well. However, it has some major drawbacks:
Mainly, you have to handle database migrations, or other upgrade scripts, all by yourself. If you have scripts (plain old SQL, or something more involved), you need to think about how best to execute them. Downtime of hopefully-just-a-minute might not be a bad idea. You could keep a "maintenance site" around (/var/www/maintenance), and point the symlink there for a few moments if you needed to.
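That maintenance-window dance could be as simple as this sketch (the database name and migration script are hypothetical):

# run from /var/www
$ ln -sfn maintenance html                          # show the maintenance page
$ mysql -u dbuser -p myproject < upgrade-1-1-2.sql  # run the migration
$ ln -sfn myproject-1-1-2 html                      # switch to the new code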
This method is not nearly as cool as Weploy, for example, but for relatively small projects (running on a single server, with not-huge databases), it's often good enough, and dead simple.
My answer will complicate things a little bit, but here goes:
For this type of scenario I would use Hudson.
Hudson will allow you to have an automated process that deploys, cleans the current directory out, and pulls the new code from svn. You can then worry more about development and less about juggling files and deploying from one place to another.
The caveat is that you need to learn a little bit about how to set up Hudson and make it work for you.
How to get started with PHP for Hudson
I think that should get you on the right track. A bit of work, like I said, but it pays off later on.
If only the server-side code changes, it is possible that you can simply copy the code across and things will be okay. But even then you have to think of the possibility of people in mid-interaction. If the client-side code changes, especially if you are heavily using Ajax, you will have to get current users to reload their pages. If the database also changes, then you have to ensure that no database transactions happen during the time that you are applying the database change scripts.
In all cases, and irrespective of whether you are using any continuous integration tool, I believe it is safest to go for downtime to apply these changes. One of the reasons why people have the "beta" sticker on their sites is so that they can log everyone off and shut them all out to apply changes without notice. As long as they don't do it very frequently they can get away with it too. Once you are out of beta, applying changes becomes a ceremony where you start announcing downtime weeks in advance, then get a window of 30 minutes to a few hours to apply all changes.
For underlying things like patching security flaws in the OS or system software, adding hardware etc, downtime can be avoided if there is load balancing, and the patches are applied one by one.
In the past, I've always edited all my sites live; wasn't too concerned about my 2 visitors seeing an error message.
However, there may come a day when I get more than 2 visitors. What would be the best approach to testing my changes and then making all the changes go live simultaneously?
Should I copy and paste every single file into a sub-folder and edit those, then copy them back when I'm done? What if I have full URLs in my code (they'll break if I move them)? Maybe I can use some .htaccess hackery to get around this? What about database dummy test data? Should I dupe all my MySQL tables and reference those instead?
I'm using CakePHP for the particular project I'm concerned about, but I'm curious to know what approaches people are taking both with Cake (which may have tools to assist with this?), and without a framework.
I've been getting a lot of recommendations for SVN, which sounds great, but unfortunately my host doesn't support it :\
The best thing you can do is to create a staging environment in which you test your changes. The staging environment (ideally) is a complete, working duplicate of your production system. This will prevent you from experiencing many headaches and inadvertent production crashes.
If you are working on a small project the best thing to do is to recreate your remote site locally (including the database). Code all your changes there and then, once you are satisfied that you are finished, deploy the changes to your remote site in one go.
I would recommend putting your website code under full version control (git or subversion). Test and maintain your source in a separate, private sandbox server, and just check out the latest stable version at the production site whenever it's ready for release.
For database support, even for small projects I maintain separate development and production databases. You can version the SQL used to generate and maintain your schema and testing or bootstrapping data along with the rest of your site. Manage the database environment used by your site from an easily separated configuration file, and tell your version control solution to ignore it.
Absolute URLs are going to be a problem. If you can't avoid them, you could always store the hostname in the same configuration file and read it as needed... except within stylesheets and JavaScript resources, of course. My second choice for that problem would be URL-rewriting magic or its equivalent in the development server, and my last choice would be just messing with the /etc/hosts file when I wanted to test features that depend on them.
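The /etc/hosts trick is just one line on the development machine (with example.com standing in for your real domain):

127.0.0.1    www.example.com    # route the live hostname to the local dev server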
I set up a development server on my laptop that duplicates my web server as closely as possible (server software and configuration, operating system, filesystem layout, installed software, etc.) That way I can write the code on my laptop and test it locally; once I've gotten things working there, I copy it to the server. Sometimes a few problems arise because of slight differences between the two computers, but those are always quickly resolved (and just in case they're not, I have my site in an SVN repository so I can always revert it).
On another website I used to maintain, I used a slightly different tactic: I designated a URL path within the site that would be a development version of the base site. That is, http://www.example.com/devweb would ordinarily mirror http://www.example.com, http://www.example.com/devweb/foo/bar.php would mirror http://www.example.com/foo/bar.php, etc. I created a folder devweb under the document root, but instead of copying all the files, I configured the server so that if a requested file didn't exist in the /devweb directory, it would look for it under the document root. That was a more fragile setup than having a separate development server, though.
I have a number of websites written in CakePHP. I develop and test on my local machine, using the database on my production server (I just have a MySQL login that works for my static IP address).
All code is checked into Subversion, and I then have a continuous integration server - Hudson:
https://hudson.dev.java.net/
This builds and deploys my project on the production machine. It just checks the code out of Subversion for a specific project, then runs a simple script to SSH/copy the files into the staging or production location on the server. You can either set this up as a manual process (which I have currently) or set it up to deploy once code has been checked in. There are lots of other CI tools that can be set up to do this (have a look at Xinc as well).
http://code.google.com/p/xinc/
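The deploy script itself can stay tiny. Here's a sketch, assuming SSH keys are set up and using placeholder paths:

#!/bin/sh
# export a clean tree (no .svn metadata) and mirror it to the server
svn export --force http://svn.example.com/myproject/trunk /tmp/build
rsync -avz /tmp/build/ deploy@www.example.com:/var/www/myproject/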
As for absolute URLs, you can always set something up in your hosts file to resolve the site locally on your machine instead. It works for me; just don't forget to take it out afterwards : )
Hope that helps...
I have a version of config/database.php that uses the PHP server variable SERVER_NAME to determine which system the app is running on. Then, when I clone my git repo across my home system, the development site (which shares the same specs as the live machine), and the live machine, they all connect to their respective databases.
I pasted it here, but I also believe it's available on the Bakery.
http://pastebin.com/f1a701145