WordPress site release management strategy - PHP

I'm updating an existing WordPress site, making significant modifications to the theme and site structure, as well as updating plugins which in turn store their data in the MySQL database.
As far as I'm aware there are three possible strategies here:
1. 'Dump-and-load': dump the MySQL database from DEV, load it on LIVE, and replace the wp-content folder with the latest updates.
2. Import changes via the WP Importer and replace the wp-content folder with the latest updates.
3. Make database changes manually via the WP admin interface and replace the wp-content folder with the latest updates (useful only for minor changes).
While I am developing in my own separate environment, this is an existing website which is currently live and will continue to receive updates from the public, such as comments and contact-form entries, so I expect the live database to have changed by the time I release.
Given this, the options above present the following problems.
1. DUMP AND LOAD
The 'dump-and-load' strategy seems to be out of the question, as my data is being updated behind the scenes (this would have been my preferred approach, as it is easily rolled back).
Result: requires synchronising the databases post-release to pick up the latest updates. TOO COMPLICATED.
2. USE THE IMPORTER
Using the WP Importer plugin, page and post IDs get reassigned, which breaks styling that relies on those IDs to get activated. This creates a CSS nightmare I wish to avoid: going through the CSS after release to replace the old page/post IDs with the ones the database assigned.
Result: too finicky and not a very professional approach, leading to a long and complex release process.
3. UPDATE DATABASE MANUALLY
This option is great for small changes, but for more complex releases the list of steps to follow on the PROD interface becomes long and hard to follow, making it easy to make mistakes.
Result: too easy to screw up; only a last resort.
IS THERE A STANDARD WORDPRESS RELEASE STRATEGY FOR EXISTING WEBSITES?
So basically, my question is: what release process do other WordPress developers follow when UPDATING an existing website? Is there an option I have not listed above that minimizes hassle and reduces time and complexity during release?
I've set up source control for the site using Git, and I am used to automating things via Ant or a similar release script. This may be overkill for the current project, but it would be ideal to at least know of a simple way to update a WordPress site that minimizes the chances of screwing it up.
Thanks!

I don't think this is particular to WordPress; it's a similar situation with any custom site. I personally favor replaying on production the SQL changes that were made on dev. The tricky part is that you have to know what SQL changes were made. For example, a certain plugin may make some schema changes when you install it, and you need to know what they were. You can find out by creating an export of your DB as SQL before installing the plugin, taking another export afterwards, and doing a diff on the two files.
Since you say you're making the modifications, I assume you know what SQL changes you are going to make. Just make sure every change you make to the DB is in the form of an SQL script file, not just an edit through the GUI (you can use the GUI to help write the queries, but save the actual SQL). After all of your changes are done you should have a set of SQL scripts that you ran during your development process, and you should be able to re-run them in order without encountering errors.
Then, when it's time to push to production, create a staging version of production (that is, restore a fairly current DB backup of production). Run your update scripts on it and test that everything is OK. If it is, you can run them on production.
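A minimal sketch of that replay step, assuming the scripts live in a migrations/ folder and are numbered so they sort in execution order (all names and credentials here are illustrative):

<?php
// Apply numbered SQL scripts (001-..., 002-...) in order against the target DB.
// The multi-statement option lets a single file contain several statements.
$pdo = new PDO(
    'mysql:host=localhost;dbname=wordpress;charset=utf8mb4',
    'deploy',
    'secret',
    [PDO::MYSQL_ATTR_MULTI_STATEMENTS => true]
);
foreach (glob('migrations/*.sql') as $file) { // glob() sorts alphabetically
    $pdo->exec(file_get_contents($file));
    echo "applied: $file\n";
}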
Definitely make a backup of production before running any changes on it!

The guy behind WordFence was working on a deployment plugin called Deploymint.
There's a newer one called WP Stack.
Metal Toad Media discussed using Capistrano, though Capistrano isn't specific to WP.
CrowdFavorite launched a service called RAMP.
Needless to say, you have some other options. If you're making DB changes manually, make sure you're handling serialized data correctly; I recommend using Search and Replace DB. WordPress also has a great little trick for changing the site URL entirely from the wp-config file.
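The reason a plain SQL search-and-replace is dangerous is that PHP-serialized option values embed string lengths, so changing the length of a URL corrupts them; that is what a serialization-aware tool like Search and Replace DB handles. For reference, the wp-config trick uses two real WordPress constants (the domain here is illustrative):

// In wp-config.php: override the stored site URLs without touching the DB.
define('WP_HOME',    'https://www.example.com');
define('WP_SITEURL', 'https://www.example.com');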

I assume you have everything running in a test environment. I would then:
Create a new database in your live environment.
Preload it with all content and configurations for the new site.
In your test environment, configure your wp-config.php to point to the new database.
Upload all files to the live server. Upload your wp-config.php last (see the sketch below).
This will minimize downtime.
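A minimal sketch of that final swap, in wp-config.php (the credentials are illustrative):

// wp-config.php: point the uploaded code base at the preloaded database.
define('DB_NAME',     'wp_new');      // the database created in step 1
define('DB_USER',     'wp_user');
define('DB_PASSWORD', 'secret');
define('DB_HOST',     'localhost');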

Related

Merging dev and live PrestaShop databases

I have extensive modifications to carry out on a PrestaShop 1.6 site.
I have created a local copy and am tracking filesystem changes in Git.
However a lot of changes in prestashop are stored in the database, specifically in my case:
Installing and configuring a new module
Uninstalling a module
Adding shop categories and changing the hierarchy
Changing module positions
and generally modifying what modules appear in what hooks.
During the dev process, the live site has received numerous new orders, customers, subscribers etc, so the databases are out of sync.
I have solved similar problems in other frameworks by dumping and importing specific tables in the DB, or by using the framework's built-in migrations functionality, but I cannot find any advice specific to PrestaShop.
How is this handled?
Considering that the dev site has probably undergone more diverse changes than the live one, I wonder if it would be easier to copy the new orders etc. over to the dev site and then overwrite the whole thing.
I don't think it is possible to achieve this in PrestaShop. You would need immense knowledge of the PrestaShop DB (i.e. of each and every table and column in it) to merge the databases.
It is never recommended either.
I suggest you do the syncing manually, as this is a very risky task and you might lose all the data in your live store, which would be even more painful.
For modules, the information is stored in all tables whose names start with module.
Module config values are stored in configuration and configuration_lang. Make sure you also copy the modules' own custom tables, of course.
Shop category information is in all tables whose names start with category.
Module hook information is in all tables whose names start with hook.
However, as Raghubendra Singh said in his answer, this is a very risky task. If you really, really want to do it, I suggest you create another local copy of the currently live site and first try the process between the two local copies to make sure everything works correctly. A sketch of that table copy follows.
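A hedged sketch of copying those configuration tables onto a disposable test copy, assuming both databases sit on the same MySQL server (the database names are illustrative and the table list is not exhaustive):

<?php
// Copy module/hook/configuration tables from the dev DB into a test copy.
$pdo = new PDO('mysql:host=localhost;charset=utf8mb4', 'deploy', 'secret');
$tables = ['ps_module', 'ps_hook', 'ps_hook_module',
           'ps_configuration', 'ps_configuration_lang'];
foreach ($tables as $t) {
    $pdo->exec("DROP TABLE IF EXISTS test_copy.`$t`");
    $pdo->exec("CREATE TABLE test_copy.`$t` LIKE dev.`$t`");
    $pdo->exec("INSERT INTO test_copy.`$t` SELECT * FROM dev.`$t`");
}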
I can tell you my experience of updating PrestaShop and using it every day.
For everyday work (bug fixes or new features) I make the DB changes directly in phpMyAdmin. I test everything in a mirror installation, copy the changes to the prod site and apply the MySQL changes there.
We have only launched 2-3 new major versions of the site (once every 2 years, more or less), waiting each time for a stable version of PrestaShop; even 1.7 still has a few major bugs (translations was one of them, and I'm not sure it's 100% fixed in 1.7.1).
The last one went quite well: we altered the theme to our needs, added a bunch of new features for our customers, etc. When it was time to launch I just analysed the differences in the relevant tables and copied the data from the old DB to the new one, with the added fields and changed defaults, etc., using SSH access, as both were on the same server.
By the way, the old tables we needed were related to address, carrier, cart, category, customer, delivery, feature, group, image (but not image_type), manufacturer, orders, product, range, specific_price, stock_available, supplier, tax, tax_rule, wishlist, zone, country, state, employee, profile, and others used by our modules. Others like the module, configuration and hook tables didn't matter, because it was a completely new theme.
I have always thought about building something to synchronize the dev and live databases, but I still haven't, because we don't make that many major changes, and for the minor ones we try to keep the changes in a file until we apply them (not the most professional, I know). Also, between major versions PrestaShop sometimes changes how things are done. The last case I remember was the access slug in 1.6.something that was not in 1.5: after everything was done I could log in to the back office, but other workers couldn't, because how access was controlled had changed, and since I was superadmin I was not affected by it. Another reason for not doing it right now is that PrestaShop is starting to use Symfony, and I think it will use it even more in the future, changing how things are done; a solution built now might not work later.
We could also use the upgrade feature in modules. I've never tried it, but it could be used to apply upgrades to the DB and other things automatically. It looks promising, but I don't know if it works with a push or only on module upgrade. One of these days I'm going to test it.
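For reference, that mechanism is a file in the module's upgrade/ folder that PrestaShop runs automatically when the installed version is older than the one on disk; a minimal sketch (the module and table names are hypothetical):

<?php
// modules/mymodule/upgrade/upgrade-1.1.0.php
function upgrade_module_1_1_0($module)
{
    // Apply this release's DB change; returning false aborts the upgrade.
    return Db::getInstance()->execute(
        'ALTER TABLE `' . _DB_PREFIX_ . 'mymodule_data` ADD `notes` TEXT NULL'
    );
}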
This is not a response with a solution, but I'm interested in one, and in working on one if there isn't any. It would be interesting to be able to push, and not have to change things in the DB by hand.

How to use source control with Joomla while allowing users the ability to continue making content changes on the production server?

Scenario:
My team manages multiple Joomla websites for our clients. While we manage the development/hosting of the sites, the clients do all of the content updates (creating articles, content, uploading images etc...)
We run these websites in the following standard configuration where we have
A development server
A staging server
A production server
The client makes all of the content updates to the production server (on a daily basis). The other two servers are used primarily for new development and testing.
We currently use BitBucket for source control of these websites (we are just starting out with this). Currently all files pertaining to each website are stored in its repo.
The Problem
Based on our current setup, if a developer makes changes in the dev environment and that change set is then pushed to the production environment, we end up overwriting all of the content updates our clients have made in production.
My Question
How do we successfully utilize a source control system, and maintain the flexibility to allow our clients to continue to make updates directly on the production server, without forcing them to make content changes on dev, staging, and then production?
While you briefly described your workflow, there are some things to be considered. There is no general rule, but look into the following suggestions:
Put into version control JUST the extensions you have developed (template, components, plugins, etc.); the customizations are the only things you add to Joomla anyway, hopefully with no core hacks. Alternatively, if you really want to version the whole installation, you should at least ignore the media folders that are changed by the clients / you. I see no need to put the whole Joomla site under version control.
I imagine your clients are not actually changing PHP scripts, just media files and database entries. You should push to production only code and/or database schema changes.
If you are relying on a commit-then-push-to-FTP feature, or pushing files manually, I would suggest building a distributable version of your changes, in the form of a package that can be deployed via the Extension Manager. Building packages can be done in one click with a tool like Phing. For example, if you make some changes to the template, create a new template version, build the package, and update the staging/testing server first; if all goes well, then production.
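A minimal sketch of a Phing build file for that kind of packaging (the project, target and path names are all illustrative):

<?xml version="1.0"?>
<project name="mytemplate" default="package">
  <target name="package" description="Zip the template for the Extension Manager">
    <zip destfile="dist/tpl_mytemplate_1.0.1.zip">
      <fileset dir="src/templates/mytemplate"/>
    </zip>
  </target>
</project>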
Some things shouldn't be in version control.
In general, source code should be versioned and data should not. I'm not familiar with Joomla, but any kind of "uploaded content" directory should be in the ignore file for your version control system. That way you can deploy changes to the software without worrying about overwriting data.
Of course, your data should be backed up regularly, but that's not what revision control is for.
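With Subversion, for example, keeping an uploaded-content directory out of version control is a one-line property on its parent (the folder name here is assumed): svn propset svn:ignore 'images' .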
If you have
a staging server where you test website changes (layout, new functionality, new CSS)
a production server where users publish new content
you need partial database updates along with file synchronization.
The database is the hard part, as the assets table may be affected both by configuration changes on the staging server and by new content on the production server; for this and any other shared tables, we address the issue by making sure the IDs can't conflict, leaving a sufficient gap right after the update (see the sketch below).
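A minimal sketch of that gap trick (the table name and offset are illustrative): right after the update, bump one side's auto-increment counter well past the other's, so rows created on the two servers can't collide.

<?php
// Leave an id gap so content created on production can't collide with
// configuration rows created on staging.
$pdo = new PDO('mysql:host=localhost;dbname=joomla;charset=utf8mb4', 'deploy', 'secret');
$pdo->exec('ALTER TABLE `jos_assets` AUTO_INCREMENT = 100000');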
Although, as most other answers note, revision control is for neither data nor the database, it is indeed very useful here, especially with pre- and post-commit hooks to perform the required database actions; however, any scripting/publishing tool, from rsync/rdiff to Phing to Ant or Maven, will do.

WordPress Development vs. Production Environments

I am about to use WordPress as a CMS with a lot of customization. My question is: how should I sync development to production?
When WordPress is not involved, a typical development cycle is to develop the pages locally and create the DB scripts. Then, when everything is ready, it is pushed to the site. After that come more DB and code changes, and only the changes are updated/applied, and so on.
Now, with WordPress, you get all the great features (blogging, comments, an almost ready CMS, etc.). However, deployment is a pain! Ideally, I would rather keep the typical development cycle described above, so I am going to install WordPress locally (from wordpress.org) and then push the changes to my production server.
So, assuming I create a new page in WordPress locally (I will never create pages on the server, all locally; I will find a way to disable wp-admin on the server), a number of files are created. This is not a problem so far. HOWEVER, if I want to add content to that newly created page, that content is saved to my local database. Even though that content is a database change, from my point of view it is a development change that should be pushed to the server rather than added via the live server (because that content is static; it is not a blog post or a comment, it is a static page).
So that new page content is saved to the DB, and therefore the DB will have changes made on my local machine that I should push to the server along with the files that I will FTP up.
My questions are:
1. Is this approach correct? If not, what do you suggest?
2. What is the best way to find database diffs? What tool should I use? Does MySQL Workbench provide something like that? I intend to use such a tool to find the diffs and then generate an update script for the DB. The reason for this question is that I normally make the changes myself and know how to track them, but here the DB changes are generated by WordPress and I need to reverse-engineer them to find out which changes were made.
3. Assuming I get step 2 to work, is there anything in that script that should be modified, such as server names? Does WordPress hard-code server names, for example?
To summarize and give you more information about my development environment: I use XAMPP for development on Windows 7, with PHP and MySQL set up, and Mercurial for source control. As mentioned above, I will use WordPress as part of the solution and intend to use it to help me build a CMS. I will use it locally for page generation and disable that feature online (keeping the online site for blog posts and similar entries only). I am doing this to keep things in sync. If I create a page locally, some data is saved to the DB. Now, how do I sync/upload?
Thanks.
OK, after further investigation, here is what I concluded.
All theme development should be version-controlled
All plugin development should be version-controlled
Content of pages and posts is not part of the development process; it is content and should only be backed up (see the backup sketch below).
This way, you do not need to worry about DB changes, etc.
Hope this helps everyone.
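A minimal sketch of that content-only backup, assuming mysqldump is on the PATH and the default wp_ table prefix (credentials and paths are illustrative):

<?php
// Dump only the content tables; theme and plugin code live in version control.
$tables  = 'wp_posts wp_postmeta wp_comments wp_commentmeta';
$outfile = 'backups/content-' . date('Ymd') . '.sql';
exec(sprintf(
    'mysqldump --user=%s --password=%s wordpress %s > %s',
    escapeshellarg('wp_user'),
    escapeshellarg('secret'),
    $tables,
    escapeshellarg($outfile)
));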
You might use a version control system. What OS are you developing on (Windows or Linux), and what is the production OS? I use http://serverpress.com for my testing environment, though there are others (WAMP, LAMP, etc.).

Syncing Drupal site between dev, staging and production

Often, after a Drupal (6.x) site is launched, people start to sign up and enter their own content. Whenever an upgrade is needed, the production database is copied to dev, development is done on dev, and the result is later pushed to staging for the client's approval.
When the site is eventually ready to go live, there is a problem: the production server has the latest user-entered content, while dev and staging have the latest functionality. Simply overwriting the database on production won't work. What I usually do is write down what was done on dev and then follow those steps to repeat the implementation on production. As the system grows bigger, a single mistake on production may cause loss of business. I can't shut down the site for several hours, and I can't tell how many people are using the site at a given time, so it's impossible to wait for a moment when nobody is on the site to do the upgrade.
Does anyone have any good ideas?
Thanks in advance.
There are two concepts you need to look into: The first is "Exportables" which is generally a way of exporting all the configuration of a given module. The second is "Features" (terribly named, yes) which is a way of grouping a set of Exportables into a given changeset for version control, updating, deployment, rollback, etc.
For clarification, many modules implement their own "Exportables" methodology; what I linked to above is the Exportables module. Here's a wider strategy for it: http://www.sthlmconnection.se/tips-and-tweaks/exportable-configuration-your-drupal-module-ctools
It's the million dollar question: How to transfer code, configuration and content between different Drupal sites? In Drupal, code is stored in files (or at least it should be) while configuration and content are usually in the database.
Taking your code from one server to another isn't that hard, and code has another advantage: it's easy to store and manage in a version control system like SVN or GIT. That's why most solutions focus on taking stuff out of the database and putting it into code.
Already mentioned by CaseySoftware, the Features module is what you need to store configuration in code. Features has had a stable release for a few weeks now, and the community seems to agree that Features is the way forward.
Moving content between sites is a little harder, because content can be added or changed on dev, staging and production simultaneously. Exportables is one attempt to solve that, but it's not the only one; also check out Deploy and the Features-based UUID Features Integration modules. None of these modules is stable yet, and time will tell which one is the best solution.

How to efficiently manage multiple installations of a web application?

From my experience, one of the bigger problems we come across in our web development process is keeping different setups updated and secure across different servers.
My company has its own CMS, which is currently installed across 100+ servers. At the moment we use a hack-ish FTP-based approach, combined with upgrade scripts at specific locations, to upgrade all of our CMS setups. Efficiently managing these setups becomes increasingly difficult and risky when several custom modules are involved.
What is the best way to keep multiple setups of a web application secure and up-to-date?
How do you do it?
Are there any specific tips regarding modularity in applications, in order to maintain flexibility towards our clients, but still being able to efficiently manage multiple "branches" of an application?
Some contextual information: we mainly develop on the LAMP stack. One of the main factors that helps us sell our CMS is that we can plug in pretty much anything our client wants. This can vary from 10 to 10,000 lines of custom code.
A lot of custom work consists of very small pieces of code; managing all these small pieces of code in Subversion seems quite tedious and inefficient to me (since we deliver around 2 websites every week, this would result in a lot of branches).
If there is something I am overlooking, I'd love to hear it from you.
Thanks in advance.
Roundup: first of all, thanks for all of your answers. All of these are really helpful.
I will most likely use an SVN-based approach, which makes benlumley's solution closest to what I will use. Since the answer to this question might differ in other use cases, I will accept the answer with the most votes at the end of the run.
Please examine the answers and vote for the ones that you think have the most added value.
I think using a version control system and "branching" the parts of the code that you have to modify could turn out to be the best approach in terms of robustness and efficiency.
A distributed version system could be best suited to your needs, since it would allow you to update your "core" features seamlessly on different "branches" while keeping some changes local if need be.
Edit: I'm pretty sure that keeping all that up to date with a distributed version system would be far less tedious than you seem to expect: you can keep the changes you are sure you're never going to need elsewhere local, and the distributed aspect means each of your deployed applications is actually independent from the others, so only the fixes you mean to propagate will propagate.
If customizing your application involves changing many little pieces of code, this may be a sign that your application's design is flawed. Your application should have a set of stable core code, extensibility points for custom libraries to plug into, the ability to change appearance using templates, and the ability to change behavior and install plugins using configuration files. In this way, you don't need a separate SVN branch for every client. Rather, keep the core code and extension plugin libraries in source control as normal. In another repository, create a folder for each client and keep all their templates and configuration files there.
For now, creating SVN branches may be the only solution that helps you keep your sanity. In your current state, it's almost inevitable that you'll make a mistake and mess up a client's site. At least with branches you are guaranteed to have a stable code base for each client. The only gotcha with SVN branches is that if you move or rename a file in a branch, it's impossible to merge that change back down to the trunk (you'd have to do it manually).
Good luck!
EDIT: For an example of a well-designed application using all the principles I outlined above, see Magento E-Commerce. Magento is the most powerful, extensible and easy to customize web application I've worked with so far.
I may be wrong, but it seems to me what Aron is after is not version control. Versioning is great, and I'm sure they're using it already, but for managing updates on hundreds of customized installations, you need something else.
I'm thinking something along the lines of a purpose-built package system. You'll want every version of a module to keep track of its individual dependencies and 'guaranteed compatibilities', and use this information to automatically update only the 'safe' modules.
E.g. let's say you've built a new version 3 of your 'Wiki' module. You want to propagate the new version to all the servers running your application, but you've made changes to one of the interfaces within the Wiki module since version 2. Now, for all default installations, that is no problem, but it would break installations with custom extensions on top of the old interface. A well-planned package system would take care of this.
To address the security question, you should look into using digital signatures on your patches. There are lots of good libraries available for public-key-based signatures, so just go with whatever seems to be the standard for your chosen platform.
Not sure whether someone's said this, there are a lot of long responses here, and I've not read them all.
I think a better approach to your version control would be to have your CMS sit in its own repository and each project in its own (or all of these could be subfolders within one repo, I guess).
You can then use its trunk (or a specific branch/tag if you prefer) as an svn:external in each project that requires it. This way, any updates you make to the CMS can be committed back to its repository and will be pulled into other projects as and when they are svn updated (or the external is svn:switch'ed). An example command follows the layout below.
As part of making this easier, you will need to make sure the CMS and the custom functionality sit in different folders, so that svn externals works properly.
For example:
project
project/cms <-- cms here, via svn external
project/lib <-- custom bits here
project/www <-- folder to point apache/iis at
(you could have cms and lib under the www folder if needed)
This will let you branch/tag each project as you wish. You can also switch the svn:external location on a per branch/tag basis.
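For reference, wiring up the external is a single property plus an update (the repository URL is illustrative):
svn propset svn:externals "cms https://svn.example.com/cms/trunk" project/
svn update project/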
In terms of getting changes live, I'd suggest you immediately get rid of FTP and use rsync or svn checkout/export. Both work well; the choice is up to you.
I've got the most experience with the rsync route, rsyncing an svn export to the server. If you go down this route, write some shell scripts: using rsync's -n flag you can make a test script that shows which files would be uploaded without actually sending them. I generally use a pair of scripts for each environment, one to test and one to actually deploy.
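A minimal sketch of such a pair (the paths and host are illustrative); the only difference is the -n (dry-run) flag:
rsync -avzn --delete ./export/ deploy@www.example.com:/var/www/site/   # test: list what would change
rsync -avz --delete ./export/ deploy@www.example.com:/var/www/site/    # deploy for real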
Shared-key authentication, so you don't need a password to upload, may also be useful, depending on how much you trust the server being given the access.
You could also maintain another shell script for doing bulk upgrades, which simply calls the relevant shell script for each project you want to upgrade.
Have you looked at Drupal? No, not to deploy and replace what you have, but to see how they handle customizations and site-specific modules?
Basically, there's a "sites" folder which has a directory for every site you're hosting. Within each folder is a separate settings.php which allows you to specify a different database. Finally, you can (optionally) have "themes" and "modules" folders within sites.
This allows you to do site-specific customizations of particular modules and limit certain modules to those sites. As a result, you end up with a site that the vast majority of everything is perfectly identical and only the differences get duplicated. Combine that with the way it handles upgrades and updates and you might have a viable model.
Build into the code a self-updating process.
It will check for updates and run them when/where/how you have configured it for the client.
You will have to create some sort of a list of modules (custom or not) that need to be tested with the new build prior to roll-out. When deploying an update you will have to ensure these are tested and integrated correctly. Hopefully your design can handle this.
Updates are ideally a few key steps.
a) Backup so you can back out. You should be able to back out the entire update at any time. So, that means creating a local archive of the application and database first.
b) Update Monitoring Process - Have the CMS system phone home to look for a new build (see the sketch after this list).
c) Schedule Update on availability - Chances are you don't want the update to run the second it is available, so you will have to create a cron job/agent of some kind to do the system update automatically in the middle of the night. You can also consider client requirements to update on weekends or on specific days, and stagger your roll-outs so you don't update 1000 clients in one day and land in tech-support hell. A staggered roll-out of some kind would probably be beneficial for you.
d) Add maintenance mode to update the site -- Kick the site into maintenance mode.
e) SVN checkout or downloadable packages -- ideally you can deploy via svn checkout; if not, set up your server to deliver SVN-generated packages as archives that can be deployed on client sites.
f) Deploy DB scripts - Back up the databases, update them, populate them.
g) Update site code - All this work for one step.
h) Run some tests on it. If your code has self-tests built in, it would be ideal.
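A minimal sketch of steps (b) and (d), the phone-home check and the maintenance switch (the URL, paths and version scheme are all illustrative):

<?php
// Run from cron: compare the installed version against the update server.
$current = trim(file_get_contents('/var/www/cms/VERSION'));
$latest  = trim(file_get_contents('https://updates.example.com/latest.txt'));
if (version_compare($latest, $current, '>')) {
    touch('/var/www/cms/.maintenance');  // step d: kick into maintenance mode
    // steps a, e, f, g: archive, fetch the new build, run DB scripts, swap code
    // step h: run self-tests, then leave maintenance mode
    unlink('/var/www/cms/.maintenance');
}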
Here's what I do...
Client-specific include path
Shared, common code is in shared/current_version/lib/
Site specific code is in clients/foo.com/lib
The include path is set to include from clients/foo.com/lib first, and then shared/current_version/lib
The whole thing is in a version control system
This ensures that the code uses shared files wherever possible, but if I need to override a particular class or file for some reason, I can write a client specific version in their folder.
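A minimal sketch of that include-path setup (the paths are illustrative):

<?php
// Client files win; shared files are the fallback.
set_include_path(
    '/var/www/clients/foo.com/lib' . PATH_SEPARATOR .
    '/var/www/shared/current_version/lib' . PATH_SEPARATOR .
    get_include_path()
);
// require 'Page.php' now loads the client override if one exists,
// otherwise the shared copy.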
Alias common files
My virtual host configuration will contain a line like
Alias /common <path>/shared/current_version/public_html/common
Which allows common UI elements, icons, etc. to be shared across projects.
Tag the common code with each site release
After each site release, I tag the common code by creating a branch to effectively freeze that point in time. This allows me to deploy /shared/version_xyz/ to the live server. Then I can have a virtual host use a particular version of the common files, or leave it pointing at the current_version if I want it to pick up the latest updates.
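For reference, freezing the common code that way is a single server-side copy (the repository URL is illustrative): svn copy https://svn.example.com/shared/current_version https://svn.example.com/shared/version_xyz -m "freeze shared code for site release"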
Have you looked at tools such as Puppet (for system administration incl. app deployment) or Capistrano (deployment of apps in RubyOnRails but not limited to these)?
One option would be to set up a read-only version control repository (Subversion). You could integrate access to the repository into your CMS and invoke updates through a menu, or automatically if you do not want the user to have a choice about an update (which could be critical). Using a version control system would also allow you to keep different branches easily.
As people have already mentioned, using version control (I prefer Subversion for its functionality) and branching would be the best option. There is also an open-source tool on SourceForge called CruiseControl. It's amazing: you configure CruiseControl with Subversion in such a way that any code modification or new code added to Subversion is picked up automatically, and it does the build for you. It will save you a hell of a lot of time.
I have done it the same way in my company. We have four projects and have to deploy them to different servers. I have set up CruiseControl so that any modification in the code base triggers an automatic build, and another script then deploys that build to the server. You are good to go.
If you use a LAMP stack, I would definitely turn the solution's files into a package for your distribution and use that to propagate changes. I recommend Red Hat/Fedora for this because of RPM, and it's what I have experience with, but you can use any Debian-based distribution too.
Some time ago I made a LAMP solution for managing an ISP's hosting servers. They had multiple servers for web hosting, and I needed a way to deploy changes to my manager application, because every machine was self-contained and ran its own online manager. I made an RPM package containing the solution files (PHP mostly) and some deployment scripts that ran with the RPM.
For automated updating we had our own RPM repository configured on every server in yum.conf, and I set up a crontab job to update the servers daily with the latest RPMs from that trusted repository.
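Such a crontab entry is a one-liner (the package name is illustrative): 0 4 * * * yum -y update ourcms >> /var/log/cms-update.log 2>&1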
Trust can be achieved too, because RPM supports signing packages with your key and configuring the servers to accept only signed packages.
Hm, could it be an idea to add configuration files? You wrote that a lot of small scripts are doing something; if you built them into the sources and steered them with configuration files, shouldn't that ease things?
On the other hand, having a branch for every customer looks like exponential growth to me. And how would you know in which areas you've changed something, and not forget to make the same changes in all the other branches? That looks quite ugly to me.
It seems a combination of revision control, configuration options and/or deployment recipes would be a good idea.
With that many variations on your core software, I think you really need a version control system to stay on top of pushing updates from the trunk to the individual client sites.
So if you think Subversion would be tedious, you've got a good sense of what the pain points will be... Personally, I wouldn't recommend Subversion for this, since it's not really that good at managing and tracking branches. Although benlumley's suggestion to use externals for your core software is a good one, it breaks down if you need to tweak the core code for your client sites.
Look into Git for version control, it's built for branching, and it's fast.
Check out Capistrano for managing your deployments. It's a Ruby tool, often used with Rails, but it can be used for all sorts of file management on remote servers, even non-Ruby sites. It can get the content to the remote end through various strategies including FTP, SCP and rsync, as well as by automatically checking out the latest version from your repository. The nice features it provides include callback hooks for every step of the deploy process (e.g. so you can copy site-specific configuration files which might not be in version control) and a release log system, done through symlinks, so you can quickly roll back to a previous release in case of trouble.
I'd recommend a config file with the list of branches and their hosted location, then run through that with a script that checks out each branch in turn and uploads the latest changes. This could be cron'd to do nightly updates automatically.
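A minimal sketch of that config-plus-script approach (the file format, names and hosts are all illustrative):

<?php
// branches.ini:
//   [foo.com]
//   branch = branches/foo
//   host   = foo.example.com
$sites = parse_ini_file('branches.ini', true);
foreach ($sites as $name => $cfg) {
    exec(sprintf('svn export -q --force https://svn.example.com/cms/%s /tmp/%s',
                 $cfg['branch'], $name));
    exec(sprintf('rsync -az --delete /tmp/%s/ deploy@%s:/var/www/%s/',
                 $name, $cfg['host'], $name));
}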
