I'm having problems keeping consistent versions of a repository on different local machines, and pushing and pulling effectively.
When I worked with Rails, I could push and pull easily and all the files required to start a Rails server were included.
With WordPress, I have to add files like wp-config.php to .gitignore, so when I pull the repository on a new computer I cannot start a local server (through DesktopServer). I did try manually transferring wp-config.php, since that wasn't too inconvenient, but a database error followed, so I need a more complete solution.
How do you transfer entire WP repositories between developers through version control? I want to be able to push and pull, not drag and drop.
(One solution I thought of: Duplicate the WP base, connect the remote repository to the base, then pull and merge the updated site into the base server.)
(Another possible solution: moving the database-config and salt lines from wp-config.php into a dbsalts.php file, then including that file from wp-config.php. I would then add dbsalts.php to .gitignore and remove wp-config.php from it, so the big important stuff would be ignored but the reduced wp-config.php would be pushed. Not sure if this would work, and we'd still have to drag and drop dbsalts.php.)
dbsalts.php

define( 'DB_NAME', ... );  // database credentials (redacted for security)
// ...
define( 'AUTH_KEY', ... ); // authentication salts (redacted for security)
// ...

wp-config.php

include( 'dbsalts.php' );
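For what it's worth, a minimal sketch of how that split could look, assuming the file layout proposed above (all values are placeholders; real salts would come from the wordpress.org secret-key generator at https://api.wordpress.org/secret-key/1.1/salt/):

dbsalts.php (kept out of Git):

<?php
// Machine-specific credentials and salts; regenerate the salts per environment.
define( 'DB_NAME', 'local_db' );
define( 'DB_USER', 'local_user' );
define( 'DB_PASSWORD', 'local_password' );
define( 'DB_HOST', 'localhost' );
define( 'AUTH_KEY', 'put your unique phrase here' );
// ... the remaining keys and salts ...

wp-config.php (committed):

<?php
// Pull in the machine-specific settings; fail loudly if the file is missing.
require_once dirname( __FILE__ ) . '/dbsalts.php';

Since wp-config.php already runs as PHP, only the require line changes there; everything else in the stock wp-config.php stays committed.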
Currently using WP Engine and DesktopServer, but I'm just now implementing this and open to suggestions.
*Private Bitbucket Git repository – Most of the time the only truly sensitive information in a WordPress install is in the wp-config.php file, which is preferable not to include in the Git repository; even so, keep whole sites in private repos, which are free on Bitbucket. Open-sourcing your entire site might make sense, but it's something to think through thoroughly before doing so, and erring on the side of caution is a good policy, especially when doing client work.
*WP Migrate DB Pro – While Git can keep all of the site's files in sync between environments, a different tool is required for keeping the MySQL databases in sync. WP Migrate DB Pro is an excellent plugin that automates this process.
Even if you get this to work, you are going to have further problems with the database. Two developers working on two different local WordPress sites can create a number of issues with the data in the database.
Here's an example: suppose both of you pull a site that has the following posts in the database: A, B, C.
Now you make a new post, 'D1', and the other developer makes a different post, 'D2'. Both posts end up with the same post ID, since D1 doesn't exist in the second developer's WordPress database, and the same is true for D2 in yours.
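The collision comes from MySQL's AUTO_INCREMENT counter, which advances independently in each local database; a simplified sketch (the real wp_posts table has many more required columns):

-- Both local databases hold posts A, B, C (IDs 1-3), so AUTO_INCREMENT is 4 in each.
INSERT INTO wp_posts (post_title) VALUES ('D1');  -- developer 1's database: new row gets ID 4
INSERT INTO wp_posts (post_title) VALUES ('D2');  -- developer 2's database: new row also gets ID 4
-- Merging the two databases later means two different posts claim post ID 4.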
What you could do (in theory) is set up both machines to connect to the same remote database (you'll have to whitelist both of your IP addresses, if they are different, for that particular install with WPEngine's support). You'll need to modify the local wp-config.php, which I believe will make it different from the one on the server, but that's okay, since you can include it in .gitignore.
Then you can both pull and push to the server without an issue. But I'd use Bitbucket or GitHub and have only one of you pushing to the server. I'd also look at DeployBot.com at some point; it can push the code from Bitbucket/GitHub to WPEngine's server through FTP automatically.
I use local configs based on the URL by adding the following to wp-config.php:

// Build a per-domain config file name, e.g. options-dev.example.com.php
$site_options_filename = dirname(__FILE__) . '/options-' . $_SERVER['SERVER_NAME'] . '.php';
if (!file_exists($site_options_filename)) {
    die('missing config: ' . $site_options_filename);
}
require $site_options_filename; // load the per-domain settings

This looks for a config file specific to the domain you are viewing. I use a dev. prefix or similar for my local builds.
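A hypothetical options-dev.example.com.php would then hold just the environment-specific constants (file name and values here are illustrative):

<?php
// Settings for the local build served at dev.example.com
define( 'DB_NAME', 'wp_local' );
define( 'DB_USER', 'root' );
define( 'DB_PASSWORD', '' );
define( 'DB_HOST', 'localhost' );
define( 'WP_DEBUG', true );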
I also use a quick bash script to dump the db to a .sql file which I include in the commit whenever there are database changes that have to be propagated:
mysqldump -uyour_db_username -pyour_db_password --add-drop-table --create-options your_database_name > dump.sql
It's a simple task to run dump.sql against the database every time you pull and dump.sql has changed.
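The import side would presumably be the mirror-image one-liner, using the same credentials as the dump command above:

mysql -uyour_db_username -pyour_db_password your_database_name < dump.sql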
Related
I am currently building a website with WordPress. The workflow I am trying to follow is local -> staging -> production. However, WordPress seems to make this a bit tricky since many changes made in the WP GUI are saved to the database rather than updating local files.
For example, when I change the title of my homepage, that title is written to and then read from the database when the page loads. Same applies to any styling changes that I make from within the GUI, like text colors etc.
As a result, none of these changes can be pushed to staging using git, since git only looks for changes in the files, not changes in the database tables.
I know that I could make those changes on the staging server directly but if at all possible, I would rather keep development entirely local.
I also understand that I could export the local database and import it on the staging server, but a wholesale export/import would clobber everything on the target, which is terrible for many reasons.
So the question is, is there a method (and/or a tool) to capture changes to individual db tables and push them to the remote db, similar to how git handles file changes?
Scenario:
My team manages multiple Joomla websites for our clients. While we manage the development/hosting of the sites, the clients do all of the content updates (creating articles, adding content, uploading images, etc.).
We run these websites in the following standard configuration where we have
A development server
A staging server
A production server
The client makes all of the content updates to the production server (on a daily basis). The other two servers are used primarily for new development and testing.
We are currently using Bitbucket as our version control for these websites (we are just starting out with this). Currently all files pertaining to the websites are stored in the repo.
The Problem
Based on our current setup, if a developer makes changes to the dev environment, and that change set is then pushed to the production environment, we end up overwriting all of the content updates that our clients have made in the production environment.
My Question
How do we successfully utilize a source control system, and maintain the flexibility to allow our clients to continue to make updates directly on the production server, without forcing them to make content changes on dev, staging, and then production?
While you briefly described your workflow, there are some things to be considered. There is no general rule, but look into the following suggestions:
Put into version control JUST the extensions you have developed (template, components, plugins, etc.). Those customizations are the only things you add to Joomla anyway; hopefully there are no core hacks. Alternatively, if you really want to version the whole installation, you should at least ignore the media folders that are changed by the clients or by you. I see no need to put the whole Joomla site under version control.
I imagine your clients are not actually changing PHP scripts, just media files and database entries. You should push only code and/or database schema changes to production.
If you are relying on a commit-then-push-to-FTP feature, or manually pushing files, I would suggest building a distributable version of your changes, in the form of a package that can be deployed via the Extension Manager. Building packages can be done in one click with a tool like Phing, as sketched below. For example, if you make some changes to the template, create a new template version, build the package, update the staging/testing server first, and if all goes well, the production server.
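A minimal Phing build file for packaging a template could look roughly like this (project name, version, and paths are made up for illustration):

<?xml version="1.0" encoding="UTF-8"?>
<project name="tpl_example" default="dist">
    <!-- Zip the template source into an installable Joomla package -->
    <target name="dist">
        <mkdir dir="dist"/>
        <zip destfile="dist/tpl_example_1.0.1.zip">
            <fileset dir="src/templates/example">
                <include name="**"/>
            </fileset>
        </zip>
    </target>
</project>

Running phing in the project root then produces the installable package in one step.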
Some things shouldn't be in version control.
In general, source code should be versioned and data should not. I'm not familiar with Joomla, but any kind of "uploaded content" directory should be in the ignore file for your version control system. That way you can deploy changes to the software without worrying about overwriting data.
Of course, your data should be backed up regularly, but that's not what revision control is for.
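In a Git-based setup, that ignore file could be as simple as this (the directories listed are typical Joomla upload/cache locations; adjust to your install):

# .gitignore - keep user-uploaded data and caches out of version control
images/
media/
cache/
tmp/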
If you have
a staging server where you test the website changes (layout, new functionality, new css)
a production server where the user publishes new content
you need partial database updates along with file synchronization.
The database is the hard part, as the assets table may be affected both by configuration changes on the staging server and by new content on the production server; for this and any other shared tables, we address the issue by making sure the IDs don't conflict, leaving a sufficient gap right after each update.
Although, to quote most of the other answers, revision control is for neither data nor the database, it is indeed very useful here, especially with pre- and post-commit hooks to perform the required database actions. That said, any scripting/publishing tool will do, from rsync and rdiff through Phing to Ant or Maven.
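For the file-synchronization half, a plain rsync call is often all that's needed; a sketch with a placeholder host and paths, excluding the client-managed media folder:

rsync -avz --delete --exclude 'images/' ./ deploy@production.example.com:/var/www/site/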
So, let's say I develop a PHP app in a Vagrant box identical to the production environment. As an end result I would have a *.tar.gz file with the code...
How would one organize deployment into a production environment where there are a lot of application servers? I mean, I'm confused about how to push code into production synchronously, all at once.
More information:
On the server, code is stored like this:
project
  + current_revision -> symlink to revisions/v[n]
  + revisions
      + v1
      + v2
      + v3
      ...
  + data
So when I have to deploy changes, I usually run a deploy script that uploads the updated tar onto the server over SSH, untars it into a specific dir under revisions, symlinks that dir to current_revision, and restarts php-fpm. This way I can roll back anytime just by symlinking to an older revision.
With multiple servers, what bothers me is that not all boxes will be updated at once, i.e. technically some glitches might be possible.
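For reference, a sketch of the single-host version of such a deploy script (host, paths, and archive name are hypothetical); the final mv -T renames the new symlink over the old one, which is an atomic operation, so visitors never see a half-updated tree:

#!/bin/sh
# Upload, unpack, and atomically activate a new revision on one host.
HOST=app1.example.com
REV=v4
scp build.tar.gz "$HOST:/tmp/"
ssh "$HOST" "
  mkdir -p /srv/project/revisions/$REV &&
  tar -xzf /tmp/build.tar.gz -C /srv/project/revisions/$REV &&
  ln -s /srv/project/revisions/$REV /srv/project/current_revision.new &&
  mv -T /srv/project/current_revision.new /srv/project/current_revision &&
  sudo systemctl restart php-fpm
"

Looping this over many hosts is exactly where the "not all boxes at once" concern comes in; orchestration tools like Capistrano (which capifony, mentioned below, builds on) exist largely to manage that loop.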
If you're looking for a "ready-to-go" answer, you'll need to provide some more info about your setup. For example, if you plan to use Git for VCS, you could write a simple shell script that pulls the latest commit and rsyncs with the server(s). Or, if you're building on top of Symfony, capifony is a great tool. If you're using AWS, there's a provider plugin written by the author of Vagrant that's super easy to use, and you can specify a regex for which machines to bring up or provision.
If instead you're looking for more of a "roadmap", then the considerations that you'll want to take are:
Make building of identical boxes in the remote and local environments as easy as possible, and try to make sure that your provisioning emphasizes idempotence.
Consider your versioning/release structure; what resources will rarely or never change? Include those in a setup function instead of a deploy function, and don't include them in your sync run.
Separate your development and system administration concerns; i.e. do not just package a Vagrant box with a *.tar.gz inside and tie it through config.vm.box_url. The reason is that you'd have to repackage every production server with a new box every time you deploy, instead of just changing files on the server or adding/removing some packages from it.
Check out some config management tools like Chef and Puppet; even if you don't end up using them, they'll give you an idea of how sysadmin professionals approach this problem.
Lots of ways. If you're starting from bare bones (no cloud infrastructure), I'm a fan of the SVN branch hook. Have an SVN repo for your code and set up a post-commit hook on it which checks whether anything in /branch/production/ has changed.
If it has, let the post-commit hook fire your whole automated roll-out procedure. In this case, an easy way to do so is to have all the servers known* to the hook run an svn export of the branch. As simple as that!
(* that's the hard step)
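A sketch of such a hook (the server list and repository URL are placeholders; the REPOS/REV arguments and svnlook are standard Subversion):

#!/bin/sh
# Subversion invokes post-commit with the repository path and the new revision.
REPOS="$1"
REV="$2"
# Only roll out when the production branch was touched in this commit.
if svnlook changed -r "$REV" "$REPOS" | grep -q "branch/production/"; then
    for HOST in app1.example.com app2.example.com; do
        ssh "$HOST" "svn export --force http://svn.example.com/repo/branch/production /var/www/site"
    done
fi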
I am about to use WordPress as a CMS with a lot of customization, but my question is: how should I sync development to production?
Assuming WordPress is not used, a typical development cycle is to develop the pages locally, and create the DB scripts. Then when everything is ready, it is posted to the site. And then again, more db and code changes, then only the changes are updated/applied, and so on.
Now, with WordPress, you get all the great features (one example is blogging, comments, almost ready CMS ...etc). However deployment is a pain! Ideally, I would rather keep my typical development cycle described above. Therefore I am going to be implementing WordPress locally (from wordpress.org) and then pushing the changes to my production server.
So, assuming that I create a new page in WordPress locally (I will never create pages on the server; everything happens locally, and I will find a way to disable wp-admin on the server), a number of files are created. This is not a problem so far. However, if I want to add content to that newly created page, the content is saved to my local database. Even though that content is a database change, from my point of view it is a new change that should be pushed to the server rather than added via the live server, because that content is static; it is not a blog post or a comment, it is a static page.
That new page content is saved to the DB, and therefore the DB will have changes made on my local machine that I should push to the server along with the files that I will FTP to the server.
My questions are:
Is this approach correct? If not, what do you suggest?
What is the best way to find database diffs? What tool should I use? Does MySQL Workbench provide something like that? I intend to use such a tool to find the diffs and then generate an update script for the DB. The reason I ask is that I normally make the changes myself and know how to track them, but now those DB changes are generated by WordPress, and I need to reverse-engineer them to find out which changes were made.
Assuming I get step 2 to work, is there anything in that script that should be modified, such as server names? Does WordPress hard-code server names, for example?
So, to summarize and give you more information about my development environment: I use XAMPP for development on Windows 7, with PHP and MySQL set up, and I use Mercurial for source control. As mentioned above, I will use WordPress as part of the solution, and I intend to use it to help me create a CMS solution. I will use it locally for page generation and disable that feature online (keeping the online site for blog posts and similar entries only). I am doing this to keep things in sync. If I create a page locally, some data is saved to the DB. Now, how do I sync/upload?
Thanks.
OK, after further investigation, here is what I concluded.
All theme development should be version-controlled
All plugin development should be version-controlled
Content of pages and posts is not part of the development process; it is content and should only be backed up.
This way, you do not need to worry about DB changes, etc.
Hope this helps everyone.
You might use a version control system? What OS is the development on, e.g. Windows or Linux? And what is the production OS? I use http://serverpress.com for my testing environment, though there are others: WAMP, LAMP, etc.
I develop in PHP with NetBeans. The modifications are uploaded to a virtualized LAMP dev server on my machine directly by NetBeans.
I would like to branch some developments.
The problem is that only the trunk is sent to the server.
I use a classic structure:
{svnroot}/trunk
{svnroot}/branches
{svnroot}/tags
How can I test the branches without doing a crazy branch/trunk swap (with all the possible conflicts)?
Is there a solution with an htaccess configuration?
Should I use SVN differently?
Should I use NetBeans differently?
Most SVN setups have a few top-level directories
{svnroot}/trunk
{svnroot}/branches
{svnroot}/tags
If you lack the "branches" top-level directory, add it. Then use svn copy to copy in all the contents from "trunk", for example:
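(repository URL and branch name below are placeholders)

svn copy http://svn.example.com/repo/trunk \
         http://svn.example.com/repo/branches/my-feature \
         -m "Create my-feature branch from trunk"

Because svn copy between URLs is a cheap server-side copy, this is nearly instantaneous regardless of project size.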
If your web server pulls the code in such a manner that your "branches" directory gets pulled into the web server, that's a deployment issue concerning your web server, and whoever set that up needs to fix it.
Sometimes a person sidesteps having a release plan by doing an svn checkout of the code directly into the web server. While that works for a very limited number of cases, it reduces your ability to handle future events without migrating to a more sophisticated release plan. If your environment tends to do something like this, you might be able to keep following your plan by selectively checking out only the sub-contents of "trunk", or you could migrate to a proper "build" of your release, which then goes through a deployment plan.
If you lack the "trunk" directory, then before attempting anything you might have to create the "trunk" directory and move all of the current contents into it. This means that all development would need to check out from the "trunk" subdirectory instead of the {svnroot} directory, which is done by extending your URL (adding "/trunk" to the end).
I hope this gets you thinking along the right paths.
You could check out everything onto the web server and use symbolic links (or junction points on Windows servers; see Junction.exe from www.sysinternals.com) to switch between test/production environments. Or, yes, you could use .htaccess to change where your web root points. As others have said, it's usually a good idea to have separate test/production servers.
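The switch itself is then a one-liner; the paths here are hypothetical:

# Linux: repoint the web root at a different checkout (-n replaces the old link instead of descending into it)
ln -sfn /var/www/checkouts/branches/my-feature /var/www/html

# Windows: same idea with a junction point (remove the old one first)
junction.exe -d C:\inetpub\wwwroot
junction.exe C:\inetpub\wwwroot D:\checkouts\branches\my-feature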