I currently have the following setup:
*.mysite.com --> /home/public_html/app/index.php
I want to write some code in index.php that switches the document root to /home/public_html/app_prev/index.php based on a condition. The reason is that I am doing a migration: if a user hasn't been migrated yet, I want to serve the old version of the code; once they are migrated, they get the new version. Each user has their own database and I will migrate them one by one. Normally it would take seconds to migrate all of them, but this release will take a while.
Is this possible?
Is this recommended when making large database schema changes? Will it cause performance problems or errors?
You could just use a PHP redirect based on the condition that you're looking for. It's no different from serving a different page based on what's coming in.
It's a reasonable implementation if you have many large databases and you're worried about performance. I'd test it like this:
1. Keep the old code path and the old databases.
2. Migrate a test database over to the new codebase. I don't know how you're doing your logic, but you could have a single-column, one-row table in each database that records whether it's on the old or new codebase.
3. Test that the new codebase works.
4. Start migrating your databases over, changing that single row in each database (or however your logic is determined), as in the sketch below.
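As a rough illustration, here is a minimal sketch of what that front-controller check could look like (the flag table schema_state and the connection details are hypothetical; the paths come from the question):

<?php
// /home/public_html/app/index.php - front controller (sketch)

// Resolve the user's database from the subdomain, e.g. "alice.mysite.com"
$subdomain = explode('.', $_SERVER['HTTP_HOST'])[0];
$db = new PDO("mysql:host=localhost;dbname=app_{$subdomain}", 'user', 'pass');

// Hypothetical one-row flag table that records the migration state
$migrated = (bool) $db->query('SELECT migrated FROM schema_state LIMIT 1')->fetchColumn();

if (!$migrated) {
    // Not migrated yet: hand the request to the old codebase and stop
    require '/home/public_html/app_prev/index.php';
    exit;
}
// ...continue with the new application below...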
I'm working on a software solution written in PHP Symfony with a MySQL database. When we upgrade the existing product, what we do now is copy the existing database to a new database and perform the upgrade on the new copy. But the current method of asking the user to copy the existing database does not seem like a professional way to do an upgrade.
Is there any standard way of doing this automatically while preserving the consistency of the old database? Please help me with this issue. Thanks in advance.
You could create a copy of the tables with a different table prefix (like updateAttempt_) and then, if everything goes well, delete the old ones and rename the new ones to the old names.
Although, if you're doing this to make sure the data isn't corrupted in the event something goes wrong... isn't that what transactions are for?
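A minimal sketch of that copy-and-swap approach, assuming a PDO connection and an illustrative users table:

<?php
$db = new PDO('mysql:host=localhost;dbname=myapp', 'user', 'pass');

// 1. Copy structure and data into a working table
$db->exec('CREATE TABLE updateAttempt_users LIKE users');
$db->exec('INSERT INTO updateAttempt_users SELECT * FROM users');

// 2. Run the upgrade against the copy (example change)
$db->exec('ALTER TABLE updateAttempt_users ADD COLUMN last_login DATETIME NULL');

// 3. If everything went well, swap the tables in one atomic statement
$db->exec('RENAME TABLE users TO old_users, updateAttempt_users TO users');

// 4. Drop the original once you are confident
$db->exec('DROP TABLE old_users');

Note that DDL statements such as ALTER TABLE cause an implicit commit in MySQL, so transactions alone cannot protect a structural upgrade; that is a point in favour of upgrading a copy and swapping it in.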
I'm developing a new version of my web application with a redesigned database structure. However, the old application is still working online with customers. Are there any solutions for easing this deployment?
Thanks and best regards.
Edited: My question is about how to merge the old database into the new database with its redesigned structure. The old database gained many new records while I was developing the new application with the new database.
Just make a new database and include the version in the name, for example. You can have multiple databases on the same server, and even use multiple databases in the same application.
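For instance, a minimal sketch of reading from the old database and writing into the new, redesigned one from the same application (the database, table, and column names are illustrative):

<?php
$oldDb = new PDO('mysql:host=localhost;dbname=myapp_v1', 'user', 'pass');
$newDb = new PDO('mysql:host=localhost;dbname=myapp_v2', 'user', 'pass');

// Copy records from the old structure into the redesigned one
$insert = $newDb->prepare('INSERT INTO clients (id, full_name) VALUES (?, ?)');
foreach ($oldDb->query('SELECT id, name FROM customers') as $row) {
    $insert->execute([$row['id'], $row['name']]);
}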
I believe there are two choices:
Either force all users onto the new system with a bit of downtime; as long as your site has some quiet time in its traffic, you can schedule it then.
Alternatively, upload both and run them concurrently, pointing everyone at the new site and giving your users a time-frame before taking down the old site.
These are some steps you can follow.
First, take a backup dump of your existing database. Even if you make a mistake, you are on the safe side.
Then create a new database from the old dump.
Then figure out what changes you made to the structure.
Then map the old data to the changed tables using ALTER TABLE commands. You can first create the necessary new tables with SQL commands, then read the old data and insert it into the new ones.
If you are using MySQL, you can use transactions to make sure your data changes are applied atomically. You can refer to my blog post to learn more about transactions: http://coders-view.blogspot.com/2012/03/how-to-use-mysql-transactions-with-php.html
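As a minimal sketch of those last two steps, assuming a PDO connection and illustrative table names:

<?php
$db = new PDO('mysql:host=localhost;dbname=myapp_new', 'user', 'pass');
$db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

try {
    // Wrap the data migration in a transaction so it is all-or-nothing
    $db->beginTransaction();
    $db->exec('INSERT INTO new_orders (id, customer_id, total)
               SELECT id, cust_id, amount FROM old_orders');
    $db->commit();
} catch (Exception $e) {
    $db->rollBack(); // undo the partial data migration
    throw $e;
}

Keep in mind that ALTER TABLE and other DDL statements commit implicitly in MySQL, so only the data-manipulation steps can be rolled back this way.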
I am a PHP software developer and am looking for the best solution to get around a concern I have. I am creating a script that will in the future have new releases including new features, bug fixes, etc. I know how to handle the code changes in the upgrade script; however, the script I am developing uses MySQL to store data in multiple tables.
Now here is the question: I have figured out how to make an install script for the initial release, but what is the best solution/method for making an upgrade script that can upgrade any previous version to the latest version? The latest version has new MySQL tables (not a problem), but it also changes the database structure (new columns, deleted columns, etc.). I will lay out a scenario below to better picture what I am worried about.
v0.1.0 - Initial Release
v0.1.1 - Has a new database table and some fields added to the table structure
v0.2.0 - Has more new fields added to different tables
My concern is the upgrade from v0.1.0 directly to v0.2.0, because there were changes in the version in between.
UPDATE: I probably should have mentioned that I am using GitHub as my VCS for the code alone. All code changes are saved there, and for code changes in the install or upgrade script I just plan on overwriting the user's current files, as they shouldn't have any data within class/function files.
In my experience, the best approach is to create a versioning table in MySQL that includes the application's version number along with any queries that change the structure of the database. So essentially:
CREATE TABLE versions (
    app_version DOUBLE NOT NULL,
    query TEXT,
    created TIMESTAMP NOT NULL
);
With our install script, we take note of the initial application version and the destination application version, and select all the queries that need to be executed to get from one to the other:
SELECT query FROM versions WHERE app_version > {$initialAppVersion} AND app_version <= {$destinationAppVersion} ORDER BY created ASC;
In PHP:
foreach ($resultset as $row) {
    // Each row holds one schema-changing query from the versions table
    $stmt = $db->prepare($row['query']);
    try {
        $stmt->execute();
    } catch (Exception $e) {
        // Handle the failed migration query (log it, abort the upgrade, etc.)
    }
}
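For completeness, a minimal sketch of how that result set could be fetched with bound parameters instead of interpolating the version numbers into the SQL (assuming a PDO connection $db):

$stmt = $db->prepare(
    'SELECT query FROM versions
     WHERE app_version > ? AND app_version <= ?
     ORDER BY created ASC'
);
$stmt->execute([$initialAppVersion, $destinationAppVersion]);
$resultset = $stmt->fetchAll(PDO::FETCH_ASSOC);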
You could store the current schema version they are on somewhere.
For the upgrade script, have it check that the database is on the previous version before it continues making any updates. This will prevent updates from being skipped or missed, as in the sketch below.
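A minimal sketch of that guard, assuming the version is kept in a hypothetical one-row settings table:

// Refuse to run the v0.2.0 upgrade unless the schema is currently at v0.1.1
$current = $db->query('SELECT schema_version FROM settings LIMIT 1')->fetchColumn();
if ($current !== '0.1.1') {
    exit("Expected schema version 0.1.1, found {$current}; run the earlier upgrades first.\n");
}

// ...apply the v0.2.0 schema changes, then record the new version...
$db->exec("UPDATE settings SET schema_version = '0.2.0'");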
Another suggestion:
Some ORMs, like Propel and Doctrine, can generate such migration scripts automatically.
Check for example:
http://www.propelorm.org/wiki/Documentation/1.6/WhatsNew#Migrations
http://www.doctrine-project.org/blog/new-to-migrations-in-1-1
I've arrived at the point where I realise that I must start versioning my database schemata and changes. I consequently read the existing posts on SO about that topic but I'm not sure how to proceed.
I'm basically a one-man company, and not long ago I didn't even use version control for my code. I'm on a Windows environment, using Aptana (IDE) and SVN (with Tortoise). I work on PHP/MySQL projects.
What's an efficient and sufficient (no overkill) way to version my database schemata?
I do have a freelancer or two on some projects, but I don't expect a lot of branching and merging going on. So basically I would like to keep track of schema versions alongside my code revisions.
[edit] Momentary solution: for the moment I've decided I will just make a schema dump, plus one with the necessary initial data, whenever I commit a tag (stable version). That seems to be just enough for me at the current stage. [/edit]
[edit2] Plus I'm now also using a third file called increments.sql, where I put all the changes with dates, etc., to make it easy to trace the change history in one file. From time to time I integrate the changes into the two other files and empty increments.sql. [/edit2]
Simple way for a small company: dump your database to SQL and add it to your repository. Then every time you change something, add the change to the dump file.
You can then use diff to see changes between versions, not to mention have comments explaining your changes. This will also make you virtually immune to MySQL upgrades.
The one downside I've seen to this is that you have to remember to manually add the SQL to your dump file. You can train yourself to always remember, but be careful if you work with others. Missing an update could be a pain later on.
This could be mitigated by creating some elaborate script to do it for you when committing to Subversion, but it's a bit much for a one-man show.
Edit: In the year that's gone by since this answer, I've had to implement a versioning scheme for MySQL for a small team. Manually adding each change was seen as a cumbersome solution, much as was mentioned in the comments, so we went with dumping the database and adding that file to version control.
What we found was that test data kept ending up in the dump, making it quite difficult to figure out what had changed. This could have been solved by dumping the schema only, but that was impossible for our projects, since our applications depended on certain data being in the database to function. Eventually we returned to manually adding changes to the database dump.
Not only was this the simplest solution, but it also solved certain issues that some versions of MySQL have with exporting/importing. Normally we would have to dump the development database, remove any test data, log entries, etc., remove or change certain names where applicable, and only then be able to create the production database. By manually adding changes we could control exactly what would end up in production, a little at a time, so that in the end everything was ready and the move to the production environment was as painless as possible.
How about versioning a file generated by running this:
mysqldump --no-data database > database.sql
Where I work, we have an install script for each new version of the app which contains the SQL we need to run for the upgrade. This works well enough for six devs with some branching for maintenance releases. We're considering moving to AutoPatch (http://autopatch.sourceforge.net/), which works out what patches to apply to whichever database you are upgrading. It looks like there may be some small complications handling branching with AutoPatch, but it doesn't sound like that'll be an issue for you.
I'd guess a batch file like this should do the job (I didn't try it, though) ...
mysqldump --no-data -ufoo -pbar dbname > path/to/app/schema.sql
svn commit path/to/app/schema.sql
Just run the batch file after changing the schema, or let cron/a scheduler do it. (I'm not sure how commits behave if only the timestamp changed while the contents stayed the same; as far as I know Subversion only commits actual content changes, so that shouldn't be a problem.)
The main idea is to have a folder with this structure in your project base path:
/__DB
    /changesets
        /1123
    /data
    /tables
Now, how the whole thing works is that you have three folders:
Tables
Holds the table create queries. I recommend using the naming "table_name.sql".
Data
Holds the table insert-data queries. I recommend the same naming, "table_name.sql".
Note: Not all tables need a data file; you would only add the ones that need initial data on project install.
Changesets
This is the main folder you will work with.
It holds the changes made to the initial structure, organized as folders of changesets.
For example, I added a folder 1123 which will contain the modifications made in revision 1123 (the number comes from your source control) and may contain one or more SQL files.
I like to group them by table with the naming xx_tablename.sql, where xx is a number giving the order they need to be run in, since sometimes you need the modifications run in a certain order.
Note: When you modify a table, you also add those modifications to the table and data files, since those are the files that will be used to do a fresh install.
This is the main idea.
For more details you could check this blog post.
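As an illustration (my own sketch, not from the blog post), a script that applies all changeset folders in revision order; the connection details are assumptions:

<?php
$db = new PDO('mysql:host=localhost;dbname=myapp', 'user', 'pass');

// Changeset folders are named after source-control revisions, e.g. __DB/changesets/1123
$changesets = glob(__DIR__ . '/__DB/changesets/*', GLOB_ONLYDIR);
sort($changesets, SORT_NATURAL); // apply revisions in ascending order

foreach ($changesets as $dir) {
    // Files are named xx_tablename.sql, so a natural sort gives the intended order
    $files = glob($dir . '/*.sql');
    sort($files, SORT_NATURAL);
    foreach ($files as $file) {
        $db->exec(file_get_contents($file)); // assumes one statement per file
    }
}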
Take a look at SchemaSync. It will generate the patch and revert scripts (.sql files) needed to migrate and version your database schema over time. It's a command line utility for MySQL that is language and framework independent.
Some months ago I searched for a tool for versioning MySQL schemas. I found many useful tools, like Doctrine migrations, RoR migrations, and some tools written in Java and Python.
But none of them satisfied my requirements.
My requirements:
No requirements except PHP and MySQL
No schema configuration files, like schema.yml in Doctrine
Able to read the current schema from a connection and create a new migration script that reproduces the identical schema in other installations of the application.
I started writing my own migration tool, and today I have a beta version.
Please try it if you have an interest in this topic.
Please send me feature requests and bug reports.
Source code: bitbucket.org/idler/mmp/src
Overview in English: bitbucket.org/idler/mmp/wiki/Home
Overview in Russian: antonoff.info/development/mysql-migration-with-php-project
Our solution is MySQL Workbench. We regularly reverse-engineer the existing database into a model with the appropriate version number. It is then possible to easily perform diffs between versions as needed. Plus, we get nice EER diagrams, etc.
At our company we did it this way:
We put all tables / DB objects in their own file, like tbl_Foo.sql. The files contain several "parts" that are delimited with
-- part: create
where create is just a descriptive identifier for a given part. A file looks like:
-- part: create
IF not exists ...
CREATE TABLE tbl_Foo ...
-- part: addtimestamp
IF not exists ...
BEGIN
ALTER TABLE ...
END
Then we have an XML file that references every single part we want executed when we update the database to the new schema.
It looks pretty much like this:
<playlist>
<classes>
<class name="table" desc="Table creation" />
<class name="schema" desc="Table optimization" />
</classes>
<dbschema>
<steps db="a_database">
<step file="tbl_Foo.sql" part="create" class="table" />
<step file="tbl_Bar.sql" part="create" class="table" />
</steps>
<steps db="a_database">
<step file="tbl_Foo.sql" part="addtimestamp" class="schema" />
</steps>
</dbschema>
</playlist>
The <classes/> part is for the GUI, and <dbschema/> with <steps/> partitions the changes. The <step/>s are executed sequentially. We have some other entities, like sqlclr, to do different things such as deploying binary files, but that's pretty much it.
Of course we have a component that takes that playlist file and a resource/filesystem object, cross-references the playlist, extracts the wanted parts, and then runs them as admin on the database.
Since the "parts" in the .sql files are written so they can be executed on any version of the DB, we can run all parts on every previous/older version of the DB and bring it up to date.
Of course there are some cases where SQL Server parses column names "early" and we later have to rewrite parts as exec_sqls, but it doesn't happen often.
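A rough sketch of what such a playlist runner could look like (this is my own illustration, not the poster's component; it follows the file layout from the example above, and the connection details are assumptions):

<?php
// DSN depends on your database; the poster's stack appears to be SQL Server
$db = new PDO('sqlsrv:Server=localhost;Database=a_database', 'user', 'pass');
$playlist = simplexml_load_file('playlist.xml');

foreach ($playlist->dbschema->steps as $steps) {
    foreach ($steps->step as $step) {
        $sql = file_get_contents((string) $step['file']);

        // Extract the requested part: everything between its "-- part:" marker
        // and the next marker (or the end of the file)
        $name = preg_quote((string) $step['part'], '/');
        if (preg_match("/-- part: {$name}\n(.*?)(?=-- part: |\z)/s", $sql, $m)) {
            $db->exec($m[1]); // run just that part
        }
    }
}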
I think this question deserves a modern answer, so I'm going to give one myself. When I wrote the question in 2009, I don't think Phinx existed yet, and Laravel most definitely didn't.
Today, the answer to this question is very clear: write incremental DB migration scripts, each with an up and a down method, and run all these scripts (or a delta of them) when installing or updating your app. And obviously add the migration scripts to your VCS.
As mentioned in the beginning, there are excellent tools in the PHP world today which help you manage your migrations easily. Laravel has DB migrations built in, including the respective shell commands. For everyone else there is Phinx, a similarly powerful framework-agnostic solution.
Both Artisan migrations (Laravel) and Phinx work the same way. For every change in the DB, create a new migration, use plain SQL or the built-in query builder to write the up and down methods, and run artisan migrate or phinx migrate respectively in the console.
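For illustration, a minimal Phinx-style migration might look like this (the table and column names are made up; check the Phinx docs for the full API):

<?php

use Phinx\Migration\AbstractMigration;

class AddLastLoginToUsers extends AbstractMigration
{
    // Applied by "phinx migrate"
    public function up()
    {
        $this->table('users')
             ->addColumn('last_login', 'datetime', ['null' => true])
             ->update();
    }

    // Applied by "phinx rollback"
    public function down()
    {
        $this->table('users')
             ->removeColumn('last_login')
             ->update();
    }
}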
I do something similar to Manos, except I have a 'master' file (master.sql) that I update with some regularity (once every two months). Then, for each change, I build a version-named .sql file with the changes. This way I can start off with master.sql and apply each version-named .sql file until I get up to the current version, and I can update clients using the version-named .sql files, which keeps things simple.