MySQL temporarily modify database for development - php

For development I have a set of tables in a database. I want to modify the tables during development and testing, but be able to go back to the original state I started with every time I run the application. Is there a way to achieve this without actually backing up and restoring every time?
OS: Windows 10. Software: XAMPP with MySQL (MariaDB) and PHP.

You can make a backup and then restore the backup with a different name. Then point the dev/test environment of your application (you do have a different "test" copy of your application as well, I hope?) at the new copy of the database.
Each time you want to "go back to the start", just restore the backup (with the alternative name) again.
Backup/restore is scriptable, so you can automate it if you need to.
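For example, something like the following PHP CLI script would do it under XAMPP. The database names (myapp / myapp_dev) and the dump path are placeholders, and it assumes XAMPP's default passwordless root account with mysqldump/mysql on the PATH:

<?php
// reset_dev_db.php - restore the pristine dump into a separately named dev copy.
// Assumptions: XAMPP's default passwordless root account, mysqldump/mysql on the
// PATH, and the database names below are placeholders for your own.

$source  = 'myapp';       // pristine database, never touched by tests
$devCopy = 'myapp_dev';   // disposable copy that the dev/test application points at
$dump    = __DIR__ . '/myapp_baseline.sql';

// 1. Dump the pristine database once (skipped if the baseline dump already exists).
if (!file_exists($dump)) {
    passthru(sprintf('mysqldump -u root %s > %s', escapeshellarg($source), escapeshellarg($dump)));
}

// 2. Drop and recreate the dev copy, then load the baseline dump into it.
passthru(sprintf(
    'mysql -u root -e %s',
    escapeshellarg("DROP DATABASE IF EXISTS `$devCopy`; CREATE DATABASE `$devCopy`")
));
passthru(sprintf('mysql -u root %s < %s', escapeshellarg($devCopy), escapeshellarg($dump)));

echo "$devCopy has been reset to the baseline state.\n";

Run it once to capture the baseline, then re-run it whenever you want the dev copy reset.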

If your data gets used up or changed as you say, there's no other way than to return to the original state, but you can look for ways to make that faster and easier. If the DB is big and you want to shorten the restore time, look at the suggestions here:
https://dba.stackexchange.com/questions/83125/mysql-any-way-to-import-a-huge-32-gb-sql-dump-faster
You can also write a shell script that wraps the restore operations suggested in one of the solutions from the link above into a single command.
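As a rough illustration, that wrapper could also be a PHP CLI script that applies the main speed-up from that thread (disabling foreign-key/unique checks and autocommit around the import). The database name, dump path and passwordless root login are placeholders/assumptions:

<?php
// fast_restore.php - reload a big dump faster by disabling integrity checks around it.
// Assumptions: passwordless root (XAMPP default); the database name and dump path
// below are placeholders for your own.

$db   = 'myapp_dev';
$dump = 'C:/xampp/backups/myapp_baseline.sql';

// Open a single mysql client process and stream everything through it, so the
// SET statements and the dump run in the same session.
$mysql = popen(sprintf('mysql -u root %s', escapeshellarg($db)), 'w');

// Skip row-by-row foreign-key/unique checking and batch everything into one commit.
fwrite($mysql, "SET autocommit=0;\nSET unique_checks=0;\nSET foreign_key_checks=0;\n");

// Stream the dump in chunks so a multi-gigabyte file never has to fit in memory.
$in = fopen($dump, 'rb');
while (!feof($in)) {
    fwrite($mysql, fread($in, 1048576));   // 1 MB at a time
}
fclose($in);

fwrite($mysql, "\nSET foreign_key_checks=1;\nSET unique_checks=1;\nCOMMIT;\n");
pclose($mysql);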

Related

Erasing db entries on Wamp

I used WAMP when building a sign-in app for my company. In the table, the first 40 entries are test entries. My boss asked if I could erase them. My concern is that if I erase those, it will affect the other entries (customers signing in) that have been made since. I only took one DB class in undergrad, and the professor told us that you don't change entries unless you have to; it is easy to screw up a DB and it can be hard to fix. I am wondering if anyone has advice on whether it is worth erasing the first 40 entries, or whether doing so could mess up the other entries. Basically, if it's not broken, is it worth fixing?
The front end of the app is Android, written in Java and XML. The back end is PHP that talks to the DB.
Generally, deleting entries is fine; just understand the DB structure. Do the IDs for those users get referenced elsewhere? How does the DB connect its data? The basic principle is as follows: on your development environment, migrate the database downstream (from production to development). On dev (and dev only!) delete the 40 entries. Test. I assume you don't have unit testing in place (write unit tests!), but you can still test all the functionality manually. Honestly, this should not hurt anything.
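To answer the "referenced elsewhere" question concretely, you can ask MySQL itself which tables declare foreign keys pointing at your sign-in table before deleting anything. A quick sketch (the schema name signin_db, the table name users and the credentials are placeholders for whatever you actually have):

<?php
// fk_check.php - list foreign keys that reference a given table, so you know what
// else might be affected before deleting rows from it.
// The schema name, table name and credentials below are placeholders for your own.

$pdo = new PDO('mysql:host=localhost;dbname=information_schema', 'root', '');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$stmt = $pdo->prepare(
    'SELECT TABLE_NAME, COLUMN_NAME, CONSTRAINT_NAME
       FROM KEY_COLUMN_USAGE
      WHERE REFERENCED_TABLE_SCHEMA = :schema
        AND REFERENCED_TABLE_NAME   = :tbl'
);
$stmt->execute(['schema' => 'signin_db', 'tbl' => 'users']);

foreach ($stmt as $row) {
    echo "{$row['TABLE_NAME']}.{$row['COLUMN_NAME']} references users ({$row['CONSTRAINT_NAME']})\n";
}

If nothing comes back there are no declared foreign keys on those rows, though the PHP back end could still join on those IDs without a constraint, so a quick search through the code is worth doing too.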
If you have to do this on production then make a backup of the DB and test out deleting the entries. If it doesn't work then drop the DB and re-import the backup. This is not advised because if you fail to import in a decent amount of time people are going to take notice. If this is your first database dump and re-import then test it on a useless database or duplicate the production database and name it something different.
Also, you should look into frameworks for your PHP if you don't currently use one. Laravel has 'Migrations', which allow you to run code to update your database. This ensures that you can write code on dev, test it, deploy to production, and just run the migration (all automatable). Here is some info on it: http://laravelbook.com/laravel-migrations-managing-databases/
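For reference, a Laravel migration is just a PHP class with up() and down() methods that you run with php artisan migrate. A minimal sketch of what one might look like (the table and column names here are made up for illustration):

<?php
// Hypothetical Laravel migration: flag the test rows instead of deleting them outright.
// Run with `php artisan migrate`; `php artisan migrate:rollback` calls down().
// The table and column names are made up for illustration.

use Illuminate\Database\Migrations\Migration;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\Schema;

class AddIsTestEntryToSignins extends Migration
{
    public function up()
    {
        Schema::table('signins', function (Blueprint $table) {
            // New column so test rows can be excluded from reports without deleting them.
            $table->boolean('is_test_entry')->default(false);
        });
    }

    public function down()
    {
        Schema::table('signins', function (Blueprint $table) {
            $table->dropColumn('is_test_entry');
        });
    }
}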
Good luck and remember, ALWAYS TEST ON DEV FIRST.

Is it possible to restore a Postgres database by simply swapping out some files for speed?

This is a general Postgres backup and restore method question, based on the following use case for a non production server (i.e. a local testing server).
I have a ~20 GB database that I will mangle during the testing of a PHP script, which will result in the need to drop and recreate it quite often.
Running dumped SQL to restore it takes quite a lot of time, and I'm on a tight deadline, so I wondered if there was a method whereby I could speed up the process. I thought the following may work:
Create and populate the database initially
Copy its data files to a secondary location
Mangle the database with my testing.
Delete the data files and copy the copies back, restoring the original state.
But I don't know where to start or if there's some internal stuff happening that would prevent this from working.
Is the above possible, if so how is it achieved?
This isn't a closed question; if there are faster alternatives to what I'm asking for, please enlighten me. I'm open to suggestions.
Thanks.
Your fastest solution is probably to just do this via the file system.
Stop the server and make a local copy of your entire database cluster, i.e. everything under $PGDATA, inclusive. Start the server and do your mangling. When you need to refresh your database, stop the server and copy the files back in from your backup location. Note that this affects the entire cluster, so you cannot do this if other databases in the same cluster are in production use: everything is frozen in the state it was in when you first made the backup.
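A rough sketch of that stop-copy-start cycle as a PHP CLI script, since PHP is already on the box; the data directory, baseline path and pg_ctl usage are assumptions to adapt, and it needs to run as the postgres OS user:

<?php
// pg_reset.php - file-system-level snapshot and restore of a whole PostgreSQL cluster.
// Usage: php pg_reset.php snapshot   (take the baseline copy, once)
//        php pg_reset.php restore    (throw away the mangled cluster, put the baseline back)
// Assumptions: run as the postgres OS user on a local test box, pg_ctl on the PATH,
// the two directories below are placeholders, and the baseline dir does not exist yet.

const PGDATA   = '/var/lib/postgresql/data';
const BASELINE = '/var/backups/pgdata_baseline';

function run($cmd)
{
    passthru($cmd, $exit);
    if ($exit !== 0) {
        fwrite(STDERR, "Command failed ($exit): $cmd\n");
        exit($exit);
    }
}

$mode = $argv[1] ?? '';
if (!in_array($mode, ['snapshot', 'restore'], true)) {
    fwrite(STDERR, "Usage: php pg_reset.php snapshot|restore\n");
    exit(1);
}

// The server must be stopped so the files on disk are consistent.
run(sprintf('pg_ctl -D %s stop -m fast', escapeshellarg(PGDATA)));

if ($mode === 'snapshot') {
    run(sprintf('cp -a %s %s', escapeshellarg(PGDATA), escapeshellarg(BASELINE)));
} else {
    run(sprintf('rm -rf %s', escapeshellarg(PGDATA)));
    run(sprintf('cp -a %s %s', escapeshellarg(BASELINE), escapeshellarg(PGDATA)));
}

run(sprintf('pg_ctl -D %s start -l %s', escapeshellarg(PGDATA), escapeshellarg(PGDATA . '/server.log')));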
The alternative is to use pg_dump in binary mode, but that will probably be quite a bit slower than the manual method. It is the only solution if other databases in the cluster need to be preserved.
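For completeness, the per-database route with the custom (binary) format would look roughly like this; the database name and file path are placeholders:

<?php
// pg_dump_restore.php - per-database dump/restore using pg_dump's custom format.
// The database name and file path are placeholders; assumes pg_dump/pg_restore are
// on the PATH and local trust/peer authentication, as on a typical test box.

$db   = 'testdb';
$file = '/var/backups/testdb.dump';

$mode = $argv[1] ?? '';
if ($mode === 'dump') {
    // -Fc writes the compressed custom format instead of plain SQL.
    passthru(sprintf('pg_dump -Fc -f %s %s', escapeshellarg($file), escapeshellarg($db)));
} elseif ($mode === 'restore') {
    // --clean drops existing objects first; -j 4 restores with four parallel workers.
    passthru(sprintf('pg_restore --clean -j 4 -d %s %s', escapeshellarg($db), escapeshellarg($file)));
} else {
    fwrite(STDERR, "Usage: php pg_dump_restore.php dump|restore\n");
    exit(1);
}

The custom format also lets pg_restore reload with several parallel jobs, which claws back some of the speed difference compared with replaying a plain SQL dump.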
You can't swap out some files, or just one database within a cluster, because the transaction IDs and the pg_clog that keeps track of committed / rolled back transactions are global. So if you copy an old file back into a PostgreSQL database it'll likely cause all sorts of chaos - PostgreSQL might've discarded its knowledge about the old transaction IDs it didn't need to remember anymore, and suddenly you've resurrected them.
As Patrick said, you can do a file system level copy and restore, but you must do so for the whole database cluster (entire datadir), not just some files. The manual describes this in more detail.
(Patrick's answer is correct, I'm just explaining why you can't do it the way you thought).

Sync phpMyAdmin DBs Across Desktops

So I just set up my XAMPP Apache server to load all the documents I create on my Google Drive. For example, if I type 127.0.0.1, it will show me all my web files on my Google Drive. I set this up so I can develop across my laptop, which I use at school, and my desktop, which I use at home, without having to copy files back and forth from computer to computer. This works the way I want it to, but I forgot one thing: how am I supposed to sync the databases that I create? My question to you is, how can I sync my databases to the cloud or somewhere else so I don't have to export and import every time I switch devices?
Also, I would like to stay away from using hosting, as I won't be online all the time.
The database server (the application itself) expects exclusive access to the data files. If you try to synchronize a data file between two systems, you're going to have issues and probably data loss.
What you could do is synchronize the data directory and make sure you're only running one server at a time. So when you're done working on the laptop, shut down the MySQL server process/service (mysqld), wait for it to finish synchronizing, and then start up the mysqld on the desktop. I suspect this will work, but it's a pretty non-standard usage so anything could happen.
To make it easier, I'd definitely consider writing a wrapper script/batch file that first tests for the presence of a lock file, then (if none exists) creates one and starts mysqld, and on exit makes sure mysqld is stopped before deleting the lock file.
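A sketch of such a wrapper as a PHP CLI script. It assumes the data directory and the lock file live in the synced Google Drive folder, and that MySQL is installed as a Windows service named "mysql" (the XAMPP control panel can set that up) so net start/net stop control it; adjust paths and commands to your setup:

<?php
// mysql_guard.php - only start MySQL if no other machine holds the lock file.
// Usage: php mysql_guard.php start|stop
// Assumptions: the MySQL data directory and the lock file below live in the synced
// Google Drive folder, and MySQL runs as a Windows service named "mysql".

const LOCK_FILE = 'G:/My Drive/mysql-data/mysql.lock';

$mode = $argv[1] ?? '';

if ($mode === 'start') {
    if (file_exists(LOCK_FILE)) {
        fwrite(STDERR, 'MySQL appears to be running elsewhere: ' . file_get_contents(LOCK_FILE) . "\n");
        exit(1);
    }
    // Claim the lock (recording which machine took it), then start the service.
    file_put_contents(LOCK_FILE, gethostname() . ' at ' . date('c'));
    passthru('net start mysql');
} elseif ($mode === 'stop') {
    passthru('net stop mysql');
    // Release the lock only after the server has stopped, then let Drive finish
    // syncing the data directory before starting MySQL on the other machine.
    if (file_exists(LOCK_FILE)) {
        unlink(LOCK_FILE);
    }
    echo "Wait for Google Drive to finish syncing before starting MySQL elsewhere.\n";
} else {
    fwrite(STDERR, "Usage: php mysql_guard.php start|stop\n");
    exit(1);
}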
Anyway, to make this happen you would first stop mysqld everywhere, take the one mysql data directory that you wish to use, copy it to your Google Drive, then edit all of your MySQL configuration files to point to the new data directory instead of the old one. Whether XAMPP makes this more difficult than it should be, I'm not sure, but with stock MySQL it should be pretty trivial.
Remember that just because it's possible doesn't make it a good idea, and likewise, just because it's not a good idea doesn't mean it won't work. So I'm saying it's not a good idea to do this, but if done with proper attention it will "probably" work.
Hope that helps.

Two-way mySQL database sync between hosted and local production server

So the scenario is this:
I have a MySQL database on a local server running Windows Server 2008. The server is only meant to be accessible to users on our network and contains our company's production schedule information. I have what is essentially the same database running on a hosted Linux server, which is meant to be accessible online so our customers can connect to it and update their orders.
What I want to do is a two-way sync of two tables in the database so that the orders are current in both databases, and a one-way sync from our server to the hosted one for the data in the other tables. The front end to the database is written in PHP. I will describe what I am working with so far, and I would appreciate it if people could let me know whether I am on the right track or barking up the wrong tree, and hopefully point me in the right direction.
My first idea is to make (at the end of the PHP scripts that generate changes to the orders tables) an export of the changes that have been made, perhaps using SELECT ... INTO OUTFILE WHERE account = account or something similar. This would keep the size of the file small rather than exporting the entire orders table. What I am hung up on is (A) how to export this as an SQL file rather than a CSV, (B) how to include the information about what has been deleted as well as what has been inserted, and (C) how to fetch this file on the other server and execute the SQL statements.
I am looking into SSH and PowerShell currently but can't seem to formulate a solid vision of exactly how this will work. I am looking into cron jobs and Windows scheduled tasks as well. However, it would be best if somehow the updates simply occurred whenever there was a change rather than on a schedule to keep them synced in real time, but I can't quite figure that one out. I'd want to be running the scheduled task/cronjob at least once every few minutes, though I guess all it would need to be doing is checking if there were any dump files that needed to be put onto the opposing server, not necessarily syncing anything if nothing had changed.
Has anyone ever done something like this? We are talking about changing/adding/removing from 1 (min) to 160 (max) rows in the tables at a time. I'd love to hear people's thoughts about this whole thing as I continue researching my options. Thanks.
Also, just to clarify, I'm not sure if one of these is really a master or a slave. There isn't one that's always the accurate data, it's more the most recent data that needs to be in both.
One more note:
Another thing I am thinking about now is to add, at the end of the order-updating script on one side, another config/connect script pointing to the other server's database, and then rerun the exact same queries, since they have identical structures. Now that just sounds too easy... Thoughts?
You may not be aware that MySQL itself can be configured with databases on separate servers that opportunistically sync to each other. See here for some details; also, search around for MySQL ring replication. The setup is slightly brittle and will require you to learn a bit about MySQL replication. Or you can build a cluster; much higher learning curve but less brittle.
If you really want to roll it yourself, you have quite an adventure in design ahead of you. The biggest problem you have to solve is not how to make it work, it's how to make it work correctly after one of the servers goes down for an hour or your DSL modem melts or a hard drive fills up or...
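To see why, take the "rerun the same queries against both connections" idea from the question. It is only a few lines of PHP (the DSNs and credentials below are placeholders), but if either execute() fails -- connection drop, server down, timeout -- the two databases silently diverge and nothing in this approach brings them back in step:

<?php
// dual_write.php - naive "write the same change to both servers" approach.
// Placeholder DSNs/credentials. This is the fragile part: if either execute()
// fails, the two databases are out of sync and nothing here repairs that later.

$local  = new PDO('mysql:host=192.168.1.10;dbname=orders_db', 'app', 'secret');
$hosted = new PDO('mysql:host=db.example.com;dbname=orders_db', 'app', 'secret');
foreach ([$local, $hosted] as $pdo) {
    $pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
}

$sql    = 'UPDATE orders SET quantity = :qty WHERE id = :id AND account = :account';
$params = ['qty' => 25, 'id' => 117, 'account' => 'ACME'];

// Apply the identical change to both servers, one after the other.
foreach ([$local, $hosted] as $pdo) {
    $pdo->prepare($sql)->execute($params);
}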
Running a query against both a local and a remote server can be a problem if the connection breaks. It is better to store each query locally in a file, named something like YYYY-MM-DD-HH.sql, and then send the data every hour, once that hour has passed. The update period can be reduced to 5 minutes, for example.
This way, if the connection breaks, you can send all the leftover files once it is re-established.
At the end of each file, insert a CRC for checking the content.
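A loose sketch of that queue-and-ship idea in PHP; the directory, the hourly file naming and the CRC32 comment footer are all just illustrative choices:

<?php
// query_queue.php - append each replicated statement to an hourly file, sealing
// finished files with a CRC footer so the receiving side can verify them before
// replaying. Directory and naming below are illustrative only.

const QUEUE_DIR = __DIR__ . '/sync-queue';

function queueQuery(string $sql): void
{
    if (!is_dir(QUEUE_DIR)) {
        mkdir(QUEUE_DIR, 0775, true);
    }
    $file = QUEUE_DIR . '/' . date('Y-m-d-H') . '.sql';
    file_put_contents($file, rtrim($sql, "; \n") . ";\n", FILE_APPEND | LOCK_EX);
}

// Seal any file from a past hour by appending its checksum as a comment; it is
// then ready to be shipped (scp, SFTP, etc.) and replayed on the other server.
function sealFinishedFiles(): void
{
    $current = QUEUE_DIR . '/' . date('Y-m-d-H') . '.sql';
    foreach (glob(QUEUE_DIR . '/*.sql') as $file) {
        $contents = file_get_contents($file);
        if ($file === $current || strpos($contents, '-- CRC32:') !== false) {
            continue;   // still being written, or already sealed
        }
        $crc = sprintf('%08x', crc32($contents));
        file_put_contents($file, "-- CRC32: $crc\n", FILE_APPEND | LOCK_EX);
    }
}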

How to optimize upgrading web application?

I maintain a custom PHP application (built for me) that is hosted on a web server. Sometimes I add new features or fix bugs, and after testing locally I upload the changes to the web server. It's not a critical application (it's a game), but most of the time there are some people connected.
The steps that I make to upgrade the application:
Access via FTP (FileZilla)
Upload a .htaccess file that redirects everyone (except my IP) to a mantain.html file
Check that access is denied for every IP except mine
Back up the old code
Upload the new code
Go to phpMyAdmin
Back up the DB
Execute the scripts for the DB
Test that everything works fine (if not -> revert the backups)
Remove the .htaccess file
I usually spend an average of 30 minutes doing these steps, and I'm wondering if there is any way to optimize or automate them, or otherwise spend less time. I also know that if I can automate some steps, they are less prone to errors.
Several other answers suggest PHP-specific deployment tools, but being as I'm not very familiar with PHP, I'll offer some general tips. These suggestions may be redundant by some of the other tools already suggested, though.
First off, don't upload a new .htaccess file every time--just have two of them on your server. Perhaps call them .htaccess-permanent and .htaccess-maintenance. Then create a symlink to the one that ought to be active. Once you've tested that access is properly denied, you don't have to repeat that manual testing phase every single time you do an upgrade.
I'd also write a shell script to do most everything for me. My new work flow would look like this:
Upload new code to server in a directory called new/
Log in to the server via shell, and execute the upgrade script
Test the new site
Run upgrade-finalize
The end.
Now for the interesting part, the upgrade script will do this:
It will delete the .htaccess symlink, and re-create it, pointing to .htaccess-maintenance.
It will copy the current code in current/ to backup/
It will back up the DB, using the exact same commands that PHPMyAdmin uses
It will move the contents of new/ (which you just uploaded) to current/
It will execute the scripts for the DB
And the upgrade-finalize script will simply:
Delete the .htaccess symlink, and re-create it, pointing to .htaccess-permanent once again
The only possibly tricky part here will be getting the exact commands that phpMyAdmin uses to back up your database, but it's probably a simple mysqldump command, and you can probably get that information from phpMyAdmin's configuration or logs. Sorry, I don't know enough about phpMyAdmin to help in this specific area.
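Putting those steps together, the upgrade script might look roughly like the PHP CLI sketch below (PHP rather than shell only because that is what the stack already has). Every path, the database name and the credentials are assumptions to adapt, and it assumes the release uploaded to new/ does not ship its own .htaccess:

<?php
// upgrade.php - automate the deployment steps: maintenance mode on, back up the
// code and the DB, swap in the freshly uploaded code, run the DB scripts.
// Every path, database name and credential below is an assumption to adapt;
// it also assumes the release in new/ does not ship its own .htaccess.

$root = '/var/www/mygame';

$steps = [
    // 1. Point the .htaccess symlink at the maintenance version.
    sprintf('ln -sfn %s/.htaccess-maintenance %s/current/.htaccess', $root, $root),
    // 2. Back up the currently deployed code.
    sprintf('rm -rf %s/backup && cp -a %s/current %s/backup', $root, $root, $root),
    // 3. Back up the database (the same thing phpMyAdmin's export does); -p will prompt.
    sprintf('mysqldump -u gameuser -p game_db > %s/backup/game_db.sql', $root),
    // 4. Copy the freshly uploaded code into place.
    sprintf('cp -a %s/new/. %s/current/', $root, $root),
    // 5. Apply the DB changes shipped with this release.
    sprintf('mysql -u gameuser -p game_db < %s/new/db-migration.sql', $root),
];

foreach ($steps as $cmd) {
    passthru($cmd, $exit);
    if ($exit !== 0) {
        fwrite(STDERR, "Step failed, site left in maintenance mode: $cmd\n");
        exit($exit);
    }
}

echo "Upgrade applied. Test the site, then run the finalize step to restore .htaccess.\n";

The upgrade-finalize counterpart is then a one-liner that re-points the symlink at .htaccess-permanent.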
Look into a deployment tool like Capistrano that allows you to automate those steps.
I usually spend an average of 30 minutes doing these steps, and I'm wondering if there is any way to optimize or automate them, or otherwise spend less time.
There are many ways. For starters, steps one through eight can be done in a single shell script. You could check out Phing, an automated deployment system. Also, you might want to delve into continuous integration for even more control over how and when the software is deployed.
Doing this manually is, like you say, asking for trouble.
For starters, you could upload your files into a new webroot and, when done, switch over the DocumentRoot in Apache, leaving the site available during the copy process. For any shared files you could use a symlink to a common folder (e.g., uploaded images).
You could probably take the backup during operation as well if you don't care about consistency in the database. For migrations that don't "break" the functionality, you could also migrate and test on your new webroot under another hostname, if consistency isn't a problem.
The best option is always to use multiple web servers, so that you can take one offline for testing while the other remains operational, but you will still have problems with consistency. However, I assume that is not an option, since you don't mention it.
