I used WAMP when building a sign-in app for my company. In the table, the first 40 entries are test entries. My boss asked if I could erase these entries. My concern is that if I erase those, it will have an effect on the other entries (customers signing in) that have been made since. I only took one DB class in undergrad, and the professor told us that you don't change entries unless you have to: it is easy to screw up a DB and hard to fix it. I am wondering if anyone has advice about whether it is worth erasing the first 40 entries, or whether it could mess up the other entries. Basically, if it's not broken, is it worth fixing?
The front end of the app is Android, written in Java and XML. The back end is PHP that talks to the DB.
Deleting entries is generally fine; just understand the DB structure. Do the IDs for those users get referenced elsewhere? How does the DB connect data? The basic principle is as follows. On your development environment, migrate the database downstream (from production to development). On dev (and dev only!) delete the 40 entries. Test. I assume you don't have unit testing in place (write unit tests!), but you can still test all the functionality manually. Honestly, this should not hurt anything.
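To make that concrete, here is a rough PDO sketch of the check-then-delete step on dev; the table and column names (`signins`, `orders`, ids 1-40) are made-up placeholders, not the asker's real schema:

```php
<?php
// Sketch only -- run on DEV. Assumes a hypothetical `signins` table whose
// first 40 rows (ids 1-40) are the test entries, and a hypothetical
// `orders` table that might reference signins.id.
$pdo = new PDO('mysql:host=localhost;dbname=dev_db', 'dev_user', 'dev_pass');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

// 1. Check whether any other table references those ids before deleting.
$referenced = $pdo->query(
    'SELECT COUNT(*) FROM orders WHERE signin_id BETWEEN 1 AND 40'
)->fetchColumn();

if ($referenced > 0) {
    exit("Those ids are referenced elsewhere -- deal with the child rows first.\n");
}

// 2. Delete inside a transaction so a mistake can be rolled back.
$pdo->beginTransaction();
$deleted = $pdo->exec('DELETE FROM signins WHERE id BETWEEN 1 AND 40');
echo "Deleting $deleted rows\n";
$pdo->commit(); // or $pdo->rollBack() if the count looks wrong
```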
If you have to do this on production, then make a backup of the DB and test out deleting the entries. If it doesn't work, drop the DB and re-import the backup. This is not advised, because if you fail to import in a decent amount of time people are going to take notice. If this is your first database dump and re-import, test it on a throwaway database first, or duplicate the production database under a different name.
Also, you should look into PHP frameworks if your back end isn't currently using one. Laravel has 'migrations', which let you run code to update your database. This means you can write code on dev, test it, deploy to production, and just run the migration (all of it automatable). Here is some info on it: http://laravelbook.com/laravel-migrations-managing-databases/
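To give a feel for what such a migration looks like, here is a rough Laravel-style sketch; the table and column names are invented for the example, not from the asker's app:

```php
<?php
// Illustrative Laravel migration sketch; `signins` and `device` are
// hypothetical names, not the real schema.
use Illuminate\Database\Migrations\Migration;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\Schema;

class AddDeviceColumnToSignins extends Migration
{
    public function up()
    {
        Schema::table('signins', function (Blueprint $table) {
            $table->string('device', 100)->nullable(); // the new column
        });
    }

    public function down()
    {
        Schema::table('signins', function (Blueprint $table) {
            $table->dropColumn('device'); // makes the change reversible
        });
    }
}
```

You would run it with `php artisan migrate` on dev first, then on production as part of the deploy.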
Good luck and remember, ALWAYS TEST ON DEV FIRST.
For development, I have a set of tables in a database. I want to modify the tables during development and testing, but be able to go back to the original state I started with every time I run the application. Is there a way to achieve this without actually backing up and restoring every time?
OS: Windows 10. Software: XAMPP with MySQL (MariaDB) and PHP.
You can make a backup and then restore the backup with a different name. Then point the dev/test environment of your application (you do have a different "test" copy of your application as well, I hope?) at the new copy of the database.
Each time you want to "go back to the start", just restore the backup (with the alternative name) again.
Backup/restore is scriptable, so you can automate it if you need to.
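A minimal sketch of that automation against MySQL/MariaDB (which is what XAMPP ships); the file paths, database name and credentials below are placeholders:

```php
<?php
// Sketch: recreate a test database from a saved dump under a different name.
// Paths, names and credentials are placeholders.
$dumpFile = 'C:/backups/original_state.sql';
$testDb   = 'myapp_test';

// Drop and recreate the test database...
$pdo = new PDO('mysql:host=localhost', 'root', 'secret');
$pdo->exec("DROP DATABASE IF EXISTS `$testDb`");
$pdo->exec("CREATE DATABASE `$testDb`");

// ...then load the dump into it with the mysql client.
passthru(sprintf('mysql -u root -psecret %s < %s',
    escapeshellarg($testDb), escapeshellarg($dumpFile)), $exit);
echo $exit === 0 ? "Restored $testDb\n" : "Restore failed ($exit)\n";
```

Run this before each test run (or wire it into your test bootstrap) and you are always starting from the same state.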
If your data really is being exhausted as you say, there's no way around returning to the original state, but you can look for ways to make that faster and easier. If the DB is big and you want to shorten the restore time, look at the suggestions here:
https://dba.stackexchange.com/questions/83125/mysql-any-way-to-import-a-huge-32-gb-sql-dump-faster
You can also write a shell script that wraps the restore operations suggested in one of the answers at that link into a single command.
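For example, a rough sketch of such a wrapper in PHP (rather than shell), applying the usual import speed-up settings mentioned in those answers; paths and credentials are placeholders:

```php
<?php
// Sketch: wrap a large .sql dump in session speed-up settings, then import
// it in one go with the mysql client. Paths and credentials are placeholders.
$dump    = 'C:/backups/big_dump.sql';
$wrapped = 'C:/backups/big_dump_wrapped.sql';
$db      = 'myapp_test';

$out = fopen($wrapped, 'w');
fwrite($out, "SET autocommit=0; SET unique_checks=0; SET foreign_key_checks=0;\n");
$in = fopen($dump, 'r');
stream_copy_to_stream($in, $out); // stream so the dump never sits in memory
fclose($in);
fwrite($out, "\nCOMMIT;\nSET unique_checks=1; SET foreign_key_checks=1;\n");
fclose($out);

passthru(sprintf('mysql -u root -psecret %s < %s',
    escapeshellarg($db), escapeshellarg($wrapped)), $exit);
echo $exit === 0 ? "Import finished\n" : "Import failed ($exit)\n";
```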
I'm building an application and would like to know the best way to update the production database.
Right now I deploy my code (a CakePHP application, via a Git repository) just like in this tutorial: https://www.digitalocean.com/community/tutorials/how-to-set-up-automatic-deployment-with-git-with-a-vps
But what is the proper way to update the database structure (not the data)?
Let's say I create a new table or alter some fields in another table in development. Right now I export the query, then SSH into the server, connect to MySQL and run the query against the database.
But I'm sure there must be another, less complicated way.
What I like to do is write a set of functions for converting my data into the new fields/tables and so on.
For example, I recently had to change how I store transaction dates in my database, because the format I stored the dates in doesn't evaluate correctly during a query.
I wrote a PHP function that does the conversion for each field in the database.
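Roughly along these lines (the table, column and date formats below are invented to illustrate the idea, not the actual schema):

```php
<?php
// Sketch of a one-off conversion pass, assuming a hypothetical `transactions`
// table whose `trans_date` column held 'd/m/Y' strings and is being rewritten
// as 'Y-m-d' so date comparisons in queries work. Names are placeholders.
function convertTransactionDates(PDO $pdo)
{
    $rows   = $pdo->query('SELECT id, trans_date FROM transactions')
                  ->fetchAll(PDO::FETCH_ASSOC);
    $update = $pdo->prepare('UPDATE transactions SET trans_date = :d WHERE id = :id');

    foreach ($rows as $row) {
        $dt = DateTime::createFromFormat('d/m/Y', $row['trans_date']);
        if ($dt === false) {
            continue; // already converted or unparsable -- leave it for review
        }
        $update->execute([':d' => $dt->format('Y-m-d'), ':id' => $row['id']]);
    }
}

// Dry run against the backup copy first, as described in the steps below.
convertTransactionDates(new PDO('mysql:host=localhost;dbname=backup_copy', 'user', 'pass'));
```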
I grabbed a backup copy of the live production database.
I tested my function using a dry run on the backup copy of the live database.
Once everything checked out okay, I put my application/site into a temporary maintenance mode and grabbed another copy of the live database, to be sure that all interactions between my customers and the database were intact and there would be no gap.
I ran the functions on that copy of the database and re-uploaded it to my database server.
I took my site/application out of maintenance mode.
Having everything prepared and tested in advance meant only 3-5 minutes of downtime for the deployment.
Database migrations are what you need. If you are using CakePHP 2.x, read this:
http://book.cakephp.org/2.0/en/console-and-shells/schema-management-and-migrations.html
If you're on CakePHP 3.x, this:
http://book.cakephp.org/3.0/en/migrations.html
In your post-receive hook, you then have to set up a trigger to run the migrations on your production server.
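To give a feel for it, a CakePHP 3.x migration is just a PHP class handled by the Migrations plugin (built on Phinx); here is a rough sketch with made-up table and column names:

```php
<?php
// config/Migrations/20160101000000_AddStatusToOrders.php -- illustrative only;
// the `orders` table and `status` column are invented for the example.
use Migrations\AbstractMigration;

class AddStatusToOrders extends AbstractMigration
{
    public function change()
    {
        $table = $this->table('orders');
        $table->addColumn('status', 'string', [
            'default' => 'pending',
            'limit'   => 20,
            'null'    => false,
        ]);
        $table->update(); // alter the existing table
    }
}
```

On the server, the hook would then run `bin/cake migrations migrate` to apply anything that hasn't been applied yet.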
Why wouldn't you use migrations, since they're built into Cake? That would allow you to create them locally and then, when the code is pushed via Git, run the migration in production.
I have a site with both a production version (something.com) and a staging version not accessible to public used for testing / verification. The issue is that a lot of the testing is specific to the data which changes rapidly on production, making the staging database outdated fairly quickly. At first I would manually use mysqldump once or twice a week to back up the production DB and reimport it on staging. Now that the data on the live site changes a lot quicker, doing this only once a week isn't enough and the whole process is a bit tedious.
There has to be a way to do this automatically. I'm thinking I could make the staging DB accessible from the production server, write a script that dumps the production database and overwrites staging, put that in a nightly cron job, and be done with it. The database is quite big, though (the last backup was over 400 MB), so I was wondering if there might be a way to do incremental updates. There are multiple tables and the data isn't necessarily dated (for instance, it isn't user accounts with a created_on field), so I'm not really sure whether finding all the operations done during a specific time span is doable. Maybe there's a trigger that could log all the operations, which could then be run on the staging DB nightly?
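For what it's worth, a minimal sketch of that nightly dump-and-overwrite idea (host names, database names and credentials below are placeholders):

```php
<?php
// Sketch: nightly job on the production server that dumps the production
// database and reloads it into staging. Everything here is a placeholder;
// schedule it with cron, e.g. "0 3 * * * php /opt/scripts/refresh_staging.php".
$prodDb    = 'myapp_prod';
$stagingDb = 'myapp_staging';

// --single-transaction takes a consistent InnoDB snapshot without locking
// the production tables for the duration of the dump.
$cmd = sprintf(
    'mysqldump --single-transaction -h localhost -u dumper -pdumperpass %s'
    . ' | mysql -h staging.example.com -u stager -pstagerpass %s',
    escapeshellarg($prodDb),
    escapeshellarg($stagingDb)
);
passthru($cmd, $exit);
echo $exit === 0 ? "Staging refreshed\n" : "Refresh failed ($exit)\n";
```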
If it's of any help, the database is for a Symfony 2 application.
Thanks
This is a general Postgres backup and restore question, based on the following use case for a non-production server (i.e. a local testing server).
I have a ~20 GB database that I will mangle during the testing of a PHP script, which means I will need to drop it and recreate it quite often.
Running dumped SQL to restore it takes quite a lot of time, and I'm on a tight deadline, so I wondered if there was a method whereby I could speed up the process. I thought the following may work:
Create and populate the database initially.
Copy its data files to a secondary location.
Mangle the database with my testing.
Delete the data files and copy the copies back, restoring the original state.
But I don't know where to start or if there's some internal stuff happening that would prevent this from working.
Is the above possible, if so how is it achieved?
This isn't a closed question, if there are faster alternatives to what I'm asking for, please enlighten me. I'm open to suggestions.
Thanks.
Your fastest solution is probably to just do this via the file system.
Stop the server and make a local copy of your entire database cluster, i.e. everything under $PGDATA, inclusive. Start the server and do your mangling. When you need to refresh your database, stop the server and copy the files back in from your backup location. Note that this affects the entire cluster, so you cannot do this if other databases in the same cluster are in production use: everything is frozen in the state it was in when you first made the backup.
The alternative is to use pg_dump in binary mode, which is probably quite a bit slower than the manual method. It is, however, the only solution if other databases in the cluster need to be preserved.
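For that second option, a rough sketch using pg_dump's custom (binary) format and pg_restore; the database name, paths and user are placeholders:

```php
<?php
// Sketch: snapshot one database with pg_dump's custom format, then reset it
// later with pg_restore. Database name, path and user are placeholders.
$db       = 'testdb';
$snapshot = '/tmp/testdb.dump';

// Take the snapshot once, before the mangling starts (-Fc = custom format).
passthru(sprintf('pg_dump -U postgres -Fc -f %s %s',
    escapeshellarg($snapshot), escapeshellarg($db)));

// ... mangle the database during testing ...

// Reset: drop and recreate the database, then restore using parallel jobs.
passthru(sprintf('dropdb -U postgres %s && createdb -U postgres %s',
    escapeshellarg($db), escapeshellarg($db)));
passthru(sprintf('pg_restore -U postgres -j 4 -d %s %s',
    escapeshellarg($db), escapeshellarg($snapshot)));
```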
You can't swap out some files, or just one database within a cluster, because the transaction IDs and the pg_clog that keeps track of committed / rolled back transactions are global. So if you copy an old file back into a PostgreSQL database it'll likely cause all sorts of chaos - PostgreSQL might've discarded its knowledge about the old transaction IDs it didn't need to remember anymore, and suddenly you've resurrected them.
As Patrick said, you can do a file system level copy and restore, but you must do so for the whole database cluster (entire datadir), not just some files. The manual describes this in more detail.
(Patrick's answer is correct, I'm just explaining why you can't do it the way you thought).
So the scenario is this:
I have a MySQL database on a local server running Windows Server 2008. The server is only meant to be accessible to users on our network and contains our company's production schedule information. I have what is essentially the same database running on a hosted server running Linux, which is meant to be accessible online so our customers can connect to it and update their orders.
What I want to do is a two-way sync of two tables in the database so that the orders are current in both databases, and a one-way sync from our server to the hosted one for the data in the other tables. The front end to the database is written in PHP. I will describe what I am working with so far, and I would appreciate it if people could let me know whether I am on the right track or barking up the wrong tree, and hopefully point me in the right direction.
My first idea is to have the PHP scripts that change the orders tables export, at the end, the changes that have been made, perhaps using SELECT ... INTO OUTFILE with a WHERE account = account condition or something similar. This would keep the file small rather than exporting the entire orders table. What I am hung up on is (A) how to export this as an SQL file rather than a CSV, (B) how to include information about what has been deleted as well as what has been inserted, and (C) how to fetch this file on the other server and execute the SQL statements.
I am currently looking into SSH and PowerShell, but can't seem to formulate a solid vision of exactly how this will work. I am also looking into cron jobs and Windows scheduled tasks. Ideally the updates would simply occur whenever there is a change, rather than on a schedule, to keep the databases synced in near real time, but I can't quite figure that one out. I'd want the scheduled task/cron job running at least once every few minutes, though I guess all it would need to do is check whether there are any dump files that need to be pushed to the opposing server, and not sync anything if nothing has changed.
Has anyone ever done something like this? We are talking about changing/adding/removing from 1(min) to 160 lines(max) in the tables at a time. I'd love to hear people's thoughts about this whole thing as I continue researching my options. Thanks.
Also, just to clarify, I'm not sure either of these is really a master or a slave. There isn't one that always has the accurate data; it's more that the most recent data needs to be in both.
One more note:
Another thing I am thinking about now is to add, at the end of the order-updating script on one side, another config/connect script pointing to the other server's database, and then rerun the exact same queries there, since the two databases have identical structures. Now that just sounds too easy... Thoughts?
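If it helps, that double-write idea could look roughly like this (hosts, credentials and the query are placeholders, and it assumes the schemas really are identical):

```php
<?php
// Sketch of the "rerun the same query on both servers" idea. Hosts,
// credentials and the query are placeholders; schemas assumed identical.
$local  = new PDO('mysql:host=localhost;dbname=orders', 'local_user', 'local_pass');
$remote = new PDO('mysql:host=hosted.example.com;dbname=orders', 'remote_user', 'remote_pass');
$local->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$remote->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$sql    = 'UPDATE orders SET qty = :qty WHERE order_id = :id';
$params = [':qty' => 12, ':id' => 4711];

foreach ([$local, $remote] as $db) {
    try {
        $db->prepare($sql)->execute($params);
    } catch (PDOException $e) {
        // If the remote write fails (connection down, etc.) you still need a
        // queue/retry mechanism -- which is exactly the hard part the answers
        // below point out.
        error_log('Write failed on one server: ' . $e->getMessage());
    }
}
```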
You may not be aware that MySQL itself can be configured with databases on separate servers that opportunistically sync to each other. See here for some details; also, search around for MySQL ring replication. The setup is slightly brittle and will require you to learn a bit about MySQL replication. Or you can build a cluster; much higher learning curve but less brittle.
If you really want to roll it yourself, you have quite an adventure in design ahead of you. The biggest problem you have to solve is not how to make it work, it's how to make it work correctly after one of the servers goes down for an hour or your DSL modem melts or a hard drive fills up or...
Running a query against both a local and a remote server can be a problem if the connection breaks. It is better to store each query locally in a file named by date and hour (e.g. GG-MM-DD-HH.sql) and then send the data every hour, once that hour has passed. The update period can be reduced to 5 minutes, for example.
That way, if the connection breaks, the next successful connection picks up all the leftover files.
At the end of each file, insert a CRC for checking the content.
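A rough PHP sketch of that scheme, with invented file locations and a simple CRC32 trailer:

```php
<?php
// Sketch of the hourly file queue described above. Paths are placeholders.
// Each executed query is appended to a file named after the current date/hour;
// a scheduled task later ships completed files to the other server and
// replays them there.
function queueQuery(string $sql): void
{
    $file = __DIR__ . '/sync-queue/' . date('Y-m-d-H') . '.sql';
    file_put_contents($file, $sql . ";\n", FILE_APPEND | LOCK_EX);
}

// Once an hour (or every few minutes), seal any file older than the current
// hour by appending a CRC line so the receiver can verify the content.
function sealFile(string $file): void
{
    $crc = hash_file('crc32b', $file);
    file_put_contents($file, "-- CRC32: $crc\n", FILE_APPEND | LOCK_EX);
}

// On the receiving side, check the CRC before replaying the statements.
function verifyFile(string $file): bool
{
    $lines    = file($file);
    $crcLine  = array_pop($lines); // the last line holds the CRC
    $expected = trim(str_replace('-- CRC32: ', '', $crcLine));
    return hash('crc32b', implode('', $lines)) === $expected;
}
```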