Removed records in a database (MySQL Workbench) - PHP

I exported an SQL file containing only the table schema, with no data in it. I then wanted to import it into another, empty database so that it would create the empty tables there.
The problem is that the file was imported into the first, non-empty database instead, which reset its tables to an empty state.
So now all of my tables are empty.
To restate: I exported an SQL file of the empty table definitions from the staging database and tried to import it into a fresh, empty database. Instead, the file was accidentally imported into the staging database itself, and all the data was removed.
I had set "default target schema" in the import options to the right database, so I don't know what went wrong here.
My question is: is there a way to restore the removed data?

This is very bad news for your database. Your import file dropped your tables with lots of data and recreated them containing no rows. Ouch.
If you have a backup, restore it.
If you don't have a backup, you may (or may not) be able to restore some of your data by recovering deleted files. Your luck will be better if you used separate files for your InnoDB tables, or if you used MyISAM. At any rate, shut down your MySQL server instance and make the server on which it runs as quiet as possible, to reduce the probability that the OS will reclaim the deleted file space.
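If you are not sure whether the server was using separate files per table, you can check the setting from any MySQL client before shutting it down:
SHOW VARIABLES LIKE 'innodb_file_per_table';
Then stop the instance, e.g. with sudo service mysql stop (the exact service name depends on your system), and keep disk activity on that machine to a minimum.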
You'll have to ask on https://serverfault.com/ for further advice.
There is no escaping that this event will be difficult to recover from.

Related

Importing large amounts of data using MySQL/PHP

We created an import script which imports about 120 GB of data into a MySQL database. The data is saved in a few hundred directories (each one is a separate database). Each directory contains files with the table structures and the table data.
The issue: it works on my local machine with a subset of the actual data, but when the import is run on the server (which takes a few days), not all of the tables are created (even tables that were tested locally). The odd thing is that, when run on the server, the script does not show any errors during the creation of the tables.
Here is, at a high level, how the script works:
Find all directories that represent a database
Create all databases
Per database loop through the tables: create table, fill table
Added the code on gist: https://gist.github.com/3349872
Add more logging to see which steps succeeded, since you might be having problems with memory usage or execution times.
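Something along these lines, as a minimal sketch (the connection details and the per-directory file layout are assumptions, not taken from the gist):
<?php
// Log every step, plus memory usage, to a file so a silent failure on the
// server is still visible afterwards.
$mysqli = new mysqli('localhost', 'user', 'pass');

function step_log($message) {
    file_put_contents('import.log',
        date('Y-m-d H:i:s') . ' ' . $message .
        ' mem=' . memory_get_usage(true) . "\n", FILE_APPEND);
}

foreach (glob('/data/*', GLOB_ONLYDIR) as $dbDir) {          // one directory per database
    $dbName = basename($dbDir);
    step_log("creating database $dbName");
    $mysqli->query('CREATE DATABASE IF NOT EXISTS `' . $dbName . '`');
    $mysqli->select_db($dbName);

    foreach (glob($dbDir . '/*.structure.sql') as $file) {   // assumed naming scheme
        step_log("creating table from $file");
        $sql = trim((string) file_get_contents($file));
        if ($sql === '' || !$mysqli->query($sql)) {
            step_log("FAILED on $file: " . $mysqli->error);
            continue;                                        // log and keep going
        }
        step_log("created table from $file");
    }
}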
Why don't you create SQL files from the given CSV files and then just do a normal import in bash?
mysql -u root -ppassword db_something < db_user.sql
Ouch, the problem was in the code. An amazingly stupid mistake.
When testing the code on a subset of all the files, all of the table structure and table content files were available. When a table could not be created, the function wrote a logging statement and then returned. On the real data this was the mistake: there are files with no data and no structure, so after creating a few tables, the creation of one table in a certain database failed, the function returned, and the remaining tables of that database were never created.

dump selected data from one db to another in mysql

Here's the situation:
I have a MySQL db on a remote server. I need data from 4 of its tables. On occasion, the schema of these tables is changed (new fields are added, but not removed). At the moment, the tables have > 300,000 records.
This data needs to be imported into the localhost MySQL instance. These same 4 tables exist (with the same names), but the fields needed are a subset of the fields in the remote db tables. The data in these local tables is considered read-only and is never written to. Everything needs to be run in a transaction so there is always some data in the local tables, even if it is a day old. The localhost tables are used by an active website, so this entire process needs to complete as quickly as possible to minimize downtime.
This process runs once per day.
The options as I see them:
Get a mysqldump of the structure/data of the remote tables and save to file. Drop the localhost tables, and run the dumped sql script. Then recreate the needed indexes on the 4 tables.
Truncate the localhost tables. Run SELECT queries on the remote db in PHP and retrieve only the fields needed instead of the entire row. Then loop through the results and create INSERT statements from this data.
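A rough sketch of option 2 with PDO (host names, credentials and the products table/columns are made-up examples, and it assumes the local tables are InnoDB so the transaction actually holds):
<?php
$remote = new PDO('mysql:host=remote.example.com;dbname=remote_db', 'user', 'pass');
$local  = new PDO('mysql:host=localhost;dbname=local_db', 'user', 'pass');
$local->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$local->beginTransaction();
try {
    // DELETE instead of TRUNCATE: TRUNCATE would implicitly commit the transaction.
    $local->exec('DELETE FROM products');
    $insert = $local->prepare('INSERT INTO products (id, name, price) VALUES (?, ?, ?)');
    // Pull only the columns the local table needs, not the whole remote row.
    foreach ($remote->query('SELECT id, name, price FROM products') as $row) {
        $insert->execute(array($row['id'], $row['name'], $row['price']));
    }
    $local->commit();
} catch (Exception $e) {
    $local->rollBack();
    throw $e;
}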
My questions:
Performance-wise, which is my best option?
Which one will complete the fastest?
Will either one put a heavier load on the server?
Would indexing the tables take the same amount of time in both options?
If there is no good reason for having the local DB be a subset of the remote, make the structure the same and enable database replication on the needed tables. Replication works by the master tracking all changes made and managing each slave DB's pointer into the change log; each slave effectively asks the master for all changes since its last request. For a sizeable database, this is far more efficient than either alternative you have listed, and it comes at only modest cost.
As for schema changes, I think the ALTER information is logged by the master, so the slave(s) can replicate those as well. The mechanism definitely replicates DROP TABLE ... IF EXISTS and CREATE TABLE ... SELECT, so ALTER should logically follow, but I have not tried it.
Here it is: confirmation that ALTER is properly replicated.
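For the table-level filtering, the slave side is a couple of lines of configuration plus the usual CHANGE MASTER setup (database, table and host names below are placeholders; the master additionally needs server-id and log-bin enabled):
# slave my.cnf
server-id          = 2
replicate-do-table = remotedb.table_one
replicate-do-table = remotedb.table_two
replicate-do-table = remotedb.table_three
replicate-do-table = remotedb.table_four
-- then, on the slave:
CHANGE MASTER TO MASTER_HOST='remote.example.com', MASTER_USER='repl',
  MASTER_PASSWORD='secret', MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=4;
START SLAVE;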

Only import tables from a complete MySql database export

If I have exported a .sql file with my database in it, can I then only import "parts" of that database instead of the entire database to MySql?
The question appeared when I was trying it out on a test database.
I exported the test database.
Then I emptied some of the tables in the database.
Then I planned to import from the .sql file, hoping the emptied tables would be refilled with whatever they were populated with before.
But I get an error:
#1007 Can't create database 'database_name' - database exists
Of course it exists, but is it possible to import only the values for the already existing tables from the .sql backup?
Or must I remove the entire database and then import it again?
FYI I am using PhpMyAdmin for this currently.
It's straightforward to edit the file and remove the parts you're not interested in having restored, Camran.
Alternatively, import the entire file into a separate database (change the database name at the top of the file) and then use INSERT statements to copy the data from the tables in this new database to the other.
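The copy step is then one statement per table, along the lines of (database and table names are just examples; it assumes both tables have the same columns in the same order):
INSERT INTO maindb.users SELECT * FROM restoredb.users;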
I solved this problem by writing a script to dump each table into each individual file and all the CREATE TABLE statements off in their own file. It's gotten a bit long and fancy over the years, so I can't really post it here.
The other approach is to tell MySQL to ignore errors. With the CLI, you provide the -f switch, but I'm not familiar enough with PhpMyAdmin to know how to do that.
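For example (the user and file names are placeholders):
mysql -f -u username -p database_name < testdatabase.sql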

File+database transaction safety

I have a MySQL table which basically serves as a file index. The primary key of each record is also the name of a file in a directory on my web host.
When the user wants to delete a file from the system, I want to ensure some kind of transaction safety, i.e. if something goes wrong while deleting the file the record is not erased, and if for some reason the database server dies the file won't be erased. Either event occurring would be very unlikely, but if there's even the slightest chance of a problem I want to prevent it.
Unfortunately I have absolutely no idea how to implement this. Would I need to work out which is less likely to fail, and simply assume that it never will? Are there any known best practices for this?
Oh and here's the kicker - my web host only supports MyISAM tables, so no MySQL transactions for me.
In case it matters, I'm using PHP as my server-side scripting language.
Whether the file is "deleted" from the DB via an UPDATE or a DELETE of a row, the problem is the same: the database and file operations are not atomic. Neither an UPDATE nor a DELETE is safer than the other; they're both operations inside the database, whereas the file operation is not.
The solution is that there is never any conflict as to the state of the data. Only one source is considered "the truth" and the other reflects that truth. That way if there's ever an inconsistency between the two, you know what the "truth" is. In fact, there is never a "logical" inconsistency, only the aftermath manifested by physical artifacts on the disk.
In most cases, the Database is a better representation of The Truth.
Here's the truth table:
File exists -- DB record exists -- Truth
Yes -- No -- File does not exist
Yes -- Yes -- File does exist
No -- Yes -- File does exist, but it's in error
No -- No -- File does not exist
Operationally, here's how this works.
To create a file, copy the file to the final destination, then make an entry in the DB.
If the file copy fails, you don't update the DB.
If the file copy succeeds, but the DB is not updated, the file "does not exist", so back to step one.
If the file copy succeeds and the DB update succeeds, then everything is A-OK
To delete a file, first update the DB to show the file is deleted.
If the DB update succeeds, then delete the actual file.
If the DB update does not succeed, then do not delete the file.
If the file delete fails, no problem -- the file is still "deleted" because the DB says so.
If you follow the work flow, there's "no way" that the file should be missing while the DB says it exists. If the file goes missing, you have an undefined state, that you will need to resolve. But this shouldn't happen barring someone walking on your file system.
The DB transactions help keep things honest.
Occasionally, as Jonathan mentioned, you should run some kind of scavenging/syncing process to make sure there aren't any rogue files. But even then, that's really not an issue save for file space, especially if the names of the actual files have nothing to do with the original file names (i.e. they're synthetic file names). That way you don't have to worry about overwrites, etc.
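A sketch of the delete side of that workflow in PHP (table, column and path names are invented, and mysqli is just one way to do it):
<?php
$db = new mysqli('localhost', 'user', 'pass', 'filestore');

function delete_file($db, $fileName) {
    // Step 1: the database is the source of truth, so remove (or flag) the record first.
    $stmt = $db->prepare('DELETE FROM files WHERE name = ?');
    $stmt->bind_param('s', $fileName);
    if (!$stmt->execute()) {
        return false;                 // DB still says the file exists, so leave the disk alone
    }
    // Step 2: from here on the file is "deleted" no matter what; an unlink failure
    // only leaves a rogue file for a later scavenging pass to clean up.
    @unlink('/var/www/uploads/' . $fileName);
    return true;
}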
In the circumstances, I think I'd use a logical deletion mechanism where a record is marked as deleted even though it is still present in the database, and the file is still present in the file system. I might move the file to a new location when it is logically deleted (think 'recycle bin') and update the record with the new location as well as the 'logically deleted' marker.
Then, sometime later, you could do an offline scavenge operation to physically delete files and records marked as logically deleted.
This reduces the risk to the live data. It slightly complicates the SQL, but a view might work - rename the main table, then create a view with the same name as the main table used to have, but with criteria that eliminate logically deleted records:
CREATE VIEW MainTable(...) AS
SELECT * FROM RenamedTable WHERE DeleteFlag = 'N';
Even upgrading to a company that provides MySQL transactions is not a huge help. You would need a transaction manager which can run Two-Phase Commit protocols between the file system and the DBMS, which is non-trivial.
You can create a Status column (or an "is_active" column) in the File table with two values: 0=Active, 1=Deleted.
When a user deletes a file, only the Status field is changed and the file remains intact.
When a user browses files, only files with Status=0 are shown.
The Administrator can view/delete files with Status=1.
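In SQL terms (the files table and its columns are just an example):
ALTER TABLE files ADD COLUMN status TINYINT NOT NULL DEFAULT 0;  -- 0 = Active, 1 = Deleted
UPDATE files SET status = 1 WHERE name = 'report.pdf';           -- the user "deletes" a file
SELECT * FROM files WHERE status = 0;                            -- what ordinary users see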

SQL/PHP: How to upload big database to server when I have import file size limit? And then update

I'm creating a big database locally using MySQL and phpMyAdmin. I'm constantly adding a lot of info to the database. Right now I have more than 10 MB of data and I want to export the database to the server, but there is a 10 MB file size limit in the Import section of my web host's phpMyAdmin.
So the first question is: how can I split the data (or do something similar) so that I'm able to import it?
BUT, because I'm constantly adding new data locally, I also need to export the new data to the web host database.
So the second question is: how do I update the database on the host when the newly added data is mixed in among the 'old/already uploaded' data?
Don't use phpMyAdmin to import large files. You'll be way better off using the mysql CLI to import a dump of your DB. Importing is very easy: transfer the SQL file to the server and afterwards execute the following on the server (you can launch this command from a PHP script using shell_exec or system if needed):
mysql --user=user --password=password database < database_dump.sql
Of course the database has to exist, and the user you provide should have the necessary privilege(s) to update the database.
As for syncing changes : that can be very difficult, and depends on a lot of factors. Are you the only party providing new information or are others adding new records as well? Are you going modify the table structure over time as well?
If you're the only one adding data, and the table structure doesn't vary then you could use a boolean flag or a timestamp to determine the records that need to be transferred. Based on that field you could create partial dumps with phpMyAdmin (by writing a SQL command and clicking Export at the bottom, making sure you only export the data) and import these as described above.
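If you prefer the command line over phpMyAdmin's export page, the same kind of data-only partial dump can be produced with mysqldump (the is_synced flag column is hypothetical):
mysqldump --user=user --password=password --no-create-info --where="is_synced = 0" database table_name > partial_dump.sql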
BTW You could also look into setting up a master-slave scenario with MySQL, where your data is transferred automatically to the other server (just another option, which might be better depending on your specific needs). For more information, refer to the Replication chapter in the MySQL manual.
What I would do, in 3 steps:
Step 1:
Export your db structure, without content. This is easy to manage on the export page of phpMyAdmin. After that, I'd insert that into the new db.
Step 2:
Add a new BOOL column to every table in your local db. Its function is to store whether a row is new or not. Because of this, set the default to true.
Step 3:
Create a PHP script which connects to both databases. The script needs to get the data from your local database and put it into the new one.
I would do this with the following MySQL statements: SHOW TABLES (http://dev.mysql.com/doc/refman/5.0/en/show-tables.html), DESCRIBE (http://dev.mysql.com/doc/refman/5.0/en/describe.html), plus SELECT, UPDATE and INSERT.
Then you have to run your script every time you want to sync your local PC with the server.
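A rough sketch of such a script (credentials are placeholders; is_new is the BOOL column from step 2 and exists only in the local tables):
<?php
$local  = new PDO('mysql:host=localhost;dbname=mydb', 'local_user', 'local_pass');
$remote = new PDO('mysql:host=example.com;dbname=mydb', 'remote_user', 'remote_pass');

foreach ($local->query('SHOW TABLES')->fetchAll(PDO::FETCH_COLUMN) as $table) {
    $rows = $local->query("SELECT * FROM `$table` WHERE is_new = 1")->fetchAll(PDO::FETCH_ASSOC);
    foreach ($rows as $row) {
        unset($row['is_new']);                       // the server-side table has no flag column
        $columns = '`' . implode('`, `', array_keys($row)) . '`';
        $params  = rtrim(str_repeat('?, ', count($row)), ', ');
        $remote->prepare("INSERT INTO `$table` ($columns) VALUES ($params)")
               ->execute(array_values($row));
    }
    // Clear the flag locally so the same rows are not sent again on the next run.
    $local->exec("UPDATE `$table` SET is_new = 0 WHERE is_new = 1");
}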
