Tables persist after manually deleting database? - php

After I navigate to my database in C:\ProgramData\MySQL\MySQL Server 5.5\data and delete the folder, the tables are still saved somehow. When I try to run my PHP script again I get this error: "Error creating users table: Table 'databaseName.tableName' already exists."
The line of code that triggers that error is this:
mysql_query($createTableQuery) or die('Error creating users table: ' . mysql_error());
In order to fix this problem I have to rename the tables and re-run the script. It is becoming quite cumbersome having to find new table names every time I delete my database while testing my code.
Is anyone aware of a command to delete the tables as well? Or perhaps where the tables are stored on my computer so that I could manually delete them? I'd prefer to stay away from commands and rather know exactly where these tables were stored so that I could find them and delete them.

Are you aware of:
DROP TABLE [name];

You should be using DROP DATABASE, not deleting the files. There may be metadata stored elsewhere.
I'd prefer to stay away from commands
and rather know exactly where these
tables were stored so that I could
find them and delete them.
This is a naive way of thinking. Use the public interface (SQL), not the filesystem. What will you do if the storage mechanism changes? There are many storage engines in MySQL, and they don't all work the same way.
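For example, a quick way to reset a whole test database from SQL rather than from the filesystem (a minimal sketch; databaseName stands in for your actual database name) might be:
DROP DATABASE IF EXISTS databaseName;
CREATE DATABASE databaseName;
After that, re-running the PHP script recreates the tables from scratch, and no stale table definitions are left behind.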


Database not working after MAMP update [duplicate]

I am using Windows XP. I am creating a table in phpMyAdmin using its built-in create table feature; my database name is ddd.
It generates the following code:
CREATE TABLE `ddd`.`mwrevision` (
`asd` INT NOT NULL AUTO_INCREMENT PRIMARY KEY ,
`sddd` INT NOT NULL
) ENGINE = INNODB;
and the following error shows up:
MySQL said:
#1146 - Table 'ddd.mwrevision' doesn't exist
What might be the problem?
I also had the same problem in the past. It all happened after moving the database files to a new location and updating the MySQL server. All tables with the InnoDB engine disappeared from my database. I was trying to recreate them, but MySQL kept telling me 1146: Table 'xxx' doesn't exist until I recreated my database and restarted the MySQL service.
I think there's a need to read about InnoDB table binaries.
I had the same problem and couldn't find a good tip for it on the web, so I'm sharing this for you and anyone else who needs it.
In my situation I copied a database (all files: .frm, .MYD) into the MySQL data folder (using Wamp at home). Everything was OK until I wanted to create a table and got the error #1146 Table '...' doesn't exist!.
I use Wamp 2.1 with MySQL version 5.5.16.
My solution:
Export the database to a file;
verify that the exported file is really OK!!;
drop the database where I have issues;
create a new database with the same name as the old one;
import the file into the new database.
FOR ME THE PROBLEM IS SOLVED. Now I can create tables again without errors.
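For reference, the drop-and-recreate part of those steps can be done in plain SQL before re-importing the dump (a rough sketch, using the question's database name ddd as a stand-in):
-- after exporting and verifying the dump file
DROP DATABASE ddd;
CREATE DATABASE ddd;
-- then import the dump again (e.g. via phpMyAdmin's Import tab)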
Restarting MySQL works fine for me.
In my case I ran this command even though the table wasn't visible in phpMyAdmin:
DROP TABLE mytable
then
CREATE TABLE....
Worked for me !
Check filenames.
You might need to create a new database in phpmyadmin that matches the database you're trying to import.
I had the same problem. I tried to create a table in MySQL and got the same error. I restarted the MySQL server, ran the command again, and was able to create/migrate the table after restarting.
Today I was facing the same problem. I was in a very difficult situation, but what I did was create a table with a different name (e.g. modulemaster was not being created, so I created modulemaster1), and after creating the table I just renamed it.
I encountered the same problem today. I was trying to create a table users and was prompted with ERROR 1146 (42S02): Table 'users' doesn't exist, which did not make any sense, because I was just trying to create the table!
I then tried to drop the table by typing DROP TABLE users, knowing it would fail because it did not exist, and I got an error saying Unknown table 'users'. After getting this error, I tried to create the table again, and magically, it was successfully created!
My intuition is that I probably created this table before and it was not completely cleared somehow. By explicitly saying DROP TABLE I managed to reset the internal state somehow? But that is just my guess.
In short, try DROPping whatever table you are creating, and CREATE it again.
As pprakash mentions above, copying the table .frm files AND the ibdata1 file was what worked for me.
In short:
Shut down your DB explorer client (e.g. Workbench).
Stop the MySQL service (Windows host).
Make a safe copy of virtually everything!
Copy the table file(s) (e.g. mytable.frm) into the schema data folder (e.g. MySQL Server/data/{yourschema}).
Copy the ibdata1 file into the data folder (i.e. MySQL Server/data).
Restart the MySQL service.
Check that the tables are now accessible, queryable, etc. in your DB explorer client.
After that, all was well. (Don't forget to back up if you have success!)
Column names must be unique in the table. You cannot have two columns named asd in the same table.
Run from CMD, with %PATH% set to include mysql/bin:
mysql_upgrade -u user -ppassword
Recently I had the same problem, but on a Linux server. The database crashed, and I recovered it from a backup by simply copying /var/lib/mysql/* (the analogue of the MySQL data folder in Wamp). After the recovery I had to create a new table and got MySQL error #1146. I tried to restart MySQL, and it said it could not start. I checked the MySQL logs and found that MySQL simply had no access rights to its DB files. I checked the owner info of /var/lib/mysql/* and got 'myuser:myuser' (myuser is me). But it should be 'mysql:adm' (as it is on my own developer machine), so I changed the owner to 'mysql:adm'. After this MySQL started normally, and I could create tables and do any other operations.
So after moving database files or restoring from backups, check the access rights for MySQL.
Hope this helps...
The reason I was facing this was that I had two models.py files which contained slightly different fields.
I resolved it by:
deleting one of the models.py files
correcting references to the deleted file
then running manage.py syncdb
I got this issue after copying the mytable.ibd table file from another location. To fix the problem I did the following:
ALTER TABLE mydatabase.mytable DISCARD TABLESPACE;
Copy mytable.ibd into the database's data directory;
ALTER TABLE mydatabase.mytable IMPORT TABLESPACE;
Restart MySQL.
I had the same issue. It happened after a Windows startup error; it seems some files got corrupted because of it. I imported the DB again from the saved script and it works fine.
I had this problem because of a trigger not working. It worked after I deleted the trigger.
In my case, MySQL's lower_case_table_names parameter was configured as 0.
This causes queries that reference table names with a different letter case to fail.
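If you want to check how your own server is configured in this respect, one option (shown here as a generic sketch) is to inspect the variable directly:
SHOW VARIABLES LIKE 'lower_case_table_names';
A value of 0 means table names are stored as given and compared case-sensitively (typical on Linux); 1 means they are stored in lowercase and compared case-insensitively (the Windows default).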
For me it was a table name upper/lower case issue. I had to make sure that the table name's case matched in a delete query: the table notifications was not the same as Notifications. I fixed it by matching the table name case in the query with what MySQLWorkbench reported.
What is weird is that this error showed up in a previously working SQL statement. I don't know what caused this case sensitivity; perhaps an automatic AWS RDS update.
If you are modifying MySQL's bin/data directories and your database import no longer works afterwards, close Wamp and then start it again; the database import will then work fine.
Make sure you do not have a trigger that is trying to do something with the table mentioned in the error. I was receiving Error Code: 1146. Table 'exampledb.sys_diagnotics' doesn't exist on insert queries to another table in my production database. I exported the table schemas of my production database, then searched the schema SQL for instances of exampledb.sys_diagnotics and found a debugging insert statement I had added to a table trigger in my development environment; this debug statement had been copied to production. The exampledb.sys_diagnotics table was not present in my production database. The error was resolved by removing the debug statement from my table trigger.
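If you suspect a hidden trigger like this, you can list the triggers in a schema and inspect their bodies before deciding what to remove (a generic sketch; exampledb is the database name used in this answer):
SHOW TRIGGERS FROM exampledb;
SELECT TRIGGER_NAME, EVENT_OBJECT_TABLE, ACTION_STATEMENT
FROM information_schema.TRIGGERS
WHERE TRIGGER_SCHEMA = 'exampledb';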

Create Table Just Once?

I have a few nagging questions about creating tables:
If I use PHP to create a MySQL function to create a table, I know it works the first time (to create a database for usernames and passwords), but what about the following times, when the database sees the code to create the table again? It seems to be ignored on my virtual server, but I was just wondering if this is wrong. Does it keep trying to create a new table each time? Is it okay to leave that code in?
Another question I have is: let's say I go into PHPMyAdmin and add a column called "role" (to define the user's role). The sign-in page will crash since I added the column in PHPMyAdmin, but if I add the column using PHP/MySQL it is perfectly fine. Why is that?
CREATE TABLE is executed each time you run the function. It's better to replace the syntax with CREATE TABLE IF NOT EXISTS.
The keywords IF NOT EXISTS prevent an error from occurring if the table exists.
If you do not add IF NOT EXISTS, it will throw an error.
Reference: http://dev.mysql.com/doc/refman/5.7/en/create-table.html
Please post your code in the question so we can help you with the second query.
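As an illustration of the IF NOT EXISTS form mentioned above (a minimal sketch; the users table and its columns are made up for the example):
CREATE TABLE IF NOT EXISTS users (
    id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    username VARCHAR(64) NOT NULL,
    password_hash VARCHAR(255) NOT NULL
) ENGINE=InnoDB;
Running this on every page load is still wasteful, but it no longer aborts the script when the table already exists.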
1.) It depends on the purpose of the table.
If you need to create tables dynamically, then your code should check each time
whether the table exists:
CREATE TABLE IF NOT EXISTS `yourTable`
However, if you create the table only once, there is no need to check for its existence over and over again, so the code to create these table(s) should execute one time only.
2.) You need to update the function that does the insert or read after adding a column via PHPMyAdmin. It's difficult to answer your second question as I don't know what your functions do.
Do not keep your CREATE TABLE ... statements in your PHP code so that they execute every single time on every single page load. It's unnecessary and error prone. The statements are not being ignored; very likely they are running and producing errors, and you're simply not checking for them.
Database creation is a deployment step, meaning when you upload your code to your server, that's the one and only time when you create or modify databases. There are entire toolchains available around managing this process; learn something about automated deployment processes and database schema versioning at some point.
No idea without seeing your code and the exact error message.

Detect target write fields so that they can be backed up and potentially restored

Basically, I am trying to create an interface that will tell an administrator "Hey, we ran this query, and we weren't so sure about it, so if it broke things click here to undo it".
The easiest way I can think of to do this is to somehow figure out which tables and cells an identified "risky" query writes to, and store this data along with some bookkeeping data in a "backups" table, so that if necessary the fields can be repopulated with their original contents.
How do I go about figuring out which fields get overwritten by a particular (possibly complicated) mysql command?
Edit: "risky" in terms of completing successfully but doing unwanted things, not in terms of throwing an error or failing and leaving the system in an inconsistent state.
I suggest the following things:
- add an AFTER UPDATE trigger to every table you want to monitor
- create a copy of every table you want to monitor (example: [yourtable]_backup)
- in all AFTER UPDATE triggers, add code: INSERT INTO yourtable_backup VALUES(OLD.field1, OLD.field2..., OLD.fieldN)
How it works: the AFTER UPDATE trigger detects an update of the table and backs up the old values into the backup table (see the sketch after this answer).
Important: you need to use the INNODB table format for triggers to work. Triggers don't work with MyISAM tables.
You may add a timestamp field to the backup tables to know when each row was inserted.
Documentation: http://dev.mysql.com/doc/refman/5.5/en/create-trigger.html
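A minimal sketch of what one of those triggers could look like, assuming a monitored table yourtable(field1, field2) and a backup table yourtable_backup with a matching structure plus a timestamp column (all names here are illustrative):
DELIMITER $$
CREATE TRIGGER yourtable_backup_au
AFTER UPDATE ON yourtable
FOR EACH ROW
BEGIN
    -- keep the pre-update values, plus when they were captured
    INSERT INTO yourtable_backup (field1, field2, backed_up_at)
    VALUES (OLD.field1, OLD.field2, NOW());
END$$
DELIMITER ;
To restore, you would copy the relevant rows from yourtable_backup back into yourtable, keyed by whatever bookkeeping columns you record.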

MySQL archive data...what to do when it's too big

I use an INSERT INTO & DELETE FROM combination in a PHP script to take data out of an operational MySQL table and put it into an archive table.
The archive table has gotten too big. Even though no day-to-day operations are performed on it, mysqldump chokes when we back up (error 2013):
Error 2013: Lost connection to MySQL server during query when dumping table 'some_table' at row: 1915554
What can I do? Should my PHP script move it to another DB (how?)? Is it okay to keep the large table in the operational db?--in that case, how do I get around the mysqldump issue?
Thanks!
Are you by chance dumping using memory buffering and running out of swap and physical RAM? If so, you can try dumping row by row instead.
Try adding --quick to your mysqldump statement.
According to the documentation, you should combine --single-transaction with --quick.
Source: http://dev.mysql.com/doc/refman/5.5/en/mysqldump.html
See Will's answer above for the 2013 error code caused by an oversized table.
That, however, turned out not to be my problem. When I ran a SELECT with a WHERE id>500000 AND id<1000000 clause (for example), I quickly found out that a section of my data had been corrupted.
Because of this I couldn't copy the table contents over, I couldn't back up the table (or the database) using mysqldump, and I couldn't even run DELETE FROM to get rid of the corrupted rows.
Instead I used CREATE TABLE some_tbl_name SELECT * FROM corrupted_table WHERE id>500000 AND id<1000000, and once I had the data that wasn't corrupt saved into another table, I was able to drop the corrupted table and create a new one.
I'm not accepting my own answer because Will's is correct, but if anyone runs into the same issue, I've posted this here.
mysqldump --opt --max_allowed_packet=128M base_de_datos > bd.sql
it works for me
You can try --var_max_allowed_packet=??? and --var_net_buffer_length=???
You can also try disabling extended inserts: --skip-extended-insert
But this is assuming your diagnosis of too large of a table is correct.
Just how big is this table?
As for the second issue, try logging directly into the MySQL server and running mysqldump from there, preferably writing the dump to a local filesystem, but a network connection moving plain data is far more reliable than any SQL connection.

PHP / MySQL Conceptual Database 'Sync' question

I am working on a PHP class implementing PDO to sync a local database's table with a remote one.
The Question
I am looking for some ideas / methods / suggestions on how to implement a 'backup' feature in my 'syncing' process. The idea is: before the actual insert of the data takes place, I do a full wipe of the local table's data. Time is not a factor, so I figure this is the cleanest and simplest solution and I won't have to worry about checking for differences in the data and all that jazz. The problem is, I want to implement some kind of safety measure in case there is a problem during the insert of data, like loss of internet connection or something. The only idea I have so far is: copy said table to be synced -> wipe said table -> insert remote table's data into local table -> if successful, delete backup copy.
Check out mk-table-sync. It compares two tables on different servers, using checksums of chunks of rows. If a given chunk is identical between the two servers, no copying is needed. If the chunk differs, it copies just the chunk it needs. You don't have to wipe the local table.
Another alternative is to copy the remote data to a distinct table name. If it completes successfully, then DROP the old table and RENAME the new local copy to the original table's name. If the copy fails or is interrupted, then drop the local copy with the distinct name and try again. Meanwhile, your other local table with the previous data is untouched.
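A rough sketch of that load-then-swap approach in SQL (mytable is the local table being synced; mytable_new and mytable_old are illustrative staging names):
CREATE TABLE mytable_new LIKE mytable;
-- ... load the remote data into mytable_new here ...
RENAME TABLE mytable TO mytable_old, mytable_new TO mytable;
DROP TABLE mytable_old;
If the load fails partway through, you only need to DROP TABLE mytable_new and retry; the original mytable is never touched. RENAME TABLE swaps both names in a single atomic operation.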
The following is a web tool that syncs a database between you and the server or other developers.
It is Git-based, so you should use Git in the project.
But it is only helpful while developing an application; it is not a tool for comparing databases.
To sync databases, you regularly push code to Git.
Git project: https://github.com/hardeepvicky/DB-Sync
