I use an INSERT INTO & DELETE FROM combination in a PHP script to take data out of an operational MySQL table and put it into an archive table.
The archive table has gotten too big. Even though no day-to-day operations are performed on it, mysqldump chokes when we back up (error 2013):
Error 2013: Lost connection to MySQL server during query when dumping table 'some_table' at row: 1915554
What can I do? Should my PHP script move the data to another database (and how)? Or is it okay to keep the large table in the operational DB? In that case, how do I get around the mysqldump issue?
Thanks!
Are you by chance dumping using memory buffering and running out of swap and physical RAM? If so, you can try dumping row by row instead.
Try adding --quick to your mysqldump statement.
According to the documentation, you should combine --single-transaction with --quick.
Source: http://dev.mysql.com/doc/refman/5.5/en/mysqldump.html
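For example, a dump command combining the two options might look like this (a sketch; substitute your own user and database names):
mysqldump --single-transaction --quick -u backup_user -p my_database > my_database.sql
--single-transaction takes a consistent snapshot of InnoDB tables without locking them, and --quick streams rows one at a time instead of buffering each table in memory, which is usually what you want for very large tables.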
See Will's answer for a 2013 error caused by an oversized table.
That, however, turned out not to be my problem. When I ran a SELECT with a WHERE id>500000 AND id<1000000 clause (for example), I quickly found out that a section of my data had been corrupted.
Because of this I couldn't copy the table content over, I couldn't back up the table (or the database) using mysqldump, and I couldn't even use DELETE FROM to get rid of the corrupted rows.
Instead I used CREATE TABLE some_tbl_name SELECT * FROM corrupted_table WHERE id>500000 AND id<1000000 and then once I had the data that wasn't corrupt saved into another table, I was able to drop the corrupted table and create a new one.
I'm not accepting my own answer because Will's is correct, but if anyone runs into the same issue, I've posted it here.
mysqldump --opt --max_allowed_packet=128M base_de_datos > bd.sql
It works for me.
You can try --max_allowed_packet=??? and --net_buffer_length=???
You can also try disabling extended inserts: --skip-extended-insert
But this assumes your diagnosis of an oversized table is correct.
Just how big is this table?
As for the second issue, try logging in to the MySQL server directly and running mysqldump from there, preferably writing the dump to a local filesystem; even moving the plain dump file over the network afterwards is far more reliable than any SQL connection.
I am using Windows XP. I am creating a table in phpMyAdmin using its built-in create table feature;
my database name is ddd.
It generates the following code:
CREATE TABLE `ddd`.`mwrevision` (
`asd` INT NOT NULL AUTO_INCREMENT PRIMARY KEY ,
`sddd` INT NOT NULL
) ENGINE = INNODB;
and the following error shows up:
MySQL said:
#1146 - Table 'ddd.mwrevision' doesn't exist
What might be the problem?
I also had the same problem in the past. It all happened after moving the database files to a new location and updating the MySQL server. All tables using the InnoDB engine disappeared from my database. I tried to recreate them, but MySQL kept telling me 1146: Table 'xxx' doesn't exist until I recreated my database and restarted the MySQL service.
I think it's worth reading up on how InnoDB stores its table files.
I had the same problem and couldn't find a good tip for it on the web, so I'm sharing this for anyone who needs it.
In my situation I copied a database (all files: .frm, .myd) into the MySQL data folder (using Wamp at home). Everything was OK until I wanted to create a table and got the error #1146 Table '...' doesn't exist!
I use Wamp 2.1 with MySQL version 5.5.16.
My solution:
Export the database to a file;
verify that the exported file is really OK!!;
drop the database where I have the issue;
create a new database with the same name as the old one;
import the file into the database.
Problem solved, for me at least. Now I can create tables again without errors.
Restarting MySQL works fine for me.
In my case I ran this command even though the table wasn't visible in phpMyAdmin:
DROP TABLE mytable
then
CREATE TABLE....
Worked for me!
Check filenames.
You might need to create a new database in phpmyadmin that matches the database you're trying to import.
I had the same problem. I tried to create a table in MySQL and got the same error. I restarted the MySQL server, ran the command again, and was able to create/migrate the table after restarting.
Today I was facing the same problem. I was in a very difficult situation, but what I did was create a table with a different name (e.g. modulemaster was not being created, so I created modulemaster1), and after creating the table I simply renamed it.
I encountered the same problem today. I was trying to create a table users, and was prompted that ERROR 1146 (42S02): Table users doesn't exist, which did not make any sense, because I was just trying to create the table!!
I then tried to drop the table by typing DROP TABLE users, knowing it would fail because it did not exist, and I got an error, saying Unknown table users. After getting this error, I tried to create the table again, and magically, it successfully created the table!
My intuition is that I probably created this table before and it was not completely cleared somehow. By explicitly saying DROP TABLE I managed to reset the internal state somehow? But that is just my guess.
In short, try DROP whatever table you are creating, and CREATE it again.
As pprakash mentions above, copying the table's .frm files AND the ibdata1 file was what worked for me.
In short:
Shut your DB explorer client (e.g. Workbench).
Stop the MySQL service (Windows host).
Make a safe copy of virtually everything!
Save a copy of the table file(s) (e.g. mytable.frm) to the schema data folder (e.g. MySQL Server/data/{yourschema}).
Save a copy of the ibdata1 file to the data folder (i.e., MySQL Server/data).
Restart the MySQL service.
Check that the tables are now accessible, queryable, etc. in your DB explorer client.
After that, all was well. (Don't forget to backup if you have success!)
Column names must be unique in the table. You cannot have two columns named asd in the same table.
Run this from CMD (with %PATH% set to include mysql/bin):
mysql_upgrade -u user -ppassword
Recently I had the same problem, but on a Linux server. The database had crashed, and I recovered it from a backup simply by copying /var/lib/mysql/* (analogous to the MySQL data folder in Wamp). After the recovery I had to create a new table and got MySQL error #1146. I tried to restart MySQL, and it said it could not start. I checked the MySQL logs and found that MySQL simply had no access rights to its DB files. I checked the owner info of /var/lib/mysql/* and got 'myuser:myuser' (myuser is me). But it should be 'mysql:adm' (as it is on my own development machine), so I changed the owner to 'mysql:adm'. After this MySQL started normally, and I could create tables and do any other operations.
So after moving database files or restoring from backups, check the access rights for MySQL.
Hope this helps...
The reason I was facing this was that I had two "models.py" files which contained slightly different fields.
I resolved it by:
deleting one of the models.py files
correcting references to the deleted file
then running manage.py syncdb
I got this issue after copying the mytable.ibd table file from another location. To fix this problem I did the following:
ALTER TABLE mydatabase.mytable DISCARD TABLESPACE;
Copy mytable.ibd
ALTER TABLE mydatabase.mytable IMPORT TABLESPACE;
Restart MySQL
I had the same issue. It happened after a Windows start-up error; it seems some files got corrupted because of it. I imported the DB again from the saved script and it works fine.
I had this problem because of a trigger that wasn't working. It worked after I deleted the trigger.
In my case, MySQL's lower_case_table_names parameter was configured to 0.
That makes table names case-sensitive, so queries that use the wrong case (e.g. upper case where the table name is lower case) will not work.
For me it was a table name upper/lower case issue. I had to make sure the table name's case matched in a delete query; the table notifications was not the same as Notifications. I fixed it by matching the table name case in the query with what MySQL Workbench reported.
What is weird is that this error showed up in a previously working SQL statement. I don't know what caused the case sensitivity. Perhaps an automatic AWS RDS update.
If you have been modifying MySQL's bin/data directories and your database import no longer works afterwards, close Wamp and then start it again; the database import will then work fine.
Make sure you do not have a trigger that is trying to do something with the table mentioned in the error. I was receiving Error Code: 1146. Table 'exampledb.sys_diagnotics' doesn't exist on insert queries to another table in my production database. I exported the table schemas of my production database, then searched for instances of exampledb.sys_diagnotics in the schema SQL and found a debugging insert statement I had added to a table trigger in my development environment; this debug statement had been copied to production. The exampledb.sys_diagnotics table was not present in my production database. The error was resolved by removing the debug statement from my table trigger.
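If you suspect a stray trigger, one quick way to hunt it down (a sketch; replace exampledb with your own schema name) is to list the triggers and their bodies from information_schema:
-- show every trigger in the schema along with the statement it runs
SELECT TRIGGER_NAME, EVENT_OBJECT_TABLE, ACTION_STATEMENT
FROM information_schema.TRIGGERS
WHERE TRIGGER_SCHEMA = 'exampledb';
Any trigger whose ACTION_STATEMENT references the missing table is a candidate for editing or DROP TRIGGER.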
I was wondering if there is a (free) tool for MySQL/PHP benchmarking.
In particular, I would like to insert thousands of rows into the MySQL database and test the application with concurrent queries to see if it holds up. That is, test the application in the worst case.
I saw some paid tools, but no free or customizable ones.
Any suggestions, or any scripts?
Thanks!
Insert one record into the table.
Then do:
INSERT IGNORE INTO table SELECT FLOOR(RAND()*100000) FROM table;
Then run that line several times. Each time you will double the number of rows in the table (and doubling grows VERY fast). This is a LOT faster than generating the data in PHP or other code. You can modify which columns you select RAND() from, and what the range of the numbers is. It's possible to randomly generate text too, but more work.
You can run this code from several terminals at once to test concurrent inserts. The IGNORE will ignore any primary key collisions.
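Put together, a minimal sketch of that technique looks like this (the table and column names are made up; adjust the RAND() range as needed):
-- one-column test table; the primary key is what makes IGNORE skip collisions
CREATE TABLE load_test (id INT NOT NULL PRIMARY KEY);
INSERT INTO load_test VALUES (1);
-- run this repeatedly; each pass roughly doubles the row count,
-- minus whatever random values collide with existing ids
INSERT IGNORE INTO load_test SELECT FLOOR(RAND()*100000) FROM load_test;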
Make a loop (probably infinite) that keeps inserting data into the database and test from there.
for($i=1;$i<=1000;$i++){
mysql_query("INSERT INTO testing VALUES ('".$i."')");
//do some other testing
}
for($i=1;$i<5000;$i++){
$query = mysql_query("INSERT INTO something VALUES ($i)");
}
replace something with your table ;D
If you want to test concurrency, you will have to run your insert/update statements in parallel.
An easy and very simple way (without going into forks/threads and all that jazz) is to do it in bash as follows:
1. Create an executable PHP script
#!/usr/bin/php -q
<?php
/*your php code to insert/update/whatever you want to test for concurrency*/
?>
2. Call it within a for loop by appending & so it goes in the background.
#!/bin/bash
for((i=0; i<100; i++))
do
/path/to/my/php/script.sh &
done
wait;
You can always extend this by creating multiple PHP scripts with various insert/update/select queries and running them through the for loop (remember to change i<100 to a higher number if you want more load). Just don't forget to add the & after you call your script. Of course, you will need to chmod +x myscript.sh.
Edit: Added the wait statement; below it you can write any other commands/stuff you may want to run after flooding your MySQL DB.
I did a quick search and found the following page at MySQL documentation => http://dev.mysql.com/doc/refman/5.0/en/custom-benchmarks.html. This page contains the following interesting links:
the Open Source Database Benchmark, available at http://osdb.sourceforge.net/.
For example, you can try benchmarking packages such as SysBench and DBT2, available at http://sourceforge.net/projects/sysbench/ and http://osdldbt.sourceforge.net/#dbt2. These packages can bring a system to its knees, so be sure to use them only on your development systems.
For MySQL to be fast you should look into Memcached or Redis to cache your queries. I like Redis a lot, and you can get a free (small) instance thanks to http://redistogo.com. Most of the time the READS are what kill your server, not the WRITES, which are usually less frequent. And when WRITES are frequent, most of the time it is not a big deal if you lose some data. Sites with big WRITE rates are, for example, Twitter or Facebook, and it is not the end of the world if a tweet or a Facebook wall post gets lost. As I pointed out, you can ease the read load easily by using Memcached or Redis.
If the WRITES are killing you, look into bulk inserts if possible, transactional inserts, delayed inserts when not using InnoDB, or partitioning. If the data is not really critical, you could also buffer the queries in memory first and do a bulk insert periodically; the catch is that reads from MySQL may then return stale data (which could be a problem). Then again, with Redis you could easily keep all your data in memory, but if your server crashes you can lose data, which could be a big problem.
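As a rough illustration of the caching idea (this assumes the phpredis extension and a local Redis server; the query, key name, and connection details are placeholders), a read-through cache can be as small as this:
<?php
// read-through cache: serve from Redis when possible, fall back to MySQL
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

$cached = $redis->get('top_posts');
if ($cached === false) {
    $db = new mysqli('localhost', 'user', 'pass', 'mydb');
    $result = $db->query('SELECT id, title FROM posts ORDER BY views DESC LIMIT 10');
    $cached = json_encode($result->fetch_all(MYSQLI_ASSOC));
    $redis->setex('top_posts', 60, $cached);  // keep it for 60 seconds
}
$posts = json_decode($cached, true);
?>
Every request within the 60-second window is then served straight from memory, so MySQL only sees one read per minute for that query.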
I have a PHP script that will execute for about 1 hour each time, and during its runtime it will need to store a steady stream of comments about what it's doing for me to view later. Basically, each comment includes a timestamp and a short description, such as "2/25/2010 6:40:29 PM: Updated the price of item 255".
So which is faster, outputting that to a .txt file, or inserting it into a MySQL database? Also, should I use the timestamp from PHP's date(), or should I create a time object in MySQL?
Second part of my question is that since the program is going to run for about an hour, should I connect to MySQL, insert data, and close the connection to the MySQL database each time I log a comment, or should I just connect once, insert data for the runtime of the program, and then close the connection when the program exits, about an hour after creating the initial connection?
Thank you in advance for all your advice.
It depends on your need for the data at the end of the day. Do you need to be able to audit the data beyond scrolling through a file? If you don't need to browse the data or store it in perpetuity, then a flat file will most likely be faster than MySQL, assuming you are just appending to the end of the file.
If you need the data to be more useful, you'll want to store it in mysql. I would suggest that you structure your table like:
id int
timestamp datetime default now()
desc varchar
That way you don't have to create a timestamp in PHP at all; you just let MySQL do the work, and then you'll be able to run more complex queries against your table. Another consideration is the volume of data going into this table, as that will also affect your final decision.
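A sketch of such a table (names are placeholders; note that desc is a reserved word in MySQL, so something like description is safer, and TIMESTAMP DEFAULT CURRENT_TIMESTAMP also works on older servers where DATETIME columns can't have a function default):
CREATE TABLE script_log (
  id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  logged_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
  description VARCHAR(255) NOT NULL
);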
If you're simply logging information for viewing later, writing to file will be quicker. Writing to the database still has to write somewhere, and you get the added overhead of the database engine.
In my experience, it's much faster overall to write the .txt file than to use MySQL to write the log. See, if you write comments into the DB, then you have to write more code to get those comments out of the DB later, instead of just using cat or more or vi or similar to see the comments.
If you choose the DB route: It's perfectly OK to keep a connection open for your hour, but you have to be able to handle "server went away" in case you haven't written to the DB in a while.
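A minimal sketch of that guard, assuming mysqli and a log table like the one suggested above (connection details and names are placeholders):
<?php
// keep one connection for the whole run, but re-open it if MySQL dropped it
$db = new mysqli('localhost', 'user', 'pass', 'mydb');

function log_comment(mysqli &$db, $text) {
    if (!$db->ping()) {                      // did the server go away while idle?
        $db = new mysqli('localhost', 'user', 'pass', 'mydb');
    }
    $stmt = $db->prepare('INSERT INTO script_log (description) VALUES (?)');
    $stmt->bind_param('s', $text);
    $stmt->execute();
    $stmt->close();
}
?>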
-- pete
I've got an application which needs to run a daily script; the daily script consists of downloading a CSV file with 1,000,000 rows and inserting those rows into a table.
I host my application on Dreamhost. I created a while loop that goes through all the CSV's rows and performs an INSERT query for each one. The thing is that I get a "500 Internal Server Error". Even if I chop it up into 1,000 files with 1,000 rows each, I can't insert more than 40 or 50 thousand rows in the same loop.
Is there any way that I could optimize the input? I'm also considering going with a dedicated server; what do you think?
Thanks!
Pedro
Most databases have an optimized bulk insertion process; MySQL's is the LOAD DATA INFILE syntax.
To load a CSV file, use:
LOAD DATA INFILE 'data.txt' INTO TABLE tbl_name
FIELDS TERMINATED BY ',' ENCLOSED BY '"'
LINES TERMINATED BY '\r\n'
IGNORE 1 LINES;
Insert multiple values, instead of doing
insert into table values(1,2);
do
insert into table values (1,2),(2,3),(4,5);
Up to an appropriate number of rows at a time.
Or do bulk import, which is the most efficient way of loading data, see
http://dev.mysql.com/doc/refman/5.0/en/load-data.html
Normally I would say just use LOAD DATA INFILE, but it seems you can't with your shared hosting environment.
I haven't used MySQL in a few years, but they have a very good document describing how to speed up bulk insertions:
http://dev.mysql.com/doc/refman/5.0/en/insert-speed.html
A few ideas that can be gleaned from this:
Disable/enable keys around the insertions:
ALTER TABLE tbl_name DISABLE KEYS;
ALTER TABLE tbl_name ENABLE KEYS;
Use many values in your insert statements.
I.e.: INSERT INTO table (col1, col2) VALUES (val1, val2),(.., ..), ...
If I recall correctly, you can have up to 4096 values per insertion statement.
Run a FLUSH TABLES command before you even start, to ensure that there are no pending disk writes that may hurt your insertion performance.
I think this will make things fast. I would suggest using LOCK TABLES, but I think disabling the keys makes that moot.
UPDATE
I realized after reading this that by disabling your keys you may remove consistency checks that are important for your file loading. You can fix this by:
Ensuring that your table has no data that "collides" with the new data being loaded (if you're starting from scratch, a TRUNCATE statement will be useful here).
Writing a script to clean your input data to ensure no duplicates locally. Checking for duplicates is probably costing you a lot of database time anyway.
If you do this, ENABLE KEYS should not fail.
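Putting those pieces together, the whole sequence might look something like this (table and column names are placeholders; keep in mind that DISABLE KEYS only skips non-unique index updates, and only on MyISAM tables):
FLUSH TABLES;
ALTER TABLE import_target DISABLE KEYS;
-- repeat multi-row INSERTs like this in batches of a few thousand rows each
INSERT INTO import_target (col1, col2) VALUES (1, 'a'), (2, 'b'), (3, 'c');
ALTER TABLE import_target ENABLE KEYS;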
You can create a cronjob script which adds x records to the database per run.
The cronjob script checks whether the previous import added all the needed rows; if not, it takes another x rows.
That way you can add as many rows as you need.
If you have your own dedicated server it's much easier: you just run a loop with all the insert queries.
Of course, you can also try setting time_limit to 0 (if that works on Dreamhost) or raising it.
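A rough sketch of such a cronjob script (file, table, and column names are all made up; each run imports the next chunk and records how far it got):
<?php
// import the next $chunk CSV rows each time cron runs this script
$chunk  = 1000;
$offset = (int) @file_get_contents('import_offset.txt');

$db = new mysqli('localhost', 'user', 'pass', 'mydb');
$fh = fopen('data.csv', 'r');

// skip the rows that earlier runs already imported
for ($i = 0; $i < $offset && fgetcsv($fh) !== false; $i++) {
}

$stmt = $db->prepare('INSERT INTO items (sku, price) VALUES (?, ?)');
for ($i = 0; $i < $chunk && ($row = fgetcsv($fh)) !== false; $i++) {
    $stmt->bind_param('ss', $row[0], $row[1]);
    $stmt->execute();
}

file_put_contents('import_offset.txt', $offset + $i);
fclose($fh);
?>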
Your PHP script is most likely being terminated because it exceeded the script time limit. Since you're on a shared host, you're pretty much out of luck.
If you do switch to a dedicated server and if you get shell access, the best way would be to use the mysql command-line tool to insert the data.
OMG Ponies' suggestion is great, but I've also 'manually' formatted data into the same format that mysqldump uses and then loaded it that way. Very fast.
Have you tried doing transactions? Just send the command BEGIN to MySQL, do all your inserts, then do COMMIT. This would speed it up significantly, but like casablanca said, your script is probably timing out as well.
I've run into this problem myself before, and nos pretty much hit it on the head, but you'll need to do a bit more to get the best performance.
I found that in my situation I couldn't get MySQL to accept one large INSERT statement, but if I split it up into groups of about 10k INSERTs at a time, as nos suggested, it does its job pretty quickly. One thing to note is that when doing multiple INSERTs like this you will most likely hit PHP's timeout limit, but this can be avoided by resetting the timeout with set_time_limit($seconds); I found that doing this after each successful INSERT worked really well.
You have to be careful about doing this, because you could accidentally find yourself in a loop with an unlimited timeout. So I would suggest testing that each INSERT was successful, either by checking for errors reported by MySQL with mysql_errno() or mysql_error(), or by checking the number of rows affected by the INSERT with mysql_affected_rows(), and stopping after the first error happens.
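For what it's worth, here's a sketch of that pattern using the same old mysql_* API as the examples above (connection details, file, table, and column names are placeholders):
<?php
// batched multi-row INSERTs with a timeout reset and an error check per batch
$link = mysql_connect('localhost', 'user', 'pass');
mysql_select_db('mydb', $link);

$lines = file('data.csv', FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);
$batch = array();

foreach ($lines as $line) {
    $batch[] = "('" . mysql_real_escape_string($line) . "')";
    if (count($batch) >= 10000) {
        mysql_query('INSERT INTO items (name) VALUES ' . implode(',', $batch));
        if (mysql_errno()) {
            die('Import stopped: ' . mysql_error());
        }
        set_time_limit(60);   // push the script timeout back after each batch
        $batch = array();
    }
}
if ($batch) {
    mysql_query('INSERT INTO items (name) VALUES ' . implode(',', $batch));
}
?>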
It would be better if you used SQL*Loader.
You would need two things: first, a control file that specifies the actions SQL*Loader should perform, and second, the CSV file that you want loaded.
Here is a link that should help you out.
http://www.oracle-dba-online.com/sql_loader.htm
Go to phpmyadmin and select the table you would like to insert into.
Under the "operations" tab, and then the ' table options' option /section , change the storage engine from InnoDB to MyISAM.
I once had a similar challenge.
Have a good time.
I have taken on a small project that involves an existing database. The application was written in PHP and the database is MySQL.
I am rewriting the application, yet I still need to maintain the database's structure as well as its data. I have received an SQL dump file. When I try running it in SQL Server Management Studio I receive many errors. I wanted to know what workaround there is to convert the SQL script from the phpMyAdmin dump file into T-SQL.
Any Ideas?
phpMyAdmin is a front-end for MySQL databases. Dumping databases can be done in various formats, including SQL script code, but I guess your problem is that you are using SQL Server, and T-SQL is different from MySQL.
EDIT: I see the original poster was aware of that (there was no MySQL tag on the post). My suggestion would be to re-dump the database in CSV format (for example) and to import via bulk insert, for example, for a single table,
CREATE TABLE MySQLData [...]
BULK
INSERT MySQLData
FROM 'c:\mysqldata.txt'
WITH
(
FIELDTERMINATOR = ',',
ROWTERMINATOR = '\n'
)
GO
This should work fine if the database isn't too large and has only few tables.
You do have more problems than making a script run, by the way: Mapping of data types is definitely not easy.
Here is an article about migrating MySQL -> SQL Server via the DTS Import/Export wizard, which may well be a good route if your database is large (and you still have access to the server, i.e. you don't only have the dump).
The syntax of T-SQL and MySQL is not a million miles apart; you could probably rewrite it through trial and error and a series of find-and-replaces.
A better option would probably be to install MySQL and the MySQL connector, and restore the database using the dump file.
You could then create a Linked Server on the SQL server and do a series of queries like the following:
SELECT *
INTO SQLTableName
FROM OPENQUERY
(LinkedServerName, 'SELECT * FROM MySqlTableName')
MySQL's mysqldump utility can produce somewhat compatible dumps for other systems. For instance, use --compatible=mssql. This option does not guarantee compatibility with other servers, but might prevent most errors, leaving less for you to manually alter.
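For example (user and database names are placeholders):
mysqldump --compatible=mssql -u user -p my_database > my_database_mssql.sql
You will still need to review the output by hand; as noted above, the option reduces the number of incompatibilities but does not translate the full dialect.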