I usually take a backup of my database using mysqldump, and while it runs I have noticed that the subdomain where I have hosted the application does not load. When I checked the status in phpMyAdmin, it showed an error saying 'waiting for table lock property'. I searched a few threads on the same topic, but I just don't understand how it is related to the loading of my subdomain, because I haven't used any DB connections or queries on my index page. Please help.
Thanks in advance.
Your mysqldump command is locking your tables while it runs.
Add the --single-transaction option to your mysqldump command so that the dump does not hold a lock on each table until its dump is completed.
mysqldump --single-transaction ....
Check this link http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html#option_mysqldump_single-transaction
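For example, a full dump of a single database could look like this (the user, database name, and output path are placeholders):
mysqldump --single-transaction -u backup_user -p my_database > /backups/my_database.sql
As the documentation notes, this only gives a consistent, non-blocking dump for transactional tables such as InnoDB; it does not help with MyISAM tables.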
I have a school database with more than 80,000 records, and I want to update and insert them into my newSchool database using PHP. Whenever I try to run the update or insert queries, they process almost 2,000 records and then stop automatically after some time. Please help.
You could (should) do a full dump and import that dump later. I'm not sure how to do it with PHP, and I think you'd be better off doing this with these commands on the CLI:
mysqldump -u <username> -p -A -R -E --triggers --single-transaction > backup.sql
And on your localhost:
mysql -u <username> -p < backup.sql
The meanings of the backup command's flags, from the docs:
-u
DB_USERNAME
-p
DB_PASSWORD
Don't paste your password here, but enter it after mysql asks for it. Using a password on the command line interface can be insecure.
-A
Dump all tables in all databases. This is the same as using the --databases option and naming all the databases on the command line.
-E
Include Event Scheduler events for the dumped databases in the output.
This option requires the EVENT privileges for those databases.
The output generated by using --events contains CREATE EVENT
statements to create the events. However, these statements do not
include attributes such as the event creation and modification
timestamps, so when the events are reloaded, they are created with
timestamps equal to the reload time.
If you require events to be created with their original timestamp
attributes, do not use --events. Instead, dump and reload the contents
of the mysql.event table directly, using a MySQL account that has
appropriate privileges for the mysql database.
-R
Include stored routines (procedures and functions) for the dumped
databases in the output. Use of this option requires the SELECT
privilege for the mysql.proc table.
The output generated by using --routines contains CREATE PROCEDURE and
CREATE FUNCTION statements to create the routines. However, these
statements do not include attributes such as the routine creation and
modification timestamps, so when the routines are reloaded, they are
created with timestamps equal to the reload time.
If you require routines to be created with their original timestamp
attributes, do not use --routines. Instead, dump and reload the
contents of the mysql.proc table directly, using a MySQL account that
has appropriate privileges for the mysql database.
--single-transaction
This option sets the transaction isolation mode to REPEATABLE READ and
sends a START TRANSACTION SQL statement to the server before dumping
data. It is useful only with transactional tables such as InnoDB,
because then it dumps the consistent state of the database at the time
when START TRANSACTION was issued without blocking any applications.
If you only need the data and don't need routines or events, just skip those flags.
Be sure to commit after every batch of statements, for example after every 500 rows. That saves memory, but has the drawback that in case of a rollback you can only go back to the last commit.
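A minimal PHP sketch of that batching idea, assuming PDO connections and a hypothetical students table with id and name columns (adjust names, credentials, and the query to your schema):
<?php
set_time_limit(0); // keep PHP's max_execution_time from stopping a long transfer

// Hypothetical connection details - adjust to your setup.
$src = new PDO('mysql:host=localhost;dbname=school;charset=utf8', 'db_user', 'db_pass');
$dst = new PDO('mysql:host=localhost;dbname=newSchool;charset=utf8', 'db_user', 'db_pass');
$dst->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$insert = $dst->prepare(
    'INSERT INTO students (id, name) VALUES (?, ?)
     ON DUPLICATE KEY UPDATE name = VALUES(name)'
);

$count = 0;
$dst->beginTransaction();
foreach ($src->query('SELECT id, name FROM students') as $row) {
    $insert->execute(array($row['id'], $row['name']));
    if (++$count % 500 === 0) { // commit every 500 rows to keep transactions small
        $dst->commit();
        $dst->beginTransaction();
    }
}
$dst->commit(); // commit the final partial batch
?>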
I know several questions have been posted on this topic before, but mine is a bit different, and I have already tried all the previous solutions. What happens is that whenever I try to select data from one specific table, MySQL crashes. It works fine on all other tables, but whenever I select data from that specific table it crashes, even from the command line. Now I am unable to mysqldump the database and also can't drop the table, as it contains valuable data. Please suggest some options.
Use mysqlcheck to check the specific table in the database.
mysqlcheck -c db_name tbl_name -u root -p
Provide the password and it will tell you whether your table is corrupted or not.
Then you can use the following command to repair the table:
mysqlcheck -r db_name tbl_name -u root -p
mysqlcheck works with MyISAM and ARCHIVE tables.
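If mysqlcheck does report corruption on a MyISAM table, the -r run is essentially issuing a REPAIR TABLE for you; you can also do that yourself from the mysql client (replace tbl_name with your table):
mysql> REPAIR TABLE tbl_name;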
After several attempts and various suggestions from you guys, I finally found a partial solution. It is true that the specific table was corrupted, and all the other options mentioned above failed. So I executed a query limiting my results to rows 0 to 100, and it worked fine. Then I dumped that data by using that query with mysqldump. I kept going and changed the limit to 100 to 200, and so on. Whenever I got an error, I simply skipped a few rows. In the end I recovered almost 95% of my data, which is not bad. Thanks guys for all your support.
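For anyone hitting the same problem, that chunked approach can be scripted with mysqldump's --where option, which is appended to the SELECT it runs against the table (database, table, and key names below are placeholders):
mysqldump -u root -p db_name broken_table --no-create-info --where="id BETWEEN 1 AND 100" > chunk_001.sql
mysqldump -u root -p db_name broken_table --no-create-info --where="id BETWEEN 101 AND 200" > chunk_002.sql
If a particular range makes the server crash, skip it and carry on with the next one.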
I'm running a cron job that executes mysqldump via a PHP script, the dump requires the RELOAD privilege. Using the MySQL admin account doesn't feel right, but neither does creating a user with admin privileges.
My main concern is the security aspect, I'm loading the db attributes (username, password, etc.) in a protected array "property" of the class I'm using.
I'm wondering which approach makes more sense or if there's another way to achieve the same results.
Overview:
LAMP Server: CENTOS 5.8, Apache 2.2.3, MySQL 5.0.95, PHP 5.3.3
Cron job outline:
1. Dump raw stats data from two InnoDB tables in the website db; they have a foreign key relationship.
2. Load the data into tables in the stats db.
3. Get the last value of the auto-incrementing primary key that was transferred.
4. Use the primary key value in a query that deletes the copied data from the website db.
5. Process the transferred data in the stats db to populate the reports tables.
6. When processing completes, delete the raw stats data from the stats db.
The website database is configured as a master with binary logging, and the replicated server will be set up once the stats data is no longer stored and processed in the website database (replicating the website database was the impetus for moving the stats to their own database).
All files accessed during the cron job are located outside the DocumentRoot directory.
The nitty gritty:
The mysqldump performed in the first step requires the RELOAD privilege, here's the command:
<?php
$SQL1 = "--no-create-info --routines --triggers --master-data ";
$SQL1 .= "--single-transaction --quick --add-locks --default-character-set=utf8 ";
$SQL1 .= "--compress --tables stats_event stats_event_attributes";
$OUTPUT_FILENAME = "/var/stats/daily/daily-stats-18.tar.gz";
$cmd1 = "/usr/bin/mysqldump -u website_user -pXXXXXX website_db $SQL1 | gzip -9 > $OUTPUT_FILENAME";
exec( $cmd1 );
?>
The error message:
mysqldump: Couldn't execute 'FLUSH TABLES': Access denied; you need the RELOAD privilege for this operation (1227)
Works fine if I use the mysql admin credentials.
I'm wondering which approach makes more sense or if there's another way to achieve the same results.
The bottom line is that you need a user with certain privileges to run that mysqldump command. While it may seem silly to create a new user just for this one cron job, it's the most straightforward and simple approach you can take, and it at least gives the outward appearance of security.
Given that this is a stopgap measure until you can get replication up and running, there's no real harm being done here. Doing this by replication is totally the way to go, and the stopgap measure seems sane.
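For reference, a sketch of creating such a user; the account name and password are placeholders, and the privilege list may need tuning for your MySQL version. RELOAD and REPLICATION CLIENT are global privileges, so they have to be granted ON *.*, and SELECT on mysql.proc is what --routines needs on MySQL 5.0:
GRANT SELECT, LOCK TABLES, SHOW VIEW ON website_db.* TO 'stats_dump'@'localhost' IDENTIFIED BY 'XXXXXX';
GRANT RELOAD, REPLICATION CLIENT ON *.* TO 'stats_dump'@'localhost';
GRANT SELECT ON mysql.proc TO 'stats_dump'@'localhost';
FLUSH PRIVILEGES;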
Also, when it comes time to get replication going, xtrabackup is your friend. It includes binary log naming and position information with the snapshot it takes, which makes setting up new slaves a breeze.
I just ran across this same error (probably on the same site you were working on :) ), even when running as the MySQL root user. I managed to get around it by not specifying --skip-add-locks, e.g. this worked:
/usr/bin/mysqldump -u USERNAME -pPW DATABASE_NAME --skip-lock-tables --single-transaction --flush-logs --hex-blob
I'm using a PHP script that utilizes mysqldump to back up my SQL databases remotely: http://www.dagondesign.com/files/backup_dbs.txt
I tried to add --lock-tables=false since I'm using MyISAM tables, but I still got an error.
exec( "$MYSQL_PATH/mysqldump --lock-tables=false $db_auth --opt $db 2>&1 >$BACKUP_TEMP/$db.sql", $output, $res);
error:
mysqldump: Couldn't execute 'show fields from `advisory_info`': Can't create/write to file 'E:\tmp\#sql_59c_0.MYD' (Errcode: 17) (1)
Someone told me this file was the lock file itself, and I was able to find it on the server that I wanted to back up.
So is this the lock file? Does mysqldump lock the database when run remotely even if I pass --lock-tables=false? Or should the file not be there at all, since a lot of people work on the server and someone else might have created it?
It's likely --lock-tables=false isn't doing what you think it's doing. Since you're passing --lock-tables, it's probably assuming you do want to lock the tables (even though that is the default), so it's locking them. On Linux, flags generally aren't negated by appending something like =false or =0; instead there is usually a --skip-X or --no-X form.
You might want to try --skip-opt:
--skip-opt Disable --opt. Disables --add-drop-table, --add-locks,
--lock-tables, --set-charset, and --disable-keys.
Because --opt is enabled by default, you can use --skip-opt and then add back any specific flags you want.
On Windows 7 using Wamp, the option is --skip-lock-tables
Taken from this answer.
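Either way, the exec() call from the question would become something like this sketch ($MYSQL_PATH, $db_auth, $db, and $BACKUP_TEMP are the variables from the original script); note that --skip-lock-tables is placed after --opt so that --opt does not switch locking back on:
exec( "$MYSQL_PATH/mysqldump $db_auth --opt --skip-lock-tables $db 2>&1 >$BACKUP_TEMP/$db.sql", $output, $res );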
Is there any way I can import a huge database into my local server?
The database is 1.9 GB, and importing it into my local server is causing me a lot of problems.
I have tried importing the SQL dump without success, and I have also tried changing the php.ini settings.
Please let me know if there is any other way of getting this done.
I have used BigDump and also SQL Dump Splitter, but I am still not able to find a solution.
mysql -u #username# -p #database# < #dump_file#
Navigate to your MySQL bin directory and log in to MySQL
Select the database
Use the source command to import the data
[user@localhost] mysql -uroot -hlocalhost   // assuming no password
mysql> use mydb                             // mydb is the database name
mysql> source /home/user/datadump.sql
Restoring a backup of that size is going to take a long time. There's some great advice here: http://vitobotta.com/smarter-faster-backups-restores-mysql-databases-with-mysqldump/ which essentially gives you some additional options you can use to speed up both the initial backup and the subsequent restore.
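As one concrete example of that kind of tuning (my own sketch, not taken from the linked article), temporarily relaxing integrity checks for the duration of the import can speed up a large InnoDB restore considerably; only do this with a dump you trust:
mysql> SET autocommit=0;
mysql> SET unique_checks=0;
mysql> SET foreign_key_checks=0;
mysql> source /home/user/datadump.sql
mysql> COMMIT;
mysql> SET unique_checks=1;
mysql> SET foreign_key_checks=1;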