I copied the contents of a large data table into a second table that has 2 additional columns.
The original table (table1) gets queried fine with
select * from cc2;
But the copy, which holds the same data plus 2 extra columns that are NULL throughout, does not execute normally; I have to add a LIMIT clause to make it run, like
select * from cc limit 0,68000;
The database is the same and the table contents are the same, so the question is why this weird behaviour happens. Also, to feed this data into a foreach() loop I have to fall back to a for() loop, and that is hurting performance.
Any suggestions will be tried and tested asap.
Thanks in advance, geniuses.
Instead of using PHP for importing a lot of data, just try to execute it directly from the command line.
First dump your table:
mysqldump -u yourusername -pyourpassword yourdatabase tableName > text_file.sql
Then change the table name at the top of that file (and make sure the extra columns have DEFAULT NULL), and import it with
mysql -u yourusername -pyourpassword yourdatabase < text_file.sql
Using a text file containing the queries is always preferable for large data sets, so you don't run into limits in PHP or the web server.
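If you later want to script those two steps from PHP anyway, here is a rough sketch (hypothetical credentials and file name; the dump file is still edited by hand between the two steps):
// Step 1: dump the source table (hypothetical credentials and paths).
shell_exec("mysqldump -u yourusername -pyourpassword yourdatabase cc2 > /tmp/cc2.sql");
// Step 2 (run only after editing /tmp/cc2.sql by hand: new table name, extra columns DEFAULT NULL):
shell_exec("mysql -u yourusername -pyourpassword yourdatabase < /tmp/cc2.sql");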
I want to do an export of a table and we do not have mysqldump installed.
I thought I could do this:
root:~> mysql news media > news.media.7.26.2016.sql
where news is the database name and media is the table name. It doesn't seem to work correctly.
Your command tries to mimic mysqldump but mysql does not have a table parameter. You could run it like this:
mysql -D news -e "SELECT * FROM media" > news.media.7.26.2016.txt
That will work but you won't get nice SQL statements in the output, just tabular data export.
By that I mean you may (or may not) run into problems when importing the data back. There is a chance you can use
mysql -D news -e "LOAD DATA INFILE 'news.media.7.26.2016.txt' INTO TABLE media"
but I do not have much experience with that. The first of your concerns is the secure_file_priv setting, which has been made stricter starting with MySQL 5.7.6. Second, I would be a bit nervous about whether the data types are preserved.
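If you do try the LOAD DATA INFILE route, it may help to check that setting first; a minimal PHP sketch (hypothetical connection details) for inspecting it:
// Hypothetical connection details; just reads the server's secure_file_priv value.
$mysqli = new mysqli('localhost', 'user', 'password', 'news');
$row = $mysqli->query("SHOW VARIABLES LIKE 'secure_file_priv'")->fetch_assoc();
// '' means any directory is allowed, NULL means import/export is disabled,
// otherwise files must live under the reported directory.
var_dump($row['Value']);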
I use PHP to parse a big CSV file and generate an SQL file that contains INSERT statements.
But at the beginning of the file, I also put a DELETE statement to clear the old rows first.
My file looks something like this:
DELETE FROM `my_table` WHERE id IN (<list>) ;
INSERT INTO `my_table` .... (lot of values);
The lines are correctly inserted, but the old ones are not deleted. I tried the DELETE query in phpMyAdmin: it works. So the issue comes from the way I run the SQL file.
I use the exec() function in PHP:
$command = "mysql.exe -u user -pPassword my_database < sqlfile.sql";
It seems that this line, using input redirection (<), works fine for the INSERT statements but not for the DELETE ones.
Any idea how to solve this?
Thanks a lot :)
The code isn't entirely clear, but judging by your WHERE ... IN clause, you're either generating an ID via SELECT and incrementing it manually (usually bad), or you're simply doing it wrong. I see two options based on these scenarios:
First, if the goal is just to clear the table before inserting, use
TRUNCATE TABLE `my_table`
This empties all previous values from said table.
Second, if you were auto-generating IDs, set up the ID column as auto-increment and try
# Null corresponds to auto-increment/primary key ID field in MySQL table
INSERT INTO `my_table` ... VALUES (null, value, value)
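For example, here is a rough sketch of the generating script with TRUNCATE at the top and the null placeholder for the auto-increment id (hypothetical CSV layout and column count):
// Hypothetical sketch: build the SQL file from the CSV, emptying the table first.
$sql = "TRUNCATE TABLE `my_table`;\n";
if (($fh = fopen('data.csv', 'r')) !== false) {
    while (($row = fgetcsv($fh)) !== false) {
        // null lets MySQL fill the auto-increment id column.
        $sql .= sprintf("INSERT INTO `my_table` VALUES (null, '%s', '%s');\n",
            addslashes($row[0]), addslashes($row[1]));
    }
    fclose($fh);
}
file_put_contents('sqlfile.sql', $sql);
exec("mysql.exe -u user -pPassword my_database < sqlfile.sql");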
I am using mysqldump to create DB dumps of the live application to be used by developers.
This data contains customer data. I want to anonymize this data, i.e. remove customer names / credit card data.
An option would be:
create copy of database (create dump and import dump)
fire SQL queries that anonymize the data
dump the new database
But this has too much overhead.
A better solution would be to do the anonymization during dump creation.
I guess I would end up parsing all of the mysqldump output? Are there any smarter solutions?
You can try Myanon: https://myanon.io
Anonymization is done on the fly during dump:
mysqldump | myanon -f db.conf | gzip > anon.sql.gz
Why are you selecting from your tables if you want to randomize the data?
Do a mysqldump of the tables that are safe to dump (configuration tables, etc) with data, and a mysqldump of your sensitive tables with structure only.
Then, in your application, you can construct the INSERT statements for the sensitive tables based on your randomly created data.
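A minimal sketch of that second part in PHP, assuming a hypothetical customers table with name, email and credit_card columns:
// Hypothetical table and columns; fills the structure-only copy with harmless values.
$mysqli = new mysqli('localhost', 'user', 'password', 'dev_db');
$stmt = $mysqli->prepare(
    "INSERT INTO customers (id, name, email, credit_card) VALUES (?, ?, ?, '')"
);
for ($i = 1; $i <= 1000; $i++) {
    $name  = 'Customer ' . $i;
    $email = 'customer' . $i . '@example.com';
    $stmt->bind_param('iss', $i, $name, $email);
    $stmt->execute();
}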
I had to develop something similar a few days ago. I couldn't do INTO OUTFILE because the DB is on AWS RDS. I ended up with this approach:
Dump data in tabular text form from some table:
mysql -B -e 'SELECT `address`.`id`, "address1", "address2", "address3", "town", "00000000000" as `contact_number`, "example@example.com" as `email` FROM `address`' some_db > addresses.txt
And then to import it:
mysql --local-infile=1 -e "LOAD DATA LOCAL INFILE 'addresses.txt' INTO TABLE \`address\` FIELDS TERMINATED BY '\t' ENCLOSED BY '\"' IGNORE 1 LINES" some_db
Only the mysql command is required to do this.
The export is pretty quick (a couple of seconds for ~30,000 rows); the import is a bit slower, but still fine. I had to join a few tables on the way and there were some foreign keys, so it will surely be faster if you don't need that. Disabling foreign key checks while importing will also speed things up (see the sketch below).
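A sketch of that last point, writing the import statements to a temporary file and running them through the mysql client (same hypothetical file and table as above):
// Hypothetical sketch: same LOAD DATA import, but with foreign key checks
// switched off for the session.
$sql = <<<SQL
SET foreign_key_checks = 0;
LOAD DATA LOCAL INFILE 'addresses.txt' INTO TABLE `address`
  FIELDS TERMINATED BY '\\t' ENCLOSED BY '"' IGNORE 1 LINES;
SET foreign_key_checks = 1;
SQL;
file_put_contents('/tmp/import.sql', $sql);
shell_exec('mysql --local-infile=1 some_db < /tmp/import.sql');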
You could do a SELECT on each table (not a SELECT *), specify the columns you want to keep, omit or blank out those you don't, and then use phpMyAdmin's export option for each query.
You can also use the SELECT ... INTO OUTFILE syntax to make a dump with a column filter.
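For instance, a hypothetical sketch (made-up table and column names) that keeps the harmless columns and blanks the sensitive ones:
// Hypothetical sketch: export only non-sensitive data; the file is written by the
// MySQL server, so secure_file_priv restrictions apply here too.
$mysqli = new mysqli('localhost', 'user', 'password', 'live_db');
$mysqli->query(
    "SELECT id, created_at, '' AS customer_name, '' AS credit_card
       FROM orders
       INTO OUTFILE '/tmp/orders_anon.csv'
       FIELDS TERMINATED BY ',' ENCLOSED BY '\"'"
);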
I found two similar questions, but it looks like there is no easy solution for what you want. You will have to write a custom export yourself.
MySQL dump by query
MySQL: Dump a database from a SQL query
phpMyAdmin provides an export option to the SQL format based on SQL queries. It might be an option to extract this code from phpMyAdmin (which is probably well tested) and use it in this application.
Refer to the phpMyAdmin export plugin - exportData method for the code.
I am working on a newspaper project that has no RSS feed, so I was compelled to build the feed programmatically in PHP. I have 2300 pages to process, and the result of each one has to be inserted into MySQL.
The technique I used is to process every single page and then insert its contents into MySQL. It works, but sometimes I get "MySQL server has gone away".
I tried to process 30 pages and insert them in one request, but it stops after some time.
So I am asking for any way to optimize this processing and reduce the time it takes.
Thanks a lot
Your batch insert approach is correct and likely to help. You need to find out why it stops after some time, like you say.
It is likely the PHP script timing out. Look for max_execution_time in your php.ini file and make sure it's high enough to allow the script to finish.
Also, make sure your MySQL config allows a large enough max_allowed_packet, because the batches you're sending can get big.
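As a rough illustration of both points (hypothetical table and column names, batches of 30 rows as you describe):
// Hypothetical sketch: no PHP time limit, rows inserted in batches of 30.
set_time_limit(0);
$mysqli = new mysqli('localhost', 'user', 'password', 'news_db');
$batch = array();
foreach ($pages as $page) {   // $pages = your 2300 processed pages
    $batch[] = sprintf("('%s', '%s')",
        $mysqli->real_escape_string($page['title']),
        $mysqli->real_escape_string($page['body']));
    if (count($batch) >= 30) {
        $mysqli->query('INSERT INTO feed (title, body) VALUES ' . implode(',', $batch));
        $batch = array();
    }
}
if ($batch) {
    $mysqli->query('INSERT INTO feed (title, body) VALUES ' . implode(',', $batch));
}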
Hope that helps!
There are plenty of reasons why "MySQL server has gone away" happens. Take a look.
Anyway, it is strange that you load the WHOLE pages. Usually an RSS feed contains just a subject and a short text snippet. I'd generate the RSS feed as a simple XML file so it is not necessary to load data from MySQL on EVERY hit users make: you create a news item -> regenerate the RSS XML file, you write a new article -> regenerate the RSS XML file.
If you still want to prepare your data to be inserted, just create a file with ALL the inserts and then load the data from that file.
$ mysql -u root
mysql> \. /tmp/data_to_load.sql
Yes! All 2300 at a time ;)
$generated_sql = "insert into Table (c1,c2,c3) values ('data1','data2','data3');insert into Table (c1,c2,c3) values ('data4','data5','data6');";
$sql_file = '/tmp/somefile';
$fp = fopen($sql_file, 'w');
fwrite($fp, $generated_sql, strlen($generated_sql)); // write the sql script to the file
fclose($fp);
`mysql -u $mysql_username --password=$mysql_password < $sql_file`;
The backticks in the last line are necessary; they are what make PHP execute the shell command.
$ mysql -u root test
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 171
Server version: 5.1.37-1ubuntu5.5 (Ubuntu)
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> create table test (id int(11) unsigned not null primary key);
Query OK, 0 rows affected (0.12 sec)
mysql> exit
Bye
$ echo 'insert into test.test values (1); insert into test.test values (2);' > file
$ php -a
Interactive shell
php > `mysql -u root < /home/nemoden/file`;
php > exit
$ mysql -u root test
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 180
Server version: 5.1.37-1ubuntu5.5 (Ubuntu)
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> select * from test
-> ;
+----+
| id |
+----+
| 1 |
| 2 |
+----+
2 rows in set (0.00 sec)
So, as you can see, it worked perfectly.
I have one table in Database 1 and an identical table in Database 2; the structure of both tables is exactly the same. Table 1 (DB 1) gets new rows added daily, and I need to copy those new rows into table 1 (DB 2) so that the two tables stay the same. A cron job will trigger a PHP script at midnight to do this task. What is the best way to do this using PHP/MySQL?
You might care to have a look at replication (see http://dev.mysql.com/doc/refman/5.4/en/replication-configuration.html). That's the 'proper' way to do it; it isn't to be trifled with, though, and for small tables the above solutions are probably better (and certainly easier).
This might help you out; it's what I do on my database for a similar kind of thing:
$dropSQL = "DROP TABLE IF EXISTS `$targetTable`";
$createSQL = "CREATE TABLE `$targetTable` SELECT * FROM `$activeTable`";
$primaryKeySQL = "ALTER TABLE `$targetTable` ADD PRIMARY KEY(`id`)";
$autoIncSQL = "ALTER TABLE `$targetTable` CHANGE `id` `id` INT( 60 ) NOT NULL AUTO_INCREMENT";
mysql_query($dropSQL);
mysql_query($createSQL);
mysql_query($primaryKeySQL);
mysql_query($autoIncSQL);
Obviously you will have to modify the target and active table variables. Dropping the table loses the primary key, oh well... it's easy enough to add back in (which is what the last two statements do).
I would recommend replication as has already been suggested. However, another option is to use mysqldump to grab the rows you need and send them to the other table.
mysqldump -uUSER -pPASSWORD -hHOST --compact -t --where="date = CURDATE()" DB1 TABLE | mysql -uUSER -pPASSWORD -hHOST -D DB2
Replace USER, HOST, and PASSWORD with login info for your database. You can use different information for each part of the command if DB1 and DB2 have different access information. DB1 and DB2 are the names of your databases, and TABLE is the name of the table.
You can also modify the --where option to grab only the rows that need to be updated. Hopefully you have some query you can use. As mentioned previously, if the table has a primary key, you could grab the last key which DB2 has using a command something like
KEY=`echo "SELECT MAX(KEY_COLUMN) FROM TABLE;" | mysql --skip-column-names -uUSER -pPASSWORD -hHOST -D DB2`
for a bash shell script (then use this value in the WHERE clause above). Depending on how your primary key is generated, this may be a bad idea since rows may be added in holes in the keyspace if they exist.
This solution will also work if rows are changed, as long as you have a query which can select those rows; just add the --replace option to the mysqldump command. In your situation, it would be best to add some column, such as a date-updated timestamp, which you can compare by.
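Putting those pieces together, here is a rough sketch of the nightly PHP cron script (hypothetical credentials, an auto-increment id column, and TABLE_NAME standing in for your table):
// Hypothetical sketch: copy only the rows DB2 does not have yet.
$db2 = new mysqli('HOST', 'USER', 'PASSWORD', 'DB2');
$row = $db2->query('SELECT COALESCE(MAX(id), 0) AS max_id FROM TABLE_NAME')->fetch_assoc();
$maxId = (int) $row['max_id'];

$cmd = "mysqldump -uUSER -pPASSWORD -hHOST --compact -t --where=\"id > $maxId\" DB1 TABLE_NAME"
     . " | mysql -uUSER -pPASSWORD -hHOST -D DB2";
shell_exec($cmd);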