My issue is this:
I have a table in my database with more than 1 million rows. Sometimes I need an SQL dump of that table on my PC, and exporting the whole table takes very long.
Actually I only need the exported table with, for example, the last 5000 rows.
So is there a way to export a MySQL table selecting only the last X rows?
I know some ways to do it with terminal commands, but I need a pure MySQL query if that is possible.
Thanks
If I understand correctly, you could try the INTO OUTFILE functionality provided by MySQL. Of course I don't know your current query, but you can easily adapt the structure below to yours:
SELECT *
FROM table_name
ORDER BY id DESC
LIMIT 5000
INTO OUTFILE '/tmp/your_dump_table.csv'
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\n';
Since the user @Hayk Abrahamyan has expressed a preference for exporting the dump as a .sql file, let's look at a valid alternative:
We can run the query from phpMyAdmin or (for sure a better solution) from the MySQL Workbench SQL editor console, and save the result with the export button.
As the .sql output file you will get something like the structure below:
/*
-- Query: SELECT * FROM mytodo.todos
ORDER BY id DESC
LIMIT 5000
-- Date: 2018-01-07 13:15
*/
INSERT INTO `todos` (`id`,`description`,`completed`) VALUES (3,'Eat something',1);
INSERT INTO `todos` (`id`,`description`,`completed`) VALUES (2,'Buy something',1);
INSERT INTO `todos` (`id`,`description`,`completed`) VALUES (1,'Go to the store',0);
Using PHP, a secure user will enter a Ref (ex. NB093019); a query will be used to determine which PO(s) have that Ref and whether they have any quantity. The issue is that we have 86 columns in which that Ref could be, and then, once we find which column it is in, we have to check the corresponding column that contains the quantity (the table cannot be edited).
I can make this work with 86 if/else statements in PHP, and then more if/else statements inside each of those, but I have no launching point once I do the initial query.
select 'remainder' as prefix, po, *comments, *GuideRef, *Qty
from remainder
where ('NB092419') IN (NWANTcomments, NWANTGuideRef, NWANTpreviouscomments,
                       NWANTpreviousGuideRef, NWANTprevious2comments,
                       NWANTprevious2GuideRef, BPrev2GuideRef,
                       BPrev2comments, BPrevGuideRef, BPrevcomments,
                       aGuideRef, Mcomments, MGuideRef, acomments,
                       MAGuideRef, BOGuideRef)
group by po
I have removed some of the IN() information so it is not so long. Also, the *comments, *GuideRef, *Qty would be decided by whichever of the columns in the IN() statement returns information. Is this even possible?
You could perhaps write an SQL that writes an SQL:
select REPLACE(
  'SELECT ''{colstub}GuideRef'' as which, {colstub}Qty FROM remainder WHERE {colstub}GuideRef like ''%somevalue%'' UNION ALL',
  '{colstub}',
  REPLACE(column_name, 'GuideRef', '')
)
FROM information_schema.columns
WHERE table_name = 'remainder' and column_name LIKE '%GuideRef'
It works like this: "pull all the column names out of the information schema where the column name is like %GuideRef, replace GuideRef with nothing to get just the fragment of the column name that varies: NWANTGuideRef -> NWANT, NWANTpreviousGuideRef -> NWANTprevious ... then use this stub to form a query that gives a string naming the column, plus the qty from the quantity column, where the relevant GuideRef column is LIKE some value".
If you run this it will produce a result set like:
SELECT 'aGuideRef' as which, aQty FROM table WHERE aGuideRef LIKE '%lookingfor%' UNION ALL
SELECT 'bGuideRef' as which, bQty FROM table WHERE bGuideRef LIKE '%lookingfor% ...
So it has basically output a load of strings that are SQL statements in themselves. It might need a bit of fine-tuning, and hopefully all your columns are reliably and rigidly named as xQty, xGuideRef, xComments triplets, but it essentially writes most of the query for you.
If you then copy the result set out of the results grid, paste it back into the query window, remove the last UNION ALL and run it, it will search the columns and tell you where the value was found as well as the quantity.
It's not too usable for a production system, but you could do the same in PHP: run the query, concatenate the strings into another SQL command, and re-run it.
I would suggest you consider changing your table structure though:
prefix, qty, guideref, comments
You shouldn't have 86 columns that are mostly the same thing; you should have one column that holds one of 86/3 (roughly 29) different values, and then you can just query the GuideRef and the type. If this were an address table, I'm saying you *shouldn't* have HomeZipcode, WorkZipcode, UniversityZipcode, MomZipcode, DadZipcode... and every time you want to store another kind of address add more columns (BoyfriendZipcode, GirlfriendZipcode, Child1Zipcode...). Instead, if you just had an "addresstype" column, you could store any number of different kinds of addresses without recompiling your app and changing your db schema.
You can use this technique to re-shape the table: write an SQL that writes a bunch of UNION ALL SQLs (without WHERE clauses); one of the columns should be the "recordtype" column (from the colstub) and the other columns should just be qty, guideref, comments. Once you have your result set with the unions, you can make a table to hold these 4 things, and then place INSERT INTO newtable at the head of the block of unions, as sketched below.
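As an illustration only, a minimal sketch of that re-shape, assuming the name newtable and that the Qty and comments columns follow the same xQty/xcomments naming pattern as the GuideRef columns (the column types here are guesses):
-- hypothetical normalized table: one row per (po, recordtype) instead of 86 columns
CREATE TABLE newtable (
    po         INT,
    recordtype VARCHAR(32),   -- the colstub, e.g. 'NWANT', 'BPrev'
    guideref   VARCHAR(64),
    qty        INT,
    comments   TEXT
);

-- paste the generated block of UNION ALL selects under this INSERT header
INSERT INTO newtable (po, recordtype, guideref, qty, comments)
SELECT po, 'NWANT', NWANTGuideRef, NWANTQty, NWANTcomments FROM remainder
UNION ALL
SELECT po, 'BPrev', BPrevGuideRef, BPrevQty, BPrevcomments FROM remainder;
-- ...one SELECT per colstub, generated by the information_schema query above
After the re-shape, the original lookup becomes a single WHERE guideref LIKE '%NB093019%' against one column.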
I have a table with over a million rows. The PHP export script with headers and AJAX that I normally use to build the user interface for exporting to CSV is not able to handle this many rows and times out. Hence I am looking for an alternative.
After a few days of digging I collated the script below from the internet. It downloads wonderfully, but only to the local server > mysql folder.
What I am looking for is a PHP/MySQL script that lets users download large tables to CSVs through the PHP user interface itself.
SELECT 'a', 'b', 'c' UNION ALL SELECT a, b, c INTO OUTFILE '2026.csv' FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' LINES TERMINATED BY '\n' FROM `table`;
You need to fetch from the DB with paginated results.
Run SELECT * FROM a LIMIT 0,100; put the result of this query in a variable, then turn it into CSV as described here:
Export to CSV via PHP
After you write the first 100 rows to the CSV, you fetch rows 100 to 200, reopen the CSV, append those rows, and so on.
And then finally send the CSV to the user.
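A rough sketch of how the successive pages line up, assuming there is an id column to order by (without a stable ORDER BY the pages can overlap or skip rows):
SELECT * FROM a ORDER BY id LIMIT 0,100;    -- first page, written to the CSV
SELECT * FROM a ORDER BY id LIMIT 100,100;  -- second page, appended
SELECT * FROM a ORDER BY id LIMIT 200,100;  -- and so on, until a page comes back empty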
I have a very large database table (more than 700k records) that I need to export to a .csv file. Before exporting it, I need to check some options (provided by the user via GUI) and filter the records. Unfortunately this filtering action cannot be achieved via SQL code (for example, a column contains serialized data, so I need to unserialize it and then check whether the record "passes" the filtering rules).
Doing all records at once leads to memory limit issues, so I decided to break the process into chunks of 50k records. So instead of loading 700k records at once, I'm loading 50k records, applying the filters, saving to the .csv file, then loading another 50k records, and so on (until it reaches the 700k records). This way I'm avoiding the memory issue, but it takes around 3 minutes (and this time will increase if the number of records increases).
Is there any other way of doing this process (better in terms of time) without changing the database structure?
Thanks in advance!
The best thing one can do is to get PHP out of the mix as much as possible. That is always the case for loading CSV data, or exporting it.
In the below, I have a 26 million row student table. I will export 200K rows of it. Granted, the column count is small in the student table; it is mostly for testing other things I do with campus info for students. But you will get the idea, I hope. The issue will be how long it takes for your:
... and then check if the record "passes" the filtering rules.
which naturally could occur via the db engine in theory without PHP. Without PHP should be the mantra. But that is yet to be determined. The point is, get PHP processing out of the equation. PHP is many things; an adequate partner in DB processing it is not.
select count(*) from students;
-- 26.2 million
select * from students limit 1;
+----+-------+-------+
| id | thing | camId |
+----+-------+-------+
| 1 | 1 | 14 |
+----+-------+-------+
drop table if exists xOnesToExport;
create table xOnesToExport
( id int not null
);
insert xOnesToExport (id) select id from students where id>1000000 limit 200000;
-- 200K rows, 5.1 seconds
alter table xOnesToExport ADD PRIMARY KEY(id);
-- 4.2 seconds
SELECT s.id,s.thing,s.camId INTO OUTFILE 'outStudents_20160720_0100.txt'
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\r\n'
FROM students s
join xOnesToExport x
on x.id=s.id;
-- 1.1 seconds
The above 1AM timestamped file with 200K rows was exported as a CSV via the join. It took 1 second.
LOAD DATA INFILE and SELECT INTO OUTFILE are companion functions that, for one thing, cannot be beaten for speed short of raw table moves. Secondly, people rarely seem to use the latter. They are flexible too, if one looks into all they can do with various use cases and tricks.
For Linux, use LINES TERMINATED BY '\n' ... I am on a Windows machine at the moment with the code blocks above. The only differences tend to be with paths to the file, and the line terminator.
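For completeness, loading that exported file back into a table would look something like this sketch; students_copy is just a placeholder name for a table with the same three columns:
LOAD DATA INFILE 'outStudents_20160720_0100.txt'
INTO TABLE students_copy
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\r\n';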
Unless you tell it to do otherwise, PHP slurps your entire result set at once into RAM. It's called a buffered query. It doesn't work when your result set contains more than a few hundred rows, as you have discovered.
PHP's designers made it use buffered queries to make life simpler for website developers who need to read a few rows of data and display them.
You need an unbuffered query to do what you're doing. Your PHP program will then read and process one row at a time. But be careful to make your program read all the rows of that unbuffered result set; you can really foul things up if you leave a partial result set dangling in limbo between MySQL and your PHP program.
You didn't say whether you're using mysqli or PDO. Both of them offer mode settings to make your queries unbuffered. If you're using the old-skool mysql_ interface, you're probably out of luck.
Could somebody please help me figure out how to import this raw txt data into MySQL? The format is
user id | item id | rating | timestamp
and I want to insert this data into my table in MySQL (using phpMyAdmin). Let's say the table structure is: user_id (int), item_id (int), rating (int), timestamp (int), and the table name is "Rating".
So I want to know how to insert this data into my table. I'm fine with PHP, or if there is an easier way to do this.
If you want to generate raw SQL queries, you can do so by using find and replace in your text editor (which looks like Notepad++). I'm guessing that your delimiters are tabs.
Find and replace all tab characters with a comma. We do not need to quote anything, as all of your fields are integers.
Then find and replace all newline characters with the end of one INSERT statement and the start of the next.
Execute these commands in regular expression mode:
Columns
Find: \t
Replace: ,
Rows
Find: \r\n (if that doesn't find anything, look for \n)
Replace: );\r\nINSERT INTO Rating (user_id, item_id, rating, timestamp) VALUES (
On the first row, insert the text INSERT INTO Rating (user_id, item_id, rating, timestamp) VALUES ( to make the row a valid SQL statement.
On the last row, remove any trailing portion of the SQL query after the last semicolon.
Copy and paste this into phpMyAdmin and it should be all good.
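After those replacements, the file should read as a block of statements along these lines (the values here are made-up placeholders in the user id | item id | rating | timestamp order, not your actual data):
INSERT INTO Rating (user_id, item_id, rating, timestamp) VALUES (1, 10, 4, 1300000000);
INSERT INTO Rating (user_id, item_id, rating, timestamp) VALUES (2, 11, 5, 1300000060);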
The simplest way I have found for doing something similar is to use Excel. Import the text file into a new document; judging by the look of it, the columns should be easy to separate, as they appear to be tab delimited. Once you have the required columns, set up a string concatenation to include the values... kind of like
=CONCATENATE("INSERT INTO Rating SET user_id='",A1,"', item_id='",B1,"', rating='",C1,"', timestamp='",D1,"';")
Then repeat for all rows, and copy and paste the results into your SQL client.
You can use Toad for MySQL: with the import wizard you create a table with the same structure (user id | item id | rating | timestamp) as your file, and after importing all the data you export the SQL INSERTs of your new table.
I usually prepare reports and charts in Excel manually using a pivot table, adding several columns manually from the raw data and then running the pivot table on those fields and populating it.
And I would like to see if this can be automated by:
a) Loading the data into a mysql database
b) Using several queries to add additional columns and then prepare the data ready to be used by
c) Chart APIs/JQuery.
Since I know CSV to MySQL is easier, I now have the raw data file in CSV format.
The raw data basically contains different fields, mainly time, datetime and string values.
Using a PHP script, I was able to load this data using the LOAD DATA LOCAL INFILE command.
Based on dates, I need to prepare a column y that holds the month, and this month column has to be updated with the month name ('Jan', etc.) depending on the datetime field (yyyy-mm-dd hh:mm:ss) in a certain column x of the same table.
Or maybe just use this and reference it in the graphs (not sure how complex that would be):
mysql> select count(*) as Count, monthname(date) from alerts;
+-------+---------------------------------+
| Count | monthname(date) |
+-------+---------------------------------+
| 24124 | March |
+-------+---------------------------------+
1 row in set (0.19 sec)
Similarly, I need a column a that says "Duration < 5 minutes" and a column b that says "Duration > 5 min < 10 min", where I would put a numeric value '1' if the row falls within the range.
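For reference, something along these lines could produce the month plus those two flags at query time; duration_minutes is only a placeholder name here, since the actual duration field isn't shown:
SELECT
    monthname(date) AS month_name,                                                     -- the month column y
    CASE WHEN duration_minutes < 5 THEN 1 ELSE 0 END AS duration_under_5,              -- column a
    CASE WHEN duration_minutes >= 5 AND duration_minutes < 10 THEN 1 ELSE 0 END AS duration_5_to_10  -- column b
FROM alerts;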
I looked into the self-join examples but I could not make them work in my case, in spite of several efforts.
I need some help to get going, because my belief is that a table with all the relevant columns is better off than building them with queries at runtime.
Also, is it better to format the data first and then load it into MySQL, or to load the data and then format it?
Please let me know.
Thanks
Update1
Okay, I got this working with a self join as below
UPDATE t1 p1 INNER JOIN ( select monthname(dt_received) AS EXTMONTHNAME from t1)p2 SET p1.MONTH=p2.EXTMONTHNAME;
but why does it update every row with the same month name even though dt_received contains other months?
Can someone help?
Update2
Again, still struggling. I was made aware of the 1093 error/constraint, but the workarounds are simply not helping.
Unlike Excel, where manual formatting was required, I found preparing the data much easier with queries.
This resolved the issue:
UPDATE tablename p1 INNER JOIN ( select monthname(dt_received) AS EXTMONTHNAME from tablename )p2 SET p1.MONTH=p2.EXTMONTHNAME where monthname(p1.dt_received)=p2.EXTMONTHNAME;
But would someone know why it takes close to 14 minutes to change 36,879 rows?
How do I optimize it?
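A likely cause for both the slow Update2 query and the all-same-month behaviour in Update1 is that the joined subquery returns one row per row of the table with nothing tying each subquery row back to the row being updated, so MySQL compares every row against every subquery row. If the goal is simply to store the month name of dt_received in each row, no join is needed at all; something along these lines (using the same column names as above) should be far faster:
UPDATE tablename SET MONTH = MONTHNAME(dt_received);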