INSERT INTO cash_transaction2017 SELECT * FROM cash_transaction WHERE created_at < '2018-01-01 00:00:00'
Above is the script I executed; the total number of rows inserted is 336,090.
However, when I browse the table from phpMyAdmin, I can only see 334,473 rows.
After ordering the rows in the cash_transaction2017 table in ascending order, I found that some rows are missing, because the last created_at value is different from that of cash_transaction.
Why is this happening? The result is the same whether I execute the script from the MySQL console or from PHP code.
I also tried using mysqldump with --skip-tz-utc, and it is also missing some rows.
UPDATE
SELECT count(*) FROM `cash_transaction2017`
SELECT count(*) FROM `cash_transaction` WHERE created_at < "2018-01-01 00:00:00"
Apparently, executing these two queries gives me the same number of rows; however, the last rows from the two queries are different.
UPDATE 2
Since both are transaction tables, if they have the same total amount, that should indicate they contain the same rows without any data loss.
So I ran SELECT SUM(amount) on both tables, and it turns out they both return the same total.
So the question now is: are there actually any missing rows? Does this problem occur because I'm using InnoDB?
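To check whether any individual rows are really missing, one could compare the tables row by row. A minimal sketch, assuming both tables share a primary key column id (the name may differ in the real schema):

-- Rows present in the source but absent from the copy;
-- returns nothing if no rows are actually missing.
SELECT s.id
FROM cash_transaction AS s
LEFT JOIN cash_transaction2017 AS c ON c.id = s.id
WHERE s.created_at < '2018-01-01 00:00:00'
  AND c.id IS NULL;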
You may try adding this line to config.inc.php in your phpMyAdmin directory:
$cfg['MaxExactCount'] = 1000000;
(Make sure $cfg['MaxExactCount'] is large enough.)
This problem probably occurs only with InnoDB tables. Hope this is useful.
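For context: for InnoDB tables, the row count phpMyAdmin displays comes from the storage engine's statistics, which are only estimates; an exact figure requires a real COUNT(*). A quick way to compare the two:

-- The Rows column here is an estimate for InnoDB tables:
SHOW TABLE STATUS LIKE 'cash_transaction2017';
-- The exact count (may be slow on large tables):
SELECT COUNT(*) FROM cash_transaction2017;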
I am running a complex LEFT JOIN query on two tables.
Table A - 1.6 million rows
Table B - 700k rows.
All columns are indexed.
I tried various debugging approaches but had no success finding the problem, since I wouldn't have thought this is too much data.
Anyway, I found out that there is no problem if I remove the WHERE clause from my query.
But when I try this simple query on table A, it returns "Lost connection".
SELECT id FROM table_A ORDER BY id LIMIT 10
What is the best practice to run this query? I don't wish to exceed the timeout.
Are my tables too big and should I "empty" the old data or something?
How do you handle big tables with millions of rows and JOINs? All I know of that can help is indexing, and I've already done that.
A million rows -- not a problem; a billion rows -- then it gets interesting. Your tables are not "too big".
"All columns are indexed." -- Usually a mistake. We need to see the actual query before commenting on what index(es) would be useful.
Possibly you need a "composite" index.
SELECT id FROM table_A ORDER BY id LIMIT 10 -- If there is an index starting with id, that will return nearly instantly. Please provide SHOW CREATE TABLE table_A so we can see the schema.
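To illustrate the composite-index suggestion, here is a minimal sketch with hypothetical column names (status and user_id are placeholders; the useful columns depend entirely on the real query and schema):

-- Hypothetical composite index covering a filter column and a join column:
ALTER TABLE table_A ADD INDEX idx_status_user (status, user_id);
-- EXPLAIN shows whether the optimizer actually uses it:
EXPLAIN SELECT id FROM table_A WHERE status = 'active' AND user_id = 42;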
I am facing an issue figuring out the number of rows in the MySQL table 'users'.
The number of rows shown when browsing the table is different from the number returned by a COUNT(*) query on the table.
(Screenshot: phpMyAdmin Browse view showing the table's row count.)
But when I run the following query
SELECT COUNT(*) FROM `users`
it gives me a different number of rows.
The screenshot above is from a different database; in the actual database there are 740,215 records when browsing, while the count query returns 1,612,145.
Please help me figure out this issue.
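For what it's worth, the count shown when browsing an InnoDB table in phpMyAdmin is taken from the engine's table statistics, which are estimates. A minimal sketch for refreshing the statistics and getting the exact figure:

-- Refresh InnoDB's table statistics (the source of the browse-view estimate):
ANALYZE TABLE users;
-- Exact row count, computed by scanning an index:
SELECT COUNT(*) FROM `users`;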
I'm building a sales system for a company, but I'm stuck with the following issue.
Every day I load an .XML product feed into a database called items. The rows in the product feed are never in the same order, so sometimes the row with Referentie = 380083 is at the very top, and another day that very same row is at the very bottom.
I also have to get all the instock values, but when I run the following query
SELECT `instock` FROM SomeTable WHERE `id` > 0
I get all values, but not in the same order as in the other table.
So I have to get the instock value of all rows where referentie in table A is the same as it is in table B.
I already have this query:
select * from `16-11-23 wed 09:37` where `referentie` LIKE '4210310AS'
and this query does the job, but I have about 500 rows in the table.
So I need a way to automate the LIKE '4210310AS' part so that it selects all 500 values in one go.
Can somebody tell me how that can be done?
I'm not even sure I understand your problem...
Don't take this personally, but you seem to be concerned/confused by the ordering of the data in the tables which suggests to me your understanding of relational databases and SQL is lacking. I suggest you brush up on the basics.
Can't you just use the following query?
SELECT a.referentie
     , b.instock
  FROM tableA a
  JOIN tableB b
    ON b.referentie = a.referentie
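As a side note on the ordering concern: rows in a relational table have no inherent order, so if a stable output order matters, add an explicit ORDER BY, for example:

SELECT a.referentie
     , b.instock
  FROM tableA a
  JOIN tableB b
    ON b.referentie = a.referentie
 ORDER BY a.referentie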
Say I have a table with 1million rows. One column lists the "Group", and another lists "Sales". The Group #'s range from 1 to 100,000 such that each Group has about 10 Sales entries. I want to somehow summarize the data into 100,000 rows with the sum of Sales for each group rather than each individual sale.
My method so far has been to run a PHP loop from 1 to 100,000 where each iteration sends an SQL query for SUM(Sales) WHERE Group = $i. Then I can either echo it into an HTML table or insert it into a new SQL table. The problem is that this method takes hours.
Any tips on how I can improve this process? Is there a way to write this as a single SQL query that will massively increase speed? Thanks
Just try a GROUP BY:
SELECT `group`, sum(sales)
FROM your_table
GROUP BY `group`
Edit: added backticks around group; without them you will receive an error, since GROUP is a reserved word in MySQL.
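If the goal is to store the summary in a new table, as the question mentions, a single INSERT ... SELECT does the whole job in one pass (a sketch assuming a hypothetical target table named group_totals):

-- Hypothetical target table for the per-group sums:
CREATE TABLE group_totals (group_id INT PRIMARY KEY, total_sales DECIMAL(15,2));
-- Populate it in a single pass over the source table:
INSERT INTO group_totals (group_id, total_sales)
SELECT `group`, SUM(sales)
FROM your_table
GROUP BY `group`;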
You should always avoid running a SQL query in a loop unless there's no other solution. In this case, you can grab all the rows at once, hold them in an array, and add them up in PHP that way.
I have the following query:
SELECT vBrowser,iconBrowser, count(iconBrowser) as 'N'
FROM user_ip_tmp WHERE code='9m9g9tsv2y'
GROUP BY iconBrowser
ORDER BY N DESC
LIMIT 40
And this works properly, but the query takes a frustratingly long time.
Showing rows 0 - 17 ( 18 total, Query took 4.4189 sec)
Columns that appear in the WHERE clause should be indexed.
Use an EXPLAIN statement before your SELECT to see which indexes are used, and how, to retrieve your requested results.
And if the column code does not hold unique values, I would recommend putting it in another table where it is unique, then building the query with a JOIN through the FOREIGN KEY.
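A minimal sketch of both suggestions, assuming the table currently has no index on code (the index name is arbitrary):

-- Index the column used in the WHERE clause:
ALTER TABLE user_ip_tmp ADD INDEX idx_code (code);
-- Check how MySQL now executes the query:
EXPLAIN SELECT vBrowser, iconBrowser, COUNT(iconBrowser) AS N
FROM user_ip_tmp
WHERE code = '9m9g9tsv2y'
GROUP BY iconBrowser
ORDER BY N DESC
LIMIT 40;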