MySQL between query returning redundant results - php

SELECT webcal_entry.cal_id, webcal_entry.cal_name, webcal_entry.cal_priority,
webcal_entry.cal_date, webcal_entry.cal_time, webcal_entry_user.cal_status,
webcal_entry.cal_create_by, webcal_entry.cal_access, webcal_entry.cal_duration,
webcal_entry.cal_description, webcal_entry_user.cal_category
FROM webcal_entry, webcal_entry_user
WHERE webcal_entry.cal_date BETWEEN '20090601' AND '20090631'
When I execute that query, PHP throws back:
mysql_query(): unable to save result set
MySQL client ran out of memory
When I limit the results I see that it is pulling some 2.8 million results back. This table has 7,241 rows.
I realize I could use LIKE, but I really don't want to go that route.
Thanks for any help!

You are joining the webcal_entry and webcal_entry_user tables in your FROM clause, but you do not have any JOIN ON or WHERE conditions constraining which rows from webcal_entry_user to return. The result is the Cartesian product of the two tables, which basically means you'll get back each webcal_entry row that has a valid cal_date, multiplied by the number of rows in the webcal_entry_user table.
In other words, if your webcal_entry_user table has 400 rows, you'll get back 400 × 7,241 = 2,896,400 rows, roughly the 2.8 million you're seeing. Yikes!

You're doing a Cartesian join. Basically, for each row in the first table you're joining against all the rows in the second. If the first table has 4 rows and the second has 10, the result set will have 40 (4 × 10).

You need to add the key you're joining on that exists in both tables to the WHERE clause. Something like:
AND webcal_entry.user_id = webcal_entry_user.user_id
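Putting it together, the corrected query might look like this (a sketch, assuming user_id is the column shared by both tables as suggested above; in the actual WebCalendar schema the shared key may be named differently, e.g. cal_id):
SELECT webcal_entry.cal_id, webcal_entry.cal_name, webcal_entry.cal_priority,
webcal_entry.cal_date, webcal_entry.cal_time, webcal_entry_user.cal_status,
webcal_entry.cal_create_by, webcal_entry.cal_access, webcal_entry.cal_duration,
webcal_entry.cal_description, webcal_entry_user.cal_category
FROM webcal_entry
JOIN webcal_entry_user
  ON webcal_entry.user_id = webcal_entry_user.user_id  -- assumed join key
WHERE webcal_entry.cal_date BETWEEN '20090601' AND '20090630'; -- note: June has only 30 days
With the join condition in place, each webcal_entry row is matched only against its own webcal_entry_user rows instead of against all of them.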

Related

Lost connection after simple query for a big table

I am running a complex LEFT JOIN query of two tables.
Table A - 1.6 million rows
Table B - 700k rows.
All columns are indexed.
I tried different debugging approaches but had no success finding the problem, since I don't think that's too much data.
Anyway, I found out that there is no problem if I remove the WHERE clause from my query.
But when I try this simple query on table A, it returns "Lost connection":
SELECT id FROM table_A ORDER BY id LIMIT 10
What is the best practice to run this query? I don't wish to exceed the timeout.
Are my tables too big and should I "empty" the old data or something?
How do you handle big tables with millions of rows and JOINs? The only thing I know of that can help is indexing, and I've already done that.
A million rows -- not a problem; a billion rows -- then it gets interesting. Your tables are not "too big".
"All columns are indexed." -- Usually a mistake. We need to see the actual query before commenting on what index(es) would be useful.
Possibly you need a "composite" index.
SELECT id FROM table_A ORDER BY id LIMIT 10 -- If there is an index starting with id, that will return nearly instantly. Please provide SHOW CREATE TABLE table_A so we can see the schema.
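For example, if the real query filters on one column and sorts on another, a composite index covering both can help. A sketch (the column names user_id and created_at are hypothetical, since the actual query and schema weren't posted):
-- Composite index: the filter column first, then the sort column
CREATE INDEX idx_user_created ON table_A (user_id, created_at);
-- This query can now locate the rows by user_id and read them
-- already ordered by created_at, avoiding a filesort:
SELECT id FROM table_A WHERE user_id = 42 ORDER BY created_at LIMIT 10;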

Missing rows when duplicating data from another table using MySQL

INSERT INTO cash_transaction2017 SELECT * FROM cash_transaction WHERE created_at < '2018-01-01 00:00:00';
Above is the statement I executed; it reported a total of 336,090 rows.
However, when I browse the table from phpMyAdmin, I can only see 334,473 rows.
After ordering the rows in the cash_transaction2017 table in ascending order, I found that some rows appear to be missing, because the last created_at is different from that of cash_transaction.
Why is this happening? The result is the same whether I execute the statement from the MySQL console or from PHP code.
I also tried using mysqldump with --skip-tz-utc, and it also appears to be missing some rows.
UPDATE
SELECT count(*) FROM `cash_transaction2017`
SELECT count(*) FROM `cash_transaction` WHERE created_at < "2018-01-01 00:00:00"
Apparently executing these 2 queries gives me the same number of rows; however, the last rows returned by the two queries are different.
UPDATE 2
Since both tables are transaction tables, if they have the same total amount that should indicate they contain the same rows without any data loss.
So I tried SELECT SUM(amount) on both tables, and it turns out they both have the same total amount from SUM(amount).
So the question now is: are there actually any missing rows? Does this problem occur because I'm using InnoDB?
You may try adding this line to the config.inc.php in your phpMyAdmin directory:
$cfg['MaxExactCount'] = 1000000;
(Make sure $cfg['MaxExactCount'] is large enough.)
This problem probably occurs only with InnoDB tables: for large InnoDB tables, phpMyAdmin displays the storage engine's estimated row count rather than an exact COUNT(*), and that estimate can be off. Raising MaxExactCount forces an exact count. Hope this is useful.
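To check whether rows are really missing, you can compare InnoDB's estimate against an exact count yourself. A sketch:
-- The Rows column here is only an approximation for InnoDB tables:
SHOW TABLE STATUS LIKE 'cash_transaction2017';
-- An exact count requires scanning an index, but is authoritative:
SELECT COUNT(*) FROM cash_transaction2017;
If the exact counts of the two tables match (as the question's UPDATE suggests they do), no rows were lost; only the displayed estimate was misleading.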

PHP MySQL query on multiple tables where the second table returns multiple rows

I'm trying to query 2 tables, where the first table will return 1 row and the second table will return multiple rows. So basically the first table returns the text for a page and the second table returns a list that goes within the page. Both tables have a reference column, which is what both tables are queried on. (See below.)
SELECT shop_rigs.*, shop_rigs_images.*, shop_rigs_parts.*
FROM shop_rigs
LEFT JOIN shop_rigs_images
ON shop_rigs.shoprigs_ref = shop_rigs_images.shoprigsimg_ref
LEFT JOIN shop_rigs_parts
ON shop_rigs.shoprigs_ref = shop_rigs_parts.shoprigsparts_ref
WHERE shoprigs_enabled='1' AND shoprigs_ref='$rig_select'
ORDER BY shoprigs_order ASC
Is it better to just do 2 queries?
Thanks,
dane
I would do this in two queries. The problem isn't efficiency or the size of the respective tables; the problem is that you're creating a Cartesian product between shop_rigs_images and shop_rigs_parts.
Meaning that if a given row of shop_rigs has three images and four parts, you'll get back 3x4 = 12 rows for that single shop_rig.
So here's how I'd write it:
SELECT ...
FROM shop_rigs
INNER JOIN shop_rigs_images
ON shop_rigs.shoprigs_ref = shop_rigs_images.shoprigsimg_ref
WHERE shoprigs_enabled='1' AND shoprigs_ref='$rig_select'
ORDER BY shoprigs_order ASC
SELECT ...
FROM shop_rigs
INNER JOIN shop_rigs_parts
ON shop_rigs.shoprigs_ref = shop_rigs_parts.shoprigsparts_ref
WHERE shoprigs_enabled='1' AND shoprigs_ref='$rig_select'
ORDER BY shoprigs_order ASC
I left the select-list of columns out, because I agree with @Doug Kress that you should select only the columns you need from a given query, not all columns with *.
If you're pulling a large amount of data from the first table, then it would be better to do two queries.
Also, for efficiency, it would be better to specify each column that you actually need, instead of all columns - that way, less data will be fetched and retrieved.
Joins are usually more efficient than running 2 queries, as long as you are joining on indexes, but then it depends on your data and indexes.
You may want to run an EXPLAIN SELECT ... for both options and compare the possible_keys and rows columns in the results.
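For instance, a sketch against the first of the two queries above ('some_ref' stands in for the actual $rig_select value; the output depends on your real indexes):
EXPLAIN SELECT shop_rigs.shoprigs_ref
FROM shop_rigs
INNER JOIN shop_rigs_images
ON shop_rigs.shoprigs_ref = shop_rigs_images.shoprigsimg_ref
WHERE shoprigs_enabled='1' AND shoprigs_ref='some_ref';
A rows estimate close to the number of images for that one rig, rather than the size of the whole table, suggests the join is using an index.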

PostgreSQL query

I am working with a PostgreSQL database for the first time, and I need your help finding a solution. One table contains 15 rows with regn_srno as its primary key; another table has the same regn_srno as a foreign key. I want to count the total number of rows that have a matching regn_srno. My problem is that the second table contains 2 or 3 rows with the same regn_srno, so when I use COUNT in the query it shows 12 (counting the repeated regn_srno values), but the real number is 10.
When we group by regn_srno we get the results 1,1,1,1,2,1,2,1,1,1, which sum to 12 across 10 distinct keys. So I need the query to return the count as 10. Please help me.
From what I could figure out without the table schema, I think you want:
SELECT count(DISTINCT regn_srno) FROM t1 JOIN t2 USING (regn_srno);
You could simply do:
SELECT count(DISTINCT regn_srno) FROM t2;
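A tiny worked illustration of the difference (a sketch with made-up data matching the numbers in the question: 12 child rows over 10 distinct keys):
-- t2 has 12 rows, but two regn_srno values appear twice
SELECT count(regn_srno) FROM t2;           -- returns 12
SELECT count(DISTINCT regn_srno) FROM t2;  -- returns 10
DISTINCT collapses the repeated keys before counting, which gives exactly the 10 the question asks for.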

Is it possible to have 2 limits in a MySQL query?

OK, here is the situation (using PHP/MySQL): you are getting results from a large MySQL table.
Let's say your MySQL query returns 10,000 matching results and you have a paging script to show 20 results per page; your query might look like this.
So page 1 query
SELECT column
FROM table_name
WHERE userId=1
AND somethingelse=something else
LIMIT 0,20
So page 2 query
SELECT column
FROM table_name
WHERE userId=1
AND somethingelse=something else
LIMIT 20,40
Now you are grabbing 20 results at a time but there are a total of 10,000 rows that match your search criteria,
How can you return only 3,000 of the 10,000 results and still do your paging of 20 per page with a LIMIT of 20 in your query?
I thought this was impossible, but Myspace does it on their browse page somehow. I know they aren't using PHP/MySQL, but how can it be achieved?
UPDATE
I see some people have replied with a couple of methods, but it seems none of these would actually improve the performance by limiting the number to 3,000?
Program your PHP so that when it finds itself ready to issue a query that ends with LIMIT 3000, 20 or higher, it just stops and doesn't issue the query.
Or am I missing something?
Update:
MySQL treats the LIMIT clause nicely.
Unless you have SQL_CALC_FOUND_ROWS in your query, MySQL just stops scanning, sorting, etc. as soon as it finds enough records to satisfy your query.
When you have something like this:
SELECT column
FROM table_name
WHERE userId=1
AND somethingelse='something else'
LIMIT 0, 20
, MySQL will fetch the first 20 records that satisfy the criteria and stop.
It doesn't matter how many records match the criteria, 50 or 1,000,000: performance will be the same.
If you add an ORDER BY to your query and don't have a suitable index, then MySQL will of course need to browse all the records to find the first 20.
However, even in this case it will not sort all 10,000: it keeps a "running window" of the top 20 records and sorts only within this window, whenever it finds a record with a value large (or small) enough to get into the window.
This is much faster than sorting the whole set.
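So if this paging query is hot, an index whose leading columns match the WHERE and the ORDER BY lets MySQL read the first 20 rows directly. A sketch reusing the question's placeholder schema (col and sort_col are hypothetical column names):
CREATE INDEX idx_user_page ON table_name (userId, somethingelse, sort_col);
SELECT col
FROM table_name
WHERE userId=1
AND somethingelse='something else'
ORDER BY sort_col
LIMIT 0, 20;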
MySQL, however, is not good at pipelining recordsets. This means that this query:
SELECT column
FROM (
SELECT column
FROM table_name
WHERE userId=1
AND somethingelse='something else'
LIMIT 3000
) AS t
LIMIT 0, 20
is worse performance-wise than the first one.
MySQL will fetch 3,000 records, cache them in a temporary table (or in memory) and apply the outer LIMIT only after that.
Firstly, the LIMIT parameters are offset and number of records, so the second parameter should always be 20; you don't need to increment it.
Surely if you know the upper limit of rows you want to retrieve, you can just put this into the logic which runs the query, i.e. check that offset + limit <= 3000.
As Sohnee said, or (depending on your requirements) you can get all 3,000 records via SQL and then use array_slice in PHP to get chunks of the array.
You could achieve this with a subquery...
SELECT name FROM (
SELECT name FROM tblname LIMIT 0, 3000
) AS `Results` LIMIT 20, 20
(Remember that the second LIMIT parameter is a row count, not an end position, so it stays at 20 for every page.)
Or with a temporary table, whereby you select all 3,000 rows into a temp table and then page by the temporary row id, which will be sequential.
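A sketch of that temp-table approach (table and column names hypothetical):
CREATE TEMPORARY TABLE page_cache (
rid INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
name VARCHAR(255)
);
INSERT INTO page_cache (name)
SELECT name FROM tblname LIMIT 3000;
-- Page 2: rid runs 1..3000 in insertion order, so paging is a cheap range lookup
SELECT name FROM page_cache WHERE rid BETWEEN 21 AND 40;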
You can specify the limit as a function of the page number (LIMIT 20*p, 20, computed in your PHP code, since LIMIT itself doesn't accept expressions) and cap the page number p at 150, since 150 × 20 = 3,000.
Or you could fetch the 3,000 records and then, using jQuery tabs, split the records into 20 per page.
