MySQL SubQueries Multiple Results - php

I have a query as follows:
SELECT event
FROM log
WHERE user = (SELECT SUBSTRING(details,55) FROM log WHERE details LIKE 'ID: 308%')
I know I can use an inner join or a separate PHP loop here, but I have other queries where I cannot use inner joins, and a similar problem occurs (which I'm about to explain).
The subquery in the WHERE clause returns many email addresses, and I want to bring up any log events relating to any of them. My problem is that I then get the error 'Subquery returns more than 1 row'.
If anyone could help that would be much appreciated!

I think this should work:
SELECT event FROM log
WHERE user IN
(SELECT SUBSTRING(details,55) FROM log WHERE details LIKE 'ID: 308%')

Use WHERE user IN <subquery> instead of WHERE user = <subquery>.
However, in my experience, MySQL's performance of IN <subquery> is usually very poor. This can always be rewritten as a JOIN, and that usually performs better.
Here's your example query rewritten as a JOIN:
SELECT event
FROM log l1
JOIN (SELECT DISTINCT SUBSTRING(details,55) loguser
FROM log
WHERE details LIKE 'ID: 308%') l2
ON l1.user = l2.loguser
In this particular case, I suspect performance will be similar. Where MySQL usually gets it wrong is when the subquery in the JOIN returns an indexed column, and you're joining with an indexed column in the main table. MySQL decides to use the index in the subquery instead of the main table, and that's usually wrong (in most cases of queries like this, the subquery returns a small subset of values).
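If you want to check which plan MySQL actually chooses here, you can EXPLAIN both versions; a quick sketch using the table and column names from the query above (whether an index on user exists is an assumption about your schema):
EXPLAIN SELECT event FROM log
WHERE user IN (SELECT SUBSTRING(details,55) FROM log WHERE details LIKE 'ID: 308%');
EXPLAIN SELECT event
FROM log l1
JOIN (SELECT DISTINCT SUBSTRING(details,55) loguser
FROM log
WHERE details LIKE 'ID: 308%') l2
ON l1.user = l2.loguser;
Compare the rows estimates and which indexes (if any) each plan uses.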

Your other option is to use EXISTS
SELECT `event`
FROM log
WHERE EXISTS(SELECT *
FROM log AS log1
WHERE log1.details LIKE 'ID: 308%'
AND log.user = SUBSTRING(log1.details,55))

Related

Mysql fetch from last to first - [many records]

I want to fetch records from MySQL from last to first, LIMIT 20. My database has over 1M records. I am aware of ORDER BY, but from my understanding, when I use it, it takes forever to load just 20 records and I have no idea why; I think MySQL fetches all the records before ordering.
SELECT bookings.created_at, bookings.total_amount,
passengers.name, passengers.id_number, payments.amount,
passengers.ticket_no,bookings.phone,bookings.source,
bookings.destination,bookings.date_of_travel FROM bookings
INNER JOIN passengers ON bookings.booking_id = passengers.booking_id
INNER JOIN payments on payments.booking_id = bookings.booking_id
ORDER BY bookings.booking_id DESC LIMIT 10
I suppose if you execute the query without the ORDER BY, the time is satisfactory?
You might try creating an index on the column you are ordering by:
create index idx_bookings_booking_id on bookings(booking_id)
You can try to find out the complexity of the query using EXPLAIN:
EXPLAIN SELECT bookings.created_at, bookings.total_amount,
passengers.name, passengers.id_number, payments.amount,
passengers.ticket_no,bookings.phone,bookings.source,
bookings.destination,bookings.date_of_travel FROM bookings
INNER JOIN passengers ON bookings.booking_id = passengers.booking_id
INNER JOIN payments on payments.booking_id = bookings.booking_id
ORDER BY bookings.booking_id DESC LIMIT 10
Then check that the proper index has been created on the table:
SHOW INDEX FROM `db_name`.`table_name`;
If the index is not there, create the proper index on each table.
Please add if anything is missing.
The index lookup needs to be able to reside in memory, if I'm not mistaken (a filesort is much slower than an in-memory lookup). A few things to try:
Use a small index / column size.
To double the capacity, use UNSIGNED columns if you don't need negative values.
Tune sort_buffer_size and read_rnd_buffer_size (maybe better at connection level, not global).
See https://dev.mysql.com/doc/refman/5.7/en/order-by-optimization.html , particularly regarding using EXPLAIN and maybe trying another execution plan strategy.
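If you want to experiment with those session-level buffers, something like the following should work; the sizes are only illustrative, not recommendations:
SET SESSION sort_buffer_size = 4 * 1024 * 1024;      -- 4 MB, for this connection only
SET SESSION read_rnd_buffer_size = 1 * 1024 * 1024;  -- 1 MB, for this connection only
Then re-run the EXPLAIN and the query to see whether the filesort behaviour changes.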
You seem to need another workaround, like a materialized view.
Tell me if this sounds like it:
Create another table like the booking table, e.g. CREATE TABLE booking_short LIKE booking, though you only need the booking_id column.
Then check your code for where exactly you create booking orders, i.e. where you first insert into booking. There, run SELECT COUNT(*) FROM booking_short; if it is > 20, delete the oldest record, then insert the new booking_id.
You can select the IDs from that small table and join from there for the details in the rest of the tables.
You won't need LIMIT or sorting.
Of course, this needs heavy documentation to avoid maintenance problems.
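A rough sketch of that idea, assuming the main table is the bookings table from the question and the 20-row cutoff mentioned above (the exact statements are an assumption, not tested code):
CREATE TABLE booking_short (
  booking_id INT UNSIGNED NOT NULL PRIMARY KEY
);
-- wherever the application inserts a new booking:
INSERT INTO booking_short (booking_id) VALUES (12345);  -- the id of the new booking
-- if SELECT COUNT(*) FROM booking_short is now above 20, drop the oldest one:
DELETE FROM booking_short ORDER BY booking_id ASC LIMIT 1;
-- reading the latest bookings then needs no ORDER BY ... LIMIT on the big table:
SELECT b.* FROM booking_short s JOIN bookings b ON b.booking_id = s.booking_id;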
Either that or https://stackoverflow.com/a/5912827/6288442

Database Group By error

I've been working with Mysql for a while, but this is the first time I've encountered this problem.
The thing is that I have a select query...
SELECT
transactions.inventoryid,
inventoryName,
inventoryBarcode,
inventoryControlNumber,
users.nombre,
users.apellido,
transactionid,
transactionNumber,
originalQTY,
updateQTY,
finalQTY,
transactionDate,
transactionState,
transactions.observaciones
FROM
transactions
LEFT JOIN
inventory ON inventory.inventoryid = transactions.inventoryid
LEFT JOIN
users ON transactions.userid = users.userid
GROUP BY
transactions.transactionNumber
ORDER BY
transactions.inventoryid
But the GROUP BY is eliminating 2 values from the QUERY.
In this case, when I output:
foreach($inventory->inventory as $values){
$transactionid[] = $values['inventoryid'];
}
It returns:
2,3,5
If I eliminate the GROUP BY Statement it returns
2,3,4,5,6
Which is the output I need for this particular case.
The question is:
Is there a reason for this to happen?
If I'm grouping by a transaction and that was supposed to affect the query, wouldn't it then return only 1 value?
Maybe I'm overthinking this one, or I've been working on the code so long that I don't see the obvious flaw in my logic. But if someone can lend me a hand I would appreciate it.
In standard SQL you can only SELECT columns which are
contained in the GROUP BY clause
or aggregated, like MAX() or COUNT().
You need to consult the MySQL documentation on how it handles columns that are neither in the GROUP BY nor aggregated (MySQL Handling of GROUP BY) to find out what happens here.
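For illustration, a standards-compliant version of such a query either groups by every selected column or wraps the others in aggregates; a minimal sketch with a few of the columns from the query above (MAX() is just an example aggregate and may not be what you actually want):
SELECT
transactions.transactionNumber,
MAX(transactions.inventoryid) AS inventoryid,
MAX(users.nombre) AS nombre
FROM transactions
LEFT JOIN inventory ON inventory.inventoryid = transactions.inventoryid
LEFT JOIN users ON transactions.userid = users.userid
GROUP BY transactions.transactionNumber;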
Do you need more information?

PHP array_diff VS mysql NOT IN

I tried to compare two zipcode columns between two tables to see if values were missing in the second one.
I first wanted to do it with MySQL; my query was something like
'SELECT code FROM t1 WHERE code NOT IN (SELECT code FROM t2)'
But it was really slow, so I tried another way:
I did two selects, and then compared the results with array_diff().
With MySQL: a few minutes, and sometimes a crash.
With PHP: less than 1 second.
Can someone explain these differences ?
Is my SQL query wrong ?
If your main table has 50k rows, using a sub-select in your query results in 1 + 50k select executions: one for the first table, and 50k selects, one for each row. The server compares each row with your sub-select, which is re-evaluated on every iteration over the main table. This is why your SQL takes so long, and it can be a huge memory problem as well.
See serjoscha's information about joins to fix it in SQL; it should be even faster than your PHP solution.
Checking which values are missing in one table (compared to another) can easily be done with a LEFT or RIGHT JOIN; they are made exactly for actions like this. Alternatively take a look at this: How to Find Missing Value Between Two Mysql Tables – serjoscha
One solution to:
SELECT code FROM t1
WHERE code NOT IN ( SELECT code FROM t2 )
will be:
SELECT t1.code
FROM t1
LEFT JOIN t2
ON t1.code = t2.code
WHERE t2.code is null
Have a try. Also have a look at indexing, as Cyclone suggests:
If you don't have an index you should definitely add one, since this will speed up your query. You could add an index like this: ALTER TABLE t2 ADD INDEX code_idx (code); this should be done for both tables. If you then execute EXPLAIN for the query you should see something like Using where; Using index; Using join buffer, which is good – Cyclone
Indexing speeds up your query. If the table only has a single column, an index on it holds exactly the same content as the source table and is therefore redundant. Otherwise I strongly recommend indexing the code column of t2, which leads to a big increase in performance and lower memory consumption.
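A sketch of that indexing suggestion applied to both tables (the index names are just examples):
ALTER TABLE t1 ADD INDEX code_idx (code);
ALTER TABLE t2 ADD INDEX code_idx (code);
-- then check the plan of the join-based query:
EXPLAIN SELECT t1.code
FROM t1
LEFT JOIN t2 ON t1.code = t2.code
WHERE t2.code IS NULL;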

Is it OK to run the WHILE loops in MySQL?

Is it OK to run a MySQL query inside a while loop, using the ID of each row to fetch results from another table? Or is there a better way to do it?
$q = $__FROG_CONN__->query("SELECT cms_page.id, cms_page.title, cms_page.slug, cms_page_part.* FROM cms_page LEFT JOIN cms_page_part ON cms_page_part.page_id=cms_page.id WHERE cms_page.parent_id='8'");
$r = $q->fetchAll(PDO::FETCH_ASSOC);
echo '<ul id="project-list">';
foreach ($r as $row) {
echo '<li>';
echo '<img src="phpThumb/phpThumb.php?src=public/images/'.$row["id"].'/th.jpg&w=162" alt="" />';
echo '<div class="p-text">';
echo '<h4>'.$row["location"].'<span>'.$row["project_date"].'</span></h4>';
echo '<p>'.$row["body"].'</p>';
echo '</div>';
echo '</li>';
}
echo '</ul>';
I am trying to pull the project_date, body and location fields from another table where the SQL query matches. The title and slug are held in another table. There should only be a maximum of eight or so results, but I'm getting a lot more.
The suggestions using IN are fine, but if you are getting the ids from another query, it might be better to combine these two queries into one query using a join.
Instead of:
SELECT id FROM users WHERE age <30
SELECT id, x FROM userinfo WHERE userid IN ($id1, $id2, ..., $idn)
do:
SELECT users.id, userinfo.x
FROM users
LEFT JOIN userinfo ON userinfo.userid = users.id
WHERE age < 30
To reduce the overhead of performing a query, you may want to look at getting all the data in a single query, in which case you may want to take a look at IN(), e.g.
SELECT * FROM some_table WHERE x IN (1, 2);
There is also BETWEEN:
SELECT * FROM some_table WHERE x BETWEEN 1 AND 2;
See the MySQL docs for more information.
I would try to build the query in a way where I only need to pass it once. Something like WHERE ID=1 OR ID=2 OR ... Passing multiple queries and returning multiple recordsets is expensive in terms of processing and network traffic.
This will be very inefficient; what you want is to join the tables on the ID:
SELECT * FROM table1 LEFT JOIN table2 ON (table1.ID = table2.ID) WHERE condition
Mysql join documentation
This will return one set of rows with all the information you need, returned from both tables.
In a small application / small result set, this might be okay, but it results in a lot of (small) calls to the database.
If you can find an alternative way (perhaps see Yacoby's suggestion?) in which you can do one call to the database, this is probably better.
EDIT
If you are only interested in the IDs from one table, in order to get the correct results out of another table, perhaps a JOIN is what you are looking for?
SELECT t1.fieldA, t1.fieldB
FROM table1 t1
JOIN table2 t2 ON t1.ID = t2.ID
WHERE t2.someField = 'someValue'
Is it OK to run a MySQL query inside a while loop, using the ID of each row to fetch results from another table? Or is there a better way to do it?
You should reformulate your query in SQL. Say, put the ids into a memory table and use it in a JOIN:
SELECT *
FROM idtable
JOIN mytable
ON mytable.id = idtable.id
This way, MySQL will do the loops for you, but will (usually) do them in a more efficient way.
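A minimal sketch of building such an id table, using the placeholder names idtable and mytable from the query above (a temporary MEMORY table is one possible choice, not the only one):
CREATE TEMPORARY TABLE idtable (id INT NOT NULL PRIMARY KEY) ENGINE=MEMORY;
INSERT INTO idtable (id) VALUES (1), (2), (3);  -- the ids you would otherwise loop over in PHP
SELECT mytable.*
FROM idtable
JOIN mytable ON mytable.id = idtable.id;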
SQL is a language designed to work with sets.
So if you have a set of ids and a table (which is a set of tuples), your first thought should be "how do I apply the set-based operations to these sets using SQL"?
Of course it is possible to run a bunch of simple queries in a loop but this way you just do extra work which SQL engine developers most probably have already done for you (and usually have done it in more efficient way).
You may want to read this article in my blog:
Double-thinking in SQL

what is better to use php query set or mysql function?

Say you have data in table 1 that you need to use to return data from table 2 for each row returned in table 1. What is more efficient: a set of queries in PHP, one embedded in the while loop of the other, or an SQL function within a single query?
for example:
$qry = mysql_query("SELECT date FROM table1");
while ($res = mysql_fetch_array($qry))
{
    // one extra query per row – this is the pattern in question
    $qry2 = mysql_query("SELECT name FROM table2 WHERE date='".$res['date']."'");
}
or to do this as a function that returns the Id from table1 within the query.
A (LEFT / RIGHT) JOIN?
Unless I've misunderstood the question...
I think you're looking for the JOIN SQL syntax. If you have 2 tables, message and author, and you want to return messages with their authors, then you can write the following SQL statement:
SELECT m.body, a.name FROM message m
LEFT JOIN author a ON (a.id=m.author_id)
This will return the message body with the corresponding author name.
Table author:
id - primary key
name - name of the author
Table message:
body - text of the message
author_id - id of the author
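In DDL form those two tables might look roughly like this (the column types are assumptions):
CREATE TABLE author (
  id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,  -- primary key
  name VARCHAR(100) NOT NULL                   -- name of the author
);
CREATE TABLE message (
  body TEXT NOT NULL,     -- text of the message
  author_id INT NOT NULL  -- id of the author
);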
UPD1:
This will be faster than looping over each message and selecting its author, because JOIN allows you to return all the data in a single query (not N queries when using a for loop).
UPD2:
With your tables the query will look like:
SELECT t1.date, t2.name FROM table1 t1 LEFT JOIN table2 t2 ON (t2.date=t1.date)
It depends on whether the data is easier to find during the while loop or in the query itself.
If the DB has to base the sub-query on the result of each row in the main query, and there are 1000 total rows and 100 results in the main query, it has to check all of the rows 100 times, so that's 100,000 row checks.
So think in terms of the number of results of the main query. If your while loop has to query the DB 100 times while the DB can do it faster and more efficiently in one query, the single query is better. But if you want a subset of answers where you can say 'query only based on the last set of results', the while loop is better.
What is more efficient to use
a set of queries in PHP, one embedded in the while loop of the other,
or
an SQL function within a query
Seems you answered your question yourself, didn't you?
Every query you send to the DBMS has to be sent over the network, parsed, analyzed, and then executed. You may want to minimize the number of queries sent to the DB.
There may be exceptions, for example if the processing of the data requires operations which the dbms is not capable of.
