Efficiency of database queries issue - PHP

I'm trying to query my database with 4 queries; each query gives me the number of rows I need. I noticed that on a strong server everything works fast (relatively), but on a relatively simple server I hit the maximum working time (30 seconds, the PHP execution time limit).
My while loop works through a list of MokedCcodes: each MokedCcode goes into the queries, and then it moves on to the next MokedCcode.
Notes:
1) I know the run time will be proportional to the number of MokedCcodes.
2) Do I need to increase the execution time limit?
3) Is there a more efficient way to make those queries? Maybe I don't use the MySQL features right.
For example, the first query is called $emerg. It needs to give me the number of rows between two dates where the WCODE has priority 1, and the MokedCcode has to match on both tables (t and e).
$emerg = mysql_num_rows(mysql_query("SELECT t.*,e.DISCODE,e.AREA,e.COLOR,e.PRIORITY FROM $tbl_name AS t
LEFT JOIN eventcodes AS e ON t.MokedCcode = e.MokedCcode AND t.WCODE=e.WCODE
WHERE (t.MokedCcode='$MokedCcode' ) AND e.PRIORITY='1'
AND t.ndate BETWEEN '$start' AND '$end' ORDER By `id` DESC "));
In addition, I've added the 3 other queries below. I would like some advice on how to make them faster, or do I have no choice but to keep them like this?
The 3 other queries:
$regular = mysql_num_rows(mysql_query("SELECT t.*,e.DISCODE,e.AREA,e.COLOR,e.PRIORITY FROM $tbl_name AS t
LEFT JOIN eventcodes AS e ON t.MokedCcode = e.MokedCcode AND t.WCODE=e.WCODE
WHERE (t.MokedCcode='$MokedCcode' ) AND e.PRIORITY!='1'
AND t.ndate BETWEEN '$start' AND '$end' ORDER By `id` DESC "));
$regHandled = mysql_num_rows(mysql_query("SELECT t.*,e.DISCODE,e.AREA,e.COLOR,e.PRIORITY FROM $tbl_name AS t
LEFT JOIN eventcodes AS e ON t.MokedCcode = e.MokedCcode AND t.WCODE=e.WCODE
WHERE (t.MokedCcode='$MokedCcode' ) AND e.PRIORITY!='1'
AND t.EventHandling!='0' AND t.ndate BETWEEN '$start' AND '$end' ORDER By `id` DESC "));
$emergHandled = mysql_num_rows(mysql_query("SELECT t.*,e.DISCODE,e.AREA,e.COLOR,e.PRIORITY FROM $tbl_name AS t
LEFT JOIN eventcodes AS e ON t.MokedCcode = e.MokedCcode AND t.WCODE=e.WCODE
WHERE (t.MokedCcode='$MokedCcode' ) AND e.PRIORITY='1'
AND t.EventHandling!='0' AND t.ndate BETWEEN '$start' AND '$end' ORDER By `id` DESC "));

I didn't exactly understand what you are trying to achieve.
If you just want the count, why are you selecting all the rows? Can't you use the MySQL COUNT() function? It is always slow to pull all the data into your script and then count the rows.
Even if you do want to select all those columns, try using t.field1, t.field2, ... instead of t.*.
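As a sketch of that idea: since MySQL evaluates boolean expressions to 0 or 1, all four counts could even be collapsed into one pass with conditional aggregation. This is only an illustration (the table name "events" stands in for $tbl_name, the ORDER BY is dropped because it is pointless when counting, and the join is written as an INNER JOIN because the e.PRIORITY filters already discard unmatched rows):

SELECT
  SUM(e.PRIORITY = '1')                             AS emerg,
  SUM(e.PRIORITY != '1')                            AS regular,
  SUM(e.PRIORITY != '1' AND t.EventHandling != '0') AS regHandled,
  SUM(e.PRIORITY = '1'  AND t.EventHandling != '0') AS emergHandled
FROM events AS t
INNER JOIN eventcodes AS e ON t.MokedCcode = e.MokedCcode AND t.WCODE = e.WCODE
WHERE t.MokedCcode = '$MokedCcode'
  AND t.ndate BETWEEN '$start' AND '$end'

That turns the four round trips per MokedCcode into one, and grouping by t.MokedCcode could even remove the PHP loop entirely.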

Related

PHP - SQL optimization min/max too slow

I'm having some problems with a query that finds the next ID of an order with certain filters on it, like it should be from a specific city, etc.
Currently it's used for a function that spits out either the previous or the next ID based on the current order. So it can be either min(id) or max(id), where max(id) is obviously faster, since it has to go through fewer rows.
The query is working just fine, but it's rather slow, since it's going through 123,953 rows to find the ID. Is there any way I could optimize this?
Function example:
SELECT $minmax(orders.orders_id) AS new_id
FROM orders
LEFT JOIN orders_status ON orders.orders_status = orders_status.orders_status_id $where_join
WHERE orders_status.language_id = '$languages_id'
AND orders.orders_date_finished != '1900-01-01 00:00:00'
AND orders.orders_id $largersmaller $ordersid $where;
Live example
SELECT min(orders.orders_id)
FROM orders
LEFT JOIN orders_status ON orders.orders_status = orders_status.orders_status_id
WHERE orders_status.language_id = '4'
AND orders.orders_date_finished != '1900-01-01 00:00:00'
AND orders.orders_id < 4868771
LIMIT 1
So, concluding, replace MIN()/MAX() with an ORDER BY ... LIMIT 1, which lets MySQL stop at the first matching row instead of scanning them all:
SELECT orders.orders_id
FROM orders
JOIN orders_status ON orders.orders_status = orders_status.orders_status_id
WHERE orders_status.language_id = '4'
AND orders.orders_date_finished != '1900-01-01 00:00:00'
AND orders.orders_id < 4868771
ORDER BY orders.orders_id ASC
LIMIT 1
Extra:
To get the MAX value, use DESC where ASC is now.
And looking at your question: be sure to escape values like $languages_id etcetera. I suppose they could come from some HTML form? (Or use prepared statements, as sketched below.)
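A minimal PDO sketch of that last suggestion; $dbConn and the input variables are assumptions, since the question doesn't show them, and the dynamic fragments ($where_join, $where) would still need separate whitelisting because placeholders can only bind values:

<?php
// Hypothetical prepared-statement version of the query above.
$sql = "SELECT orders.orders_id
        FROM orders
        JOIN orders_status ON orders.orders_status = orders_status.orders_status_id
        WHERE orders_status.language_id = :language_id
          AND orders.orders_date_finished != '1900-01-01 00:00:00'
          AND orders.orders_id < :orders_id
        ORDER BY orders.orders_id ASC
        LIMIT 1";
$stmt = $dbConn->prepare($sql);
$stmt->execute([':language_id' => $languages_id, ':orders_id' => $ordersid]);
$new_id = $stmt->fetchColumn(); // false when there is no matching order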

Generate random MySQL rows quickly and run same sql query multiple times

I have found an example that generates a random row quickly:
MySQL select 10 random rows from 600K rows fast
Now I would like to run that query 10 times, but I'm getting exactly the same output instead of different rows. Any ideas how to solve this?
Here is my code:
<?php
for ($e = 0; $e <= 14; $e++) {
    $sql_users = "SELECT user_name, user_age, country, age_from, age_to, gender, profile_image, gender_search, kind_of_relationship
                  FROM users AS r1
                  JOIN (SELECT CEIL(RAND() * (SELECT MAX(id) FROM users)) AS id) AS r2
                  WHERE r1.id >= r2.id
                  ORDER BY r1.id ASC
                  LIMIT 1";
    $statement6 = $dbConn->prepare($sql_users);
    $statement6->execute();
    $more = $statement6->fetch(PDO::FETCH_BOTH);
?>
<?php echo $more['user_name']; ?>
<?php } ?>
If you want ten rows, how bad is the performance of:
select u.*
from users u
order by rand()
limit 10;
This does exactly what you want. And getting all the rows in a single query saves a lot of overhead compared to running multiple queries. So, despite the ORDER BY RAND(), it might be faster than your approach. However, that depends on the number of users.
You can also do something like this:
select u.*
from users u cross join
(select count(*) as cnt from users u) x
where rand() < (10*5 / cnt)
order by rand()
limit 10;
The WHERE clause randomly chooses about 50 rows, give or take, but with high confidence there will be at least 10 of them. This small number of rows sorts quickly, and the LIMIT then randomly keeps 10 of them.
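Either way, the fifteen prepare/execute round trips from the question collapse into a single statement. A sketch using the first form, reusing the question's $dbConn (the column list is shortened for illustration):

<?php
// One query returns all the random users at once.
$sql_users = "SELECT user_name, user_age, country, profile_image
              FROM users
              ORDER BY RAND()
              LIMIT 10";
foreach ($dbConn->query($sql_users, PDO::FETCH_ASSOC) as $more) {
    echo $more['user_name'], "\n"; // each row is now a different user
}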

MySQL time out issue

I am facing a serious issue in my work related to PHP and MySQL: on a Linux server the code times out, while running the same code with the same database on localhost works fine.
I have almost 30,000 records in the database table, and the MySQL query is:
SELECT * FROM tbl_movies
WHERE id NOT IN (SELECT movie_id FROM tbl_usermovieque WHERE user_id = '3'
                 UNION
                 SELECT movie_id FROM tbl_user_movie_response WHERE user_id = '3'
                 UNION
                 SELECT movie_id FROM tbl_user_movie_fav WHERE user_id = '3')
  AND id < 220
ORDER BY RAND() LIMIT 0, 20
It takes 0.0010 sec on my localhost and INFINITE time on our Linux server. I am unable to find the reason.
Thanks
Kamal
Can you confirm this returns the same result? It should be faster this way. Unions are useful sometimes, but not really optimized.
SELECT * FROM tbl_movies WHERE id NOT IN (
    SELECT DISTINCT m.movie_id
    FROM tbl_movies m
    INNER JOIN tbl_usermovieque um ON um.movie_id = m.movie_id
    INNER JOIN tbl_user_movie_response umr ON umr.movie_id = m.movie_id
    INNER JOIN tbl_user_movie_fav umf ON umf.movie_id = m.movie_id
    WHERE um.user_id = 3 OR umr.user_id = 3 OR umf.user_id = 3
) AND id < 220 ORDER BY RAND() LIMIT 0, 20;
PS: I assume you have indexes on user_id and movie_id.
EDIT: your problem may come from RAND(). See MySQL order by optimization and look for RAND() on that page: in the comments there are some performance tests, and RAND() alone seems to be a bad solution.
Performance
Now let's see what happens to our performance. We have 3 different queries for solving our problem:
Q1. ORDER BY RAND()
Q2. RAND() * MAX(ID)
Q3. RAND() * MAX(ID) + ORDER BY ID
Q1 is expected to cost N * log2(N); Q2 and Q3 are nearly constant.
To get real values we filled the table with N rows (one thousand to one million) and executed each query 1000 times.
Rows | 100       | 1,000     | 10,000    | 100,000   | 1,000,000
Q1   | 0:00.718s | 0:02.092s | 0:18.684s | 2:59.081s | 58:20.000s
Q2   | 0:00.519s | 0:00.607s | 0:00.614s | 0:00.628s | 0:00.637s
Q3   | 0:00.570s | 0:00.607s | 0:00.614s | 0:00.628s | 0:00.637s
As you can see, the plain ORDER BY RAND() is already behind the optimized queries at only 100 rows in the table.
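Applied to this question, a Q3-style rewrite might look like the sketch below. It assumes tbl_movies has an auto-increment primary key id without large gaps; it picks one random starting point and reads rows from there instead of shuffling the whole table:

SELECT m.*
FROM tbl_movies m
JOIN (SELECT CEIL(RAND() * (SELECT MAX(id) FROM tbl_movies)) AS id) r
  ON m.id >= r.id
WHERE m.id < 220   -- the NOT IN (...) filter from the question would still be ANDed in here
ORDER BY m.id
LIMIT 20;

Note the trade-off: the 20 rows come out consecutive from a random starting id rather than as 20 independent random picks, so whether that is acceptable depends on the use case.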

reduce left join query execution time

I'm developing an SQL query that joins two tables and returns some results.
I have 2 tables: in the first table I save my orders, and in the other table I save my like information.
I want to show the user pictures from the order table that the user hasn't liked yet. I use this query:
SELECT amg_order.*
FROM amg_order
LEFT OUTER JOIN amg_like ON amg_like.order_id=amg_order.order_id
AND amg_like.user_id=:user_id
WHERE amg_order.status = '1'
AND amg_order.user_id != :user_id
AND (amg_like.user_id != :user_id || amg_like.user_id is null)
ORDER BY amg_order.likeType DESC, RAND()
This query returns the correct result, but when the like table grows past 15,000 rows, the execution time of this query reaches 6 seconds.
Does anyone have any idea how to reduce this time?
I'm sorry, my English is so bad :)
You can try the following query; it should reduce some of your execution time. You can also specify field names instead of the * sign in your SELECT statement, as in the sketch after the query.
Here is the updated query:
SELECT amg_order.* FROM amg_order
LEFT JOIN amg_like ON amg_order.order_id = amg_like.order_id
WHERE amg_order.status = '1'
  AND amg_order.user_id != :user_id
  AND (amg_like.user_id != :user_id OR amg_like.user_id IS NULL)
ORDER BY amg_order.likeType DESC
LIMIT 10;
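To make the field-name advice concrete, a hypothetical version with an explicit column list (picture is an assumed column name; the question never shows the amg_order schema):

SELECT amg_order.order_id, amg_order.picture, amg_order.likeType  -- picture is hypothetical
FROM amg_order
LEFT JOIN amg_like ON amg_order.order_id = amg_like.order_id
WHERE amg_order.status = '1'
  AND amg_order.user_id != :user_id
  AND (amg_like.user_id != :user_id OR amg_like.user_id IS NULL)
ORDER BY amg_order.likeType DESC
LIMIT 10;

Narrowing the select list mainly saves transfer and memory; at 15,000+ likes the bigger win is usually an index on amg_like (order_id, user_id), so the join probes an index instead of scanning.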

Is it not advisable to move calculations from MySQL to PHP?

I got stuck with the question/problem I asked earlier – How to optimise a table for AVG query? It turns out that @MichaelT was right about one thing – calculating AVG is faster using PHP than MySQL (like 80% faster with 5m records on a 24 GB RAM machine).
It isn't always even an option, though. Consider this code example (dataset size: 5m records).
The MySQL way.
1) Aggregate the data (creating a temporary table) (500ms)
CREATE TEMPORARY TABLE `temporary_grouped_data` AS
(
SELECT
`r1`.`id`,
`c1`.`wt`,
`c1`.`cpu`,
`c1`.`mu`,
`c1`.`pmu`
FROM
`requests` `r1`
INNER JOIN
`request_hosts` `rh1`
ON
`rh1`.`id` = `r1`.`request_host_id`
INNER JOIN
`request_uris` `ru1`
ON
`ru1`.`id` = `r1`.`request_uri_id`
INNER JOIN
`calls` `c1`
ON
`c1`.`id` = `r1`.`request_caller_id`
WHERE
1=1 {$sql_query['where']}
);
2) Get the overall AVG (300ms)
SELECT COUNT(`id`), MIN(`wt`), MAX(`wt`), AVG(`wt`), MIN(`cpu`), MAX(`cpu`), AVG(`cpu`), MIN(`mu`), MAX(`mu`), AVG(`mu`), MIN(`pmu`), MAX(`pmu`), AVG(`pmu`) FROM `temporary_grouped_data`;
3) Calculate the 95th percentile (200ms)
SELECT `wt` FROM `temporary_grouped_data` ORDER BY `wt` ASC LIMIT 1726, 1;
SELECT `cpu` FROM `temporary_grouped_data` ORDER BY `cpu` ASC LIMIT 1726, 1;
SELECT `mu` FROM `temporary_grouped_data` ORDER BY `mu` ASC LIMIT 1726, 1;
SELECT `pmu` FROM `temporary_grouped_data` ORDER BY `pmu` ASC LIMIT 1726, 1;
4) Calculate the mode (200ms)
SELECT `wt`, COUNT(`wt`) `quantity` FROM `temporary_grouped_data` GROUP BY `wt` ORDER BY `quantity` DESC LIMIT 1;
SELECT `cpu`, COUNT(`cpu`) `quantity` FROM `temporary_grouped_data` GROUP BY `cpu` ORDER BY `quantity` DESC LIMIT 1;
SELECT `mu`, COUNT(`mu`) `quantity` FROM `temporary_grouped_data` GROUP BY `mu` ORDER BY `quantity` DESC LIMIT 1;
SELECT `pmu`, COUNT(`pmu`) `quantity` FROM `temporary_grouped_data` GROUP BY `pmu` ORDER BY `quantity` DESC LIMIT 1
The PHP way.
1) Get all the relevant records into an array (200ms).
SELECT
`r1`.`id`,
`c1`.`wt`,
`c1`.`cpu`,
`c1`.`mu`,
`c1`.`pmu`
FROM
`requests` `r1`
INNER JOIN
`request_hosts` `rh1`
ON
`rh1`.`id` = `r1`.`request_host_id`
INNER JOIN
`request_uris` `ru1`
ON
`ru1`.`id` = `r1`.`request_uri_id`
INNER JOIN
`calls` `c1`
ON
`c1`.`id` = `r1`.`request_caller_id`
2) Perform all the calculations (200ms).
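The question doesn't show that code, but step 2 presumably looks something like this sketch, assuming the rows from step 1 were fetched into $rows via PDO::FETCH_ASSOC (only wt is shown; cpu, mu, and pmu work the same way):

<?php
// Hypothetical step 2: compute the same statistics in PHP.
$wt = array_map('floatval', array_column($rows, 'wt'));
sort($wt);

$count = count($wt);
$min   = $wt[0];
$max   = $wt[$count - 1];
$avg   = array_sum($wt) / $count;

// 95th percentile: the value 95% of the way through the sorted list.
$p95 = $wt[(int) floor(0.95 * ($count - 1))];

// Mode: the most frequent value (counted on the raw strings to stay exact).
$freq = array_count_values(array_column($rows, 'wt'));
arsort($freq);
$mode = key($freq);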
The PHP approach is by far faster. Is there any reason why I shouldn't use PHP to perform these calculations?
Shifting the work off to PHP means having to transfer the whole result set over the wire, which could be really bad depending on its size. Also, I am not a database person by any stretch of the imagination, but these results are unexpected. You should look into the possibility that you are doing things the wrong way in the database version.
Speed isn't everything. I'd put it where it belongs.
Will it scale in PHP?
Also, with the PHP approach you have to transfer all the data from the DB to PHP. How much RAM does that cost, etc.?
Maybe your database isn't well optimized. Your DB's disks might be bad, etc.
Seriously, I'd leave it in the DB and check to see why it performs so badly.
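A minimal sketch of how to start that check: ask MySQL for the execution plan of the expensive aggregation step and look for joins that scan instead of using an index (type = ALL, or an empty key column in the output):

EXPLAIN SELECT r1.id, c1.wt, c1.cpu, c1.mu, c1.pmu
FROM requests r1
INNER JOIN request_hosts rh1 ON rh1.id = r1.request_host_id
INNER JOIN request_uris ru1 ON ru1.id = r1.request_uri_id
INNER JOIN calls c1 ON c1.id = r1.request_caller_id;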
