I have thousands of rows of data to display in my reports, and the sheer volume makes my browser lag. I think my query is the real problem. How can I optimize my query? Is there something I should add to it?
I am using XAMPP, which supports PHP 7.
SELECT
`payroll_billed_units`.`allotment_code`,
`payroll_billed_units`.`category_name`,
`payroll_billed_units`.`ntp_number`,
`payroll_billed_units`.`activity`,
`payroll_billed_units`.`regular_labor`,
`payroll_sub`.`block_number`,
(SELECT
GROUP_CONCAT(DISTINCT `lot_number` SEPARATOR ', ')
FROM
`payroll_billed_units` `lot_numbers`
WHERE
`lot_numbers`.`allotment_code` = `payroll_billed_units`.`allotment_code`
AND `lot_numbers`.`category_name` = `payroll_billed_units`.`category_name`
AND `lot_numbers`.`ntp_number` = `payroll_billed_units`.`ntp_number`
AND `lot_numbers`.`activity` = `payroll_billed_units`.`activity`) AS `lot_numbers`,
(SELECT
COUNT(`billed`.`ntp_id`)
FROM
`regular_ntp` `billed`
WHERE
`billed`.`allotment_code` = `payroll_billed_units`.`allotment_code`
AND `billed`.`category_name` = `payroll_billed_units`.`category_name`
AND `billed`.`ntp_number` = `payroll_billed_units`.`ntp_number`
AND `billed`.`activity` = `payroll_billed_units`.`activity`) AS `billed`,
(SELECT
COUNT(`approved`.`id`)
FROM
`payroll_billed_units` `approved`
WHERE
`approved`.`allotment_code` = `payroll_billed_units`.`allotment_code`
AND `approved`.`category_name` = `payroll_billed_units`.`category_name`
AND `approved`.`ntp_number` = `payroll_billed_units`.`ntp_number`
AND `approved`.`activity` = `payroll_billed_units`.`activity`) AS `approved`
FROM
`payroll_billed_units`
JOIN payroll_transaction ON payroll_billed_units.billing_number =
payroll_transaction.billing_number
JOIN payroll_sub ON payroll_transaction.billing_number =
payroll_sub.billing_number
WHERE payroll_billed_units.billing_date = '2019-02-13'
AND payroll_transaction.contractor_name = 'Roy Codal' GROUP BY allotment_code, category_name, activity
I was expecting it to load and display all of my data.
The biggest problem is the dependent (correlated) sub-selects; they are responsible for the bad performance. A sub-select is executed for EVERY ROW of the outer query, and if you nest sub-selects, you will quickly have a query that runs forever.
If each level yielded only 5 rows, 3 nested sub-selects would mean the database has to run 625 queries (5^4)!
Use JOINs.
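For example, here is a rough sketch of how the three correlated subqueries could be folded into the outer GROUP BY plus one pre-aggregated derived table. It assumes the table and column names from your query; note that the outer query also filters by billing_date and contractor_name, so verify that the lot_numbers, approved, and billed values still match what you expect before relying on it.
SELECT
    pbu.allotment_code,
    pbu.category_name,
    pbu.ntp_number,
    pbu.activity,
    pbu.regular_labor,
    ps.block_number,
    GROUP_CONCAT(DISTINCT pbu.lot_number SEPARATOR ', ') AS lot_numbers,
    COUNT(*)                                             AS approved,
    MAX(b.billed)                                        AS billed
FROM payroll_billed_units pbu
JOIN payroll_transaction pt ON pt.billing_number = pbu.billing_number
JOIN payroll_sub ps         ON ps.billing_number = pt.billing_number
LEFT JOIN (  -- aggregate regular_ntp once instead of once per output row
    SELECT allotment_code, category_name, ntp_number, activity, COUNT(*) AS billed
    FROM regular_ntp
    GROUP BY allotment_code, category_name, ntp_number, activity
) b ON  b.allotment_code = pbu.allotment_code
    AND b.category_name  = pbu.category_name
    AND b.ntp_number     = pbu.ntp_number
    AND b.activity       = pbu.activity
WHERE pbu.billing_date = '2019-02-13'
  AND pt.contractor_name = 'Roy Codal'
GROUP BY pbu.allotment_code, pbu.category_name, pbu.ntp_number, pbu.activity;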
Several of your tables need this 'composite' index:
INDEX(allotment_code, category_name, ntp_number, activity) -- in any order
payroll_transaction needs INDEX(contractor_name), though it may not get used.
payroll_billed_units needs INDEX(billing_date), though it may not get used.
For further discussion, please provide SHOW CREATE TABLE for each table and EXPLAIN SELECT ...
Simply use COUNT(*) instead of COUNT(foo). The latter checks the column for being not NULL before counting the row, which is usually not needed, and it confuses the reader into thinking there might be NULLs.
Your GROUP BY is improper because it is missing ntp_number. Read about the ONLY_FULL_GROUP_BY sql_mode. I bring this up because you can probably get rid of some of those subqueries.
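A quick way to check whether that mode is active on your server (it is part of the default in MySQL 5.7 and later):
SELECT @@sql_mode;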
Another issue... Because of the "inflate-deflate" nature of JOIN with GROUP BY, the numbers may be inflated. I recommend you manually check the values of the COUNTs.
Related
I'm programming a search with ZF3 and the DB module.
Every time I use more than one short keyword - like "49" and "am", or "1" and "is" - I get this error:
Statement could not be executed (HY000 - 2006 - MySQL server has gone away)
Using longer keywords works perfectly fine, as long as I don't use two or more short keywords.
The problem only occurs on the live server; it works fine on the local test server.
The projects table has ~2200 rows with all kinds of data; the project_search table has 17000 rows, with multiple entries for each project, each looking like:
id, projectid, searchtext
The searchtext column has a FULLTEXT index. Here is the relevant part of the PHP code:
$sql = new Sql($this->db);
$select = $sql->select(['p'=>'projects']);
if(isset($filter['search'])) {
$keywords = preg_split('/\s+/', trim($filter['search']));
$join = $sql->select('project_search');
$join->columns(['projectid' => new Expression('DISTINCT(projectid)')]);
$join->group("projectid");
foreach($keywords as $keyword) {
$join->having(["LOCATE('$keyword', GROUP_CONCAT(searchtext))"]);
}
$select->join(
["m" => $join],
"m.projectid = p.id",
['projectid'],
\Zend\Db\Sql\Select::JOIN_RIGHT
);
}
Here is the resulting query:
SELECT p.*, m.projectid
FROM projects AS p
INNER JOIN (
SELECT projectid
FROM project_search
GROUP BY projectid
HAVING LOCATE('am', GROUP_CONCAT(searchtext))
AND LOCATE('49', GROUP_CONCAT(searchtext))
) AS m
ON m.projectid = p.id
GROUP BY p.id
ORDER BY createdAt DESC
I rewrote the query using MATCH(searchtext) AGAINST('$keyword') and searchtext LIKE '%$keyword%', with the same result.
The problem seems to be with the live MySQL server. How can I debug this?
[EDIT]
After noticing that the error only occurred in one particular view, which had other search-related queries as well - each using multiple joins (one join per keyword) - I merged those queries and the error was gone. The sheer number of queries seemed to kill the server.
Try refactoring your inner query like so.
SELECT a.projectid
FROM (
SELECT DISTINCT projectid
FROM projectsearch
WHERE searchtext LIKE '%am%'
) a
JOIN (
SELECT DISTINCT projectid
FROM projectsearch
WHERE searchtext LIKE '%49%'
) b ON a.projectid = b.projectid
It should give you back the same set of projectid values as your inner query. It gives each projectid value that has matching searchtext for both search terms, even if those terms show up in different rows of project_search. That's what your query does by searching GROUP_CONCAT() output.
Try creating an index on (searchtext, projectid). The use of column LIKE '%sample' means you won't be able to random-access that index, but the two queries in the join may still be able to scan the index, which is faster than scanning the table. To add that index use this command.
ALTER TABLE project_search ADD INDEX project_search_text (searchtext, projectid);
Try to do this in a MySQL client program (phpmyadmin for example) rather than directly from your php program.
Then, using the MySQL client, test the inner query. See how long it takes. Use EXPLAIN SELECT .... to get an explanation of how MySQL is handling the query.
It's possible your short keywords are returning a ridiculously high number of matches, and somehow overwhelming your system. In that case you can put a LIMIT 1000 clause or some such thing at the end of your inner query. That's not likely, though. 17 kilorows is not a large number.
If that doesn't help your production MySQL server is likely misconfigured or corrupt. If I were you I would call your hosting service tech support, somehow get past the front-line support agent (who won't know anything except "reboot your computer" and other such foolishness), and tell them the exact times you got the "gone away" message. They'll be able to check the logs.
Pro tip: I'm sure you know the pitfalls of using LIKE '%text%' as a search term. It's not scalable because it's not sargable: it can't random access an index. If you can possibly redesign your system, it's worth your time and effort.
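If a redesign is on the table, one sargable option is a FULLTEXT index with MATCH ... AGAINST, keeping the same join-per-keyword shape as above. This is only a sketch, and it assumes the minimum indexed word length (innodb_ft_min_token_size, or ft_min_word_len for MyISAM) has been lowered so short keywords like '49' are indexed, and that words like 'am' are not on the stopword list:
ALTER TABLE project_search ADD FULLTEXT INDEX ft_searchtext (searchtext);
SELECT a.projectid
FROM (
    SELECT DISTINCT projectid
    FROM project_search
    WHERE MATCH(searchtext) AGAINST('am' IN BOOLEAN MODE)
) a
JOIN (
    SELECT DISTINCT projectid
    FROM project_search
    WHERE MATCH(searchtext) AGAINST('49' IN BOOLEAN MODE)
) b ON a.projectid = b.projectid;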
You could TRY / CATCH to check if you get a more concrete error:
BEGIN TRY
    BEGIN TRANSACTION
    --Insert Your Queries Here--
    COMMIT
END TRY
BEGIN CATCH
    DECLARE @ErrorMessage NVARCHAR(4000);
    DECLARE @ErrorSeverity INT;
    DECLARE @ErrorState INT;
    SELECT
        @ErrorMessage = ERROR_MESSAGE(),
        @ErrorSeverity = ERROR_SEVERITY(),
        @ErrorState = ERROR_STATE();
    IF @@TRANCOUNT > 0
        ROLLBACK
    RAISERROR (@ErrorMessage, -- Message text.
               @ErrorSeverity, -- Severity.
               @ErrorState -- State.
    );
END CATCH
Although, since you are talking about short words and fulltext, it seems to me it must be related to stopwords.
Try running this query from both your dev server and production server and check if there are any differences:
SELECT * FROM INFORMATION_SCHEMA.INNODB_FT_DEFAULT_STOPWORD;
Also check in my.ini (or whichever config file your server uses) whether these are set to:
ft_stopword_file = ""
ft_min_word_len = 1
As stated in my EDIT, the problem wasn't the query from the original question, but some other queries that used the search parameter as well. Every query had a part like the following:
if(isset($filter['search'])) {
$keywords = preg_split('/\s+/', trim($filter['search']));
$field = 1;
foreach($keywords as $keyword) {
$join = $sql->select('project_search');
$join->columns(["pid$field" => 'projectid']);
$join->where(["LOCATE('$keyword', searchtext)"]);
$join->group("projectid");
$select->join(
["m$field" => $join],
"m$field.pid$field = p.id"
);
$field++;
}
}
This resulted in a lot of queries with a lot of result rows, eventually killing the MySQL server. I merged those queries into the first one and the error was gone.
I need to update a lot of DB values, so I guess it's better to use an SQL statement, maybe by creating and uploading a PHP file and running it from time to time.
In my DB I have 3 related tables, let's say
tableA_label
tableB_image
tableC_text
the relations are as follows:
tableA_label.ImageID refers to tableB_image.ID
tableB_image.TextID refers to tableC_text.ID
my goal is:
update tableA_label.Name
tableA_label.Name = tableC_text.title
where
tableC_text.ID = tableB_image.TextID
and
tableB_image.ID = tableA_label.ImageID
.....
how can I accomplish this using an SQL statement?
Thank you for your support.
Try this query:
UPDATE tableA_label SET
tableA_label.Name = (SELECT tableC_text.title
                     FROM tableC_text
                     INNER JOIN tableB_image ON tableB_image.TextID = tableC_text.ID
                     WHERE tableB_image.ID = tableA_label.ImageID)
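An equivalent form, in case it's useful, is MySQL's multi-table UPDATE join syntax; it only touches labels that actually have a matching image and text row, whereas the correlated subquery above sets Name to NULL for labels without a match. Table and column names are taken from the question:
UPDATE tableA_label a
JOIN tableB_image b ON b.ID = a.ImageID
JOIN tableC_text  c ON c.ID = b.TextID
SET a.Name = c.title;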
I'm having a performance problem with the execution of a SELECT in PHP PDO.
Using a script available here at Stack Overflow (Simplest way to profile a PHP script), I identified where the problem is, but I have not found a solution.
My select that is the problem is:
SELECT REDACAO.ID_REDACAO AS ID_REDACAO,
DATE_FORMAT(REDACAO.DATA,'%d/%m/%Y') AS DATAE,
ALUNO.ID_ALUNO AS ID_ALUNO,
(SELECT IFNULL((DATEDIFF(DATE_ADD((SELECT MAX(DATA) FROM REDACAO WHERE ID_ALUNO = ALUNO.ID_ALUNO AND ID_REDACAO NOT IN (SELECT ID_REDACAO FROM CORRECAO)), INTERVAL 7 DAY), now())),NULL) as DATA FROM REDACAO LIMIT 1) AS ULTIMA,
ALUNO.NOME as ALUNO,
REDACAO.ID_TEMA AS ID_TEMA,
TEMA.TITULO as TEMA,
TEMA.MOTIVACIONAIS AS MOTIVACIONAIS,
REDACAO.TEXTO AS TEXTO,
REDACAO.ID_STATUS AS STATUS,
B.NOTA as NOTA,
B.RCORRIGIDA AS CORRIGIDA,
B.NOTA1,
B.COMENTARIO1,
B.NOTA2,
B.COMENTARIO2,
B.NOTA3,
B.COMENTARIO3,
B.NOTA4,
B.COMENTARIO4,
B.NOTA5,
B.COMENTARIO5,
B.COMENTARIO6,
C.COMENTARIO AS COMENTARIO
FROM REDACAO
LEFT OUTER JOIN (SELECT SUM(CORRECAO.C1+CORRECAO.C2+CORRECAO.C3+CORRECAO.C4+CORRECAO.C5) AS NOTA, RCORRIGIDA AS RCORRIGIDA, CORRECAO.C1 as NOTA1, CORRECAO.COM1 as COMENTARIO1, CORRECAO.C2 as NOTA2, CORRECAO.COM2 as COMENTARIO2, CORRECAO.C3 as NOTA3, CORRECAO.COM3 as COMENTARIO3, CORRECAO.C4 as NOTA4, CORRECAO.COM4 as COMENTARIO4, CORRECAO.C5 as NOTA5, CORRECAO.COM5 as COMENTARIO5, CORRECAO.COMGERAL AS COMENTARIO6, CORRECAO.ID_REDACAO FROM CORRECAO GROUP BY CORRECAO.ID_REDACAO) B
ON B.ID_REDACAO = REDACAO.ID_REDACAO
JOIN ALUNO ON ALUNO.ID_ALUNO = REDACAO.ID_ALUNO
JOIN TEMA ON TEMA.ID_TEMA = REDACAO.ID_TEMA
LEFT OUTER JOIN (SELECT (COUNT(COMENTARIO.ID_COMENTARIO)) AS COMENTARIO, COMENTARIO.ID_REDACAO FROM COMENTARIO GROUP BY COMENTARIO.ID_REDACAO) C
ON C.ID_REDACAO = REDACAO.ID_REDACAO
WHERE REDACAO.ID_PROFESSOR = $CodProfessor
and REDACAO.ID_STATUS != 6
ORDER BY (CASE WHEN REDACAO.ID_STATUS = 4 THEN 1 ELSE 0 END) DESC
I'm using PDO::FETCH_ASSOC to get the data. Some columns come back in less than 1 second and others take more than 20 seconds.
Any idea what could be the problem and how to solve it?
Your query contains the following things that will slow it down:
many joins
many subselects
select without where
functions like COUNT, IFNULL, DATEDIFF and SUM (some of these may prevent the use of an index)
case when
order by
group by
Depending on your indexes, on how the tables are joined, and on how big the tables are, this will get slower and slower.
Try using the EXPLAIN command, and simplify the query if possible.
explain output
a good video about explain
PDO is not at fault; the query is complex. And there may be missing indexes.
Turn this into a LEFT JOIN, because IN (SELECT ...) optimizes poorly:
AND ID_REDACAO NOT IN ( SELECT ID_REDACAO FROM CORRECAO)
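A sketch of the anti-join form of that condition, using the table and column names from your query (the MAX(DATA) subquery becomes something along these lines):
SELECT MAX(r.DATA)
FROM REDACAO r
LEFT JOIN CORRECAO c ON c.ID_REDACAO = r.ID_REDACAO
WHERE r.ID_ALUNO = ALUNO.ID_ALUNO   -- still correlated to the outer ALUNO row
  AND c.ID_REDACAO IS NULL          -- i.e., REDACAO rows with no CORRECAO row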
Upgrade to 5.6; it has some improvements.
You have two JOIN ( SELECT ... ). Before 5.6 that would be optimized terribly. Move one of them out into a temp table, to which you add a suitable index.
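For example, something like this (column names assumed from your derived table B above); materialize it once, index it, then join to it instead of to the inline subquery:
CREATE TEMPORARY TABLE tmp_correcao AS
SELECT ID_REDACAO,
       SUM(C1 + C2 + C3 + C4 + C5) AS NOTA,
       RCORRIGIDA,
       C1 AS NOTA1, COM1 AS COMENTARIO1,
       C2 AS NOTA2, COM2 AS COMENTARIO2,
       C3 AS NOTA3, COM3 AS COMENTARIO3,
       C4 AS NOTA4, COM4 AS COMENTARIO4,
       C5 AS NOTA5, COM5 AS COMENTARIO5,
       COMGERAL AS COMENTARIO6
FROM CORRECAO
GROUP BY ID_REDACAO;
ALTER TABLE tmp_correcao ADD INDEX (ID_REDACAO);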
In one of the subqueries, GROUP BY CORRECAO.ID_REDACAO seems to be unnecessary.
These indexes (or PRIMARY KEYs) are needed:
CORRECAO: (ID_REDACAO)
REDACAO: (ID_REDACAO), (ID_PROFESSOR)
ALUNO: (ID_ALUNO)
TEMA: (ID_TEMA)
COMENTARIO: (ID_REDACAO, ID_COMENTARIO) ("compound index")
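For example, the compound index on COMENTARIO could be added like this (the others follow the same pattern):
ALTER TABLE COMENTARIO ADD INDEX idx_comentario (ID_REDACAO, ID_COMENTARIO);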
If those suggestions do not help enough, come back with SHOW CREATE TABLE for each table.
So I have a friend's website and I'm trying to help him optimize the queries, because he got an email from the host saying that his website was eating too many resources. Looking at this query, I was wondering: is there any way this can be made faster or smarter (consume less time or less memory)? I don't know which would be the best approach; maybe prepared statements or some improvement of the current code.
$query=#mysql_query("
SELECT
tl.title,
tl.dt,
tk.nr as nr2,
tk.name as cat,
tf.name,
tf.type,
tf.alt,
tt.trupi
FROM
".$tab['news']." as tl,
".$tab['category']." as tk,
".$tab['foto']." as tf,
".$tab['entry']." as th,
".$tab['body']." as tt
WHERE
tl.nr = '$nr' AND
tl.foto = tf.nr AND
tl.kategoria = tk.nr AND
tl.hyrja = th.nr AND
tl.trupi = tt.nr");
I'd appreciate all of your suggestions,
Thank you.
For a start, you could use the ANSI 92 join syntax.
Combined with getting rid of two letter acronyms, that would have the benefit of making it clearer what your queries were doing.
Maybe try...
$nr = mysql_real_escape_string( $nr );
$tab['category'] = mysql_real_escape_string( $tab['category'] );
$tab['foto'] = mysql_real_escape_string( $tab['foto'] );
$tab['entry'] = mysql_real_escape_string( $tab['entry'] );
$tab['body'] = mysql_real_escape_string( $tab['body'] );
$query = #mysql_query("
SELECT
tl.title,
tl.dt,
tk.nr as nr2,
tk.name as cat,
tf.name,
tf.type,
tf.alt,
tt.trupi
FROM
".$tab['news']." as tl
LEFT JOIN ".$tab['category']." as tk ON ( tk.nr = tl.kategoria )
LEFT JOIN ".$tab['foto']." as tf ON ( tf.nr = tl.foto )
LEFT JOIN ".$tab['entry']." as th ON ( th.nr = tl.hyrja )
LEFT JOIN ".$tab['body']." as tt ON ( tt.nr = tl.trupi )
WHERE
tl.nr = '$nr'");
Note the escaping of variables before using them in an SQL string (See Bobby Tables.)
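If switching away from the old mysql_* extension is an option, a prepared statement (sketched here with PDO) removes the need to escape the value at all. Note that only values can be bound, not table names, so the table names behind $tab are written out literally below purely as an assumption:
$stmt = $pdo->prepare("
    SELECT tl.title, tl.dt, tk.nr AS nr2, tk.name AS cat,
           tf.name, tf.type, tf.alt, tt.trupi
    FROM news AS tl
    LEFT JOIN category AS tk ON tk.nr = tl.kategoria
    LEFT JOIN foto     AS tf ON tf.nr = tl.foto
    LEFT JOIN entry    AS th ON th.nr = tl.hyrja
    LEFT JOIN body     AS tt ON tt.nr = tl.trupi
    WHERE tl.nr = :nr");
$stmt->execute([':nr' => $nr]);       // the keyword value is bound, not interpolated
$rows = $stmt->fetchAll(PDO::FETCH_ASSOC);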
Another thing to bear in mind is the design of the database in its entirety. If there are any instances of "one-to-one" relationships between tables, then those two tables should be collapsed into a single table, etc.
There is no need to use AS for each and every table. Also, use EXPLAIN to check how optimal your query is; you can find a great guide on how to interpret the result of EXPLAIN here: http://www.databasejournal.com/features/mysql/article.php/1382791/Optimizing-MySQL-Queries-and-Indexes.htm
Every extra piece of work in a query decreases performance in one way or another, but I think the bottleneck is the join itself and the way you relate the tables. Try using integers (IDs) when relating one table to another, and if that's not possible make sure the column you are joining on is indexed. That would speed up your query remarkably, I'm certain.
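For example, if the nr columns being joined on are not already primary keys, the indexes could be added like this (table names are only placeholders for whatever $tab maps to):
ALTER TABLE category ADD INDEX idx_nr (nr);
ALTER TABLE foto     ADD INDEX idx_nr (nr);
ALTER TABLE entry    ADD INDEX idx_nr (nr);
ALTER TABLE body     ADD INDEX idx_nr (nr);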
I have a situation where, let's say, I'm trying to get the information about some food. I then need to display all of that information plus all the ingredients in that food.
With my query, I'm getting all the information in an array, but only the first ingredient...
myFoodsArr =
[0]
foodDescription = "the description text will be here"
ratingAverage = 0
foodId = 4
ingredient = 1
ingAmount = 2
foodName = "Awesome Food name"
typeOfFood = 6
votes = 0
I would like to get something back like this...
myFoodsArr =
[0]
foodDescription = "the description text will be here"
ratingAverage = 0
foodId = 4
ingArr = {ingredient: 1, ingAmount: 4}, {ingredient: 3, ingAmount: 2}, {ingredient: 5, ingAmount: 1}
foodName = "Awesome Food name"
typeOfFood = 6
votes = 0
This is the query I'm working with right now. How can I adjust it to return the food with ID 4 and then also get ALL the ingredients for that food, while at the same time doing other things like getting the average rating of that food?
Thanks!
SELECT a.foodId, a.foodName, a.foodDescription, a.typeOfFood, c.ingredient, c.ingAmount, AVG(b.foodRating) AS ratingAverage, COUNT(b.foodId) as tvotes
FROM `foods` a
LEFT JOIN `foods_ratings` b
ON a.foodId = b.foodId
LEFT JOIN `foods_ing` c
ON a.foodId=c.foodId
WHERE a.foodId=4
EDIT:
Catcall introduced this concept of "subqueries" I had never heard of, so I'm trying to make that work to see if I can do this in one query easily. But I just keep getting back false. This is what I was trying, with no luck:
//I changed some of the column names to help them be more distinct in this example
SELECT a.foodId, a.foodName, a.foodDescription, a.typeOfFood, AVG(b.foodRating) AS ratingAverage, COUNT(b.foodId) as tvotes
FROM foods a
LEFT JOIN foods_ratings b ON a.foodId = b.foodId
LEFT JOIN (SELECT fId, ingredientId, ingAmount
FROM foods_ing
WHERE fId = 4
GROUP BY fId) c ON a.foodId = c.fId
WHERE a.foodId = 4
EDIT: One more thing, related to Roland's GROUP_CONCAT/JSON idea as a solution for this.
I'm trying to make sure the JSON string I'm sending back to my Flash project can be properly parsed; "Invalid JSON parse input." keeps popping up,
so I'm thinking I need to have all the double quotes in the right places.
But in my MySQL query string, when I try to escape the double quotes, my MySQL column references stop working. For example...
If I do this:
GROUP_CONCAT('{\"ingredient\":', \"c.ingredient\", ',\"ingAmount\":', \"c.ingAmount\", '}')
I get this...
{"ingredient":c.ingredient,"ingAmount":c.ingAmount},{"ingredient":c.ingredient,"ingAmount":c.ingAmount},{"ingredient":c.ingredient,"ingAmount":c.ingAmount}
How can I place all the double quotes so that the JSON is properly formed without breaking the MySQL?
This should do the trick:
SELECT food_ingredients.foodId
, food_ingredients.foodName
, food_ingredients.foodDescription
, food_ingredients.typeOfFood
, food_ingredients.ingredients
, AVG(foods_ratings.foodRating) food_rating
, COUNT(foods_ratings.foodId) number_of_votes
FROM (
SELECT a.foodId
, a.foodName
, a.foodDescription
, a.typeOfFood
, GROUP_CONCAT(
'{ingredient:', c.ingredient
, ',ingAmount:', c.ingAmount, '}'
) ingredients
FROM foods a
LEFT JOIN foods_ing c
ON a.foodId = c.foodId
WHERE a.foodId = 4
GROUP BY a.foodId
) food_ingredients
LEFT JOIN foods_ratings
ON food_ingredients.foodId = foods_ratings.foodId
GROUP BY food_ingredients.foodId
Note that the type of query you want to do is not trivial in any SQL-based database.
The main problem is that you have one master (food) with two details (ingredients and ratings). Because those details are not related to each other (other than to the master) they form a cartesian product with each other (bound only by their relationship to the master).
The query above solves that by doing it in 2 steps: first, join to the first detail (ingredients) and aggregate the detail (using group_concat to make one single row of all related ingredient rows), then join that result to the second detail (ratings) and aggregate again.
In the example above, the ingredients are returned in a structured string, exactly like it appeared in your example. If you want to access the data inside PHP, you might consider adding a bit more syntax to make it a valid JSON string so you can decode it into an array using the php function json_decode(): http://www.php.net/manual/en/function.json-decode.php
To do that, simply change the line to:
CONCAT(
'['
, GROUP_CONCAT(
'{"ingredient":', c.ingredient
, ',"ingAmount":', c.ingAmount, '}'
)
, ']'
)
(this assumes ingredient and ingAmount are numeric; if they are strings, you should double quote them, and escape any double quotes that appear within the string values)
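On the PHP side, decoding that column is then a one-liner; a sketch assuming the row was fetched as an associative array and the column is aliased ingredients as in the query above:
// $row comes from e.g. $stmt->fetch(PDO::FETCH_ASSOC) or mysqli_fetch_assoc($result)
$ingredients = json_decode($row['ingredients'], true);
// e.g. [["ingredient" => 1, "ingAmount" => 4], ["ingredient" => 3, "ingAmount" => 2]]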
The concatenation of ingredients with GROUP_CONCAT can lead to problems if you keep a default setting for the group_concat_max_len server variable. A trivial way to mitigate that problem is to set it to the maximum theoretical size of any result:
SET group_concat_max_len = @@max_allowed_packet;
You can execute this once after you open the connection to MySQL, and it will then be in effect for the duration of that session. Alternatively, if you have the SUPER privilege, you can change the value across the board for the entire MySQL instance:
SET GLOBAL group_concat_max_len = @@max_allowed_packet;
You can also add a line to your my.cnf or my.ini to set group_concat_max_len to some arbitrary, large enough static value. See http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html#sysvar_group_concat_max_len
One obvious solution is to actually perform two queries:
1) get the food
SELECT a.foodId, a.foodName, a.foodDescription, a.typeOfFood
FROM `foods` a
WHERE a.foodId=4
2) get all of its ingredients
SELECT c.ingredient, c.ingAmount
FROM `foods_ing` c
WHERE c.foodId=4
This approach has the advantage that you don't duplicate data from the "foods" table into the result. The disadvantage is that you have to perform two queries. Actually, you have to perform one extra query for each "food", so if you want a listing of foods with all their ingredients, you would have to run one query per food record.
Other solutions usually have many disadvantages. One of them is using the GROUP_CONCAT function, but it has a hard limit on the length of the returned string.
When you compare MySQL's aggregate functions and GROUP BY behavior to SQL standards, you have to conclude that they're simply broken. You can do what you want in a single query, but instead of joining directly to the table of ratings, you need to join on a query that returns the results of the aggregate functions. Something along these lines should work.
select a.foodId, a.foodName, a.foodDescription, a.typeOfFood,
c.ingredient, c.ingAmount,
b.numRatings, b.avgRating
from foods a
left join (select foodId, count(foodId) numRatings, avg(foodRating) avgRating
from foods_ratings
group by foodId) b on a.foodId = b.foodId
left join foods_ing c on a.foodId = c.foodId
order by a.foodId