MySQL server timing out on specific LOCATE queries - PHP

I'm programming a search with ZF3 and the DB module.
Every time I use more than one short keyword - like "49" and "am", or "1" and "is" - I get this error:
Statement could not be executed (HY000 - 2006 - MySQL server has gone away)
Using longer keywords works perfectly fine, as long as I don't use two or more short keywords.
The problem only occurs on the live server; it works fine on the local test server.
The projects table has ~2200 rows with all kinds of data; the project_search table has 17000 rows with multiple entries per project, each looking like:
id, projectid, searchtext
The searchtext column has a FULLTEXT index. Here is the relevant part of the PHP code:
$sql = new Sql($this->db);
$select = $sql->select(['p' => 'projects']);

if (isset($filter['search'])) {
    $keywords = preg_split('/\s+/', trim($filter['search']));
    $join = $sql->select('project_search');
    $join->columns(['projectid' => new Expression('DISTINCT(projectid)')]);
    $join->group("projectid");
    foreach ($keywords as $keyword) {
        $join->having(["LOCATE('$keyword', GROUP_CONCAT(searchtext))"]);
    }
    $select->join(
        ["m" => $join],
        "m.projectid = p.id",
        ['projectid'],
        \Zend\Db\Sql\Select::JOIN_RIGHT
    );
}
Here is the resulting query:
SELECT p.*, m.projectid
FROM projects AS p
INNER JOIN (
    SELECT projectid
    FROM project_search
    GROUP BY projectid
    HAVING LOCATE('am', GROUP_CONCAT(searchtext))
       AND LOCATE('49', GROUP_CONCAT(searchtext))
) AS m ON m.projectid = p.id
GROUP BY p.id
ORDER BY createdAt DESC
I rewrote the query using "MATCH(searchtext) AGAINST('$keyword')" and "searchtext LIKE '%$keyword%'" with the same result.
The problem seems to be with the live MySQL server. How can I debug this?
[EDIT]
After noticing that the error only occurred in one particular view, which ran other search-related queries as well - each using multiple joins (one join per keyword) - I merged those queries and the error was gone. The sheer number of queries seemed to kill the server.

Try refactoring your inner query like so.
SELECT a.projectid
FROM (
    SELECT DISTINCT projectid
    FROM project_search
    WHERE searchtext LIKE '%am%'
) a
JOIN (
    SELECT DISTINCT projectid
    FROM project_search
    WHERE searchtext LIKE '%49%'
) b ON a.projectid = b.projectid
It should give you back the same set of projectid values as your inner query. It gives each projectid value that has matching searchtext for both search terms, even if those terms show up in different rows of project_search. That's what your query does by searching GROUP_CONCAT() output.
Try creating an index on (searchtext, projectid). The use of column LIKE '%sample' means you won't be able to random-access that index, but the two queries in the join may still be able to scan the index, which is faster than scanning the table. To add that index use this command.
ALTER TABLE project_search ADD INDEX project_search_text (searchtext, projectid);
Try to do this in a MySQL client program (phpMyAdmin, for example) rather than directly from your PHP program.
Then, using the MySQL client, test the inner query. See how long it takes. Use EXPLAIN SELECT ... to get an explanation of how MySQL is handling the query.
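For example, something like this (a sketch reusing the derived-table query above; swap in whatever keywords you are testing):

EXPLAIN
SELECT a.projectid
FROM (
    SELECT DISTINCT projectid
    FROM project_search
    WHERE searchtext LIKE '%am%'
) a
JOIN (
    SELECT DISTINCT projectid
    FROM project_search
    WHERE searchtext LIKE '%49%'
) b ON a.projectid = b.projectid;

The type, key and rows columns of the EXPLAIN output show whether the index is being used and roughly how many rows each step touches.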
It's possible your short keywords are returning a ridiculously high number of matches and somehow overwhelming your system. In that case you can put a LIMIT 1000 clause, or some such thing, at the end of your inner query. That's not likely, though: 17 kilorows is not a large number.
If that doesn't help, your production MySQL server is likely misconfigured or corrupt. If I were you, I would call your hosting service's tech support, somehow get past the front-line support agent (who won't know anything except "reboot your computer" and other such foolishness), and tell them the exact times you got the "gone away" message. They'll be able to check the logs.
Pro tip: I'm sure you know the pitfalls of using LIKE '%text%' as a search term. It's not scalable because it's not sargable: it can't random access an index. If you can possibly redesign your system, it's worth your time and effort.
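If you do redesign, a full-text version of the same intersection could look roughly like the sketch below. Caveat (and a possible reason your MATCH attempt behaved the same): by default short tokens are not indexed at all - MyISAM's ft_min_word_len defaults to 4 and InnoDB's innodb_ft_min_token_size defaults to 3 - so keywords like 'am' or '49' only match after lowering those settings and rebuilding the index.

SELECT a.projectid
FROM (
    SELECT DISTINCT projectid
    FROM project_search
    WHERE MATCH(searchtext) AGAINST('+am' IN BOOLEAN MODE)
) a
JOIN (
    SELECT DISTINCT projectid
    FROM project_search
    WHERE MATCH(searchtext) AGAINST('+49' IN BOOLEAN MODE)
) b ON a.projectid = b.projectid;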

You could use TRY / CATCH to check whether you get a more concrete error:
BEGIN TRY
    BEGIN TRANSACTION
        --Insert Your Queries Here--
    COMMIT
END TRY
BEGIN CATCH
    DECLARE @ErrorMessage NVARCHAR(4000);
    DECLARE @ErrorSeverity INT;
    DECLARE @ErrorState INT;

    SELECT
        @ErrorMessage = ERROR_MESSAGE(),
        @ErrorSeverity = ERROR_SEVERITY(),
        @ErrorState = ERROR_STATE();

    IF @@TRANCOUNT > 0
        ROLLBACK

    RAISERROR (@ErrorMessage, -- Message text.
               @ErrorSeverity, -- Severity.
               @ErrorState     -- State.
               );
END CATCH
Although, because you are talking about short words and full-text indexes, it seems to me it must be related to stopwords.
Try running this query on both your dev server and your production server and check whether there are any differences:
SELECT * FROM INFORMATION_SCHEMA.INNODB_FT_DEFAULT_STOPWORD;
Also check in my.ini (if that is the config file) whether these are set to:
ft_stopword_file = ""
ft_min_word_len = 1
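A quick way to compare the two servers is to check the relevant variables on each (sketch; which ones apply depends on the storage engine and MySQL version):

SHOW VARIABLES LIKE 'ft_min_word_len';           -- MyISAM full-text minimum word length
SHOW VARIABLES LIKE 'innodb_ft_min_token_size';  -- InnoDB equivalent
SHOW VARIABLES LIKE '%stopword%';                -- stopword file / stopword table settings

Keep in mind that full-text indexes have to be rebuilt after changing any of these settings.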

As stated in my EDIT, the problem wasn't the query from the original question, but some other queries that used the search parameter as well. Every query had a part like the following:
if (isset($filter['search'])) {
    $keywords = preg_split('/\s+/', trim($filter['search']));
    $field = 1;
    foreach ($keywords as $keyword) {
        $join = $sql->select('project_search');
        $join->columns(["pid$field" => 'projectid']);
        $join->where(["LOCATE('$keyword', searchtext)"]);
        $join->group("projectid");
        $select->join(
            ["m$field" => $join],
            "m$field.pid$field = p.id"
        );
        $field++;
    }
}
This resulted in a lot of queries with a lot of result rows, which eventually killed the MySQL server. I merged those queries into the first one and the error was gone.
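For reference, the merged version boils down to a single derived table that applies all keyword conditions at once and is joined to projects only one time - essentially the query shown in the original question (sketch):

SELECT p.*, m.projectid
FROM projects AS p
JOIN (
    SELECT projectid
    FROM project_search
    GROUP BY projectid
    HAVING LOCATE('am', GROUP_CONCAT(searchtext))
       AND LOCATE('49', GROUP_CONCAT(searchtext))
) AS m ON m.projectid = p.id
ORDER BY createdAt DESC;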

Related

PHP - Left join from two tables in different databases with different credentials using PDO

Please note: while my original issue could not be solved in the way I expected, the @Bamar solution marked in this post is an alternative that reaches the same goal and works perfectly. What I proposed in this post does not seem to be viable if the databases are located on different hosts.
I've been searching for a while and I seem to be unable to solve my issue.
THE DATA I HAVE
My service provider is 1&1. Under my current contract I can create up to 100 databases with a maximum size of 2GB each.
Each database that is created is assigned a random hostname, port and username (the only item I can choose is the password).
I've got two different databases; let's call them DB_1 and DB_2.
In DB_1 I've got a table called T_USERS whose fields of interest for this particular problem are:
ID: The ID of the record.
userName: The user name registered on the database.
In DB_2 I've got a table called T_SCORES whose fields of interest for this particular problem are:
ID_User: a foreign key that refers to the ID of a particular user in DB_1.T_USERS
score: a numeric value that indicates the score of that user.
It is important to take into account that each of the two databases needs its own credentials!
WHAT I WANT TO ACHIEVE
What I want to achieve seems simple at first glance, but I was unable to find any documentation or solution online on how to do this using PHP and PDO.
I just want to perform a join on DB_2.ID_User and DB_1.ID.
My final result should look something like this:
DB_1.userName | DB_2.score
Alex          | 237
Peter         | 120
Mark          | 400
...           | ...
WHERE I'M CURRENTLY STUCK
This is what I've currently tried.
First of all, I connect to my databases as follows (I normally use a try/catch when connecting to a DB, but I will omit it here):
//Connection to the DB1
$db1_hostName = "hostnameofDB1";//The host name of the database 1
$db1_name = "db1";//The name of the database 1
$db1_userName = "user1";//The username in the database 1
$db1_password = "pw1";//The password for the database 1
$pdo_db1Handle = new PDO("mysql:host=$db1_hostName; dbname=$db1_name;", $db1_userName, $db1_password);
$pdo_db1Handle->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
//Connection to the DB2
$db2_hostName = "hostnameofDB2";//The host name of the database 2
$db2_name = "db2";//The name of the database 2
$db2_userName = "user2";//The username in the database 2
$db2_password = "pw2";//The password for the database 2
$pdo_db2Handle = new PDO("mysql:host=$db2_hostName; dbname=$db2_name;", $db2_userName, $db2_password);
$pdo_db2Handle->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
So basically, up to this point what I've done is very simple: create a pdo_db1Handle and a pdo_db2Handle. Now to the tricky part...
If I now want to perform a join, my SQL syntax should be something like this:
SELECT DB_1.T_USERS.userName, DB_2.T_SCORES.score
FROM DB_2.T_SCORES
LEFT JOIN DB_1.T_USERS
ON (DB_2.T_SCORES.ID_User=DB_1.T_USERS.ID)
ORDER BY DB_2.T_SCORES.score ASC -- The ordering is optional; I'm interested in the join part first
But as far as I'm aware, and from all the information I was able to find, you execute the SQL statement against one of the two handles I previously defined, like this:
$stmt=$pdo_db1Handle->prepare($mySQLStatement);
$stmt->execute();
When I try to do this, an error shows up telling me the credentials for DB_2 are missing. The opposite happens (missing credentials for DB_1) if I try to execute it against pdo_db2Handle.
How should I proceed? Is there any solution using PDO for this?
Thanks in advance :)
You can't join if you have to use separate PDO connections, so use nested loops and join the data in PHP.
$stmt_user = $pdo_db1Handle->query("SELECT id, username FROM t_users");
$stmt_score = $pdo_db2Handle->prepare("SELECT score FROM t_scores WHERE id_user = :userid");

$results = [];
while ($row_user = $stmt_user->fetch(PDO::FETCH_ASSOC)) {
    $scores = [];
    $stmt_score->execute([':userid' => $row_user['id']]);
    while ($row_score = $stmt_score->fetch(PDO::FETCH_ASSOC)) {
        $scores[] = $row_score['score'];
    }
    $results[$row_user['username']] = $scores;
}
This will create an associative array whose keys are usernames and values are an array of their scores.
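For example, to print the joined data afterwards (hypothetical output handling):

foreach ($results as $username => $scores) {
    echo $username . ': ' . implode(', ', $scores) . PHP_EOL;
}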
Depending on your use case, a workaround may be to copy the table from one database to the other temporarily and then perform your SQL once you have both tables in a single database:
$pdo1 = new PDO("mysql:host=$db1_hostName; dbname=$db1_name", $db1_userName, $db1_password);
$pdo2 = new PDO("mysql:host=$db2_hostName; dbname=$db2_name", $db2_userName, $db2_password);

// INSERT IGNORE skips rows that would violate a duplicate key
$insert_stmt = $pdo2->prepare("INSERT IGNORE INTO T_SCORES (col1, col2, col3, ...) VALUES (:col1, :col2, :col3, ...)");
$select_results = $pdo1->query("SELECT * FROM T_SCORES");
while ($row = $select_results->fetch(PDO::FETCH_ASSOC)) {
    $insert_stmt->execute($row);
}
// now work with the tables as you usually would
You can create the table in the target database beforehand and truncate the data before and/or after performing the insert.
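For instance, assuming T_SCORES already exists in DB_2, the copy could be reset before each import like this (sketch):

$pdo2->exec("TRUNCATE TABLE T_SCORES");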

How to optimize a long query that displays thousands of rows of data

I have thousands of rows of data to display in my reports, and the heavy data makes my browser lag. I think my query is the real problem. How can I optimize my query? Is there something I should add to it?
I am using XAMPP, which supports PHP 7.
SELECT
    payroll_billed_units.allotment_code,
    payroll_billed_units.category_name,
    payroll_billed_units.ntp_number,
    payroll_billed_units.activity,
    payroll_billed_units.regular_labor,
    payroll_sub.block_number,
    (SELECT GROUP_CONCAT(DISTINCT lot_number SEPARATOR ', ')
     FROM payroll_billed_units lot_numbers
     WHERE lot_numbers.allotment_code = payroll_billed_units.allotment_code
       AND lot_numbers.category_name = payroll_billed_units.category_name
       AND lot_numbers.ntp_number = payroll_billed_units.ntp_number
       AND lot_numbers.activity = payroll_billed_units.activity) AS lot_numbers,
    (SELECT COUNT(billed.ntp_id)
     FROM regular_ntp billed
     WHERE billed.allotment_code = payroll_billed_units.allotment_code
       AND billed.category_name = payroll_billed_units.category_name
       AND billed.ntp_number = payroll_billed_units.ntp_number
       AND billed.activity = payroll_billed_units.activity) AS billed,
    (SELECT COUNT(approved.id)
     FROM payroll_billed_units approved
     WHERE approved.allotment_code = payroll_billed_units.allotment_code
       AND approved.category_name = payroll_billed_units.category_name
       AND approved.ntp_number = payroll_billed_units.ntp_number
       AND approved.activity = payroll_billed_units.activity) AS approved
FROM payroll_billed_units
JOIN payroll_transaction
    ON payroll_billed_units.billing_number = payroll_transaction.billing_number
JOIN payroll_sub
    ON payroll_transaction.billing_number = payroll_sub.billing_number
WHERE payroll_billed_units.billing_date = '2019-02-13'
  AND payroll_transaction.contractor_name = 'Roy Codal'
GROUP BY allotment_code, category_name, activity
I was expecting it to load and display all my data.
The biggest problem is the dependent sub-selects; they are responsible for the bad performance. A sub-select is executed for EVERY ROW of the outer query, and if you cascade sub-selects you quickly end up with a query that runs forever.
If each level yielded only 5 rows, cascading 3 sub-selects below the outer query would mean on the order of 5^4 = 625 query evaluations!
Use JOINs.
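A hedged sketch of that rewrite, using the column names from the query above: each correlated sub-select becomes a grouped derived table that is joined once, instead of being re-executed for every output row.

SELECT u.allotment_code, u.category_name, u.ntp_number, u.activity,
       agg.lot_numbers, agg.approved, b.billed
FROM payroll_billed_units AS u
LEFT JOIN (
    SELECT allotment_code, category_name, ntp_number, activity,
           GROUP_CONCAT(DISTINCT lot_number SEPARATOR ', ') AS lot_numbers,
           COUNT(*) AS approved
    FROM payroll_billed_units
    GROUP BY allotment_code, category_name, ntp_number, activity
) AS agg USING (allotment_code, category_name, ntp_number, activity)
LEFT JOIN (
    SELECT allotment_code, category_name, ntp_number, activity,
           COUNT(*) AS billed
    FROM regular_ntp
    GROUP BY allotment_code, category_name, ntp_number, activity
) AS b USING (allotment_code, category_name, ntp_number, activity)
WHERE u.billing_date = '2019-02-13';

The joins to payroll_transaction and payroll_sub and the contractor filter can then be added back exactly as in the original query.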
Several of your tables need this 'composite' index:
INDEX(allotment_code, category_name, ntp_number, activity) -- in any order
payroll_transaction needs INDEX(contractor_name), though it may not get used.
payroll_billed_units needs INDEX(billing_date), though it may not get used.
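For example (sketch; the index names are made up, use whatever fits your conventions):

ALTER TABLE payroll_billed_units
    ADD INDEX idx_pbu_keys (allotment_code, category_name, ntp_number, activity),
    ADD INDEX idx_pbu_billing_date (billing_date);

ALTER TABLE regular_ntp
    ADD INDEX idx_rn_keys (allotment_code, category_name, ntp_number, activity);

ALTER TABLE payroll_transaction
    ADD INDEX idx_pt_contractor (contractor_name);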
For further discussion, please provide SHOW CREATE TABLE for each table and EXPLAIN SELECT ...
Simply use COUNT(*) instead of COUNT(foo). The latter checks the column for being not NULL before counting the row; this is usually not needed, and it misleads the reader into thinking there might be NULLs.
Your GROUP BY is improper because it is missing ntp_number. Read about the sql_mode of ONLY_FULL_GROUP_BY. I bring this up because you can almost get rid of some of those subqueries.
Another issue... Because of the "inflate-deflate" nature of JOIN with GROUP BY, the numbers may be inflated. I recommend you manually check the values of the COUNTs.

Query works on SQL Query Analyzer, but fails on PHP mssql_query()

I have a complex MSSQL query with multiple joins. When I run it in SQL Query Analyzer, it returns results after about 5 minutes. However, when I run it through PHP's mssql_query() function, it returns the error "mssql_query() failed" with no useful information.
Here is my analysis of the problem so far.
The response time limit of the PHP server is not the problem; the time limit is set to infinite. In fact, the same query returns results after about 30 seconds on other production servers.
I tried using mssql_get_last_message() to get more details on why it failed, but so far no luck. All I get is "Changed database context to 'dbsdb'", where 'dbsdb' is the name of the database I am querying.
This is the query I am using, if that matters.
SELECT L.[strStation], L.[nEntitlement1], L.[nEntitlement2], O.nOrders,
L.[nEntitlement3], L.[nEntitlement4], S.[strFirm], S.[nStatus],
S.[nFeatures], S.[nFeatures2], S.[nFeatures3], S.[strDescription]
FROM [tblLogin] AS L
LEFT JOIN [tblStation] AS S ON L.[strStation] = S.[strStation]
LEFT JOIN (
SELECT [strStation], COUNT(DISTINCT(nRecordId)) as nOrders
FROM [tblOrder]
WHERE [timeOrderDate] >= '2013-06-04' AND
[timeOrderDate] <= '2013-06-28'
GROUP BY strStation
) AS O ON O.[strStation] = S.[strStation]
WHERE L.[timeLoginTime] >= '2013-06-04 00:00:00' AND
L.[timeLoginTime] <= '2013-06-28 23:59:59' AND
S.[strFirm] in ('FIRMA','FIRMB')
This is the code that I use to query.
$link = mssql_connect('192.168.251.90', 'sa', '');
if (!mssql_select_db('dbsdb', $link))
var_dump(mssql_get_last_message());
$query = mssql_query($sql, $link);
EDIT I
I narrowed it down to one spot: when I remove the O.nOrders part from the SELECT clause, the query works perfectly fine.
But WHY???
EDIT II
This is driving me crazy. The query sometimes works even with the O.nOrders part in the SELECT clause. This seems totally random to me, but I know there is no such thing as true randomness in this world... there must be a reason...
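One more thing that may be worth checking (a hedged sketch, assuming the legacy mssql extension; verify these directives exist in your PHP build): the extension has its own per-query timeout and message-severity thresholds, separate from PHP's max_execution_time, and they can both swallow error detail and cut off long-running queries.

ini_set('mssql.timeout', '600');   // the extension's per-query timeout, which defaults to 60 seconds
mssql_min_error_severity(0);       // report lower-severity errors instead of discarding them
mssql_min_message_severity(0);     // report informational messages as well
if (($query = mssql_query($sql, $link)) === false) {
    var_dump(mssql_get_last_message());
}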

PostgreSQL Database - Inner Join Query Error

With the help of this community, I was able to produce a query using an inner join that I thought would work for what I'm trying to do. Unfortunately, when I attempted to run the query found below, I received the following error:
ERROR: table name "bue" specified more than once
From what I've read on Google, some people have said that the "FROM bue" is not needed, but when I removed it, I got another error, shown below:
ERROR: syntax error at or near "INNER" at character 98
I'd very much appreciate your assistance in troubleshooting this. Thank you so very much.
Query:
UPDATE
bue
SET
rgn_no = chapterassociation.rgn_no,
chp_cd = chapterassociation.chp_cd
FROM
bue
INNER JOIN
chapterassociation
ON bue.work_state = chapterassociation.work_state
AND bue.bgu_cd = chapterassociation.bgu_cd
WHERE
bue.mbr_no IS NULL AND bue.chp_cd IS NULL
In PostgreSQL, specifying the table to be updated needs to be done only in the UPDATE clause, e.g. UPDATE bue. The FROM clause is only for additional tables referenced in the query. (If you were doing a self-join on bue, you would mention it again in the FROM clause, but you aren't in this case.)
The second error you get is likely just a simple syntax error. The other tricky thing is that JOIN/ON syntax doesn't fit in the FROM clause, so you have to move the join conditions to the WHERE clause. Try something like:
UPDATE
bue
SET
rgn_no = chapterassociation.rgn_no,
chp_cd = chapterassociation.chp_cd
FROM
chapterassociation
WHERE
bue.mbr_no IS NULL AND bue.chp_cd IS NULL
AND bue.work_state = chapterassociation.work_state
AND bue.bgu_cd = chapterassociation.bgu_cd
See http://www.postgresql.org/docs/current/interactive/sql-update.html.
(N.B. At least I don't know how to put JOIN/ON into an UPDATE statement... I could be missing something.)

debugging a mysql insert fail in php

I'm having problems debugging a failing MySQL 5.1 insert under PHP 5.3.4. I can't seem to see anything in the MySQL error log or the PHP error logs.
Based on a Yahoo presentation on efficient pagination, I was adding order numbers to posters on my site (order rank, not order sales).
I wrote a quick test app and asked it to create the order numbers for one category. There are 32,233 rows in that category, and each and every time I run it I get 23,304 rows updated. Each and every time. I've increased memory usage, I've put ini settings in the script, I've run it from the PHP CLI and from PHP-FPM. Each time it doesn't get past 23,304 updated rows.
Here's my script, to which I've added massive timeouts.
include 'common.inc'; // database connection stuff

ini_set("memory_limit", "300M");
ini_set("max_execution_time", "3600");
ini_set('mysql.connect_timeout', '3600');
ini_set('mysql.trace_mode', 'On');
ini_set('max_input_time', '3600');

$sql1 = "SELECT apcatnum FROM poster_categories_inno LIMIT 1";
$result1 = mysql_query($sql1);
while ($cats = mysql_fetch_array($result1)) {
    $sql2 = "SELECT poster_data_inno.apnumber, poster_data_inno.aptitle FROM poster_prodcat_inno, poster_data_inno WHERE poster_prodcat_inno.apcatnum ='$cats[apcatnum]' AND poster_data_inno.apnumber = poster_prodcat_inno.apnumber ORDER BY aptitle ASC";
    $result2 = mysql_query($sql2);
    $ordernum = 1;
    while ($order = mysql_fetch_array($result2)) {
        $sql3 = "UPDATE poster_prodcat_inno SET catorder='$ordernum' WHERE apnumber='$order[apnumber]' AND apcatnum='$cats[apcatnum]'";
        $result3 = mysql_query($sql3);
        $ordernum++;
    } // end of 2nd while
}
I'm at a head-scratching loss. I just did a test on a smaller category, and only 13,199 out of 17,662 rows were updated. In both experiments only 72-74% of the rows are getting updated.
I'd say your problem lies with your 2nd query. Have you done an EXPLAIN on it? Because of the ORDER BY clause a filesort will be required, and if you don't have appropriate indices that can slow things down further. Try this syntax, and sub in a valid integer for your apcatnum variable during testing.
SELECT d.apnumber, d.aptitle
FROM poster_prodcat_inno p
JOIN poster_data_inno d ON d.apnumber = p.apnumber
WHERE p.apcatnum = '{$cats['apcatnum']}'
ORDER BY d.aptitle ASC;
Secondly, since catorder is just an integer version of the combination of apcatnum and aptitle, it's a denormalization for convenience's sake. This isn't necessarily bad, but it does mean that you have to update it every time you add a new title or category. Perhaps it would be better to partition your poster_prodcat_inno table by apcatnum and just do the JOIN with poster_data_inno when you actually need the catorder.
Please escape your query input, even if it does come from your own database (quotes and other characters will get you every time). Your SQL statement is incorrect because you're not using the variables correctly; use explicit syntax with braces, for example:
while ($order = mysql_fetch_array($result2)) {
    $order = array_map('mysql_real_escape_string', $order); // escape every value before it goes into the SQL string
    $sql3 = "UPDATE poster_prodcat_inno SET catorder='$ordernum' WHERE apnumber='{$order['apnumber']}' AND apcatnum='{$cats['apcatnum']}'";
}
