I have a query that uses STUFF(). When I run it from PHP, it fails because it runs out of memory.
The error occurs on this line:
$data = odbc_exec($conn, $query);
At this point PHP doesn't actually receive any data; it is just being assigned a network resource. The connection is fine (I use it everywhere else), and the query is only ~400 characters long according to strlen.
So what is actually going on here?
select oeename.id, oeename.Name, stuff(
    (select distinct ', ' + departmentequipment.name
     from type
     join DepartmentEquipment on departmentequipmentfk = departmentequipment.id
     where type.OEENameFK = oeename.ID
     order by ', ' + departmentequipment.name
     for xml path('')), 1, 1, '') as departmentequipmentnames
from oeename
order by oeename.name asc
Even when I add TOP 1 to the query, it still exceeds the memory limit.
EDIT: the PHP script was working before. Someone had reported a bug in a somewhat related part of my website, and while trying to track it down on my local development server I came across this problem. The table is now somewhat larger, but it's failing on exec rather than while reading the data in.
I believe you're running into this bug: https://bugs.php.net/bug.php?id=68964
To summarize the workaround, you need to cast any ntext fields to nvarchar(max) instead.
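For example (column and table names hypothetical), the workaround looks like this in the SELECT list:

```sql
-- Cast any ntext column up front so the ODBC layer sees nvarchar(max) instead.
SELECT CAST(SomeNtextColumn AS nvarchar(max)) AS SomeNtextColumn
FROM SomeTable;
```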
I have a conundrum that appears to defy logic, involving Lumen, PHP, PDO, and SQL Server.
I have a controller which contains an action that executes a stored procedure on a SQL Server instance before returning the results as a JSON string. Nothing special is happening, but for certain parameters I do not get any response.
Right, some code. Here's the PHP/PDO prepared statement.
// Prepare our query.
$query = $pdo->prepare("
EXEC [dbase].[dbo].[myStoredProc]
    @A = :A,
    @B = :B,
    @C = :C,
    @D = :D,
    @E = :E,
    @F = :F,
    @G = :G
");
// Bind the parameters and execute the query.
$query->bindParam(':A', $A);
$query->bindParam(':B', $B);
$query->bindParam(':C', $C);
$query->bindParam(':D', $D);
$query->bindParam(':E', $E);
$query->bindParam(':F', $F);
$query->bindParam(':G', $G);
$query->execute();
// Dump the parameters for debugging purposes.
$query->debugDumpParams();
// Lets get all of the data.
$data = $query->fetchAll(\PDO::FETCH_ASSOC);
print_r($data);
So far, perfectly normal. If I use POSTMAN and pass in the parameters as follows:
A 'C_ICPMS_06'
B 'AQC1'
C '726'
D NULL
E '2021-08-30 00:00:00'
F '2021-11-30 23:59:59'
G NULL
I get a list of results as expected, both from POSTMAN and PHP as well as through SSMS (using the output from the debug statement).
Now if I change parameter C from '726' to '728', I do not get any output from POSTMAN and PHP, but I still get output from SSMS.
Thinking that there could be some text within the output that is breaking the FETCHALL function, I amended the stored procedure to return a single record, all columns containing 1's. Once more the parameter of 726 works, 728 does not.
I added a var_dump() call to ensure that the parameter isn't being mangled on its way to the controller; both parameter values report as strings 3 characters in length.
If I change the prepared statement as below, I still don't get any results in POSTMAN/PHP.
// Bind the parameters and execute the query.
$query->bindParam(':A', $A);
$query->bindParam(':B', $B);
//$query->bindParam(':C', $C);
$query->bindValue(':C', '728');
$query->bindParam(':D', $D);
$query->bindParam(':E', $E);
$query->bindParam(':F', $F);
$query->bindParam(':G', $G);
$query->execute();
The debug SQL statement is identical to before (using the param as opposed to value).
If I change the stored procedure, such that regardless of what value is passed in for parameter C, it is hardcoded to 728, it works as intended (obviously it does not matter what the parameter is set within POSTMAN). So I get values within POSTMAN and SSMS, therefore, it is safe to assume that the whole problem is being caused by the parameter and value '728'.
Digging further into this issue, I found that parameter values of '72F' or '70W' also return no results via POSTMAN/PHP but do from within SSMS. I've checked and cannot see any error messages being produced.
I added the below lines to the controller to see if I can see an issue, but nothing was seen (not on screen nor within error files).
ini_set('display_errors', 1);
ini_set('display_startup_errors', 1);
error_reporting(E_ALL);
In trying to figure this out, I created a temporary database table in order to capture the input parameters and the number of records as found within the SP. This should show where the problem lies, i.e. within PHP or within SQL Server. It now gets stranger.
Calling the SP from within SSMS, it populates with an expected record, including the number of records the initial search returned. However, calling it from POSTMAN using the controller, everything is the same in terms of parameters, but the number of records found is 0!
So I know something very weird is going on, but cannot put my finger on what and therefore how to fix it. If anyone has any ideas or has come across a similar problem, please let me know. this is bugging me now. No doubt when I get this working, it'll be a silly error and I'll end up kicking myself.
The issue was that a temporary table that was being created couldn't hold a specific value being assigned to it. The column was designated as a TINYINT but should have been a SMALLINT since the value could go negative.
Why SSMS never reported that as an issue and happily allowed it through, God only knows. But when called externally, it failed to insert any records within the temporary table, returning no records as a result.
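For illustration (table and column names hypothetical): SQL Server's TINYINT is unsigned (0 to 255), so any negative value overflows on insert, while SMALLINT is signed and accepts it:

```sql
DECLARE @bad TABLE (delta TINYINT);
INSERT INTO @bad VALUES (-1);   -- fails: arithmetic overflow converting to tinyint

DECLARE @good TABLE (delta SMALLINT);
INSERT INTO @good VALUES (-1);  -- succeeds: smallint is signed (-32768 to 32767)
```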
There go 1.5 days of my life never to be seen again.
I'm programming a search with ZF3 and the DB module.
Every time I use more than one short keyword, like "49" and "am" or "1" and "is", I get this error:
Statement could not be executed (HY000 - 2006 - MySQL server has gone away)
Using longer keywords works perfectly fine, as long as I don't use two or more short keywords.
The problem only occurs on the live server; it works fine on the local test server.
The projects table has ~2200 rows with all kinds of data. The project_search table has 17,000 rows with multiple entries for each project, each looking like:
id, projectid, searchtext
The searchtext column is fulltext. Here is the relevant part of the PHP code:
$sql = new Sql($this->db);
$select = $sql->select(['p' => 'projects']);
if (isset($filter['search'])) {
    $keywords = preg_split('/\s+/', trim($filter['search']));
    $join = $sql->select('project_search');
    $join->columns(['projectid' => new Expression('DISTINCT(projectid)')]);
    $join->group("projectid");
    foreach ($keywords as $keyword) {
        $join->having(["LOCATE('$keyword', GROUP_CONCAT(searchtext))"]);
    }
    $select->join(
        ["m" => $join],
        "m.projectid = p.id",
        ['projectid'],
        \Zend\Db\Sql\Select::JOIN_RIGHT
    );
}
Here is the resulting query:
SELECT p.*, m.projectid
FROM projects AS p
INNER JOIN (
SELECT projectid
FROM project_search
GROUP BY projectid
HAVING LOCATE('am', GROUP_CONCAT(searchtext))
AND LOCATE('49', GROUP_CONCAT(searchtext))
) AS m
ON m.projectid = p.id
GROUP BY p.id
ORDER BY createdAt DESC
I rewrote the query using "MATCH(searchtext) AGAINST('$keyword')" and "searchtext LIKE '%$keyword%'", with the same result.
The problem seems to be with the live MySQL server. How can I debug this?
[EDIT]
After noticing that the error only occurred in one particular view, which ran other search-related queries as well (each using multiple joins, one join per keyword), I merged those queries and the error was gone. The sheer number of queries seemed to kill the server.
Try refactoring your inner query like so.
SELECT a.projectid
FROM (
  SELECT DISTINCT projectid
  FROM project_search
  WHERE searchtext LIKE '%am%'
) a
JOIN (
  SELECT DISTINCT projectid
  FROM project_search
  WHERE searchtext LIKE '%49%'
) b ON a.projectid = b.projectid
It should give you back the same set of projectid values as your inner query. It gives each projectid value that has matching searchtext for both search terms, even if those terms show up in different rows of project_search. That's what your query does by searching GROUP_CONCAT() output.
Try creating an index on (searchtext, projectid). The use of column LIKE '%sample' means you won't be able to random-access that index, but the two queries in the join may still be able to scan the index, which is faster than scanning the table. To add that index use this command.
ALTER TABLE project_search ADD INDEX project_search_text (searchtext, projectid);
Try to do this in a MySQL client program (phpmyadmin for example) rather than directly from your php program.
Then, using the MySQL client, test the inner query. See how long it takes. Use EXPLAIN SELECT .... to get an explanation of how MySQL is handling the query.
It's possible your short keywords are returning a ridiculously high number of matches, and somehow overwhelming your system. In that case you can put a LIMIT 1000 clause or some such thing at the end of your inner query. That's not likely, though. 17 kilorows is not a large number.
If that doesn't help your production MySQL server is likely misconfigured or corrupt. If I were you I would call your hosting service tech support, somehow get past the front-line support agent (who won't know anything except "reboot your computer" and other such foolishness), and tell them the exact times you got the "gone away" message. They'll be able to check the logs.
Pro tip: I'm sure you know the pitfalls of using LIKE '%text%' as a search term. It's not scalable because it's not sargable: it can't random access an index. If you can possibly redesign your system, it's worth your time and effort.
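If a redesign is on the table, here is a sketch of a sargable alternative using MySQL's FULLTEXT search, structured as the same join-of-two-distinct-sets so terms may match in different rows (note that short keywords like "am" or "49" may be dropped unless ft_min_word_len and the stopword list allow them):

```sql
ALTER TABLE project_search ADD FULLTEXT INDEX ft_searchtext (searchtext);

SELECT a.projectid
FROM (
  SELECT DISTINCT projectid
  FROM project_search
  WHERE MATCH(searchtext) AGAINST('am' IN BOOLEAN MODE)
) a
JOIN (
  SELECT DISTINCT projectid
  FROM project_search
  WHERE MATCH(searchtext) AGAINST('49' IN BOOLEAN MODE)
) b ON a.projectid = b.projectid;
```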
You could TRY / CATCH to check if you get a more concrete error:
BEGIN TRY
BEGIN TRANSACTION
--Insert Your Queries Here--
COMMIT
END TRY
BEGIN CATCH
DECLARE @ErrorMessage NVARCHAR(4000);
DECLARE @ErrorSeverity INT;
DECLARE @ErrorState INT;
SELECT
    @ErrorMessage = ERROR_MESSAGE(),
    @ErrorSeverity = ERROR_SEVERITY(),
    @ErrorState = ERROR_STATE();
IF @@TRANCOUNT > 0
    ROLLBACK
RAISERROR (@ErrorMessage, -- Message text.
    @ErrorSeverity,       -- Severity.
    @ErrorState           -- State.
);
END CATCH
Although, since you are talking about short words and fulltext, it seems to me it must be related to stopwords.
Try running this query from both your dev server and production server and check if there are any differences:
SELECT * FROM INFORMATION_SCHEMA.INNODB_FT_DEFAULT_STOPWORD;
Also check in the MySQL config file (my.ini, if that is the one in use) whether these are set:
ft_stopword_file = ""
ft_min_word_len = 1
As stated in my EDIT, the problem wasn't the query from the original question, but some other queries that used the search parameter as well. Every query had a part like the following:
if (isset($filter['search'])) {
    $keywords = preg_split('/\s+/', trim($filter['search']));
    $field = 1;
    foreach ($keywords as $keyword) {
        $join = $sql->select('project_search');
        $join->columns(["pid$field" => 'projectid']);
        $join->where(["LOCATE('$keyword', searchtext)"]);
        $join->group("projectid");
        $select->join(
            ["m$field" => $join],
            "m$field.pid$field = p.id"
        );
        $field++;
    }
}
This resulted in a lot of queries with a lot of result rows, eventually killing the MySQL server. I merged those queries into the first one and the error was gone.
I was doing bulk inserts into the real-time index using PHP, with AUTOCOMMIT disabled,
e.g.
// sphinx connection
$sphinxql = mysqli_connect($sphinxql_host.':'.$sphinxql_port,'','');
//do some other time consuming work
//sphinx start transaction
mysqli_begin_transaction($sphinxql);
//do 50k updates or inserts
// Commit transaction
mysqli_commit($sphinxql);
and kept the script running overnight. In the morning I saw:
PHP Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate
212334 bytes) in
When I checked the nohup.out file closely, I noticed these lines:
PHP Warning: mysqli_query(): MySQL server has gone away in /home/script.php on line 502
Warning: mysqli_query(): MySQL server has gone away in /home/script.php on line 502
Memory usage before these lines was normal, but after them it started to increase until it hit the PHP memory_limit, producing the fatal error and killing the script.
In script.php, line 502 is:
mysqli_query($sphinxql,$update_query_sphinx);
So my guess is that the Sphinx server closed the connection after a few minutes or hours of inactivity.
I have tried setting this in sphinx.conf:
client_timeout = 3600
and restarted searchd with:
systemctl restart searchd
But I am still facing the same issue.
So how can I keep the Sphinx server from dying on me when there is no activity for a longer time?
More info:
I am getting data from MySQL in 50k chunks at a time, looping over each row and updating it in the Sphinx RT index, like this:
//6mil rows update in mysql, so it takes around 18-20 minutes to complete this then comes this following part.
$subset_count = 50000;
$total_count_query = "SELECT COUNT(*) as total_count FROM content WHERE enabled = '1'";
$total_count = mysqli_query($conn, $total_count_query);
$total_count = mysqli_fetch_assoc($total_count);
$total_count = $total_count['total_count'];
$current_count = 0;
while ($current_count <= $total_count) {
    $get_mysql_data_query = "SELECT record_num, views, comments, votes FROM content WHERE enabled = 1 ORDER BY record_num ASC LIMIT $current_count, $subset_count";
    // sphinx: start transaction
    mysqli_begin_transaction($sphinxql);
    if ($result = mysqli_query($conn, $get_mysql_data_query)) {
        /* fetch associative array */
        while ($row = mysqli_fetch_assoc($result)) {
            // escape the whole row for sphinx (mysqli_real_escape_array appears to be a custom helper)
            $escaped_sphinx = mysqli_real_escape_array($sphinxql, $row);
            // update data in the sphinx index
            $update_query_sphinx = "UPDATE $sphinx_index
                SET
                    views = " . $escaped_sphinx['views'] . ",
                    comments = " . $escaped_sphinx['comments'] . ",
                    votes = " . $escaped_sphinx['votes'] . "
                WHERE
                    id = " . $escaped_sphinx['record_num'];
            mysqli_query($sphinxql, $update_query_sphinx);
        }
        /* free result set */
        mysqli_free_result($result);
    }
    // commit transaction
    mysqli_commit($sphinxql);
    $current_count = $current_count + $subset_count;
}
So there are a couple of issues here, both related to running big processes.
MySQL server has gone away - This usually means that MySQL has timed out, but it could also mean that the MySQL process crashed due to running out of memory. In short, it means that MySQL has stopped responding, and didn't tell the client why (i.e. no direct query error). Seeing as you said that you're running 50k updates in a single transaction, it's likely that MySQL just ran out of memory.
Allowed memory size of 134217728 bytes exhausted - means that PHP ran out of memory. This also lends credence to the idea that MySQL ran out of memory.
So what to do about this?
The initial stop-gap solution is to increase the memory limits for PHP and MySQL. That doesn't really solve the root cause, and depending on the amount of control (and knowledge) you have of your deployment stack, it may not be possible.
As a few people mentioned, batching the process may help. It's hard to say the best way to do this without knowing the actual problem you're working on solving. If you can process, say, 10,000 or 20,000 records instead of 50,000 in a batch, that may solve your problems. If that's going to take too long in a single process, you could also look into using a message queue (RabbitMQ is a good one that I've used on a number of projects), so that you can run multiple processes at the same time, each handling smaller batches.
If you're doing something that requires knowledge of all 6 million+ records to perform the calculation, you could potentially split the process up into a number of smaller steps, cache the work done "to date" (as such), and then pick up the next step in the next process. How to do this cleanly is difficult (again, something like RabbitMQ could simplify that by firing an event when each process is finished, so that the next one can start up).
So, in short, there are your best two options:
Throw more resources/memory at the problem everywhere that you can
Break the problem down into smaller, self-contained chunks.
You need to reconnect or restart the DB session just before mysqli_begin_transaction($sphinxql), something like this:
<?php
// reconnect to sphinx if it was disconnected due to a timeout or whatever, or force a reconnect
function sphinxReconnect($force = false) {
    global $sphinxql_host;
    global $sphinxql_port;
    global $sphinxql;
    if ($force) {
        mysqli_close($sphinxql);
        $sphinxql = @mysqli_connect($sphinxql_host . ':' . $sphinxql_port, '', '') or die('ERROR');
    } else {
        if (!mysqli_ping($sphinxql)) {
            mysqli_close($sphinxql);
            $sphinxql = @mysqli_connect($sphinxql_host . ':' . $sphinxql_port, '', '') or die('ERROR');
        }
    }
}
//10mil+ rows update in mysql, so it takes around 18-20 minutes to complete this then comes this following part.
//reconnect to sphinx
sphinxReconnect(true);
//sphinx start transaction
mysqli_begin_transaction($sphinxql);
//do your otherstuff
// Commit transaction
mysqli_commit($sphinxql);
I have a fairly large, complex chunk of SQL that creates a report for a particular page. This SQL has several variable declarations at the top, which I suspect is giving me problems.
When I get the ODBC result, I have to use odbc_next_result() to get past the empty result sets that seem to be returned with the variables. That seems to be no problem. When I finally get to the "real" result, odbc_num_rows() tells me it has over 12 thousand rows, when in actuality it has 6.
Here is an example of what I'm doing, to give you an idea, without going into details on the class definitions:
$report = $db->execute(Reports::create_sql('sales-report-full'));
while(odbc_num_rows($report) <= 1) {
odbc_next_result($report);
}
echo odbc_num_rows($report);
The SQL looks something like this:
DECLARE @actualYear int = YEAR(GETDATE());
DECLARE @curYear int = YEAR(GETDATE());
IF MONTH(GETDATE()) = 1
    SELECT @curYear = @curYear - 1;
DECLARE @lastYear int = @curYear - 1;
DECLARE @actualLastYear int = @actualYear - 1;
DECLARE @tomorrow datetime = DATEADD(dd, 1, GETDATE());
SELECT * FROM really_big_query
Generally speaking, it's a good idea to start every stored procedure, or batch of commands to be executed, with the SET NOCOUNT ON instruction, which tells SQL Server to suppress sending "rows affected" messages to a client application over ODBC (or ADO, etc.). These messages affect performance and cause empty record sets to appear in the result set, which means additional effort for application developers.
I've also used ODBC drivers that actually raise an error if you forget to suppress these messages - so it has become instinctive for me to type SET NOCOUNT ON as soon as I start writing any new stored procedure.
There are various question/answers relating to this subject, for example What are the advantages and disadvantages of turning NOCOUNT off in SQL Server queries? and SET NOCOUNT ON usage which cover many other aspects of this command.
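Applied to the batch from the question, the fix is a one-line addition at the top (sketch):

```sql
SET NOCOUNT ON;  -- suppress "rows affected" messages for everything below

DECLARE @actualYear int = YEAR(GETDATE());
-- ... the remaining declarations and assignments ...
SELECT * FROM really_big_query;
```

With NOCOUNT on, the variable assignments should no longer produce the empty result sets, so the loop over odbc_next_result() should become unnecessary.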
Basically, I need to get some data out of a SQL Server 2005, using PHP.
Previously we used the mssql_* functions, however due to various server issues we are only now able to use odbc_* functions.
The table has various columns whose names contain spaces. Before it is suggested: no, I cannot change them, as this is a totally separate piece of software in another language and it would break it; I am just getting stats out of it.
Anyway, I had previously been accessing these columns by putting their names in square brackets, e.g. [column name], and it worked fine with the mssql_* functions. However, when I do this:
$sql = "select top 1 [assessment name] as AssessmentName from bksb_Assessments";
$result = odbc_exec($db, $sql);
while($row = odbc_fetch_array($result))
{
var_dump($row);
}
It prints the results out as:
'assessment name' => string 'Mathematics E3 Diagnostic' (length=25)
So as you can see it is totally ignoring the alias I've given it and still calling it [assessment name].
But if I run the exact same thing in SQL Server Management Studio Express, it works fine and uses the alias.
I've tried various different combinations, such as quoting the alias, quoting the column, different brackets, etc... but no luck so far.
This isn't a huge issue, as I can change what my script uses to look for "assessment name" in the result array instead of the alias, but it's just a bit annoying that I couldn't work out why this was happening...
Cheers! :)
EDIT:
Actually, I don't think the square brackets make a difference; trying to alias any column doesn't work through PHP ODBC. However, I can still do something like CAST(whatever) AS 'alias' and that works fine... just not selecting a plain column with an alias...? :/
This is by no means a perfect solution, and I would love to learn the proper way to deal with this issue, but in the meantime, add +'' to each column, after the underlying column name but before the AS alias, and you should be able to circumvent this issue.
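Applied to the query from the question, that looks like (sketch):

```sql
select top 1 [assessment name] + '' as AssessmentName from bksb_Assessments;
```

Concatenating the empty string turns the column into an expression, which seems to force the driver to report the alias as the column label.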
While attempting to troubleshoot this issue and determine why some of my team's stored procedures exhibited this behavior, I finally discovered why. What's strange is that we were only experiencing this issue with a few stored procedures, while most returned the aliases as expected.
It appears that declaring at least one variable, whether or not it is being used, will prevent this particular bug from occurring. The variable can be of any type.
If you're just executing a SELECT query without using a stored procedure, use a DECLARE statement to define a variable before or after the statement:
-- This works...
DECLARE @ignore TINYINT; SELECT EmployeeID AS employee_id FROM Employee;
-- So does this...
SELECT EmployeeID AS employee_id FROM Employee; DECLARE @ignore TINYINT;
Now if you're running into this issue with your stored procedures, you can take the same approach above by defining a variable anywhere in the body.
One other thing I noticed: if the stored procedure is defined with parameters, and a parameter gets set or evaluated in something like an IF statement, this bug will also not occur. For example:
-- This works...
CREATE PROC get_employee(
    @employee_id VARCHAR(16) = NULL
)
AS
IF (@employee_id IS NOT NULL)
BEGIN
    SELECT FirstName AS first_name FROM Employee WHERE EmployeeID = @employee_id;
END
GO
Using SET:
-- So does this...
CREATE PROC get_employee(
    @employee_id VARCHAR(16) = NULL,
    @ignore TINYINT
)
AS
SET @ignore = NULL;
SELECT FirstName AS first_name FROM Employee WHERE EmployeeID = @employee_id;
GO
I still don't quite understand the cause or nature of the bug; but these workarounds, similar to the type-casting hack above, appear to get around this annoying bug.
It's probably also worth mentioning that we didn't encounter this issue in Ubuntu 14.04 and PHP 5.4. As soon as we upgraded to Ubuntu 18.04 and PHP 7.2, that was when we began experiencing this issue. In both instances, we do have FreeTDS configured to use version 7.1, which is why we're baffled by this particular bug.
@dearsina's approach in fact seems to be one of the few ways to handle this bug.
However, this solution is not applicable to all data types. So for me, the only way to cover all needed columns was to CAST() the relevant column (where c.State initially has datatype bit):
SELECT
CAST(c.State AS nvarchar) AS 'customer_state'
FROM
customers AS c;
Otherwise you might get an error message similar to:
The data types bit and varchar are incompatible in the add operator.
Reference on CASTing large data sets: https://social.msdn.microsoft.com/Forums/sqlserver/en-US/643b6eda-fa03-4ad3-85a1-051f02097c7f/how-much-do-cast-statements-affect-performance?forum=transactsql
Some further finding on this topic is a recently (November 2017) raised bug report at bugs.php.net claiming that the problem is related to FreeTDS: https://bugs.php.net/bug.php?id=75534.
This bug report contains an unofficial (and, at least on my end, untested) patch:
diff --git a/ext/odbc/php_odbc_includes.h b/ext/odbc/php_odbc_includes.h
index 93e2c96..8451bcd 100644
--- a/ext/odbc/php_odbc_includes.h
+++ b/ext/odbc/php_odbc_includes.h
@@ -292,18 +292,16 @@ void odbc_sql_error(ODBC_SQL_ERROR_PARAMS);
#define PHP_ODBC_SQLCOLATTRIBUTE SQLColAttribute
#define PHP_ODBC_SQLALLOCSTMT(hdbc, phstmt) SQLAllocHandle(SQL_HANDLE_STMT, hdbc, phstmt)
-
-#define PHP_ODBC_SQL_DESC_NAME SQL_DESC_NAME
#else
#define IS_SQL_LONG(x) (x == SQL_LONGVARBINARY || x == SQL_LONGVARCHAR)
#define PHP_ODBC_SQLCOLATTRIBUTE SQLColAttributes
#define PHP_ODBC_SQLALLOCSTMT SQLAllocStmt
-
-#define PHP_ODBC_SQL_DESC_NAME SQL_COLUMN_NAME
#endif
#define IS_SQL_BINARY(x) (x == SQL_BINARY || x == SQL_VARBINARY || x == SQL_LONGVARBINARY)
+#define PHP_ODBC_SQL_DESC_NAME SQL_DESC_LABEL
+
PHP_ODBC_API ZEND_EXTERN_MODULE_GLOBALS(odbc)
#define ODBCG(v) ZEND_MODULE_GLOBALS_ACCESSOR(odbc, v)
I faced a similar problem. The 4th parameter of odbc_connect() did the trick in my case: choosing cursor type SQL_CUR_USE_ODBC.
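For reference, that looks something like this (the DSN and credentials are placeholders):

```php
<?php
// Force use of the ODBC cursor library rather than driver-side cursors.
$conn = odbc_connect('myDSN', 'myUser', 'myPass', SQL_CUR_USE_ODBC);
if ($conn === false) {
    die('Connection failed: ' . odbc_errormsg());
}
```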