Longtext max memory error using mysqli_query - php

The company I work for uses Kayako to manage its support tickets. I was tasked with making an external web app that takes all the tickets from one company and displays their history.
I'm using mysqli to connect to the database and run the query.
$link = mysqli_connect($serverName, $userName, $password, $dbName);
$sql = "SELECT ticketid, fullname, creator, contents FROM swticketposts";
$result = mysqli_query($link, $sql) or die(mysqli_error($link));
The problem is that the contents column in MySQL uses the LONGTEXT data type.
Trying to read this data with PHP gives me either timeout or maximum memory usage errors.
Line 49 is the $result = mysqli_query(...) line.
EDIT: I posted the wrong error message in the original post. The actual error is:
Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 8192 bytes)
Trying to work around the memory problem, I added:
ini_set('memory_limit', '-1');
which gave me this timeout error instead:
Fatal error: Maximum execution time of 30 seconds exceeded in C:\xampp\htdocs\test\php\TicketCall.php on line 49
The thing is, when we view the tickets on the Kayako platform, it reads the same contents column and does so instantly, so there must be a way to read these LONGTEXT values faster. How is beyond me.
Solutions I can't use:
Changing the data type to something smaller (it would break our Kayako system)
TL;DR: Is there a way to read LONGTEXT data with PHP and MySQL without exhausting memory or hitting timeout errors?

First of all, you might consider changing your query to something like this:
SELECT ticketid, fullname, creator, contents FROM swticketposts WHERE customer = 'something'
That will make your query return less data by filtering out the information from customers your report doesn't care about. It may save execution time on your query.
Second, you may wish to use
SUBSTRING(contents, 1, 1000) AS contents
in place of contents in your query. This will get you back part of the contents column (the first thousand characters); it may (or may not) be good enough for what you're trying to do.
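Putting those two ideas together, the query might look something like this (the customer filter and the 1,000-character cutoff are only illustrations; adjust them to your needs):
SELECT ticketid, fullname, creator, SUBSTRING(contents, 1, 1000) AS contents
FROM swticketposts
WHERE customer = 'something'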
Third, mysqli uses buffered queries by default. That means the entire result set gets slurped into your program's memory when you run the query, which is probably why your memory is blowing out. Read this: http://php.net/manual/en/mysqlinfo.concepts.buffering.php
You can arrange to handle your query row-by-row doing this:
$unbufResult = mysqli_query($link, $sql, MYSQLI_USE_RESULT) or trigger_error($link->error);
if ($unbufResult) {
    while ($row = $unbufResult->fetch_assoc()) {
        /* deal with the data from one row */
    }
    $unbufResult->close();
}
You need to be careful to use close() when you're done with these unbuffered result sets, because they use resources on your MySQL server until you do. This will probably prevent your memory blowout. (There's nothing special about using fetch_assoc() here; you can fetch each row with any method you wish.)
Fourth, PHP's memory and time limits are usually tuned for interactive web site operation. Sometimes report generation takes a long time. Put calls to set_time_limit() in and around your loop, something like this:
set_time_limit(300); /* five minutes */
$unbufResult = mysqli_query($link, $sql, MYSQLI_USE_RESULT) or trigger_error($link->error);
if ($unbufResult) {
    while ($row = $unbufResult->fetch_assoc()) {
        set_time_limit(30); /* reset the time limit on each row */
        /* deal with the data from one row */
    }
    $unbufResult->close();
}
set_time_limit(300); /* set the time limit back to five minutes when done reading */

Related

How to continuously run the cron job for a bulk of 5000 users

I have a database of 5000 users, and there is already a cron job that runs once a week.
Initially, when there were only a few hundred users, things worked fine. Now that the user count has reached 5000, the cron job runs for some 500-600 users and then breaks down. I researched it and came to the conclusion that since HTTP is a stateless protocol, whenever a new request comes in the cron job breaks down partway through. My question is: how can I run the cron job for all 5000 users without it breaking down? Please help me.
I would first check your PHP error logs, as you may be hitting time and memory limits. If you are performing database queries, I would also check the database logs to see whether any limits are being hit.
PHP Memory Limit Increase
Increasing the memory limit will allow your script to run for longer if it is currently running out of memory.
Option One
Update your php.ini file (change 256 to suit your requirements):
memory_limit = 256M
Option Two
Alternatively, call ini_set('memory_limit', ...) at runtime to increase the memory limit (again, change 256 to suit your requirements):
ini_set('memory_limit','256M');
PHP Execution Limit Increase
Increase the current execution timeout in your PHP file, either with set_time_limit(n) or via ini_set() (change 300 to suit your requirements):
ini_set('max_execution_time', 300); //300 seconds = 5 minutes
Split Up Database Results (Batches)
If you are performing a query that returns a large number of rows, it could be timing out. You can try implementing the following example logic (sketched here with mysqli), which splits a big query into smaller chunks using LIMIT and OFFSET.
// Get the total row count ($conn is your database connection)
$result = mysqli_query($conn, 'SELECT COUNT(id) AS total FROM users');
$row = mysqli_fetch_assoc($result);
$total_rows = (int) $row['total'];

// Set a block size
$block_size = 300;

// Walk through the table one block at a time
for ($block_offset = 0; $block_offset < $total_rows; $block_offset += $block_size) {
    // Query one block of rows
    $data = mysqli_query($conn, "SELECT * FROM users LIMIT $block_size OFFSET $block_offset");

    // Loop through each row and process it here
    while ($row = mysqli_fetch_assoc($data)) {
        // .. code here
    }
    mysqli_free_result($data);

    // You can also echo out something here so the script returns some data.
    // Sometimes if nothing is sent back for a while it can cause issues
    // (not generally for a cron, though), e.g.
    echo 'Done block ' . $block_offset . PHP_EOL;
}

Sphinx: How can I keep the connection active even if there is no activity for a long time?

I was doing bulk inserts into the real-time index using PHP with AUTOCOMMIT disabled, e.g.
// sphinx connection
$sphinxql = mysqli_connect($sphinxql_host.':'.$sphinxql_port,'','');
//do some other time consuming work
//sphinx start transaction
mysqli_begin_transaction($sphinxql);
//do 50k updates or inserts
// Commit transaction
mysqli_commit($sphinxql);
and kept the script running overnight. In the morning I saw:
PHP Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate
212334 bytes) in
When I checked the nohup.out file closely, I noticed these lines:
PHP Warning: mysqli_query(): MySQL server has gone away in /home/script.php on line 502
Warning: mysqli_query(): MySQL server has gone away in /home/script.php on line 502
Memory usage before these lines was normal, but after them it started to increase until it hit the PHP memory_limit and the script died with the fatal error.
In script.php, line 502 is:
mysqli_query($sphinxql,$update_query_sphinx);
So my guess is that the Sphinx server closed the connection after a few hours/minutes of inactivity.
I have tried setting this in sphinx.conf:
client_timeout = 3600
and restarted searchd with:
systemctl restart searchd
but I am still facing the same issue.
So how can I keep the Sphinx server from dying on me when there is no activity for a long time?
More info:
I am getting data from MySQL in 50k chunks at a time and using a while loop to fetch each row and update it in the Sphinx RT index, like this:
//6mil rows update in mysql, so it takes around 18-20 minutes to complete this, then comes this following part.
$subset_count = 50000;

$total_count_query = "SELECT COUNT(*) as total_count FROM content WHERE enabled = '1'";
$total_count = mysqli_query($conn, $total_count_query);
$total_count = mysqli_fetch_assoc($total_count);
$total_count = $total_count['total_count'];

$current_count = 0;

while ($current_count <= $total_count) {
    $get_mysql_data_query = "SELECT record_num, views, comments, votes FROM content WHERE enabled = 1 ORDER BY record_num ASC LIMIT $current_count, $subset_count";

    // sphinx start transaction
    mysqli_begin_transaction($sphinxql);

    if ($result = mysqli_query($conn, $get_mysql_data_query)) {
        /* fetch associative array */
        while ($row = mysqli_fetch_assoc($result)) {
            // sphinx escape whole array
            $escaped_sphinx = mysqli_real_escape_array($sphinxql, $row);

            // update data in sphinx index
            $update_query_sphinx = "UPDATE $sphinx_index
                SET
                    views = " . $escaped_sphinx['views'] . ",
                    comments = " . $escaped_sphinx['comments'] . ",
                    votes = " . $escaped_sphinx['votes'] . "
                WHERE
                    id = " . $escaped_sphinx['record_num'];

            mysqli_query($sphinxql, $update_query_sphinx);
        }
        /* free result set */
        mysqli_free_result($result);
    }

    // Commit transaction
    mysqli_commit($sphinxql);

    $current_count = $current_count + $subset_count;
}
So there are a couple of issues here, both related to running big processes.
MySQL server has gone away - This usually means that MySQL has timed out, but it could also mean that the MySQL process crashed due to running out of memory. In short, it means that MySQL has stopped responding, and didn't tell the client why (i.e. no direct query error). Seeing as you said that you're running 50k updates in a single transaction, it's likely that MySQL just ran out of memory.
Allowed memory size of 134217728 bytes exhausted - means that PHP ran out of memory. This also lends credence to the idea that MySQL ran out of memory.
So what to do about this?
The initial stop-gap solution is to increase the memory limits for PHP and MySQL. That's not really solving the root cause, and depending on the amount of control (and knowledge) you have of your deployment stack, it may not be possible.
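As a rough illustration (the values are placeholders, and which settings matter depends on your setup), that could mean something like:
// PHP side: raise this script's memory limit
ini_set('memory_limit', '512M');

// MySQL side (my.cnf / my.ini) -- settings that commonly matter for
// "server has gone away" and large transactions; tune for your server:
//   wait_timeout       = 28800   -- how long an idle connection stays open
//   max_allowed_packet = 64M     -- maximum size of a single packet/query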
As a few people mentioned, batching the process may help. It's hard to say the best way to do this without knowing the actual problem that you're working on solving. If you can calculate, say, 10,000 or 20,000 records instead of 50,000 in a batch, that may solve your problems. If that's going to take too long in a single process, you could also look into using a message queue (RabbitMQ is a good one that I've used on a number of projects), so that you can run multiple processes at the same time, each handling a smaller batch.
If you're doing something that requires knowledge of all 6 million+ records to perform the calculation, you could potentially split the process up into a number of smaller steps, cache the work done "to date" (as such), and then pick up the next step in the next process. How to do this cleanly is difficult (again, something like RabbitMQ could simplify that by firing an event when each process is finished, so that the next one can start up).
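As a rough sketch of the batching idea applied to your loop (the 10,000 batch size is arbitrary, and $sphinxql and $result are the variables from your own code):
$batch_size = 10000; // arbitrary; tune to what your servers can handle
$in_batch = 0;

mysqli_begin_transaction($sphinxql);
while ($row = mysqli_fetch_assoc($result)) {
    // ... build and run the UPDATE against the Sphinx RT index, as before ...

    if (++$in_batch >= $batch_size) {
        // commit this batch and start a fresh transaction
        mysqli_commit($sphinxql);
        mysqli_begin_transaction($sphinxql);
        $in_batch = 0;
    }
}
mysqli_commit($sphinxql); // commit whatever is left in the final partial batch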
So, in short, these are your two best options:
Throw more resources/memory at the problem everywhere you can
Break the problem down into smaller, self-contained chunks.
You need to reconnect or restart the DB session just before mysqli_begin_transaction($sphinxql), something like this:
<?php
// Reconnect to Sphinx if it is disconnected due to a timeout or whatever, or force a reconnect.
function sphinxReconnect($force = false) {
    global $sphinxql_host;
    global $sphinxql_port;
    global $sphinxql;

    if ($force) {
        mysqli_close($sphinxql);
        $sphinxql = @mysqli_connect($sphinxql_host . ':' . $sphinxql_port, '', '') or die('ERROR');
    } else {
        if (!mysqli_ping($sphinxql)) {
            mysqli_close($sphinxql);
            $sphinxql = @mysqli_connect($sphinxql_host . ':' . $sphinxql_port, '', '') or die('ERROR');
        }
    }
}

// 10mil+ rows update in MySQL, so it takes around 18-20 minutes to complete; then comes the following part.

// Reconnect to Sphinx
sphinxReconnect(true);

// Sphinx: start transaction
mysqli_begin_transaction($sphinxql);

// ... do your other stuff ...

// Commit transaction
mysqli_commit($sphinxql);

Reading from a big table

I want to read data from a table (or view) on SQL Server 2008 R2 using PHP 5.4.24 and FreeTDS 0.91.
In PHP, I write:
$ret = mssql_query( 'SELECT * FROM mytable', $Conn ) ;
Then I read the rows one at a time and process them; all is OK. Except when the table is really big, I get an error:
Fatal error: Allowed memory size of 1073741824 bytes exhausted (tried to allocate 4625408 bytes)
in /home/prove/test.php on line 43
The error happens in the mssql_query(), so it does not matter how I query.
Altering the query to return fewer rows or fewer columns is not viable, because I must read a lot of data from many tables in a limited time.
What can I do to convince PHP to read into memory one row at a time, or a reasonable number of rows at a time?
I agree with @James in the question comments: if you need to read a table so big it exhausts your memory just to stash the results before even handing back to PHP, it probably means you need to figure out a better way. However, here is a possible solution (untested, and I've only used MSSQL a couple of times, but I tried my best):
$ret = mssql_query('SELECT COUNT(*) AS TotalRows FROM mytable', $Conn);
$row = mssql_fetch_assoc($ret);

$offset = 0;
$increment = 50;

while ($offset < $row['TotalRows']) {
    $ret_2 = mssql_query("SELECT * FROM mytable ORDER BY Id ASC OFFSET {$offset} ROWS FETCH NEXT {$increment} ROWS ONLY", $Conn);
    //
    // loop over those 50 rows, do your thing...
    //
    $offset += $increment;
}
//
// There will probably be a remainder, so you'll have to account for that after the loop;
// making the interior loop a function or method would probably be wise.
//

PDO/MySQL memory consumption with large result set

I'm having a strange time dealing with selecting from a table with about 30,000 rows.
It seems my script is using an outrageous amount of memory for what is a simple, forward-only walk over a query result.
Please note that this is a somewhat contrived, absolute bare-minimum example which bears very little resemblance to the real code, and it cannot be replaced with a simple database aggregation. It is intended to illustrate the point that each row does not need to be retained on each iteration.
<?php
$pdo = new PDO('mysql:host=127.0.0.1', 'foo', 'bar', array(
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
));
$stmt = $pdo->prepare('SELECT * FROM round');
$stmt->execute();

function do_stuff($row) {}

$c = 0;
while ($row = $stmt->fetch()) {
    // do something with the object that doesn't involve keeping
    // it around and can't be done in SQL
    do_stuff($row);
    $row = null;
    ++$c;
}
var_dump($c);
var_dump(memory_get_usage());
var_dump(memory_get_peak_usage());
This outputs:
int(39508)
int(43005064)
int(43018120)
I don't understand why 40 meg of memory is used when hardly any data needs to be held at any one time. I have already worked out I can reduce the memory by a factor of about 6 by replacing "SELECT *" with "SELECT home, away", however I consider even this usage to be insanely high and the table is only going to get bigger.
Is there a setting I'm missing, or is there some limitation in PDO that I should be aware of? I'm happy to get rid of PDO in favour of mysqli if it can not support this, so if that's my only option, how would I perform this using mysqli instead?
After creating the connection, you need to set PDO::MYSQL_ATTR_USE_BUFFERED_QUERY to false:
<?php
$pdo = new PDO('mysql:host=127.0.0.1', 'foo', 'bar', array(
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
));
$pdo->setAttribute(PDO::MYSQL_ATTR_USE_BUFFERED_QUERY, false);
// snip
var_dump(memory_get_usage());
var_dump(memory_get_peak_usage());
This outputs:
int(39508)
int(653920)
int(668136)
Regardless of the result size, the memory usage remains pretty much static.
Another option would be to do something like:
$i = $c = 0;
$query = 'SELECT home, away FROM round LIMIT 2048 OFFSET %u;';
while (($rows = codeThatFetches(sprintf($query, $i++ * 2048))) && count($rows) > 0) {
    $c += count($rows);
    foreach ($rows as $row) {
        do_stuff($row);
    }
}
The whole result set (all 30,000 rows) is buffered into memory before you can start looking at it.
You should be letting the database do the aggregation and only asking it for the two numbers you need.
SELECT SUM(home) AS home, SUM(away) AS away, COUNT(*) AS c FROM round
The reality of the situation is that if you fetch all rows and expect to be able to iterate over all of them in PHP, at once, they will exist in memory.
If you really don't think using SQL powered expressions and aggregation is the solution you could consider limiting/chunking your data processing. Instead of fetching all rows at once do something like:
1) Fetch 5,000 rows
2) Aggregate/Calculate intermediary results
3) unset variables to free memory
4) Back to step 1 (fetch next set of rows)
Just an idea...
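A minimal sketch of those four steps, assuming the $pdo connection and do_stuff() from the question (the 5,000-row chunk size is arbitrary):
$chunk_size = 5000;
$offset = 0;

do {
    // 1) fetch one chunk of rows
    $stmt = $pdo->query(sprintf('SELECT home, away FROM round LIMIT %d OFFSET %d', $chunk_size, $offset));
    $rows = $stmt->fetchAll(PDO::FETCH_ASSOC);

    // 2) aggregate / calculate intermediary results
    foreach ($rows as $row) {
        do_stuff($row);
    }

    // 3) unset variables to free memory
    $count = count($rows);
    unset($rows, $stmt);

    // 4) back to step 1 (fetch the next set of rows)
    $offset += $chunk_size;
} while ($count === $chunk_size);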
I haven't done this before in PHP, but you may consider fetching the rows using a scrollable cursor - see the fetch documentation for an example.
Instead of returning all the results of your query at once back to your PHP script, it holds the results on the server side and you use a cursor to iterate through them getting one at a time.
Whilst I have not tested this, it is bound to have other drawbacks such as utilising more server resources and most likely reduced performance due to additional communication with the server.
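For reference, a scrollable-cursor attempt might look roughly like the sketch below (untested; whether it actually avoids client-side buffering depends on the driver):
$stmt = $pdo->prepare('SELECT home, away FROM round',
    array(PDO::ATTR_CURSOR => PDO::CURSOR_SCROLL));
$stmt->execute();

// walk the result one row at a time via the cursor
$row = $stmt->fetch(PDO::FETCH_ASSOC, PDO::FETCH_ORI_NEXT);
while ($row !== false) {
    do_stuff($row);
    $row = $stmt->fetch(PDO::FETCH_ASSOC, PDO::FETCH_ORI_NEXT);
}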
Altering the fetch style may also have an impact, as by default the documentation indicates it will store both an associative array as well as a numerically indexed array, which is bound to increase memory usage.
As others have suggested, reducing the number of results in the first place is most likely a better option if possible.

PHP out of memory, breaking an array into subarrays

I am using an update function where I insert some 40,000 rows into a MySQL database. While building that array, I am getting an out of memory error (tried to allocate 41 bytes).
The final function is like this:
function Tright($area) {
    foreach ($area as $a1 => &$a2) {
        mysql_query('INSERT INTO 0_right SET section=\'' . $a2['sec_id'] . '\', user_id=\'' . $a2['user_id'] . '\', rank_id=\'' . $a2['rank_id'] . '\', menu_id=\'' . $a2['menu_id'] . '\', droit = 1;');
    }
}
I have two questions. Is it natural that the workload above becomes too much for PHP to handle?
If not, can anyone suggest where I should check? And if so, is there a way to break that $area array into subarrays and run the function on each, so that I don't hit the out of memory issue? Any other workaround?
Thanks guys.
Edit: @halfdan, @Patrick Fisher, both of you have spoken about making a single multi-insert query. How do you do that in this example, please?
First, you should combine all the values into a single INSERT statement, instead of 40,000 different queries.
Second, yes, it is quite natural that you are running out of memory. You can increase this limit at runtime with ini_set(), e.g. ini_set('memory_limit', '16M');
To insert multiple values at once, your SQL should look something like this:
INSERT INTO 0_right (section, user_id, rank_id, menu_id, droit) VALUES
(1,1,1,1,1),
(1,2,1,1,1),
(1,3,1,1,1)
You can build the query like so:
$values = '';
foreach ($area as $a) {
    if ($values != '') {
        $values .= ',';
    }
    $values .= "('{$a['sec_id']}', '{$a['user_id']}', '{$a['rank_id']}', '{$a['menu_id']}', 1)";
}
$sql = "INSERT INTO 0_right (section, user_id, rank_id, menu_id, droit) VALUES $values";
These are all nice answers to get around the problem. If this is a one-time script, just bump up the RAM, make sure the script doesn't time out (max_execution_time in the php.ini file), and you should be fine.
It may run faster as one big INSERT statement, but then you'd pay the cost of constructing the huge query on the PHP side (so the out of memory issue will still be there, and will be even worse with the string concatenation). But honestly, who cares if you're just running this once?
However, if you're to perform this operation all the time (e.g. on a webpage), I'd recommend other approaches... like restricting the size of the area, cutting the feature or storing the data differently.
PHP has a built-in (configurable) memory limit to prevent a single script that goes the wrong way from bogging down the whole machine. Depending on your version, that limit defaults to 8MB (pre-5.2.0), 16MB (5.2.0), or 128MB (5.3.0).
You can change the limit either via ini_set or in the php.ini.
