SQL INSERT or UPDATE times out on large data in loop - PHP

I have some data with a userid and a date.
Sometimes there is a large amount of data that I need to loop through to update the SQL database, but the request times out.
Is there a better way to do this? Sample code is below.
foreach ($time[$re->userid][$today] as $t) {
    if (($re->time >= $t->in_from) && ($re->time < $t->in_to)
        && md5($t->WorkDay . "_in" . $re->date) != $in) { // in
        $tble = tools::sd("{$t->WorkDay} in");
    }
    if (($re->time >= $t->out_from) && ($re->time < $t->out_to)
        && md5($t->WorkDay . "_out" . $re->date) != $out) { // out
        $tble = tools::sd("{$t->WorkDay} out");
        if ($tble == 'nout') {
            $re->date2 = tools::ndate($re->date . "- 1");
        }
    }
    if (!empty($tble)) {
        $q = array(
            "id" => $re->userid,
            "dt" => $re->date2,
            "{$tble}" => $re->time
        );
        dump($q); // insert into sql
    }
}
The dump function:
function dump($d = '')
{
    if (!empty($d)) {
        end($d);
        $tble = key($d);
        $d['ld'] = "{$d['dt']} {$d[$tble]}";
        $r = $GLOBALS['mssqldb']->get_results("
            IF NOT EXISTS (SELECT id, ld, dt, {$tble} FROM clockL
                WHERE id = '{$d['id']}'
                AND dt = '{$d['dt']}')
                INSERT INTO clockL (id, ld, dt, {$tble})
                VALUES ('{$d['id']}', '{$d['ld']}', '{$d['dt']}', '{$d[$tble]}')
            ELSE IF EXISTS (SELECT id, {$tble} FROM clockL
                WHERE id = '{$d['id']}'
                AND dt = '{$d['dt']}'
                AND {$tble} = 'NOC')
                UPDATE clockL SET {$tble} = '{$d[$tble]}', ld = '{$d['ld']}'
                WHERE id = '{$d['id']}'
                AND dt = '{$d['dt']}' AND {$tble} = 'NOC'
        ");
        //print_r($GLOBALS['mssqldb']);
    }
}
Thank You.

Do the insert/update outside of the loop. Enclose it in a transaction so that you don't get an inconsistent database state if the script dies prematurely. One big query is usually faster than making lots of small queries. You might also set higher values for the time and memory limits, but be aware of the consequences.
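A minimal sketch of that idea, assuming a PDO connection and the clockL layout from the question; the queue-then-commit structure is the point, not the exact columns, and $pdo and $records are placeholders:
// Queue the rows inside the existing loop, then write them in one
// transaction. $pdo, $records, and the column list are assumptions.
$rows = [];
foreach ($records as $re) {
    // ... the existing in/out logic that picks $tble and $re->date2 ...
    $rows[] = [$re->userid, $re->date2, $re->date2 . ' ' . $re->time];
}
$pdo->beginTransaction();
try {
    $stmt = $pdo->prepare('INSERT INTO clockL (id, dt, ld) VALUES (?, ?, ?)');
    foreach ($rows as $r) {
        $stmt->execute($r);
    }
    $pdo->commit();   // one commit instead of one round trip per row
} catch (Exception $e) {
    $pdo->rollBack(); // no half-written state if the script dies
    throw $e;
}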

Are you aware of the PHP function set_time_limit()? You can find the detailed documentation here.
It controls the maximum execution time, which defaults to 30 seconds. If you set it to 0, e.g. set_time_limit(0), there is no execution time limit at all.
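For example, at the top of the batch script (whether an unlimited run is acceptable is a judgment call):
// 0 removes the limit entirely; a finite ceiling is often the safer choice.
set_time_limit(0);      // no limit at all
// set_time_limit(300); // or: allow up to 5 minutes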

The looping itself may be the reason for the timeout: while you perform the insert/update operations inside the loop, the database connection stays open until the loop terminates, which can cause the timeout.
Try doing the insert/update operation outside of the loop.
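A rough sketch of that pattern, using MySQLi for brevity (the question itself uses an MSSQL wrapper, so treat the connection, table, and columns as placeholders):
// Build all value tuples inside the loop, then run ONE insert at the end.
$tuples = [];
foreach ($data as $row) {
    $id = $mysqli->real_escape_string($row['id']);
    $dt = $mysqli->real_escape_string($row['dt']);
    $tm = $mysqli->real_escape_string($row['time']);
    $tuples[] = "('$id', '$dt', '$tm')";
}
if (!empty($tuples)) {
    // A single round trip instead of one query per row.
    $mysqli->query('INSERT INTO clockL (id, dt, tm) VALUES ' . implode(', ', $tuples));
}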

Related

Database overload in long task using Laravel

I'm currently struggling with an issue that is overloading my database, which delays all page requests significantly.
Current scenario
- A certain Artisan command is scheduled to run every 8 minutes
- This command has to update a whole table with more than 30,000 rows
- Every row gets a new value, which means 30,000 queries have to be executed
- For about 14 seconds the server doesn't answer due to database overload (I guess)
Here's the handle() method of the command:
public function handle()
{
    $thingies = /* Insert big query here */
    foreach ($thingies as $thing)
    {
        $resource = Resource::find($thing->id);
        if (!$resource)
        {
            continue;
        }
        $resource->update(['column' => $thing->value]);
    }
}
Is there any other approach to do this without delaying my page requests?
Your process is really inefficient and I'm not surprised it takes a long time to complete. To process 30,000 rows, you're making 60,000 queries (half to find out if the id exists, and the other half to update the row). You could be making just 1.
I have no experience with Laravel, so I'll leave it up to you to find out what functions in Laravel can be used to apply my recommendation. I just want to get you to understand the concepts.
MySQL allows you to submit a multi query: one command that executes many queries. It is drastically faster than executing individual queries in a loop. Here is an example that uses MySQLi directly (no third-party framework such as Laravel):
// The 30,000 new values and the record IDs they belong to. These values
// MUST be escaped or known to be safe.
$values = [
    ['id' => 145, 'fieldName' => 'a'], ['id' => 2, 'fieldName' => 'b'], ...
];
// %s and %d will be replaced with the column value and the id to look for
$qry_template = "UPDATE myTable SET fieldName = '%s' WHERE id = %d";
$queries = []; // array of all queries to be run
foreach ($values as $row) { // build and add queries
    $q = sprintf($qry_template, $row['fieldName'], $row['id']);
    array_push($queries, $q);
}
// combine all into one query
$combined = implode("; ", $queries);
// execute all queries at once
$mysqli->multi_query($combined);
I would look into how Laravel does multi queries and start there. The last time I implemented something like this, it took about 7 milliseconds to insert 3,000 rows. So updating 30,000 will definitely not take 14 seconds.
As an added bonus, there is no need to first run a query to figure out whether the ID exists. If it doesn't, nothing will be updated.
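One caveat worth noting about the multi_query() approach: it only reports the first statement's outcome, so the remaining result sets must be drained for errors in later statements to surface. A short sketch:
// Drain every result so an error in the 2nd..nth statement is not
// silently swallowed.
if ($mysqli->multi_query($combined)) {
    do {
        if ($result = $mysqli->store_result()) {
            $result->free();
        }
    } while ($mysqli->more_results() && $mysqli->next_result());
}
if ($mysqli->errno) {
    error_log('multi_query failed: ' . $mysqli->error);
}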
Thanks to @cyclone's comment I was able to update all the values in one single query.
It's not a perfect solution, but the query now executes in roughly 8 seconds and only one connection is required, which means page requests are still handled while the query runs.
I'm not marking this as the definitive answer since there may still be improvements to make.
$ids = [];
$caseQuery = '';
foreach ($thingies as $thing)
{
    if (strlen($caseQuery) == 0)
    {
        $caseQuery = '(CASE WHEN id = ' . $thing->id . ' THEN \'' . $thing->rank . '\' ';
    }
    else
    {
        $caseQuery .= ' WHEN id = ' . $thing->id . ' THEN \'' . $thing->rank . '\' ';
    }
    array_push($ids, $thing->id);
}
$caseQuery .= ' END)';
// Execute query
DB::update('UPDATE <table> SET <value> = ' . $caseQuery . ' WHERE id IN (' . implode(',', $ids) . ')');

How to handle a large post request

Unfortunately I can't show you the code, but I can give you an idea of what it looks like, what it does, and the problem I have...
<?php
include('db.php');
include('tools.php');
$c = new GetDB(); // Connection to DB
$t = new Tools(); // Classes to clean, prevent XSS and others
if (isset($_POST['var'])) {
    $nv = json_decode($_POST['var']);
    foreach ($nv as $k) {
        $id = $t->clean($k->id);
        // ... goes on for about 10 keys
        // this might seem redundant or insufficient
        $id = $c->real_escape_string($id);
        // ... goes on for the rest of the keys...
        $q = $c->query("SELECT * FROM table WHERE id = '$id'");
        $r = $q->fetch_row();
        if ($r[1] > 0) {
            // Item exists in DB, so just UPDATE
            $q1 = $c->query(UPDATE TABLE1);
            $q4 = $c->query(UPDATE TABLE2);
            if ($x == 1) {
                $q2 = $c->query(SELECT);
                $rq = $q2->fetch_row();
                if ($rq[0] > 0) {
                    // Item already in table, just update
                    $q3 = $c->query(UPDATE TABLE3);
                } else {
                    // Item not in table, so INSERT
                    $q3 = $c->query(INSERT TABLE3);
                }
            }
        } else {
            // Item not in DB, so INSERT
            $q1 = $c->query(INSERT TABLE1);
            $q4 = $c->query(INSERT TABLE2);
            $q3 = $c->query(INSERT TABLE4);
            if ($x == 1) {
                $q5 = $c->query(INSERT TABLE3);
            }
        }
    }
}
As you can see, it is a very basic INSERT/UPDATE script, so before releasing to full production we ran some tests to check that the script works as it should, and the "results" were excellent...
We ran this code against 100 requests and everything went just fine: less than 1.7 seconds for the 100 requests. But then we saw the amount of data that needed to be sent/posted, and it was a jaw drop for me: over 20K items. It takes about 3 to 5 minutes to send the POST, but the script always crashes. The "data" is an array in JSON:
array (
    [0] => array (
        [id] => 1,
        [val2] => 1,
        [val3] => 1,
        [val4] => 1,
        [val5] => 1,
        [val6] => 1,
        [val7] => 1,
        [val8] => 1,
        [val9] => 1,
        [val10] => 1
    ),
    [1] => array (
        [id] => 2,
        [val2] => 2,
        [val3] => 2,
        [val4] => 2,
        [val5] => 2,
        [val6] => 2,
        [val7] => 2,
        [val8] => 2,
        [val9] => 2,
        [val10] => 2
    ),
    //... about 10 to 20K, depending on the day and time
)
...but in JSON. Anyway, sending the information is not the problem; as I said, it can take about 3 to 5 minutes. The problem is the code that receives the data and runs the queries. On normal shared hosting we get a 503 error, which debugging showed to be a timeout. On our VPS we can increase max_execution_time to whatever we need, and processing 10K+ items takes about 1 hour there, but on shared hosting we can't change max_execution_time. So I asked the other developer, the one sending the information, to send a batch of 1K instead of 10K+ in one blow, let it rest for a second, then send another batch, and so on. So far I haven't got any answer. I was thinking of doing the "pause" on my end instead: after processing 1K items, wait for a second, then continue. But I don't see that as being as efficient as receiving the data in batches. How would you solve this?
Sorry, I don't have enough reputation to comment everywhere yet, so I have to write this as an answer. I would recommend zedfoxus' method of batch processing above. In addition, I would highly recommend figuring out a way of processing those queries faster. Keep in mind that every single PHP function call, etc. gets multiplied by every row of data. Here are just a couple of the ways you might be able to get better performance:
Use prepared statements. This will allow MySQL to cache the memory operation for each successive query. This is really important.
If you use prepared statements, then you can drop the $c->real_escape_string() calls. I would also scratch my head to see what you can safely leave out of the $t->clean() method.
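As a concrete illustration of the prepared-statement point, a sketch assuming $c behaves like a mysqli connection (the table and column names are placeholders, not the poster's real schema):
// Prepare once, execute many times; the server parses the statement once
// and the driver handles escaping of the bound values.
$stmt = $c->prepare('UPDATE TABLE1 SET val2 = ? WHERE id = ?');
$stmt->bind_param('si', $val2, $id); // binds by reference
foreach ($nv as $k) {
    $id   = (int) $k->id; // the cast replaces real_escape_string for ints
    $val2 = $k->val2;
    $stmt->execute();     // reuses the prepared plan each iteration
}
$stmt->close();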
Next, I would evaluate the cost of processing every single row individually. I'd have to benchmark it to be sure, but I think running a few PHP statements beforehand will be faster than making umpteen unnecessary MySQL SELECT and UPDATE calls. MySQL is much faster at inserting multiple rows at a time. If you expect multiple rows of your input to be changing the same row in the database, then you might want to consider the following:
a. Think about creating a temporary, precompiled array (depending on memory usage involved) that stores the unique rows of data. I would also consider doing the same for the secondary TABLE3. This would eliminate needless "update" queries, and make part b possible.
b. Consider a single query that selects every id from the database that's in the array. This will be the list of items to use an UPDATE query for. Update each of these rows, removing them from the temporary array as you go. Then, you can create a single, multi-row insert statement (prepared, of course), that does all of the inserts at a single time.
Take a look at optimizing your MySQL server parameters to better handle the load.
I don't know if this would speed up a prepared INSERT statement at all, but it might be worth a try. You can wrap the INSERT statement within a transaction as detailed in an answer here: MySQL multiple insert performance
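Putting point b and the transaction idea together, here is a hedged sketch of a single multi-row prepared INSERT using PDO (table and column names are placeholders, not the poster's schema):
// Build one INSERT with N placeholder groups, run it inside a transaction.
$placeholders = implode(',', array_fill(0, count($rows), '(?, ?, ?)'));
$params = [];
foreach ($rows as $r) {
    $params[] = $r['id'];
    $params[] = $r['val2'];
    $params[] = $r['val3'];
}
$pdo->beginTransaction();
$stmt = $pdo->prepare("INSERT INTO TABLE1 (id, val2, val3) VALUES $placeholders");
$stmt->execute($params); // one statement performs all the inserts
$pdo->commit();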
I hope that helps, and if anyone else has some suggestions, just post them in the comments and I'll try to include them.
Here's a look at the original code with just a few suggestions for changes:
<?php
/* You can make sure that the connection type is persistent and
 * I personally prefer using the PDO driver.
 */
include('db.php');
/* Definitely think twice about each tool that is included.
 * Only include what you need to evaluate the submitted data.
 */
include('tools.php');
$c = new GetDB(); // Connection to DB
/* Take a look at optimizing the code in the Tools class.
 * Avoid any and all kinds of loops: this code is going to be used in
 * a loop and could easily turn into an O(n^2) performance drain.
 * Minimize the amount of string manipulation requests.
 * Optimize regular expressions.
 */
$t = new Tools(); // Classes to clean, prevent XSS and others
if (isset($_POST['var'])) { // !empty() catches more cases than isset()
    $nv = json_decode($_POST['var']);
    /* LOOP LOGIC
     * Definitely test my hypothesis yourself, but this is similar
     * to what I would try first.
     */
    // Row-in-database query
    $inTableSQL = "SELECT id FROM TABLE1 WHERE id IN("; // keep adding to it
    foreach ($nv as $k) {
        /* I would personally use specific methods per data type.
         * Here, I might use a type cast, plus a valid int range check.
         */
        $id = $t->cleanId($k->id); // I would include a type cast: (int)
        // Similarly for other values
        // etc.
        // Then save validated data to the array(s)
        $data[$id] = array($values...);
        /* Now would also be a good time to add the id to the SELECT
         * statement
         */
        $inTableSQL .= "$id,";
    }
    $inTableSQL .= ");";
    // Execute query here
    // Then step through the query ids returned, perform UPDATEs,
    // remove the array element once UPDATE is done (use prepared statements)
    foreach (.....
    /* Then, insert the remaining rows all at once...
     * You'll have to step through the remaining array elements to
     * prepare the statement.
     */
    foreach(.....
} // end initial POST data if
/* Everything below here becomes irrelevant */
foreach ($nv as $k) {
    $id = $t->clean($k->id);
    // ... goes on for about 10 keys
    // this might seem redundant or insufficient
    $id = $c->real_escape_string($id);
    // ... goes on for the rest of the keys...
    $q = $c->query("SELECT * FROM table WHERE id = '$id'");
    $r = $q->fetch_row();
    if ($r[1] > 0) {
        // Item exists in DB, so just UPDATE
        $q1 = $c->query(UPDATE TABLE1);
        $q4 = $c->query(UPDATE TABLE2);
        if ($x == 1) {
            $q2 = $c->query(SELECT);
            $rq = $q2->fetch_row();
            if ($rq[0] > 0) {
                // Item already in table, just update
                $q3 = $c->query(UPDATE TABLE3);
            } else {
                // Item not in table, so INSERT
                $q3 = $c->query(INSERT TABLE3);
            }
        }
    } else {
        // Item not in DB, so INSERT
        $q1 = $c->query(INSERT TABLE1);
        $q4 = $c->query(INSERT TABLE2);
        $q3 = $c->query(INSERT TABLE4);
        if ($x == 1) {
            $q5 = $c->query(INSERT TABLE3);
        }
    }
}
The key is to minimize queries. Often, where you are looping over data doing one or more queries per iteration, you can replace it with a constant number of queries. In your case, you'll want to rewrite it into something like this:
include('db.php');
include('tools.php');
$c = new GetDB(); // Connection to DB
$t = new Tools(); // Classes to clean, prevent XSS and others
if (isset($_POST['var'])) {
    $nv = json_decode($_POST['var']);
    $table1_data = array();
    $table2_data = array();
    $table3_data = array();
    $table4_data = array();
    foreach ($nv as $k) {
        $id = $t->clean($k->id);
        // ... goes on for about 10 keys
        // this might seem redundant or insufficient
        $id = $c->real_escape_string($id);
        // ... goes on for the rest of the keys...
        $table1_data[] = array( ... );
        $table2_data[] = array( ... );
        $table4_data[] = array( ... );
        if ($x == 1) {
            $table3_data[] = array( ... );
        }
    }
    $values = array_to_sql($table1_data);
    $c->query("INSERT INTO TABLE1 (...) VALUES $values ON DUPLICATE KEY UPDATE ...");
    $values = array_to_sql($table2_data);
    $c->query("INSERT INTO TABLE2 (...) VALUES $values ON DUPLICATE KEY UPDATE ...");
    $values = array_to_sql($table3_data);
    $c->query("INSERT INTO TABLE3 (...) VALUES $values ON DUPLICATE KEY UPDATE ...");
    $values = array_to_sql($table4_data);
    $c->query("INSERT IGNORE INTO TABLE4 (...) VALUES $values");
}
While your original code executed between 3 and 5 queries per row of input data, the code above executes only 4 queries in total.
I leave the implementation of array_to_sql to the reader, but hopefully this explains the idea. TABLE4 gets an INSERT IGNORE here because you didn't have an UPDATE in the "found" branch of your original loop.
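One possible shape for that array_to_sql() helper, as an assumption about the intended signature (here the connection is passed in explicitly so the helper can reuse its escaping):
// Hypothetical array_to_sql(): turns a list of row arrays into the
// "(..),(..),(..)" portion of a multi-row INSERT. Assumes $c exposes a
// mysqli-style real_escape_string().
function array_to_sql(array $rows, $c) {
    $tuples = [];
    foreach ($rows as $row) {
        $vals = array_map(function ($v) use ($c) {
            return "'" . $c->real_escape_string($v) . "'";
        }, array_values($row));
        $tuples[] = '(' . implode(',', $vals) . ')';
    }
    return implode(',', $tuples);
}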

MySQL INSERTing rows into a database with datetime sanity checks

I am making a meeting room booking system in which no booking's times should fall within another booking's start and end dates, so the validation should check that no dates/times fall within an existing start/end time frame.
I have two tables. I can insert into them fine with both start and end dates, so the only columns I am interested in at the moment are these:
meetingrooms
| ... | bookingtime | bookingend |
I understand the principle behind the sanity check and can express the check in pseudocode. Here is the code I have so far:
p4a_db::singleton()->query("INSERT INTO meetingrooms(location_id, bookingtime, bookingend, merono_id)
    WHERE bookingtime < " . $date . " AND bookingend > " . $date . "
    OR bookingdate < " . $date . " AND bookingend > " . $dateend . "
    VALUES (?,?,?,?)",
    array($location, $date, $dateend, $merono));
I don't want to insert data directly into the statement, but until I understand how to do this I am stuck. So, the question:
How do I perform a sanity check before the data is inserted, so that I don't get dates within already-booked times?
Any help would be greatly appreciated.
Edit:
I've been overthinking my answer, and I realized that the old solution will not work in your case: since you need the time span, comparing only the start and end dates is useless.
My way of processing this would be:
1. Save the times as integers, using the 24h system (7:40 am is 740, 9:50 pm is 2150).
2. Check for stored bookings where Start < NewStart < End.
3. Check for stored bookings where Start < NewEnd < End.
4. When processing several rooms, store room number + time as the integer. That way you can still use the method from 2 and 3.
Steps 2 and 3 can be done in a single SQL query; check out this link. A sketch of the combined check follows below.
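For what it's worth, checks 2 and 3 can be collapsed into the classic interval-overlap predicate. A sketch using the table and fetch helper that appear in the accepted answer below (treat the schema as an assumption):
// Two bookings overlap exactly when each one starts before the other ends.
$sql = "SELECT COUNT(*) AS num
    FROM meeting_room_bookings
    WHERE meeting_room_id = ?
      AND start_at < ?  -- existing row starts before the new end
      AND end_at > ?";  // and ends after the new start
$row = p4a_db::singleton()->fetchRow($sql, array($merono, $dateend, $date));
$free = ($row['num'] == 0); // safe to insert only when no overlap exists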
Old answer (checking for duplicates)
This is an example of how to check for duplicates (in this case email) before inserting the text:
$emailexist = $mysqli->prepare("SELECT email FROM users WHERE email = ?");
$emailexist->bind_param('s', $email);
$emailexist->execute();
$emailexist->store_result();
if ($emailexist->num_rows > 0) {
    $emailexist->close();
    $mysqli->close();
    return true;
} else {
    $emailexist->close();
    $mysqli->close();
    return false;
}
It checks whether there are rows containing the string. If so (the number of rows is higher than 0), it returns true, meaning the date already exists.
You can adapt this to your code.
However, you could also just set the column to UNIQUE. Then you get an error when trying to insert a duplicate. That is easier, and you won't have problems with concurrent connections.
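For the UNIQUE route, the one-time schema change could look like this; the index name is a placeholder, and the table and column follow the example above:
// After this, a concurrent duplicate INSERT fails with error 1062
// (duplicate key) instead of racing past the SELECT check.
$mysqli->query('ALTER TABLE users ADD UNIQUE KEY uq_email (email)');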
After a long and intensive search, I now have a working example of this method, along with a way of protecting against SQL injection. Here's the code:
if ($this->BookingValue == 1)
{
    $sql = "SELECT COUNT(*) AS num FROM meeting_room_bookings
        WHERE
        (
            (? < start_at AND ? > start_at)
            OR
            (? > start_at AND ? < end_at)
        )
        AND
        meeting_room_id = ?";
    $result = p4a_db::singleton()->fetchRow($sql, array($date, $date, $date, $dateend, $merono));
    if (0 == $result["num"])
    {
        p4a_db::singleton()->query("INSERT INTO meeting_room_bookings (start_at, end_at, meeting_room_id)
            VALUES (?,?,?)", array($date, $dateend, $merono));
        return true;
    }
    else
    {
        return false;
    }
}
There isn't much to explain about this code, but in terms of differences (excluding the change in column and table names), the query is now prepared before any value is inserted, and its result can be used in an if statement, allowing the validation to filter out bookings that overlap existing dates.
Along with this, I added validation so that bookings from other meeting rooms are not included in the check, via the AND clause that limits the query to a single meeting room id.
Although now, and this will lead to a separate question, an error is thrown from this statement. I know the insert is sound, but something in the prepared statement causes this error:
SQLSTATE[HY093]: Invalid parameter number: number of bound variables does not match number of tokens
File: Pdo.php, Line: 234
I am looking into the error thrown by the prepared statement and will update this answer when there is a fix. Thanks for the help.

Fast inserts result in data loss

I have a strange bug here related to MySQL and PHP. I'm wondering if it could be a performance problem on our server's behalf, too.
I have a class used to manage rebate promotional codes. The code works fine and does exactly what it is supposed to do. The saveChanges() operation sends an INSERT or UPDATE depending on the state of the object; in the current context it will only insert, because I'm trying to generate coupon codes.
The class's saveChanges() goes like this (I know I shouldn't be using the old mysql extension, but I've got no choice due to architectural limitations of the software, so please don't complain about that part):
public function saveChanges($asNew = false){
    //Get the connection
    global $conn_panier;
    //Check if the rebate still exists
    if($this->isNew() || $asNew){
        //Check unicity if new
        if(reset(mysql_fetch_assoc(mysql_query('SELECT COUNT(*) FROM panier_rabais_codes WHERE code_coupon = "'.mysql_real_escape_string($this->getCouponCode(), $conn_panier).'"', $conn_panier))) > 0){
            throw new Activis_Catalog_Models_Exceptions_ValidationException('Coupon code "'.$this->getCouponCode().'" already exists in the system', $this, __METHOD__, $this->getCouponCode());
        }
        //Insert the new rebate code
        mysql_query($q = 'INSERT INTO panier_rabais_codes
            (
                `no_rabais`,
                `code_coupon`,
                `utilisation`,
                `date_verrou`
            )VALUES(
                '.$this->getRebate()->getId().',
                "'.mysql_real_escape_string(stripslashes($this->getCouponCode()), $conn_panier).'",
                '.$this->getCodeUsage().',
                "'.($this->getInvalidityDate() === NULL ? '0000-00-00 00:00:00' : date('Y-m-d G:i:s', strtotime($this->getInvalidityDate()))).'"
            )', $conn_panier);
        return (mysql_affected_rows($conn_panier) >= 1);
    }else{
        //Update the existing rebate
        mysql_query('UPDATE panier_rabais_codes
            SET
                `utilisation` = '.$this->getCodeUsage().',
                `date_verrou` = "'.($this->getInvalidityDate() === NULL ? '0000-00-00 00:00:00' : date('Y-m-d G:i:s', strtotime($this->getInvalidityDate()))).'"
            WHERE
                no_rabais = '.$this->getRebate()->getId().' AND code_coupon = "'.mysql_real_escape_string($this->getCouponCode(), $conn_panier).'"', $conn_panier);
        return (mysql_affected_rows($conn_panier) >= 0);
    }
}
So as you can see, the code itself is pretty simple and clean; it returns true if the insert succeeded and false if not.
The other portion of the code generates the codes using a random algorithm and goes like this:
while($codes_to_generate > 0){
    //Sleep to delay mysql choking on the input
    usleep(100);
    //Generate a random code
    $code = strtoupper('RC'.$rebate->getId().rand(254852, 975124));
    $code .= strtoupper(substr(md5($code), 0, 1));
    $rebateCode = new Activis_Catalog_Models_RebateCode($rebate);
    $rebateCode->setCouponCode($code);
    $rebateCode->setCodeUsage($_REQUEST['utilisation_generer']);
    try{
        if($rebateCode->saveChanges()){
            $codes_to_generate--;
            $generated_codes[] = $code;
        }
    }catch(Exception $ex){
    }
}
As you can see, there are two things to note here. First, the number of codes to generate and the array of generated codes are only updated if saveChanges() returns true, so MySQL HAS to report that the information was inserted for this part to happen.
The other tidbit is the first line of the while loop:
//Sleep to delay mysql choking on the input
usleep(100);
Wtf? Well, this post is all about that line. My code works flawlessly with small numbers of codes to generate, but if I ask MySQL to save more than a few codes at once, I have to throttle myself using usleep() or MySQL drops some of the lines: it reports that there are affected rows but does not actually save them.
Under 100 lines I don't need any throttling; beyond that I have to usleep() in proportion to the number of lines to insert. It must be something simple, but I don't know what. Here is a summary of the line counts I tried to insert and the minimum usleep throttle I had to implement:
< 100 lines: none
< 300 lines: 2 ms
< 1000 lines: 5 ms
< 2000 lines: 10 ms
< 5000 lines: 20 ms
< 10000 lines: 100 ms
Thank you for your time
Are you sure that your codes are all inserted and not updated? Updating a non-existent row does nothing.
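One way to test that theory is a diagnostic sketch in the same legacy mysql_* style the poster is constrained to: require exactly one affected row and log the error text instead of discarding it.
// Inside saveChanges(), tighten the loose success check:
$ok = mysql_query($q, $conn_panier);
if (!$ok || mysql_affected_rows($conn_panier) !== 1) {
    // Surface WHY the row was lost instead of silently returning false.
    error_log('Coupon insert failed: ' . mysql_error($conn_panier));
    return false;
}
return true;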

Maximum execution time of 30 seconds exceeded - PHP

When I run my script I receive the following error before it has processed all rows of data:
maximum execution time of 30 seconds exceeded
After researching the problem, I gather I should be able to extend the max_execution_time, which should resolve the problem.
But being in my PHP programming infancy, I would like to know if there is a more optimal way of writing my script below, so I do not have to rely on "get out of jail" cards.
The script:
1. Takes a CSV file
2. Cherry-picks some columns
3. Tries to insert 10k rows of CSV data into a MySQL table
In my head I think I should be able to insert in chunks, but that is so far beyond my skill set I do not even know how to write one line :\
Many thanks in advance
<?php
function processCSV()
{
    global $uploadFile;
    include 'dbConnection.inc.php';
    dbConnection("xx", "xx", "xx");
    $rowCounter = 0;
    $loadLocationCsvUrl = fopen($uploadFile, "r");
    if ($loadLocationCsvUrl <> false)
    {
        while ($locationFile = fgetcsv($loadLocationCsvUrl, ','))
        {
            $officeId = $locationFile[2];
            $country = $locationFile[9];
            $country = trim($country);
            $country = htmlspecialchars($country);
            $open = $locationFile[4];
            $open = trim($open);
            $open = htmlspecialchars($open);
            $insString = "insert into countrytable set officeId='$officeId', countryname='$country', status='$open'";
            switch ($country)
            {
                case $country <> 'Country':
                    if (!mysql_query($insString))
                    {
                        echo "<p>error " . mysql_error() . "</p>";
                    }
                    break;
            }
            $rowCounter++;
        }
        echo "$rowCounter inserted.";
    }
    fclose($loadLocationCsvUrl);
}
processCSV();
?>
First, in 2011 you do not use mysql_query; you use mysqli or PDO with prepared statements. Then you do not need to figure out how to escape strings for SQL. You used htmlspecialchars, which is totally wrong for this purpose. Next, you could use a transaction to speed up many inserts. MySQL also supports multi-row inserts.
But the best bet would be to use the CSV storage engine: http://dev.mysql.com/doc/refman/5.0/en/csv-storage-engine.html. You can instantly load everything into SQL and then manipulate it there as you wish. The article also shows the LOAD DATA INFILE command.
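For instance, a hedged LOAD DATA sketch matching the question's table; the file path is a placeholder, and the CSV column positions are assumptions based on the indexes used in the script above:
// One statement imports the whole file; typically orders of magnitude
// faster than row-by-row INSERTs. Requires local-infile to be enabled.
$sql = "LOAD DATA LOCAL INFILE '/path/to/upload.csv'
    INTO TABLE countrytable
    FIELDS TERMINATED BY ','
    LINES TERMINATED BY '\\n'
    (@c0, @c1, officeId, @c3, status, @c5, @c6, @c7, @c8, countryname)";
mysql_query($sql);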
Well, you could create a single query like this.
$query = "INSERT INTO countrytable (officeId, countryname, status) VALUES ";
$entries = array();
while ($locationFile = fgetcsv($loadLocationCsvUrl, ',')) {
// your code
$entries[] = "('$officeId', '$country', '$open')";
}
$query .= implode(', ', $enties);
mysql_query($query);
But this depends on how long your query will be and what the server's limit is set to.
But as you can read in other posts, there are better ways to meet your requirements. I thought I should share the approach you were already thinking about, though. See the chunking sketch below for keeping the query size bounded.
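If the combined statement could exceed the server's max_allowed_packet limit, chunking the collected tuples keeps each query bounded. A sketch building on the code above:
// Insert in batches of 500 tuples so no single query grows unbounded.
foreach (array_chunk($entries, 500) as $batch) {
    mysql_query('INSERT INTO countrytable (officeId, countryname, status) VALUES '
        . implode(', ', $batch));
}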
You can try calling the following function before inserting. It sets the time limit to unlimited instead of the default 30 seconds.
set_time_limit(0);
