Maximum execution time of 30 seconds exceeded - PHP

When I run my script, I receive the following error before it has processed all rows of data:
Maximum execution time of 30 seconds exceeded
After researching the problem, it seems I should be able to extend max_execution_time, which ought to resolve the issue.
But being in my PHP programming infancy, I would like to know if there is a more optimal way of writing my script below, so I do not have to rely on "get out of jail" cards.
The script is:
1. Taking a CSV file
2. Cherry-picking some columns
3. Trying to insert 10k rows of CSV data into a MySQL table
In my head I think I should be able to insert in chunks, but that is so far beyond my skillset I do not even know how to write one line :\
Many thanks in advance
<?php
function processCSV()
{
    global $uploadFile;
    include 'dbConnection.inc.php';
    dbConnection("xx","xx","xx");
    $rowCounter = 0;
    $loadLocationCsvUrl = fopen($uploadFile,"r");
    if ($loadLocationCsvUrl <> false)
    {
        while ($locationFile = fgetcsv($loadLocationCsvUrl, ','))
        {
            $officeId = $locationFile[2];
            $country = $locationFile[9];
            $country = trim($country);
            $country = htmlspecialchars($country);
            $open = $locationFile[4];
            $open = trim($open);
            $open = htmlspecialchars($open);
            $insString = "insert into countrytable set officeId='$officeId', countryname='$country', status='$open'";
            switch($country)
            {
                case $country <> 'Country':
                    if (!mysql_query($insString))
                    {
                        echo "<p>error " . mysql_error() . "</p>";
                    }
                    break;
            }
            $rowCounter++;
        }
        echo "$rowCounter inserted.";
    }
    fclose($loadLocationCsvUrl);
}
processCSV();
?>

First, in 2011 you do not use mysql_query. Use mysqli or PDO with prepared statements; then you do not need to figure out how to escape strings for SQL at all. You used htmlspecialchars, which is the wrong tool for this purpose (it is meant for HTML output, not SQL). Next, you can wrap the many inserts in a transaction to speed them up. MySQL also supports multiple-row inserts.
But the best bet would be to use the CSV storage engine. Read here: http://dev.mysql.com/doc/refman/5.0/en/csv-storage-engine.html. You can instantly load everything into SQL and then manipulate it there as you wish. The article also shows the LOAD DATA INFILE command.
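For illustration, here is a minimal sketch of the prepared-statement-plus-transaction approach with PDO. The table and column names come from your script; the DSN, credentials and the header-row check are assumptions you would adapt to your setup.
<?php
// Minimal sketch: PDO prepared statement inside one transaction.
// DSN and credentials are placeholders; countrytable and its columns
// are taken from the question.
$pdo = new PDO('mysql:host=localhost;dbname=yourdb;charset=utf8', 'user', 'pass',
    array(PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION));

$stmt = $pdo->prepare(
    'INSERT INTO countrytable (officeId, countryname, status) VALUES (?, ?, ?)'
);

$fh = fopen($uploadFile, 'r');
$pdo->beginTransaction();

while (($row = fgetcsv($fh)) !== false) {
    if (trim($row[9]) === 'Country') {
        continue; // skip the header row, as your switch statement tries to do
    }
    // No manual escaping needed: bound values are handled by the driver.
    $stmt->execute(array($row[2], trim($row[9]), trim($row[4])));
}

$pdo->commit();
fclose($fh);
?>
One transaction around all 10k inserts avoids a disk flush per row, which is usually where most of the time goes.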

Well, you could create a single query like this.
$query = "INSERT INTO countrytable (officeId, countryname, status) VALUES ";
$entries = array();
while ($locationFile = fgetcsv($loadLocationCsvUrl, ',')) {
// your code
$entries[] = "('$officeId', '$country', '$open')";
}
$query .= implode(', ', $enties);
mysql_query($query);
But this depends on how long your query will be and what the server limit is set to.
But as you can read in the other answers, there are better ways to meet your requirements. I just thought I should share the approach you were already thinking about.
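If the combined statement risks exceeding that server limit (max_allowed_packet), a rough sketch is to flush the $entries array in chunks, replacing the last two lines above; the chunk size of 500 is an arbitrary assumption to tune for your row width.
foreach (array_chunk($entries, 500) as $chunk) {
    // One multi-row INSERT per chunk keeps each statement well under the packet limit.
    mysql_query($query . implode(', ', $chunk)) or die(mysql_error());
}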

You can try calling the following function before inserting. It sets the time limit to unlimited instead of the default 30 seconds.
set_time_limit( 0 );

Related

PHP While Loop with MySQL Has Inconsistent Execution Time, How Can I Fix This?

My problem is simple. On my website I'm loading several results from MySQL tables inside a while loop in PHP, and for some reason the execution time varies from reasonably short (0.13s) to confusingly long (11s), and I have no idea why. Here is a short version of the code:
<?php
$sql =
    "SELECT * FROM test_users, image_uploads
     WHERE test_users.APPROVAL = 'granted'
       AND test_users.NAME = image_uploads.OWNER
     ".$checkmember."
     ".$checkselected."
     ORDER BY " . $sortingstring . " LIMIT 0, 27";

$result = mysqli_query($mysqli, $sql);
$data = "";
$c = 0;
$start = microtime(true);

while ($value = mysqli_fetch_array($result)) {
    $files_key = $value["KEY"];
    $file_hidden = "no";
    $inner_query = "SELECT * FROM my_table WHERE KEY = '".$files_key."' AND HIDDEN = '".$file_hidden."'";
    $inner_result = mysqli_query($mysqli, $inner_query);
    while ($row = mysqli_fetch_array($inner_result)) {
        // getting all variables with row[n]
    }

    $sql = "SELECT * FROM some_other_table WHERE THM=? AND MEMBER=?";
    $fstmt = $mysqli->prepare($sql);
    $fstmt->bind_param("ss", $value['THM'], 'username');
    $fstmt->execute();
    $fstmt->store_result();
    if ($fstmt->num_rows > 0) {
        $part0 = 'some elaborate string';
    } else {
        $part0 = 'some different string';
    }
    $fstmt->close();

    // generate a document using the gathered data
    include "../data.php"; // produces $partsMerged

    // save to data string
    $data .= $partsMerged;
    $c++;
}

$time_elapsed_secs = substr(microtime(true) - $start, 0, 5);
// takes sometimes only 0.13 seconds
// and other times up to 11 seconds and more
?>
I was wondering where the problem could be.
Does it have to do with my DB connection, or is my code flawed? I did not have this problem when I first implemented it, but for the past few months it has been behaving strangely. Sometimes it loads very fast; other times, as I said, it takes 11 seconds or even more.
How can I fix this?
There are a few ways to debug this.
Firstly, consider any dynamic variables that form part of your query (e.g. $checkmember): we have no way of knowing here whether these are the same or different each time you execute the query. If they differ, then each time you are effectively executing a different query! So it goes without saying it may take longer depending on which query is being run.
Regardless of the answer, try running the SQL through the MySQL command line and see how long that query takes.
If it's similar (i.e. not an 11 second range) then the answer is it's nothing to do with the actual query itself.
You need to say whether the environment you're running this in is a web server, e.g. accessing the PHP script via a browser, or executing the script via a command line.
There isn't enough information to answer your question. But you need to at least establish some of these things first.
The rule of thumb is that if your raw SQL executes on a MySQL command line in a similar amount of time on subsequent attempts, the problem area is elsewhere (e.g. connection to a web server via a browser). This can be monitored in the Network tab of your browser.
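To see which part of the loop is slow from the PHP side, a minimal sketch is to time each statement and log the outliers; the 0.5-second threshold below is an arbitrary assumption.
// Sketch: wrap each inner query in a timer so slow statements show up in the error log.
$t0 = microtime(true);
$inner_result = mysqli_query($mysqli, $inner_query);
$elapsed = microtime(true) - $t0;
if ($elapsed > 0.5) { // arbitrary threshold
    error_log(sprintf("slow query (%.3fs): %s", $elapsed, $inner_query));
}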

MySQL 8.0 Read Speed

I'm in need of some expertise here. I have a massive SQL database that I use in conjunction with a mobile app I program, and it is taking very long to fetch a result from the database, at times upwards of 20 to 25 seconds. I've already managed to improve it from the 40 seconds it originally took to where it is now. I am hoping someone may have some insight on how I can speed up the query and return a result in less than 20 seconds.
The main table has 4 columns plus 1 for the "id" column, and the database contains 15,254,543 rows of data. It is currently set up as InnoDB, with 4 indexes, and is about 1.3 GB in size.
My server is a GoDaddy VPS with 1 CPU and 4 GB of RAM. It is dedicated and I do not share resources with anyone else; its only purpose besides a very basic website is the SQL database.
Just to note, the database record count is not going to get any larger; I really just need to figure out a better way to return a query result in under 20 seconds.
In more detail, the Android app connects to my website via a PHP document to query and return the results. I have a feeling there may be a better way to go about this, and that this may be where the pitfall is. An interesting note is that when I'm in phpMyAdmin I can do a search and get a result back in under 3 seconds, which also suggests the issue might be in my PHP document. Here is the PHP document I wrote to do the work.
<?php
require "conn.php";

$FSC = $_POST["FSC"];
$PART_NUMBER = $_POST["NIIN"];

$mysql_qry_1 = "select * from MyTable where PART_NUMBER like '$PART_NUMBER';";
$result_1 = mysqli_query($conn, $mysql_qry_1);
if (mysqli_num_rows($result_1) > 0) {
    $row = mysqli_fetch_assoc($result_1);
    $PART_NUMBER = $row["PART_NUMBER"];
    $FSC = $row["FSC"];
    $NIIN = $row["NIIN"];
    $ITEM_NAME = $row["ITEM_NAME"];
    echo $ITEM_NAME, ",>" .$PART_NUMBER, ",>" .$FSC, "" .$NIIN;

    // usage stats
    $sql = "INSERT INTO USAGE_STATS (ITEM_NAME, FSC, NIIN, PART_NUMBER)
            VALUES ('$ITEM_NAME', '$FSC', '$NIIN', '$PART_NUMBER')";
    if ($conn->query($sql) === TRUE) {
        $row = mysqli_insert_id($conn);
    } else {
        // do nothing
    }
} else {
    echo "NO RESULT CHECK TO ENSURE CORRECT PART NUMBER WAS ENTERED ,> | ,>0000000000000";
}

$mysql_qry_2 = "select * from MYTAB where FSC like '$FSC' and NIIN like '$NIIN';";
$result_2 = mysqli_query($conn, $mysql_qry_2);
if (mysqli_num_rows($result_2) > 0) {
    $row = mysqli_fetch_assoc($result_2);
    $AD_PART_NUMBER = $row["PART_NUMBER"];
    if (mysqli_num_rows($result_2) > 1) {
        echo ",>";
        while ($row = mysqli_fetch_assoc($result_2)) {
            $AD_PART_NUMBER = $row["PART_NUMBER"];
            echo $AD_PART_NUMBER, ", ";
        }
    } else {
        echo ",> | NO ADDITIONAL INFO FOUND | ";
    }
} else {
    echo ",> | NO ADDITIONAL INFO FOUND | ";
}

mysqli_close($conn);
?>
So my question here is: how can I improve the read speed with the available resources I have, or is there an issue with my current PHP document that is causing the bottleneck?
Instead of using LIKE, you would get much faster reads by doing an exact match against a specific column that is indexed.
SELECT * FROM table_name FORCE INDEX (index_list) WHERE condition;
The other thing that speeds up MySQL greatly is using an SSD drive on the VPS server. An SSD will greatly decrease the amount of time it takes to scan a database that large.
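As a rough sketch of the exact-match idea, assuming PART_NUMBER is indexed (the index itself is an assumption here), the first lookup could become a prepared statement with = instead of LIKE:
// Sketch: exact match on an indexed column via a prepared statement.
// Assumes an index such as ALTER TABLE MyTable ADD INDEX (PART_NUMBER) exists.
$stmt = mysqli_prepare($conn, 'SELECT * FROM MyTable WHERE PART_NUMBER = ?');
mysqli_stmt_bind_param($stmt, 's', $PART_NUMBER);
mysqli_stmt_execute($stmt);
$result_1 = mysqli_stmt_get_result($stmt); // requires the mysqlnd driver
Binding the parameter also removes the SQL injection risk in the current string-interpolated query.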

SQL insert or update times out on large data in a loop

I have some data with a userid and date.
Sometimes there is a large amount of data I need to loop through to update the SQL database, but the request times out.
Is there a better way I can do this? Sample code below.
foreach ($time[$re->userid][$today] as $t) {
    if (($re->time >= $t->in_from) && ($re->time < $t->in_to)
        && md5($t->WorkDay."_in".$re->date) != $in) { // in
        $tble = tools::sd("{$t->WorkDay} in");
    }
    if (($re->time >= $t->out_from) && ($re->time < $t->out_to)
        && md5($t->WorkDay."_out".$re->date) != $out) { // out
        $tble = tools::sd("{$t->WorkDay} out");
        if ($tble == 'nout') {
            $re->date2 = tools::ndate($re->date . "- 1");
        }
    }
    if (!empty($tble)) {
        $q = array(
            "id" => $re->userid
            , "dt" => $re->date2
            , "{$tble}" => $re->time
        );
        dump($q); // insert into sql
    }
}
The dump function:
function dump($d = '')
{
    if (!empty($d)) {
        end($d);
        $tble = key($d);
        $d['ld'] = "{$d['dt']} {$d[$tble]}";
        $r = $GLOBALS['mssqldb']->get_results("
            IF NOT EXISTS (select id,ld,dt,{$tble} from clockL
                           WHERE id = '{$d['id']}'
                           AND dt = '{$d['dt']}')
                INSERT INTO clockL (id,ld,dt,{$tble})
                VALUES ('{$d['id']}','{$d['ld']}','{$d['dt']}','{$d[$tble]}')
            ELSE IF EXISTS (select id,{$tble} from clockL
                            WHERE id = '{$d['id']}'
                            AND dt = '{$d['dt']}'
                            AND {$tble} = 'NOC')
                update clockL SET {$tble} = '{$d[$tble]}', ld = '{$d['ld']}'
                WHERE id = '{$d['id']}' AND dt = '{$d['dt']}' AND {$tble} = 'NOC'
        ");
        //print_r($GLOBALS['mssqldb']);
    }
}
Thank You.
Do the insert/update outside of the loop. Enclose it in a transaction so that you don't get an inconsistent database state if the script dies prematurely. Using one big query is usually faster than making lots of small queries. You might also set higher values for the time and memory limits, but be aware of the consequences.
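A rough sketch of that idea, gathering the rows in the loop and writing them once inside a transaction. It is shown with PDO because the API of the $GLOBALS['mssqldb'] wrapper isn't visible in the question, $pdo is an assumed connection, the column whitelist is a hypothetical stand-in for your real clockL columns, and only the UPDATE half of your IF NOT EXISTS / ELSE logic is sketched.
// Gather the work first...
$rows = array();
foreach ($time[$re->userid][$today] as $t) {
    // ... the existing in/out checks that set $tble ...
    if (!empty($tble)) {
        $rows[] = array($re->userid, $re->date2, $tble, $re->time);
    }
}

// ...then write it in one transaction.
$allowed = array('mon_in', 'mon_out'); // hypothetical whitelist of clockL columns
$pdo->beginTransaction();
foreach ($rows as $r) {
    list($id, $dt, $col, $val) = $r;
    if (!in_array($col, $allowed, true)) {
        continue; // never interpolate an unchecked column name
    }
    $stmt = $pdo->prepare(
        "UPDATE clockL SET {$col} = ?, ld = ? WHERE id = ? AND dt = ? AND {$col} = 'NOC'"
    );
    $stmt->execute(array($val, "$dt $val", $id, $dt));
}
$pdo->commit();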
Are you aware of a PHP function called set_time_limit()? You can find the detailed documentation here.
This can manipulate the execution time, which defaults to 30 seconds. If you set it to 0, e.g. set_time_limit(0), there will be no execution time limit.
Maybe the looping is the reason for the timeout.
When you perform the insert/update operations inside the loop, the database connection stays open until the loop terminates, which may cause the timeout problem.
Try doing the insert/update operations outside of the loop.

Server error executing a large file

I have created a script which reads an XML file and adds its contents to the database. I am using XMLReader for this.
The problem is that my XML contains 500,000 products, which causes my page to time out. Is there a way for me to get around this?
My code below:
$z = new XMLReader;
$z->open('files/NAGardnersEBook.xml');
$doc = new DOMDocument;

# move to the first node
while ($z->read() && $z->name !== 'EBook');

# now that we're at the right depth, hop to the next <product/> until the end of the tree
while ($z->name === 'EBook')
{
    $node = simplexml_import_dom($doc->importNode($z->expand(), true));

    # Get the value of each node
    $title = mysql_real_escape_string($node->Title);
    $Subtitle = mysql_real_escape_string($node->SubTitle);
    $ShortDescription = mysql_real_escape_string($node->ShortDescription);
    $Publisher = mysql_real_escape_string($node->Publisher);
    $Imprint = mysql_real_escape_string($node->Imprint);

    # Get attributes
    $isbn = $z->getAttribute('EAN');

    $contributor = $node->Contributors;
    $author = $contributor[0]->Contributor;
    $author = mysql_real_escape_string($author);

    $BicSubjects = $node->BicSubjects;
    $Bic = $BicSubjects[0]->Bic;
    $bicCode = $Bic[0]['Code'];

    $formats = $node->Formats;
    $type = $formats[0]->Format;
    $price = $type[0]['Price'];
    $ExclusiveRights = $type[0]['ExclusiveRights'];
    $NotForSale = $type[0]['NotForSale'];

    $arr[] = "UPDATE onix_d2c_data SET is_gardner='Yes', TitleText = '".$title."', Subtitle = '".$Subtitle."', PersonName='".$author."', ImprintName = '".$Imprint."', PublisherName = '".$Publisher."', Text = '".$ShortDescription."', BICMainSubject = '".$bicCode."', ExcludedTerritory='".$NotForSale."', RightsCountry='".$ExclusiveRights."', PriceAmount='".$price."', custom_category= 'Uncategorised', drm_type='adobe_drm' WHERE id='".$isbn."' ";

    # go to next <product />
    $z->next('EBook');
    $isbns[] = $isbn;
}

foreach ($isbns as $isbn) {
    $sql = "SELECT * FROM onix_d2c_data WHERE id='".$isbn."'";
    $query = mysql_query($sql);
    $count = mysql_num_rows($query);
    if ($count > 0) {
    } else {
        $sql = "INSERT INTO onix_d2c_data (id) VALUES ('".$isbn."')";
        $query = mysql_query($sql);
    }
}

foreach ($arr as $sql) {
    mysql_query($sql);
}
Thank you,
Julian
You could use the function set_time_limit to extend the allowed script execution time or set max_execution_time in your php.ini.
You need to set these variables. Make sure you have permission to change them:
set_time_limit(0);
ini_set('max_execution_time', '6000');
You're executing two queries for each ISBN, just to check whether the ISBN already exists. Instead, set the ISBN column to unique (if it isn't already, it should be) then just go ahead and insert without checking. MySQL will return an error if it detects a duplicate which you can handle. This will reduce the number of queries and improve performance.
You're inserting each title with a separate call to the database. Instead, use the extended INSERT syntax to batch up many inserts in one query; see the MySQL manual for the full syntax. Batching, say, 250 inserts will save a lot of time.
If you're not happy with batching inserts, use mysqli prepared statements, which will reduce parsing and transmission time and so should improve your overall performance.
You can probably trust Gardners' list, so consider dropping some of the escaping you're doing. I wouldn't normally recommend this for user input, but this is a special case.
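As a sketch of the first two points combined (assuming id carries a UNIQUE or PRIMARY key, and keeping the mysql_* API the question already uses), the per-ISBN SELECT/INSERT pair collapses into batched INSERT IGNORE statements:
// Sketch: 250 ids per statement is an arbitrary batch size; the unique key
// on id lets INSERT IGNORE skip ISBNs that already exist, with no prior SELECT.
foreach (array_chunk($isbns, 250) as $chunk) {
    $values = array();
    foreach ($chunk as $isbn) {
        $values[] = "('" . mysql_real_escape_string($isbn) . "')";
    }
    mysql_query("INSERT IGNORE INTO onix_d2c_data (id) VALUES " . implode(', ', $values))
        or die(mysql_error());
}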
Have you tried adding set_time_limit(0); at the top of your PHP file?
EDIT:
ini_set('memory_limit','16M');
Specify your limit there.
If you don't want to change max_execution_time as proposed by others, you could also split the work into several smaller tasks and let the server run a cron job at regular intervals, e.g. 10,000 products each minute.
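A minimal sketch of that cron approach, resuming from a stored offset each run; the offset file, the chunk size and importProduct() are all assumptions standing in for your existing per-node logic.
// Run this from cron every minute; it imports the next chunk and records how far it got.
$offsetFile = __DIR__ . '/import.offset';            // hypothetical offset store
$offset     = is_file($offsetFile) ? (int) file_get_contents($offsetFile) : 0;
$chunkSize  = 10000;
$processed  = 0;
$position   = 0;

$z = new XMLReader;
$z->open('files/NAGardnersEBook.xml');
while ($z->read() && $z->name !== 'EBook');

while ($z->name === 'EBook' && $processed < $chunkSize) {
    if ($position >= $offset) {
        importProduct($z->expand()); // hypothetical: your existing per-product code
        $processed++;
    }
    $position++;
    $z->next('EBook');
}

file_put_contents($offsetFile, $position);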
Thank you all for such fast feedback. I managed to get the problem sorted by using array_chunk. Example below:
$thumbListLocal = array_chunk($isbns, 4, true);
$thumbListLocalCount = count($thumbListLocal);
$i = 0;
while ($i < $thumbListLocalCount):
    $sqlConstruct = array();
    foreach ($thumbListLocal[$i] as $index => $thumbName):
        $sqlConstruct[] = "INSERT IGNORE INTO onix_d2c_data (id) VALUES ('".$thumbName."')";
    endforeach;
    foreach ($sqlConstruct as $processSql) {
        mysql_query($processSql);
    }
    unset($thumbListLocal[$i]);
    $i++;
endwhile;
I hope this helps someone.
Julian

This code needs to loop over 3.5 million rows, how can I make it more efficient?

I have a CSV file that has 3.5 million codes in it.
I should point out that this is only EVER going to be run once.
The csv looks like
age9tlg,
rigfh34,
...
Here is my code:
ini_set('max_execution_time', 600);
ini_set("memory_limit", "512M");
$file_handle = fopen("Weekly.csv", "r");
while (!feof($file_handle)) {
$line_of_text = fgetcsv($file_handle);
if (is_array($line_of_text))
foreach ($line_of_text as $col) {
if (!empty($col)) {
mysql_query("insert into `action_6_weekly` Values('$col', '')") or die(mysql_error());
}
} else {
if (!empty($line_of_text)) {
mysql_query("insert into `action_6_weekly` Values('$line_of_text', '')") or die(mysql_error());
}
}
}
fclose($file_handle);
Is this code going to die part way through on me?
Will my memory and max execution time be high enough?
NB:
This code will be run on my localhost, and the database is on the same PC, so latency is not an issue.
Update:
Here is another possible implementation. This one does bulk inserts of 2,000 records at a time:
$file_handle = fopen("Weekly.csv", "r");
$i = 0;
$vals = array();
while (!feof($file_handle)) {
    $line_of_text = fgetcsv($file_handle);
    if (is_array($line_of_text)) {
        foreach ($line_of_text as $col) {
            if (!empty($col)) {
                if ($i < 2000) {
                    $vals[] = "('$col', '')";
                    $i++;
                } else {
                    $vals = implode(', ', $vals);
                    mysql_query("insert into `action_6_weekly` Values $vals") or die(mysql_error());
                    $vals = array();
                    $i = 0;
                }
            }
        }
    } else {
        if (!empty($line_of_text)) {
            if ($i < 2000) {
                $vals[] = "('$line_of_text', '')";
                $i++;
            } else {
                $vals = implode(', ', $vals);
                mysql_query("insert into `action_6_weekly` Values $vals") or die(mysql_error());
                $vals = array();
                $i = 0;
            }
        }
    }
}
fclose($file_handle);
If I were to use this method, what is the highest value I could set it to insert at once?
Update 2
So, I've found I can use
LOAD DATA LOCAL INFILE 'C:\\xampp\\htdocs\\weekly.csv' INTO TABLE `action_6_weekly` FIELDS TERMINATED BY ';' ENCLOSED BY '"' ESCAPED BY '\\' LINES TERMINATED BY ','(`code`)
But the issue now is that I was wrong about the CSV format; it is actually 4 codes and then a line break, so:
fhroflg,qporlfg,vcalpfx,rplfigc,
vapworf,flofigx,apqoeei,clxosrc,
...
So I need to be able to specify two LINES TERMINATED BY values.
This question has been branched out to Here.
Update 3
Setting it to do bulk inserts of 20k rows, using
while (!feof($file_handle)) {
    $val[] = fgetcsv($file_handle);
    $i++;
    if ($i == 20000) {
        //do insert
        //set $i = 0;
        //$val = array();
    }
}
//do insert (for the last few rows that don't reach 20k)
But it dies at this point because, for some reason, $val contains 75k rows. Any idea why?
Note the above code is simplified.
I doubt this will be the popular answer, but I would have your PHP application run mysqlimport on the CSV file. Surely it is optimized far beyond what you will do in PHP.
is this code going to die part way through on me? will my memory and max execution time be high enough?
Why don't you try and find out?
You can adjust both the memory (memory_limit) and execution time (max_execution_time) limits, so if you really have to use that, it shouldn't be a problem.
Note that MySQL supports delayed and multiple row insertion:
INSERT INTO tbl_name (a,b,c) VALUES(1,2,3),(4,5,6),(7,8,9);
http://dev.mysql.com/doc/refman/5.1/en/insert.html
Make sure there are no indexes on your table, as indexes will slow down inserts (add the indexes after you've done all the inserts).
Rather than creating a new SQL statement on each iteration of the loop, prepare the SQL statement outside the loop and execute that prepared statement with parameters inside the loop. Depending on the database, this can be heaps faster.
I've done the above when importing a large Access database into Postgres using Perl and got the insert time down to 30 seconds. I would have used an importer tool, but I wanted Perl to enforce some rules when inserting.
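For the CSV in this question, a minimal sketch of that prepare-once pattern with mysqli might look like this; $mysqli is an assumed connection, and the two-column action_6_weekly layout is taken from the original insert.
// Sketch: prepare once outside the loop, execute once per code inside it.
$stmt = $mysqli->prepare("INSERT INTO `action_6_weekly` VALUES (?, '')");
$stmt->bind_param('s', $code); // bound by reference: reuses $code on every execute

$file_handle = fopen('Weekly.csv', 'r');
while (($line = fgetcsv($file_handle)) !== false) {
    foreach ($line as $code) {
        if ($code !== '') {
            $stmt->execute();
        }
    }
}
fclose($file_handle);
$stmt->close();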
You should accumulate the values and insert them into the database all at once at the end, or in batches every x records. Doing a single query for each row means 3.5 million SQL queries, each carrying quite some overhead.
Also, you should run this on the command line, where you won't need to worry about execution time limits.
The real answer though is evilclown's answer, importing to MySQL from CSV is already a solved problem.
I hope there is not a web client waiting for a response on this. Other than calling the import utility already referenced, I would start this as a job and return feedback to the client almost immediately. Have the insert loop update a percentage-complete somewhere so the end user can check the status, if you absolutely must do it this way.
2 possible ways.
1) Batch the process, then have a scheduled job import the file while updating a status. This way, you can have a page that keeps checking the status and refreshes itself if the status is not yet 100%, so users get a live update of how much has been done. But for this you need access to the OS to be able to set up the scheduled task, and the task will run idle when there is nothing to import.
2) Have the page handle 1000 rows (or any N number of rows, you decide), then send JavaScript to the browser to refresh the page with a new parameter telling the script to handle the next 1000 rows. You can also display a status to the user while this is happening. The only problem is that if the page somehow does not refresh, the import stops.
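A bare-bones sketch of option 2: each request processes one slice and then tells the browser to request the next one. The parameter name, batch size and importSlice() helper are all hypothetical.
$offset = isset($_GET['offset']) ? (int) $_GET['offset'] : 0;
$batch  = 1000;

// Hypothetical helper: imports rows [$offset, $offset + $batch) and
// returns how many it actually processed.
$done = importSlice($offset, $batch);

if ($done === $batch) {
    $next = $offset + $batch;
    echo "Processed " . $next . " rows so far...";
    echo "<script>location.href = '?offset=" . $next . "';</script>";
} else {
    echo "Import finished.";
}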
