Duplicate of: Inserting multiple rows in mysql (closed 5 years ago).
I have a PHP script that inserts values into a MySQL table. It was working fine with a few thousand lines of data, but as I increase the volume only part of the data makes it into the table.
It seems to stop after about 6,000 rows. Really I want it to work for 40,000 lines, and later it will need to handle 160,000.
I have to run the script several times to get all the data added to the table.
I am new to working with SQL statements and I don't think the way I have set this up is efficient.
Some of my code:
for ($x = 0; $x < count($array_athlete); $x++) {
    // Check if this barcode already exists for this athlete in this location
    // on this date; only insert the row if it does not.
    $checkuser = mysqli_query($link, "SELECT * FROM `Events_testing2` WHERE `location`='$location'
        AND `barcode`='$array_barcode[$x]' AND `date`='$date'");
    $rowcount = mysqli_num_rows($checkuser);
    if ($rowcount == 0) {
        $queryInsertUser = mysqli_query($link, "INSERT INTO `Events_testing2` (`eventID`,`location`,`date`,`barcode`,`athlete`,`time`,`Run Points`,`Volunteer Points`,`Gender`,`Gender pos`)
            VALUES (' ','$location','$date','$array_barcode[$x]','$array_athlete[$x]','$array_time[$x]','$array_score[$x]',' ','$array_gender[$x]','$array_gender_pos[$x]')");
    }
}
Any advice on how to insert rows into the database more quickly would be appreciated.
Many thanks
Insert in chunks: insert the first 1,000 rows (1-1000), then the next 1,000 (1001-2000), and so on. That way you will not encounter errors; see the sketch after the links below.
Also try setting PHP's time limit beforehand:
<?php
ini_set('max_execution_time', '0'); // infinite
set_time_limit(0); // infinite
// do your stuff
See:
http://php.net/manual/en/info.configuration.php#ini.max-execution-time
http://php.net/manual/en/function.set-time-limit.php
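To illustrate the chunking idea, here is a minimal sketch using mysqli and the table from the question (only a few of the columns, for brevity; the 1,000-row chunk size and helper variable names are just examples):

<?php
$chunkSize = 1000;
$values = array();
for ($x = 0; $x < count($array_athlete); $x++) {
    // Escape user-supplied values before embedding them in SQL.
    $barcode = mysqli_real_escape_string($link, $array_barcode[$x]);
    $athlete = mysqli_real_escape_string($link, $array_athlete[$x]);
    $values[] = "('$location','$date','$barcode','$athlete')";
    // Send one multi-row INSERT per chunk instead of one query per row.
    if (count($values) == $chunkSize) {
        mysqli_query($link, "INSERT INTO `Events_testing2` (`location`,`date`,`barcode`,`athlete`) VALUES " . implode(',', $values));
        $values = array();
    }
}
if (count($values) > 0) { // flush the final, partial chunk
    mysqli_query($link, "INSERT INTO `Events_testing2` (`location`,`date`,`barcode`,`athlete`) VALUES " . implode(',', $values));
}

If you still need the per-row duplicate check, a UNIQUE index on (location, barcode, date) combined with INSERT IGNORE would let the database enforce it without running a SELECT per row.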
Related
I have a cron job that runs once every hour, to update a local database with hourly data from an API.
The database stores hourly data in rows, and the API returns 24 points of data, representing the past 24 hours.
Sometimes a data point is missed, so when I get the data back I can't only update the latest hour - I also need to check whether I already have each point, and fill in any gaps that are found.
Everything is running and working, but the cron job takes at least 30 minutes to complete every time, and I wonder if there is any way to make this run better / faster / more efficiently?
My code does the following: (summary code for brevity!)
// loop through the 24 data points returned
for ($i = 0; $i < 24; $i += 1) {
    // check if the data is for today, because the past 24 hours of data will include data from yesterday
    if ($thisDate == $todaysDate) {
        // check if data for this id and this time already exists
        $query1 = "SELECT reference FROM mydatabase WHERE ((id='$id') AND (hour='$thisTime'))";
        // if it doesn't exist, insert it
        if ($datafound == 0) {
            $query2 = "INSERT INTO mydatabase (id,hour,data_01) VALUES ('$id','$thisTime','$thisData')";
        }
    }
}
And there are 1500 different IDs, so it does this 1500 times!
Is there any way I can speed up or optimise this code so it runs faster and more efficiently?
This does not seem very complex, and it should run in a few seconds. So my first guess, without knowing your database, is that you are missing an index. Please check whether there is an index on your id field. If your id field is not your unique key, you should consider adding a composite index on the two fields id and hour. If these aren't already there, adding them should lead to a massive time save; see the example below.
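For example, a composite index matching the lookup in the question could be added like this (table and column names are taken from the code above; the index name is arbitrary):

ALTER TABLE mydatabase ADD INDEX idx_id_hour (id, hour);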
Another idea could be to retrieve all the data for the last 24 hours in a single SQL query, store the values in an array, and do your already-inserted checks against that array only; a sketch follows.
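A minimal sketch of that idea, assuming a mysqli connection in $link, the table from the question, and a hypothetical $windowStart marking the start of the 24-hour window:

<?php
// One query for the whole window, instead of 24 x 1500 lookups.
$seen = array();
$result = mysqli_query($link, "SELECT id, hour FROM mydatabase WHERE hour >= '$windowStart'");
while ($row = mysqli_fetch_assoc($result)) {
    $seen[$row['id'] . '|' . $row['hour']] = true; // remember each (id, hour) pair
}

// Later, inside the loop over the 24 data points:
if (!isset($seen[$id . '|' . $thisTime])) {
    mysqli_query($link, "INSERT INTO mydatabase (id,hour,data_01) VALUES ('$id','$thisTime','$thisData')");
}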
Duplicate of: SELECT COUNT() vs mysql_num_rows(); (closed 7 years ago).
I want to know the difference between getting the row count via SELECT COUNT(*) FROM table and fetching the data and calling mysqli_num_rows($ofquery) in PHP.
I tried both ways and both work well. But which method is faster?
Use COUNT; internally the server will process the request differently.
When doing COUNT, the server will only allocate memory to store the result of the count.
When using mysqli_num_rows, the server will process the entire result set, allocate memory for all those results, and put the server in fetching mode, which involves a lot of different details, such as locking.
Think of it like the following pseudo scenario:
1) Hey, how many people are in the classroom? (COUNT)
2) Hey, get me a list of all the people in the classroom, ... I'll count the number of people myself (mysqli_num_rows)
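In PHP the two approaches look roughly like this (a sketch assuming a mysqli connection in $link and an illustrative table name):

<?php
// COUNT: the server counts and returns a single value.
$result = mysqli_query($link, "SELECT COUNT(*) AS cnt FROM mytable");
$row = mysqli_fetch_assoc($result);
$count = $row['cnt'];

// mysqli_num_rows: the server sends every row back just so the client can count them.
$result = mysqli_query($link, "SELECT * FROM mytable");
$count = mysqli_num_rows($result);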
COUNT is better than mysqli_num_rows.
Duplicate of: MySQL pagination without double-querying? (closed 7 years ago).
As far as I know, a pagination structure requires a minimum of two SQL queries:
First, find the total row count.
Second, limit your query.
Is there a way to reduce this to one query? Can we use the first query to drive the pagination? The first query already fetches all the necessary data; can the first query's result array handle this?
You can do a simple previous/next pagination with a single query.
For this, if your result limit is, say, 25, just query for 26 and only display 25. If you get back fewer than 26 results, you know you don't have any more; see the sketch below.
However, if you want to accurately display links for pages 1, 2, 3, etc., you have to run both a query for the total number of records in the table and a query for just the data you want to display.
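A minimal sketch of the one-extra-row trick, assuming mysqli and an illustrative table name and $page variable:

<?php
$perPage = 25;
$offset = ($page - 1) * $perPage;

// Ask for one row more than we intend to show.
$result = mysqli_query($link, "SELECT * FROM mytable LIMIT $offset, " . ($perPage + 1));

$rows = array();
while ($row = mysqli_fetch_assoc($result)) {
    $rows[] = $row;
}

$hasNext = count($rows) > $perPage;      // an extra row means another page exists
$rows = array_slice($rows, 0, $perPage); // display only the first 25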
Duplicate of: How to show the last queries executed on MySQL? (closed 8 years ago).
Is there a way to see the previous query entered in a MySQL server through PHP? I do not want the results of the query, I want the actual plaintext query entered.
No. By default PHP disconnects and closes any MySQL connections when the script ends, so there is nothing left to view when the next connection comes in. And since you have to send the query over each time anyway, by definition you should already have that query available to you. If you need the query text kept, put it into a variable first,
e.g. instead of
$result = mysql_query('SELECT ...');
you should be doing
$sql = 'SELECT ...';
$result = mysql_query($sql);
I have a program that I use to read a CSV file and insert the data into a database. I am having trouble with it because it needs to be able to insert big batches (up to 10,000 rows) of data at a time. At first I had it looping through and inserting each record one at a time. That is slow because it calls an insert function 10,000 times. Next I tried to group the inserts so it inserted 50 rows at a time. I figured this way it would have to connect to the database less, but it is still too slow. What is an efficient way to insert many rows of a CSV file into a database? Also, I have to edit some data (such as adding a 1 to a username if two are the same) before it goes into the database.
For a text file you can use the LOAD DATA INFILE command which is designed to do exactly this. It'll handle CSV files by default, but has extensive options for handling other text formats, including re-ordering columns, ignoring input rows, and reformatting data as it loads.
So I ended up using the fputcsv to put the data I changed into a new CSV file, then I used the LOAD DATA INFILE command to put the data from the new csv file into the table. This changed it from timing out at 120 secs for 1000 entries, to taking about 10 seconds to do 10,000 entries. Thank you to everyone that replied.
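For reference, the fputcsv + LOAD DATA combination could look something like this sketch (the file path, table, and column names are illustrative, and LOAD DATA LOCAL INFILE requires local_infile to be enabled on both client and server):

<?php
// Write the edited rows out to a temporary CSV file.
$tmp = '/tmp/cleaned.csv'; // illustrative path
$fh = fopen($tmp, 'w');
foreach ($rows as $row) {  // $rows holds the edited data
    fputcsv($fh, $row);
}
fclose($fh);

// Bulk-load the whole file in a single statement.
mysqli_query($link, "LOAD DATA LOCAL INFILE '$tmp' INTO TABLE mytable
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '\"'
    LINES TERMINATED BY '\\n'
    (username, col_a, col_b)");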
I have this crazy idea: could you run multiple parallel scripts, each one taking care of a bunch of rows from your CSV?
Something like this:
<?php
// This tells Linux to run import.php in the background,
// and releases your caller script.
//
// Run this several times with different [start] [end] ranges
// to cut down the overall time.
$cmd = "nohup php import.php [start] [end] > /dev/null 2>&1 &";
exec($cmd);
Also, have you tried increasing that limit of 50 rows per bulk insert to 100 or 500, for example?
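If you try the parallel route, the import.php sketched above would need to read its row range from the command line; a hypothetical version:

<?php
// import.php (hypothetical): process only rows [start] through [end] of the CSV.
$start = (int)$argv[1];
$end   = (int)$argv[2];

$fh = fopen('data.csv', 'r'); // illustrative file name
$lineNo = 0;
while (($row = fgetcsv($fh)) !== false) {
    $lineNo++;
    if ($lineNo < $start) continue; // skip rows before this script's range
    if ($lineNo > $end) break;      // stop once past the range
    // ... insert or collect $row here ...
}
fclose($fh);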