There is a MySQL table with 24,500 rows of data, and a text file with 26,000 strings that need to be inserted into MySQL. The problem is that these 26,000 strings duplicate data already in the MySQL table, so we need to compare them and insert only the new/unique ones.
cadastreArray - the array from the text file
districtArray - the array from the MySQL table
When I try to do:
foreach ($cadastreArray as $cadastreValue) {
    $districtExist = false;
    foreach ($districtArray as $districtData) {
        if ($cadastreValue[0] == $districtData['1']) {
            $districtExist = true;
            break;
        }
    }
    if (!$districtExist) {
        // MySQL INSERT ...
    }
}
I am getting an execution time error, and even 3 minutes is not enough.
Maybe you can offer a better/faster way?
You could set the MySQL field as UNIQUE; then, when you try to insert a duplicate, MySQL will refuse it with an error number and execution can continue, so you don't need to compare at all.
One more thing you can do is increase max_execution_time in php.ini.
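For example, a minimal sketch of that approach using INSERT IGNORE, which skips rows that would violate a unique key instead of aborting (the column name cadastre_code and the mysqli handle $db are assumptions):

// One-time schema change: make the compared column unique (hypothetical column name)
$db->query("ALTER TABLE district ADD UNIQUE (cadastre_code)");

// INSERT IGNORE drops duplicate-key rows silently, so no PHP-side comparison is needed
foreach ($cadastreArray as $cadastreValue) {
    $code = $db->real_escape_string($cadastreValue[0]);
    $db->query("INSERT IGNORE INTO district (cadastre_code) VALUES ('$code')");
}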
Another option: load your 26k text file into a temporary table (LOAD DATA INFILE ... will do this quickly).
Then you can do an INSERT based on a query that LEFT JOINs the temp table against your full table, checking that a field on the full table IS NULL.
A simple example script:
<?php
$file = "SomeTextFile.txt";

$sql = "CREATE TEMPORARY TABLE cadastre
        (
            field1 INT,
            field2 VARCHAR(255),
            etc...
        )";
if (!($db->query($sql))) {
    die($db->error()); // if error, stop script
}

if (!($db->query("LOAD DATA INFILE '$file' INTO TABLE cadastre"))) {
    die($db->error()); // if error, stop script
}

$sql = "INSERT INTO district (field1, field2, field3, ......)
        SELECT a.field1, a.field2, a.field3
        FROM cadastre a
        LEFT OUTER JOIN district b
            ON a.field1 = b.field1
        WHERE b.field1 IS NULL";
if (!($db->query($sql))) {
    die($db->error()); // if error, stop script
}
?>
Make sure the temp table and the table you are inserting into have useful indexes added.
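For instance (field1 follows the example above; substitute your real join column):

// Hypothetical index additions; an index on the join column speeds up the LEFT JOIN
$db->query("ALTER TABLE cadastre ADD INDEX (field1)");
$db->query("ALTER TABLE district ADD INDEX (field1)");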
I currently have a script that runs every 5 minutes and selects data from a table on server 1 and an identical table on server 2. This is essentially a workaround for replication, since we don't have that option currently.
The script works, but I've realized that it sometimes misses records, for whatever reason. The current script selects all records from the destination table, stores the max primary key, selects all data from the source table, and then inserts anything with a greater primary key into the destination table.
I'd like to modify the script slightly: instead of using the max id, just say "if a row has a primary key that doesn't exist in the destination table, insert that row there."
Again, these are cloned tables, so the structure is the same, and they both use auto-increment primary keys.
Here's the current working script:
$latest_result = $conn2->query("SELECT MAX(`SESSIONID`) FROM `ambition`.`session`");
$latest_row = $latest_result->fetch_row();
$latest_session_id = $latest_row[0];

// Select all rows from the source phone database
$source_data = mysqli_query($conn, "SELECT * FROM `cdrdb`.`session`
    WHERE `SESSIONID` > $latest_session_id");

// Loop on the results
while ($source = $source_data->fetch_assoc()) {
    // Check if the row exists in the destination phone database
    $row_exists = $conn2->query("SELECT SESSIONID FROM ambition.session
        WHERE SESSIONID = '" . $source['SESSIONID'] . "'")
        or die(mysqli_error($conn2));

    // If the query returns no rows, none exist with that new ID
    if ($row_exists->num_rows == 0) {
        // Insert new rows into ambition.session
        $stmt = $conn2->prepare("INSERT INTO ambition.session (SESSIONID,
            SESSIONTYPE, CALLINGPARTYNO, FINALLYCALLEDPARTYNO,
            DIALPLANNAME, TERMINATIONREASONCODE // etc. There are a lot of columns, so I omitted the others
Is there a way I can slightly modify this to just insert what doesn't exist, rather than relying on the max ID?
Or is there something here that would explain why it's missing records?
You could use INSERT INTO ... SELECT and check whether the value already exists in the target:
INSERT INTO trg_table (cols)
SELECT cols
FROM src_table s
WHERE NOT EXISTS (SELECT 1 FROM trg_table t WHERE t.id = s.id);
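Applied to the question's tables and (abbreviated) columns, it would look something like this. Note this assumes both schemas are visible from a single connection; in the question they live on separate servers, so it applies only where one connection can reach both:

// Sketch only: the column list is abbreviated, and cdrdb and ambition must both
// be reachable from $conn2 for one INSERT ... SELECT to span them.
$conn2->query("
    INSERT INTO ambition.session (SESSIONID, SESSIONTYPE, CALLINGPARTYNO)
    SELECT s.SESSIONID, s.SESSIONTYPE, s.CALLINGPARTYNO
    FROM cdrdb.session s
    WHERE NOT EXISTS (
        SELECT 1 FROM ambition.session t WHERE t.SESSIONID = s.SESSIONID
    )
") or die(mysqli_error($conn2));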
Sorry if this has been asked before, but I couldn't find anything relating to my case here on SE.
I am trying to import a CSV file into my MySQL database table. The table and the CSV have exactly the same number and order of columns, except that the table's ID column is missing from the CSV file.
What I want to achieve is to import the CSV into the table while generating an ID number that automatically increases with each record. This does not seem possible, as the CSV always wants to insert its data into the first column of the table, but in my case I need it to go into the second column.
How do I approach this, and is there any reference code I can study? I am currently working off this PDO approach but am having the difficulties mentioned above.
PHP
<?php
$databasehost = "localhost";
$databasename = "test";
$databasetable = "sample";
$databaseusername = "test";
$databasepassword = "";
$fieldseparator = ",";
$lineseparator = "\n";
$csvfile = "filename.csv";

if (!file_exists($csvfile)) {
    die("File not found. Make sure you specified the correct path.");
}

try {
    $pdo = new PDO("mysql:host=$databasehost;dbname=$databasename",
        $databaseusername, $databasepassword,
        array(
            PDO::MYSQL_ATTR_LOCAL_INFILE => true,
            PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION
        )
    );
} catch (PDOException $e) {
    die("database connection failed: " . $e->getMessage());
}

$affectedRows = $pdo->exec("
    LOAD DATA LOCAL INFILE " . $pdo->quote($csvfile) . " INTO TABLE `$databasetable`
    FIELDS TERMINATED BY " . $pdo->quote($fieldseparator) . "
    LINES TERMINATED BY " . $pdo->quote($lineseparator));

echo "Loaded a total of $affectedRows records from this csv file.\n";
?>
Thank you
You can have MySQL set values for certain columns during import. If your id field is set to AUTO_INCREMENT, you can set it to NULL during import and MySQL will then assign incrementing values to it:
$affectedRows = $pdo->exec("
    LOAD DATA LOCAL INFILE " . $pdo->quote($csvfile) . " INTO TABLE `$databasetable`
    FIELDS TERMINATED BY " . $pdo->quote($fieldseparator) . "
    LINES TERMINATED BY " . $pdo->quote($lineseparator) . "
    SET id = NULL");
EDIT - In case the ID column is not present in the CSV: col1, col2, col3, ... are the names of the actual columns in the DB table (without the id column).
$affectedRows = $pdo->exec("
    LOAD DATA LOCAL INFILE " . $pdo->quote($csvfile) . " INTO TABLE `$databasetable`
    FIELDS TERMINATED BY " . $pdo->quote($fieldseparator) . "
    LINES TERMINATED BY " . $pdo->quote($lineseparator) . "
    (col1, col2, col3, ...)
    SET id = NULL");
The AUTO_INCREMENT attribute can be used to generate a unique identity for new rows. Most versions of MySQL and most storage engines support this. You need not worry about the ID: you can use a cron job to insert the needed fields, and AUTO_INCREMENT will take care of the id itself.
If no value is specified for the AUTO_INCREMENT column, MySQL assigns sequence numbers automatically. You can also explicitly assign 0 to the column to generate sequence numbers, unless the NO_AUTO_VALUE_ON_ZERO SQL mode is enabled. If the column is declared NOT NULL, it is also possible to assign NULL to the column to generate sequence numbers. When you insert any other value into an AUTO_INCREMENT column, the column is set to that value and the sequence is reset, so that the next automatically generated value follows sequentially from the largest column value.
You can retrieve the most recent automatically generated AUTO_INCREMENT value with the LAST_INSERT_ID() SQL function or the mysql_insert_id() C API function. These functions are connection-specific, so their return values are not affected by another connection that is also performing inserts.
See the example in the official documentation: https://dev.mysql.com/doc/refman/5.7/en/example-auto-increment.html
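As a minimal illustration of the behaviour described above (the animals table follows the linked documentation example; the PDO wrapper is an assumption):

// NULL (or an omitted value) lets MySQL assign 1, 2, ... automatically
$pdo->exec("CREATE TABLE animals (
    id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    name VARCHAR(30) NOT NULL
)");
$pdo->exec("INSERT INTO animals (id, name) VALUES (NULL, 'dog'), (NULL, 'cat')"); // ids 1, 2

// An explicit value is stored as-is and resets the sequence...
$pdo->exec("INSERT INTO animals (id, name) VALUES (10, 'bird')");

// ...so the next generated value follows the largest stored value
$pdo->exec("INSERT INTO animals (id, name) VALUES (NULL, 'fish')");
echo $pdo->lastInsertId(); // 11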
As you want to recreate the table over and over and manipulate the data from the CSV, try this:
// You have to CREATE the TABLE first if it does not exist
$pdo->exec("TRUNCATE TABLE sample"); // No need to drop the table if the columns don't change

$csvContent = file_get_contents($csvfile); // Raw data from the file
$lines = explode("\n", $csvContent); // The standard line separator is a newline

// Now you have each line separated
for ($i = 0; $i < count($lines); $i++) {
    $col = explode(";", $lines[$i]); // Or a comma, depending on the file
    // Now you have each column separated; NULL for id lets AUTO_INCREMENT assign the value
    // (in real code, prefer a prepared statement over concatenating raw CSV values)
    $pdo->exec("INSERT INTO sample (id, col1, col2, col3 ... coln) VALUES (NULL, '".$col[0]."', '".$col[1]."', '".$col[2]."' ... '".$col[n]."')");
}
This way you can dig into your data and, besides setting an AUTO_INCREMENT ID, validate what is coming from the CSV and correct/prevent import errors.
I've been stuck on this for a few hours now ...
Here's my code:
$SQLQuery1 = $db_info->prepare("SELECT COUNT(ID) FROM menusize WHERE typesize=:typesize");
$SQLQuery1->bindValue(':typesize', $_POST['typesize'], PDO::PARAM_STR);
$SQLQuery1->execute();

if ($SQLQuery1->fetchColumn() > 0) {
    $SQLQuery2 = $db_info->prepare("INSERT INTO menucatagorysize (menucatagory_ID,menusize_ID)
        VALUES (:catagoryid, (SELECT ID FROM menusize WHERE typesize=:typesize))");
    $SQLQuery2->bindValue(':typesize', $_POST['typesize'], PDO::PARAM_STR);
    $SQLQuery2->bindValue(':catagoryid', $_POST['catagoryid'], PDO::PARAM_STR);
    $SQLQuery2->execute();
} else {
    $SQLQuery2 = $db_info->prepare("INSERT INTO menusize (typesize) VALUES (:typesize);
        SET @menusizeid=LAST_INSERT_ID();
        INSERT INTO menucatagorysize (menusize_ID,menucatagory_ID) VALUES (@menusizeid,:catagoryid)");
    $SQLQuery2->bindValue(':typesize', $_POST['typesize'], PDO::PARAM_STR);
    $SQLQuery2->bindValue(':catagoryid', $_POST['catagoryid'], PDO::PARAM_STR);
    $SQLQuery2->execute();
}
$SQLQuery3 = $db_info->prepare("SELECT DISTINCT(menuitem_ID) FROM menuprice WHERE menucatagory_ID=:catagoryid");
$SQLQuery3->bindValue(':catagoryid',$_POST['catagoryid'],PDO::PARAM_STR);
$SQLQuery3->execute();
$rows = $SQLQuery3->fetchAll(PDO::FETCH_ASSOC);
So, it runs through the if statement fine, executing $SQLQuery1 and $SQLQuery2 (whichever is required) without any problems, errors, or warnings. But if it runs the else part of the code, it will not run $SQLQuery3. Any thoughts?
Thanks :D
EDIT: Got it to work by doing $SQLQuery2 = NULL in the else statement ... It sucks that I still can't figure out why it wouldn't work the original way.
It appears that you're trying to enforce a uniqueness constraint over the typesize column of your menusize table from within your application code. However, the database can do this for you—which will make your subsequent operations much simpler:
ALTER TABLE menusize ADD UNIQUE (typesize)
Now, one can simply attempt to insert the posted value into the table and the database will prevent duplicates arising. Furthermore, as documented under INSERT ... ON DUPLICATE KEY UPDATE Syntax:
If a table contains an AUTO_INCREMENT column and INSERT ... ON DUPLICATE KEY UPDATE inserts or updates a row, the LAST_INSERT_ID() function returns the AUTO_INCREMENT value. Exception: For updates, LAST_INSERT_ID() is not meaningful prior to MySQL 5.1.12. However, you can work around this by using LAST_INSERT_ID(expr). Suppose that id is the AUTO_INCREMENT column. To make LAST_INSERT_ID() meaningful for updates, insert rows as follows:
INSERT INTO table (a,b,c) VALUES (1,2,3)
ON DUPLICATE KEY UPDATE id=LAST_INSERT_ID(id), c=3;
Therefore, you can do:
$db_info->prepare('
INSERT INTO menusize (typesize) VALUES (:typesize)
ON DUPLICATE KEY UPDATE typesize=LAST_INSERT_ID(typesize)
')->execute(array(
':typesize' => $_POST['typesize']
));
$db_info->prepare('
INSERT INTO menucatagorysize
(menusize_ID, menucatagory_ID)
VALUES
(LAST_INSERT_ID(), :catagoryid)
')->execute(array(
':catagoryid' => $_POST['catagoryid']
));
$stmt = $db_info->prepare('
SELECT DISTINCT menuitem_ID
FROM menuprice
WHERE menucatagory_ID = :catagoryid
');
$stmt->execute(array(
':catagoryid' => $_POST['catagoryid']
));
while ($row = $stmt->fetch(PDO::FETCH_ASSOC)) {
// etc.
}
(As an aside, the English word is spelled cat*e*gory, not cat*a*gory.)
Is it possible to have a table and insert data into a single row on two different occasions? I mean, I have a table with five columns. On the first data submission, I want to record data in only two fields of that table, and in a later submission, record data in that same row's three remaining columns, which haven't yet been filled. What method should I use: INSERT, UPDATE, or neither?
Sorry for my bad English and confusing way of asking the question.
Code:
$query = "SELECT q1 FROM grades WHERE studentnumber = '$_POST[studentnumber]' && subjectcode = '$_POST[subjectcode]'";
$result = mysql_query($query);
if ($result) {
    if (mysql_num_rows($result) == 1) {
        $sql = mysql_query("UPDATE grades SET q1 = '$_POST[q1]' WHERE studentnumber = '$_POST[studentnumber]' AND subjectcode = '$_POST[subjectcode]'");
        if ($sql) {
            echo "<script type='text/javascript'>alert('Password successfully changed'); location.href = 'cvsu-sis_grades.php';</script>";
        }
    }
} else {
    echo "<script type='text/javascript'>alert('Record Does not Exist'); location.href = 'cvsu-sis_grades.php';</script>";
}
I omitted some columns just to make the code shorter, but the rest is essentially the same: just a series of q1, q2, ...
The first query should be an INSERT; then you can get the last inserted id and do an UPDATE query.
You can use INSERT for the first two columns, get the inserted row's id using mysql_insert_id() (it returns the AUTO_INCREMENT value from the last insert), and then fill the remaining three columns with UPDATE.
First of all, you have to make sure that when you INSERT the 2 fields on your first submission, the fields you leave empty are allowed to be NULL!
You INSERT the first data into the table, and later on, when you want to add the remaining fields, you UPDATE that row. Make sure that when you UPDATE, you use a WHERE constraint (e.g. on the 2 fields already entered), otherwise all rows will be updated!
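A minimal sketch of that flow, using the question's grades table (the specific values are hypothetical, and it assumes grades has an AUTO_INCREMENT primary key ID, as the answers above suppose):

// First submission: insert the two known fields; the q1, q2, ... columns stay NULL for now
mysql_query("INSERT INTO grades (studentnumber, subjectcode) VALUES ('12345', 'MATH101')");
$id = mysql_insert_id(); // AUTO_INCREMENT id of the row just inserted

// Later submission: fill the remaining columns on that same row.
// The WHERE constraint is essential; without it every row would be updated.
mysql_query("UPDATE grades SET q1 = '90', q2 = '85', q3 = '88' WHERE ID = $id");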
I have a script that retrieves a huge amount of data from a table. I want to create a mysqldump-style output to insert the data into another database table with different fields. I want the phpMyAdmin format, where INSERT INTO table VALUES (values1), (values2), ... (values100); is repeated once a set number of value sets is reached, depending on what you configure.
For example, if I have 550 data sets and divide the data by 100, I will have 6 INSERT INTO queries:
INSERT INTO tablename VALUES(value1), (value2), .... (value100);
INSERT INTO tablename VALUES(value101), (value102), .... (value200);
INSERT INTO tablename VALUES(value201), (value202), .... (value300);
INSERT INTO tablename VALUES(value301), (value302), .... (value400);
INSERT INTO tablename VALUES(value401), (value402), .... (value500);
INSERT INTO tablename VALUES(value501), (value502), .... (value550);
If you're using mysqldump and wish to output multiple rows in a single INSERT, you need to use the --extended-insert option (see the mysqldump documentation on extended inserts).
I'm not sure it's possible to tell mysqldump to put a specific number of rows in each INSERT statement it generates. Rather, you can set net_buffer_length (although changing it is not recommended), so the actual amount may vary depending on the data in each row.
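For instance, from the command line (a sketch; the database and table names are placeholders):

mysqldump --extended-insert --net_buffer_length=16384 mydb mytable > dump.sql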
You could use array_chunk(), something like:
$toInsert = array( '(values1)', '(values2)', '(values3)', '(values4)' ); //etc.
$sqlStart = 'INSERT INTO tablename (field1, field2, field3, etc) VALUES ';
foreach (array_chunk($toInsert, 100) as $insertSet) {
$sql = $sqlStart . implode(', ', $insertSet);
//execute $sql
}
Are you actually doing much with the data, though? You might be able to do it all in SQL with INSERT INTO table (field1, field2) SELECT somefield, somefield2 FROM another_table.
While fetching the rows, increment a counter, and when it hits a certain value, have it start a new INSERT statement.
The code below may not be exactly correct (no PHP for a LONG time), but you will probably get the idea:
$i = 0;
$insertstatements = array();
$currentinsertstatement = '';

while ($temp = mysql_fetch_assoc($result)) {
    // do something with the data
    $insertpart = "(value_xxx)";
    if ($i % 100 == 0) {
        // first value of a new statement: push the previous one, if any
        if ($i != 0) $insertstatements[] = $currentinsertstatement;
        $currentinsertstatement = "INSERT INTO tablename VALUES " . $insertpart;
    } else {
        // somewhere in the middle of the insert statement
        $currentinsertstatement .= ", " . $insertpart;
    }
    $i++;
}
// push the final statement (checking $i % 100 != 0 here, as originally written,
// would silently drop the last batch whenever the row count is a multiple of 100)
if ($i != 0) {
    $insertstatements[] = $currentinsertstatement;
}
You should definitely use transactions for huge inserts, if your storage engine supports them (like InnoDB):
BEGIN;
INSERT INTO tablename VALUES...
INSERT INTO tablename VALUES...
COMMIT;
If something goes wrong, you can safely ROLLBACK the last operation, restart your script, etc.
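A sketch of the same pattern from PHP with PDO (assuming a $pdo connection and InnoDB tables; $toInsert holds the value sets, as in the array_chunk example above):

// Wrap the whole batch in one transaction: either every chunk commits, or none does
$pdo->beginTransaction();
try {
    foreach (array_chunk($toInsert, 100) as $insertSet) {
        $pdo->exec('INSERT INTO tablename VALUES ' . implode(', ', $insertSet));
    }
    $pdo->commit();
} catch (Exception $e) {
    $pdo->rollBack(); // undo the partial batch; the script can then be restarted safely
    throw $e;
}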