I am moving data from one table to another via a INSERT INTO table1 SELECT * FROM table2 query. The data being moved contains information about employees (first name, last name...etc) as well as the path to that employee's resume. I'm now trying to split that information up into two different tables, one table for the employee info, and one table for the document (resume) info, linking the two by putting the employee ID in the document table. Both the employee ID and the document ID will be auto incremented PK values.
I understand that I can put these queries into a for loop and move one row at a time, grabbing the last insert id of the employee table before adding the document info to the document table in order to link the two. I am curious if there is a way to do this in one query, being able to take multiple rows from the original table, split up the info to be inserted into two new/different tables and and use the auto-generated id in the employee table as a value in the document table....hope this makes sense!
Sorry if I get this wrong, but do you want to execute this query once against your current DB tables?
And I guess both tables have the same number of rows (and in the same order)?
If you split those up you will get:
Employee table for example:
- employee_id(auto_increment)
- employee_firstname
- employee_lastname
- employee_document_id
- +whatever you want etc
Document table for example:
- document_id(auto_increment)
- document_name
- document_path
- document_employee_id
- +whatever you want etc.
If this is what you mean, then I think the following would work:
1: Set up PDO
<?php
$config['db'] = array(
'host' => 'host',
'username' => 'username',
'password' => 'password',
'dbname' => 'dbname'
);
$db = new PDO('mysql:host=' . $config['db']['host'] . ';dbname=' . $config['db']['dbname'], $config['db']['username'], $config['db']['password']);
?>
2: Setup insert queries
<?php
$select_query = "SELECT * FROM table1";
//$db is PDO example name
$select_all = $db->prepare($select_query);
$select_all->execute();
$count = $select_all->rowCount();
for ($i = 1; $i <= $count; ++$i) {
    $insert_query1 = "INSERT INTO employee (employee_firstname,
        employee_lastname, employee_document_id)
        VALUES ('employee_firstnameValue', 'employee_lastnameValue', '$i')";
    $insert_query2 = "INSERT INTO document (document_name, document_path,
        document_employee_id)
        VALUES ('document_nameValue', 'document_pathValue', '$i')";
    $insert_table1 = $db->prepare($insert_query1);
    $insert_table1->execute();
    $insert_table2 = $db->prepare($insert_query2);
    $insert_table2->execute();
}
?>
I think the above will work because the auto_increment starts at 1 and $i is incremented on every pass, so employee_document_id and document_employee_id will both get the same value (1, 2, 3, ...) as the auto_increment does.
But maybe this is too much thought, or it's not going to work in your model.
Side notes:
Working with bound parameters in the queries is recommended.
This is just a rough sketch of a method that came to mind (maybe you can pick something up from it).
EDIT: Another solution is to use a query like "SELECT MAX(id)", but this is unsafe under concurrent inserts.
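A set-based alternative to the loop is also possible, assuming the source rows can be matched back to the new employee rows by a natural key. This is only a sketch; the source table and all column names here are placeholders, not the asker's actual schema:

```sql
-- Assumes (first_name, last_name) uniquely identifies a source row.
INSERT INTO employee (employee_firstname, employee_lastname)
SELECT first_name, last_name FROM source_table;

-- Join the source back to the freshly inserted employees to build the document rows,
-- so each document row gets the auto-generated employee_id.
INSERT INTO document (document_path, document_employee_id)
SELECT s.resume_path, e.employee_id
FROM source_table s
JOIN employee e ON e.employee_firstname = s.first_name
               AND e.employee_lastname  = s.last_name;
```

This avoids depending on auto_increment starting at 1, at the cost of requiring a reliable natural key.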
Related
I used INSERT INTO SELECT to copy values (multiple rows) from one table to another. Now, my problem is how to insert rows with their corresponding IDs from different tables (since it's normalized) into a gerund table, because my current code only outputs one row in the gerund table. What should I do to insert multiple rows and their corresponding IDs into the gerund table?
My code for the gerund table goes like this.
$insert = "INSERT INTO table1 SELECT * FROM sourcetable"; // where id1 is the PK of table1
$result = mysqli_query($conn, $insert);
$id1 = mysqli_insert_id($conn);
Now table 1 has inserted multiple rows same as the other 2 tables.
Assuming id1, id2, id3 are the foreign keys:
INSERT INTO gerundtable (id1, id2, id3) VALUES ($id1, $id2, $id3);
My problem is it doesn't yield multiple rows.
According to the MySQL documentation:
For a multiple-row insert, LAST_INSERT_ID() and mysql_insert_id() actually return the AUTO_INCREMENT key from the first of the inserted rows. This enables multiple-row inserts to be reproduced correctly on other servers in a replication setup.
So, grab the number of records being copied, and the LAST_INSERT_ID() and you should be able to map exact IDs with each copied row.
In the lines of:
$mysqli->query("Insert Into dest_table Select * from source_table");
$n = $mysqli->affected_rows; // number of copied rows
$id1 = $mysqli->insert_id; // new ID of the first copied row
$id2 = $mysqli->insert_id + 1; // new ID of the second copied row
$id3 = $mysqli->insert_id + 2; // new ID of the third copied row
...
$mysqli->query("INSERT INTO gerundtable (pk, id1,id2,id3) VALUES ($id1,$id2,$id3)");
Thank you for trying to understand and also answering my question. I resolved it myself: I used a while loop to get the IDs of every row and didn't use INSERT INTO SELECT.
Here is the rundown. Since I'm just using my phone, bear with my way of posting.
$sqlselect = mysqli_query($conn, "SELECT * FROM table1");
while ($row = mysqli_fetch_array($sqlselect)) {
    // $insertquery... run the insert for this row here
    $id1 = mysqli_insert_id($conn);
    mysqli_query($conn, "INSERT INTO gerundtable VALUES ($id1, $id2)");
}
Okay, so this is my first question and I really have no idea how to ask it, so I'm going to try to be as specific as possible. My website is an online game, and when it inserts a new item into a user's inventory in the database:
Table name "inventory"
Column names "inv_id", "inv_itemid", "inv_userid", "inv_qty"
it does not add to the inv_qty column; instead it creates a new inv_id and a new row for each item. I was wondering if there is a way to create a merge function via PHP to merge all items with the same inv_itemid and inv_userid, adding their inv_qty values together.
In my inventory.php file the inv_id column is used as the main variable to let the user either equip or use the item.
I have seen this done and have tried many times, and I just can't get it to work.
If it were a single key to check then you could have used 'ON DUPLICATE KEY UPDATE' of mysql like the following:
INSERT INTO table(field1, field2, field3, ..)
VALUES (val1, val2, val3, ...)
ON DUPLICATE KEY
UPDATE field3='*'
But in your case there is a combination to consider.
If "inv_itemid" and "inv_userid" both match, then UPDATE; otherwise INSERT.
One way to achieve this using only mysql in a single query is to create & use a Stored Procedure.
But using PHP you can achieve this with two queries. The first query determines whether the combination exists; based on the result, run either an INSERT or an UPDATE.
Please check the following example:
$sql1 = "SELECT * FROM inventory WHERE inv_itemid='$inv_itemid' AND inv_userid='$inv_userid'";
// Execute $sql1 and get the result.
If the result is empty, then INSERT:
$sql2 = "INSERT INTO inventory ....";
otherwise UPDATE:
$sql2 = "UPDATE inventory SET inv_qty=(inv_qty + $update_qty) WHERE inv_itemid='$inv_itemid' AND inv_userid='$inv_userid'";
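Note that ON DUPLICATE KEY UPDATE can also cover a combination of columns if you are able to add a composite unique index, which reduces this to a single query. A sketch, using the column names from the question (the index name and the quantity arithmetic are assumptions):

```sql
-- One-time schema change: make the (inv_itemid, inv_userid) pair unique.
ALTER TABLE inventory ADD UNIQUE KEY idx_item_user (inv_itemid, inv_userid);

-- Then a single statement either inserts the row or adds to the existing quantity.
INSERT INTO inventory (inv_itemid, inv_userid, inv_qty)
VALUES (42, 7, 1)
ON DUPLICATE KEY UPDATE inv_qty = inv_qty + VALUES(inv_qty);
```

The unique index is what makes the "duplicate key" case fire for the combination rather than for a single column.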
About:
Would there be a way to write a php function at the top of the inventory page for my users to click to merge them
Please check the following PHP function.
By calling it with a UserID parameter, it will create, for each (inv_itemid + inv_userid) combination, a new entry with the sum of the inv_qty values, and then remove the previous duplicate entries of that combination, leaving only the newly entered row (inv_itemid + inv_userid + SUM of inv_qty).
Important: please keep a backup of the DB table data before running the function.
Please check the comments in the function and update where necessary based on your system, like getting the last inserted inv_id.
function merger_fnc($db, $user_id) {
    // For each combination of inv_itemid + inv_userid,
    // this function inserts a new row in the inventory with the SUM of inv_qty
    // and then removes the previous single rows of that combination.
    // First get the distinct items of the user (by UserID):
    $inv_itemids = $db->query("SELECT DISTINCT(inv_itemid) FROM inventory WHERE inv_userid='".$user_id."'");
    // Here $inv_itemids holds all the distinct ItemIDs for the UserID.
    foreach ($inv_itemids as $inv_item) {
        // Insert a new row holding the sum of inv_qty for this inv_userid & inv_itemid.
        $inv_itemid = $inv_item['inv_itemid'];
        // I am not sure what type of result set your $db->query(...) returns, so I assumed an associative array.
        // If the result is an array of objects, use: $inv_itemid = $inv_item->inv_itemid;
        $insert_sql = "INSERT INTO inventory (inv_itemid, inv_userid, inv_qty)
            SELECT '".$inv_itemid."', '".$user_id."', SUM(inv_qty)
            FROM inventory
            WHERE inv_userid='".$user_id."' AND inv_itemid='".$inv_itemid."'";
        $db->query($insert_sql);
        $inserted_new_inventory_id = $db->insert_id;
        // Please check the appropriate method for this in your $db class.
        // In mysqli it is mysqli_insert_id($db_conn); in PDO it is $db_conn->lastInsertId().
        // Last, remove the previous rows of the combination (inv_userid & inv_itemid), leaving the row we just inserted.
        $delete_sql = "DELETE FROM inventory WHERE inv_id!='".$inserted_new_inventory_id."' AND inv_userid='".$user_id."' AND inv_itemid='".$inv_itemid."'";
        $db->query($delete_sql);
    }
}
If getting the last inserted inv_id is troublesome from $db(like inv_id is not defined as key in the table), you can try another approach:
Do another query and save the previous inv_id in an array, before the insertion.
After the insertion of the new entry with sum of qty, run a delete query to delete the previous single qty entries, like the following:
DELETE FROM inventory WHERE inv_id IN (3, 4, 7,...)
Here (3, 4, 7,...) are the previous inv_id for (inv_itemid + inv_userid) combination.
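The whole per-item loop can also be collapsed into a few set-based statements. This is a sketch assuming the inventory columns from the question; be aware it rewrites every row in the table, so it only applies if nothing else references the old inv_id values:

```sql
-- Stage one merged row per (inv_itemid, inv_userid) pair...
CREATE TEMPORARY TABLE merged AS
SELECT inv_itemid, inv_userid, SUM(inv_qty) AS inv_qty
FROM inventory
GROUP BY inv_itemid, inv_userid;

-- ...then replace the duplicates with the merged rows.
-- Note: this reassigns inv_id, which matters if inv_id is referenced elsewhere.
DELETE FROM inventory;
INSERT INTO inventory (inv_itemid, inv_userid, inv_qty)
SELECT inv_itemid, inv_userid, inv_qty FROM merged;
DROP TEMPORARY TABLE merged;
```

Run it inside a transaction (or with a backup, as noted above) so a failure mid-way can't leave the table empty.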
I have a PHP page that tracks users on log in and stores the information in a table named 'logins' (managed through phpMyAdmin).
The PHP code for that looks like this:
$numlogin = mysql_num_rows($get_client);
if ($numlogin == 1) {
    $thisid = mysql_result($get_client, 0, "id");
    $thisfore = mysql_result($get_client, 0, "forename");
    $thissur = mysql_result($get_client, 0, "surname");
    $thiscomp = mysql_result($get_client, 0, "companyname");
    $ip = $_SERVER['REMOTE_ADDR'];
    $logdate = date('Y-m-d H:i:s');
    $logyear = date('Y');
    $logmonth = date('m');
    $logqtr = ceil($logmonth / 3);
    $reslog = mysql_query("insert into logins values ('','$myid','$loginemail','$thisfore','$thissur','$thiscomp',
        '$ip','$logdate','$logyear','$logmonth','$logqtr')") or die("Error 91");
}
But if I want to add IP info to track their location, then I have to add a new column to the table to store the IP location.
What should I do?
Can I alter the table in phpMyAdmin and update the PHP code later?
Or what is the correct way to do it?
No, you can't safely add a new column to the logins table with the code you've written. Your INSERT statement doesn't list the columns that it's inserting into, so that means you have to provide values for all the columns. If you change the table columns, the query will get an error because the number of values doesn't match the number of columns.
This is why you should always be explicit in your INSERT queries. Change it to:
$reslog = mysql_query("insert into logins (id, userid, email, forname, surname, comp, ip, logdate, logyear, logmonth, logqtr)
values ('','$myid','$loginemail','$thisfore','$thissur','$thiscomp',
'$ip','$logdate','$logyear','$logmonth','$logqtr')") or die("Error 91");
Of course, replace the column names I used with the actual column names of your table.
Once you've done this you should be able to add new columns to the table without causing the code to get an error.
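Concretely, the two steps might look like the following sketch. The column name iplocation, its type, and all the values are assumptions; use your real column names as noted above:

```sql
-- 1) Add the new column (in phpMyAdmin or directly via SQL):
ALTER TABLE logins ADD COLUMN iplocation VARCHAR(255);

-- 2) The explicit-column INSERT keeps working unchanged; extend it when the code is ready:
INSERT INTO logins (userid, email, forname, surname, comp, ip, iplocation, logdate, logyear, logmonth, logqtr)
VALUES ('1', 'user@example.com', 'Jane', 'Doe', 'Acme', '203.0.113.5', 'Oslo, NO', NOW(), '2013', '05', '2');
```

Because the INSERT names its columns, adding the ALTER first and updating the PHP later is safe: the old query simply leaves the new column NULL until the code catches up.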
I need to create a new table with certain data from another table but update the original table with the ID of the newly inserted record from the new table. Like so:
NEW_TABLE
----------------
id
-- other data --
ORIGINAL_TABLE
----------------
id
new_table_id
-- other data --
However, the added records to new_table will be grouped to get rid of duplicates. So, it won't be a 1-to-1 insert. The query needs to update matching records, not just the copied record.
Can I do this in one query? I've tried doing a separate UPDATE on original_table but it's not working.
Any suggestions?
You are going to be doing 3 separate queries as I see it.
$db = new PDO("...");
$stmt = $db->prepare("SELECT * FROM original_table");
$stmt->execute();
$results = $stmt->fetchAll();
// then just iterate over the results
foreach ($results as $result) {
    $stmt = "INSERT INTO new_table (...) VALUES (...)";
    $stmt = $db->prepare($stmt);
    $data = $stmt->execute();
    $insert_id = $db->lastInsertId();
    // Update the original table with the new ID
    $stmt = "UPDATE original_table SET new_table_id=:last WHERE id=:id";
    $stmt = $db->prepare($stmt);
    $data = $stmt->execute(array('last' => $insert_id, 'id' => $result['id']));
}
The above is a global example of your workflow.
You can use temporary tables or create a view for NEW_TABLE.
Temporary Tables
You can use the TEMPORARY keyword when creating a table. A TEMPORARY table is visible only to the current session, and is dropped automatically when the session is closed. This means that two different sessions can use the same temporary table name without conflicting with each other or with an existing non-TEMPORARY table of the same name. (The existing table is hidden until the temporary table is dropped.) To create temporary tables, you must have the CREATE TEMPORARY TABLES privilege.
--Temporary Table
create temporary table NEW_TABLE as (select * from ORIGINAL_TABLE group by id);
Views
Views (including updatable views) are available in MySQL Server 5.0. Views are stored queries that when invoked produce a result set. A view acts as a virtual table. Views are available in binary releases from 5.0.1 and up.
--View
create view NEW_TABLE as select * from ORIGINAL_TABLE group by id;
The view will always be updated with the values in ORIGINAL_TABLE and you will not have to worry about having duplicate information in your database.
If you do not want to use the view, I believe you can only perform an insert on one table at a time, so you probably want to do it as two steps in a transaction.
First you will have to tell the database that you want to start a transaction. Then you will perform your operations and check to see if they were successful. You can get the id of last inserted row (this assumes you have an auto_increment field) to use in the second statement. If both statement seem to work fine, you can commit the changes, or if not, rollback the changes.
Example:
//Assume it will be okay
$success = true;
//Start the transaction (assuming you have a database handle)
$dbh->beginTransaction();
//First Query
$stmt = "Insert into ....";
$sth = $dbh->prepare($stmt);
//See if it works
if (!$sth->execute())
$success = false;
$last_id = $dbh->lastInsertId();
//Second Query
$stmt = "Insert into .... (:ID ....)";
$sth = $dbh->prepare($stmt);
$sth->bindValue(":ID", $last_id);
//See if it works
if (!$sth->execute())
$success = false;
//If all is good, commit, otherwise, rollback
if ($success)
$dbh->commit();
else
$dbh->rollBack();
Background:
I am parsing a 330 MB XML file (the Netflix catalog) into a DB using a PHP script run from the console.
I can successfully add about 1,500 titles every 3 seconds until I add the logic to add actors, genres and formats. These are separate tables linked by an associative table.
Right now I have to run many, many queries for each title, in this order (I truncate all tables first, to eliminate old titles, genres, etc.):
1. add the new title to 'titles' and capture the insert id
2. check the actor table for an existing actor
3. if present, get the id; if not, insert the actor and get the insert id
4. insert the title id and actor id into the associative table
(steps 2-4 are repeated for genres too)
This drops my speed down to about 10 per 3 seconds, which would take an eternity to add the ~250,000 titles.
So how would I combine the 4 queries into a single query, without adding duplicate actors or genres?
My goal is to just write all queries into a data file, and do a bulk insert.
I started by writing all associative queries into a data file, but it didn't do much for performance.
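One likely reason the data file didn't help: each line is still a separate single-row statement. The usual win for bulk loading is the multiple-row INSERT syntax, where one statement carries many VALUES tuples. A sketch with the associative-table columns used in linkActor (the IDs are made up):

```sql
-- One statement, many rows: far fewer round trips and less parsing overhead
-- than one INSERT per (title_id, person_id) pair.
INSERT INTO title_persons (title_id, person_id) VALUES
    (1, 10),
    (1, 11),
    (2, 10),
    (2, 12);
```

Writing the data file in this form (batches of a few hundred tuples per statement) tends to matter more than where the statements are stored.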
I start by inserting the title and saving the ID:
function insertTitle($nfid, $title, $year) {
    $query = "INSERT INTO ".$this->titles_table." (nf_id, title, year) VALUES ('$nfid','$title','$year')";
    mysql_query($query);
    $this->updatedTitleCount++;
    return mysql_insert_id();
}
that is then used in conjunction with each actor's name to create the association
function linkActor($value, $title_id) {
    // check if we already know this value
    $query = "SELECT * FROM ".$this->persons_table." WHERE person = '$value' LIMIT 0,1";
    $result = mysql_query($query);
    if ($result && mysql_num_rows($result) != 0) {
        while ($row = mysql_fetch_assoc($result)) {
            $value_id = $row['id'];
        }
    } else {
        // no value known, add to persons table
        $query = "INSERT INTO ".$this->persons_table." (person) VALUES ('$value')";
        mysql_query($query);
        $value_id = mysql_insert_id();
    }
    // link title with person
    $query = "INSERT INTO ".$this->title_persons_table." (title_id, person_id) VALUES ('$title_id','$value_id');";
    //mysql_query($query);
    // write query to data file to be read in bulk style
    fwrite($this->fh, $query);
}
This is a perfect opportunity for using prepared statements.
Also take a look at the tips at http://dev.mysql.com/doc/refman/5.0/en/insert-speed.html, e.g.
To speed up INSERT operations that are performed with multiple statements for nontransactional tables, lock your tables
You can also decrease the number of queries. E.g. you can eliminate the SELECT...FROM persons_table to obtain the id by using INSERT...ON DUPLICATE KEY UPDATE and LAST_INSERT_ID(expr).
class Foo {
    protected $persons_table = 'personsTemp';
    protected $pdo;
    protected $stmts = array();

    public function __construct($pdo) {
        $this->pdo = $pdo;
        $this->stmts['InsertPersons'] = $pdo->prepare('
            INSERT INTO '.$this->persons_table.' (person)
            VALUES (:person)
            ON DUPLICATE KEY UPDATE id=LAST_INSERT_ID(id)
        ');
    }

    public function getActorId($name) {
        $this->stmts['InsertPersons']->execute(array(':person' => $name));
        return $this->pdo->lastInsertId('id');
    }
}

$pdo = new PDO("mysql:host=localhost;dbname=test", 'localonly', 'localonly');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
// create a temporary/test table
$pdo->exec('CREATE TEMPORARY TABLE personsTemp (id int auto_increment, person varchar(32), primary key(id), unique key idxPerson(person))');
// and fill in some data
foreach (range('A', 'D') as $p) {
    $pdo->exec("INSERT INTO personsTemp (person) VALUES ('Person $p')");
}
$foo = new Foo($pdo);
foreach (array('Person A', 'Person C', 'Person Z', 'Person B', 'Person Y', 'Person A', 'Person Z', 'Person A') as $name) {
    echo $name, ' -> ', $foo->getActorId($name), "\n";
}
prints
Person A -> 1
Person C -> 3
Person Z -> 5
Person B -> 2
Person Y -> 6
Person A -> 1
Person Z -> 5
Person A -> 1
(someone might want to start a discussion whether a getXYZ() function should perform an INSERT or not ...but not me, not now....)
Your performance is glacially slow; something is very wrong. I assume the following:
You run your dedicated, otherwise-idle database server on respectable hardware
You have tuned it to some extent (i.e. at least configure it to use a few gigs of ram properly) - engine-specific optimisations will be required
You may be being stung by doing lots of tiny operations with autocommit on; this is a mistake as it generates an unreasonable number of disc IO operations. You should do a large amount of work (100, 1000 records etc) in a single transaction then commit it.
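That batching advice amounts to wrapping each chunk of work in an explicit transaction rather than letting autocommit flush every row; a sketch (the table, columns and values are placeholders):

```sql
-- Commit once per batch of work, not once per statement.
START TRANSACTION;
INSERT INTO titles (nf_id, title, year) VALUES (101, 'Title A', 2001);
INSERT INTO titles (nf_id, title, year) VALUES (102, 'Title B', 2002);
-- ...several hundred more inserts...
COMMIT;
```

With autocommit on, every INSERT is its own transaction and forces its own disc flush; batching turns hundreds of flushes into one.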
The lookups may be slowing things down because of the simple overhead of doing the queries (the queries themselves will be really easy as you'll have an index on actor name).
I also question your method of assuming that no two actors have the same name; surely your original database contains a unique actor ID, so you don't get them mixed up?
Can you use a language other than PHP? If not, are you running this as a standalone PHP script or through a webserver? The webserver is probably adding a lot of overhead you don't need.
I do something very similar at work, using Python, and can insert a couple thousand rows (with associative-table lookups) per second on a standard 3.4 GHz, 3 GB RAM machine. The MySQL database isn't hosted locally but within the LAN.