I'm using Laravel 4, and I need to insert some rows into a MySQL table, and I need to get their inserted IDs back.
For a single row, I can use ->insertGetId(), but it has no support for multiple rows. If I could at least retrieve the ID of the first inserted row, as plain MySQL does, that would be enough to figure out the other ones.
This is MySQL's documented behavior for LAST_INSERT_ID(). From the manual:
Important
If you insert multiple rows using a single INSERT statement, LAST_INSERT_ID() returns the value generated for the first inserted row only. The reason for this is to make it possible to reproduce easily the same INSERT statement against some other server.
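As a quick, hypothetical illustration of that behaviour in Laravel (assuming a users table with an auto-incrementing id):
// one multi-row INSERT
DB::table('users')->insert([['name' => 'John'], ['name' => 'Sam'], ['name' => 'Robert']]);
// LAST_INSERT_ID() reports the id generated for the FIRST of those three rows, not the last
$result = DB::select('select last_insert_id() as id');
$firstId = $result[0]->id;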
You can try doing multiple single inserts and collecting their IDs; alternatively, after saving a model, $data->id should hold the last inserted ID.
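For the first option, a minimal sketch (assuming a users table and $rows holding an array of row arrays):
$ids = [];
foreach ($rows as $row) {
    // one INSERT per row, collecting each generated id
    $ids[] = DB::table('users')->insertGetId($row);
}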
If you are using InnoDB, which supports transactions, then you can easily solve this problem.
There are multiple ways that you can solve this problem.
Let's say there's a table called users with two columns, id and name, and that the table maps to a User model.
Solution 1
Your data looks like
$data = [['name' => 'John'], ['name' => 'Sam'], ['name' => 'Robert']]; // this will insert 3 rows
Let's say that the last id on the table was 600. You can insert multiple rows into the table like this
DB::beginTransaction();
User::insert($data); // remember: $data is array of associative array. Not just a single assoc array.
$startID = DB::select('select last_insert_id() as id'); // returns an array that has only one item in it
$startID = $startID[0]->id; // This will return 601
$lastID = $startID + count($data) - 1; // this will return 603
DB::commit();
Now you know the inserted rows have IDs in the range 601 to 603.
Make sure to import the DB facade at the top using this
use Illuminate\Support\Facades\DB;
Solution 2
This solution requires that you have a varchar or some other text field.
$randomstring = Str::random(8);
$data = [['name' => "John$randomstring"], ['name' => "Sam$randomstring"]];
You get the idea here. You add that random string to a varchar or text field.
Now insert the rows like this
DB::beginTransaction();
User::insert($data);
// this will return the last inserted ids
$lastInsertedIds = User::where('name', 'like', '%' . $randomstring)
->select('id')
->get()
->pluck('id')
->toArray();
// now you can update that row to the original value that you actually wanted
User::whereIn('id', $lastInsertedIds)
->update(['name' => DB::raw("replace(name, '$randomstring', '')")]);
DB::commit();
Now you know which rows were inserted.
As user Xrymz suggested, DB::raw('LAST_INSERT_ID();') returns the first inserted ID.
According to the Schema API, insertGetId() accepts an array:
public int insertGetId(array $values, string $sequence = null)
So you should be able to do
DB::table('table')->insertGetId($arrayValues);
That said, if you are using MySQL, you could retrieve the first ID this way and calculate the rest. There is also DB::getPdo()->lastInsertId(), which could help.
Or, if one of these methods returned the last ID instead, you could calculate back to the first inserted one too.
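A rough sketch of that idea, assuming MySQL (where PDO's lastInsertId() reports the id of the first row of a multi-row insert) and a users table:
DB::table('users')->insert($rows); // $rows is an array of row arrays
$firstId = (int) DB::getPdo()->lastInsertId(); // id of the first inserted row
$ids = range($firstId, $firstId + count($rows) - 1); // assumes contiguous ids and no concurrent inserts (see the edit below)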
EDIT
According to comments, my suggestions may be wrong.
Regarding the question of 'what if a row is inserted by another user in between', it depends on the storage engine. If an engine with table-level locking (MyISAM, MEMORY, MERGE) is used, the question is irrelevant, since there cannot be two simultaneous writes to the table.
If a row-level locking engine (InnoDB) is used, another possibility might be to just insert the data and then retrieve all the rows by some known field with the whereIn() method, or to figure out table-level locking.
$result = Invoice::create($data);

if ($result) {
    $id = $result->id;
}

It worked for me.
Note: Laravel version 9
Related
This is my PHP code. How can I get a random row where 'featured' => 1 from a table in my MySQL database?
Which part of my code below should I change?
$featured_movie = $this->db->get_where('movie', array('featured'=>1))->row();
You need to order the query by RAND() and limit it to a single row to speed it up (the limit is set using the 3rd parameter of get_where):
$featured_movie = $this->db->order_by('featured', 'RANDOM')->get_where('movie', ['featured' => 1], 1)->row();
insert_batch() is a built-in CodeIgniter function that inserts rows 100 at a time, which is why it is so much faster for inserting large amounts of data.
Now I want to delete a large number of rows the same way insert_batch() inserts them.
Is there any way to do it?
I am already using the where_in() function, but it is not nearly as fast as insert_batch(), which is why a timeout error often occurs.
In particular, I want to know: can I make a function like insert_batch() or insert_update() in codeigniter system/database/db_query_builder?
If I can, how do I do it? Or is there any other suggestion, please?
If we are talking about a lot of rows, it may be easier to perform a table 'switch' by copying the rows you want to keep into a new table and dropping the original.
One drawback, however, is that any auto-increment IDs will be lost.
<?php
// Create a Copy of the Same Data structure
$this->db->query( "CREATE TABLE IF NOT EXISTS `TB_TMP` .. " );
// Select the data you wish to -Keep- by excluding the rows by some condition.
// E.g. Something that is no longer active.
$recordsToKeep =
$this->db
->select( "*" )
->get_where( "TB_LIVETABLE", [ "ACTIVE" => 1 ])
->result();
// Insert the rows you want to keep into the temporary table.
$this->db->insert_batch( "TB_TMP", $recordsToKeep );
// Perform the Switch
$this->db->query( "RENAME TABLE TB_LIVETABLE TO TB_OLDTABLE" );
$this->db->query( "RENAME TABLE TB_TMP TO TB_LIVETABLE " );
$this->db->query( "DROP TABLE TB_OLDTABLE" );
I want to update the last record matching a condition. I am getting more than one result in my code, so I did it a different way.
$u = user_payments::where('id', $id)->where('staff_id', $staffId)->orderBy('id','desc')->first();
$up = user_payments::where('id', $u->id)->update(['transaction_id' => $transaction_id]);
Here I have to query two times, so it hits the DB twice. I want to optimise the query; I tried the above, but I want to do it in a single query. How can I do this? Thanks.
This should do what you need. Ultimately only a single UPDATE statement runs on the database.
$result = user_payments::where('id', $id)
->where('staff_id', $staffId)
->orderBy('id','desc')
->take(1)
->update(['transaction_id' => $transaction_id]);
I have two externally hosted third-party .txt files that are updated on an irregular basis by someone other than myself. I have written a script that pulls this information in, manipulates it, and creates a merged array of data suitable for use in a database. I'm not looking for exact code but rather a description of a good process that will work efficiently in inserting a new row from this array if it doesn't already exist, updating a row in the table if any values have changed, or deleting a row in the table if it no longer exists in the array of data.
The data is rather simple and has the following structure:
map (string) | route (string) | time (decimal) | player (string) | country (string)
where a map and route combination must be unique.
Is there any way to do all needed actions without having to loop through all of the external data and all of the data from the table in my database? If not, what would be the most efficient method?
Below is what I have written. It takes care of all but the delete part:
require_once('includes/db.php');
require_once('includes/helpers.php');
$data = array_merge(
    custom_parse_func('http://example1.com/ex.txt'),
    custom_parse_func('http://example2.com/ex.txt')
);

try {
    $dsn = "mysql:host=$dbhost;dbname=mydb";
    $dbh = new PDO($dsn, $dbuser, $dbpass);
    $dbh->setAttribute(PDO::ATTR_EMULATE_PREPARES, false);
    $dbh->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

    foreach ($data as $value) {
        $s = $dbh->prepare('INSERT INTO table SET map=:map, route=:route, time=:time, player=:player, country=:country ON DUPLICATE KEY UPDATE map=:map2, route=:route2, time=:time2, player=:player2, country=:country2');
        $s->execute(array(
            ':map'      => $value['map'],
            ':route'    => $value['route'],
            ':time'     => $value['time'],
            ':player'   => $value['player'],
            ':country'  => $value['country'],
            ':map2'     => $value['map'],
            ':route2'   => $value['route'],
            ':time2'    => $value['time'],
            ':player2'  => $value['player'],
            ':country2' => $value['country']
        ));
    }
} catch (PDOException $e) {
    echo $e;
}
You mention that you're using MySQL, which has a handy INSERT ... ON DUPLICATE KEY UPDATE ... statement. You will have to iterate over your collection of data (but not over the existing table). I would handle it a little differently than Tim B does:
Create a temporary table to hold the new data.
Loop through your new data and insert it into the temporary table.
Run an INSERT ... ON DUPLICATE KEY UPDATE ... statement inserting from the temporary table into the existing table - that takes care of both inserting new records and updating changed records.
Run a DELETE t1 FROM [existing table] t1 LEFT JOIN [temporary table] t2 ON [whatever key(s) you have] WHERE t2.id IS NULL - this will delete everything from the existing table that does not appear in the temporary table.
The nice thing about temporary tables is that they are automatically dropped when the connection closes (as well as having some other nice features, like being invisible to other connections).
The other nice thing about this method is that you can do some (or all) of your data manipulation in the database after you insert it into a table in step 1. It is often faster and simpler to do this kind of thing through SQL instead of looping through and changing values in your array.
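A rough sketch of those steps with PDO, assuming a hypothetical routes table with a unique key on (map, route) and the $dbh connection and $data array from the question:
// 1. Temporary table with the same structure (dropped automatically when the connection closes)
$dbh->exec('CREATE TEMPORARY TABLE routes_tmp LIKE routes');

// 2. Load the freshly parsed data into the temporary table
$ins = $dbh->prepare('INSERT INTO routes_tmp (map, route, time, player, country)
                      VALUES (:map, :route, :time, :player, :country)');
foreach ($data as $value) {
    $ins->execute(array(
        ':map'     => $value['map'],
        ':route'   => $value['route'],
        ':time'    => $value['time'],
        ':player'  => $value['player'],
        ':country' => $value['country']
    ));
}

// 3. Upsert from the temporary table into the live table
$dbh->exec('INSERT INTO routes (map, route, time, player, country)
            SELECT map, route, time, player, country FROM routes_tmp
            ON DUPLICATE KEY UPDATE
                time = VALUES(time), player = VALUES(player), country = VALUES(country)');

// 4. Remove live rows that no longer appear in the new data
$dbh->exec('DELETE r FROM routes r
            LEFT JOIN routes_tmp t ON r.map = t.map AND r.route = t.route
            WHERE t.map IS NULL');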
The simplest way would be to truncate the table and then insert all the values. This will handle all of your requirements.
Assuming that is not viable, though, you need to remember which rows have been modified; that can be done using a flag, a version number, or a timestamp. For example:
Update the table, set the "updated" flag to 0 on every row
Loop through doing an upsert for every item (http://dev.mysql.com/doc/refman/5.6/en/insert-on-duplicate.html). Set the flag to 1 in each upsert.
Delete every entry from the database with the flag set to 0.
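For illustration, a minimal sketch of that flag-based approach with PDO, assuming the (hypothetical) routes table carries an extra TINYINT updated column and the same $dbh and $data as in the question:
// 1. Clear the flag on every existing row
$dbh->exec('UPDATE routes SET updated = 0');

// 2. Upsert each parsed row and mark it as seen
$up = $dbh->prepare('INSERT INTO routes (map, route, time, player, country, updated)
                     VALUES (:map, :route, :time, :player, :country, 1)
                     ON DUPLICATE KEY UPDATE
                         time = VALUES(time), player = VALUES(player),
                         country = VALUES(country), updated = 1');
foreach ($data as $value) {
    $up->execute(array(
        ':map'     => $value['map'],
        ':route'   => $value['route'],
        ':time'    => $value['time'],
        ':player'  => $value['player'],
        ':country' => $value['country']
    ));
}

// 3. Anything still flagged 0 was not in the new data, so delete it
$dbh->exec('DELETE FROM routes WHERE updated = 0');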
I have pulled in the data from a MySQL database using SELECT *, with the intention of using the data several times without making repeated SQL queries with WHERE.
Using this data I am extracting rows that contain a search element using
while($row=mysql_fetch_array($query_result)){ <<<if match add to new array>>> }
As there are thousands of rows this is taking a longer time than I want.
I am trying to use:
$row=mysql_fetch_array($query_result);
$a = array_search($word_to_check, $row);
echo $a;
This extracts the correct SQL headings but not the row number. What I want to achieve is:
if $word is found in mysql_fetch_array($query_result), then add the row where it was found to a new array for processing.
Any thoughts? Thanks in advance.
Don't use the mysql_* functions, they are deprecated. Use mysqli or PDO instead.
It's not wise to search an array of MySQL results in PHP when it can be done in MySQL. Let's say you have a table and you want to find all numbers in the number column that are greater than 5:
SELECT * FROM table_name WHERE number > 5
To find text you can use a simple clause:
SELECT * FROM table_name WHERE name = 'username'
You can also create more complex conditions.
From MYSQL manual:
The WHERE clause, if given, indicates the condition or conditions that rows must satisfy to be selected. where_condition is an expression that evaluates to true for each row to be selected. The statement selects all rows if there is no WHERE clause.
If you want to run the query only once, fetch all the results into a temporary array and do the search on it, like below:
<?php
$all_rows = array();
$match_rows = array();
$i = 0;
$limit = 100000;

while ($row = mysql_fetch_array($query_result)) {
    $all_rows[] = $row;
    $i++;
    if ($i % $limit == 0) { // this part only runs every 100,000 rows
        foreach ($all_rows as $search_row) {
            if (array_search($word_to_check, $search_row) !== false) {
                $match_rows[] = $search_row;
            }
        }
        $all_rows = array(); // reset the temporary array
    }
}

// process whatever is left over after the last full batch
foreach ($all_rows as $search_row) {
    if (array_search($word_to_check, $search_row) !== false) {
        $match_rows[] = $search_row;
    }
}
// This solution assumes the required word can be found in multiple columns