How to Update Rows (with different data) in one DB Query Laravel - php

I'm making a gallery with sortable photos with Laravel and jQuery UI Sortable.
My function in the controller gets a nice array:
$items = [0 => 22, 1 => 25, 2 => 45];
But there will be approx. 150-200 photos in one gallery. Is there any way to do this in a single DB query instead of 150-200? At the moment my controller does this:
<?php
foreach ($photos['item'] as $position => $id) {
    Photo::where('id', $id)->update(['position' => $position]);
}
But this runs approx. 150-200 DB queries, which is awful.
Edit #1
Basically I need something like this (two corresponding arrays with ids and positions):
$ids = [22, 24, 25, 34];
$positions = [0, 1, 2, 3];
Photos::where('id', $ids)->update(['position' => $positions]);
But I can't find anything about this approach.

Take a look here: Eloquent model mass update.
Basically, you are looking for a mass or bulk update.
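One common way to do a bulk position update in a single statement is an `UPDATE ... CASE` expression. Here is a minimal sketch, assuming a `photos` table with integer `id` and `position` columns; the helper only builds the SQL string (with the ids cast to int), which you could then pass to something like `DB::update($sql)` in Laravel:

```php
<?php
// Sketch: collapse N position updates into one UPDATE using a CASE
// expression. The `photos` table and column names are assumptions taken
// from the question; the helper name is illustrative.

function buildPositionUpdateSql(array $items): string
{
    // $items maps position => photo id, e.g. [0 => 22, 1 => 25, 2 => 45]
    $cases = [];
    $ids = [];
    foreach ($items as $position => $id) {
        $id = (int) $id;                       // cast to keep the raw SQL safe
        $cases[] = "WHEN {$id} THEN " . (int) $position;
        $ids[] = $id;
    }
    return 'UPDATE photos SET position = CASE id '
        . implode(' ', $cases)
        . ' END WHERE id IN (' . implode(', ', $ids) . ')';
}

echo buildPositionUpdateSql([0 => 22, 1 => 25, 2 => 45]);
// UPDATE photos SET position = CASE id WHEN 22 THEN 0 WHEN 25 THEN 1
//   WHEN 45 THEN 2 END WHERE id IN (22, 25, 45)
```

On newer Laravel versions (8+), `Photo::upsert($rows, ['id'], ['position'])` can achieve the same single-query effect without raw SQL.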

Related

How to optimize Postgres for bulk insert data?

I have a table that must be optimized for execution time when bulk-inserting over 10 000 rows.
Table columns
I try to insert the data using PHP, where each row is an element of an array:
$dataset = [["columnindex" => 1, "rowindex" => 2, "type" => "num", "value" => 400], ...]
The problem is that when I try to insert an array with 100 rows, Postgres does not work, and PDO does not return any errors either.
I use insert from Laravel:
SessionPrepared::insert($dataset);
If I slice the array, it is added to the DB:
$dataset = array_slice($dataset, 0, 10);
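A likely culprit with large multi-row inserts is the driver's bound-parameter limit (PostgreSQL's wire protocol caps a single prepared statement at 65 535 parameters, and each row here binds 4 of them). A common workaround is to insert in chunks; a minimal sketch, with illustrative chunk sizes:

```php
<?php
// Sketch: split the dataset into chunks so each INSERT stays well under
// the bound-parameter limit. The chunk size of 1000 is an assumption,
// not a tuned value.

$dataset = [];
for ($i = 0; $i < 2500; $i++) {
    $dataset[] = ['columnindex' => $i, 'rowindex' => $i, 'type' => 'num', 'value' => $i];
}

$chunks = array_chunk($dataset, 1000);
foreach ($chunks as $chunk) {
    // In Laravel this would be: SessionPrepared::insert($chunk);
    // Each chunk of 1000 rows x 4 columns binds 4000 parameters.
}

echo count($chunks); // 3 (chunks of 1000, 1000 and 500 rows)
```

Wrapping the loop in a transaction keeps the chunked inserts atomic and usually faster.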

Most efficient way to query with two keys in either order?

I have some data I receive of the following format:
{ gameId: 1, playerId: "john", score: .12 }
{ gameId: 1, playerId: "mary", score: .75 }
{ gameId: 1, playerId: "jane", score: .32 }
{ gameId: 2, playerId: "john", score: .89 }
{ gameId: 2, playerId: "mary", score: .91 }
{ gameId: 2, playerId: "jane", score: .99 }
And I want to expose these endpoints:
GET games => get a list of all games
GET games/{id} => list all the scores for the game and the average score among all players
GET players => get a list of all players
GET players/{id} => list all the scores for the player and their average score across all games
So setting DBs aside for a moment, I thought of a hash map approach where I basically have two maps:
gameScores[gameId][playerId] = score
playerScores[playerId][gameId] = score
This way I could very efficiently return results as the following:
GET games => array_keys(gameScores)
GET games/{id} => ['average' => avg(gameScores[id]), 'games' => gameScores[id]]
GET players => array_keys(playerScores)
GET players/{id} => ['average' => avg(playerScores[id]), 'games' => playerScores[id]]
This seems like a very efficient way to return results in near-O(1) time, but is storing the dataset twice too great a drawback? Imagine if score were some very large object instead.
I'm doing this in PHP, so not using something like a Python tuple solution here, but I feel like this problem is very generalizable (hash map with 2 keys) and I'm wondering if there's a better way to approach this rather than duplicating the hash map for both key orders.
Is using a database the only more optimal approach here? If so, would it be best to enter the data into a single table with those 3 columns, or should I be splitting the tables?
You only need one array and search it with
array_search($search, array_column($array, $field));
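To make that one-liner concrete, here is a small sketch with the question's data (note that `array_search()` returns only the first matching row index; `array_keys()` with a search value returns all of them):

```php
<?php
// Sketch of the single-array lookup: array_column() extracts one field
// from every row, and array_search() finds the index of the first row
// holding a given value. Data is taken from the question.

$rows = [
    ['gameId' => 1, 'playerId' => 'john', 'score' => 0.12],
    ['gameId' => 1, 'playerId' => 'mary', 'score' => 0.75],
    ['gameId' => 2, 'playerId' => 'john', 'score' => 0.89],
];

// Index of the first row where playerId === 'mary':
$i = array_search('mary', array_column($rows, 'playerId'));
echo $rows[$i]['score']; // 0.75

// All row indices for a player (e.g. every game john played):
$johnRows = array_keys(array_column($rows, 'playerId'), 'john');
```

This keeps a single copy of the data; the trade-off is that each lookup is O(n) rather than the O(1) of the duplicated hash maps.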

project the sum of values in a mongo subdocument

I have a Mongo Collection that I'm trying to aggregate in which I need to be able to filter the results based on a sum of values from a subdocument. Each of my entries has a subdocument that looks like this
{
    "_id": <MongoId>,
    "clientID": "some ID",
    <Other fields I can filter on normally>
    "bidCompData": [
        {
            "lineItemID": "210217",
            "qtyBid": 3,
            "priceBid": 10.25,
            "qtyComp": 0,
            "description": "Lawn Mowed",
            "invoiceID": 23
        },
        {
            <More similar entries>
        }
    ]
}
What I'm trying to do is filter on the sum of qtyBid in a given record. For example, my user could specify that they only want records that have a total qtyBid across all of the bidCompData that's greater than 5. My research shows that I can't use $sum outside of the $group stage in the pipeline but I need to be able to sum just the qtyBid values for each individual record. Presently my pipeline looks like this.
array(
    array('$project' => $basicProjection), // fields to project, calculated earlier from the input parameters
    array('$match' => $query),
    array('$group' => array(
        '_id' => array('clientID' => '$clientID'),
        'count' => array('$sum' => 1)
    ))
)
I tried having another group and an unwind before the group I presently have in my pipeline so that I could get the sum there, but it doesn't let me keep my fields besides the _id and the sum field. Is there a way to do this without using $where? My database is large and I can't afford the speed hit from the JS execution.
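If the server is MongoDB 3.2 or newer, `$sum` accepts an array expression outside of `$group`, so the per-document total can be computed without an `$unwind`. A hedged sketch (using `$addFields`, which needs 3.4+; on 3.2 the same expression works inside a `$project` that lists the other fields), with the threshold of 5 taken from the question:

```php
<?php
// Sketch: compute the per-document total of qtyBid directly from the
// subdocument array, then filter on it. The field name totalQtyBid is
// illustrative.

$pipeline = [
    ['$addFields' => ['totalQtyBid' => ['$sum' => '$bidCompData.qtyBid']]],
    ['$match' => ['totalQtyBid' => ['$gt' => 5]]],
    // ...followed by the existing $project / $group stages
];
```

Because `'$bidCompData.qtyBid'` resolves to the array of all `qtyBid` values in the document, `$sum` adds them up per record, which avoids both `$unwind` and `$where`.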

MongoDB - Aggregation Framework, PHP and averages

First time here - please go easy… ;)
I'm starting off with MongoDB for the first time, using the official PHP driver to interact with an application.
Here's the first problem I've run into with regard to the aggregation framework.
I have a collection of documents, all of which contain an array of numbers, like in the following shortened example...
{
    "_id": ObjectId("51c42c1218ef9de420000002"),
    "my_id": 1,
    "numbers": [482, 49, 382, 290, 31, 126, 997, 20, 145]
}
{
    "_id": ObjectId("51c42c1218ef9de420000006"),
    "my_id": 2,
    "numbers": [19, 234, 28, 962, 24, 12, 8, 643, 145]
}
{
    "_id": ObjectId("51c42c1218ef9de420000008"),
    "my_id": 3,
    "numbers": [912, 18, 456, 34, 284, 556, 95, 125, 579]
}
{
    "_id": ObjectId("51c42c1218ef9de420000012"),
    "my_id": 4,
    "numbers": [12, 97, 227, 872, 103, 78, 16, 377, 20]
}
{
    "_id": ObjectId("51c42c1218ef9de420000016"),
    "my_id": 5,
    "numbers": [212, 237, 103, 93, 55, 183, 193, 17, 346]
}
Using the aggregation framework and PHP (which I think is the correct way), I'm trying to work out the average number of consecutive documents in which a given number doesn't appear (within the numbers array) before it appears again.
For example, the average gap for the number 20 in the above example is 1.5 (there's a gap of 2 documents, followed by a gap of 1; add these values together and divide by the number of gaps).
I can get as far as working out if the number 20 is within the results array, and then using the $cond operator, passing a value based on the result. Here’s my PHP…
$unwind_results = array(
    '$unwind' => '$numbers'
);
$project = array(
    '$project' => array(
        'my_id' => '$my_id',
        'numbers' => '$numbers',
        'hit' => array('$cond' => array(
            array('$eq' => array('$numbers', 20)),
            0,
            1
        ))
    )
);
$group = array(
    '$group' => array(
        '_id' => '$my_id',
        'hit' => array('$min' => '$hit'),
    )
);
$sort = array(
    '$sort' => array('_id' => 1),
);
$avg = $c->aggregate(array($unwind_results, $project, $group, $sort));
What I was trying to achieve was to set up some kind of incremental counter that resets every time the number 20 appears in the numbers array, and then grab all of those counts and work out the average from there. But I'm truly stumped.
I know I could work out the average from a collection of documents on the application side, but ideally I’d like Mongo to give me the result I want so it’s more portable.
Would Map/Reduce need to get involved somewhere?
Any help/advice/pointers greatly received!
As Asya said, the aggregation framework isn't usable for the last part of your problem (averaging gaps in "hits" between documents in the pipeline). Map/reduce also doesn't seem well-suited to this task, since you need to process the documents serially (and in a sorted order) for this computation and MR emphasizes parallel processing.
Given that the aggregation framework does process documents in a sorted order, I was brainstorming yesterday about how it might support your use case. If $group exposed access to its accumulator values during the projection (in addition to the document being processed), we might be able to use $push to collect previous values in a projected array and then inspect them during a projection to compute these "hit" gaps. Alternatively, if there was some facility to access the previous document encountered by a $group for our bucket (i.e. group key), this could allow us to determine diffs and compute the gap span as well.
I shared those thoughts with Mathias, who works on the framework, and he explained that while all of this might be possible for a single server (were the functionality implemented), it would not work at all on a sharded infrastructure, where $group and $sort operations are distributed. It would not be a portable solution.
I think your best option is to run the aggregation with the $project you have, and then process those results in your application language.
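For the application-side step, a small sketch of the gap computation: given the documents in `_id` order and a per-document flag for "contains 20" (which is what the aggregation above yields, inverted), average the run lengths of misses that follow the first appearance. The function name and the boolean-flag input are assumptions made for illustration; they match the 1.5 example in the question.

```php
<?php
// Sketch: average the gaps (runs of documents NOT containing the number)
// between appearances, including the trailing run after the last hit,
// as in the question's worked example.

function averageGap(array $contains): ?float
{
    $gaps = [];
    $run = 0;
    $seen = false;
    foreach ($contains as $hit) {
        if ($hit) {
            if ($seen && $run > 0) {
                $gaps[] = $run;        // close the gap that just ended
            }
            $seen = true;
            $run = 0;
        } elseif ($seen) {
            $run++;                    // only count misses after the first hit
        }
    }
    if ($seen && $run > 0) {           // trailing gap after the last appearance
        $gaps[] = $run;
    }
    return $gaps ? array_sum($gaps) / count($gaps) : null;
}

// Docs 1 and 4 contain 20; docs 2, 3 and 5 do not -> gaps of 2 and 1.
echo averageGap([true, false, false, true, false]); // 1.5
```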

Checking users friendship

Continuing this question: in my web app I want to allow users to add friends, like on Facebook. In my previous question I finally decided to use the database structure #yiding suggested:
I would de-normalize the relation such that it's symmetric. That is,
if 1 and 2 are friends, i'd have two rows (1,2) and (2,1).
The disadvantage is that it's twice the size, and you have to do 2
writes when forming and breaking friendships. The advantage is all
your read queries are simpler. This is probably a good trade-off
because most of the time you are reading instead of writing.
This has the added advantage that if you eventually outgrow one
database and decide to do user-sharding, you don't have to traverse
every other db shard to find out who a person's friends are.
So now, if user 1 adds user 2, and user 5 adds user 2, something like this will go into the DB:
ROW_ID  USER_ID  FRIEND_ID  STATUS
1       1        2          0
2       2        1          0
3       5        2          0
4       2        5          0
As you can see, we insert the "request sender" row first. Now imagine that user 5 is logged in and we want to show him his friendship requests; here is my query:
$check_requests = mysql_query("SELECT * FROM friends_tbl WHERE FRIEND_ID = '5'");
The above query fetches ROW_ID = 4, which makes it look as though user 2 has added user 5. But he has NOT; actually user 5 added user 2. So here we should not show any friendship request to user 5; instead we need to show one to user 2.
How am I supposed to check this correctly?
This is an edited answer.
Your SQL query should look like this:
SELECT USER_ID, FRIEND_ID FROM friends_tbl WHERE FRIEND_ID = '5' OR USER_ID = '5'
Then you have to parse your result. Assuming you have got a PHP array like this:
$result = array(
    0 => array(
        'USER_ID' => 5,
        'FRIEND_ID' => 2
    ),
    1 => array(
        'USER_ID' => 2,
        'FRIEND_ID' => 5
    ),
    2 => array(
        'USER_ID' => 5,
        'FRIEND_ID' => 8
    ),
    3 => array(
        'USER_ID' => 8,
        'FRIEND_ID' => 5
    )
);
You just have to get the even rows:
$result_final = array();
for ($i = 0; $i < count($result); $i++) {
    if ($i % 2 == 0) $result_final[] = $result[$i];
}
Then you will have an array like this:
$result = array(
    0 => array(
        'USER_ID' => 5,
        'FRIEND_ID' => 2
    ),
    1 => array(
        'USER_ID' => 5,
        'FRIEND_ID' => 8
    )
);
Alternative method: Make your SQL look like this:
SELECT FRIEND_ID FROM friends_tbl WHERE USER_ID = '5'
That's all.
Friend-request notifications should be stored in something like a message inbox. The relation you described is meant to hold, well, friendship relations, not the fact that the request event happened. You should consider creating a separate relation to hold notifications and filling it alongside the two inserts on friends_tbl.
You'll need to keep a temporary table (or a permanent one, for data mining) that holds all the requests made from one user to another, for example:
table: friendRequest
inviterId  inviteeId  status  tstamp
2          5          0       NOW()
5          8          0       NOW()
assuming that 0 is unapproved.
Then you'll query for all pending requests:
SELECT * FROM friendRequest WHERE inviteeId = :currentLoggedUserId AND status = 0
Once a user approves a request, you'll run a transaction that records this newly formed relation and updates the friendRequest table.
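To make the transactional shape visible, here is a minimal sketch; the helper only assembles the SQL statements, table and column names follow this answer, and the assumption that status 1 means "approved" is mine:

```php
<?php
// Sketch of the approval step: two symmetric friendship rows (per the
// accepted de-normalized design) plus an update of the request row,
// wrapped in a transaction. Statuses and the helper name are assumptions.

function approveFriendshipStatements(int $inviterId, int $inviteeId): array
{
    return [
        'START TRANSACTION',
        "INSERT INTO friends_tbl (USER_ID, FRIEND_ID, STATUS) VALUES ({$inviterId}, {$inviteeId}, 1)",
        "INSERT INTO friends_tbl (USER_ID, FRIEND_ID, STATUS) VALUES ({$inviteeId}, {$inviterId}, 1)",
        "UPDATE friendRequest SET status = 1 WHERE inviterId = {$inviterId} AND inviteeId = {$inviteeId}",
        'COMMIT',
    ];
}

echo count(approveFriendshipStatements(2, 5)); // 5 statements
```

In real code these would be parameterized queries, not interpolated strings; the integer type hints here are only a stand-in for proper binding.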
You could also use this table to query asymmetric relations, where a user has many followers, by looking for non-mutual friendships.
