Calculating the relevance of a user based on specific data - PHP

I am currently in the process of trying to form an algorithm that will calculate the relevance of a user to another user based on certain bits of data.
Unfortunately, my Maths skills have deteriorated since leaving school almost a decade ago, and as such, I am very much struggling with this. I have found an algorithm online that pushes 'hot' posts to the top of a newsfeed and figure this is a good place to start. This is the algorithm/calculation I found online (in MySQL):
LOG10(ABS(activity) + 1) * SIGN(activity) + (UNIX_TIMESTAMP(created_at) / 300000)
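For reference, the same score could be computed in PHP along these lines (just a sketch; $activity and $createdAt stand in for a post's activity count and creation time, with made-up values):
$activity  = 42;                               // e.g. votes/likes on a post (made-up value)
$createdAt = strtotime('2017-03-26 13:30:47'); // UNIX timestamp of creation

// log10(|activity| + 1) * sign(activity) + timestamp / 300000
$hotness = log10(abs($activity) + 1) * ($activity <=> 0) + ($createdAt / 300000);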
What I am hoping to do is adapt the above concept to work with the data and models I have in my own application. Consider this user object (trimmed down):
{
    "id": 1,
    "first_name": "Joe",
    "last_name": "Bloggs",
    "counts": {
        "connections": 21,
        "mutual_connections": 16
    },
    "mutual_objects": [
        {
            "created_at": "2017-03-26 13:30:47"
        },
        {
            "created_at": "2017-03-26 14:25:32"
        }
    ],
    "last_seen": "2017-03-26 14:25:32"
}
There are three bits of relevant information above that need to be considered in the algorithm:
mutual_connections
mutual_objects, but taking into account that older objects should not drive up the relevance as much as newer objects (hence the created_at field).
last_seen
Can anyone suggest a fairly simple (if that's possible) way of doing this?
This was my idea, but in all honesty I have no idea what it is doing, so I cannot be sure whether it is a good solution. I have also left out last_seen, as I could not find a way to include it:
$mutual_date_sum = 0;
foreach ($user->mutual_objects as $mutual_object) {
    $mutual_date_sum += strtotime($mutual_object->created_at);
}
$mutual_date_thing = $mutual_date_sum / (300000 * count($user->mutual_objects));
$relevance = log10($user->counts->mutual_connections + 1) + $mutual_date_thing;
Just to be clear, I am not looking to implement some sort of government-level AI or a 50,000-line algorithm from a mathematical genius. I am merely looking for a relatively simple solution that will do the trick for the moment.
UPDATE
I have had a little play and have managed to build the following test. It seems mutual_objects carries most of the weight in this particular algorithm; I would expect to see users 4 and 5 higher up the results list given their large numbers of mutual_connections.
I don't know if this makes it easier to amend/play with, but this is probably the best I can do. Please help if you have any suggestions :-)
$users = [
    [
        'id' => 1,
        'mutual_connections' => 15,
        'mutual_objects' => [
            ['created_at' => '2017-03-26 14:25:32'],
            ['created_at' => '2017-03-26 14:25:32'],
            ['created_at' => '2017-02-26 14:25:32'],
            ['created_at' => '2017-03-15 14:25:32'],
            ['created_at' => '2017-01-26 14:25:32'],
            ['created_at' => '2017-03-26 14:25:32'],
            ['created_at' => '2016-03-26 14:25:32'],
            ['created_at' => '2017-03-26 14:25:32']
        ],
        'last_seen' => '2017-03-01 14:25:32'
    ],
    [
        'id' => 2,
        'mutual_connections' => 2,
        'mutual_objects' => [
            ['created_at' => '2016-03-26 14:25:32'],
            ['created_at' => '2015-03-26 14:25:32'],
            ['created_at' => '2017-02-26 14:25:32'],
            ['created_at' => '2017-03-15 14:25:32'],
            ['created_at' => '2017-01-26 14:25:32'],
            ['created_at' => '2017-03-26 14:25:32'],
            ['created_at' => '2016-03-26 14:25:32'],
            ['created_at' => '2016-03-26 14:25:32'],
            ['created_at' => '2016-03-26 14:25:32'],
            ['created_at' => '2017-03-15 14:25:32'],
            ['created_at' => '2017-02-26 14:25:32'],
            ['created_at' => '2017-03-15 14:25:32'],
            ['created_at' => '2017-01-26 14:25:32'],
            ['created_at' => '2017-03-12 14:25:32'],
            ['created_at' => '2016-03-13 14:25:32'],
            ['created_at' => '2017-03-17 14:25:32']
        ],
        'last_seen' => '2015-03-25 14:25:32'
    ],
    [
        'id' => 3,
        'mutual_connections' => 30,
        'mutual_objects' => [
            ['created_at' => '2017-02-26 14:25:32'],
            ['created_at' => '2017-03-26 14:25:32']
        ],
        'last_seen' => '2017-03-25 14:25:32'
    ],
    [
        'id' => 4,
        'mutual_connections' => 107,
        'mutual_objects' => [],
        'last_seen' => '2017-03-26 14:25:32'
    ],
    [
        'id' => 5,
        'mutual_connections' => 500,
        'mutual_objects' => [],
        'last_seen' => '2017-03-26 20:25:32'
    ],
    [
        'id' => 6,
        'mutual_connections' => 5,
        'mutual_objects' => [
            ['created_at' => '2017-03-26 20:55:32'],
            ['created_at' => '2017-03-25 14:25:32']
        ],
        'last_seen' => '2017-03-25 14:25:32'
    ]
];
$relevance = [];
foreach ($users as $user) {
    $mutual_date_sum = 0;
    foreach ($user['mutual_objects'] as $bubble) {
        // note: '=+' assigns +strtotime(...) each time rather than accumulating,
        // so only the last object's timestamp ends up in $mutual_date_sum
        $mutual_date_sum =+ strtotime($bubble['created_at']);
    }
    $mutual_date_thing = empty($mutual_date_sum) ? 1 : $mutual_date_sum / (300000 * count($user['mutual_objects']));
    $relevance[] = [
        'id' => $user['id'],
        'relevance' => log10($user['mutual_connections'] + 1) + $mutual_date_thing
    ];
}
$relevance = collect($relevance)->sortByDesc('relevance');
print_r($relevance->values()->all());
This prints out:
Array
(
    [0] => Array
        (
            [id] => 3
            [relevance] => 2485.7219150272
        )
    [1] => Array
        (
            [id] => 6
            [relevance] => 2484.8647045837
        )
    [2] => Array
        (
            [id] => 1
            [relevance] => 622.26175831599
        )
    [3] => Array
        (
            [id] => 2
            [relevance] => 310.84394042139
        )
    [4] => Array
        (
            [id] => 5
            [relevance] => 3.6998377258672
        )
    [5] => Array
        (
            [id] => 4
            [relevance] => 3.0334237554869
        )
)

This problem is a candidate for machine learning. Look for an introductory book; I think it is not very complex and you could do it yourself. Otherwise, depending on the income your website makes, you might consider hiring someone to do it for you.
If you prefer to do it "manually", you will build your own model with specific weights for different factors. Be aware that our brains deceive us very often, and what you think is a perfect model might be far from optimal.
I would suggest you start storing data right away on which users each user interacts with most, so you can compare your results with real data. It will also give you a foundation for building a proper machine learning system in the future.
Having said that, here is my proposal:
In the end, you want a list like this (with 3 users):
A->B: relevance
----------------
User1->User2: 0.59
User1->User3: 0.17
User2->User1: 0.78
User2->User3: 0.63
User3->User1: 0.76
User3->User2: 0.45
1) For each user
1.1) Compute and cache the age of every user's 'last_seen', in days, rounded down to an integer (floor).
1.2) Store max(age(last_seen)), let's call it just max. This is one value, not one per user, but you can only compute it once you have computed the age of every user.
1.3) For each user, replace the stored age value with (max - age) / max to get a value between 0 and 1 (see the sketch after these steps).
1.4) Compute and cache the age of every object's 'created_at' as well, in days.
2) For each user, comparing with every other user
2.1) Regarding mutual connections, think of this: if A has 100 connections, 10 of them shared with B, and C has 500 connections, 10 of them shared with D, do you really take 10 as the value for the calculation in both cases? I would take the percentage. For A->B it would be 10 and for C->D it would be 2. And then /100 to have a value between 0 and 1.
2.2) Pick a maximum age for mutual objects to be relevant. Let's take 365 days.
2.3) In user A, remove objects older than 365 days. Do not really remove them, just filter them out for the sake of these calculations.
2.4) From the remaining objects, compute the percentage of mutual objects with each of the other users.
2.5) For each one of these other users, compute the average age of the objects in common from the previous step. Take the maximum age (365), subtract the computed average and /365 to have a value between 0 and 1.
2.6) Retrieve the age value of the other user.
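As a rough illustration of steps 1.1-1.3, here is a minimal PHP sketch (it assumes $users is shaped like the test data in the question and that strtotime() agrees with the stored timezone):
// Step 1.1: age of each user's last_seen, in whole days (rounded down)
$ages = [];
foreach ($users as $user) {
    $ages[$user['id']] = (int) floor((time() - strtotime($user['last_seen'])) / 86400);
}

// Step 1.2: the single maximum age across all users
$max = max($ages);

// Step 1.3: normalise each age to a 0..1 score (1 = seen most recently)
$lastSeenScore = [];
foreach ($ages as $id => $age) {
    $lastSeenScore[$id] = $max > 0 ? ($max - $age) / $max : 1;
}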
So, for each combination of A->B, you have four values between 0 and 1:
MC: mutual connections A-B
MO: mutual objects A-B
OA: avg mutual object age A-B
BA: age of B
Now you have to assign weights to each one of them in order to find the optimal solution. Assign percentages that sum to 100 to make your life easier:
Relevance = 40 * MC + 30 * MO + 10 * OA + 20 * BA
In this case, since OA is so related to MO, you can mix them:
Relevance = 40 * MC + 20 * MO + 20 * MO * OA + 20 * BA
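As a minimal sketch of that weighting (illustrative only; it assumes the four factors have already been normalised to 0..1 as in the steps above, and the weights are the example ones, to be tuned against real data):
// $mc = mutual connections, $mo = mutual objects, $oa = avg mutual object age,
// $ba = age of B -- all already normalised to values between 0 and 1
function relevance(float $mc, float $mo, float $oa, float $ba): float
{
    // Example weights summing to 100; adjust once you can compare against real interactions
    return 40 * $mc + 20 * $mo + 20 * $mo * $oa + 20 * $ba;
}

// e.g. relevance(0.10, 0.25, 0.80, 0.95) => 4 + 5 + 4 + 19 = 32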
I would suggest running this overnight, every day. There are many ways to improve and optimize the process... have fun!

Related

How to get rows grouped by with total count (MySQL, Laravel)

Background
We have this table lead_activity in our MySQL database, with the following fields:
1. id
2. lead_id
3. activity
example of rows:
id | lead_id | activity
---|---------|----------
1  | 5       | Called
2  | 5       | Selled
3  | 6       | Contacted
4  | 9       | Contacted
In Laravel, I have the following query:
$this->data['lead_activities'] = LeadActivity::select(DB::Raw('count(*) as total'), 'activity')->groupBy('activity')->get();
With this result:
[
    0 => [
        'total' => 1,
        'activity' => 'Called'
    ],
    1 => [
        'total' => 1,
        'activity' => 'Selled'
    ],
    2 => [
        'total' => 2,
        'activity' => 'Contacted'
    ],
]
Request
How can I build this query (whether Eloquent or raw SQL) so that I get something similar to the following result in just one query, without any foreach afterwards:
[
    0 => [
        'total' => 1,
        'activity' => 'Called',
        'lead_ids' => [5]
    ],
    1 => [
        'total' => 1,
        'activity' => 'Selled',
        'lead_ids' => [5],
    ],
    2 => [
        'total' => 2,
        'activity' => 'Contacted',
        'lead_ids' => [6,9]
    ],
]
Have a look at the MySQL GROUP_CONCAT() function.
I think that should solve your problem. It does not return an array of lead_ids but a concatenation (string) of all lead_ids, which you can work with afterwards (and eventually transform into an array).
$list = LeadActivity::select(DB::Raw('count(*) as total'), 'activity', DB::Raw('GROUP_CONCAT(lead_id) as lead_id_aggr'))
    ->groupBy('activity')
    ->get();
reference: https://dev.mysql.com/doc/refman/8.0/en/aggregate-functions.html#function_group-concat
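If you want real arrays rather than strings, a possible post-processing step on the resulting collection (a sketch; the lead_ids attribute is just an ad-hoc name for illustration):
// Turn the comma-separated lead_id_aggr string into an array of integers
$list = $list->map(function ($row) {
    $row->lead_ids = array_map('intval', explode(',', $row->lead_id_aggr));
    return $row;
});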
You can try this:
LeadActivity::query()
    ->select(
        'activity',
        DB::raw('GROUP_CONCAT(lead_id) as leads'),
        DB::raw('COUNT(*) as activities_count'),
    )->groupBy('activity')
    ->get();
which would produce this result:
Illuminate\Database\Eloquent\Collection {#2057
    all: [
        App\Models\LeadActivity {#2059
            activity: "Called",
            leads: "5",
            activities_count: 1,
        },
        App\Models\LeadActivity {#2060
            activity: "Contacted",
            leads: "6,9",
            activities_count: 2,
        },
        App\Models\LeadActivity {#2061
            activity: "Sold",
            leads: "5",
            activities_count: 1,
        },
    ],
}

Laravel unique identifiers for latest foreign key in collection

I have a Laravel collection with record IDs and foreign keys:
{id=1, foreign_id=1},
{id=2, foreign_id=1},
{id=3, foreign_id=2},
{id=4, foreign_id=3},
{id=5, foreign_id=2}
I expect:
{id=2, foreign_id=1},
{id=5, foreign_id=2},
{id=4, foreign_id=3}
I want to get unique foreign_id values from the collection when a foreign_id occurs more than once.
I then want to keep the latest entry (the one with the highest id) for each foreign_id.
Try $collection->unique('foreign_id');
Here is an example; you can check it against your own data:
$a = collect([
    ['id' => 1, 'foreign_id' => 2],
    ['id' => 2, 'foreign_id' => 1],
    ['id' => 3, 'foreign_id' => 2],
    ['id' => 4, 'foreign_id' => 3],
    ['id' => 5, 'foreign_id' => 2],
]);
$a->unique('foreign_id');
The easiest way to do it is to sort the collection by "id" in descending order and then use the unique method on "foreign_id":
$myCollection->sortByDesc('id')->unique('foreign_id')
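For example, applied to the collection from the question, a quick sketch of what this produces:
// unique() keeps the first occurrence per foreign_id, which after
// sortByDesc('id') is the row with the latest id
$latest = collect([
    ['id' => 1, 'foreign_id' => 1],
    ['id' => 2, 'foreign_id' => 1],
    ['id' => 3, 'foreign_id' => 2],
    ['id' => 4, 'foreign_id' => 3],
    ['id' => 5, 'foreign_id' => 2],
])->sortByDesc('id')->unique('foreign_id')->values();

// $latest->all() => [
//     ['id' => 5, 'foreign_id' => 2],
//     ['id' => 4, 'foreign_id' => 3],
//     ['id' => 2, 'foreign_id' => 1],
// ]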

How can I manage some irregular array to regular array?

I am writing code with some arrays that have different structures, but I must extract the data to do something else. How can I manage these arrays?
The arrays' structures are as follows:
$a = [
    'pos1' => 'somedata',
    'pos2' => ['data2', 'data3'],
    'pos3' => '',
];
$b = [
    [
        'pos1' => ['data1', 'data2', ['nest1', 'nest2']],
        'pos2' => ['data1', 'data2', 'data3'],
    ],
    ['data1', 'data2'],
    'data4',
];
An array's index can be a key or a position, and the value at a given index may itself be an array with the same kind of structure. A tougher problem is that the subarrays can be nested, and the nesting depth varies.
Fortunately, every array has its own fixed structure.
I want to convert these arrays to the format below. When an index is a position, change it to a keyword; when the index is already a keyword, leave it unchanged.
$a = [
    'pos1' => 'somedata',
    'pos2' => [
        'pos2_1' => 'data2',
        'pos2_2' => 'data3'
    ],
    'pos3' => '',
];
$b = [
    'pos1' => [
        'pos1_1' => [
            'pos1_1_1' => 'data1',
            'pos1_1_2' => 'data2',
            'pos1_1_3' => [
                'pos1_1_3_1' => 'nest1',
                'pos1_1_3_2' => 'nest2',
            ],
        ],
        'pos1_2' => [
            'pos1_2_1' => 'data1',
            'pos1_2_2' => 'data2',
            'pos1_2_3' => 'data3',
        ],
    ],
    'pos2' => ['data1', 'data2'],
    'pos3' => 'data4',
];
My first solution was to write a conversion function for every array (with the keywords specified in the function). But that is a huge task and difficult to manage.
The second solution is to write a common function with two arguments: the source array and a configuration that maps each keyword to the corresponding value index. For example:
$a = [0, ['pos10' => 1]];
$conf = [
    // It means that when the value index is 0, it will be changed into 'pos1'
    'pos1' => 0,
    'pos2' => 1,
];
The common function will generate this result:
$result = [
    'pos1' => 0,
    'pos2' => ['pos10' => 1],
];
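For illustration only, a minimal sketch of what such a config-driven converter might look like (the function name and behaviour are made up to match the example above; it only handles one level of nesting):
// Replace numeric indexes with the keywords given in $conf ($conf maps keyword => index);
// string keys in $source are left untouched
function applyKeywords(array $source, array $conf): array
{
    $result = [];
    foreach ($source as $index => $value) {
        if (is_string($index)) {
            $result[$index] = $value;                     // already a keyword
        } else {
            $keyword = array_search($index, $conf, true); // keyword configured for this position
            $result[$keyword !== false ? $keyword : $index] = $value;
        }
    }
    return $result;
}

// applyKeywords([0, ['pos10' => 1]], ['pos1' => 0, 'pos2' => 1])
// => ['pos1' => 0, 'pos2' => ['pos10' => 1]]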
But this solution leads to a problem: the config is difficult to understand and design, and other people will spend a lot of time understanding the format after conversion.
Is there a better solution for managing these arrays, so that other people can use them easily?
Thanks.

Merging doc_count result from keyed buckets

I have a query like
'aggs' => [
    'deadline' => [
        'date_histogram' => [
            'field' => 'deadline',
            'interval' => 'month',
            'keyed' => true,
            'format' => 'MMM'
        ]
    ]
]
The result I am getting is buckets keyed by month name.
The problem I am facing is that buckets keyed by a month name for a previous year are overwritten by the same month of the next year (because the key is the same).
I want results where the doc_count of the overwritten buckets from the previous year is merged into the doc_count of the next.
You can either add a separate month field during indexing and perform the aggregation on it, or use the script below:
{
    "size": 0,
    "aggs": {
        "deadline": {
            "histogram": {
                "script": { "inline": "return doc['deadline'].value.getMonthOfYear()" },
                "interval": 1
            }
        }
    }
}
Creating a separate month field will give better performance.
Change the format from MMM to YYYY-MMM as below:
'aggs' => [
    'deadline' => [
        'date_histogram' => [
            'field' => 'deadline',
            'interval' => 'month',
            'keyed' => true,
            'format' => 'YYYY-MMM'
        ]
    ]
]
After this you can handle the merging process at the application level.
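A rough PHP sketch of that merge, assuming the response contains keyed buckets of the form 'YYYY-MMM' => ['doc_count' => n] (the $response variable and its exact structure are assumptions based on the keyed date_histogram above):
// Merge keyed buckets such as '2016-Mar' and '2017-Mar' into a single 'Mar' count
$merged = [];
foreach ($response['aggregations']['deadline']['buckets'] as $key => $bucket) {
    [, $month] = explode('-', $key, 2); // keep only the month part of 'YYYY-MMM'
    $merged[$month] = ($merged[$month] ?? 0) + $bucket['doc_count'];
}
// e.g. ['Mar' => 12, 'Apr' => 7, ...]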

PHP MongoDB aggregation : how to $sum only when value is greater than 0?

I am using PHP to access a MongoDB collection in which I have recorded the players of a game:
{username: "John", stats: {games_played: 79, boosters_used: 1, crystals: 5}},
{username: "Bill", stats: {games_played: 0, boosters_used: 0, crystals: 20}},
{username: "Jane", stats: {games_played: 154, boosters_used: 14, crystals: 37}},
{username: "Sarah", stats: {games_played: 22, boosters_used: 0, crystals: 0}},
{username: "Thomas", stats: {games_played: 0, boosters_used: 0, crystals: 20}},
In my PHP script I am doing this to get sums and averages:
$filter = [
    ['$group' => [
        'Players count' => ['$sum' => 1],
        'avgGamesPlayed' => [
            '$avg' => '$stats.games_played'
        ],
        'TotalGamesPlayed' => [
            '$sum' => '$stats.games_played'
        ],
    ]],
    ['$sort' => [get('sort', 'count') => (int) get('sort_order', -1)]],
];
$options = [];
$m->aggregate($filter, $options);
If I echo the result I'll obtain:
Players count = 5;
avgGamesPlayed = 51;
TotalGamesPlayed = 255;
What I would like to do is to get the $sum of players where stats.games_played is greater than 0. In this particular case the result would be 3.
I know that there is a possibility to do this if I use find and '$gt' => 0, but I really need to stick with aggregate. So I am trying to do something like this:
'Players count' => ['$sum' => ['$gt' => 0]],
But it doesn't work, and I've been stuck on this for weeks now. I have read the docs, but I'm not that familiar with MongoDB, which is why I'm asking for your knowledge.
If you have the answer to this question I'd appreciate it a lot, thank you.
This is where the $cond operator fits nicely. You can use it as an expression within the $sum operator's document; it evaluates the logic and returns 0 when the $stats.games_played field evaluates to 0, and 1 otherwise (that is, > 0).
Following this approach will yield the desired outcome:
$pipeline = [
    ['$group' => [
        'Players count' => [
            '$sum' => [
                '$cond' => [ [ '$gt' => [ '$stats.games_played', 0 ] ], 1, 0 ]
            ]
        ],
        'avgGamesPlayed' => [
            '$avg' => '$stats.games_played'
        ],
        'TotalGamesPlayed' => [
            '$sum' => '$stats.games_played'
        ],
    ]],
    ['$sort' => [get('sort', 'count') => (int) get('sort_order', -1)]],
];
$options = [];
$m->aggregate($pipeline, $options);
