Hope all are doing well. I need to print an array of available time slots.
Assume there are 2 existing orders for 2021.11.15: 11:30am to 12:00pm and 2:00pm to 4:15pm.
My order needs 1h 30m to complete, so the time slots should fall between 8:00am and 6:00pm, skipping the times of the already existing orders.
My expected result is:
array:4 [
  0 => array:2 [
    "start" => "08:00:00"
    "end" => "09:30:00"
  ]
  1 => array:2 [
    "start" => "09:30:00"
    "end" => "11:00:00"
  ]
  2 => array:2 [
    "start" => "12:00:00"
    "end" => "13:30:00"
  ]
  3 => array:2 [
    "start" => "16:15:00"
    "end" => "17:45:00"
  ]
]
The following line is used to get the existing orders with their start and end times:
$existOrders = $this->orderHasPropartnerService->getOrderExistForDateProPartner($proPartnerDefaultLocation->id, $selectedDateRecord->date);
Then I loop over them:
if ($existOrders->count() > 0) {
    $dateStartTime = $selectedDateRecord->time_from;
    $x = 0;
    $firstEndingTime = Carbon::parse($dateStartTime)->addMinutes($totalTimeToOrder)->format('H:i:s');
    foreach ($existOrders as $key1 => $existOrder1) {
        if ($existOrder1->order->time_slot_from < $firstEndingTime && $existOrder1->order->time_slot_to >= $firstEndingTime) {
            $timeCheckArray[$x]['start'] = $existOrder1->order->time_slot_to;
            $timeCheckArray[$x]['end'] = Carbon::parse($existOrder1->order->time_slot_to)->addMinutes($totalTimeToOrder)->format('H:i:s');
        } else {
            $timeSlotArray[$x]['start'] = $dateStartTime;
            $timeSlotArray[$x]['end'] = $firstEndingTime;
            $timeCheckArray[$x]['start'] = $firstEndingTime;
            $timeCheckArray[$x]['end'] = Carbon::parse($firstEndingTime)->addMinutes($totalTimeToOrder)->format('H:i:s');
        }
        if (isset($existOrders[$key1 + 1])) {
            if ($existOrders[$key1 + 1]->order->time_slot_from < $timeCheckArray[$x]['end'] && $existOrders[$key1 + 1]->order->time_slot_to >= $timeCheckArray[$x]['end']) {
            } else {
                $timeSlotArray[$x + 1]['start'] = $timeCheckArray[$x]['start'];
                $timeSlotArray[$x + 1]['end'] = $timeCheckArray[$x]['end'];
            }
        }
    }
}
For the above example, $dateStartTime will be 8:00am and $totalTimeToOrder will be 90 (1h 30m, in minutes, since it is passed to addMinutes).
When I print $timeSlotArray, the result is:
array:2 [
  0 => array:2 [
    "start" => "08:00:00"
    "end" => "09:30:00"
  ]
  1 => array:2 [
    "start" => "08:00:00"
    "end" => "09:30:00"
  ]
]
I would really appreciate it if someone could point out where the mistakes in this logic are. Thank you all for your valuable time.
The best approach would be to run a loop checking that the new order does not fall between the booked times.
Try something similar to this, converting the times to PHP DateTime objects:
$NewOrder= "4:59 pm";
$start= "5:42 am";
$end= "6:26 pm";
$date1 = DateTime::createFromFormat('h:i a', $NewOrder);
$date2 = DateTime::createFromFormat('h:i a', $start);
$date3 = DateTime::createFromFormat('h:i a', $end);
if ($date1 > $date2 && $date1 < $date3)
{
echo 'Not safe to add it ';
}
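Building on that overlap check, here is a minimal sketch of how the free 1h 30m slots between 8:00am and 6:00pm could be generated by walking through the booked intervals in order. It is untested, and the variable names ($bookedOrders, $slotMinutes, etc.) are only illustrative; it assumes the bookings are already sorted by start time:

use Carbon\Carbon;

// Assumed inputs: sorted booked intervals, the working window and the slot length.
$bookedOrders = [
    ['from' => '11:30:00', 'to' => '12:00:00'],
    ['from' => '14:00:00', 'to' => '16:15:00'],
];
$dayStart = Carbon::parse('08:00:00');
$dayEnd = Carbon::parse('18:00:00');
$slotMinutes = 90;

$timeSlots = [];
$cursor = $dayStart->copy();

// Treat the end of the day as a final zero-length "booking" so the last gap is filled too.
$boundaries = array_merge($bookedOrders, [['from' => $dayEnd->format('H:i:s'), 'to' => $dayEnd->format('H:i:s')]]);

foreach ($boundaries as $booking) {
    $bookedFrom = Carbon::parse($booking['from']);
    // Fill the gap before this booking with as many full slots as fit.
    while ($cursor->copy()->addMinutes($slotMinutes)->lte($bookedFrom)) {
        $timeSlots[] = [
            'start' => $cursor->format('H:i:s'),
            'end'   => $cursor->copy()->addMinutes($slotMinutes)->format('H:i:s'),
        ];
        $cursor->addMinutes($slotMinutes);
    }
    // Continue from the end of the booking (or from the cursor if it is already later).
    $cursor = Carbon::parse($booking['to'])->max($cursor);
}

For the example dates above this yields the four expected slots (08:00-09:30, 09:30-11:00, 12:00-13:30 and 16:15-17:45).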
To filter by date, I use the following query:
'body' => [
    'query' => [
        'bool' => [
            'filter' => [
                'range' => [
                    'expire_at' => [
                        'gte' => now()
                    ]
                ]
            ]
        ]
    ]
]
UPD: All records also have another date field, last_checked. The question is how to select records in which, for example, (expire_at - 7 days) > last_checked?
Check the range query documentation: you can use date math in the range parameters. E.g., expire_at - 7 days < now() means that the expiration will be within the next 7 days. Then you can do:
"range": {
"expire_at": {
"lt": "now+7d/d"
}
}
Note that this will also include already expired items. If you want to avoid that, you can add the condition that the expiration date has not passed yet:
"range": {
"expire_at": {
"lt": "now+7d/d",
"gte": "now/d"
}
}
Use this code:
"expire_at" => array(
    "lt" => "now+7d/d"
)
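Putting the two bounds together in the PHP array syntax used in the question, the search body might look roughly like this. It is only a sketch; $client is assumed to be an elasticsearch-php client instance and the index name is a placeholder:

$params = [
    'index' => 'your_index', // placeholder
    'body' => [
        'query' => [
            'bool' => [
                'filter' => [
                    'range' => [
                        'expire_at' => [
                            'gte' => 'now/d',    // not expired yet
                            'lt'  => 'now+7d/d', // expires within the next 7 days
                        ]
                    ]
                ]
            ]
        ]
    ]
];

$results = $client->search($params);

Note that the date-math strings are evaluated by Elasticsearch itself, so there is no need to pass now() from PHP.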
I have a query like this:
'aggs' => [
    'deadline' => [
        'date_histogram' => [
            'field' => 'deadline',
            'interval' => 'month',
            'keyed' => true,
            'format' => 'MMM'
        ]
    ]
]
The result I am getting is a set of keyed buckets with month names as keys.
The problem I am facing is that a bucket keyed by a month name for one year is overwritten by the same month of the next year (because the key is the same).
I want the doc_count of the overwritten buckets to be merged into the doc_count of the later ones.
You can either add a separate month field during indexing and aggregate on it, or use the script below:
{
  "size": 0,
  "aggs": {
    "deadline": {
      "histogram": {
        "script": { "inline": "return doc['deadline'].value.getMonthOfYear()" },
        "interval": 1
      }
    }
  }
}
Creating a separate month field will perform better.
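For the first option, here is a rough sketch of what storing a month field at indexing time could look like with the elasticsearch-php client; the index name, document id and the deadline_month field name are all assumptions:

$deadline = new DateTime('2017-08-16');

$client->index([
    'index' => 'tasks', // assumed index name
    'id'    => '1',
    'body'  => [
        'deadline'       => $deadline->format('Y-m-d'),
        'deadline_month' => $deadline->format('M'), // e.g. "Aug"
    ],
]);

A plain terms aggregation on deadline_month then groups the same month across years without any key collisions.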
Replace the format MMM with YYYY-MMM, as below:
'aggs' => [
    'deadline' => [
        'date_histogram' => [
            'field' => 'deadline',
            'interval' => 'month',
            'keyed' => true,
            'format' => 'YYYY-MMM'
        ]
    ]
]
After this you can handle the merging at the application level.
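As a sketch of that application-level merging, assuming the keyed buckets come back under $response['aggregations']['deadline']['buckets'] with keys like "2017-Aug":

$buckets = $response['aggregations']['deadline']['buckets']; // assumed response shape

$byMonth = [];
foreach ($buckets as $key => $bucket) {
    [, $month] = explode('-', $key); // "2017-Aug" -> "Aug"
    $byMonth[$month] = ($byMonth[$month] ?? 0) + $bucket['doc_count'];
}
// $byMonth now holds one combined doc_count per month name, e.g. ['Aug' => 12, 'Sep' => 7]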
Below is a sample document from my MongoDB collection:
{
    "_id" : ObjectId("57ed32f4070577ec56a56b9f"),
    "log_id" : "180308",
    "issue_id" : "108850",
    "author_key" : "priyadarshinim_contus",
    "timespent" : NumberLong(18000),
    "comment" : "Added charts in the dashboard page of the application.",
    "created_on" : "2017-08-16T18:22:04.816+0530",
    "updated_on" : "2017-08-16T18:22:04.816+0530",
    "started_on" : "2017-08-16T18:21:39.000+0530",
    "started_date" : "2017-08-02",
    "updated_date" : "2017-08-02",
    "role" : "PHP",
    "updated_at" : ISODate("2017-09-29T15:27:48.069Z"),
    "created_at" : ISODate("2017-09-29T15:27:48.069Z"),
    "status" : 1.0
}
I need to fetch records by started_date: I pass two dates and check $gte and $lte against started_date.
$current_date = '2017-08-31';
$sixmonthfromcurrent = '2017-08-01';

$worklogs = Worklog::raw(function ($collection) use ($issue_jira_id, $current_date, $sixmonthfromcurrent) {
    return $collection->aggregate([
        ['$match' => [
            'issue_id' => ['$in' => $issue_jira_id],
            'started_date' => ['$lte' => $current_date, '$gte' => $sixmonthfromcurrent]
        ]],
        ['$group' => [
            'issue_id' => ['$push' => '$issue_id'],
            '_id' => [
                'year' => ['$year' => '$started_date'],
                'week' => ['$week' => '$started_date'],
                'resource_key' => '$author_key'
            ],
            'sum' => ['$sum' => '$timespent']
        ]],
        ['$sort' => ['_id' => 1]]
    ]);
});
When I run this query I get the following error:
Can't convert from BSON type string to Date
How can I fix this error?
The only field in your $group that I see as troubling is the field week.
You could extract the year by doing a $project before your $group stage:
$project: {
    year: { $substr: [ "$started_date", 0, 4 ] },
    issue_id: 1,
    author_key: 1,
    timespent: 1
}
provided you know that the date string will always come in this format. Of course, you cannot use a substr operation to work out the week.
It would be easy, though, if your started_date field were an actual ISODate(): then you could use exactly what you wrote, as you probably already saw in the documentation.
If you really need the week field, which I imagine you do, then I'd suggest converting started_date to an ISODate().
You can do that with a bulkWrite:
db = db.getSiblingDB('yourDatabaseName');
var requests = [];
db.yourCollectionName.find().forEach(doc => {
    var date = yourFunctionThatConvertsStringToDate(doc.started_date);
    requests.push({
        'updateOne': {
            'filter': { '_id': doc._id },
            'update': { '$set': { "started_date": date } }
        }
    });
    if (requests.length === 500) {
        db.yourCollectionName.bulkWrite(requests);
        requests = [];
    }
});
if (requests.length > 0) {
    db.yourCollectionName.bulkWrite(requests);
}
Load this script directly on your MongoDB server and execute it there.
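If you would rather run the conversion from PHP instead of the mongo shell, a rough equivalent (assuming the jenssegers/laravel-mongodb Worklog model from the question, and that the started_date strings are parseable by strtotime) could be:

use MongoDB\BSON\UTCDateTime;

// Convert every string started_date into a real BSON date so $year/$week work on it.
foreach (Worklog::whereRaw(['started_date' => ['$type' => 'string']])->cursor() as $worklog) {
    $worklog->started_date = new UTCDateTime(strtotime($worklog->started_date) * 1000); // milliseconds
    $worklog->save();
}

After that, the $year and $week operators in your original $group stage should work unchanged.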
Hope this helps.
I am currently trying to form an algorithm that will calculate the relevance of one user to another based on certain bits of data.
Unfortunately, my maths skills have deteriorated since leaving school almost a decade ago, and as such I am struggling with this. I have found an algorithm online that pushes 'hot' posts to the top of a newsfeed, and I figure this is a good place to start. This is the algorithm/calculation I found (in MySQL):
LOG10(ABS(activity) + 1) * SIGN(activity) + (UNIX_TIMESTAMP(created_at) / 300000)
What I am hoping to do is adapt the above concept to work with the data and models I have in my own application. Consider this user object (trimmed down):
{
    "id": 1,
    "first_name": "Joe",
    "last_name": "Bloggs",
    "counts": {
        "connections": 21,
        "mutual_connections": 16
    },
    "mutual_objects": [
        {
            "created_at": "2017-03-26 13:30:47"
        },
        {
            "created_at": "2017-03-26 14:25:32"
        }
    ],
    "last_seen": "2017-03-26 14:25:32"
}
There are three bits of relevant information above that need to be considered in the algorithm:
mutual_connections
mutual_objects but taking into account that older objects should not drive up the relevance as much as newer objects, hence the created_at field.
last_seen
Can anyone suggest a fairly simple (if that's possible) way of doing this?
This was my idea, but in all honesty I have no idea what it is doing, so I cannot be sure it is a good solution. I have also left out last_seen, as I could not find a way to include it:
$mutual_date_sum = 0;
foreach ($user->mutual_objects as $mutual_object) {
    $mutual_date_sum =+ strtotime($mutual_object->created_at);
}
$mutual_date_thing = $mutual_date_sum / (300000 * count($user->mutual_objects));
$relevance = log10($user->counts->mutual_connections + 1) + $mutual_date_thing;
Just to be clear, I am not looking to implement some sort of government level AI, 50,000 line algorithm from a mathematical genius. I am merely looking for a relatively simple solution that will do the trick for the moment.
UPDATE
I have had a little play and managed to build the following test. It seems mutual_objects very much carries the weight in this particular algorithm; I would expect to see users 4 and 5 higher up the results list given their large number of mutual_connections.
I don't know if this makes it easier to amend/play with, but this is probably the best I can do. Please help if you have any suggestions :-)
$users = [
    [
        'id' => 1,
        'mutual_connections' => 15,
        'mutual_objects' => [
            ['created_at' => '2017-03-26 14:25:32'],
            ['created_at' => '2017-03-26 14:25:32'],
            ['created_at' => '2017-02-26 14:25:32'],
            ['created_at' => '2017-03-15 14:25:32'],
            ['created_at' => '2017-01-26 14:25:32'],
            ['created_at' => '2017-03-26 14:25:32'],
            ['created_at' => '2016-03-26 14:25:32'],
            ['created_at' => '2017-03-26 14:25:32']
        ],
        'last_seen' => '2017-03-01 14:25:32'
    ],
    [
        'id' => 2,
        'mutual_connections' => 2,
        'mutual_objects' => [
            ['created_at' => '2016-03-26 14:25:32'],
            ['created_at' => '2015-03-26 14:25:32'],
            ['created_at' => '2017-02-26 14:25:32'],
            ['created_at' => '2017-03-15 14:25:32'],
            ['created_at' => '2017-01-26 14:25:32'],
            ['created_at' => '2017-03-26 14:25:32'],
            ['created_at' => '2016-03-26 14:25:32'],
            ['created_at' => '2016-03-26 14:25:32'],
            ['created_at' => '2016-03-26 14:25:32'],
            ['created_at' => '2017-03-15 14:25:32'],
            ['created_at' => '2017-02-26 14:25:32'],
            ['created_at' => '2017-03-15 14:25:32'],
            ['created_at' => '2017-01-26 14:25:32'],
            ['created_at' => '2017-03-12 14:25:32'],
            ['created_at' => '2016-03-13 14:25:32'],
            ['created_at' => '2017-03-17 14:25:32']
        ],
        'last_seen' => '2015-03-25 14:25:32'
    ],
    [
        'id' => 3,
        'mutual_connections' => 30,
        'mutual_objects' => [
            ['created_at' => '2017-02-26 14:25:32'],
            ['created_at' => '2017-03-26 14:25:32']
        ],
        'last_seen' => '2017-03-25 14:25:32'
    ],
    [
        'id' => 4,
        'mutual_connections' => 107,
        'mutual_objects' => [],
        'last_seen' => '2017-03-26 14:25:32'
    ],
    [
        'id' => 5,
        'mutual_connections' => 500,
        'mutual_objects' => [],
        'last_seen' => '2017-03-26 20:25:32'
    ],
    [
        'id' => 6,
        'mutual_connections' => 5,
        'mutual_objects' => [
            ['created_at' => '2017-03-26 20:55:32'],
            ['created_at' => '2017-03-25 14:25:32']
        ],
        'last_seen' => '2017-03-25 14:25:32'
    ]
];
$relevance = [];

foreach ($users as $user) {
    $mutual_date_sum = 0;
    foreach ($user['mutual_objects'] as $bubble) {
        $mutual_date_sum =+ strtotime($bubble['created_at']);
    }
    $mutual_date_thing = empty($mutual_date_sum) ? 1 : $mutual_date_sum / (300000 * count($user['mutual_objects']));
    $relevance[] = [
        'id' => $user['id'],
        'relevance' => log10($user['mutual_connections'] + 1) + $mutual_date_thing
    ];
}

$relevance = collect($relevance)->sortByDesc('relevance');
print_r($relevance->values()->all());
This prints out:
Array
(
    [0] => Array
        (
            [id] => 3
            [relevance] => 2485.7219150272
        )
    [1] => Array
        (
            [id] => 6
            [relevance] => 2484.8647045837
        )
    [2] => Array
        (
            [id] => 1
            [relevance] => 622.26175831599
        )
    [3] => Array
        (
            [id] => 2
            [relevance] => 310.84394042139
        )
    [4] => Array
        (
            [id] => 5
            [relevance] => 3.6998377258672
        )
    [5] => Array
        (
            [id] => 4
            [relevance] => 3.0334237554869
        )
)
This problem is a candidate for machine learning. Look for an introductory book; I think it is not very complex and you could do it yourself. If not, depending on the income your website generates, you might consider hiring someone to do it for you.
If you prefer to do it "manually", you will build your own model with specific weights for different factors. Be aware that our brains deceive us very often, and what you think is a perfect model might be far from optimal.
I would suggest you start right away storing data on which users each user interacts with most, so you can compare your results with real data. You will also have a foundation on which to build a proper machine learning system in the future.
Having said that, here is my proposal:
In the end, you want a list like this (with 3 users):
A->B: relevance
----------------
User1->User2: 0.59
User1->User3: 0.17
User2->User1: 0.78
User2->User3: 0.63
User3->User1: 0.76
User3->User2: 0.45
1) For each user
1.1) Compute and cache the age of each user's last_seen, in days, rounding down (floor).
1.2) Store max(age(last_seen)), let's just call it max. This is a single value, not one per user, and you can only compute it once you have computed the age for every user.
1.3) For each user, replace the stored age value with (max - age) / max to get a value between 0 and 1.
1.4) Also compute and cache the age of every object's created_at, in days.
2) For each user, compared with every other user
2.1) Regarding mutual connections, think of this: if A has 100 connections, 10 of them shared with B, and C has 500 connections, 10 of them shared with D, do you really take 10 as the value for the calculation in both cases? I would take the percentage: for A->B it would be 10 and for C->D it would be 2. Then divide by 100 to get a value between 0 and 1.
2.2) Pick a maximum age for mutual objects to be relevant. Let's take 365 days.
2.3) In user A, remove objects older than 365 days. Do not really remove them, just filter them out for the sake of these calculations.
2.4) From the remaining objects, compute the percentage of mutual objects with each of the other users.
2.5) For each of these other users, compute the average age of the objects in common from the previous step. Take the maximum age (365), subtract the computed average, and divide by 365 to get a value between 0 and 1.
2.6) Retrieve the age value of the other user.
So, for each combination of A->B, you have four values between 0 and 1:
MC: mutual connections A-B
MO: mutual objects A-B
OA: avg mutual object age A-B
BA: age of B
Now you have to assign weights to each of them in order to find the optimal solution. Assign percentages that sum to 100 to make your life easier:
Relevance = 40 * MC + 30 * MO + 10 * OA + 20 * BA
In this case, since OA is so related to MO, you can mix them:
Relevance = 40 * MC + 20 * MO + 20 * MO * OA + 20 * BA
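A minimal PHP sketch of that final weighting; the four inputs are assumed to already be normalised to the 0..1 range as described in the steps above, and the function name is only illustrative:

// Weighted relevance of A towards B; all four factors must already be in 0..1.
function relevance(float $mc, float $mo, float $oa, float $ba): float
{
    // Relevance = 40 * MC + 20 * MO + 20 * MO * OA + 20 * BA (the mixed variant above)
    return 40 * $mc + 20 * $mo + 20 * $mo * $oa + 20 * $ba;
}

// Example: 10% shared connections, half of the recent objects shared,
// fairly fresh shared objects (0.8), and B seen recently (0.9).
echo relevance(0.10, 0.50, 0.8, 0.9); // 40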
I would suggest running this overnight, every day. There are many ways to improve and optimize the process... have fun!