This is kind of a neat problem and I've enjoyed thinking it through...
Assume that you run a "Widget Rental" website on which you want to allow prospective purchasers to sort the widgets by price (low to high or high to low).
Each widget can have a different price based on the time of year. Some widgets will have dozens of different prices depending on the season as you get "high" seasons and "low" seasons.
However, the sellers of the "Widgets" are especially mischievous, and have realised that if they set their widget to be really expensive for one day of the year, and also really cheap one day of the year, then they can easily appear at the low and high sort ranges.
Currently I use a very naive approach to calculate the "lowest price" for a widget, which is just to take the lowest(N) value from the dataset.
What I would like to do is get a "lowest from price" for a widget which accurately portrays the price it can realistically be rented from, after removing the lower/upper-band outliers.
Take a look at this first chart (X axis: time, where each significant interval is a day; Y axis: price).
The data follows a normal distribution and there aren't any real statistical outliers in it. It's common for the price to fluctuate by as much as 200% between the lowest and the highest value.
However, take a look at this second chart: it contains a single-day tariff of only 20 euros...
I've played around with using Grubbs' test and it seems to work quite well.
The important thing is that I want to get a "from price"; that is, I want to be able to say "You can rent this widget from XXXX". It should reflect the overall pricing taken as a whole and ignore clear outliers.
PHP bonus points if you point me in the direction of anything that already exists. (But I'm happy to code this myself in PHP).
One issue is that there are multiple definitions of what an outlier actually is. However, for this purpose a straightforward solution seems sufficient.
You could remove outliers by limiting the range of values to either ± some percentage or ± some number of standard deviations (probably one or two, but it could vary) from the average price. You'd likely want to use a combination of both, because if the prices don't vary much, a discount could be flagged as an outlier, which may or may not be appropriate. In any case, you'd have to do some experimenting to determine how sensitive the filter should be; chances are you'd want to require that outliers be at least some percentage away from the mean, even if it's only 5-20 percent. Below are a few percentage-based limits based on an average of $500 (a small PHP sketch of this kind of filter follows the list).
90%: $50 to $950
75%: $125 to $875
50%: $250 to $750
30%: $350 to $650
25%: $375 to $625
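To make this concrete, here is a minimal PHP sketch of such a range filter; the 30% band, the two-standard-deviation limit and the sample prices are illustrative values you would tune, not recommendations:

<?php
// Minimal sketch: keep only prices inside BOTH a percentage band and a
// standard-deviation band around the mean. Band widths are placeholders.
function filterOutliers(array $prices, float $pctBand = 0.30, float $sdBand = 2.0): array
{
    $n = count($prices);
    if ($n === 0) {
        return [];
    }
    $mean = array_sum($prices) / $n;

    $variance = 0.0;
    foreach ($prices as $p) {
        $variance += ($p - $mean) ** 2;
    }
    $sd = sqrt($variance / $n);

    $low  = max($mean * (1 - $pctBand), $mean - $sdBand * $sd);
    $high = min($mean * (1 + $pctBand), $mean + $sdBand * $sd);

    return array_values(array_filter($prices, function ($p) use ($low, $high) {
        return $p >= $low && $p <= $high;
    }));
}

// Made-up daily tariffs: the single 20-euro day is dropped,
// so the "from" price becomes realistic.
$daily = [180, 190, 200, 210, 220, 20];
echo 'From price: ' . min(filterOutliers($daily)) . "\n"; // 180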
If multiple passes are used, it is easiest to sort the prices and then repeatedly remove the price that is farthest from the average (considering the highest price as well as the lowest), as long as it falls outside the allowed range. This ends up being O(N*D log D) to run single passes repeatedly until they have no further effect, instead of O(N*D) for a single pass, where N is the number of items to rent and D is the number of days considered.
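And here is a rough sketch of that multi-pass variant, again with a made-up 50% band:

<?php
// Sketch of the multi-pass removal: repeatedly drop the price farthest from
// the current mean while it falls outside the allowed band (here ±50%).
function trimFarthest(array $prices, float $band = 0.5): array
{
    sort($prices);
    while (count($prices) > 2) {
        $mean   = array_sum($prices) / count($prices);
        $loDiff = $mean - $prices[0];
        $hiDiff = $prices[count($prices) - 1] - $mean;
        $limit  = $mean * $band;

        if ($loDiff >= $hiDiff && $loDiff > $limit) {
            array_shift($prices);   // lowest price is farthest out and beyond the band
        } elseif ($hiDiff > $loDiff && $hiDiff > $limit) {
            array_pop($prices);     // highest price is farthest out and beyond the band
        } else {
            break;                  // nothing left outside the band
        }
    }
    return $prices;
}

print_r(trimFarthest([180, 190, 200, 210, 220, 20])); // the 20-euro day is dropped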
You also might find the Ramer–Douglas–Peucker algorithm useful for finding points of interest after a bit of experimenting with how to define the value of epsilon.
I'm trying to figure out how to build a specific algorithm (ultimately implemented in PHP, but that's less important), but I'm having a hard time wrapping my head around the best way to do the math. Instead of defining a complex industry-specific process, I'll use a crazy metaphor here (the math is what's important). Imagine you're trying to identify the percent chance a specific make of car is parked in a store's parking lot based on the items sold within the store. To begin you take a physical survey of 100,000 store parking lots, recording each unique car make spotted outside, each unique item sold within the store, and a fixed percent relevance that item has to the store (ex: lumber has an 89% relevance to Home Depot, but pencils only have a 23% relevance to Walmart).
There are two parts to what I’m trying to solve. First, I’m trying to figure out the best way to roll-up this data to a specific item, while respecting each relevance percent and the number of confirmed observations (so one spotting doesn’t equal 100% chance, similar to http://www.evanmiller.org/how-not-to-sort-by-average-rating.html ). In other words, if a brand new, never-before-seen store is selling Waterford glasses and cashmere sweaters, from those items we can predict there’s an 89% chance a Mercedes is in the parking lot.
So to recap:
Each item has been seen a specific number of times in a store. For each of those times, there is a different product/store relevance percentage and a list of all car makes in the parking lot. How do I best mathematically calculate the percent chance a specific make is in the parking lot of a brand new store, only based on the items within?
Now the second part of this is getting a bit more complicated by adding another layer of abstraction. If a single person visits 50 stores, and we aggregate all the items in all those stores, we can predict what type of car they drive (ex: lots of camping and hiking stores, so they have a 67% chance of driving a Jeep). Then if they visit a new store and are exposed to a brand new item, for which we have no data, I need to apply that 67% Jeep onto the new item (still respecting the relevance of that item to the store). Then use that item’s less-than-certain Jeep statistic to influence our predictions of parking lots that contain that new item (which was never directly measured). Perhaps this requires us to add a confidence interval of some kind? Or how can we represent that uncertainty, without every one of the millions of items we analyze eventually averaging out to 50%?
I REALLY appreciate your help on this!
I think you need to build a cross-correlation matrix, where rows are goods and columns are car types. Each cell contains a normalized coefficient describing how strongly a given good (e.g. a diamond ring) is related to a car type (Geo or Mercedes). See here for details:
http://en.wikipedia.org/wiki/Cross-correlation
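For illustration, here is a minimal PHP sketch of building such a matrix from raw survey records; the observation structure and field names are assumptions, not anything from your actual data:

<?php
// Illustrative only: build a normalized item-by-car-make matrix from raw
// parking-lot observations. Each cell ends up as "share of observations of
// this item in which this make was present".
$observations = [
    ['items' => ['lumber', 'paint'],  'makes' => ['Ford', 'Mercedes']],
    ['items' => ['lumber'],           'makes' => ['Ford']],
    ['items' => ['cashmere sweater'], 'makes' => ['Mercedes']],
];

$counts     = [];  // co-occurrence counts: item x make
$itemTotals = [];  // how many observations contained each item

foreach ($observations as $obs) {
    foreach ($obs['items'] as $item) {
        $itemTotals[$item] = ($itemTotals[$item] ?? 0) + 1;
        foreach ($obs['makes'] as $make) {
            $counts[$item][$make] = ($counts[$item][$make] ?? 0) + 1;
        }
    }
}

$matrix = [];
foreach ($counts as $item => $row) {
    foreach ($row as $make => $c) {
        $matrix[$item][$make] = $c / $itemTotals[$item];
    }
}

print_r($matrix); // e.g. $matrix['lumber']['Ford'] === 1.0, ['Mercedes'] === 0.5

From there you would still need to fold in the per-item relevance percentages and the confidence adjustment for small observation counts, as discussed in the question.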
I am using an application that collects price data and generates sensible buying and selling prices each time data is retrieved. It can happen that the numbers are way too high or way too low because of how the system works; I can't do anything about this.
Now my question is, if I have an array of numbers like:
$prices = ['300','312','293','298','1025','12'];
What would be a good algorithm to get rid of the 12 and the 1025? Note that a higher number appears far more often than a really low number, so simply taking an average doesn't work.
I thought about taking an average of the whole array, then looping through it, computing a percentage difference for each item and checking whether it is under a threshold, but I suspect this wouldn't be as accurate as I would like.
Have you thought about working with absolute differences?
If I understood you correctly, there are multiple price lists, so the average valid price can differ: it could be around 1000 in one list and around 300 in another, like in your example. My suggested algorithm works with both. You did not say whether the prices are always as close together as in the example, or whether the spread grows as the prices get higher.
I will split my answer into four parts. The first part covers both situations (the spread stays small at low prices and at high prices). The second part is useful if the spread increases as the average valid price increases. The third part is the whole algorithm and how to wrap it all together. The last part covers what to do on the first run.
Part 1: Finding a value for validation processing
You say that you have a list of these numbers and that new data is retrieved all the time. What I would suggest is to subtract two numbers from each other and take the absolute value of the difference.
Example:
|300 - 312| = 12
With the value 12 we can conclude that both these prices are in the valid price range. Now let's take two other examples: one where both values are invalid and one where only one of them is invalid.
Example:
|1025 - 12| = 1013
We can see that 1013 is nowhere near a normal difference for this list. Since both values are invalid here, we have to test each of them against a known valid price; the algorithm will then remove them both.
Example:
|300 - 12| = 288
We can see that 288 isn't a valid difference either; the algorithm will remove 12.
Part 2: validating a price with varying price differences
If you have lists whose average prices differ by as much as 400, a fixed band of -50/+50 will cause bugs in your algorithm, so you need a way to determine the band in a scalable way, one that lets higher prices have proportionally higher allowed differences.
If the absolute value of the difference is higher than 20% (or another percentage) of the average of the two numbers, they need further validation.
Example:
(300+312)/2=306 is the average number.
306 * 0.2 = 61.2
If you have a stored value of the highest and lowest valid number you could use 20% of their average to determine the threshold.
(293 + 312) / 2 = 302.5
302.5 * 0.2 = 60.5
Part 3: wrapping it all up and making an algorithm
So the first thing you should do is determine the amount of data in each list, the number of lists, and how often you receive data. The more data you have and the more often you receive it, the more reasonable it becomes to index your data. The way I would suggest doing that is to store the highest and lowest valid number for each list. If that is not worth it in your case, you can skip this part and look at Part 4, since you can simply run the algorithm against the whole list each time you receive new data.
First add four values per list: min price, max price, average price and threshold. The average price is (max price + min price) / 2. After this you can use a percentage of the average price to determine a threshold for your prices; I suggest 20%, since it results in a number close to the ±50 mentioned earlier. Find the threshold by multiplying the average by 0.2.
Depending on your data you can also choose to base the threshold on 20% of the average of the min value, the max value and the new number ((min + max + new) / 3 * 0.2); you can change this calculation if the spread ever changes.
When you receive new numbers, your algorithm should check the absolute difference against the threshold.
If new numbers arrive at a low frequency, I would suggest processing them one at a time:
function processNumber($value, $minValue, $maxValue)
{
    // Depending on how many numbers you want to treat as valid you can change the
    // threshold; by comparing against the stored minimum you allow the maximum
    // value to change if the new number is valid but higher than the current max.
    if (abs($minValue - $value) <= $maxValue * 0.2) {
        addNumber($value);
    } else {
        deleteNumber($value);
    }
}
If new numbers arrive very often, you can add two numbers at once; however, if invalid numbers show up as often as one time in three, I'd suggest the single-number method above instead.
function processNumbers($value1, $value2, $threshold)
{
    // If you want the threshold number itself to count as valid, keep "<=".
    if (abs($value1 - $value2) <= $threshold) {
        addNumber($value1);
        addNumber($value2);
        return true;
    }

    if (checkNumber($value1)) {      // checkNumber() returns true if the value is valid
        // The pair failed the check and value1 is valid, so value2 must be the invalid one.
        deleteNumber($value2);
        addNumber($value1);
    } elseif (checkNumber($value2)) {
        // value2 is the valid one.
        deleteNumber($value1);
        addNumber($value2);
    } else {
        // Both values are invalid.
        deleteNumber($value1);
        deleteNumber($value2);
    }
    return false;
}
Part 4: first run
You will need an algorithm for the first run. If the data currently contains no invalid numbers (and you did not skip Part 3), you can ignore this part.
For the first run you should group the numbers into sorted lists according to which threshold they fall under.
You take two numbers at a time and see if the absolute value is below the threshold.
$absolute  = abs($value1 - $value2);
$threshold = ($value1 + $value2) / 2 * 0.2;

if ($absolute < $threshold) {
    addToThreshold($threshold, $value1, $value2);
} else {
    addToLater($value1, $value2);
}
addToLater() collects values you have to double-check later, since you don't know whether value1, value2, or both of them sent the pair into this list.
addToThreshold() makes sure that if a threshold group already exists whose threshold is higher than the one submitted, the values are added to that group.
Now you should have a few groups, each with its own threshold. Take the lowest value of the lowest group and the lowest value of the highest group and check whether their absolute difference is below their threshold. You can then use this threshold to work out whether other absolute differences are below it and separate the values from each other. Let's take your list and use the lowest threshold together with the highest absolute difference between two valid numbers.
Threshold:
(293 + 298) / 2 = 295.5; 295.5 * 0.2 = 59.1 (this is the threshold)
Highest possible absolute difference between two valid numbers:
|293 - 312| = 19
This became a really long post, and I hope it gives you at least some inspiration. If you do not have that many lists, all of this processing might be overkill, unless you are planning something scalable.
best of luck!
What you are describing is called outlier detection. There are statistical tests for this purpose. Be aware, though, that nothing can guarantee 100% reliability.
http://en.wikipedia.org/wiki/Outlier#Identifying_outliers
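As one concrete example from that page, here is a small PHP sketch of Tukey's fences (the 1.5 × IQR rule); the helper functions are made up for this sketch and the 1.5 constant is just the conventional default:

<?php
// Flag values that fall outside [Q1 - k*IQR, Q3 + k*IQR] (Tukey's fences).
function tukeyOutliers(array $values, float $k = 1.5): array
{
    sort($values);
    $q1  = percentile($values, 0.25);
    $q3  = percentile($values, 0.75);
    $iqr = $q3 - $q1;
    $lower = $q1 - $k * $iqr;
    $upper = $q3 + $k * $iqr;

    return array_values(array_filter($values, function ($v) use ($lower, $upper) {
        return $v < $lower || $v > $upper;
    }));
}

// Simple linear-interpolation percentile helper (expects a sorted array).
function percentile(array $sorted, float $p): float
{
    $idx = $p * (count($sorted) - 1);
    $lo  = (int) floor($idx);
    $hi  = (int) ceil($idx);
    return $sorted[$lo] + ($sorted[$hi] - $sorted[$lo]) * ($idx - $lo);
}

print_r(tukeyOutliers([300, 312, 293, 298, 1025, 12])); // [12, 1025]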
I've got a table with 1000 recipes in it, each recipe has calories, protein, carbs and fat values associated with it.
I need to figure out an algorithm in PHP that will allow me to specify value ranges for calories, protein, carbs and fat as well as dictating the number of recipes in each permutation. Something like:
getPermutations($recipes, $lowCal, $highCal, $lowProt, $highProt, $lowCarb, $highCarb, $lowFat, $highFat, $countRecipes)
The end goal is allowing a user to input their calorie/protein/carb/fat goals for the day (as a range, 1500-1600 calories for example), as well as how many meals they would like to eat (count of recipes in each set) and returning all the different meal combinations that fit their goals.
I've tried this previously by populating a table with every possible combination (see: Best way to create Combination of records (Order does not matter, no repetition allowed) in mySQL tables ) and querying it with the range limits, however that proved not to be efficient as I end up with billions of records to scan through and it takes an indefinite amount of time.
I've found some permutation algorithms that are close to what I need, but they don't have the value-range constraint for calories/protein/carbs/fat that I'm looking for (see: Create fixed length non-repeating permutation of larger set). I'm at a loss at this point when it comes to this type of logic/math, so any help is MUCH appreciated.
Based on some comment clarification, I can suggest one way to go about it. Specifically, this is my "try the simplest thing that could possibly work" approach to a problem that is potentially quite tricky.
First, the tricky part is that the sum of all meals has to be in a certain range, but SQL does not have a built-in feature that I'm aware of that does specifically what you want in one pass; that's ok, though, as we can just implement this functionality in PHP instead.
So let's say you request 5 meals that will total 2000 calories - we leave the other variables aside for simplicity, but they work the same way. We then calculate that the 'average' meal is 2000/5 = 400 calories, but obviously any one meal could be over or under that amount. I'm no dietician, but I assume you'll want no meal that takes up more than 1.25x-2x the average meal size, so we can restrict our initial query to this amount.
$maxCalPerMeal = ($highCal / $countRecipes) * 1.5;
$mealPlanCaloriesRemaining = $highCal; # more on this one in a minute
We then request 1 random meal which is less than $maxCalPerMeal, and 'save' it as our first meal. We then subtract its actual calorie count from $mealPlanCaloriesRemaining. We now recalculate:
$maxCalPerMeal = ($highCal / $countRecipesRemaining) * 1.5; # 1.5 being the maximum deviation-from-average multiple
Now the next query will ask for a random meal that is less than both $maxCalPerMeal AND $mealPlanCaloriesRemaining, AND NOT one of the meals you already have saved in this particular meal plan option (thus ensuring unique meals - no mac'n'cheese for breakfast, lunch, and dinner!). We update the variables as in the last query, and repeat until you reach the end. For the last meal requested we don't care about the average and its associated multiple; thanks to the compound query you'll get what you want anyway and don't need to complicate your control loops.
Assuming the worst case with the 5 meal 2000 calorie max diet:
Meal 1: 600 calories
Meal 2: 437
Meal 3: 381
Meal 4: 301
Meal 5: 281
Or something like that; in most cases you'll get something a bit nicer and more random, but even in the worst case it still works! Now this actually just plain works for the usual case. Adding more maximums, like for fat and protein, is easy, so let's deal with the lows next.
All we need to do to support "minimum calories per day" is add another set of averages, as such:
$minCalPerMeal = ($lowCal / $countRecipes) * 0.5; # this time the multiplier is less than one: since we allow meals to be bigger than average, we must also allow them to be smaller
And you restrict the query to being greater than this calculated minimum, recalculating with each loop, and happiness naturally ensues.
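Putting the pieces above together, here is a rough PHP sketch of the loop; queryRandomMeal() is a hypothetical helper that wraps the actual SQL query and returns one recipe row (or null) within the given calorie window, excluding already-chosen recipe IDs:

<?php
// Rough sketch of the selection loop described above. The 1.5 / 0.5 factors
// are the deviation-from-average multipliers discussed in the text.
function buildMealPlan(int $countRecipes, int $lowCal, int $highCal): ?array
{
    $plan = [];
    $caloriesRemaining = $highCal;

    for ($remaining = $countRecipes; $remaining > 0; $remaining--) {
        $maxCalPerMeal = ($highCal / $remaining) * 1.5;
        $minCalPerMeal = ($lowCal  / $remaining) * 0.5;

        // Hypothetical helper: SELECT one random recipe between the two limits,
        // also below the remaining budget, excluding the ids already picked.
        $meal = queryRandomMeal(
            min($maxCalPerMeal, $caloriesRemaining),
            $minCalPerMeal,
            array_column($plan, 'id')
        );

        if ($meal === null) {
            return null; // degenerate case: retry or relax limits (see below)
        }

        $plan[] = $meal;
        $caloriesRemaining -= $meal['calories'];
    }

    return $plan;
}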
Finally we must deal with the degenerate case - what if, using this method, you end up needing a meal that is too small or too big to fill the last slot? Well, you can handle this in a number of ways. Here's what I'd recommend.
The easiest is just returning less than the desired amount of meals, but this might be unacceptable. You could also have special low calorie meals that, due to the minimum average dietary content, would only be likely to be returned if someone really had to squeeze in a light meal to make the plan work. I rather like this solution.
The second easiest is to throw out the meal plan you have so far and regenerate from scratch; it might work this time, or it might not, so you'll need a control loop to make sure you don't get into an infinite, work-intensive loop.
The least easy option requires a maximum-iteration control loop again, but here you use a specific strategy to get a more acceptable meal plan: take the meal with the highest value that is pushing you over your dietary limits and throw it out, then try pulling a smaller meal - perhaps one no greater than the newly calculated average. That might make the plan as a whole work, or you might go over on another value, forcing you back into a loop that could be unresolvable - or it might just take a few dozen iterations to get one that works.
Though this sounds like a lot when writing it out, even a very slow computer should be able to churn out hundreds of thousands of suggested meal plans every few seconds without pausing. Your database will be under very little strain even if you have millions of recipes to choose from, and the meal plans you return will be as random as it gets. It would also be easy to make certain multiple suggested meal plans are not duplicates with a simple comparison and another call or two for an extra meal plan to be generated - without fear of noticeable delay!
By breaking things down to small steps with minimal mathematical overhead a daunting task becomes manageable - and you don't even need a degree in mathematics to figure it out :)
(As an aside, I think you have a very nice website built there, so no worries!)
I have a PHP rating system (1-5) in which some judges rate products. I want the results for these products to be fair. Normally what happens is that some judges are very strict and rate products only in the range of 1-2, while some judges rate products only in the range of 4-5. Some judge correctly across 1-5.
Can someone give an idea of, or help with, creating an algorithm that scales such judges' ratings and computes a fair product score?
I thought of taking the mean of the judges' scores on all products, but is that the way to go, or does someone have a better alternative for getting fair results?
Edit
The rating system is not for an e-commerce application. Here there are only a few judges, say 10, who rate all the products. A product may be, for example, a song in a contest. Some of the judges may be very strict and some very liberal. There may be several contests, so I have to keep the ratings of these very strict and liberal judges across contests as well and set a rule for them.
Simply put, you assign a weight to a judge based on the range of their typical votes (note, they must not be aware of this weight, or they will throw the system off.) Judges who always vote a single score get the lowest weight. Judges that give things a wide range of scores are considered more accurate.
This also assumes that these judges judge products with a fair range of quality; so if you give them a bunch of good or bad products and expect a range of vote levels, it might be unrealistic.
What you're looking for is the judge with the highest standard deviation (highest variation) in votes having the highest weight, whereas the judge with the lowest would have the least.
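A minimal sketch of that weighting in PHP, assuming ratings are available as $ratings[$judge][$product]; the 0.1 floor is an arbitrary safeguard so a zero-variance judge still counts a little:

<?php
// Weight each judge by the standard deviation of their own votes, then
// compute a weighted average per product.
function judgeWeights(array $ratings): array
{
    $weights = [];
    foreach ($ratings as $judge => $scores) {
        $mean = array_sum($scores) / count($scores);
        $var  = 0.0;
        foreach ($scores as $s) {
            $var += ($s - $mean) ** 2;
        }
        $weights[$judge] = max(sqrt($var / count($scores)), 0.1);
    }
    return $weights;
}

function weightedProductScore(array $ratings, array $weights, $product): float
{
    $sum = 0.0;
    $wTotal = 0.0;
    foreach ($ratings as $judge => $scores) {
        if (isset($scores[$product])) {
            $sum    += $weights[$judge] * $scores[$product];
            $wTotal += $weights[$judge];
        }
    }
    return $wTotal > 0 ? $sum / $wTotal : 0.0;
}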
The non-algorithmic solution is (essentially) to run the algorithm on the judges, and then pick, American Idol style, judges that balance each other out to get what feels like an accurate result. In that case, you'd want to note the average vote as well as the standard deviation, and perhaps seat three judges: one with a wide standard deviation, and then two narrow ones, one high and one low (liberal and strict), to judge it. This way they don't feel like they get 'less voice' because they are stricter or looser.
Then again, that could be an impetus for them to be less/more strict - if they are too easy or too hard on the product consistently they 'lose voice'.
It sounds like you may be trying to apply an algorithmic solution to a non-algorithmic problem. I'd think about why some "judges" vote only 1-2 and others vote only 4-5.
One possible cause could be self-selection. For example, people who bought an item online may be more likely to review the item if they were particularly disappointed or particularly pleased with their purchase. If this is your problem, you could try to encourage shoppers to vote more, so that even those who had a non-extreme experience come back to vote.
Another possible issue may be guidance. Maybe your explanation of the rating system isn't clear to the judges. You can try to add a description of what each rating means, and see if that improves the quality of data.
In summary, any kind of a solution to your rating problem will need to have a "human" component and take into account the full story of how the judges choose ratings and why. There is not a whole lot that a ranking algorithm can do if your input data is poor quality. On the other hand, if your data has decent quality, then taking a mean works quite well.
One unrelated problem with taking a mean is that an item with one 5-star rating will rank above an item with a hundred 5-star ratings plus one 4-star rating. One simple solution is Laplace smoothing, which addresses the problem by effectively starting every item with one vote of each value (1, 2, 3, 4, 5). You don't display the "smoothed" values, but you use them when sorting. See the How Not To Sort By Average Rating post for an alternate solution.
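A tiny PHP sketch of that smoothing; the phantom votes and the sample data are only illustrative:

<?php
// Laplace smoothing: pretend every item starts with one vote of each value
// 1-5, and sort by this smoothed mean while displaying the raw mean.
function smoothedScore(array $votes): float
{
    $votes = array_merge($votes, [1, 2, 3, 4, 5]); // one phantom vote per value
    return array_sum($votes) / count($votes);
}

echo smoothedScore([5]), "\n";                                     // ~3.33, not 5.0
echo smoothedScore(array_merge(array_fill(0, 100, 5), [4])), "\n"; // ~4.9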
How about truncated mean? Here is a good explanation of the idea.
EDIT
Let's say you have votes like: [1,4,3,2,5,1,1,3,2,4].
You need to sort the array in ascending order, giving you: [1,1,1,2,2,3,3,4,4,5].
Then let's say you want to get rid of 25% of the votes, which is 3 (rounding up). You simply discard three votes from the left and from the right, giving you [2,2,3,3].
Then, use arithmetic mean to get 2.5.
EDIT 2
Depending on your database schema, you could query the database to return the votes already sorted in ascending order. Then calculate how many votes to trim, use array_slice() to drop them (read the documentation), and the arithmetic mean is the least of your concerns at that point.
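A small PHP sketch of exactly that, using array_slice(); the 25% trim matches the example above:

<?php
// Truncated (trimmed) mean: sort, drop a fraction of the votes from each end,
// then average what is left.
function truncatedMean(array $votes, float $trim = 0.25): float
{
    sort($votes);
    $cut  = (int) ceil(count($votes) * $trim);              // votes to drop per side
    $kept = array_slice($votes, $cut, count($votes) - 2 * $cut);
    if (count($kept) === 0) {
        return array_sum($votes) / count($votes);           // too few votes to trim
    }
    return array_sum($kept) / count($kept);
}

echo truncatedMean([1, 4, 3, 2, 5, 1, 1, 3, 2, 4]); // 2.5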
EDIT: I'm sorry guys, my explanation of the problem wasn't clear! This should be better:
The user sends the ID numbers of the articles and the max. number of bundles (packages)
The API searches for all prices available for the articles and calculates the best result for the minimum number of bundles (limited to the max. number provided by the customer)
ONE bundle is one package of items delivered to ONE platform (buyer)
Thanks!
This is a fun little problem. I spent a few hours on it this morning, and while I don't have a complete solution, I think I have enough for you to get started (which I believe was what you asked for).
First of all, I'm assuming these things, based on your description of the problem:
All buyers quote a price for all the items
There's no assumption about the items, they may all be different
The user can only interact with a limited number of buyers
The user wants to sell every item, each to one buyer
The user may sell multiple items to a single buyer
Exact solution -- brute force approach
For this, the first thing to realize is that, for a given set of buyers, it is straightforward to calculate the maximum total revenue, because you can just choose the highest price offered in that set of buyers for each item. Add up all those highest prices, and you have the max total revenue for that set of buyers.
Now all you have to do is make that calculation for every possible combination of buyers. That's a basic combinations problem: "n choose k" where n is the total number of buyers and k is the number of buyers you're limited to. There are functions out there that will generate lists of these combinations (I wrote my own... there's also this PEAR package for php).
Once you have a max total revenue for every combination of chosen buyers, just pick the biggest one, and you've solved the problem.
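For illustration, here is a rough PHP sketch of the brute-force approach; it assumes $prices[$item][$buyer] holds the quoted prices and that every item is quoted by every buyer (assumption 1 above), and combinations() is a simple recursive stand-in for the PEAR package mentioned:

<?php
// All k-element combinations of $pool (order-independent, no repetition).
function combinations(array $pool, int $k): array
{
    if ($k === 0) {
        return [[]];
    }
    $result = [];
    foreach (array_values($pool) as $i => $buyer) {
        $rest = array_slice(array_values($pool), $i + 1);
        foreach (combinations($rest, $k - 1) as $tail) {
            $result[] = array_merge([$buyer], $tail);
        }
    }
    return $result;
}

// Max total revenue for a fixed buyer set: take the best offer per item.
function maxRevenueForBuyers(array $prices, array $buyers): float
{
    $total = 0.0;
    foreach ($prices as $offers) {
        $best = 0.0;
        foreach ($buyers as $b) {
            $best = max($best, $offers[$b] ?? 0.0);
        }
        $total += $best;
    }
    return $total;
}

// Try every combination of $k buyers and keep the best one.
function bestBuyerSet(array $prices, int $k): array
{
    $allBuyers = array_keys(reset($prices));
    $best = ['revenue' => -1.0, 'buyers' => []];
    foreach (combinations($allBuyers, $k) as $set) {
        $revenue = maxRevenueForBuyers($prices, $set);
        if ($revenue > $best['revenue']) {
            $best = ['revenue' => $revenue, 'buyers' => $set];
        }
    }
    return $best;
}

As the answer notes, this blows up combinatorially, so it is only practical for small numbers of buyers and items.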
More elegant algorithm?
However, as I intimated by calling this "brute force", the above is not fast, and scales horribly. My machine runs out of memory with 20 buyers and 20 items. I'm sure a better algorithm exists, and I've got a good one, but it isn't perfect.
It's based on opportunity costs. I calculate the difference between the highest price and the second highest price for each item. That difference is an opportunity cost for not picking the buyer with that highest price.
Then I pick buyers offering high prices for items where the opportunity cost is the highest (thus avoiding the worst opportunity costs), until I have k - 1 buyers (where k is the max I can pick). The final choice is tricky, and instead of writing a much more complicated algorithm, I just run all the possibilities for the final buyer and pick the best revenue.
This strategy picks the best combination most of the time, and when it misses, it doesn't miss by much. It also scales relatively well: it's 10x faster than brute force at small scales, and if I quadruple all the parameters (buyers, buyer limit, and items), calculation time goes up by a factor of 20. Considering how many combinations are involved, that's pretty good.
I've got some code drafted, but it's too long for this post. Let me know if you're interested, and I'll figure out a way to send it to you.
This is a graph problem. It can be solved with Edmonds' Blossom V algorithm, a matching algorithm that finds the best pairwise matching, for example in dating programs. You may also want to look at the 1-D bin-packing algorithm: in 1-D bin packing you have a limited set of items to assign to unlimited boxes or shelves, filling the boxes as well as possible.
If I understand the problem correctly, it is NP-complete via reduction from Minimum Set Cover. We can translate an instance of Set Cover into an instance of the OP's problem as follows:
Let an instance of Set Cover be given by a set X of size n and a collection of subsets S_1, S_2, ..., S_m of X. Construct an instance of the OP's problem where the seller has n items to sell to m buyers, where buyer i offers a price of 1 for item j if S_i contains item j and 0 otherwise. A solution to the OP's problem where the number of buyers is limited by k and the total price paid is n corresponds to a solution to the original Set Cover problem with k sets. So, if you had a polynomial-time solution to the OP's problem, you could solve Minimum Set Cover by successively solving it for the case of 1, 2, 3, etc... buyers until you found a solution with total price equal to n.
Let an instance of Set Cover be given by a set X of size n and a collection of subsets S_1, S_2, ..., S_m of X. Construct an instance of the OP's problem where the seller has n items to sell to m buyers, where buyer i offers a price of 1 for item j if *S_i* contains item j and 0 otherwise. A solution to the OP's problem where the number of buyers is limited by k and the total price paid is n corresponds to a solution to the original Set Cover problem with k sets. So, if you had a polynomial-time solution to the OP's problem, you could solve Minimum Set Cover by successively solving it for the case of 1, 2, 3, etc... buyers until you found a solution with total price equal to n.