Weighted voting algorithm - PHP

I'm looking for information on which voting algorithm would work best for me. I have a basic up/down voting system where a user can only vote a product up or down. I would like to make it weighted, so that a product that is a year old isn't held to the same standards as one that is brand new.
I'm thinking of an algorithm that counts the votes each product received in the last 30 days. However, this creates a drawback: I don't want votes older than 30 days to become meaningless, just weighted less than newer ones. Then votes older than 90 days could perhaps be weighted even less than the 30-to-90-day ones.
Is anyone aware of an existing algorithm that does this, and better yet, one that can be calculated easily in PHP?

Google App Engine has a nice example that deals with votes that "decay" over time.
It's in Python, but it should fit your needs.

I think that given the simplicity of your requirement, the best course of action is to write this yourself.
Without knowing more, I think your challenge will be in deciding where the weighting happens:
- pre-weighted in the database when the vote is cast (e.g. "when a vote is cast, give it $this_year + 1 points"),
- calculated in your db query (e.g. ORDER BY a score that accounts for both the upvotes and the date each vote was cast), or
- deduced in PHP after returning all the needed data.
The choice depends on exactly what your app needs to do and how much data there will be.
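As a sketch of the pre-weighted / PHP-side option, here is the tiered decay the question describes. The cutoffs (30 and 90 days) come from the question; the weights themselves (1.0, 0.5, 0.25) are made-up numbers to tune:

```php
<?php
// Tiered vote weighting: full weight for the first 30 days, reduced
// weight up to 90 days, and a small but non-zero weight after that.
function voteWeight(int $voteTimestamp, ?int $now = null): float
{
    $now = $now ?? time();
    $ageDays = ($now - $voteTimestamp) / 86400;

    if ($ageDays <= 30) {
        return 1.0;   // full weight for recent votes
    }
    if ($ageDays <= 90) {
        return 0.5;   // older votes still count, just less
    }
    return 0.25;      // votes never become completely meaningless
}

// Weighted score: +1 per upvote, -1 per downvote, scaled by vote age.
function weightedScore(array $votes): float
{
    $score = 0.0;
    foreach ($votes as $vote) {
        // each $vote: ['value' => +1 or -1, 'ts' => unix timestamp]
        $score += $vote['value'] * voteWeight($vote['ts']);
    }
    return $score;
}
```

A step function like this is easy to reason about; a smooth decay (e.g. dividing by the age) is the other common choice.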


PHP - how to find temperature trend

I have an array of the last readings from a temperature sensor.
How do I find whether the trend is going up or down?
I know I can compare the last two values, but that is not a good idea, because temperature tends to fluctuate.
So let's say we have the last readings in an array:
$temp_array = array(5.1, 5.5, 6, 5.9, 6.2, 6.1);
How do I answer the question: will the temperature go up or down, based on the last readings?
I am thinking of comparing the average of the last 3 against the first 3.
The average of the first 3 is 5.533 and the average of the latest three is 6.066, so I would say the trend is up. But maybe there is a smarter way to do this?
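For what it's worth, the comparison described above is only a few lines of PHP:

```php
<?php
// Compare the average of the first three readings against the
// average of the last three, as proposed in the question.
$temp_array = array(5.1, 5.5, 6, 5.9, 6.2, 6.1);

$first = array_slice($temp_array, 0, 3);
$last  = array_slice($temp_array, -3);

$firstAvg = array_sum($first) / count($first); // 5.533...
$lastAvg  = array_sum($last) / count($last);   // 6.066...

$trend = $lastAvg > $firstAvg ? 'up' : 'down';
echo $trend; // "up" for this data
```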
Calculating a trend depends on many factors and the context it is calculated in.
For example, your measurement rate will dictate your solution: if you measure outside temperatures every hour, then looking at the last 3 measurements might make sense.
On the other hand, measuring every minute means you will get caught up in changing trends whenever a cloud passes in front of the sun, and you most likely don't want that (unless you are trying to predict solar-cell efficiency).
As you can see, the right algorithm can depend on a lot of factors.
Have a look at Excel or LibreOffice Calc and investigate the trend lines there. You'll see many types and levels of complexity.
Your proposal might work fine for your use case, but it might be problematic as well. Did you consider a missing or wrong measurement? Example: 8.1, 8.0, 0, 7.9, 7.8, 7.7. In this case you will predict the wrong trend.
Keeping things simple, I would do something like:
- Filter out large deviations (you know the domain you are in; big jumps are unlikely)
- Calculate the average
- Count how many measurements are below and above the average
Note, this is just a quick alternative, but it would make the prediction less biased toward recent measurements.
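A rough sketch of those three steps in PHP; the deviation threshold (2.0 degrees between consecutive readings) is a made-up number to adapt to your domain:

```php
<?php
// 1) drop outliers, 2) average, 3) see where the recent readings sit
// relative to that average.
function trendDirection(array $readings, float $maxJump = 2.0): string
{
    // 1. Filter out large deviations (likely bad measurements).
    $filtered = array();
    $prev = null;
    foreach ($readings as $r) {
        if ($prev === null || abs($r - $prev) <= $maxJump) {
            $filtered[] = $r;
            $prev = $r; // only good values become the comparison point
        }
    }

    // 2. Calculate the average of the remaining readings.
    $avg = array_sum($filtered) / count($filtered);

    // 3. Count how many of the later readings sit above that average;
    //    if most of the recent half is above it, call the trend "up".
    $recent = array_slice($filtered, (int)(count($filtered) / 2));
    $above = 0;
    foreach ($recent as $r) {
        if ($r > $avg) {
            $above++;
        }
    }
    return $above > count($recent) / 2 ? 'up' : 'down';
}
```

With the readings from the question this returns 'up', and with the 8.1, 8.0, 0, 7.9, 7.8, 7.7 example it ignores the 0 and returns 'down'.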
Good luck.

How do I incorporate the Reddit page/post ranking algorithm?

I'm trying to learn how to code a ranking algorithm for a website like Reddit.com, where there are thousands of posts that need to be ranked. Their ranking algorithm works like this (you don't have to read it; it's more of a general question that I have): http://amix.dk/blog/post/19588
Right now I have posts stored in a database; I record their dates, and each post has an upvotes and a downvotes field. What I can't figure out is how to store the rankings: each post has a ranking value, but it changes with time, so how do you store it?
If rankings aren't stored, do you rank every post every time a user loads the page?
And if you do store them, when? Do you run a cron job that gives every post a new value every x minutes? Do you store that value only temporarily, say until the post falls below a minimum score and is forgotten?
I would definitely not calculate their rank every time you display them.
A simple, though not very performant, solution would be to cache the post rankings and clear or refresh the cache whenever a post's ranking changes.
That is not ideal, but it is possible.
Another way would be to do as you alluded to: calculate and store ranks in the database (and ideally cache them), then refresh those rankings with a cron job every x minutes.
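A minimal sketch of that store-and-refresh approach. The scoring formula here (net votes minus one point per 12 hours of age) is just a placeholder, and the table and column names in the comments are assumptions:

```php
<?php
// Placeholder ranking formula for the cron job: net votes, penalized
// by one point for every 12 hours of age. Swap in whatever scoring
// you settle on.
function rankScore(int $ups, int $downs, int $createdAt, ?int $now = null): float
{
    $now = $now ?? time();
    $ageHours = ($now - $createdAt) / 3600;
    return ($ups - $downs) - $ageHours / 12;
}

// In the cron script, loop over posts and write the score back
// (assumed table/column names):
//   UPDATE posts SET rank_score = ? WHERE id = ?
// Page loads then only need the precomputed column:
//   SELECT * FROM posts ORDER BY rank_score DESC LIMIT 25
```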
Again, these are basic approaches to what you want to do. You can then build on them over time.
The algorithm you choose will most likely be very particular to your needs.
You also need to gauge what kind of traffic your site will get, as that will dictate how far you should go to get the right algorithm.
I would instantly calculate a score for each single vote on a time-weighted scale. I would send that score into a queue or use it to increment a field, whichever of those is more performant for you.
At a regular time interval, I would take all currently ranked articles and all articles that have received votes during the time window and rescore all ranked articles followed by all the queued articles in descending order of score until I had calculated enough to fill my ranking quota.
The ranking list would be cached and used until the next ranking cycle. You'll have to adjust the queue retention period (maybe anything that had activity in the last N queues is re-queued), retention of articles, etc. based on your site load, but this should be a well-performing starting point.
If you're using the exact algorithm reddit uses, you only need to change the ranking field whenever an item is voted up or down, and really only when the difference between upvotes and downvotes changes orders of magnitude. This article explains a little more about how their ranking works:
http://bibwild.wordpress.com/2012/05/08/reddit-story-ranking-algorithm/
Basically, the up and down votes only serve to "displace" the posts.
If D is the difference between the number of upvotes and downvotes, a post is shifted up or down by 45000 seconds (12.5 hours) per order of magnitude of D. Other than that, it's just a simple time ranking.
If, however, you want to use your own ranking system where the age of the post matters in some way other than linearly, you'll have to either create an indexed field and recalculate the rankings at time intervals, as has been said, or just put the sorting into your SQL query, as I said in my comment. But chances are you can find a way where it doesn't have to be recalculated over and over.
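For reference, the ranking from the linked article is short enough to port to PHP directly; the 1134028003 constant is reddit's epoch (an arbitrary fixed point in time, Dec 8, 2005):

```php
<?php
// reddit-style "hot" score: a plain time ranking, displaced by 45000
// seconds (12.5 hours) per order of magnitude of the vote difference.
function hotScore(int $ups, int $downs, int $postedAt): float
{
    $diff = $ups - $downs;
    $order = log10(max(abs($diff), 1));
    $sign = $diff > 0 ? 1 : ($diff < 0 ? -1 : 0);
    $seconds = $postedAt - 1134028003; // any fixed reference date works
    return round($sign * $order + $seconds / 45000, 7);
}
```

A post with ten times the net votes scores the same as one posted 12.5 hours later, which is why the stored value only needs updating when the vote difference changes order of magnitude.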

User-generated / user-specific functions

I'm looking for the most elegant and secure method to do the following.
I have a calendar, and groups of users.
Users can add events to specific days on the calendar, and specify how long each event lasts for.
I've had a few requests from users to add the ability for them to define that events of a specific length include a break, of a certain amount of time, or require that a specific amount of time be left between events.
For example: if an event is >2 hours, include a 20-minute break; for each event, require 30 minutes before the start of the next event.
The same group that has asked for an event of >2 hours to include a 20-minute break could also require that an event of >3 hours include a 30-minute break.
In the end, what the users are trying to get is an elapsed time excluding breaks calculated for them. Currently I provide them a total elapsed time, but they are looking for a running time.
However, each of these requests is different for each group: one group may want a 30-minute break during a 2-hour event, and another may want only 10 minutes for each 3-hour event.
I was thinking I could write the functions into a PHP file per group, then include that file, do the calculations in PHP, and return a calculated total to the user, but something about that doesn't sit right with me.
Another option is to output the group's functions to JavaScript and run them client-side, since I'm already returning the duration of the event; but where a user is part of more than one group with different rules, this seems like it could get rather messy.
I currently store the start and end time in the database, but no 'durations', and I don't think I should be storing the calculated totals in the db, because if a group decides to change their calculations, I'd need to change it throughout the db.
Is there a better way of doing this?
I could just store the variables in MySQL, but I don't see how I could then tell MySQL to calculate based on those variables.
I'm REALLY lost here. Any suggestions? I'm hoping somebody has done something similar and can provide some insight into the best direction.
If it helps, my table contains
eventid, user, group, startDate, startTime, endDate, endTime, type
The json for the event which I return to the user is
{"eventid":"'.$eventId.'", "user":"'.$userId.'","group":"'.$groupId.'","type":"'.$type.'","startDate":"'.$startDate.'","startTime":"'.$startTime.'","endDate":"'.$endDate.'","endTime":"'.$endTime.'","durationLength":"'.$duration.'", "durationHrs":"'.$durationHrs.'"}
where, for example, durationLength is 2.5 and durationHrs is 2:30.
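As a side note, building that JSON with json_encode() instead of string concatenation avoids quoting mistakes and handles escaping for you. The sample values below stand in for the real ones pulled from the database:

```php
<?php
// Build the event response as an array and let PHP serialize it.
$event = array(
    'eventid'        => '17',
    'user'           => '3',
    'group'          => '2',
    'type'           => 'meeting',
    'startDate'      => '2013-05-01',
    'startTime'      => '09:00',
    'endDate'        => '2013-05-01',
    'endTime'        => '11:30',
    'durationLength' => '2.5',
    'durationHrs'    => '2:30',
);
echo json_encode($event); // valid JSON, with all escaping handled
```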
Store only the start time and end time for the event, and a BLOB field named notes.
I've worked on several systems that suffered from feature creep of these sorts of requirements until the code and data modeling became nothing but an unmaintainable collection of exception cases. It was a lot of work to add new permutations to the code, and typically these cases were used only once.
If you need enforcement of the rules and conditions described in the notes field, it's actually more cost-effective to hire an event coordinator instead of trying to automate everything in software. A detail-oriented human can adapt to the exception cases much more rapidly than you can adapt the code to handle them.

Curve-fitting in PHP

I have a MySQL table called today_stats. It has Id, date, and clicks columns. I'm trying to create a script that gets the values and predicts the clicks for the next 7 days. How can I do this in PHP?
Different types of curve fitting described here:
http://en.wikipedia.org/wiki/Curve_fitting
Also: http://www.qub.buffalo.edu/wiki/index.php/Curve_Fitting
This has less to do with PHP and more to do with math. The simplest way to calculate something like this is to take the average traffic for a given day over the past X weeks. You don't want to pull all the data, because of fads and page-content changes.
So, for example, get the average traffic for each day over the last month. You'll be able to tell how accurate your estimates are by comparing them to actual traffic. If they aren't accurate at all, try playing with the calculation (e.g., change the time period you're sampling from). Or maybe it's a good thing that your estimate is off: your site was just featured on the front page of the New York Times!
Cheers.
The algorithm you are looking for is called least squares.
What you need to do is minimize the summed-up distances from each point to the function you will use to predict future values. For the distance to always be positive, the square of the difference is used rather than its absolute value. The sum of the squared differences has to be a minimum. By defining the function that makes up that sum, taking its derivative, and solving the resulting equations, you will find the parameters of the function that comes closest to the historical values.
Programs like Excel (and probably OpenOffice/LibreOffice Calc) have a built-in function that does this for you, using polynomial functions to model the dependence.
Basically, you should take time as the independent variable and all the others as dependent values.
This technique is widespread in econometrics. If you have a lot of statistical data from the past, the prediction for the next day will be quite accurate (you will also be able to determine the confidence interval: the possible error that may occur). Predictions for the days after that will be less and less accurate.
If you make different models for each day of week, include holidays and special days as variables, you will get a much higher precision.
This is the right way to forecast future values mathematically. But from all this a question arises: is it really worth it?
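If you do go down this road, a straight-line least-squares fit is only a dozen lines of PHP. This is a minimal sketch (a line is the simplest model; the same idea extends to polynomials), with made-up click counts:

```php
<?php
// Fit y = a + b*x to the data by least squares (closed-form solution
// for a straight line), then extrapolate.
function linearFit(array $xs, array $ys): array
{
    $n = count($xs);
    $sumX = array_sum($xs);
    $sumY = array_sum($ys);
    $sumXY = 0.0;
    $sumXX = 0.0;
    for ($i = 0; $i < $n; $i++) {
        $sumXY += $xs[$i] * $ys[$i];
        $sumXX += $xs[$i] * $xs[$i];
    }
    $b = ($n * $sumXY - $sumX * $sumY) / ($n * $sumXX - $sumX * $sumX);
    $a = ($sumY - $b * $sumX) / $n;
    return array($a, $b); // intercept, slope
}

// Fit on day numbers 0..N-1, then predict days N..N+6.
$clicks = array(120, 132, 125, 140, 138, 150, 149); // sample data
list($a, $b) = linearFit(range(0, count($clicks) - 1), $clicks);
$forecast = array();
for ($d = count($clicks); $d < count($clicks) + 7; $d++) {
    $forecast[] = $a + $b * $d;
}
```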
Start off by connecting to the database and retrieving the data for the previous x days.
Then you could make a line of best fit for those days and extend it into the future. But depending on the application, a line of best fit isn't going to be good enough.
A simple approach would be to group by day and average each value. This can all be done in SQL.

A Digg-like rotating homepage of popular content, how to include date as a factor?

I am building an advanced image sharing web application. As you may expect, users can upload images and others can comments on it, vote on it, and favorite it. These events will determine the popularity of the image, which I capture in a "karma" field.
Now I want to create a Digg-like homepage system showing the most popular images. That part is easy, since I already have the weighted karma score: I just sort on it in descending order to show the 20 most valued images.
The part that is missing is time. I do not want extremely popular images to stay on the homepage forever. An easy solution is to restrict the result set to the last 24 hours. However, I'm also thinking that, to keep the images rotating throughout the day, time could be a variable whose offset influences the image's sort order.
Specific questions:
Would you recommend the easy scenario (just sort for the best images within 24 hours) or the more sophisticated one (use the datetime offset as part of the sorting)? If you advise the latter, any help with the mathematical side of it?
Would it be best to run a scheduled service to mark images for the homepage, or would you advise a direct query? (I'm using MySQL.)
As an extra note, the homepage should support paging, and on a quiet day it should include entries from previous days to make sure it is always "filled".
I'm not asking the community to build this algorithm, just looking for some advise :)
I would go with a function that decreases the "effective karma" of each item after a given amount of time elapses. This is a bit like Eric's method.
Determine how often you want the "effective karma" to be decreased. Then multiply the karma by a scaling factor based on this period.
effective karma = karma * (1 - percentage_decrease)
where percentage_decrease is determined by your function. For instance, you could do
percentage_decrease = min(1, number_of_hours_since_posting / 24)
to make the effective karma of each item decrease to 0 over 24 hours. Then use the effective karma to determine which images to show. This is a more stable solution than just subtracting the time since posting, as it scales the karma between 0 and its actual value. The min keeps the scaling bounded at 0: once a day has passed, number_of_hours_since_posting / 24 exceeds 1, and without the min the effective karma would go negative.
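Those formulas translate directly to PHP; the 24-hour window is the one used above, and should be tuned to control how fast items fall off the page:

```php
<?php
// effective karma = karma * (1 - min(1, hours_since_posting / 24)),
// i.e. a linear decay from full karma to zero over 24 hours.
function effectiveKarma(float $karma, int $postedAt, ?int $now = null): float
{
    $now = $now ?? time();
    $hours = ($now - $postedAt) / 3600;
    $percentageDecrease = min(1.0, $hours / 24);
    return $karma * (1 - $percentageDecrease);
}
```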
However, this doesn't take into account popularity in the strict sense. Tim's answer gives some ideas into how to take strict popularity (i.e. page views) into account.
For your first question, I would go with the slightly more complicated method. You will want some "All time favorites" in the mix. But don't go by time alone, go by the number of actual views the image has. Keep in mind that not everyone is going to login and vote, but that doesn't make the image any less popular. An image that is two years old with 10 votes and 100k views is obviously more important to people than an image that is 1 year old with 100 votes and 1k views.
For your second question, yes, you want some kind of caching going on for your front page. That's a lot of queries to produce the entry point into your site. However, much like SO, your type of site will tend to draw traffic to inner pages through search engines, so try to watch and optimize your queries everywhere.
For your third question, going by factors other than time (i.e. # of views) helps to make sure you always have a full and dynamic page. I'm not sure about paginating on the front page, leading people to tags or searches might be a better strategy.
You could just calculate an "adjusted karma" type field that would take the time into account:
adjusted karma = karma - number of hours/days since posted
You could then calculate and sort by that directly in your query, or you could make it an actual field in the database that you update via a nightly process or something. Personally I would go with a nightly process that updates it since that will probably make it easier to make the algorithm a bit more sophisticated in the future.
I've found it: the lower bound of the Wilson score confidence interval for a Bernoulli parameter.
Look at this: http://www.derivante.com/2009/09/01/php-content-rating-confidence/
In the second example he explains how to use time as a "freshness factor".
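For completeness, the Wilson lower bound itself is easy to compute in PHP (z = 1.96 gives a 95% confidence level):

```php
<?php
// Lower bound of the Wilson score interval for up/down votes: an
// estimate of the "true" positive fraction that penalizes small
// sample sizes.
function wilsonLowerBound(int $ups, int $downs, float $z = 1.96): float
{
    $n = $ups + $downs;
    if ($n === 0) {
        return 0.0;
    }
    $phat = $ups / $n;
    return ($phat + $z * $z / (2 * $n)
            - $z * sqrt(($phat * (1 - $phat) + $z * $z / (4 * $n)) / $n))
           / (1 + $z * $z / $n);
}
```

With the same up/down ratio, more votes give a higher bound, which is exactly the property you want for ranking.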
