Feature request: Community prediction

Comments:
#1
Mrg_41

If users could predict on matches like on thespike.gg, that would be great.
Also a leaderboard for correct predictions.

#2
ArgieGR8ArgieB8ArgieM8
User-polled predictions on individual matches are exactly what I've wanted for a while now. For a prediction leaderboard I would suggest using a mean squared error (the Brier score); it's very simple to calculate and provides a nice metric for an individual's or aggregate's predictions over time.

The problem with a prediction leaderboard, of course, is that not every prediction is created equal. Only those who predict the same games can be compared with such metrics. So it would have to be per tournament, and only include those who predict on the same games (or just all the games in that tournament, for simplicity).
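A minimal sketch of that restriction: rank predictors by mean squared error, but only over the matches that everyone on the leaderboard actually predicted. The data layout and names here are hypothetical, just to illustrate the idea.

```python
# Hypothetical data: predictor -> {match_id: predicted win probability for team 1}
predictions = {
    "alice": {"m1": 0.7, "m2": 0.4, "m3": 0.9},
    "bob":   {"m1": 0.6, "m3": 0.8},            # bob skipped m2
}
outcomes = {"m1": 1, "m2": 0, "m3": 1}  # 1 = team 1 won, 0 = team 1 lost

# Only score matches that every predictor submitted a prediction for.
common = set.intersection(*(set(p) for p in predictions.values()))

# Mean squared error (Brier score) over the common matches; lower is better.
leaderboard = sorted(
    (sum((p[m] - outcomes[m]) ** 2 for m in common) / len(common), name)
    for name, p in predictions.items()
)
for score, name in leaderboard:
    print(f"{name}: {score:.3f}")
```

Here only m1 and m3 are scored, so bob isn't unfairly compared on a match he never predicted.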

#3
ranker11

How about less weight on the quarterfinals and the most weight on the finals, something like that?

#4
ArgieGR8ArgieB8ArgieM8

Regardless of the system or metric, comparing prediction accuracy should be limited to those who predict on the same games. When you start comparing people who made different predictions, they are working not only with different amounts of information, but some predictions are also just easier to make based on the teams participating and their skill difference.

On the weighting idea, what you're proposing is the opposite of what you'd want to weight. The deeper you are into a tournament, the easier it generally becomes to predict the games, because you have more data to base any given prediction on. Predicting is inherently an information game. To compare different predictors, you need to make sure that their predictions were made while they were working with the same amount of information.

Brier scoring already "weights" predictions by how confident you are in them, so you don't even have to worry about that aspect. The only things that matter are the underlying uncertainty given the information you have and whether the predictions were made on the same events.

Let's say there's a really close matchup, with about a 40% win probability for team 1. Your goal as a predictor is not only to pick the correct outcome, but also to predict how much uncertainty there is given the information (yes, you're rewarded for predicting uncertainty correctly too, but consistently predicting maximum uncertainty gives you a lower resolution score). With more available information, the uncertainty naturally decreases.

A Brier score of 0 means you've made perfectly confident correct predictions; a Brier score of 1 means you've made perfectly confident incorrect predictions. A Brier score of 0.5 would result from picking the correct outcome 50% of the time with 100% confidence. So if you predict that team 1 has a 40% probability of winning and they win, the Brier score of the prediction is (0.4 - 1)^2 = 0.36, but if they don't win, the Brier score is (0.4 - 0)^2 = 0.16. So you are not penalized too hard for getting uncertain predictions right or wrong. Generally a score under 0.25 is good, because consistently guessing maximum uncertainty would net you exactly that score (a prediction with 50% confidence produces a score of 0.25 regardless of the outcome).
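The arithmetic above can be sketched in a few lines (the function name is just illustrative):

```python
def brier_score(predicted_prob, outcome):
    """Squared error between a predicted win probability (0..1)
    and the actual outcome (1 = team 1 won, 0 = team 1 lost)."""
    return (predicted_prob - outcome) ** 2

# The worked examples from the comment above:
print(round(brier_score(0.4, 1), 2))  # team 1 wins:  0.36
print(round(brier_score(0.4, 0), 2))  # team 1 loses: 0.16
print(round(brier_score(0.5, 1), 2))  # a 50/50 guess scores 0.25 either way
```

A leaderboard entry would then just be the mean of these per-match scores over whatever set of games is being compared.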

tl;dr: Predictors have to be compared in the same situations no matter the system, because fundamentally the art of prediction is about the amount of information any given predictor has to make inferences from. In some situations there is more information available, and therefore the predictions are easier to make.
