Your mileage may vary on the validity of the Sagarin ratings, but history shows sustained accuracy in picking straight-up winners across sports and levels. Even given the volatility of Sagarin's early-season ratings, which carry over last year's performance until the current season provides enough data to stand on its own (and thus don't fully account for each team's current competitiveness), the ratings pick winners at a rate of roughly 65-70%.
- Against the spread is another story: virtually every system, Sagarin included, seems to manage only a consistent 47-55% accuracy against the spread. But that's an issue for down the road.
- While there are a variety of other predictive systems, I go with Sagarin given its longer history of accuracy (dating back to 1985), and because Sagarin covers all major sports where other systems focus on only certain ones. The methodology is consistently accurate across the board, college or pro.
As many others certainly do, I started out in the Ballhype contest using Sagarin's ratings as a guide. The only problem, of course, is that you typically end up picking the favorites, which comes with little reward since so many others are picking the favorites as well. You receive a small fraction of a point when you win, and lose the full one point when you lose. A single loss can undo several wins in an instant. However, when the underdog scores the upset, the Hensleys of the world pick up massive points and blow by everyone at once, even after missing on several other underdog picks where the favorite won.
Eventually, I started taking the Hensley route and picking all the underdogs in the pro games, as that's where a regular number of upsets occur. Even the worst pro teams often manage to win 30-40% of their games, and the best teams can lose 20-40% of theirs. Simple logic would indicate that scoring 5-10 points around 20-40% of the time and losing only one point 60-80% of the time will still lead to a big net gain.
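That underdog logic can be sketched as a quick expected-value check. The payout and upset-rate numbers below are illustrative assumptions drawn from the ranges above, not actual contest data:

```python
# Net expected points for a single Golden Picks selection:
# a win_prob chance of gaining win_points, otherwise lose 1 point.
def expected_points(win_prob, win_points, loss_points=-1.0):
    return win_prob * win_points + (1 - win_prob) * loss_points

# A hypothetical underdog that wins 30% of the time and pays 7 points
# when it does is worth about +1.4 points per pick in the long run:
ev_underdog = expected_points(0.30, 7.0)
```

Even at the pessimistic end of those ranges, a 20% upset rate paying 5 points still works out to 0.2 × 5 − 0.8 × 1 = +0.2 points per pick.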
At the same time, I noticed that many of the dozens of college games in football and basketball were such lopsided contests that picking the underdog still made no sense, such as the Florida football team against, say, Chattanooga. Even Hensley himself avoided picking the underdog in some contests. Many college games had 9-11 picks for the favorite and none for the underdog, and when that favorite rolled to victory, everyone split the lone bonus point for a meager but assured 0.09-0.11 points apiece.
Seeing that dual phenomenon, I felt there was a way to improve on Hensley's underdogs-only method: a middle ground where you could pick a favorite and have a good chance to win, while knowing when to pick an underdog. Every now and then the Sagarin ratings would indicate the underdog was actually the more likely team to win, but such instances were rare.
However, some comparisons were closer than others. Some Sagarin comparisons showed lopsided differences between teams, while some leaned one way but were very close. Obviously, not all picks were equal, and I recalled my poker research and discussions of expected value. Knowing the relationship between probability and expected value, I realized that there had to be a direct correlation between the marginal difference in Sagarin ratings between two teams and the probability of each team winning. Putting that correlation and the idea of expected value together, I decided there had to be a way to devise a system that would maximize the return on each Ballhype Golden Picks selection.
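As a hedged sketch of what such a system might look like: once you have an estimated win probability, the contest's scoring (a pool of one bonus point plus one point per wrong picker, split evenly among the winners) makes each side's expected value easy to compare. The win probability here is a placeholder input, since deriving it from Sagarin margins is exactly the open problem described next:

```python
# Hypothetical sketch: compare the expected points of joining either side
# of a game, given an estimated win probability for the favorite and the
# current pick counts. Estimating win_prob is the unsolved step; the
# scoring follows the Golden Picks rules (pool = 1 bonus point plus
# 1 point per wrong picker, split evenly among the winners).
def pick_ev(win_prob, same_side_pickers, other_side_pickers):
    payout = (1 + other_side_pickers) / (same_side_pickers + 1)  # you join this side
    return win_prob * payout - (1 - win_prob) * 1.0

def better_side(fav_win_prob, fav_pickers, dog_pickers):
    ev_fav = pick_ev(fav_win_prob, fav_pickers, dog_pickers)
    ev_dog = pick_ev(1 - fav_win_prob, dog_pickers, fav_pickers)
    return ("favorite", ev_fav) if ev_fav >= ev_dog else ("underdog", ev_dog)
```

With a 95% favorite already holding 9 of 10 picks, the favorite side works out to about +0.14 points and the underdog side to about −0.70, so the lopsided favorite remains the better pick; as the game tightens toward a coin flip, the underdog's far bigger payout takes over.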
"Why do this?" you ask. "Who cares? It's just a game." Yes, it is. And so is, say, sportsbook wagering. The difference is that the latter nets you money when you win. Knowing that poker players use odds and expected-value concepts to play profitably over the long run, I realized that EV concepts could cross-apply to selecting teams, provided a team-rating system showed a consistent correlation with picking winners. While point spreads pose an additional challenge beyond picking straight-up winners, I figured I could cross that bridge if and when I confirmed such a system worked within the confines of the Ballhype contest, which operates on a similar scope with straight-up picks.
The big obstacle was determining a consistent method for estimating a team's probability of winning. That was the next step in my research....
[Continued in Part 4]
Saturday, November 28, 2009
Ballhype, Golden Picks and EV, Part 2: Explaining Expected Value to the Uninitiated Through a Poker Example
A few days ago, I recalled my poker research and its common theme of expected value (EV). Similar to the microeconomic concept of marginal utility, EV boils down to taking the expected gain of a positive outcome multiplied by its probability of occurring, then subtracting the expected loss of a negative outcome multiplied by its probability of happening, to arrive at a net expected value. If the net EV is negative, taking that route will lose in the long run; if it's positive, the play will succeed in the long run.
To better illustrate the EV concept, here's a simplified example. Let's say you're playing $4/$8 limit Texas Hold'em poker, and you have two pair on the turn, having paired your Ace and your Ten, with one card to come. There's $46 in the pot, one other player remains, and he has made a single big bet of $8. You can close the action with a call, end the hand with a fold, or keep the betting open with a raise to $16.
Let's say there are three cards of one suit on the board, none of which match your cards, and, master hand-reader or not, you know the bettor well enough to be fairly sure he has a flush (assume the three suited cards are far enough apart that a straight flush is impossible). To win this hand, the river card has to improve your two pair to a full house, the only hand that will beat the flush. So let's say the only options you'll consider here are calling the $8 bet or folding. Is calling profitable?
With four cards on the board and two in your hand, there are 46 unseen cards that could come on the river. We need the probability that a card completing our full house arrives. Never mind the cards other players folded, the cards burned between streets, or the cards in your opponent's hand; treating all 46 unseen cards as live compensates for the chance that our needed cards are among the dead ones.
There are four cards that will score the full house: The two remaining Aces (there are four total in the deck, one is in your hand and one is already on the board), and the two remaining tens (ditto). Since four of the 46 possible remaining cards will win the hand on the river, our odds of winning the hand on the river are 4 out of 46 (8.7%).
Let's keep the whole implied odds concept simple and say that we get to act last and that, if we call the $8 and our river card hits, our opponent will just go ahead and make another $8 bet on the river, which we'll call. Let's also assume that, with the pot so big, the casino dealer has already pulled the maximum rake and jackpot drop, so no additional money will be taken from the pot.
With $46 in the pot plus another $8 from the opponent's bet, there's $54 total. Knowing this player will bet another $8 on the river if we hit, that's a total of $62 we will win if our hand hits. That's our expected positive outcome if we call: We will get $62.
If we call and the hand doesn't hit, we lose $8. We ignore all other money we've put into this pot: That's a sunk cost which you're not getting back whether you fold this hand or call and lose. Thus if we fold, we have a 100% chance of netting 0 dollars on that decision.
The expected value of calling combines the 8.7% chance of hitting the full house and winning $62 with the 91.3% chance of missing and our $8 call going to waste:
(0.087 * $62.00) + (0.913 * -$8.00) = -$1.91
If you hypothetically got into this exact same decision a million times, and you made the exact same decision to call every single time... over the long run you would average a loss of $1.91 for every time you called the bet. Thus the decision to call is not a profitable one: the expected value of calling is negative.
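The arithmetic above is easy to verify in a few lines, using the same numbers as the example: 46 unseen cards, 4 outs, $62 won on a hit, $8 lost on a miss.

```python
# Reproduce the turn-call EV from the example above.
outs, unseen = 4, 46
p_hit = outs / unseen                      # 4/46, about 0.087
ev_call = p_hit * 62 + (1 - p_hit) * -8    # about -$1.91 per call
```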
The decision to fold, even though its expected value is $0.00, is $1.91 more profitable than the decision to call. Yes, you're guaranteed to win nothing, but that is still relatively more lucrative than the negative-EV decision to call. The times you hit and win money will not offset all the times you call and lose money.
Many experienced poker players make decisions involving expected value all the time, and (provided they have the requisite skill and experience) over the long run they win money because they don't invest in bets, calls and raises unless doing so has a positive expectation. As they get into these situations time and again, they still lose plenty of individual bets, but sticking to positive-EV decisions means what they win offsets those losses and nets them a profit over the long run.
The reason I wasted your time with this long poker example is that expected value is a concept you can apply to everything in life.
... just as I decided to apply to Ballhype's Golden Picks contest. More to come in Part 3
Part 1: Ballhype's Golden Picks Contest and Expected Value
For the last few weeks I've played Ballhype's Golden Picks contest. You try to predict winners, and you receive a weighted score for correct picks depending on how other players picked. You get -1 point for every losing pick. How many points you get for a winning pick depends on how many other players picked the winning team and the losing team: the winning players split a pool consisting of one point plus one point for every player that picked the wrong team. This offers a small reward for picking a favorite, while winning underdog picks net far more points.
For example, let's say Florida plays Troy, and 9 players pick Florida to win while 1 player picks Troy to score the upset. If Florida wins like they're supposed to, the nine winning players evenly split a pool of two points: One point for the moron that picked Troy to win (that moron loses a point for picking wrong), and one bonus point for picking a winner. Two points divided by nine equals 0.22 points per player, so by picking Florida you get 0.22 points.
But let's say half of Florida's team gets eaten by Tremors-like underground burrowing alligators that for some reason find the taste of Troy footballers unappealing, the game continues on despite the howling protests of the Florida fans who weren't eaten before SWAT soldiers were able to execute the offending alligators, and Troy manages to score a huge upset.
The one dude who picked the upset gets 9 points, one for every poor schlub that picked Florida, plus one bonus point for making the right pick. For successfully predicting the upset (or guessing), the winning player gets a total of 10 points.
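Both payouts can be checked with the same formula: a pool of one bonus point plus one point per wrong picker, split evenly among the winners.

```python
# Points per winning player under the Golden Picks scoring described above.
def winner_points(winning_pickers, losing_pickers):
    pool = 1 + losing_pickers      # 1 bonus point + 1 per wrong picker
    return pool / winning_pickers

florida_holds = winner_points(9, 1)   # (1 + 1) / 9, about 0.22 points each
troy_upsets = winner_points(1, 9)     # (1 + 9) / 1 = 10 points
```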
Now, an astute player named Rich Hensley has exposed the folly of such a system: By predicting upsets in most games, Hensley scores so many points every time an underdog wins that it more than offsets all the times he loses a point when the favorite wins. Each week he is usually the winning player.
I hang around near the top each week thanks to keeping abreast of the Sagarin ratings, along with having taken to frequently mimicking Hensley's tactic. At the same time, I notice his sub-.500 record with his picks and have wondered... whether there's a more optimal method of making picks that can maximize my score. Otherwise, the best I can do is just pick underdogs and essentially tie with Hensley for the top spot, and what's the fun in that?