# Net Prophet

Exploring algorithms for predicting NCAA basketball games.

## Thursday, November 20, 2014

Well, the new college basketball season is upon us. My early predictions indicate that the Kentucky Blue Platoon is going to play the Kentucky White Platoon in the Championship game, but that prediction might evolve as we see more games.

In the meantime, I have been working on the Net Prophet model (for new readers of the blog - ha, as if! - sometimes known as the Pain Machine or PM). Traditionally the PM has used a linear regression as its predictive model, but lately I have been looking at other possibilities. In coming blog posts I'll detail some of these things.

I'm also hoping to get my act together enough to get the PM listed on the Prediction Tracker page for basketball predictions. I meant to do this last year but never quite got around to doing it. But this year for sure :-).

## Friday, October 10, 2014

### Day of the Week Effect & Polynominal Variables in Linear Regression

Motivated partly by recent discussion of Thursday Night Football, I began to wonder if the day of the week has any impact upon college basketball games. This is a little bit of a tricky topic, because conferences play games on different nights (e.g., the Ivy League plays on Friday nights) so there's some conference bias mixed into any discussion of the impact of day of the week. But I decided to ignore that for the moment and just look at the straightforward question.

This is a little trickier than you might expect, because my prediction model uses linear regression. Linear regression works fine when we're looking for the relationship between two numerical variables (e.g., how does rebounds/game affect score) but it doesn't work so well with polynominal (not polynomial!) variables. A polynominal variable is one that takes on a number of discrete, non-numeric values. In this case, day of the week can be Monday, Tuesday, Wednesday and so on.

To use a polynominal variable in linear regression, we turn it into a number of binominal variables. In this case, we create a new variable called "DOW = Monday" and give it a 1 or 0 value depending upon whether or not the day of the game is Monday. We do this for each possible value of the polynominal variable, so in this case we end up with seven new variables. We can then use these as input to our linear regression.
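
As a sketch of this expansion (my own toy illustration, not the PM's actual code - the games and margins below are invented), here is how the seven indicator variables might feed an ordinary least-squares regression:

```python
import numpy as np

# Day of the week is a "polynominal" variable, so we expand it into
# seven 0/1 indicator ("binominal") variables before regressing.
DAYS = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]

def one_hot(day):
    """Seven indicator features: 1.0 for the matching day, else 0.0."""
    return [1.0 if day == d else 0.0 for d in DAYS]

# Invented games: (day of game, home margin of victory).
games = [("Fri", -2.0), ("Fri", -1.0), ("Sat", 3.0), ("Sat", 4.0), ("Mon", 1.0)]

X = np.array([one_hot(day) for day, _ in games])
y = np.array([margin for _, margin in games])

# Ordinary least squares over the indicator columns.
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
for day, c in zip(DAYS, coef):
    print(f"DOW = {day}: {c:+.3f}")
```

Because each game sets exactly one indicator to 1, each day's coefficient here is just the average home margin for that day's games (and 0 for days with no games).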

When I do so, I find that only one of the new variables has any importance in the regression:

    0.6636 * DOW=4=false

Translating, this says the home team is at a small disadvantage in Friday games. I leave it up to the reader to explain why that might be true. (Ivy League effect?)

We can also look at whether predictions are more or less accurate on some days. When I do that for my model, I find that the predictions are most accurate for Saturday games and least accurate for Sunday games; the difference in RMSE is about six-tenths of a point, so it's not an entirely trivial difference.
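
For anyone who wants to run this kind of check on their own predictions, a minimal sketch of per-day RMSE (the numbers below are invented, not the PM's actual errors):

```python
import math
from collections import defaultdict

def rmse_by_day(records):
    """records: (day, predicted margin, actual margin) triples.
    Returns the RMSE of the predictions for each day's games."""
    sq_errs = defaultdict(list)
    for day, predicted, actual in records:
        sq_errs[day].append((predicted - actual) ** 2)
    return {day: math.sqrt(sum(e) / len(e)) for day, e in sq_errs.items()}

# Invented predictions, chosen so Saturday comes out more accurate than Sunday.
records = [
    ("Sat", 5.0, 4.0), ("Sat", -3.0, -2.0),
    ("Sun", 7.0, 2.0), ("Sun", 1.0, 6.0),
]
print(rmse_by_day(records))
```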

## Monday, September 8, 2014

### A Few More Papers

As usual, all these papers are available in the Papers archive.

**[Trono 2007] Trono, John A., "An Effective Nonlinear Rewards-Based Ranking System," Journal of Quantitative Analysis in Sports, Volume 3, Issue 2, 2007.**

Trono is very concerned about the NCAA football polls and with formulating a rating system that will closely match those polls. I'm not exactly sure what utility that provides -- surely if I want to know what the polls say I can just look at them? That issue aside, his description of his ranking system is vague and confusing -- I came away with no good understanding of how it worked or how to implement it.

**[Minton 1992] Minton, R., "A mathematical rating system," *UMAP Journal* 13.4 (1992): 313-334.**

This is a teaching module for undergraduate mathematics that illustrates basic linear algebra through application to sports rating. The rating systems developed are simple systems of linear equations based upon wins, MOV, etc., but this is a clear and detailed introduction to some basic concepts.

**[Redmond 2003] Redmond, Charles, "A natural generalization of the win-loss rating system," *Mathematics Magazine* (2003): 119-126.**

Redmond presents a rating system based upon MOV that includes a first-generation strength of schedule factor. It isn't extremely sophisticated, but it makes a nice follow-on to [Minton 1992].

**[Gleich 2014] Gleich, David. "PageRank Beyond the Web," http://arxiv.org/abs/1407.5107.**

This is a thorough and well-written survey of the use of the PageRank algorithm. Gleich provides clear, non-formal descriptions of the subject but also delves into the mathematical details at a level that will require some knowledge to understand. There is a section on PageRank applied to sports rankings, and Gleich also shows that the Colley rating is equivalent to a PageRank. Required reading for anyone interested in applying PageRank-type algorithms.

**[Massey 1997] Massey, Kenneth, "Statistical models applied to the rating of sports teams," *Bluefield College* (1997).**

Kenneth Massey's undergraduate thesis is required reading for anyone interested in sports rating systems. He covers the least-squares and maximum-likelihood ratings that form the basis of the Massey rating system.

## Thursday, September 4, 2014

### Welcome Back & The Oracle Rating System

Welcome back! I hope you had a great summer. With Fall rapidly approaching, my attention has returned (somewhat) to NCAA basketball and sports prediction. One trigger was happening across a paper in the June issue of JQAS:

**[Balreira 2014] Eduardo Cabral Balreira, Brian K. Miceli, and Thomas Tegtmeyer, "An Oracle Method to Predict NFL Games," Journal of Quantitative Analysis in Sports, Volume 10, Issue 2, March 2014, Pages 183–196, DOI: 10.1515/jqas-2013-0063.**

The paper describes a variant of a random walker algorithm and uses it to predict NFL games. The work here was motivated by a quirky feature of random walkers: beating a very good team can raise a team's rating significantly, even if the rest of the team's performance is poor. In some ways this makes sense, but it can lead to a situation where a mediocre team is ranked inordinately high based upon a lucky win over a very good team. To address this, the Oracle algorithm introduces an artificial additional team (called the Oracle) and, by varying how many times each real team has "won" or "lost" against this Oracle team, biases the resulting rankings. The authors test the predictive performance of the Oracle rating on NFL games from 1966-2013, and it out-performs rating systems like Massey and Colley, although only by small margins (1-2% in most cases). The paper is well-written and comprehensive, with a clear explanation of the approach, illustrative examples, and thorough testing.
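
The Oracle construction can be sketched in a few lines. This is my reading of the idea, not the authors' code; the function name and the one-win/one-loss defaults are my own invention (the paper tunes these counts to bias the ratings):

```python
# The Oracle is an artificial extra team; giving every real team a fixed
# number of wins and losses against it biases the final ratings.
def add_oracle(games, n_teams, oracle_wins=1, oracle_losses=1):
    """Append pseudo-games against an artificial team (index n_teams):
    the Oracle beats each real team oracle_wins times and loses to it
    oracle_losses times."""
    oracle = n_teams
    extra = []
    for t in range(n_teams):
        extra += [(oracle, t)] * oracle_wins    # Oracle beats team t
        extra += [(t, oracle)] * oracle_losses  # team t beats the Oracle
    return games + extra

# Three real teams, two real results, plus the Oracle's pseudo-games.
real_games = [(0, 1), (1, 2)]  # (winner, loser) pairs
augmented = add_oracle(real_games, 3)
print(augmented)
```

The augmented schedule then feeds whatever rating system you like; the Oracle's pseudo-games damp the effect of a single lucky win over a strong team.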

Since I have previously implemented various random walker algorithms, it wasn't difficult to implement this approach and test its performance on NCAA basketball games. There were a couple of interesting results from this experiment.

First of all, I found the best performance was based upon the won-loss records of teams, and not margin of victory (MOV). This is pretty unusual -- I don't think I've found any other rating system that performed better using won-loss than MOV. The performance was also competitive with very good MOV-based rating systems.

Second, I found that for NCAA basketball games, the algorithm performed much better without converting the results matrix to column-stochastic form before creating the ratings. A brief digression is in order to explain that remark.

Random walker algorithms model a system with a large number of random walkers:

> Consider independent random walkers who each cast a single vote for the team they believe is the best. Each walker occasionally considers changing its vote by examining the outcome of a single game selected randomly from those played by their favorite team, recasting its vote for the winner of that game with probability p (and for the loser with probability 1-p).

If you let this process go long enough, it reaches a steady state, and the percentage of total walkers on each team becomes that team's rating. That means that the sum of all the ratings is 1, and each rating represents the probability that a walker will end on that team. When you formulate this as a matrix mathematics problem, you must normalize each column in the raw results matrix to sum to one (making the matrix "column stochastic") to ensure that the final ratings will represent the probabilities.
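
A minimal sketch of this walker model (my own toy implementation, not the paper's code, with an invented four-team schedule) showing ratings computed with and without the column-stochastic conversion:

```python
import numpy as np

# Toy schedule: (winner, loser) pairs among four teams.
games = [(0, 1), (0, 2), (1, 2), (2, 3), (1, 3), (0, 3)]
n = 4
p = 0.75  # probability a walker recasts its vote for the winner of a game

# M[i, j] accumulates the chance that a walker currently on team j
# moves to team i after examining one of team j's games.
M = np.zeros((n, n))
for w, l in games:
    M[w, l] += p        # walker on the loser moves to the winner with prob p
    M[l, l] += 1 - p    # ...or stays put
    M[l, w] += 1 - p    # walker on the winner moves to the loser with prob 1-p
    M[w, w] += p        # ...or stays put

def ratings(M, column_stochastic=True, iters=500):
    """Power-iterate to a steady state; optionally skip the
    column-stochastic normalization of the results matrix."""
    A = M / M.sum(axis=0) if column_stochastic else M / M.sum()
    r = np.ones(len(A)) / len(A)
    for _ in range(iters):
        r = A @ r
        r /= r.sum()  # keep r a probability vector
    return r

print("column-stochastic:", ratings(M, True))
print("raw matrix:       ", ratings(M, False))
```

In the column-stochastic case the result is the steady-state distribution of walkers; in the raw case it is just the dominant eigenvector of the results matrix, renormalized each step.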

It isn't clear what the ratings "mean" if you don't convert to column stochastic form, but I found that the ratings had much better performance for NCAA basketball games without the conversion. When I reported this result back to Eduardo Balreira, he tested it for his corpus of NFL games and found that it performed worse. It's altogether a rather curious result and I'm not certain what to make of it.

In my experimentation so far, I haven't found any customization of the Oracle system that produces results better than my current best predictors. However, it is close and has a few interesting properties that bear some more thought, so I may continue to play with it to see if I can discover a way to further improve its performance for NCAA basketball games.

## Monday, April 7, 2014

### Championship Game Prediction

The Prediction Machine hasn't fared very well this Tournament (languishing in the middle of both the Kaggle and March Machine Madness contests), but for what it's worth, here is the prediction for the Championship Game:

Connecticut vs. Kentucky: Kentucky by 2

I'd like to see Connecticut win myself, but I think they have a hard row to hoe. Napier & Boatright have been destroying opposing guards with their pressure defense. If they can do that to the Harrison twins and keep them from repeatedly driving the lane, that will certainly help Connecticut's chances. But so far the referees have been very stingy with charge calls, which is going to make it very difficult for Connecticut's undersized defense to deal with Kentucky's dribble-drive offense. Wisconsin figured out in the second half that they could mug the Harrisons once they were in the lane with little repercussion, but who knows if the reffing crew tonight will allow that. And you have to figure that Kentucky is going to continue to enjoy an enormous advantage in rebounding. Still, anything can happen, and it will hopefully be a tight and entertaining game.

### Machine March Madness Winner: Congratulations to Monte McNair!

Apparently none of the competitors in the Machine March Madness contest has either Kentucky or Connecticut winning the final game, so the contest has been decided, and the winner is Monte McNair with 108 points and 40 correct picks.

(Note that we did have one Machine March Madness competitor who did better than Monte -- "TD" -- but since he never contacted me to explain his entry, he has been disqualified.)

Congratulations to Monte who continues to be one of the strongest competitors year after year. (Although unfortunately something went wrong for him in the semi-final games in the Kaggle contest, where he dropped from the top ten to 44!)

## Wednesday, April 2, 2014

### Recent Papers Reviewed

I have added several new papers to the Papers archive. Short descriptions follow.

**[Barrow 2013] D. Barrow, I. Drayer, P. Elliott, G. Gaut, and B. Osting, "Ranking rankings: an empirical comparison of the predictive power of sports ranking methods," 2013.**

This paper compares a number of ranking systems on predictive power. The main conclusions are that (1) ranking systems which use margin of victory are more predictive than those that use only win-loss data, and (2) least squares and random walkers are better than other methods for predicting NCAA football outcomes.

**[Hvattum 2010] Lars Magnus Hvattum and Halvard Arntzen, "Using ELO ratings for match result prediction in association football," International Journal of Forecasting 26 (2010) 460–470.**

This paper looks at using ELO ratings to predict association football (soccer) matches. ELO was better than all of the other rating systems tested, but failed to out-perform the market lines.

**[Kain 2011] Kyle J. Kain and Trevon D. Logan, "Are Sports Betting Markets Prediction Markets? Evidence from a New Test," January 2011.**

This paper tests whether the point spread is a good predictor of margin of victory (it is) and whether the over/under is a good predictor of total points scored (it is not).

**[Melo 2012] Pedro O. S. Vaz De Melo, Virgilio A. F. Almeida, Antonio A. F. Loureiro, and Christos Faloutsos, "Forecasting in the NBA and Other Team Sports: Network Effects in Action," ACM Transactions on Knowledge Discovery from Data, Vol. 6, No. 3, Article 13, October 2012.**

This is a rather interesting paper that models NBA teams as networks exchanging players and coaches. This allows the authors to look at hypotheses such as "trading players improves a team's performance," or "a player who has played for a number of teams is more valuable than one who hasn't." They develop metrics such as "team volatility" and use these to predict future performance.

**[Page 2007] Garritt L. Page, Gilbert W. Fellingham, C. Shane Reese, "Using Box-Scores to Determine a Position’s Contribution to Winning Basketball Games," Journal of Quantitative Analysis in Sports, Volume 3, Issue 4, 2007, Article 1.**

This paper looks at box scores for games from the 1996-97 NBA season to determine how important different basketball skills (e.g., defensive rebounding) were to each basketball position (e.g., point guard). The surprising result was the importance of defensive rebounding by the guard positions and offensive rebounding by the point guard.

**[Park 2005] Juyong Park and M. E. J. Newman, "A network-based ranking system for US college football," Department of Physics and Center for the Study of Complex Systems, University of Michigan, Ann Arbor, MI, 2005.**

The authors develop a ranking system based upon the intuitive logic that "If A beat B and B beat C, then A indirectly beat C" and apply it to college football.

**[Strumbelj 2012] Erik Štrumbelj and Petar Vračar, "Simulating a basketball match with a homogeneous Markov model and forecasting the outcome," International Journal of Forecasting 28 (2012) 532–542.**

The authors build a possession-by-possession transition matrix for an NBA game based upon box score data and team statistics. They then use this matrix to predict game outcomes. The results were not statistically better than methods such as ELO, and worse than point spreads.
