Wednesday, May 11, 2011

Random Walkers

The next RPI alternative we'll look at is inspired by the LRMC rating system developed by Paul Kvam and Joel Sokol at Georgia Tech.  This rating system has gotten some publicity in recent years for predicting NCAA tournament outcomes better than other rating systems.  The LRMC rating system is in turn inspired by a ranking system for NCAA football developed by Callaghan, Porter, and Mucha.  The core of their system is what they call a "simple random walker": 
Consider independent random walkers who each cast a single vote for the team they believe is the best. Each walker occasionally considers changing its vote by examining the outcome of a single game selected randomly from those played by their favorite team, recasting its vote for the winner of that game with probability p (and for the loser with probability 1-p).
So the random walker rating system simulates a large number of these random walker voters, and iterates until the votes across all the teams settle down to a steady state.

Callaghan, Porter, and Mucha derive a solution for this problem based upon linear algebra and differential equations that is beyond my mathematical abilities to comprehend (or program).  However, it is relatively straightforward to calculate this with the sort of iterative solution we have used elsewhere.

Assume that a team T currently has W (weight) votes as the best team. (W need not be an integer.)  We update W by looking at the games that T has played, and distributing some of W to the winner and the loser of each game.  So some of the weight comes back to team T and some goes to its opponent.  We do the same thing for every other team until the weights of the teams settle down to a steady state.

If team T has played N games, we begin our distribution by splitting W into N buckets.  We then walk through the list of games that team T has played, and for each game we split its bucket (W/N) into two parts: (W/N)*p for the winning team, and (W/N)*(1-p) for the losing team.
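To make this concrete, here is a minimal Python sketch of the iterative calculation.  The function name, the (winner, loser) game format, the uniform starting weights, and the fixed iteration count are my own choices for illustration, not something taken from the LRMC or Callaghan, Porter, and Mucha papers; p is the winner's share, discussed in the next paragraph.

def random_walker_ratings(games, p=0.80, iterations=1000):
    """games is a list of (winner, loser) pairs; p is the probability a
    walker recasts its vote for the winner of a randomly chosen game."""
    teams = {t for game in games for t in game}
    # Start with every team holding an equal share of the vote.
    weights = {t: 1.0 / len(teams) for t in teams}
    # Index each team's games so we can split its weight across them.
    schedule = {t: [] for t in teams}
    for winner, loser in games:
        schedule[winner].append((winner, loser))
        schedule[loser].append((winner, loser))
    for _ in range(iterations):
        new_weights = {t: 0.0 for t in teams}
        for team, w in weights.items():
            bucket = w / len(schedule[team])       # W/N per game played
            for winner, loser in schedule[team]:
                new_weights[winner] += bucket * p          # winner's share
                new_weights[loser] += bucket * (1.0 - p)   # loser's share
        weights = new_weights
    return weights

Rather than a fixed number of iterations, we could also stop when the weights change by less than some small tolerance between passes; for a sketch, a fixed count is simpler.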

The selection of "p" determines how the weights will be split between winning and losing teams.  If p=1.0, then the winning team in each game receives all the "votes" and the system is similar to RPI.  If p=0.50, then there is no bonus for winning games and every team tends to the same rating.  In their paper, Callaghan, Porter, and Mucha use p=0.80 to generate their ratings.  (LRMC, as we shall see, uses a more complex formula based upon MOV to determine the split.)  We will try a range of values from p=0.50 to p=1.00:

  Predictor      % Correct    MOV Error
  Wilson         77.7%        10.33
  RW (p=0.60)    70.3%        11.87
  RW (p=0.70)    71.0%        11.72
  RW (p=0.80)    69.3%        12.09
  RW (p=1.00)    64.7%        13.60
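(The numbers above come from my usual test data.  For illustration only, a toy sweep over p using the sketch above, with made-up game results, would look something like this:)

games = [("A", "B"), ("A", "C"), ("B", "C")]   # hypothetical results, not real data
for p in (0.60, 0.70, 0.80, 1.00):
    ratings = random_walker_ratings(games, p=p)
    print(p, sorted(ratings.items(), key=lambda kv: -kv[1]))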

Performance is maximized somewhere in the neighborhood of p=0.70, but even at its best the random walker is considerably worse than our best predictor so far.

We'll revisit the "random walker" model later when we look at LRMC.  It's an intriguing framework for distributing weight between teams, and perhaps can be modified to provide better overall performance.
