Community Project: Ideal Ranking Methodologies

As many of you have noticed, and begun to comment on, we're making a switch in our Top 25 voting methodology. Burnt Orange Nation was designed to be, and has become, a community project, and in that spirit, we want to open up a thread to you to help improve our methodology as we move forward.

I'm going to lay this out in several parts. First, a brief synopsis of why we're making a change in our methodology. Second, some background reading material. And third, an explanation of where we're headed with our thinking. Then, as always, the post will be open in the comment section for your help. We feel confident that we're on the right track to an improved voting methodology, but we're also certain that it's a work in progress. We'll need your help along the way to make the appropriate adjustments, clarifications, and specifications to get it where we want it.

Why Change Now?

Long-time readers of this site know that we've written no fewer than a dozen posts complaining about various voting methodologies. We've studied and raised concerns about the methodologies behind both the human and the computer polls, not to mention the BCS itself.

BON is also a charter member of the BlogPoll, an alliance of college football bloggers that votes on a Top 25 each week, including in the preseason and at the conclusion of the bowl games. As busy as we get writing about Texas, we haven't had enough time to sit down and examine our ballot methodology with any real rigor. As such, our ballots are simply the combined opinion of AW, myself, and you, the readers. I think we produce reasonable ballots, and were we to continue with business as usual, it wouldn't be the worst sin in the world. Between us, we do a pretty good job of sorting teams out.

But business as usual precludes us from complaining about other methodologies, sells the BlogPoll short of its potential, and works against its founding mission: to be something not just different from, but better than, the mainstream polls.

We're not alone in this feeling. SMQ and I have engaged in some excellent dialogues about this very subject, and we have some offseason proposals for improving the BlogPoll. The most fundamental change we think must be made is a statement of methodology from each balloter. With that on the horizon, we decided that we'd better be on the frontier of this revolution (if you'll pardon my word choice), actively seeking to define a methodology that we can feel 100% comfortable standing behind. We'll get into the actual methodology preference in the last section, but the basic premise is that the previous methodology was insufficient, for a number of reasons. Strictly subjective ranking is too artistic, and too similar to the very process we shake our heads at.

Thus, the change.

Background Reading

For those interested in joining us in this process, some background reading may be useful. Below are links to various posts, polls, and so forth that should help frame the issue at hand.

SMQ summary article of ranking methodologies
The Boston College problem
BlogPoll central
BCS Central
Las Vegas oddsmakers' balloting
MGoBlog counter-argument to Vegas
Madness of not including MOV in computer rankings (MGoBlog)

There's plenty more, if anyone's interested. Email me if you'd like further reading.

The New Methodology

When you sort through and think about this long enough, one of the conclusions you're almost inevitably drawn to is that subjectively ranking teams carries with it a host of nasty problems. Take, as an example, West Virginia. A healthy number of readers are less than impressed with this team, and the complaint almost always sounds like this: "They haven't beaten anybody yet this year!" It's a fair complaint, but it's one that the mainstream human voters conveniently ignore, because they can. Teams, then, are ranked on the opinion of the voter, and as any good student of college football knows, that's a dangerous crutch to lean on. The fact is that we're pretty lousy at predicting which teams are better than others. If we weren't, there wouldn't be any controversy over this stuff. To leave voting to entirely subjective evaluations is to place undue confidence in the voter, who has time to watch, what, four or five games per week? At most.

And lest anyone say, "Yeah, but those Vegas guys - they have time to watch 'em all, and rank them properly," let me remind you that if their evaluations were sufficient, last week's Top 25 ought not change the following week. There'd be no need. Clearly, there is a need, and they most assuredly do change their rankings each week.

What, then, is the solution? There are really two routes a voter can go. One can try to find the perfect algorithm for ranking college football teams, plug in the input from every game each week, push the button, let the computer whir, and see what it spits out. It is, I would argue, an improvement over the strictly subjective rankings we see from, say, an AP voter, but it's subject to its own set of biases.
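To make that concrete, here's a minimal sketch of what such an algorithm might look like: a toy simple-rating system in which a team's rating is its average scoring margin adjusted for opponent strength. The games are made up, and every knob in it (the margin cap, the damping) is exactly the kind of built-in bias I mean - this is an illustration, not any real poll's formula.

```python
# A toy "computer poll": feed in every game, push the button, and let
# the machine rank. This is a minimal simple-rating sketch, not any
# real poll's formula; games, margin cap, and damping are assumptions.

games = [
    # (winner, loser, point margin) -- hypothetical results
    ("Texas", "Oklahoma", 7),
    ("Florida", "Tennessee", 14),
    ("Texas", "Florida", 3),
]

MARGIN_CAP = 21  # one built-in bias: how much should margin of victory count?


def ratings(games, iterations=200):
    teams = {t for g in games for t in g[:2]}
    # Each team's schedule as (opponent, signed margin) pairs.
    schedule = {t: [] for t in teams}
    for winner, loser, margin in games:
        m = min(margin, MARGIN_CAP)
        schedule[winner].append((loser, m))
        schedule[loser].append((winner, -m))
    rating = {t: 0.0 for t in teams}
    for _ in range(iterations):
        new = {}
        for t in teams:
            avg_margin = sum(m for _, m in schedule[t]) / len(schedule[t])
            avg_opp = sum(rating[o] for o, _ in schedule[t]) / len(schedule[t])
            # Rating = how much you win by, adjusted for who you played.
            new[t] = avg_margin + avg_opp
        mean = sum(new.values()) / len(new)
        # Recenter and damp the update so the iteration settles.
        rating = {t: 0.5 * rating[t] + 0.5 * (new[t] - mean) for t in teams}
    return rating


for team, r in sorted(ratings(games).items(), key=lambda x: -x[1]):
    print(f"{team:10s} {r:+.1f}")
```

Every choice in there - whether to cap the margin, how to weight opponent strength - is a bias baked in by the designer, which is the point.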

Further, we haven't the time, knowledge, or wherewithal to do any such thing.

Where does that leave us? With, I think, the most logical solution: a resume voting methodology. Teams are ranked based on actual performance to date, without any concern for future performance (e.g., "West Virginia's schedule is soft and I think they'll wind up in the BCS title game, so I'm forecasting them, so to speak, at #2"). In resume voting, then, West Virginia and Florida are compared side by side, and the voter notes that Florida's resume includes three high-quality wins, while West Virginia's includes none. No question as to who gets ranked higher.
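To illustrate that side-by-side comparison, here's a bare-bones sketch that assumes a deliberately crude definition of a Quality Win: a win over a team that currently owns a winning record. The opponents and records below are hypothetical, not this season's actual results.

```python
# A bare-bones resume comparison. "Quality win" uses a crude,
# hypothetical cutoff: a win over a team with a winning record.

# Hypothetical wins to date: (opponent, opponent wins, opponent losses).
resumes = {
    "Florida": [("Tennessee", 5, 1), ("LSU", 5, 1), ("Kentucky", 5, 2)],
    "West Virginia": [("Marshall", 1, 5), ("Syracuse", 2, 4), ("Rutgers", 2, 4)],
}


def quality_wins(resume):
    # Count wins over opponents who currently hold winning records.
    return sum(1 for _, wins, losses in resume if wins > losses)


for team, resume in resumes.items():
    print(f"{team}: {quality_wins(resume)} quality wins in {len(resume)} wins")
```

Run it and Florida shows three quality wins to West Virginia's zero, which is the whole argument in two lines of output.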

The question for you, the readers, then, is how we should employ our resume methodology. You'll note below my first stab at assigning Quality Wins to each team. You'll also note, as some of you already have, that the current definition of a Quality Win is a bit hazy. So, the community project:

  1. What would an ideal resume voting methodology look like? (i.e. What are its components?)
  2. Given those components, how do we define them?
  3. As we're not creating a strict formula for plug and play, what subjective evaluations are allowed, under which circumstances, and to what end? Should they only serve as tie-breakers? What are the relative values of each component? (The components, for example, might be Quality Wins, Margin of Victory, Quality Losses, and Bad Losses. If those are the components, what is the relative value of each? A rough sketch of one such weighting follows this list.)
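As a starting point for question 3, here's one way the weighting could be made concrete: score each resume as a weighted sum of those example components. The weights below are pure placeholders; choosing them (and deciding whether to formalize this at all) is precisely the community project.

```python
# A resume score as a weighted sum of the example components above.
# The weights are placeholders, not a settled methodology.

WEIGHTS = {
    "quality_wins": 3.0,     # wins over good teams count the most
    "margin": 0.05,          # small credit per point of total victory margin
    "quality_losses": -1.0,  # losses to good teams cost little
    "bad_losses": -4.0,      # losses to bad teams cost the most
}


def resume_score(quality_wins, total_margin, quality_losses, bad_losses):
    components = {
        "quality_wins": quality_wins,
        "margin": total_margin,
        "quality_losses": quality_losses,
        "bad_losses": bad_losses,
    }
    return sum(WEIGHTS[name] * value for name, value in components.items())


# Hypothetical resumes: Florida-like (three quality wins, +60 total
# margin) vs. West Virginia-like (no quality wins, +90 total margin).
print(resume_score(3, 60, 0, 0))  # 12.0
print(resume_score(0, 90, 0, 0))  # 4.5
```

Even this simple version forces answers to the questions above: what counts as a Quality Win, whether margin should be capped, and whether ties go to subjective judgment.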
This is our big offseason community project, so don't feel like we need to nail this issue perfectly overnight. Rather, consider it a work in progress as we strive to get on the frontier of this process. As we do, we'll be in a position to urge others to follow along. And that will, eventually, help the BlogPoll achieve its noble founding mission.

--PB--