BlogPoll Voter Methodology: A Basic Analysis

Those of you who were around during the doldrums of the summer know that I became pretty obsessed with sorting through polling, rankings, and the like. We studied correlations between end-of-year and preseason rankings, whether strong finishes to a previous season were predictive of next season performance, and much more.

This season, we've gone through the ranking process together here at BON each week, including our recent look at the merits and demerits of Vegas-style oddsmaker rankings. And, recently, I laid out a series of questions for the ever-thoughtful SMQ. As was the hope, SMQ wrote an excellent summary article on the various methods of ranking college football teams, with a set of pros and cons for each method. I engaged SMQ, and others, in the comment section below the article. Before you read on, I'd suggest that article, and the comments that follow it, as a useful primer on the analysis below.

With this discussion in peak form, I thought we'd keep the momentum going and take a closer look at the differences that exist, at this point in the season, between: 1) SMQ's "resume" method, 2) BON's mishmash of resume analysis, subjective evaluation, and future forecasting, and 3) the actual BlogPoll.

It turns out that the discrepancies are minimal - far smaller than I would have expected, anyway. Below find a three-column chart with the Week 7 ballots of SMQ, BON, and the final BlogPoll rankings. Next to BON's rankings are the delta scores between its ballot and SMQ's. Next to the BlogPoll rankings are two sets of delta scores: one against SMQ's ballot, and one against BON's.
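For readers who want the mechanics spelled out, a delta score here is just the gap between a team's position on one ballot and its position on another. Below is a minimal Python sketch of that computation, using hypothetical six-team ballots rather than the actual Week 7 data; the sign convention (positive when the second ballot ranks a team higher) is my assumption, one reasonable reading of the chart.

```python
# Minimal sketch of a ballot delta-score computation. The ballots below
# are hypothetical placeholders, not the actual Week 7 data.

def delta_scores(reference, ballot):
    """For each team on `ballot`, return reference_rank - ballot_rank.

    Under this convention a positive delta means `ballot` ranks the team
    higher (closer to #1) than `reference` does; a team absent from
    `reference` gets None.
    """
    ref_rank = {team: i + 1 for i, team in enumerate(reference)}
    return {
        team: (ref_rank[team] - (i + 1)) if team in ref_rank else None
        for i, team in enumerate(ballot)
    }

# Hypothetical ballots, for illustration only.
smq_ballot = ["Ohio State", "Michigan", "USC", "Auburn", "Florida", "West Virginia"]
bon_ballot = ["Ohio State", "Michigan", "West Virginia", "USC", "Florida", "Auburn"]

for team, delta in delta_scores(smq_ballot, bon_ballot).items():
    label = f"{delta:+d}" if delta is not None else "not on SMQ ballot"
    print(f"{team}: {label}")
```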

Analysis 1: BON versus SMQ

As I continue my quest to better understand the best way(s?) to rank college football teams, I thought this would be an interesting side-by-side to start with. What can I learn about our own methodology? About SMQ's? We'll start the analysis at the team level before moving into methodological conclusions.

West Virginia (+6) and Auburn (-7): SMQ and I have essentially swapped Auburn and West Virginia, in a way that's instructive about the differences between our methodologies. In SMQ's analysis, the Auburn resume trumps that of West Virginia, and significantly so. I don't disagree with that (let's call it what it is) fact. Where we disagree, of course, is how to rank the two, and in my own methodology the Tigers look like a team that is 6-1, should be 5-2, and is fortunate not to be 4-3. We concede that wins are wins - which is why Auburn earns a very respectable #11 BON ranking - but we don't think the Auburn offense would have a prayer of keeping up with what would surely be a point-producing, Razorback-mirroring West Virginia rushing game. SMQ's methodology disallows such speculation. Thus, the difference.

Texas (+5) and Florida (-4): Another virtual swap for SMQ and BON. Florida's resume is clearly better - also a fact - but we see another not-so-great Florida offense and conclude Texas is the better team. We're also a Texas blog, and this vote is probably biased. That illuminates another difference in methodology, though: ours allows bias to creep in far more than SMQ's does. Where SMQ sees a potential drawback, we see a potential value - flexibility.

Arkansas (+4): The difference between the two evaluations of Arkansas, relatively small as it is, pretty well captures the difference between the two methodologies. We see an improving (in health and effectiveness) Arkansas team that would give USC far more trouble this week than it did in Week 1. SMQ sees a nasty loss on the resume and some early-season squeak-by wins. We saw them, too, and aren't getting too carried away, but we're giving the Razorbacks more credit than SMQ's methodology allows. (We also note, in our weekly comments, that we worry about having Arkansas overrated. SMQ need not make any such explanation.)

Conclusions: We could go through further examples, but the differences between the two methodologies are, I think, sufficiently illuminated by the above. One of the key words you'll see in the preceding examples is "allows," and that gets at the crux of the matter. SMQ's methodology varies significantly from our own in the degree of freedom the voter is allowed. The resume is the resume, no matter what the voter's subjective experience of watching a given team. SMQ's methodology is not without analysis, nor without room for interpretation - there is simply less of both.

Analysis 2: All Three Ballots

Is anyone else surprised by how much agreement there is among the three ballots? There are differences, certainly, but the similarities are greater than I'd expect. The top three spots are identical, and there is great consensus on Tennessee, Clemson, Georgia Tech, Oregon, Nebraska, Texas A&M, Pittsburgh, and Louisville. There is ballpark consensus on nearly everyone else.

We don't know which method each voter in the BlogPoll employs. A good number of the voters do offer a methodological explanation of their ballots, but I, for one, certainly don't have the time to keep track of who's voting how, and why.

Still, that won't keep me from hypothesizing that seven or eight weeks into the season, non-SMQ Method voters tend to shed more and more of their subjective hunches about teams in favor of resume analysis. Not completely, clearly, but the amount of agreement suggests that everyone is, very generally speaking, doing the same thing. Outlier ballots are pretty well canceled out, leaving a general consensus around most teams. That general consensus is - and perhaps we shouldn't be surprised - quite well correlated with the resume methodology employed by SMQ.
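If you wanted to put a number on "quite well correlated," a Spearman rank correlation between two ballots is one natural yardstick. The sketch below is illustrative only: the spearman helper and the re-ranking over common teams are my own framing, not anything computed in this post, and in practice the inputs would be the actual Week 7 ballots.

```python
# Illustrative sketch: Spearman's rank correlation between two ballots,
# restricted to the teams that appear on both. Not the article's method,
# just one standard way to quantify ballot agreement.

def spearman(ballot_a, ballot_b):
    """Spearman's rho over teams common to both ballots (assumes no ties)."""
    common = [t for t in ballot_a if t in ballot_b]
    n = len(common)
    if n < 2:
        return None
    # Re-rank within the common subset so the classic formula applies.
    rank_a = {t: i + 1 for i, t in enumerate(common)}
    rank_b = {t: i + 1 for i, t in enumerate(sorted(common, key=ballot_b.index))}
    d_squared = sum((rank_a[t] - rank_b[t]) ** 2 for t in common)
    return 1 - (6 * d_squared) / (n * (n**2 - 1))

# Sanity checks: identical ballots give rho = 1.0; a full reversal gives -1.0.
print(spearman(["A", "B", "C", "D"], ["A", "B", "C", "D"]))  # 1.0
print(spearman(["A", "B", "C", "D"], ["D", "C", "B", "A"]))  # -1.0
```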

Conclude from that what you will, but when SMQ writes in his introduction to this week's BlogPoll ballot that his "resume" analysis is starting to spread, he appears to be far more right than he realizes.

--PB--