Pre-Season Polls Hate America

Conversations about football polls, especially preseason polls, are cans of worms.  HornBrain opened up such a can of worms the past few days, and I must say that I'm not very good at walking away from cans of worms.  Instead, I like to go fishing with them.  To that end, I'd like to talk briefly about why preseason polls are an inevitably, logically inconsistent scourge that is not only useless in theory, but in practice is also playing a part in ruining college football.  And America!*

What's that?  You want me to tell you how I really feel?  Fair enough.  Will do.  But first we have to briefly look at the different logical methods by which people rank teams.

Resume Ranking vs. Power Ranking vs. Projection Ranking

I've talked about this endlessly before in different contexts, but most thoroughly in Part 1 of my Flex Playoff proposal from a year and a half ago (my how time flies).  To quote myself in explaining the two:

  • Resume Rankers: believe that the notion of "best" is based on what you have done or not done so far in relation to what everyone else has done or not done so far.  This can cause some weird-looking rankings at the beginning of the season, but logical ones towards the end.
  • Power Pollsters: believe that the notion of "best" is a subjective analysis based both on what teams have done and the pollster's own opinions on how good each team is according to what cannot be captured in the win-loss column and the margin of victory totals. 
    These diverging methods of ranking teams lead to different ways of determining who is the "best" team, and thus different ways of determining which system is the most ideal for crowning this "best" team the national champion.

    The resume ranker looks at the results of the regular season and determines who the best teams are based on those results.  The regular season results don't just play heavily into the rankings; they ARE the rankings.

    The power pollster of course pays attention to the regular season results, but then adds in his own subjective analysis of who he THINKS is best.

I didn't mention the "Projection Ranking" theory in that article because that column had to do with end-of-year rankings, and projection rankings don't apply to end-of-year rankings.  Essentially, this theory of ranking teams states that you look at how good a team is (either based on resume or "power") and also look at their upcoming schedule to determine how you think those teams will end up ranked at the end of the season.  This is most prevalent early in the season, and for obvious reasons is non-existent at the end of the season: it's either a power ranking or a resume ranking at the end, because there are no actual games remaining of which to project the results.

Which is Better Generally?

My personal belief (and, I believe, the dominant majority belief, at least in theory) is that resume ranking is the preferred method.  Resume ranking is not entirely objective (nothing ever can be, given the nature of human beings), but it strives for ranking based on hard data: the results of actual games that have actually been played.  Based on what has happened so far, who has had the best season?

Power Rankings take into account the results so far, but instead of leaving it at that, they take that information and layer on top of it a level of complete subjectivity in determining which team is "better" than which other team - which presumably means which team would beat which other team in an imaginary game played completely within the head of each individual power pollster.  Thus, while two different resume rankers may rank teams differently, at least they are using the same reality-based information as the basis of their rankings.  Power pollsters base their rankings in part on completely different information from one another, because that information is all imaginary.  And in the end, they don't care who had the best season.  The best season is irrelevant to them.  The only thing that matters is who would beat whom if they played tomorrow.

Projection Ranking is, to my mind, a worse folly than power ranking because it takes the problem inherent in power ranking and doubles down.  Not only is a projection ranker trying to predict what will happen on the field in a series of games that have not been played, but he further attempts to predict how the pollsters as a whole will react to the imaginary results of those as-yet unplayed games.  If you're counting at home, that's two levels of utter speculation about the future: one more than power ranking and two more than resume ranking.


Why Preseason (and Early-Season) Rankings are Useless in Theory

Obviously, one cannot resume rank teams in the preseason.  And it's a pretty useless endeavor for the first few weeks of the season because there are hardly any results to go off of.  If you attempt to resume rank for the first few weeks of the year but keep your rankings looking like something approaching normal, you have to allow for some cognitive dissonance as you let some power polling seep into your rankings.

And the very fact that you have no results makes it virtually impossible to make an accurate power poll as well.  I think power polls are fundamentally flawed for the reasons outlined above and elsewhere, but some methods are better than others, and a good power pollster will create a ranking by extrapolating the results so far into a list of who would beat whom if they played tomorrow.  A bad power pollster will base his determination of "who would beat whom" on conjecture, personal beliefs, a team's propensity in previous years to choke (or shine) in big games, etc.  The problem with pre-season power poll rankings is that they are by definition the "bad" kind.  You rely exclusively on how the team did last year, which players were lost, how a newspaper reporter says a player looks in spring practice, etc.  You do this because you have nothing else to rely on.  Nothing has happened!

So essentially, pre-season rankings turn resume rankers into power pollsters, and turn good power pollsters into bad ones.  That sounds like a recipe for disaster before we even get to the fact that some pre-season pollsters like to look at the upcoming schedule of each team to figure out how they will end the season and where they will be ranked as a result.  They are doing this without any data on either the team they're trying to rank or the 12 other teams that team will play in the future.  As the season goes along, at the very least a projection ranker's method is based in part on actual results.  What kills me is that the people who do this projection ranking often claim they are looking at the schedules to provide some sort of "context" or "reality" to power rankings that were themselves necessitated by the fact that resume rankings are impossible.

To recap the effect pre-season polls have on resume rankers:

Resume Ranker --> good Power Pollster --> bad Power Pollster --> Projection Ranker

All preseason ranking does is systematically increase both the level of speculation and the influence of one's imagination on the ranking system, even for those of us who are staunch resume rankers.  There's nothing else we can do.

Why Preseason Rankings are Bad for College Football

For one thing, forcing things like power ranking and projection ranking on everyone (including on those who would prefer to rank based on resume) implicitly endorses and reinforces the legitimacy and preferability of those two methods, which I obviously think is counterproductive.  This matters because the two teams that make the BCS national championship game are chosen based on their rankings.  I firmly believe that the two teams chosen for such a de facto playoff (or occasionally more than two, if my Flex Playoff proposal ever takes hold...which of course it won't) should be chosen based on who has had the best season on the whole.  Not "who would beat whom according to the imaginary game pollsters just played in their heads."  The more legitimacy power polling has early in the season, the more that legitimacy is going to seep into late-season polls, and the more likely it becomes that pollsters' imaginations will play a part in determining who plays in the national championship game.

Second, pollsters are subject to a severe case of inertia.  Wherever a team is ranked at the beginning of the year, it's going to stay pretty much right there until it loses.  So when OU and USC started the season 1-2 and Auburn started at 19 or something, and all three went undefeated, Auburn had no chance of overtaking USC and OU because the latter two had been entrenched at 1-2 since before the season even started.  This happens even with people who claim to be resume ranking (and for the record, if someone is prone to inertia, they are not purely resume ranking, despite claims to the contrary; liars!).

This is a problem that isn't going to be completely fixed by eliminating pre-season rankings, but if you wait until mid-season or, say, after the end of non-conference play to begin ranking teams, then at least the spots in the poll in which certain teams will be entrenched are based on something other than complete conjecture and imagination.  And beyond that, it will encourage teams to schedule better non-conference games.  As it is, certain teams are almost always ranked highly in the pre-season because they have lots of talent.  There is no incentive for those teams to schedule tough non-conference games, because as long as they win all of their non-conference games, they will remain in the same spot in the polls.  But if those rankings don't begin until after the non-conference games are played, the rankings will be based on the results thus far.  And an undefeated team with a win over a good team should be (and will be, under a resume-ranking method) ranked higher than an undefeated team with wins over nothing but patsies.  Essentially, the leaping-off point is based at least somewhat in reality.  And that has to be an improvement.

Conclusion

[Image: a graph of Heisenberg's uncertainty principle.  Caption: "So this is quite like college football."]

I'm not saying that there's no point in discussing who you think the better team is in the pre-season or during the first few weeks of the year.  That's the fun of all of this, and talking about it ad nauseam is one of the reasons we're all here.  I'm very happy to have that discussion.  But when that fun gets institutionalized, and rankings done by certain people actually affect how the season plays out, we have to be careful to think about how and why we are ranking these teams.  It's an extremely simplified Heisenberg Uncertainty Principle**: there is no absolute value that can be given to rank these teams because there is no such thing as definite position.  And because the very act of measuring these teams affects their position within the rankings, we must be careful to measure them in the most accurate and least disturbing manner possible.  Because power ranking and projection ranking are based on things other than facts, the method of measurement that disturbs the position of these teams the least is the one that is most objectively based on facts -- resume ranking.  And there is no possible way to be more objective in ranking teams than by doing it based only on what has actually occurred, something the preseason poll doesn't allow you to do.

*Only true if you believe college football is the majority component in the fabric of the American way of life.  As I do.

**Apologies to any quantum physicists for totally simplifying and/or somewhat misconstruing the uncertainty principle.