As we continue our discussion about preseason rankings, two differing philosophies underlie most people's thinking. Either (1) preseason rankings should be an attempt to project the final -actual- rankings of teams (see: Feldman, Bruce), or (2) preseason rankings should just be an attempt to list the best teams, schedule and projected results be damned.
Today we've set up a formal point, counter-point debate between yours truly and College Football Resource. Before you read on, understand that we're each taking one side and arguing for it completely. We're each playing Devil's Advocate for the other, and these arguments do not necessarily reflect what I, or CFR, think. We're simply representing a side of the argument. With those caveats in mind, let's get started.
Burnt Orange Nation: This should be fun, CFR. Now, I'm going to start this party by laying out a premise for my side, and I'll let you take it from there.
We both agree that preseason rankings are rather meaningless, so let's not worry about that today. But since everyone does it, including those in the Blog Poll, we might as well talk out the differing approaches.
I'm going to argue today that we should be looking at the schedules, trying to project wins and losses, and doing our best to foresee how the final season rankings are going to look. Here's the thing: at the end of the year, the BCS standings are all that matter. And those standings invariably account for wins and losses. It doesn't matter if Georgia, by the end of the year, is the third best team in my mind. If they've lost their first four games, they're not going to be ranked #3. Why shouldn't I be projecting the cumulative result of their season?
College Football Resource: Peter, thank you for inviting me to be a part of this dialogue.
I think the two approaches to rankings differ thusly: the first approach, which you advocate, answers the question "based on their schedule, where should team X end up when the season is finished?" The second approach, which I'm advocating here, answers the question "how good should team X be this year?"
The advocates of both approaches must do some projecting, but the first looks outward, projecting from the present to the future, while the second does the inverse, working backwards from the future to the present.
I'll toss this discussion back to you by asking the following:
Are we setting ourselves up for a fall by creating preseason rankings that assess teams not by the "content of their character" but by what their schedule says they should do?
Burnt Orange Nation: For starters, I'm not sure that they're mutually exclusive. In other words, I don't know that you can't, and shouldn't, do both.
The real tricky part of this whole matter is that every team doesn't play each other. Given that, two things seem to me to be true. First, there's a hefty amount of subjective evaluation that you have to live with. But second, because you don't play everyone, you have to give - perhaps more than you or I would prefer in an ideal world - a substantial amount of weight to wins and losses.
Let's get back to my Georgia example for a moment. They strike me as a perfect example of why trying to rank teams based on who's "best" may be problematic. It may take a full season for the quarterback situation to resolve itself. Now, by the end of the year, with Stafford comfortable at quarterback, I can see them being one of the Top 10 teams in America. And yet, let's say those growing pains include four losses - at South Carolina, Tennessee, Florida, and Auburn.
This is all hypothetical, but if I think Georgia's going to be a Top 10 team by the time they beat Georgia Tech and win a bowl game, but don't think they're going to have a season that permits ranking them in the Top 10, what do I do?
We're talking about preseason rankings, remember. Once we get into the meat of the season, we'll have so much more to work with. But let's run with this scenario for the fun of this argument. Your thoughts?
College Football Resource: I strongly agree that there's a tremendous amount of subjective evaluation that we must live with. I welcome it, in fact. But I'd rather we use that realization to release us from the strict burdens of wins and losses.
I could drive around in a high end luxury car if I wanted to, but it wouldn't necessarily mean I was a wealthy man. Just the same, a team can be rolling around with an 11-1 or 13-0 record, but that doesn't necessarily make them a great team. Leaning too heavily on objective measures hinders our ability to evaluate teams. As you said, you don't play everyone, and that creates great confusion between one's performance (record) and actual aptitude (ranking).
Let's take a look at your Georgia example. I think we'd have to evaluate their expected performance in those losses: asking whether they were due to a raw quarterback finding his way, whether they were close games or significant defeats, whether the team was doing things that top 10 teams do, like stopping the run and limiting turnovers, etc. There are things to look at to help us make a reasonable subjective evaluation of their true worth.
If at some point that team had caught stride - say, settling on Matt Stafford, finding some offensive consistency, and starting to look less like a top 25 team and more like a top 5-10 team - I'd have no difficulty whatsoever considering them for a top 10 preseason ranking.
Part of the holdup with rankings is that people often argue about what teams "deserve". That is, if you've lost three games, you don't deserve to be in the top 10. Nobody deserves anything other than to be fairly and thoroughly evaluated by those considering them for the rankings. That said, a team that has lost three games will suffer the consequences of those losses even from the strict "best teams" proponents. Any rational person realizes that losses matter and cannot be completely explained away. That's the balance between the pure win-loss method and the best teams method.
An example that comes to mind for choosing to rank teams by aptitude is last year's Tennessee team. I remember looking at a handful of preseason polls and magazines that all had the Volunteers in the top 2-5. I was shocked. I figured the Vols might win 8-9 games and look respectable, but I didn't think they'd be any good. As such, they were not in my top 15, and that confused many people.
Now, nobody knew they'd be that bad last year, but even if they had managed a more satisfactory record, my method of ranking them was the more realistic one. I saw a team with a shaky quarterback situation, an overrated backfield, and distracted players (several offseason arrests). They still had the talent and the necessary soft spots in the schedule to get enough wins to be in the top 10, but I wasn't convinced they were better than 10-15 other teams. Unlike other parts of my preseason rankings, that assessment rang true.
Burnt Orange Nation: I'm not sure we quite settled this matter, CFR, but it's certainly been interesting. Thank you for representing one side of this debate; I certainly enjoyed playing devil's advocate. I think the proper approach probably takes a bit from each: you want to rank your teams in a way that reflects who you think will be best by year's end, but you've got to have some sort of reward/penalty for a season's cumulative results.
Ultimately, while preseason rankings don't matter too much, regular readers of both our sites know that the college football narrative matters in the BCS system. And as Auburn's 2004 season illustrated: where you start the season has a lot to do with how you finish it.
And as the 2006 season gets closer and closer, I'm inclined to believe we could be on the brink of our first Full Scale BCS Disaster. Want a sneak preview of the BCS commissioners' worst nightmare? Have a gander. What's scary/delightful is that it's not -that- farfetched. And there are easily a dozen other, less wild, scenarios that could spark riots and couch burnings for years to come. Stay tuned. Just six weeks away, boys and girls.
--PB--