Texas Longhorns Open At No. 15 In AP Top 25


[Update]: If you're wondering how likely it is to get the first 15 spots exactly the same in two preseason polls without cheating or bias, click through for my analysis. (Spoiler: it's pretty unlikely) -Horn Brain

Oh, those copycat writers. The Texas Longhorns debuted at no. 15 in the preseason AP Top 25, exactly the same ranking as the one bestowed upon the 'Horns by the coaches several weeks ago.

For a team with a high ceiling but a relatively low floor, the ranking by the coaches seemed like a fair assessment at the time, and it no doubt remains so several weeks later. Nothing too disastrous has happened to the football team since then, aside from multiple injury scares that should not be too significant -- as long as new kicker Anthony Fera can get his groin injury healed, that is.

Head after the jump for the full rankings.

RANK  TEAM (FIRST-PLACE VOTES)  2011 RECORD  POINTS  PREV. RANK (FINAL 2011)
1 Southern Cal (25) 10-2 1,445 6
2 Alabama (17) 12-1 1,411 1
3 LSU (16) 13-1 1,402 2
4 Oklahoma (1) 10-3 1,286 16
5 Oregon 12-2 1,274 4
6 Georgia 10-4 1,107 19
7 Florida St. 9-4 1,093 23
8 Michigan (1) 11-2 1,000 12
9 South Carolina 11-2 994 9
10 Arkansas 11-2 963 5
11 West Virginia 10-3 856 17
12 Wisconsin 11-3 838 10
13 Michigan St. 11-3 742 11
14 Clemson 10-4 615 22
15 Texas 8-5 569 NR
16 Virginia Tech 11-3 548 21
17 Nebraska 9-4 485 24
18 Ohio St. 6-7 474 NR
19 Oklahoma St. 12-1 430 3
20 TCU 11-2 397 14
21 Stanford 11-2 383 7
22 Kansas St. 10-3 300 15
23 Florida 7-6 214 NR
24 Boise St. 12-1 212 8
25 Louisville 7-6 105 NR


  • One notable difference is that USC is no. 1 by the writers, while the coaches ranked the Trojans behind both LSU and Alabama. Who's right, the coaches or the writers?
  • Kansas State didn't get much respect from the coaches, coming in at no. 21, and got slightly less respect from the writers, who had the Wildcats one spot lower.
  • Who was the idiot who gave Michigan a no. 1 vote?
  • Not much else that really stands out, although it is 1:30 pm CST here in Austin, Texas, and I'm receiving word from some very high-placed sources in the Texas administration who are saying that OU does in fact still suck. I will work to confirm.

Horn Brain:

I Can't Prove It, But...

... They cheated. I know they cheated.

You see, we engineers often struggle with the word "and". Take NASA's landing of the Curiosity rover on Mars this month (I will never stop plugging this. Woo, Caltech and JPL!): what was amazing was not that (1) the hypersonic guided entry worked, or (2) that the parachute opened and inflated on time, or (3) that the radar was able to find a suitable landing site, or (4) that the sky crane control scheme successfully dangled a couple billion dollars' worth of nuclear-powered robot from a cable before gently depositing it on the surface of another planet with no human intervention. The amazing thing is that 1, 2, 3, and 4 all worked, along with effectively countless other intermediate steps, each one cutting the overall probability of success by its own fraction of reliability.
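To make the compounding concrete, here's a minimal sketch. The step reliabilities below are made-up illustrative numbers, not anything from NASA; the point is just that even very reliable steps multiply into a noticeably smaller overall probability of success.

```python
# Toy illustration of compounding reliabilities: the overall probability of
# success is the product of every step's individual reliability.
# These numbers are hypothetical, purely for illustration.
step_reliabilities = [0.99] * 20 + [0.95] * 5

overall = 1.0
for r in step_reliabilities:
    overall *= r

print(f"25 steps, each 95-99% reliable -> overall success ~ {overall:.2f}")  # ~0.63
```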

With that in mind, hopefully it makes more sense to the English majors out there (sorry, Wescott, had to pick someone) that, to an engineer, the idea of two completely different populations of voters, with different biases and different access to information, managing to exactly match their first 15 selections (1 and 2 and 3 and...) without a single game ever being played sets off every one of my BS detectors. The only difference is explained by the departure of The Honey Badger from LSU in the interim. To investigate, I looked all the way back to this graph, showing an estimate for the average uncertainty in a ranking from the BCS (you may remember it from my first post on BON).

[Chart: estimated standard deviation of BCS ranking positions, with quadratic fit]

Now, using the quadratic fit to the data (which looks pretty good, if I may brag), I made a quick estimate of the probability that an independent poll would agree on the Xth ranking as 1/(standard deviation of the Xth ranking). You can quibble with this, but it's just meant to be a ballpark estimate. The probability that all of the top 15 match is then the product of those probabilities for X = 1 to X = 15. This gives us about 5 * 10^-7, or odds of about 1 in 2 million.
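For anyone who wants to play with the numbers, here's a minimal sketch of that back-of-the-envelope calculation. The quadratic coefficients below are illustrative placeholders standing in for the actual fit to the chart above, so the exact output will differ from the 1-in-2-million figure depending on what you plug in.

```python
import numpy as np

# Hypothetical coefficients for the quadratic fit sigma(X) = a*X^2 + b*X + c,
# where sigma(X) is the estimated standard deviation (in ranking positions)
# of the X-th ranking. Placeholders only -- not the actual fit from the chart.
a, b, c = 0.005, 0.2, 1.0

def sigma(rank):
    """Estimated spread of the rank-th position, from the quadratic fit."""
    return a * rank**2 + b * rank + c

def p_match(rank):
    """Ballpark probability that an independent poll agrees on this position,
    using the 1 / sigma(rank) estimate described above (capped at 1)."""
    return min(1.0, 1.0 / sigma(rank))

# Probability that two independent polls agree on all of the top 15 spots:
# the product of the individual agreement probabilities.
p_top15 = np.prod([p_match(x) for x in range(1, 16)])

print(f"P(top 15 identical) ~ {p_top15:.1e}")
print(f"Odds ~ 1 in {1 / p_top15:,.0f}")
```

Each additional position multiplies in another factor less than one, which is why the product collapses so quickly even though any single position matching is not all that surprising.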

The obvious conclusion is that the polls are not independent: the poll that comes out first sets an expectation, and, except for a few intentional disputes, subsequent polls largely imitate their predecessors. People just aren't good at handling all of this data and making consistent deductions from it; the tree of consequences that must be searched is just too complex to find all of the inconsistencies in an individual's rankings, so we assume that what we've seen before is mostly right in an effort to ignore the fact that we have no hope of solving the problem.

The big problem with this is that it makes preseason polls matter. We see this perpetually in football, where Teams A, B, and C are all undefeated or have similar records, but Teams A and B are ranked above C because C started off low in the preseason polls while A and B were expected to be title contenders. This "ranking inertia" is considered a problem by most thinking persons, but it is actually programmed into Richard Billingsley's horrible computer ranking intentionally. As near as I have been able to tell, this is because his brain has been almost completely replaced by an organ of similar structure, but whose sole function is to provide him with an endless supply of self-assuredness in the face of being proven over and over again to be a complete nitwit.

When the new format finally arrives involving a selection committee, I hope that the committee studies these effects and encourages its members to account for this subconscious bias in their decisions. That, however, would be something that I would do, and therefore will undoubtedly turn out to be highly unpopular.