Those who were monitoring the site back in the doldrums of February, when the energy and ability of your faithful scribe to write were both low, might remember the post on recruiting, which was none too kind to the salesmen we call recruiting services and the ratings they devise.
It seems to me they're two parts research, two parts guesswork, three parts alchemy and about eight parts hucksterism. ...
And, for his money, C&F would rather have a five-star coach and three-star recruits than a three-star coach and five-star recruits. Mediocrity is a mindset that can be easily taught and even more easily learned.
Well, now we have a post from SMQ -- not, by any means, in response to C&F's criticism, but in response to others like it -- knocking those who would decry the ratings as a crap shoot.
SMQ calls such thinking "philistinism," which C&F takes to mean SMQ believes the critics are all as dumb as horses, and goes on to dissect a study meant to knock recruiting ratings, turning it instead into support for the lists. In short, he shows that a five-star recruit is far more likely to become an All-American than a one-star recruit, or even a two- or three-star recruit.
If the setting was 'random' -- if the rankings were worthless -- every level would show roughly the same 1 in 59 odds of producing an All-American. Three, four and five-star prospects all fared better than that, the top two much better than that. Zero, one and two-stars were not close. ...
But if one of the measures of the "sole purpose" of the guru rankings is their ability to "show a much greater percentage of 5-star recruits making the All-America team than 0-stars," then those rankings succeeded wildly. For predictive purposes, they are generally what they say they are.
So, as they say in Congress, C&F would like permission to revise and extend his remarks.
First of all, C&F disagrees with the idea that rankings should be judged on their ability to "show a much greater percentage of 5-star recruits making the All-America team than 0-stars." Part of that is because being named an All-American doesn't necessarily contribute to the purpose of the game of college football -- or any other competitive game, really -- which is (as "common sense" would tell you) to win.
For example, CBS lists Antoine Cason, a cornerback from Arizona, as an All-American. The AP's linebacker corps includes Jordan Dizon of Colorado.
This is not to dump on these players, their accomplishments, their teams or even the All-American lists themselves. But Cason and Dizon played on teams that just weren't very good, and they ended up All-Americans anyway.
Sure, a team with a lot of All-Americans usually wins a lot of games. But should we look at that the other way? Is it that a team that wins a lot of games -- and gets a lot of positive media attention -- tends to see its players rewarded for that team success with individual All-American slots? (This might be arguing over how many angels can dance on a pinhead, since team and individual accomplishments are hard to separate in football.)
More importantly, though, being named an All-American is almost purely subjective. There are no set standards -- the player with the most rushing yards, for example, isn't necessarily the first-team running back -- and the voting group is not necessarily representative of the college football world.
In that regard, C&F would like to stand by his statement that he'd rather have a good coach and solid recruits than a mediocre coach and great recruits.
But let's get beyond quibbles with the methodology and take a closer look. Yes, five-star recruits have a 1-in-9 chance of being an All-American. But, again, flip that: It means they have an 8-in-9 chance of not being an All-American. So while being a five-star recruit makes you more likely to end up an All-American, it can hardly be called predictive in that regard.
This does not mean that those other eight players aren't good players. In fact, they might be incredibly productive players who simply don't make the All-American team. (Which, incidentally, leads to the other problem with using the All-American lists as a gauge of success for the rankings: Many players who miss the All-American lists are good players who make their teams better -- i.e., more likely to win.)
C&F also has to question whether there aren't some built-in advantages to being a five-star recruit. For example, aren't there some coaches who are more likely to play a five-star recruit earlier, or give him an earlier chance to earn playing time, than the one-star walk-on? That's not to say there's anything wrong with that approach; a lot more recruiting time and scholarship money have been invested in the five-star recruit. But it is an advantage, and one that might or might not be grounded in actual ability.
And as for the team rankings, a team could sign eight five-star players: four who go bust, two who make it as starters but not All-Americans and two who make the All-American list. The rate of All-Americans coming out of the team's five-star recruits, 1-in-4, is better than the national average even for that tier. Whether the class is as strong as the initial team rankings indicated would be another matter.
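For the spreadsheet-inclined, here is a minimal sketch of that back-of-the-envelope math. The eight-signee class is the made-up example above, not real recruiting data, and the 1-in-9 baseline is SMQ's rough figure for five-stars.

```python
# Quick check of the hypothetical class above: eight five-star signees,
# two of whom make an All-American list, measured against a rough
# 1-in-9 national rate for five-star prospects.
from fractions import Fraction

five_star_signees = 8   # 4 busts + 2 ordinary starters + 2 All-Americans
all_americans = 2

class_rate = Fraction(all_americans, five_star_signees)  # 1/4
national_rate = Fraction(1, 9)                           # SMQ's figure for five-stars

print(f"Class hit rate:    {class_rate} ({float(class_rate):.0%})")
print(f"National baseline: {national_rate} ({float(national_rate):.1%})")
print("Class beats the baseline:", class_rate > national_rate)
```

Run it and the class looks great by the All-American yardstick, even with half its five-stars busting -- which is the point about that yardstick.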
The data, though, is pretty clear. There is something to the recruiting rankings, as far as projecting player outcomes is concerned. C&F still isn't sold that there is any automatic connection between highly-rated recruiting classes and championship-winning teams. Coaching still plays a major role. (See: Florida, 2002-04)
But the ratings also don't appear to be as random as C&F argued back in the day. Maybe they're three parts research (up from two), two parts guesswork, three parts alchemy and about seven parts hucksterism (down from eight).