As decision day approached, debate raged. Would Indiana make the cut? Could the ACC steal a bid? Would any of the three-loss SEC teams sneak in? So deep was the consternation, so high were the stakes, that conferences took to firing public salvos across the College Football Playoff landscape.
The uproar didn’t subside when the 12-team field was revealed on Dec. 8. It hummed right along through the opening round and into the quarterfinals.
One interested observer watched the entire process unfold in disbelief, wondering why the immensely popular, multi-multi-billion-dollar sport doesn’t seek a better way — a cleaner, more transparent means of selecting its playoff participants.
“It’s astounding to me that there is no objective ranking system in football like we have in basketball,” Ken Pomeroy told the Hotline earlier this week.
“It’s just a bunch of subjective comparisons.”
The man who revolutionized college basketball analytics decades ago with his adjusted efficiency metrics — for the uninitiated, visit KenPom.com — tracks college football as best he can. And like so many fans, Pomeroy is baffled by the CFP selection process.
He even dared to utter that three-letter word college football has tried to forget.
“The BCS wasn’t perfect,” Pomeroy said, “but at least there was some objectivity.”
The Bowl Championship Series picked the championship game participants using a combination of human polls and computer rankings — a mix that left few stakeholders satisfied.
The move to the four-team playoff in 2014 was supposed to improve the selection process with a 13-person committee modeled on the March Madness version. And for much of the CFP’s lifespan, controversy was limited. But the exclusion of undefeated, quarterback-impaired Florida State in 2023 changed that, generating next-level outrage and even threats against members of the committee.
Expansion of the event this season added more layers to the process, more subjective decisions — and more scrutiny. There are seven at-large teams, not four. More teams were on the bubble than in the past. The rankings determined which teams received coveted opening-round byes and which played lucrative home games.
Nothing sparked more debate than whether Indiana, which was 11-1 but played a soft schedule, should make the field ahead of the SEC’s three-loss trio of Alabama, Mississippi and South Carolina.
“The discussion between Indiana and the SEC teams was so maddening: The SEC always gets in, but shouldn’t Indiana have a chance? It’s completely subjective,” Pomeroy said.
And so we ask: Is there a better way?
Should the CFP increase objectivity and marginalize subjectivity in much the same fashion as the NCAA Tournament selection committee?
Should it borrow from basketball’s playbook and create a version of the NET rankings, which act as a sorting tool for the basketball committee by leaning into metrics that are both performance-based and predictive? (NET stands for NCAA Evaluation Tool.)
“Basketball is miles ahead of football, and even we aren’t where we should be,” said Pomeroy, whose efficiency ratings measure points per possession. “Every basketball league in the world uses an objective method to determine its postseason. High schools have wacky formulas, but they all have formulas.”
If creating a football version of the NET were easy, the CFP selection committee would have one at its disposal — or so we’d like to think.
“There isn’t a simple answer,” said Kevin Pauga, an associate athletic director at Michigan State and the creator of the acclaimed KPI (the Kevin Pauga Index) used by the basketball committee to select the 68-team field.
Pauga identified two obstacles, both rooted in the inherent differences between the sports:
— Football has a smaller sample size, with 12 or 13 games played prior to CFP selections compared to at least 32 games for most basketball teams before Selection Sunday.
That means each football game counts for roughly three times as much as a given basketball game. And with a smaller sample size, outlier results carry more weight.
— Perhaps the more challenging obstacle to creating a football version of the NET is the murky way of judging success on the field.
“There are about 70 to 72 possessions for each team in basketball, and you can measure the outcome of every play: You scored or you stopped a team from scoring,” Pauga said.
“If there’s one point awarded per possession (in the computer algorithm), that’s 140 points — that’s real data. And you can adjust based on the location of the game and the quality of the opponent.
“But in football, there are only eight or 10 possessions per game for each team, plus all the plays within each possession. How do you measure (analytically) the yards gained on first down? It’s more difficult to quantify. And are we judging the better team based on total points scored? What if your kicker misses three field goals? Does that mean you are the lesser team?
“The data points are more difficult to compute.”
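For readers unfamiliar with the basketball side, here is a minimal sketch of the points-per-possession math Pauga describes. The game numbers and the opponent adjustment below are hypothetical, offered purely for illustration; the actual NET and KenPom formulas are more involved.

```python
# Minimal sketch of the points-per-possession idea, with made-up numbers.
# The real NET and KenPom adjustments are more involved than this.

def points_per_possession(points, possessions):
    """Raw offensive (or defensive) efficiency for one game."""
    return points / possessions

# Hypothetical game: a team scores 78 points on 70 possessions
# and allows 70 points on 70 possessions.
raw_offense = points_per_possession(78, 70)   # ~1.114 points per possession
raw_defense = points_per_possession(70, 70)   # 1.0 allowed per possession

# Crude opponent adjustment: scoring 1.11 per possession against a defense
# that normally allows only 0.95 is better than the raw number suggests.
LEAGUE_AVG_PPP = 1.00       # hypothetical league-average points per possession
OPPONENT_DEF_PPP = 0.95     # hypothetical opponent defensive efficiency
adjusted_offense = raw_offense * (LEAGUE_AVG_PPP / OPPONENT_DEF_PPP)

print(round(raw_offense, 3), round(adjusted_offense, 3))  # 1.114 1.173
```

Every possession yields a clean, countable outcome — exactly the property football lacks.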
The CFP selection process was designed prior to realignment — before the existence of the 16- or 18-team conferences in which scheduling misses create another layer of complexity.
In the Big Ten, for example, Ohio State played Indiana, Penn State and Oregon. But Indiana, Oregon and Penn State missed each other.
In the SEC, Georgia played Texas, Alabama, Tennessee and Mississippi. But the Longhorns missed the other three.
The real challenge for football analytics in a realignment world, Pauga said, is determining the top two teams — the matchup for the conference championship game. Once the tiebreaker process reaches the third or fourth step, it gets messy.
“There are plenty of teams that are 7-5 that might be better than a team that’s 9-3,” he said. “Sometimes, it’s the luck of the draw on the schedule, and I’m not sure we should reward or penalize teams based on the luck of schedule.
“What matters is how you play against the schedule you have. If you play 12 games and have a 90 percent chance of winning each game, the chance of going undefeated is just 28 percent.
“If you reduce that to an 80 percent chance to win every game, the chance of going undefeated is 7 percent.
“Indiana still went 11-1. If that were easy, more teams would do it.”
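Pauga’s arithmetic checks out. A quick sanity check, using the same win probabilities he cites:

```python
# Chance of going 12-0 when each game is an independent win
# at the probabilities Pauga cites.
print(f"{0.90 ** 12:.1%}")   # 28.2% with a 90 percent chance in every game
print(f"{0.80 ** 12:.1%}")   # 6.9% with an 80 percent chance in every game
```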
The CFP selection committee has a variety of analytics available, including the strength-of-record (SOR) metric mentioned by both Pauga and Pomeroy as particularly useful.
“It measures how hard it is to achieve your record against the schedule you play,” Pomeroy said.
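To make that concrete, here is a toy sketch of the strength-of-record concept: ask how likely a generic benchmark team would be to match a given record against the same 12 opponents. The per-game win probabilities below are invented, and real SOR models are far more sophisticated; the point is simply that a smaller probability means a tougher record to achieve.

```python
# Toy strength-of-record sketch: how likely is a generic benchmark team
# to win at least 11 of these 12 games? Win probabilities are invented.
from itertools import product

win_probs = [0.95, 0.90, 0.85, 0.80, 0.75, 0.70,
             0.70, 0.65, 0.60, 0.55, 0.50, 0.45]
actual_wins = 11

# Sum the probability of every win/loss sequence (2^12 of them) in which
# the benchmark team matches or beats the actual record.
p_match_or_beat = 0.0
for outcome in product((0, 1), repeat=len(win_probs)):
    if sum(outcome) >= actual_wins:
        p = 1.0
        for won, wp in zip(outcome, win_probs):
            p *= wp if won else (1 - wp)
        p_match_or_beat += p

print(f"{p_match_or_beat:.1%}")  # smaller = a tougher record to achieve
```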
But where does SOR fall within the pecking order of tools used by the committee?
Do some members prefer strength-of-schedule?
It’s easy to imagine the retired coaches on the committee — there were four this season — giving little heed to the metrics and relying solely on their eyes.
“We haven’t standardized the narrative,” Pauga said. “How do you quantify it? Where is the line between Indiana at 11-1 and somebody else at 11-2? How do you account for a conference championship game? Should the second loss count the same because it’s a championship? That gets you into campaign season.”
Ah, campaign season: The pinnacle of college football dysfunction in which coaches and executives use the media to stump for their teams, criticize competitors and question the committee’s process.
In early December, days before the CFP reveal, Iowa State athletic director Jamie Pollard and his SMU counterpart, Rick Hart, exchanged barbs on the social media platform X.
The next day, Big 12 commissioner Brett Yormark voiced frustration with the position of his top-ranked teams relative to Boise State.
“The committee continues to show time and time again that they are paying attention to logos versus résumés,” he told reporters.
Yormark added: “In no way should a Group of Five champion be ranked above our champion.”
Naturally, that prompted a response from Mountain West commissioner Gloria Nevarez, whose statement began: “Participation in the College Football Playoff isn’t about entitlement.”
Combine a deeply subjective process with a massive lack of clarity and transparency, and chaos is the inevitable outcome.
Is there a better way?
“There needs to be general agreement on what metrics to focus on,” Pauga offered.
“They definitely have data that can be informative with context. But part of the reason we’re in this position is that the solution is difficult regardless. And that leads to people steering the numbers in their favor.”
Or as Pomeroy framed it: “They haven’t standardized the narrative.”
*** Send suggestions, comments and tips (confidentiality guaranteed) to wilnerhotline@bayareanewsgroup.com or call 408-920-5716
*** Follow me on the social media platform X: @WilnerHotline