Hubert Davis Catch-all

  • Thread starter: LeoBloom
  • Replies: 3K
  • Views: 45K
yes. the crowning achievement for ACC basketball since time immemorial
Early in the history of the league, people complained that the conference's relative lack of postseason success was due to fatigue from the tournament. That's why we were one of the few conferences (maybe the only one, I don't remember) to still have one into the '70s.
 
ok, you guys are right. The ACC tournament is meaningless (Roy seemed to agree), we are in great shape with recruiting (talent wants to come here), we play the portal and NIL like a fiddle and on court results are trending up. LFG!
 
ok, you guys are right. The ACC tournament is meaningless (Roy seemed to agree), we are in great shape with recruiting (talent wants to come here), we play the portal and NIL like a fiddle and on court results are trending up. LFG!
It's not what it was. What do they call it? An end-of-season cocktail party?
 
You're not making any sense. NCAAT seed factors in record too. They're not independent of each other. The NCAAT seed is, by definition, a more comprehensive analysis. You're losing yourself in hypotheticals and not focusing on what we're actually talking about.
Comprehensive analysis doesn't mean good.

Remember the old RPI formula? I think the Committee used it for quite a while. IIRC the RPI was worse than subjective impression. It was a very silly metric. So team record + conference > RPI. Is that true today? It's surely better than it was, but my point is that including analysis is only helpful if the analysis is good and it's bringing in useful information.
 
Remember the old RPI formula? I think the Committee used it for quite a while. IIRC the RPI was worse than subjective impression. It was a very silly metric.
RPI was a huge step forward when it was introduced, despite not being terribly advanced. It was later surpassed by other advanced metrics (first Sagarin, then others), but it was the best thing available when it first came out.
 
RPI was a huge step forward when it was introduced, despite not being terribly advanced. It was later surpassed by other advanced metrics (first Sagarin, then others), but it was the best thing available when it first came out.
Only until schools figured out how to game it. There was a significant period of time when it added negative value.
 
RPI was a huge step forward when it was introduced, despite not being terribly advanced. It was later surpassed by other advanced metrics (first Sagarin, then others), but it was the best thing available when it first came out.
I just asked GPT and it pointed me to this 538 analysis showing that **preseason polls** were better predictors than the RPI.

 
Comprehensive analysis doesn't mean good.

Remember the old RPI formula? I think the Committee used it for quite a while. IIRC the RPI was worse than subjective impression. It was a very silly metric. So team record + conference > RPI. Is that true today? It's surely better than it was, but my point is that including analysis is only helpful if the analysis is good and it's bringing in useful information.
Why are you talking about RPI? I never brought up RPI. We're talking about NCAAT seeds, which are arrived at by a committee factoring in numerous different data inputs about a team - their record, their ranking in various efficiency-based and resume-based metrics, their good wins and bad losses, etc. - and ranking them compared to all the other teams in the country. Which, very clearly, will give you a better picture of how successful a team was in the regular season than looking at their record alone. It seems like you're working extremely hard to avoid admitting this very simple point by throwing up all these other hypotheticals and random other comparisons.

BTW - the RPI is not, and never was, "comprehensive." In fact the whole problem with RPI was that it wasn't comprehensive enough - it only factored in wins and losses, not margin, and initially it didn't factor in home vs away vs neutral (it was later adjusted to account for that). But at no point was it "worse" than looking at a team's record alone.
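
For reference (and assuming I'm remembering the commonly cited weights right), the classic formula was 25% team winning percentage, 50% opponents' winning percentage, and 25% opponents' opponents' winning percentage. Here's a rough sketch in Python with made-up game data, ignoring the later home/road weighting adjustment:

```python
# Sketch of the classic (pre-adjustment) RPI:
#   RPI = 0.25 * WP + 0.50 * OWP + 0.25 * OOWP
# Games are (winner, loser) pairs; team names and results are made up.

def winning_pct(team, results, exclude=None):
    """Win fraction for `team`, optionally ignoring games involving `exclude`."""
    games = [(w, l) for (w, l) in results if team in (w, l) and exclude not in (w, l)]
    if not games:
        return 0.0
    wins = sum(1 for (w, _) in games if w == team)
    return wins / len(games)

def rpi(team, results):
    opponents = [w if l == team else l for (w, l) in results if team in (w, l)]
    wp = winning_pct(team, results)
    # OWP: opponents' winning pct, excluding their games against `team`
    owp = sum(winning_pct(o, results, exclude=team) for o in opponents) / len(opponents)
    # OOWP: average of each opponent's own OWP
    oowp = 0.0
    for o in opponents:
        o_opps = [w if l == o else l for (w, l) in results if o in (w, l)]
        oowp += sum(winning_pct(oo, results, exclude=o) for oo in o_opps) / len(o_opps)
    oowp /= len(opponents)
    return 0.25 * wp + 0.50 * owp + 0.25 * oowp

# Tiny made-up season
results = [("A", "B"), ("A", "C"), ("B", "C"), ("C", "D")]
```

Note what's absent: no margin of victory anywhere, and every game counts the same regardless of venue, which is exactly the criticism above.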
 
I just asked GPT and it pointed me to this 538 analysis showing that **preseason polls** were better predictors than the RPI.

Hey super, did you see this line from the article?

"In addition, the RPI is a poor predictor because it restricts itself to wins and losses. More accurate methods use teams’ margin of victory or points per possession to make rankings and predictions. These approaches do a better job of stripping away the noise built into wins and losses, as a team’s record can look very different depending on the outcomes of a few fluke buzzer-beaters or blown calls."

Gee, it's almost like it is very self-evident that looking at a team's record alone is not a good way to determine how good a season it had.
 
I just asked GPT and it pointed me to this 538 analysis showing that **preseason polls** were better predictors than the RPI.

Yes, for evaluating 25-35 teams who are ranked and/or received votes.

For the other 300+ teams, preseason polls provide no data.
 
Hey super, did you see this line from the article?

"In addition, the RPI is a poor predictor because it restricts itself to wins and losses. More accurate methods use teams’ margin of victory or points per possession to make rankings and predictions. These approaches do a better job of stripping away the noise built into wins and losses, as a team’s record can look very different depending on the outcomes of a few fluke buzzer-beaters or blown calls."

Gee, it's almost like it is very self-evident that looking at a team's record alone is not a good way to determine how good a season it had.
I was using the RPI as an obvious example of how more data doesn't mean better. How formulaic data doesn't mean better. And thus, NCAAT seeds aren't always better (especially back when they were based on RPI).

I've never said that record is better than NCAAT. I've never said it's as good. I said it's not as obvious as you think. Record + conference is a heuristic. It's a very loose heuristic -- easy to apply, not that informative. But sometimes heuristics perform better than you might think, and better than alternatives that appear better.

To take one example of a simple, pretty good heuristic: if you want to predict the height of a white child, double their height at age 2. Simple, easy. Pretty accurate. Not as accurate as other, more detailed methods but a lot more accurate than a lot of things. Definitely more accurate than parents' height or even family height.

I don't know how predictive NCAAT seeds are. Judging by the number of upsets, in the recent past they seemed not to be all that predictive (keeping in mind, of course, that the benchmark is far short of 100%).

I very rarely "make no sense." If you think I'm making no sense, then it is far likelier that you're missing something or misunderstanding my statement. I am not error-proof but making no sense has never been my calling card.
 
Yes, for evaluating 25-35 teams who are ranked and/or received votes.

For the other 300+ teams, preseason polls provide no data.
Yes they do. They provide the data that the team isn't estimated to be in the top 30. I have no idea whether the RPI or Sagarin or any other model is good at distinguishing the #90 team from the #100 team. I've never cared about that and I doubt anyone else does either.
 
Yes they do. They provide the data that the team isn't estimated to be in the top 30. I have no idea whether the RPI or Sagarin or any other model is good at distinguishing the #90 team from the #100 team. I've never cared about that and I doubt anyone else does either.
That's a datum.
 
I was using the RPI as an obvious example of how more data doesn't mean better. How formulaic data doesn't mean better. And thus, NCAAT seeds aren't always better (especially back when they were based on RPI).

I've never said that record is better than NCAAT. I've never said it's as good. I said it's not as obvious as you think. Record + conference is a heuristic. It's a very loose heuristic -- easy to apply, not that informative. But sometimes heuristics perform better than you might think, and better than alternatives that appear better.

To take one example of a simple, pretty good heuristic: if you want to predict the height of a white child, double their height at age 2. Simple, easy. Pretty accurate. Not as accurate as other, more detailed methods but a lot more accurate than a lot of things. Definitely more accurate than parents' height or even family height.

I don't know how predictive NCAAT seeds are. Judging by the number of upsets, in the recent past they seemed not to be all that predictive (keeping in mind, of course, that the benchmark is far short of 100%).

I very rarely "make no sense." If you think I'm making no sense, then it is far likelier that you're missing something or misunderstanding my statement. I am not error-proof but making no sense has never been my calling card.
a) nobody in this discussion has been trying to use NCAAT seeds as predictive, they've been trying to use them as summative, as descriptions of team success over the course of a season.

b) on aggregate, seeds are pretty predictive, with the exception that 10/11/12 seeds tend to do better than 9s. BracketOdds - How Far Does Each Seed Advance?
 
I don't know how predictive NCAAT seeds are. Judging by the number of upsets, in the recent past they seemed not to be all that predictive (keeping in mind, of course, that the benchmark is far short of 100%).
This is part of the problem. You have shifted the discussion from what it was initially about (what is a better proxy for assessing how successful a team's regular season was - its NCAAT seed or its record) to something different - which is more predictive of NCAAT success. Which is an entirely different question. But if you really want to get into a predictiveness discussion then the best efficiency metrics are a lot more predictive than RPI for sure. But I can promise you anyone trying to argue that UNC has not been slipping below its historical standard is not going to like comparing our efficiency metrics from the last five years to UNC's historical performance.

In any event, you are missing the forest for the trees in trying to endlessly analogize to other subjects. You linked a study comparing preseason rank to RPI in terms of which was more predictive of the winner in an NCAAT game. The implication was that "more data is not necessarily better" because RPI theoretically incorporated more data than the preseason rankings. But really what that is is different data, not more data - RPI has data about the current season's performance but no historical knowledge or knowledge about things like recruiting, while preseason voters had historical knowledge and knowledge about recruiting but no data about the current season's performance. So if anything, that is a story about which data is more predictive, not whether adding data to something makes it more predictive.

And in any event, the seeding process by definition is far more comprehensive than RPI. So the RPI analogy is ultimately useless. You seem determined to try to prove that in theory less data can be better than more data, while making no real attempt to actually argue the disputed assertion at hand (which, again, is that a team's NCAA seed is necessarily a more comprehensive assessment of the team's performance that season than its record alone would be). You are continuing to pointlessly abstract the question, rather than just considering the question itself.
 
This is part of the problem. You have shifted the discussion from what it was initially about (what is a better proxy for assessing how successful a team's regular season was - its NCAAT seed or its record) to something different - which is more predictive of NCAAT success. Which is an entirely different question. But if you really want to get into a predictiveness discussion then the best efficiency metrics are a lot more predictive than RPI for sure. But I can promise you anyone trying to argue that UNC has not been slipping below its historical standard is not going to like comparing our efficiency metrics from the last five years to UNC's historical performance.

In any event, you are missing the forest for the trees in trying to endlessly analogize to other subjects. You linked a study comparing preseason rank to RPI in terms of which was more predictive of the winner in an NCAAT game. The implication was that "more data is not necessarily better" because RPI theoretically incorporated more data than the preseason rankings. But really what that is is different data, not more data - RPI has data about the current season's performance but no historical knowledge or knowledge about things like recruiting, while preseason voters had historical knowledge and knowledge about recruiting but no data about the current season's performance. So if anything, that is a story about which data is more predictive, not whether adding data to something makes it more predictive.

And in any event, the seeding process by definition is far more comprehensive than RPI. So the RPI analogy is ultimately useless. You seem determined to try to prove that in theory less data can be better than more data, while making no real attempt to actually argue the disputed assertion at hand (which, again, is that a team's NCAA seed is necessarily a more comprehensive assessment of the team's performance that season than its record alone would be). You are continuing to pointlessly abstract the question, rather than just considering the question itself.
I was only defending the poster whom you harshed on, by saying that I didn't think his position was as extreme or ridiculous as you made it out to be. Then you said that I was making no sense when I said that less informed metrics can be more accurate than more informed ones, which is not actually a controversial point.

Adding data does not always improve prediction. That's an incredibly well established proposition and I'm not going to argue it here. If you disagree, you can read about it.
 