College Basketball 2025-26 season thread

  • Thread starter: dukeman92
  • Replies: 583
  • Views: 11K
There's also no reason that any serious metric should have a "shake up", instead change should be gradual.
Early in a season, most serious metrics should have shake ups. Or, to be more precise (because I don't know the details of how those metrics work), most serious metrics are consistent with early-season shake-ups. Once Feb rolls around, yes, there should be gradual change.
 
ooohhh good, a dick measuring contest. i have indeed played organized basketball.
calm down, my friend. I have always considered you a poster friend and hope to remain so

I think I have been respectful even though we disagree on this. I put a ? at the end of my previous post not to have a dick measuring contest, but to find out whether not having played organized basketball might explain not considering how playing only a minute or two each game can hurt a player's statistics vs the same player's stats when he plays 10-20 minutes/game.

If my posts in our "debate" came across as an attack, I apologize. I don't want to lose you as a poster friend🙏
 
Nickel has kind of progressed like the old 4 year players used to. Had he stuck around at UNC he might have been that senior leader of yore.

As we know the game has changed.
Dante Calabria, perhaps. IIRC Calabria did nothing as a freshman, had a limited role as a sophomore, and only as a junior did he become a substantial contributor. Same with Shammond. Steve Hale, perhaps to a lesser extent. Bucknall.
 
Dante Calabria, perhaps. IIRC Calabria did nothing as a freshman, had a limited role as a sophomore, and only as a junior did he become a substantial contributor. Same with Shammond. Steve Hale, perhaps to a lesser extent. Bucknall.
Or going all the way back, Mike Pepper was blue team until his senior year.
 
I think I have been respectful even though we disagree on this. I put a ? at the end of my previous post not to have a dick measuring contest, but to find out whether not having played organized basketball might explain not considering how playing only a minute or two each game can hurt a player's statistics vs the same player's stats when he plays 10-20 minutes/game.
You're wrong here. Playing or not playing organized basketball at some level is not a good way to determine empirical reality for DI college ball. Either 2 minutes a game is or isn't a valid sample size. That's a question for data, not intuition. But anyway, most of the guys like Nickel and Tyson get their playing time in bigger chunks -- it's like 5 minutes against a weak team, and a DNP against Dook.
 
Jimmy Black was well into his sophomore year before he started doing anything much. Dudley Bradley was ineffectual until his junior year, averaging 1.1 points a game as a sophomore.
 
calm down, my friend. I have always considered you a poster friend and hope to remain so

I think I have been respectful even though we disagree on this. I put a ? at the end of my previous post not to have a dick measuring contest, but to find out whether not having played organized basketball might explain not considering how playing only a minute or two each game can hurt a player's statistics vs the same player's stats when he plays 10-20 minutes/game.

If my posts in our "debate" came across as an attack, I apologize. I don't want to lose you as a poster friend🙏
we're all good, i consider you a poster friend as well. and understood on your previous post. my apologies for the overreaction.

limited playing time in any sport makes it really hard to get into a rhythm.

nickel averaged 6 mpg, so he was often getting more than a minute or two, and i certainly agree with you that his stats would've been better had he gotten more minutes. but he just wasn't good enough at that time to get more than bit minutes at UNC. he was better the following year at VT but still in a supporting role on a fairly poor team. had he stayed, he would've been a good shooter off the bench for us behind ingram/ryan in 23/24, and then last season and this season he would've played a ton a la his classmate trimble. but he left. it is what it is.
 
You're wrong here. Playing or not playing organized basketball at some level is not a good way to determine empirical reality for DI college ball. Either 2 minutes a game is or isn't a valid sample size. That's a question for data, not intuition. But anyway, most of the guys like Nickel and Tyson get their playing time in bigger chunks -- it's like 5 minutes against a weak team, and a DNP against Dook.
I am not equating playing at the high school level with DI college basketball. My point is that those who have played organized basketball understand that it is very difficult to get into the flow and rhythm on the court and get in synch with your teammates when you are playing only 1 or 2 minutes each time over the course of a game.

Based upon my experience, I would rather be on the court for a single 10 minute stretch than be inserted for five 1 or 2 minute stretches.

Nickel averaged 25 minutes in 33 games for Vandy in the not-exactly-weak SEC last year. He averaged over 10 pts/game and shot over 40% from the three-point line.

At this point, I feel like I'm howling at the moon...
 
I am not equating playing at the high school level with DI college basketball. My point is that those who have played organized basketball understand that it is very difficult to get into the flow and rhythm on the court and get in synch with your teammates when you are playing only 1 or 2 minutes each time over the course of a game.
But it's an empirical question, not an "extrapolate from my experience" question. You can see this most clearly by considering that the first two minutes of every game and every half are played exclusively by people who haven't gotten into the flow of the game yet. So are the first minutes of games sloppier than the midpoints? Statistics from the NBA say: yes, but not by much. So there is a real effect of flow on sloppiness, but that's with all five guys on the court out of the flow. What does that say about one guy?

ChatGPT says there's no clear data either way in the NBA (which is the usual touchstone for analysis because the data is so much better) as to the effect of playing-time increments. The studies are inconclusive and all over the place. Which makes sense, because this is a very hard thing to measure: you're talking about players who are indeed worse than the starters (or else they would be playing more), the # of minutes of data isn't very large because they aren't playing many minutes, and often the reason they play at all is that something is "off" about the game -- e.g. injuries, foul trouble, etc.

I would say the real effect is not having a defined role. If you're playing consistently 15 mpg, then you know what you're supposed to be doing. Your teammates know. If you're playing 2 minutes, your role is probably unclear. But that's also speculation and not something I would assert.

In any event, Nickel played 12 minutes against Syracuse and 10 minutes against Wake in the spring of his freshman year. He was pretty bad against Wake. Against Cuse, he didn't do much. Hit a three, got a steal and an assist. Not bad, but not good.
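On the "the # of minutes of data isn't very large" point, here's a throwaway Python simulation of how wide the spread of a season-long 3P% gets when the attempt count is tiny. The 36% true rate, the attempt counts, and the 33-game season are made up purely for illustration and have nothing to do with Nickel's actual numbers.

# Same hypothetical 36% shooter, 33-game season, simulated 1000 times at
# roughly 1 attempt per game and roughly 4 attempts per game.
import random

def season_3pt_pct(true_pct, attempts_per_game, games, rng):
    makes = attempts = 0
    for _ in range(games):
        for _ in range(attempts_per_game):
            attempts += 1
            makes += rng.random() < true_pct   # True counts as 1
    return makes / attempts

rng = random.Random(0)
low_usage  = [season_3pt_pct(0.36, 1, 33, rng) for _ in range(1000)]
high_usage = [season_3pt_pct(0.36, 4, 33, rng) for _ in range(1000)]

spread = lambda xs: (round(min(xs), 2), round(max(xs), 2))
print("33 attempts: ", spread(low_usage))    # very wide spread around .36
print("132 attempts:", spread(high_usage))   # much tighter spread around .36

The exact min/max depend on the random seed, but the low-attempt spread comes out dramatically wider, which is the sense in which you can't read much of anything from a handful of garbage-time minutes.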
 
But it's an empirical question, not an "extrapolate from my experience" question. You can see this most clearly by considering that the first two minutes of every game and every half are played exclusively by people who haven't gotten into the flow of the game yet. So are the first minutes of games sloppier than the midpoints? Statistics from the NBA say: yes, but not by much. So there is a real effect of flow on sloppiness, but that's with all five guys on the court out of the flow. What does that say about one guy?

ChatGPT says there's no clear data either way in the NBA (which is the usual touchstone for analysis because the data is so much better) as to the effect of playing-time increments. The studies are inconclusive and all over the place. Which makes sense, because this is a very hard thing to measure: you're talking about players who are indeed worse than the starters (or else they would be playing more), the # of minutes of data isn't very large because they aren't playing many minutes, and often the reason they play at all is that something is "off" about the game -- e.g. injuries, foul trouble, etc.

I would say the real effect is not having a defined role. If you're playing consistently 15 mpg, then you know what you're supposed to be doing. Your teammates know. If you're playing 2 minutes, your role is probably unclear. But that's also speculation and not something I would assert.

In any event, Nickel played 12 minutes against Syracuse and 10 minutes against Wake in the spring of his freshman year. He was pretty bad against Wake. Against Cuse, he didn't do much. Hit a three, got a steal and an assist. Not bad, but not good.
Now check out Trimble's frosh stats, compare them with Nickel's frosh stats, and tell me: who would you have kicked to the curb based purely on the all-knowing "data"?

I would have kept both, "feeling/intuiting" that they showed promise to develop into very good players for UNC over the following 3 years... but that's just me.
 
I am not equating playing at the high school level with DI college basketball. My point is that those who have played organized basketball understand that it is very difficult to get into the flow and rhythm on the court and get in synch with your teammates when you are playing only 1 or 2 minutes each time over the course of a game.

Based upon my experience, I would rather be on the court for a single 10 minute stretch than be inserted for five 1 or 2 minute stretches.

Nickel averaged 25 minutes in 33 games for Vandy in the not-exactly-weak SEC last year. He averaged over 10 pts/game and shot over 40% from the three-point line.

At this point, I feel like I'm howling at the moon...
i can only speak for myself but i don't disagree at ALL that nickel is a good player who i wish that we had now.

he just wasn't nearly this good as a freshman or even as a sophomore or junior. regardless, i would've liked to have kept him all along because you could see the potential, but he didn't keep the faith and bounced. and that is fairly understandable, because he wouldn't have gotten anywhere near the 25 mpg he got as a sophomore at VT had he been a sophomore on our really good team in 2023-2024. he would've been behind ingram and ryan and competing with trimble, withers, wojcik, etc. for backup minutes.
 
Early in a season, most serious metrics should have shake ups. Or, to be more precise (because I don't know the details of how those metrics work), most serious metrics are consistent with early-season shake-ups. Once Feb rolls around, yes, there should be gradual change.
The best metrics (meaning every one but, apparently, BPI, which is known to be a bad metric) ease out the preseason weighting that is added to let the metric function before any (or very many) games have been played.

The metrics should be updated each day (or even multiple times per day) when games are completed, so the changes to the ratings/rankings due to game data should be, on a systemic level, moderate for any given update.

If a few teams are much, much better or worse than they were predicted to be before the season, we should certainly see those teams move rapidly within the greater ratings/rankings, but there should not be a "shake up" of the entire system.

In short, unless someone has designed a really bad metric, there should never be "shake ups" within the rankings, as changes on a systemic level should be designed to be gradual as the season progresses.
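To make "ease out the preseason weighting" concrete, here's a toy Python sketch. The 12-games-worth-of-prior weight and the rating numbers are invented purely for illustration - I'm not claiming this is how BPI or any actual metric is tuned.

def blended_rating(preseason_rating, game_rating, games_played, prior_weight_games=12):
    # Treat the preseason prior as if it were worth `prior_weight_games` games
    # of evidence, so its influence fades as real games pile up.
    w_prior = prior_weight_games / (prior_weight_games + games_played)
    return w_prior * preseason_rating + (1 - w_prior) * game_rating

# Hypothetical team projected at +15 (points vs. an average team) whose
# on-court results so far look more like a +22 team.
for n in (0, 3, 10, 30):
    print(n, "games:", round(blended_rating(15.0, 22.0, n), 1))
# 0 games: 15.0 (all prior), 3: 16.4, 10: 18.2, 30: 20.0

The gradual behavior falls out of the math: early on the prior dominates and each update moves things only a little; by 30 games the results have mostly taken over, and they got there incrementally rather than in one "shake up".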
 
The best metrics (meaning every one but, apparently, BPI, which is known to be a bad metric) ease out the preseason weighting that is added to let the metric function before any (or very many) games have been played.

The metrics should be updated each day (or even multiple times per day) when games are completed, so the changes to the ratings/rankings due to game data should be, on a systemic level, moderate for any given update.

If a few teams are much, much better or worse than they were predicted to be before the season, we should certainly see those teams move rapidly within the greater ratings/rankings, but there should not be a "shake up" of the entire system.

In short, unless someone has designed a really bad metric, there should never be "shake ups" within the rankings, as changes on a systemic level should be designed to be gradual as the season progresses.
It depends on what you mean by a "shake up" but obviously if there are surprise teams or major disappointments, shake-ups are inevitable.

And arguably the purpose of a metric is to be accurate, not gradual. Suppose a team is underrated at the beginning of a season. Would you rather your metric recognize that immediately and "shake things up," or very gradually leak that information into the output? I would think the former, all else being equal (one advantage of gradual is that you aren't fooled by flashes, but that's a different set of issues).

I have no idea whether BPI is or isn't good, but I don't see "shakeups" as a major problem.
 
It depends on what you mean by a "shake up" but obviously if there are surprise teams or major disappointments, shake-ups are inevitable.

And arguably the purpose of a metric is to be accurate, not gradual. Suppose a team is underrated at the beginning of a season. Would you rather your metric recognize that immediately and "shake things up," or very gradually leak that information into the output? I would think the former, all else being equal (one advantage of gradual is that you aren't fooled by flashes, but that's a different set of issues).

I have no idea whether BPI is or isn't good, but I don't see "shakeups" as a major problem.
I would say that "shake ups" are when you have material changes that affect a significant portion of whatever you're measuring.

The purpose of the metric is to be accurate, but the biggest challenge to accuracy is the lack of game data. The CBB teams that go deep into the tournament only play ~40 games, and only by the very end of the season. The solution to the lack of game data is to introduce other data (historical performance, individual player data) into the model to help compensate for it. Additionally, individual game data is prone to outlier effects because of the random nature of one-off performances. So you do not want a model that overreacts to a minimal amount of game data when that data is prone to outlier effects.

For instance, both Michigan and Gonzaga are top teams this year, but when they played, Michigan beat Gonzaga 101-61. If they played 1000 more times, I sincerely doubt that Michigan would win by that margin again. So you don't want your model to react too strongly to that one actual game and either put Michigan miles ahead of the rest of the field or drop Gonzaga way, way back toward the middle of the pack of CBB. Good models account for one-game outcomes that are likely outlier effects rather than accurate representations of team quality, by reducing the impact of any one particular game and smoothing the game data with other data.

If a model is having major "shake ups" 10 games into the season, then the model has failed: the non-game data isn't sufficient to make it effective, and game data alone is too limited to carry the model until late in the season.
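One common way models blunt that kind of outlier is to cap (or otherwise shrink) the margin of victory before it feeds the rating update. A generic Python sketch with made-up numbers (the 25-point cap, the 0.10 update weight, the ratings) - not a description of how BPI or anyone else actually handled Michigan-Gonzaga.

def effective_margin(margin, cap=25):
    # Clamp a single-game margin so a 40-point blowout doesn't count for
    # much more than a comfortable 25-point win.
    return max(-cap, min(cap, margin))

def update_rating(old_rating, expected_margin, actual_margin, k=0.10):
    # Nudge the rating by a fraction k of the (capped) surprise.
    surprise = effective_margin(actual_margin) - expected_margin
    return old_rating + k * surprise

# Michigan 101, Gonzaga 61: raw margin 40, but it enters the update as 25.
print(round(update_rating(old_rating=20.0, expected_margin=4.0, actual_margin=40), 1))   # 22.1
# With no cap the same game would push the rating to 23.6 -- one result
# still moves the needle, just not by a wild amount.

Real systems layer on opponent strength, venue, pace, and so on, but some kind of damping like this is typically what keeps one 101-61 game from reordering the whole field on its own.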
 
I would say that "shake ups" are when you have material changes that affect a significant portion of whatever you're measuring.

The purpose of the metric is to be accurate, but the biggest challenge to accuracy is the lack of game data. The CBB teams that go deep into the tournament only play ~40 games, and only by the very end of the season. The solution to the lack of game data is to introduce other data (historical performance, individual player data) into the model to help compensate for it. Additionally, individual game data is prone to outlier effects because of the random nature of one-off performances. So you do not want a model that overreacts to a minimal amount of game data when that data is prone to outlier effects.

For instance, both Michigan and Gonzaga are top teams this year, but when they played, Michigan beat Gonzaga 101-61. If they played 1000 more times, I sincerely doubt that Michigan would win by that margin again. So you don't want your model to react too strongly to that one actual game and either put Michigan miles ahead of the rest of the field or drop Gonzaga way, way back toward the middle of the pack of CBB. Good models account for one-game outcomes that are likely outlier effects rather than accurate representations of team quality, by reducing the impact of any one particular game and smoothing the game data with other data.

If a model is having major "shake ups" 10 games into the season, then the model has failed: the non-game data isn't sufficient to make it effective, and game data alone is too limited to carry the model until late in the season.
I agree with pretty much all of that. It's a balancing act, though. It's a basic machine-learning problem: how to closely fit the data but not overfit. I think overfitting is what you're describing as "the outlier effects." It seems to me to be an empirical question as to how much "shaking up" is ideal, and one that might vary from year to year (or at least between 3 year periods).

What I do: don't pay attention to those rankings 10 games into the season.
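Coming back to the fit-vs-overfit framing: in the simplest version it really is one knob, namely how hard each result is allowed to pull the rating. A toy Python sketch with an invented run of game "surprises" (actual margin minus expected) for a team that genuinely is better than its preseason number:

def run_season(start_rating, surprises, k):
    rating = start_rating
    history = [rating]
    for s in surprises:
        rating += k * s                  # pull the rating toward each result
        history.append(round(rating, 1))
    return history

# Hypothetical team that is ~8 points better than its prior, plus game-to-game noise.
surprises = [12, 3, -6, 15, 9, -2, 11, 5]

print(run_season(10.0, surprises, k=0.30))   # reacts fast, but bounces around
print(run_season(10.0, surprises, k=0.10))   # steadier, but slower to catch up

Neither weight is "right" in the abstract - which one predicts the next game better is exactly the empirical question, and the answer could plausibly shift from year to year.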
 
There's a reason no one takes BPI seriously.

There's also no reason that any serious metric should have a "shake up", instead change should be gradual.

Also, BPI has Gonzaga ranked second...two spots ahead of Michigan who is undefeated and beat them by 40 points. The only way you can rank Gonzaga ahead of Michigan at this point is if you think each team only gets a limited number of points each season and Michigan is burning through theirs too quickly.
Without offering any opinion on how BPI compares to any other rankings, if you look at the actual changes in the rankings, the characterization of a "shake up" - which came from the click-baity article, not ESPN - is really overstated. There really was just fairly normal movement around the rankings.
 
Without offering any opinion on how BPI compares to any other rankings, if you look at the actual changes in the rankings, the characterization of a "shake up" - which came from the click-baity article, not ESPN - is really overstated. There really was just fairly normal movement around the rankings.
Ah, fair enough.

I considered that could be the case, but decided not to let that possibility get in the way of a good rant.
 
Without offering any opinion on how BPI compares to any other rankings, if you look at the actual changes in the rankings, the characterization of a "shake up" - which came from the click-baity article, not ESPN - is really overstated. There really was just fairly normal movement around the rankings.
Yeah I didn't see any huge changes when I read the article posted, but looking at the BPI I can say that they are, in the parlance of the kids, dogwater.
 
Based upon my experience, I would rather be on the court for a single 10 minute stretch than be inserted for five 1 or 2 minute stretches.
Maybe, but if you aren't part of the regular rotation, then seeing 10 minutes at a time has never been an option at Carolina. It just isn't realistic to expect Hubert to have given Tyler Nickel that much time as a freshman - and if he had, people would have complained about that as well.
 