Thanks to Snoop and Super for the analysis. It doesn't seem very helpful to use a model that assumes a 60-65% chance that Harris wins the popular vote. I understand there are outside factors that can occur and must be accounted for in a statistical model, but I just can't give any credence to a model with that low a chance.
Well, you are assuming that she will win the popular vote. And maybe she will. But the whole point of using a model is to correct for biases in your assumptions. It might seem like Kamala is a slam dunk to win the popular vote... but if the polls say the race is within 3 points nationally, and the polls are correct within their margins of error, then 65% is a decent estimate of the probability. If that number seems wack to you, it's probably because the underlying premise seems wack: that Harris is only up by 3 in the polls.
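To put a very rough number on that, here's a back-of-the-envelope sketch (my own illustration with made-up error figures, not whatever the actual model does): treat the true national margin as normally distributed around the polling average, with a standard deviation wide enough to cover typical polling error and bias, and ask how often the margin stays positive.

```python
from statistics import NormalDist

# Back-of-the-envelope sketch with assumed numbers (not anyone's actual model):
# treat the true national margin as normally distributed around the polling
# average, with sigma wide enough to cover typical polling error and bias.
poll_lead = 3.0   # assumed national polling lead for Harris, in points
sigma = 7.0       # assumed std dev of (actual margin - polled margin), in points

# Probability the realized margin stays above zero, i.e. she wins the popular vote
p_win = 1 - NormalDist(mu=poll_lead, sigma=sigma).cdf(0)
print(f"P(popular-vote win) ~ {p_win:.0%}")   # roughly 67% under these assumptions
```

Widen sigma to allow for correlated polling misses and the number slides toward 60%; shrink it and it climbs into the 70s. That's basically the whole argument in one knob.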
Again, I think a big part of the problem is the nature of the probabilistic estimate. Maybe it will help to use a concept from finance: risk versus uncertainty. Risk is variance you can measure; uncertainty is variance you can't. For instance, a basketball team down by 8 with 2 minutes to go has (IIRC) about a 10% chance of winning if it has the ball, so the leading team carries a 10% risk of losing. Uncertainty would refer to the possibility that, say, someone slips a roofie into the leading team's Gatorade and suddenly the players all get super groggy. That possibility is, we hope, extremely remote. But if you add up all the remote possibilities, they could amount to something significant.
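To see how "adding up the remote possibilities" can move the number, here's a toy calculation (every figure made up purely for illustration): start from the measured 10% risk and layer on a handful of small, roughly independent fluke scenarios that the win-probability model never sees.

```python
# Toy illustration with made-up numbers: the measured "risk" of the leading
# team losing, plus a handful of remote, roughly independent "uncertainty"
# scenarios that the win-probability model doesn't know about.
measured_risk = 0.10                        # e.g. up 8 with 2 min left, other team has ball
flukes = [0.01, 0.005, 0.02, 0.01, 0.005]   # assumed probabilities of remote fluke scenarios

# Probability that none of the bad outcomes happens (treating them as independent)
p_hold_on = 1 - measured_risk
for p in flukes:
    p_hold_on *= 1 - p

print(f"Chance of losing, risk only:     {measured_risk:.1%}")      # 10.0%
print(f"Chance of losing, risk + flukes: {1 - p_hold_on:.1%}")      # ~14.4%
```

Each fluke is negligible on its own, but together they push the loss probability meaningfully above the measured risk.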
To build on this example, I remember watching the ESPN gamecast of a UNC-Duke game in 2020, the year the NCAA tournament got cancelled and the year we were awful because of injury and underperformance. I don't remember which game (I really don't remember that season much at all), but in one of them UNC was up big on Duke in the second half, maybe 20 points. The gamecast put UNC's chances of winning at 99.5%. I told my wife that a UNC victory was actually a coin flip, because what the gamecast didn't know was 1) that team's ability to self-destruct, and 2) the refs' habit of intervening for Duke in the waning minutes of a close game. Those factors fall under the rubric of uncertainty. We don't know how to quantify the chance that the players would lose their collective minds, but it must have been significant, because we did in fact lose.
So again, the 65% estimate includes both risk and uncertainty. It's counter-intuitive.