And I’m just questioning the reliability of a model that appears to be inconsistent with the observable data. Maybe his model is great. But if it is, it’s picking up on something right now that’s not showing up in the data.
With all due respect, don't you think the model knows more than you? First, it is fed way more data than you can see, at least if you have a day job (which you do, as I understand it). Second, people are bad at numerical estimates, at least compared to models. Third, the model incorporates a lot of learning from past elections that you're not privy to.
The model isn't necessarily right, but I would think the answer to that would be, "here are some other models that are just as good, and they say something similar." Not "that looks fishy to me." In fairness, the models have become black boxes. Silver explains his methodology, but he doesn't reveal all the hyperparameters of his model, so it can't be replicated. 538 does the same. So it's pretty hard to look at the models themselves and say, "this one is better."
Also, in fairness, "this looks fishy to me" is a constant temptation. I know I've done it when dissecting polls. "Unskewing the polls" is a natural tendency, but not a correct one unless the skew is obvious. For instance, if the NYT poll really did get a sample with 56% evangelicals, it would need unskewing -- but of course, it didn't.
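For what it's worth, legitimate "unskewing" is just reweighting: you scale each respondent's weight by the ratio of their group's population share to its share of the sample. Here's a minimal sketch in Python with made-up numbers (an assumed 30% evangelical population share), not anything taken from the actual NYT poll or its weighting scheme:

```python
# Minimal sketch of poll reweighting (post-stratification) on one variable.
# The population share (30% evangelical) and the sample composition are
# hypothetical numbers for illustration, not from any actual poll.

def poststratify(respondents, population_shares):
    """Return one weight per respondent so weighted group shares match the population."""
    n = len(respondents)
    sample_counts = {}
    for r in respondents:
        sample_counts[r["group"]] = sample_counts.get(r["group"], 0) + 1
    weights = []
    for r in respondents:
        sample_share = sample_counts[r["group"]] / n
        weights.append(population_shares[r["group"]] / sample_share)
    return weights

# Toy sample that over-represents evangelicals (56% vs. an assumed 30% of the population).
sample = [{"group": "evangelical"}] * 56 + [{"group": "other"}] * 44
weights = poststratify(sample, {"evangelical": 0.30, "other": 0.70})

# Weighted share of evangelicals now matches the 30% target.
weighted_evangelical = sum(
    w for r, w in zip(sample, weights) if r["group"] == "evangelical"
) / sum(weights)
print(round(weighted_evangelical, 2))  # 0.3
```

The point being: that adjustment is only defensible when the target shares are well established and the sample is clearly off, which is exactly why the 56% case is hypothetical.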
Anyway, we're having a mini-Chevron discussion here, though perhaps we don't realize it. You're playing the judge who decides that the agency must be wrong because it is relying on, say, "sociological gobbledygook," and I'm saying to defer to the experts. LOL.