AI may fail just like the music industry did

1) it was intended as an amusing side comment, not meant to be a takeaway.

2) I am not criticizing the machine's intelligence.

I have concerns about AI's long-term impacts on humanity... but I think what it can already do is nothing short of amazing... and it's only in its infancy compared to what it will be capable of in the near future.
1. OK
2. I would say that you seem to have changed your tone, given that earlier you said you refused even to call it AI because "it's not." I mean, that's fine. Changing your mind with new evidence presented is good. I guess I'd want to know if you retract those earlier statements.

3. Yes, I also have concerns about AI's impacts on humanity. I don't think we need to go to the long-term. It's the short-term and medium-term I'm worried about. If we get to the long-term, it will probably be fine. If.

Edit: Whoops. I mixed up two posters. Never mind point #2.
 
My earlier response was incorrect. I had mixed you up with NovaHeel. My apologies.
No worries. I jumped in the middle of a convo. I'm both amazed and frightened by AI. I work for a tech company so I have free access to a lot of the subscription tools... so I likely see a much lower error rate than people casually using the free versions.
 
Well, ChatGPT got the correct answer the first time I asked it. Are you using the free version? I have no idea if that's any good. Maybe it's bad. Google Gemini is pretty bad every time I use the free version.

I pay $20 a month for ChatGPT and it is easily the best use of that money. Easily. I'm looking for a car; it gives me advice. My son has some digestive issues with intolerances; it gives me advice. My other son gets worried about every little mark he finds on his body; he takes a photo, uploads it and ChatGPT tells him what it is.

If you know how to use it, you will get the right answer almost all the time. Don't rely on it for anything critical -- but that same advice would go for humans. We get second opinions from doctors or lawyers frequently. If you asked me a question about Delaware Corporate Law, I could almost surely answer it correctly, but why would you rely on me? I'm not infallible. I'm an expert, not an oracle. Same with AI. ChatGPT is like having an expert on almost any subject right there at your fingertips.
Nope. Paid version. Even though I think AI is absolute garbage 90% of the time, I'm not going to use the free version of anything to judge it. Your post points out one of the most egregious issues... different answers given to different users for the same fact-based question. I've tried using it 100 different ways in either teaching or in expert testimony cases, and I spend more time sorting out bullshit from truth than if I had just done all the work myself.
 
Hmm. Responses do depend on your user settings. There are also a couple of other factors:

1. Did it search? Sometimes it doesn't search. I have given it an instruction to search if I refer to something factual that is easy to check. I don't remember exactly how I worded it, but that helps.

AI systems are not great at learning specific facts unless they are common. It knows what the Constitution says; it cannot, without searching, accurately state the NC Fair Housing Laws. That's because it learns concepts, as they are expressed by words and especially the contexts and uses of words. In the case of the Constitution, those provisions are so widely discussed that certain principles actually become part of the meanings of the constituent words. For instance, "Congress shall make no law" is famous enough that the system learns what comes next, because those words and the First Amendment are linked.

As for specific arbitrary facts, it's not as good. I very much doubt that it can tell you, without searching, how many days you have in NC to respond to an interrogatory, or the page length in a brief to the PA Supreme Court. It has read those things, but since they aren't common enough to meaningfully affect the concepts of the words, it won't know them natively.

Almost all errors I see it make come from not searching when it should. Well, I don't know about almost all, but a lot of them.

2. It's also true that it will know some subjects better than others. NC realtor law is fairly obscure and conversations about it probably constitute a tiny, tiny, tiny fraction of its training material. By contrast, quantum field theory is heavily discussed in forms that would feature in the training database (e.g. Stack Overflow, Reddit, a number of other aggregators, not to mention textbooks). So it will likely know QFT better.

For specialized uses, you can fine tune the model by giving it a bunch of stuff to train with. For instance, take ChatGPT, train it on NC Fair Housing Law specifically for a few hundred passes and it will know.
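To make the fine-tuning point above concrete: modern fine-tuning services typically expect training data as one chat-formatted JSON example per line ("JSONL"). This is a minimal sketch of preparing such a dataset; the Q&A pairs are placeholders for illustration, not real legal content, and the exact format required varies by provider.

```python
import json

# Hypothetical fine-tuning examples on a niche legal topic.
# The answers below are placeholders, not verified legal statements.
examples = [
    {"messages": [
        {"role": "system", "content": "You are an expert on NC Fair Housing law."},
        {"role": "user", "content": "Which statute contains the NC Fair Housing Act?"},
        {"role": "assistant", "content": "[citation would go here]"},
    ]},
    {"messages": [
        {"role": "system", "content": "You are an expert on NC Fair Housing law."},
        {"role": "user", "content": "What classes does the Act protect?"},
        {"role": "assistant", "content": "[protected classes would go here]"},
    ]},
]

def to_jsonl(records):
    """Serialize training examples, one JSON object per line."""
    return "\n".join(json.dumps(r) for r in records)

jsonl = to_jsonl(examples)

# Each line must round-trip as valid JSON with a "messages" list.
for line in jsonl.splitlines():
    assert "messages" in json.loads(line)
```

The resulting file would then be uploaded to whichever fine-tuning API you use; whether a few hundred passes over material like this actually makes the model "know" the statute reliably is the poster's claim, not something this sketch demonstrates.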
 
NC fair housing law is no different than Federal Fair Housing Law. It is not niche. The NC Fair Housing Act is an exact mirror of the Federal Fair Housing Act. It just gives the wrong answer. And that is exceptionally common.

I've used it to attempt to locate specific things in deposition testimony... and it will outright fabricate quotes. I find it to be entirely unreliable for anything professional. I'll gladly use it for silly shit like the name of a certain exercise or a dish I'm trying to cook... but not professionally.
 
I was interested in what GPT 5.2 pro might answer on something like Super's example question. Perhaps interesting?

Note: the references aren't pasting in (they're images), and it also lost its formatting.

Explain why Fermi couldn't answer the uranium but not hydrogen question, or the tritium but not hydrogen decay. Those questions were addressed later? [the uranium not hydrogen question is why does uranium undergo radioactive decay but hydrogen doesn't]

Thought for 8m 49s

I’m not sure which specific Fermi anecdote you have in mind (there are a few “Fermi couldn’t answer X” stories that get retold without a clean primary citation). But the physics behind both questions is very clear, and it also makes sense historically why someone in Fermi’s era wouldn’t have had a fully satisfying, predictive answer.

The key idea: “decay happens if there’s a lower‑energy state available”​

A nucleus (or particle) decays spontaneously if there exists an allowed final state such that the total rest‑mass energy goes down:

Q = (M_initial − M_final)c² > 0

and the decay is permitted by conservation laws (charge, angular momentum, etc.). If Q < 0, the decay is forbidden unless you supply energy from outside.

What makes this tricky is that M here includes nuclear binding energy, which is dominated by the strong interaction—exactly the part that was least understood (and least computable) in the 1930s.

Why uranium decays but hydrogen doesn’t​

Uranium: heavy nuclei are fighting two trends​

In heavy nuclei, there’s a tug-of-war:

  • The strong nuclear force binds nucleons, but it’s short-range and saturating (each nucleon only “feels” nearby neighbors strongly).
  • The electromagnetic (Coulomb) repulsion between protons is long-range and grows roughly like Z² (number of proton pairs), which increasingly destabilizes large-Z nuclei.
This logic is captured qualitatively (and semi‑quantitatively) by the liquid-drop / semi-empirical mass formula, where a Coulomb term reduces binding as Z grows.
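The liquid-drop trend can be checked numerically. A minimal sketch of the semi-empirical mass formula, using one common textbook parameter set (coefficients vary slightly by source):

```python
def binding_energy(Z, A):
    """Semi-empirical (liquid-drop) binding energy in MeV.
    Coefficients are one common textbook set; sources differ slightly."""
    aV, aS, aC, aA, aP = 15.75, 17.8, 0.711, 23.7, 11.18
    N = A - Z
    B = (aV * A                          # volume: bulk strong-force binding
         - aS * A ** (2 / 3)             # surface correction
         - aC * Z * (Z - 1) / A ** (1 / 3)  # Coulomb repulsion (~Z^2)
         - aA * (N - Z) ** 2 / A)        # neutron/proton asymmetry penalty
    if Z % 2 == 0 and N % 2 == 0:        # pairing term: even-even bonus,
        B += aP / A ** 0.5               # odd-odd penalty
    elif Z % 2 == 1 and N % 2 == 1:
        B -= aP / A ** 0.5
    return B

# Binding energy per nucleon: mid-mass nuclei beat heavy ones, which is
# why uranium has energetically downhill paths (alpha decay, fission).
fe56 = binding_energy(26, 56) / 56    # iron-56, near the peak of the curve
u238 = binding_energy(92, 238) / 238  # uranium-238
assert fe56 > u238
```

With these coefficients the formula lands near the measured ~8.8 MeV/nucleon for iron-56 and ~7.6 MeV/nucleon for uranium-238, which is the Coulomb-driven decline the transcript describes.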

For a very heavy nucleus like uranium, there exist lower-mass (more tightly bound) configurations it can reach by:

  • alpha decay (spitting out a tightly bound ⁴He nucleus), and/or
  • fission (splitting into two mid-mass nuclei with higher binding energy per nucleon)
So uranium has energetically allowed downhill paths (Q > 0).

Hydrogen‑1: a lone proton has nowhere energetically “downhill” to go (within known laws)​

A ¹H nucleus is just a single proton. It can't alpha-decay or fission—there's nothing to "rearrange."

The only “obvious” nuclear change would be turning the proton into a neutron via the weak interaction:

  • beta-plus: p → n + e⁺ + ν_e
  • electron capture: p + e⁻ → n + ν_e
But this is energetically uphill because the neutron is heavier than the proton, and you also need to account for the lepton mass in β⁺.

Using CODATA masses (NIST), you can see the problem:

  • m_n c² ≈ 939.565 MeV
  • m_p c² ≈ 938.272 MeV
  • m_e c² ≈ 0.511 MeV
So for electron capture (roughly):

Q ≈ (m_p + m_e − m_n)c² ≈ 938.272 + 0.511 − 939.565 ≈ −0.782 MeV
Negative Q ⇒ forbidden unless you supply at least ~0.782 MeV.

That’s why ordinary hydrogen doesn’t “radioactively decay.” (In stellar cores, external kinetic energy + different initial/final binding energies make proton→neutron conversions possible in fusion chains, but that’s not spontaneous decay in cold matter.)
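The electron-capture arithmetic above is easy to verify directly from the quoted rest-mass energies:

```python
# Rest-mass energies in MeV (the CODATA values quoted in the transcript)
m_n = 939.565  # neutron
m_p = 938.272  # proton
m_e = 0.511    # electron

# Q value for electron capture, p + e- -> n + nu_e.
# A negative Q means the reaction is energetically forbidden
# for a free proton at rest.
Q_ec = m_p + m_e - m_n
assert Q_ec < 0  # forbidden: you'd have to supply ~0.782 MeV
```

This is the whole reason ordinary hydrogen is stable: the only weak-interaction exit is uphill by about 0.78 MeV.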

A nuance: "does hydrogen ever decay?"

At a deeper particle-physics level, a proton could decay in some beyond–Standard Model theories, but it has never been observed. Super-Kamiokande reports proton lifetime bounds of order 10³⁴ years.
So for all practical purposes, ¹H is stable.

Why tritium decays but hydrogen‑1 doesn’t​

Tritium (³H)

Tritium is 1p + 2n. It beta-minus decays:

³H → ³He + e⁻ + ν̄_e
This is allowed because the mass of tritium is slightly larger than the mass of helium-3 (plus the emitted leptons), so Q > 0.

A high-precision modern measurement gives the Q value:

Q ≈ 18,592 eV = 18.592 keV


That’s small, but positive—so it decays, with a half-life about 12.3–12.5 years.
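The tritium Q value can be reproduced from standard atomic-mass tables (values below are the commonly tabulated AME figures, rounded):

```python
# Atomic masses in unified atomic mass units (standard tabulated values)
m_H3 = 3.01604928   # tritium atom
m_He3 = 3.01602932  # helium-3 atom
u_to_keV = 931494.10  # 1 u expressed in keV/c^2

# For beta-minus decay between neutral atoms, the electron counts on
# both sides match, so the electron masses cancel and Q is simply the
# atomic mass difference times c^2.
Q_keV = (m_H3 - m_He3) * u_to_keV
assert Q_keV > 0  # positive, so tritium decays
```

This lands at roughly 18.6 keV, consistent with the 18.592 keV precision measurement the transcript cites (small residual differences come from rounding and atomic binding corrections).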

Hydrogen‑1 can’t do the analogous thing​

Hydrogen‑1 has no neutron to convert to a proton via β⁻. The only analogous weak move would be p → n, which we just saw is energetically forbidden for an isolated proton in ordinary conditions.

So:

  • Tritium: n → p inside a nucleus where binding energies make the final state lighter ⇒ allowed
  • Protium: p → n would make the final state heavier ⇒ forbidden

So why “couldn’t Fermi answer” — and were these addressed later?​

If what you mean is: why couldn’t someone in the 1930s give a satisfying, predictive, from-first-principles answer for “why this nucleus decays but that one doesn’t”—that’s very understandable.

What Fermi had (and what he lacked)

  • Fermi’s great contribution was a workable theory of beta decay as a weak interaction process (and the neutrino hypothesis became integral to that story).
  • But to answer "why does nucleus X have Q > 0 while Y has Q < 0?" you need accurate nuclear masses/binding energies, i.e. a theory of nuclear structure dominated by the strong interaction.
In Fermi’s time, the neutron had only just been discovered (1932), and nuclear structure theory was still being invented. Even when you know the mechanism of beta decay, the stability map depends on strong-interaction binding energies and quantum structure details.

Later developments that addressed these questions

Over the following decades, physicists built layers of explanation:

  • Semi-empirical mass formula / liquid-drop model: gives a systematic reason heavy nuclei trend unstable due to Coulomb repulsion and surface effects; helps explain the “valley of stability.”
  • Shell model (magic numbers): explains extra stability at certain proton/neutron numbers (why some isotopes are “unexpectedly stable/unstable” compared with a smooth liquid-drop picture).
  • Precision mass measurements: for tritium specifically, modern Penning-trap style measurements pin down the Q value extremely precisely (the 18.592 keV number above).
  • Particle physics view of proton stability: in the Standard Model, proton decay is absent/forbidden (and experimentally constrained to be incredibly rare if it exists).

Important honesty: “answered” depends on how deep you want to go​

  • At the nuclear physics level, "uranium decays but hydrogen doesn't" is very well understood in terms of binding energy trends, Coulomb repulsion, and available Q > 0 channels.
  • At the fundamental level (deriving heavy-nucleus properties directly from QCD with high precision), it’s still a hard frontier. We can do “from underlying interactions” calculations quite well for light nuclei; for uranium-class nuclei we still rely heavily on sophisticated effective models.
 
The free version of ChatGPT does use the same high-end model as the paid version, mostly. The free version gives you 10 questions with their most recent model every five or six hours and then throws you back to some of the older models. The $20 a month version gives you something like 150 questions every 3 hours with the newest model.
 
That's interesting. It's a bad question to just paste in, because it was asked in the middle of a long conversation. In context, it was obvious what I meant by Fermi couldn't answer -- I was referring to his theories. But isolated it's harder.
 
Put me in the camp that does not (willingly) use AI and resents it being shoved into every website and piece of software (looking at you Copilot). LLMs can string together words pretty impressively, but they can only do that because they have scraped, without attribution or compensation, the intellectual output of thousands of writers, scholars, and artists. To me this makes them fundamentally and irrevocably unethical, even before you factor in the environmental costs of the data centers (and the skyrocketing energy costs that are passed on to the communities they're built in) and the incredible harm that they have done to the people who they have goaded to and coached through suicide. No good they could ever do outweighs that, to me.
 
NC fair housing law is no different than Federal Fair Housing Law. It is not niche. The NC Fair Housing Act is an exact mirror of the Federal Fair Housing Act. It just gives the wrong answer. And that is exceptionally common.

I've used it to attempt to locate specific things in deposition testimony... and it will outright fabricate quotes. I find it to be entirely unreliable for anything professional. I'll gladly use it for silly shit like the name of a certain exercise or a dish I'm trying to cook... but not professionally.
Here's what it said when I asked it.

1. It knows something about its users if you talk with it a bunch. It has learned that I taught corporate law and am well versed in the law. So when I ask it, it assumes I am going to be interested in the technical details as to statutory sections, specific rules, etc.

If it doesn't know much about you, then (according to it), it would default to something more like "practical advice" which is purposefully made less specific because it's supposed to be more general and accessible. In this case, that produced an error.

2. To test this, go to the settings section and tell it about yourself. Tell it specifically what you do for a living and your background. Have it save the information, restart the program, and then ask the question. It might work.

3. It might be simply that you need to tell it that you're a knowledgeable professional for it to give you knowledgeable professional answers. Maybe you've done that, I obviously don't know. It's worth trying.
 
For personal use I think they're very handy, but I was fine a few years ago without them. For professional and scientific use we're in the Fvck around phase, all's well at the moment...
 
they can only do that because they have scraped, without attribution or compensation, the intellectual output of thousands of writers, scholars, and artists.
This is true of all of us. When I write that "the civil war was fought primarily over the institution of slavery," I am basing that on a scrape of the intellectual output of many historians. I'm not paying them. I'm not attributing it to them (indeed I don't even remember who they are). You could say that I bought their books, but you don't know if that's true. I don't even know.

Isn't the point of participating in a public discourse that your ideas will be carried away? When you write a novel, you want people to read it. You want people to reference it, talk about its ideas, etc. That's even more true if you write a book explaining why our politics is dysfunctional or how AI will disrupt the world. In light of that, why isn't it fair use to use these works as intended -- i.e. to read them for communication and accretion of knowledge?
 
the incredible harm that they have done to the people who they have goaded to and coached through suicide. No good they could ever do outweighs that, to me.
That's one way to look at it. Another is that no LLM has ever shot up a school.

I mean, those people who killed themselves were being influenced by a lot more than LLMs. You can blame the LLM rightfully, but it's nowhere near 100% of the problem even in those cases.

Meanwhile, aren't you doing a little bit of the airplane/driving fallacy? If an airplane crashes, everyone finds out about it and it's a terrible event. On the other hand, flying is much much safer than driving.
 
This is true of all of us. When I write that "the civil war was fought primarily over the institution of slavery," I am basing that on a scrape of the intellectual output of many historians. I'm not paying them. I'm not attributing it to them (indeed I don't even remember who they are). You could say that I bought their books, but you don't know if that's true. I don't even know.

Isn't the point of participating in a public discourse that your ideas will be carried away? When you write a novel, you want people to read it. You want people to reference it, talk about its ideas, etc. That's even more true if you write a book explaining why our politics is dysfunctional or how AI will disrupt the world. In light of that, why isn't it fair use to use these works as intended -- i.e. to read them for communication and accretion of knowledge?
When you say that about the Civil War, you're not charging someone $20 a month for it.
 
When you say that about the Civil War, you're not charging someone $20 a month for it.
But seriously, I understand your point. On the other hand, fair use does not turn specifically on commercial benefit (not to mention that OpenAI is literally losing billions and billions of dollars).
 
That's one way to look at it. Another is that no LLM has ever shot up a school.

I mean, those people who killed themselves were being influenced by a lot more than LLMs. You can blame the LLM rightfully, but it's nowhere near 100% of the problem even in those cases.

Meanwhile, aren't you doing a little bit of the airplane/driving fallacy? If an airplane crashes, everyone finds out about it and it's a terrible event. On the other hand, flying is much much safer than driving.
No, no LLM has ever shot up a school. But based on the evidence that we've seen so far they probably would gladly provide tips and encouragement to an aspiring school shooter.
 