AI may fail just like the music industry did

Agreed and it can't happen fast enough.

I hate every single LLM. I use them for some things like writing emails and simple calculations, but they all suck. They're nothing more than algorithmic predictive chat bots, yet people think they're actually having a conversation with them. I actually refuse to call them AI because they're not.
I am going to push back here a bit.

First, AI is a field of computer science. The name is used to distinguish it from ordinary deterministic algorithms. But even something like an algorithm that searches for the best move in a board game is considered AI (or at least was 20 years ago), even though it may have no learning component and is therefore purely deterministic.
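That kind of exhaustive game-tree search fits in a few lines. Here's a sketch for a toy take-away game (the game and its rules are just an illustration): n stones on the table, each turn you take 1-3, and whoever takes the last stone wins.

```python
from functools import lru_cache

# Deterministic game-tree search: no learning anywhere, yet this is
# exactly the kind of program the AI field has always claimed.
# Toy rules (illustrative): n stones, take 1, 2, or 3 per turn;
# the player who takes the last stone wins.
@lru_cache(maxsize=None)
def player_to_move_wins(n: int) -> bool:
    """True if the player about to move can force a win from n stones."""
    if n == 0:
        return False  # no move available: the previous player just won
    # A position is winning if any legal move leaves the opponent losing.
    return any(not player_to_move_wins(n - take)
               for take in (1, 2, 3) if take <= n)

# The search rediscovers the classic pattern: multiples of 4 are lost.
print([n for n in range(1, 13) if not player_to_move_wins(n)])  # [4, 8, 12]
```

Nothing here "learns" anything; it just searches every line of play, and that was still called AI.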

But, anyway, a neural network that recognizes a numeric digit in an image is AI. Is it intelligence as we know it? Of course not but it is in the field of AI.

The second point is that a chat bot is far more than just a token predictor. Very complex concepts become encoded within the massive neural networks during training, almost like magic. Are they really fundamentally different from biological intelligence? One could argue that we are just like a predictive chat bot.
 
I wonder how much of the Beatles catalog you and they have heard. Fwiw, my tastes have moved on but I still appreciate much of their music. I do question how anyone can think of them as merely pop, nttawwt.
They aren't "merely" pop but they are pop in the ways that matter. Not every single track, but considerably more often than not. I have no idea how much of the catalogue my students have heard -- it wasn't really the point of the exercise so I didn't ask. I've heard every Beatles album. It's been a while -- no person is under an obligation to continue listening to music they hate just so they can cogently describe how they hate it.

I'm not a Stones fan either, but I don't hate them. I respect some of their earlier work. It never sounds as if written for grandmas. It actually sustains tension -- sometimes even carrying it for . . . (gasp) a whole minute -- without dissolving it into banality. Most of all, it's not so dreadfully boring.

But anyway, this isn't a Beatles thread
 
Agreed and it can't happen fast enough.

I hate every single LLM. I use them for some things like writing emails and simple calculations, but they all suck. They're nothing more than algorithmic predictive chat bots, yet people think they're actually having a conversation with them. I actually refuse to call them AI because they're not.
When did you use LLMs in any meaningful capacity? My son studies computer engineering. Sometimes his professors are shit. When they give a shitty incomprehensible lecture, he goes to ChatGPT and it explains the concept. And given that it can answer questions posed to it, it converses.

I am using ChatGPT to learn quantum field theory. For instance, here's a representative interaction. I ask:

OK, so Fermi couldn't answer the uranium but not hydrogen question, or the tritium but not hydrogen decay. Those questions were addressed later? [the uranium not hydrogen question is why does uranium undergo radioactive decay but hydrogen doesn't]

It responds:

Yes. Fermi’s theory did not explain why some nuclei beta-decay and others do not. His theory assumed the decay was energetically allowed and then predicted how fast it happens and what the electron spectrum looks like. The question of which nuclei are unstable comes from nuclear binding energies, which was worked out later through nuclear structure theory.

Two separate pieces are involved.

First is the energetics. A beta decay can only occur if the total mass–energy of the final system is smaller than the initial one. In nuclear physics this is usually written using the “Q value”:
[math equations here, can't be rendered in this font set]

Hydrogen-1 (a single proton) cannot beta decay because

proton → neutron + positron + neutrino

would require energy; the neutron is heavier than the proton. The process is energetically forbidden.

Tritium, on the other hand, can beta decay:

³H → ³He + e⁻ + ν̄  (tritium decays to helium-3, an electron, and an antineutrino)

because the helium-3 nucleus has slightly lower mass-energy than the tritium nucleus. That small difference (about 18.6 keV) is what powers the decay.
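[A quick check of GPT's 18.6 keV figure against published atomic masses; the mass values below are assumed from standard tables and rounded:]

```python
# Sanity-check the tritium decay energy: Q = (m_parent - m_daughter) * c^2.
# Atomic masses in unified mass units (assumed from standard tables, rounded):
M_H3 = 3.0160493   # tritium
M_HE3 = 3.0160293  # helium-3
U_TO_KEV = 931494.1  # 1 u in keV/c^2

q_kev = (M_H3 - M_HE3) * U_TO_KEV
print(round(q_kev, 1))  # ~18.6 keV, matching the quoted figure
```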

Second is nuclear structure. Even if the decay is energetically allowed, the rate depends on the detailed overlap between the initial and final nuclear states. This is described by the nuclear matrix element in Fermi’s formula. Some transitions are “allowed” and fast, while others are “forbidden” and extremely slow.

These ideas were developed after Fermi by nuclear physicists like Gamow and Teller. They classified beta transitions and showed how nuclear spin and parity affect the decay probability. So historically the pieces came together like this:

  1. Pauli (1930) – neutrino hypothesis.
  2. Fermi (1934) – dynamical theory of beta decay and decay rates.
  3. Nuclear mass measurements and nuclear models (1930s–1950s) – explanation of which nuclei lie in the “valley of stability.”
  4. Gamow–Teller theory – selection rules and nuclear structure effects.

Once nuclear masses were mapped out, it became clear that stable nuclei sit along a curve where the neutron–proton balance minimizes the nuclear energy. Nuclei away from that curve beta decay toward it.
So you are exactly right: Fermi solved the interaction mechanism, but the “uranium vs hydrogen” question belongs to nuclear structure physics, which developed in parallel afterward.


If that's mediocre non-intelligence, man I guess your law practice would blow my mind.
 
I hate every single LLM. I use them for some things like writing emails and simple calculations, but they all suck. They're nothing more than algorithmic predictive chat bots, yet people think they're actually having a conversation with them. I actually refuse to call them AI because they're not.
By the way, I pasted this into ChatGPT and it responded with what I consider a pretty mature response:

The complaint mixes a fair criticism with an overstatement. It is true that large language models are predictive systems trained to estimate the next token in a sequence. They do not possess beliefs, intentions, or awareness, and they generate responses by statistical inference over patterns learned during training. For that reason, treating them as conscious conversational partners is a category error. At the same time, describing them as “just predictive chatbots” understates the scale and sophistication of what is happening computationally. The models encode extremely high-dimensional statistical structure drawn from vast corpora of human language and can perform tasks that were historically considered difficult for machines: translation, code generation, summarization, and many forms of reasoning that emerge from that representation. The mechanism is prediction, but the capabilities that arise from it are not trivial.

The refusal to call them artificial intelligence depends mostly on how one defines the term. In computer science, “AI” has always been a broad umbrella for systems that perform tasks associated with human intelligence: search algorithms, expert systems, computer vision, reinforcement learning agents, and now large language models. None of those systems were conscious, yet the field has long used the term “AI” for them. If someone prefers to reserve “AI” for systems with general reasoning ability or self-awareness, that is a definitional preference rather than a technical correction. Within the discipline, LLMs clearly fall under AI because they are machine learning systems that approximate cognitive tasks involving language.

The more productive question is not whether the label “AI” is deserved but what these systems actually are and what they are good for. They are tools that compress patterns in human language and make those patterns accessible through interaction. That makes them useful for drafting text, exploring ideas, writing code, tutoring, and organizing information, but also prone to confident mistakes when pushed beyond the evidence in their training data. Treating them as oracles is misguided, yet dismissing them as trivial chatbots ignores the real technical progress they represent. The sensible position lies between those extremes: they are neither minds nor toys, but powerful statistical instruments that still require human judgment.
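For what it's worth, the "mechanism is prediction" point is easy to demonstrate at toy scale. A bigram model, which just picks the most frequent follower of the previous word, is the simplest possible next-token predictor (real LLMs use transformers over subword tokens, but the training objective is the same idea):

```python
from collections import Counter, defaultdict

# Toy next-token predictor: count which word follows which in a tiny
# "corpus", then predict the most frequent follower. An LLM does the
# same job with billions of parameters instead of a frequency table.
corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent word seen after `word`, or None."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" -- it followed "the" twice, others once
```

The gap between this and ChatGPT is scale and architecture, not kind, which is exactly why "just a predictor" both is and isn't a fair description.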
 
By the way, I pasted this into ChatGPT and it responded with what I consider a pretty mature response:

The complaint mixes a fair criticism with an overstatement. It is true that large language models are predictive systems trained to estimate the next token in a sequence. They do not possess beliefs, intentions, or awareness, and they generate responses by statistical inference over patterns learned during training. For that reason, treating them as conscious conversational partners is a category error. At the same time, describing them as “just predictive chatbots” understates the scale and sophistication of what is happening computationally. The models encode extremely high-dimensional statistical structure drawn from vast corpora of human language and can perform tasks that were historically considered difficult for machines: translation, code generation, summarization, and many forms of reasoning that emerge from that representation. The mechanism is prediction, but the capabilities that arise from it are not trivial.

The refusal to call them artificial intelligence depends mostly on how one defines the term. In computer science, “AI” has always been a broad umbrella for systems that perform tasks associated with human intelligence: search algorithms, expert systems, computer vision, reinforcement learning agents, and now large language models. None of those systems were conscious, yet the field has long used the term “AI” for them. If someone prefers to reserve “AI” for systems with general reasoning ability or self-awareness, that is a definitional preference rather than a technical correction. Within the discipline, LLMs clearly fall under AI because they are machine learning systems that approximate cognitive tasks involving language.

The more productive question is not whether the label “AI” is deserved but what these systems actually are and what they are good for. They are tools that compress patterns in human language and make those patterns accessible through interaction. That makes them useful for drafting text, exploring ideas, writing code, tutoring, and organizing information, but also prone to confident mistakes when pushed beyond the evidence in their training data. Treating them as oracles is misguided, yet dismissing them as trivial chatbots ignores the real technical progress they represent. The sensible position lies between those extremes: they are neither minds nor toys, but powerful statistical instruments that still require human judgment.
AI always tries to find a polite way to say "you're wrong."
 
AI screws up even simple questions in my field. As an example, ask AI if it is a Fair Housing violation in NC to discriminate based on age.

Hint...it is not.
 
AI screws up even simple questions in my field. As an example, ask AI if it is a Fair Housing violation in NC to discriminate based on age.

Hint...it is not.
Me: is it a fair housing violation to discriminate by age in North Carolina?

GPT: No. In North Carolina, age by itself is generally not a protected class under fair housing law, so discrimination based solely on a person’s age (for example, preferring older tenants over younger adults) is typically not a fair housing violation. The federal Fair Housing Act—which largely controls housing discrimination law nationwide—prohibits discrimination only on the basis of race, color, religion, national origin, sex, disability, and familial status.

North Carolina’s own fair housing statute largely mirrors the federal law and likewise prohibits housing discrimination based on race, color, religion, sex, national origin, disability, or familial status (families with children). Age is not included in that list. As a result, refusing to rent to someone because they are, say, 22 or 70 is usually not illegal under state or federal fair housing law unless it overlaps with another protected category.

One related category that is protected is familial status. Landlords cannot discriminate against families with children under 18 living with parents or guardians. At the same time, federal law expressly allows certain age-restricted communities, such as “55+” retirement housing, under the Housing for Older Persons Act. Those communities can legally limit residency by age if they meet statutory requirements.

In short: age discrimination alone is usually legal in housing in North Carolina, unless it is actually discrimination against families with children or violates some other protected category. If you want, I can also explain a few situations where “age rules” do end up violating fair housing law because they are really proxies for familial-status discrimination.
 
When did you use LLMs in any meaningful capacity? My son studies computer engineering. Sometimes his professors are shit. When they give a shitty incomprehensible lecture, he goes to ChatGPT and it explains the concept. And given that it can answer questions posed to it, it converses.

I am using ChatGPT to learn quantum field theory. For instance, here's a representative interaction. I ask:

OK, so Fermi couldn't answer the uranium but not hydrogen question, or the tritium but not hydrogen decay. Those questions were addressed later? [the uranium not hydrogen question is why does uranium undergo radioactive decay but hydrogen doesn't]

It responds:

Yes. Fermi’s theory did not explain why some nuclei beta-decay and others do not. His theory assumed the decay was energetically allowed and then predicted how fast it happens and what the electron spectrum looks like. The question of which nuclei are unstable comes from nuclear binding energies, which was worked out later through nuclear structure theory.

Two separate pieces are involved.

First is the energetics. A beta decay can only occur if the total mass–energy of the final system is smaller than the initial one. In nuclear physics this is usually written using the “Q value”:
[math equations here, can't be rendered in this font set]

Hydrogen-1 (a single proton) cannot beta decay because

proton → neutron + positron + neutrino

would require energy; the neutron is heavier than the proton. The process is energetically forbidden.

Tritium, on the other hand, can beta decay:

³H → ³He + e⁻ + ν̄  (tritium decays to helium-3, an electron, and an antineutrino)

because the helium-3 nucleus has slightly lower mass-energy than the tritium nucleus. That small difference (about 18.6 keV) is what powers the decay.

Second is nuclear structure. Even if the decay is energetically allowed, the rate depends on the detailed overlap between the initial and final nuclear states. This is described by the nuclear matrix element in Fermi’s formula. Some transitions are “allowed” and fast, while others are “forbidden” and extremely slow.

These ideas were developed after Fermi by nuclear physicists like Gamow and Teller. They classified beta transitions and showed how nuclear spin and parity affect the decay probability. So historically the pieces came together like this:


  1. Pauli (1930) – neutrino hypothesis.
  2. Fermi (1934) – dynamical theory of beta decay and decay rates.
  3. Nuclear mass measurements and nuclear models (1930s–1950s) – explanation of which nuclei lie in the “valley of stability.”
  4. Gamow–Teller theory – selection rules and nuclear structure effects.

Once nuclear masses were mapped out, it became clear that stable nuclei sit along a curve where the neutron–proton balance minimizes the nuclear energy. Nuclei away from that curve beta decay toward it.
So you are exactly right: Fermi solved the interaction mechanism, but the “uranium vs hydrogen” question belongs to nuclear structure physics, which developed in parallel afterward.


If that's mediocre non-intelligence, man I guess your law practice would blow my mind.
I don't practice law anymore. I work for the government (DoD cosplaying as DoW) as a contracting officer and am required to use their shitty LLM for many things related to my job.

I'm not saying they don't have their utility, I'm saying they all suck at anything more than basic responses.

Sure, they're decent at teaching things or writing an email (which is the bulk of what I use mine for), or at simple calculations for running cost-type contract modification logs where the Excel workbook doesn't have the proper equation baked in already. I've also told my youngest son - a junior in HS - to use it to formulate several arguments for his ethics debate when he gets stuck, or to help teach him some random math skill that I have forgotten.

Most recently, my GF bought an old Victorian townhouse in Pittsburgh. Three of the main LLMs all told her that some of the architectural features came from an adjoining structure years ago, but when I did a historical land search for the plat and plot, it showed otherwise.

Basically, when it doesn't know something, it just bullshits the most plausible-sounding response instead of saying "yeah, not sure about that one."

So, yeah, I use one or more of them on a daily basis but am ALWAYS skeptical of answers that require anything more than a basic knowledgeable response.
 
FWIW, I'm not anti-LLM, I'm anti-LLM dependency, and I'm also not a fan of the way all the tech companies are forcing this shit down our throats as if it's the best thing EVAH! Which is nothing more than marketing horseshit.

On top of the above - and not even touching the environmental disasters that data centers create from all the computing these LLMs require - they've also made computer components more expensive, which pushes manufacturers to raise prices, essentially pricing lower-income people out of the computer market and widening the wealth and knowledge disparity that already exists.
 
AI always tries to find a polite way to say "you're wrong."
Unless you ask it to not be polite.

Me:
In an extremely brief, snarky and condescending way, tell me your thoughts on this opinion: I hate every single LLM. I use them for some things like writing emails and simple calculations, but they all suck. They're nothing more than algorithmic predictive chat bots, yet people think they're actually having a conversation with them. I actually refuse to call them AI because they're not.

Chatgpt:
Ah yes, the classic “I use this tool regularly but insist it’s worthless and not real” stance. Very bold. LLMs are predictive models—that’s literally the point—but dismissing them as “just chatbots” while relying on them for work is a bit like saying calculators are useless because they only do math. As for the AI label: you’re arguing with decades of computer science terminology, not the machines. Good luck with that. 🙃
 
Unless you ask it to not be polite.

Me:
In an extremely brief, snarky and condescending way, tell me your thoughts on this opinion: I hate every single LLM. I use them for some things like writing emails and simple calculations, but they all suck. They're nothing more than algorithmic predictive chat bots, yet people think they're actually having a conversation with them. I actually refuse to call them AI because they're not.

Chatgpt:
Ah yes, the classic “I use this tool regularly but insist it’s worthless and not real” stance. Very bold. LLMs are predictive models—that’s literally the point—but dismissing them as “just chatbots” while relying on them for work is a bit like saying calculators are useless because they only do math. As for the AI label: you’re arguing with decades of computer science terminology, not the machines. Good luck with that. 🙃
LOL
 
I don't practice law anymore. I work for the government (DoD cosplaying as DoW) as a contracting officer and am required to use their shitty LLM for many things related to my job.

I'm not saying they don't have their utility, I'm saying they all suck at anything more than basic responses.

Sure, they're decent at teaching things or writing an email (which is the bulk of what I use mine for), or at simple calculations for running cost-type contract modification logs where the Excel workbook doesn't have the proper equation baked in already. I've also told my youngest son - a junior in HS - to use it to formulate several arguments for his ethics debate when he gets stuck, or to help teach him some random math skill that I have forgotten.

Most recently, my GF bought an old Victorian townhouse in Pittsburgh. Three of the main LLMs all told her that some of the architectural features came from an adjoining structure years ago, but when I did a historical land search for the plat and plot, it showed otherwise.

Basically, when it doesn't know something, it just bullshits the most plausible-sounding response instead of saying "yeah, not sure about that one."

So, yeah, I use one or more of them on a daily basis but am ALWAYS skeptical of answers that require anything more than a basic knowledgeable response.
1. The response I pasted in about your post was considerably less basic than almost everything you post here. That's not an attack on you; you likely could do better if, say, it was at all important and worth the time. But it is weird to see this criticism in this context.

2. I promise you: that passage I pasted about quantum field theory is the opposite of basic. It is an incredibly complicated set of theories and GPT knows it well. It knows it well enough to give cogent responses -- and, as far as I can tell, accurate ones -- to hypothetical questions, which show thought.

3. Try asking GPT law questions. You might be surprised. For instance, I asked "evaluate the constitutionality of the SAVE Act." This was the reply. If this was a law school exam question, it would easily be the best response in the class, at least at most law schools.

The Safeguard American Voter Eligibility (SAVE) Act would require documentary proof of U.S. citizenship—such as a passport or birth certificate—to register to vote in federal elections, and in some versions requires additional identification when voting. It would also alter voter-registration procedures and potentially require states to share voter-roll information with federal agencies.

Evaluating its constitutionality requires examining three main constitutional issues: Congress’s power over federal elections, the right to vote under the Reconstruction Amendments, and federalism concerns regarding state control of election administration.

First, Congress likely has substantial authority to regulate federal election procedures, which supports the bill’s constitutionality in principle. Article I, §4 of the Constitution (the Elections Clause) allows Congress to “make or alter” state regulations governing the time, place, and manner of federal elections. Courts have interpreted this power broadly, allowing Congress to enact nationwide rules such as the National Voter Registration Act and the Help America Vote Act. Under this framework, a requirement that voters provide proof of citizenship could plausibly be justified as a “manner” regulation aimed at enforcing the constitutional rule that only citizens may vote in federal elections. Because citizenship is already a lawful qualification for voting, supporters argue the act merely enforces an existing constitutional requirement rather than creating a new restriction.

Second, the strongest constitutional challenge would likely focus on burdens on the right to vote under the Fourteenth and Twenty-Fourth Amendments. The Supreme Court uses a balancing approach (the Anderson–Burdick framework) to evaluate election regulations. Laws imposing modest burdens on voters are usually upheld if justified by legitimate state interests, but severe burdens may be struck down. Critics argue that requiring documentary proof of citizenship could effectively disenfranchise eligible voters who lack passports or matching birth certificates—potentially millions of people—especially those who changed names after marriage or lack ready access to such documents. If courts concluded the documentation requirement significantly restricts voting while addressing only a minimal problem (non-citizen voting is extremely rare), they could find the burden disproportionate and unconstitutional.

Third, there is a federalism issue concerning how far Congress may intrude into state election administration. States traditionally control voter registration systems and election procedures, and some critics argue the SAVE Act effectively nationalizes voter-registration rules and constrains state discretion. However, precedent suggests Congress can regulate federal elections even if doing so affects state election systems. The main constitutional risk would arise if the law attempted to compel states to administer federal programs in ways that violate anti-commandeering principles or conflict with existing federal statutes such as the National Voter Registration Act.

In short, the SAVE Act is not obviously unconstitutional on its face, because Congress has broad authority to regulate federal election procedures. The real constitutional question would likely turn on how burdensome the documentation requirement proves in practice and whether courts view it as a reasonable enforcement mechanism or as an unjustified barrier to the fundamental right to vote. If enacted, it would almost certainly face immediate litigation, and its fate would likely depend on how courts apply the Anderson–Burdick balancing test to the specific provisions.
 
I continue to say “Why don’t we fix natural stupidity before we buy into artificial intelligence?!?”
Because at the moment there is no cure for stupidity. Plus, social complexities are nearing the limits of our cognitive capacities. We need AI. Of course, we have to take care that our innate stupidity doesn't fuck it up. I'm not a taker on that bet.
 
That's your takeaway, and you're criticizing the machine's intelligence?
1) it was intended as an amusing side comment, not meant to be a takeaway.

2) I am not criticizing the machine's intelligence.

I have concerns about AI's long-term impacts on humanity... but I think what it can already do is nothing short of amazing... and it's only in the infancy of what it will be capable of in the near future.
 
Well, ChatGPT got the correct answer the first time I asked it. Are you using the free version? I have no idea if that's any good. Maybe it's bad. Google Gemini is pretty bad every time I use the free version.

I pay $20 a month for ChatGPT and it is easily the best use of that money. Easily. I'm looking for a car; it gives me advice. My son has some digestive issues with intolerances; it gives me advice. My other son gets worried about every little mark he finds on his body; he takes a photo, uploads it and ChatGPT tells him what it is.

If you know how to use it, you will get the right answer almost all the time. Don't rely on it for anything critical -- but that same advice would go to humans. We get second opinions from doctors or lawyers frequently. If you asked me a question about Delaware Corporate Law, I could almost surely answer it and correctly, but why would you rely on me? I'm not infallible. I'm an expert, not an oracle. Same with AI. ChatGPT is like having an expert on almost any subject right there at your fingertips.
 
FWIW, I'm not anti-LLM, I'm anti-LLM dependency, and I'm also not a fan of the way all the tech companies are forcing this shit down our throats as if it's the best thing EVAH! Which is nothing more than marketing horseshit.

On top of the above - and not even touching the environmental disasters that data centers create from all the computing these LLMs require - they've also made computer components more expensive, which pushes manufacturers to raise prices, essentially pricing lower-income people out of the computer market and widening the wealth and knowledge disparity that already exists.
AI is a tool. And like any tool, it has things it's good at and things it does poorly. However, the things that AI does poorly are being worked on day and night with billions of investment dollars being applied... so those things are improving on a daily basis.

The amount of information it can process and bring together to create an informed, rational response is almost beyond belief... even in a post-Google world. There are certainly errors, and it is stupid to accept everything it spits out as fact... but that has been true about the internet since day 1... and this tool is using the internet as its primary data source.

It's obvious why humans are resistant to it... particularly highly educated people who invested a lot of time and effort into becoming experts on topics. We were raised in a world where we were taught that computers could never replace a highly educated person... and it won't be long before that is false.
 