You also accurately described my ex-wife.

> mediocrity machines. their core programming is to regurgitate the most average example of their database, but with a people pleasing package.
Paul was/is most definitely "Pop" in many ways... John, George (and even Ringo for what it's worth) not so much.

> I wonder how much of the Beatles catalog you and they have heard. Fwiw, my tastes have moved on but I still appreciate much of their music. I do question how anyone can think of them as merely pop, nttawwt.
I am going to push back here a bit.

> Agreed and it can't happen fast enough.
I hate every single LLM. I use them for some things like writing emails and simple calculations but they all suck. They're nothing more than algorithmic predictive chat bots, yet people think they're actually having a conversation with them. I actually refuse to call them AI bc they're not.
They aren't "merely" pop but they are pop in the ways that matter. Not every single track, but considerably more often than not. I have no idea how much of the catalogue my students have heard -- it wasn't really the point of the exercise so I didn't ask. I've heard every Beatles album. It's been a while -- no person is under an obligation to continue listening to music they hate just so they can cogently describe how they hate it.

> I wonder how much of the Beatles catalog you and they have heard. Fwiw, my tastes have moved on but I still appreciate much of their music. I do question how anyone can think of them as merely pop, nttawwt.
When did you use LLMs in any meaningful capacity? My son studies computer engineering. Sometimes his professors are shit. When they give a shitty incomprehensible lecture, he goes to ChatGPT and it explains the concept. And given that it can answer questions posed to it, it converses.

> Agreed and it can't happen fast enough.
By the way, I pasted this into ChatGPT and it responded with what I consider a pretty mature response:

> I hate every single LLM. I use them for some things like writing emails and simple calculations but they all suck. They're nothing more than an algorithmic predictive chat bots yet people think they're actually having a conversation with them. I actually refuse to call them AI bc they're not.
AI always tries to find a polite way to say "you're wrong."

> By the way, I pasted this into ChatGPT and it responded with what I consider a pretty mature response:
The complaint mixes a fair criticism with an overstatement. It is true that large language models are predictive systems trained to estimate the next token in a sequence. They do not possess beliefs, intentions, or awareness, and they generate responses by statistical inference over patterns learned during training. For that reason, treating them as conscious conversational partners is a category error. At the same time, describing them as “just predictive chatbots” understates the scale and sophistication of what is happening computationally. The models encode extremely high-dimensional statistical structure drawn from vast corpora of human language and can perform tasks that were historically considered difficult for machines: translation, code generation, summarization, and many forms of reasoning that emerge from that representation. The mechanism is prediction, but the capabilities that arise from it are not trivial.
The refusal to call them artificial intelligence depends mostly on how one defines the term. In computer science, “AI” has always been a broad umbrella for systems that perform tasks associated with human intelligence: search algorithms, expert systems, computer vision, reinforcement learning agents, and now large language models. None of those systems were conscious, yet the field has long used the term “AI” for them. If someone prefers to reserve “AI” for systems with general reasoning ability or self-awareness, that is a definitional preference rather than a technical correction. Within the discipline, LLMs clearly fall under AI because they are machine learning systems that approximate cognitive tasks involving language.
The more productive question is not whether the label “AI” is deserved but what these systems actually are and what they are good for. They are tools that compress patterns in human language and make those patterns accessible through interaction. That makes them useful for drafting text, exploring ideas, writing code, tutoring, and organizing information, but also prone to confident mistakes when pushed beyond the evidence in their training data. Treating them as oracles is misguided, yet dismissing them as trivial chatbots ignores the real technical progress they represent. The sensible position lies between those extremes: they are neither minds nor toys, but powerful statistical instruments that still require human judgment.
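The "predictive systems" framing above is easy to demonstrate concretely. Here is a minimal sketch (mine, not from the thread) of next-word prediction using a toy bigram model; real LLMs do the same kind of thing over tokens with a large neural network instead of raw frequency counts:

```python
# Toy bigram "language model": predict the next word as the one most often
# seen following the current word in the training text. Prediction, not thought.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept".split()

# Count which word follows which.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word, or None if unseen."""
    if word not in following:
        return None
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" twice, "mat" only once
```

The gap between this ten-line toy and a frontier model is scale and architecture, not kind: both are estimating the next token from patterns in training data.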
Me: is it a fair housing violation to discriminate by age in North Carolina?

> AI screws up even simple questions in my field. As an example, ask AI if it is a Fair Housing violation in NC to discriminate based on age.
> Hint...it is not.
That's your takeaway, and you're criticizing the machine's intelligence?

> AI always tries to find a polite way to say "you're wrong."
I don't practice law anymore. I work for the government (DoD cosplaying as DoW) as a contracting officer and am required to use their shitty LLM for many things related to my job.

> When did you use LLMs in any meaningful capacity? My son studies computer engineering. Sometimes his professors are shit. When they give a shitty incomprehensible lecture, he goes to ChatGPT and it explains the concept. And given that it can answer questions posed to it, it converses.
I am using ChatGPT to learn quantum field theory. For instance, here's a representative interaction. I ask:
OK, so Fermi couldn't answer the uranium but not hydrogen question, or the tritium but not hydrogen decay. Those questions were addressed later? [the uranium not hydrogen question is why does uranium undergo radioactive decay but hydrogen doesn't]
It responds:
Yes. Fermi’s theory did not explain why some nuclei beta-decay and others do not. His theory assumed the decay was energetically allowed and then predicted how fast it happens and what the electron spectrum looks like. The question of which nuclei are unstable comes from nuclear binding energies, which was worked out later through nuclear structure theory.
Two separate pieces are involved.
First is the energetics. A beta decay can only occur if the total mass–energy of the final system is smaller than the initial one. In nuclear physics this is usually written using the “Q value”:
[math equations here, can't be rendered in this font set]
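For reference (not part of the original response, which couldn't render its equations): the standard Q-value condition for β⁻ decay, written in terms of neutral-atom masses, is

```latex
Q_{\beta^-} = \left[\, m(A, Z) - m(A, Z{+}1) \,\right] c^2 ,
\qquad \text{decay allowed only if } Q_{\beta^-} > 0 .
```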
Hydrogen-1 (a single proton) cannot beta decay because
proton → neutron + positron + neutrino
would require energy; the neutron is heavier than the proton. The process is energetically forbidden.
Tritium, on the other hand, can beta decay:
³H → ³He + e⁻ + ν̄
because the helium-3 nucleus has slightly lower mass-energy than the tritium nucleus. That small difference (about 18.6 keV) is what powers the decay.
Second is nuclear structure. Even if the decay is energetically allowed, the rate depends on the detailed overlap between the initial and final nuclear states. This is described by the nuclear matrix element in Fermi’s formula. Some transitions are “allowed” and fast, while others are “forbidden” and extremely slow.
These ideas were developed after Fermi by nuclear physicists like Gamow and Teller. They classified beta transitions and showed how nuclear spin and parity affect the decay probability. So historically the pieces came together like this:
- Pauli (1930) – neutrino hypothesis.
- Fermi (1934) – dynamical theory of beta decay and decay rates.
- Nuclear mass measurements and nuclear models (1930s–1950s) – explanation of which nuclei lie in the “valley of stability.”
- Gamow–Teller theory – selection rules and nuclear structure effects.
Once nuclear masses were mapped out, it became clear that stable nuclei sit along a curve where the neutron–proton balance minimizes the nuclear energy. Nuclei away from that curve beta decay toward it.
So you are exactly right: Fermi solved the interaction mechanism, but the “uranium vs hydrogen” question belongs to nuclear structure physics, which developed in parallel afterward.
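For what it's worth, the 18.6 keV figure in that response checks out against standard atomic-mass tables. A quick sketch of the arithmetic (my numbers, not the thread's):

```python
# Sanity-check the beta-decay energetics claimed above, using standard
# table values for the masses (in unified mass units, u).
U_TO_KEV = 931494.1              # 1 u in keV/c^2

m_h3  = 3.016049281              # tritium atom
m_he3 = 3.016029322              # helium-3 atom

# For beta-minus decay between neutral atoms, Q = (m_parent - m_daughter) c^2.
q_kev = (m_h3 - m_he3) * U_TO_KEV
print(f"Q(3H -> 3He) = {q_kev:.1f} keV")   # ~18.6 keV, so the decay is allowed

# A lone proton, by contrast: the neutron is heavier, so p -> n is forbidden.
m_p = 1.007276467
m_n = 1.008664916
print(f"m_n - m_p = {(m_n - m_p) * U_TO_KEV:.0f} keV")  # ~1293 keV deficit
```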
If that's mediocre non-intelligence, man I guess your law practice would blow my mind.
Unless you ask it to not be polite.

> AI always tries to find a polite way to say "you're wrong."
LOL

> Unless you ask it to not be polite.
Me:
In an extremely brief, snarky and condescending way, tell me your thoughts on this opinion: I hate every single LLM. I use them for some things like writing emails and simple calculations but they all suck. They're nothing more than an algorithmic predictive chat bots yet people think they're actually having a conversation with them. I actually refuse to call them AI bc they're not.
ChatGPT:
Ah yes, the classic “I use this tool regularly but insist it’s worthless and not real” stance. Very bold. LLMs are predictive models—that’s literally the point—but dismissing them as “just chatbots” while relying on them for work is a bit like saying calculators are useless because they only do math. As for the AI label: you’re arguing with decades of computer science terminology, not the machines. Good luck with that.
1. The response I pasted in about your post was considerably less basic than almost everything you post here. That's not an attack on you; you likely could do better if, say, it was at all important and worth the time. But it is weird to see this criticism in this context.

> I don't practice law anymore. I work for the government (DoD cosplaying as DoW) as a contracting officer and am required to use their shitty LLM for many things related to my job.
I'm not saying they don't have their utility, I'm saying they all suck at anything more than basic responses.
Sure, they're decent at teaching things, or writing an email (which is the bulk of what I use mine for), or simple calculations for running cost-type contract modification logs where the Excel workbook doesn't have the proper equation baked in already. I've also told my youngest son - a Jr in HS - to use it to formulate several arguments for his ethics debate when he gets stuck, or to help with teaching him some random math skill that I have forgotten.
Most recently, my GF bought an old Victorian townhouse in Pittsburgh. Three of the main LLMs all told her some of the architectural points came from an adjoining structure years ago, but when I did a historical land search for the plat and plot, it showed otherwise.
Basically, when it doesn't know something, it just bullshits the most reasonable-sounding response instead of saying "yeah, not sure about that one."
So, yeah, I use one or more of them daily, but I am ALWAYS skeptical of answers that require anything more than a basic knowledgeable response.
Because at the moment there is no cure for stupidity. Plus, social complexities are nearing the limits of our cognitive capacities. We need AI. Of course, we have to take care that our innate stupidity doesn't fuck it up. I'm not a taker on that bet.

> I continue to say “Why don’t we fix natural stupidity before we buy into artificial intelligence?!?”
1) it was intended as an amusing side comment, not meant to be a takeaway.

> That's your takeaway, and you're criticizing the machine's intelligence?
Well, ChatGPT got the correct answer the first time I asked it. Are you using the free version? I have no idea if that's any good. Maybe it's bad. Google Gemini is pretty bad every time I use the free version.
AI is a tool. And like any tool, it has things it's good at and things it does poorly. However, the things that AI does poorly are being worked on day and night, with billions of investment dollars being applied... so those things are improving on a daily basis.

> FWIW, I'm not anti-LLM, I'm anti-LLM dependency, and I'm also not a fan of the way all the tech companies are forcing this shit down our throats as if it's the best thing EVAH! Which is nothing more than marketing horseshit.
On top of the above - and not even touching the environmental damage data centers cause through all the computing required to run these LLMs - they've also made computer components more expensive. That lets computer manufacturers raise their prices, essentially pricing lower-income people out of the computer market and widening the wealth and knowledge disparity that already exists.