AI may fail just like the music industry did

The lack of awareness is amazing.

So you really think you're that brilliant. I mean... Cool for the bravado.
Did you go to Georgia Tech? Cause this really fits in line with the thinking of most Tech students
I did go to Georgia Tech and UNC. Smarter than the average bear, but I wouldn't consider myself brilliant.
 
You’re not trying to have any meaningful or intelligent conversation. No more so than assholes like ZenMode.

You have a pathetically miserable existence and you troll internet message boards to prop up your own self-worth.
Out of curiosity, how would you have wanted one to respond to "what the fuck you talking about?"

Are you hoping that they engage you in a hostile way? Are you hoping that they apologize? Are you hoping that they agree with you? Are you hoping that your aggressive response would so intimidate them that they don't make any response at all? Are you just hoping that other people that you don't know on an anonymous message board will give you the positive emojis?

Tell me what type of response would not have made you angry.
 
Out of curiosity, how would you have wanted one to respond to "what the fuck you talking about?"

Out of curiosity, how would you have wanted one to respond to your asinine assumption and stupid internet meme?

You’re so concerned with manufacturing hypocrisy in others that you ignore your own.

I think AI needs to die because it’s powerful. And far greater men than the ones we’ve chosen to lead us have been corrupted by that kind of power. There are no guard rails on the industry. Which makes it dangerous. That it’s taking jobs away from people is sad, but that’s the natural progression of our species. We once needed cobblers and blacksmiths.

As I said, you’re not interested in meaningful conversation. So crawl back under your rock and lord over the troglodytes. I’m not the one.
 
I did go to Georgia Tech and UNC. Smarter than the average bear, but I wouldn't consider myself brilliant.
Makes sense

Georgia Tech is filled with a really incredible air of misplaced arrogance (and I went to Duke... And GT, so I speak from double experience)
 
He's saying, too, that open source models (think Linux) could become as impactful as the large for-profit private models. I don't know if that's enough to meaningfully impact the large industry players, but certainly the smaller ones lose out and themselves become part of the open source models.

25 years ago I assembled a home recording studio centered around a Tascam 788 Portastudio. The quality of the output was pretty good but never yielded any longer term advantages, precisely because everybody else had started doing the same.

BTW, I like the Beato podcast platform. Wish we had this growing up in the 60s and 70s.
 
Makes sense

Georgia Tech is filled with a really incredible air of misplaced arrogance (and I went to Duke... And GT, so I speak from double experience)
Funny. Years ago I dated someone who went to both Duke and GT. She went to Duke for her freshman year of undergrad, but wasn’t happy there so transferred to Davidson. Then did a year of grad school at GT. She never said anything about GT students being arrogant but she did say they were really nerdy and socially awkward (speaking in generalities of course).
 
Funny. Years ago I dated someone who went to both Duke and GT. She went to Duke for her freshman year of undergrad, but wasn’t happy there so transferred to Davidson. Then did a year of grad school at GT. She never said anything about GT students being arrogant but she did say they were really nerdy and socially awkward (speaking in generalities of course).
Duke kind of just claims the snob thing.... My freshman year we got shirts "we're not snobs. We're just better than you"...

Georgia Tech is a place of misery. I mean when they graduate, their saying is "I got out" like it's a fucking prison. It was also hell for grad school. It's definitely a hard place... But then they graduate with this ridiculous cockiness that no one is on their level because they are engineers. A lot seems driven with the rivalry with Georgia. But for the remaining 9 years I lived in Atlanta and worked with tons of tech folks, it was very clear that the engineer arrogance is true.
They also have like 30% women so they are also horny...
 

I think AI needs to die because it’s powerful. And far greater men than the ones we’ve chosen to lead us have been corrupted by that kind of power. There are no guard rails on the industry. Which makes it dangerous. That it’s taking jobs away from people is sad, but that’s the natural progression of our species. We once needed cobblers and blacksmiths.
I think I'd be interested in a response closer to this. That sounds a lot more interesting than your other posts. Seems like you are capable..
 
Not seeing a better thread than this one, so I’ll post here. Really infuriating article in Axios today, as usual. Frames professors using blue book exams to combat AI cheating as a backwards or antiquated solution that will leave students ill-prepared to use AI in the workforce (wonder who is pushing this narrative, what a mystery….)


Blue book exams test a specific skill: synthesizing knowledge and constructing arguments under pressure, without assistance. They complement take-home essays and research papers; they don't replace them, as this piece seems to imply. Nobody claims they should be universal. Online classes won’t use them. This really isn’t complicated. The objections in this piece are strawmen or pre-existing complaints about university resource problems being retrofitted against one exam format.

Accommodations for timed exams have existed forever. “It doesn’t scale” is an adjunctification and TA funding problem rather than a problem with the blue book format itself. Further, professors were already giving extra time to students who asked, documented or not. That infrastructure predates AI by decades.

I graduated in 2021. Took blue book exams in history and poli sci throughout college, even in lecture classes with 300+ students. Also wrote take-home essays in those same classes, because that’s how courses actually work, something this piece weirdly ignores. AI wasn’t a thing yet and somehow I figured it out just fine when it arrived. Imagine that! It’s almost like the skills I built in college (constructing arguments, synthesizing information, thinking without a crutch) allowed me to adapt. Crazy how that works.

It seems to me that the “employers want AI-comfortable graduates” line gives the whole thing away. They try to dress it up as educational advocacy. IMO it’s the tech industry’s interest in building tool-dependent workers, dressed up as concern for vulnerable students.

If you can’t evaluate AI output critically, if you have no framework to catch it when it’s confidently wrong, then what’s the point? Other than to make students wholly dependent on this product of the tech industry.

Interested to hear from @donbosco and other college profs if we have any here
 
Sam Altman recently said he envisions intelligence as a utility, like water or electricity, paid for on a meter. That business model only works if people can’t generate their own. You can’t commodify thinking if people are capable of reasoning independently. Which makes the push to replace rigorous education with AI-integrated everything make a lot more sense as an economic project than an educational one.

Nobody has to be coordinating this for the incentive to be real. Of course the AI industry wants graduates who can’t think without their tools. Of course outlets cozy with the tech world frame academic rigor as dinosaur behavior. The interest is sitting in plain sight.

Teaching kids to think is bad for the metered intelligence business. Teaching them that independent thought is outdated and AI is inevitable is very, very good for it.
 
Duke kind of just claims the snob thing.... My freshman year we got shirts "we're not snobs. We're just better than you"...

Georgia Tech is a place of misery. I mean when they graduate, their saying is "I got out" like it's a fucking prison. It was also hell for grad school. It's definitely a hard place... But then they graduate with this ridiculous cockiness that no one is on their level because they are engineers. A lot seems driven with the rivalry with Georgia. But for the remaining 9 years I lived in Atlanta and worked with tons of tech folks, it was very clear that the engineer arrogance is true.
They also have like 30% women so they are also horny...
GT sounds kind of like Johns Hopkins. I never met anyone who went there who actually enjoyed going there. One of my law school classmates went there and said he didn’t realize going to a university could be fun until he went to law school. I didn’t think law school was that fun; at least not the first year-and-a-half (which was mostly the opposite of fun). He thought it was a blast all the way through compared to his college experience.
 
CNN Business posted an article just yesterday about how AI was "exhausting" workers and that researchers have dubbed it "AI Brain Fry."

To quote from the article:

"Part of the pitch for artificial intelligence in the workplace goes like this: It’s like having a team of people to delegate your grunt work to, freeing you up to think strategically and maybe, just maybe, take a long lunch or head home early. Or maybe even be more productive, to make more money. It’s a nice idea!

But as everyone who’s either had a boss or been a boss knows, managing is a job in itself, one that comes with its own distinct brand of stress and annoyance. And that doesn’t change if the “people” in question aren’t people at all.

For participants in a recent study by Boston Consulting Group, the experience of overseeing multiple AI “agents,” autonomous software that’s designed to execute tasks, rather than just churn out information like a chatbot, caused an acute sensation of “buzzing” — a fog that left workers exhausted and struggling to concentrate. The study’s authors call it “AI brain fry,” defined as mental fatigue “from excessive use or oversight of AI tools beyond one’s cognitive capacity.”

“Contrary to the promise of having more time to focus on meaningful work, juggling and multitasking can become the definitive features of working with AI,” they wrote in the study, published by Harvard Business Review last week. “This AI-associated mental strain carries significant costs in the form of increased employee errors, decision fatigue, and intention to quit.”

Link: https://www.cnn.com/2026/03/13/business/ai-brain-fry-nightcap
 
Not seeing a better thread than this one, so I’ll post here. Really infuriating article in Axios today, as usual. Frames professors using blue book exams to combat AI cheating as a backwards or antiquated solution that will leave students ill-prepared to use AI in the workforce (wonder who is pushing this narrative, what a mystery….)


Blue book exams test a specific skill: synthesizing knowledge and constructing arguments under pressure, without assistance. They complement take-home essays and research papers; they don't replace them, as this piece seems to imply. Nobody claims they should be universal. Online classes won’t use them. This really isn’t complicated. The objections in this piece are strawmen or pre-existing complaints about university resource problems being retrofitted against one exam format.

Accommodations for timed exams have existed forever. “It doesn’t scale” is an adjunctification and TA funding problem rather than a problem with the blue book format itself. Further, professors were already giving extra time to students who asked, documented or not. That infrastructure predates AI by decades.

I graduated in 2021. Took blue book exams in history and poli sci throughout college, even in lecture classes with 300+ students. Also wrote take-home essays in those same classes, because that’s how courses actually work, something this piece weirdly ignores. AI wasn’t a thing yet and somehow I figured it out just fine when it arrived. Imagine that! It’s almost like the skills I built in college (constructing arguments, synthesizing information, thinking without a crutch) allowed me to adapt. Crazy how that works.

It seems to me that the “employers want AI-comfortable graduates” line gives the whole thing away. They try to dress it up as educational advocacy. IMO it’s the tech industry’s interest in building tool-dependent workers, dressed up as concern for vulnerable students.

If you can’t evaluate AI output critically, if you have no framework to catch it when it’s confidently wrong, then what’s the point? Other than to make students wholly dependent on this product of the tech industry.

Interested to hear from @donbosco and other college profs if we have any here

Regarding testing and Blue Books. I gave them up around 15 years ago because 1) I could not read students' writing (and I translate and transcribe colonial Spanish documents in my research), 2) I don't hold in-class exams because at my former job close to 20% of my students had 504 Special Accommodations documentation anyway, 3) I did not want to single out the 504-holding students in front of their peers, 4) I have always been an advocate for Special Accommodations, from literal building accessibility to evaluation methods, and have long sought innovative ways of addressing those things (that has not always made friends for me of some of my colleagues, for a number of reasons).

I do not have an answer to AI and testing at present...I can run student essays through detectors and what I get back are inconclusive and often contradictory results. I give my students what must seem to some of them far-too-long talks about honesty -- but if they are going to be dishonest -- if they are determined to be dishonest -- then that is, after all, their life problem. But I'll know it even if I can't prove it.

I try and design my tests (and the way that I grade them) to make successful AI use difficult. A goodly amount of my evaluation is for presentations and performances (I permit 504 students to record those and have them played back in class to get around 504 issues).

I also try and be the kind of teacher that makes my students want to learn the material by tying it to their lives.

Proving it, by the way, is not really worth the time, effort, and headaches.
 
GT sounds kind of like Johns Hopkins. I never met anyone who went there who actually enjoyed going there. One of my law school classmates went there and said he didn’t realize going to a university could be fun until he went to law school. I didn’t think law school was that fun; at least not the first year-and-a-half (which was mostly the opposite of fun). He thought it was a blast all the way through compared to his college experience.
GT is the one school I told my son I wouldn't allow him to attend. I mean he can, but he would not receive any money if he did it. I want him to enjoy college.
 
CNN Business posted an article just yesterday about how AI was "exhausting" workers and that researchers have dubbed it "AI Brain Fry."

To quote from the article:

"Part of the pitch for artificial intelligence in the workplace goes like this: It’s like having a team of people to delegate your grunt work to, freeing you up to think strategically and maybe, just maybe, take a long lunch or head home early. Or maybe even be more productive, to make more money. It’s a nice idea!

But as everyone who’s either had a boss or been a boss knows, managing is a job in itself, one that comes with its own distinct brand of stress and annoyance. And that doesn’t change if the “people” in question aren’t people at all.

For participants in a recent study by Boston Consulting Group, the experience of overseeing multiple AI “agents,” autonomous software that’s designed to execute tasks, rather than just churn out information like a chatbot, caused an acute sensation of “buzzing” — a fog that left workers exhausted and struggling to concentrate. The study’s authors call it “AI brain fry,” defined as mental fatigue “from excessive use or oversight of AI tools beyond one’s cognitive capacity.”

“Contrary to the promise of having more time to focus on meaningful work, juggling and multitasking can become the definitive features of working with AI,” they wrote in the study, published by Harvard Business Review last week. “This AI-associated mental strain carries significant costs in the form of increased employee errors, decision fatigue, and intention to quit.”

Link: https://www.cnn.com/2026/03/13/business/ai-brain-fry-nightcap

The current young generation is the first that's cognitively less capable than their parents

It's very obvious. The Chromebooks, AI, social media... BRAIN ROT
 
Not necessarily AI-related, but it is amazing to me that even with the technological advancements we’ve had over the past 30 years, we don’t work much less than we used to. We can now do so much more in a day than we used to, yet we still seem to work the same amount of hours in many professions.

Take the legal profession, for example. There was a time there was no internet or email. Not even computers. Documents had to be sent in the mail and could take days to get from point A to point B. Now they can be sent and received instantaneously via email. We can type and revise documents so much faster with word processing software vs. typewriters. We can use legal research software to quickly find case law, statutes, and treatises rather than pulling out actual books and going through the tedious process of locating cases, etc. that way (something they still taught when I was in law school 25 years ago). We file pleadings online rather than having to go down to the courthouse to do it and receive court files online rather than having to go to the courthouse and request them from the clerk. We communicate with people via email rather than having to schedule conferences. We do virtual meetings, saving us the driving time that came with in-person meetings.

Yet somehow we don’t seem to be working fewer hours. At least not by much. How did anything ever get done back in the day? I feel like we do in one day what had to take a month 50+ years ago.
 
Not necessarily AI-related, but it is amazing to me that even with the technological advancements we’ve had over the past 30 years, we don’t work much less than we used to. We can now do so much more in a day than we used to, yet we still seem to work the same amount of hours in many professions.

Take the legal profession, for example. There was a time there was no internet or email. Not even computers. Documents had to be sent in the mail and could take days to get from point A to point B. Now they can be sent and received instantaneously via email. We can type and revise documents so much faster with word processing software vs. typewriters. We can use legal research software to quickly find case law, statutes, and treatises rather than pulling out actual books and going through the tedious process of locating cases, etc. that way (something they still taught when I was in law school 25 years ago). We file pleadings online rather than having to go down to the courthouse to do it and receive court files online rather than having to go to the courthouse and request them from the clerk. We communicate with people via email rather than having to schedule conferences. We do virtual meetings, saving us the driving time that came with in-person meetings.

Yet somehow we don’t seem to be working fewer hours. At least not by much. How did anything ever get done back in the day? I feel like we do in one day what had to take a month 50+ years ago.
It's amazing to me too. And not many of those gains really went to the workers that became so much more productive. The owners kept most of it.
 
Not necessarily AI-related, but it is amazing to me that even with the technological advancements we’ve had over the past 30 years, we don’t work much less than we used to. We can now do so much more in a day than we used to, yet we still seem to work the same amount of hours in many professions.

Take the legal profession, for example. There was a time there was no internet or email. Not even computers. Documents had to be sent in the mail and could take days to get from point A to point B. Now they can be sent and received instantaneously via email. We can type and revise documents so much faster with word processing software vs. typewriters. We can use legal research software to quickly find case law, statutes, and treatises rather than pulling out actual books and going through the tedious process of locating cases, etc. that way (something they still taught when I was in law school 25 years ago). We file pleadings online rather than having to go down to the courthouse to do it and receive court files online rather than having to go to the courthouse and request them from the clerk. We communicate with people via email rather than having to schedule conferences. We do virtual meetings, saving us the driving time that came with in-person meetings.

Yet somehow we don’t seem to be working fewer hours. At least not by much. How did anything ever get done back in the day? I feel like we do in one day what had to take a month 50+ years ago.
Automation can be a force to lessen everyone’s load when applied democratically. When it’s only used by capital to extract more surplus value, we will never see a shorter work week.
 
Automation can be a force to lessen everyone’s load when applied democratically. When it’s only used by capital to extract more surplus value, we will never see a shorter work week.
That has always been the dream. It sounds logical, and it rarely works. I do think automation and other advancements do help, but the gains should be so much bigger for workers. The real power to improve the lot of workers is collective bargaining, labor shortages, and to a lesser extent governmental pressure. It's unfortunate that we are living through a time when those forces are so weak.
 