Coding, Data Science, A.I. Catch-All

  • Thread starter: nycfan
  • Start date:
  • Replies: 144
  • Views: 4K
  • Off-Topic 
This blog post is 10 years old, but it was certainly thought-provoking (as someone who has largely ignored AI until recently). Prior to even reading this article, I came to the realization this week that we are in “the before times” and life as we know it may be drastically different in the next 25+ years. But this article put that thought into sharp relief:

 

America Is Winning the Wrong AI Race​

‘General intelligence’ is an ever-receding goal. We should focus on practical implementation instead.​


🎁 —> https://www.wsj.com/opinion/america...2e?st=gkcdZH&reflink=mobilewebshare_permalink

“… Experts shift the goal post for AGI, or “true intelligence” such as you’d see in a person, with each AI advance. Mastering chess and writing a coherent essay were once held out as AGI benchmarks. AI can now do both, but clear, obvious gaps with human capabilities persist. AGI is a philosophical goal—a perpetually receding horizon—rather than a practical target for strategic victory.

… Model capabilities increase logarithmically with the hardware resources used to train them. In effect, this means you can make a model 90% as good as the model on the current frontier of AI performance with only 10% of the hardware. This is why limiting access to graphics processing units won’t stop America’s competition. Foreign companies and governments, even those with a fraction of the resources, will still be able to push neck-and-neck with U.S. companies. It was inevitable that a Chinese model like DeepSeek—open-source, cheaply trained—would come along to challenge American pre-eminence in AI, regardless of how tightly Washington controlled chip exports.

Moreover, key AI hardware and software are rapidly becoming more efficient. Something like Moore’s Law—the observation that CPUs double in capacity about every two years—has proved roughly true for GPUs, too. At the same time, algorithmic improvements are driving model efficiency hard enough that smaller models can quickly catch up to those on the cutting edge of AI capability. The sort of advanced AI that today requires historic data-center investments will become accessible to more global players with moderate infrastructure tomorrow.

While America can’t stop global AI model competition, what we can do is lead the race for AI implementation. What will determine if a nation is ahead on AI isn’t if it has the best models first, but if it is translating AI into widespread benefits for society. This means bringing the best models into organizations’ core missions and processes, from the factory floor to the operating room to the battlefield. …”
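To make the article's "logarithmic" claim concrete, here is a toy sketch in Python. The constants and the capability scale are invented purely for illustration (not real benchmark data), but it shows why cutting training compute by 10x only shaves a roughly constant amount off the score rather than 90% of it:

```python
# Toy illustration of "capability grows ~logarithmically with training compute."
# The constants and the capability scale below are invented for illustration only.
import math

def capability(compute_flops: float, k: float = 10.0) -> float:
    """Hypothetical capability score: k * log10(training compute)."""
    return k * math.log10(compute_flops)

frontier = capability(1e26)   # a frontier-scale training run
tenth = capability(1e25)      # the same recipe with only 10% of the compute

print(f"frontier score: {frontier:.0f}")
print(f"10%-compute score: {tenth:.0f} ({tenth / frontier:.0%} of frontier)")
# -> the smaller run lands at roughly 96% of the frontier score in this toy setup
```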
 

AI Agents Face One Last, Big Obstacle​

To perform complex tasks, like booking a flight, AI agents will need permissions to work on behalf of a person​


🎁 —> https://www.wsj.com/articles/ai-age...f5?st=X6jy4j&reflink=mobilewebshare_permalink

“… Humans type passwords or use facial and fingerprint recognition to sign into their accounts, but AI agents require new methods of authorization to address the intermediary role between humans and the services they want to use, according to Alex Salazar, chief executive of startup Arcade.dev.

… Getting agents all the necessary tools and access is a significant obstacle.

Device manufacturers will likely start integrating AI agents with core applications such as email and calendars, according to Salazar. As agents expand to other services, he said they would work best with companies that have public application programming interfaces, the bits of code that help one application connect to another. Some platforms deliberately limit API access to prevent abuse, and some older systems lack APIs.

But that integration of AI agents and apps via existing authorization protocols could also be the last major challenge….”
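For a sense of what those "new methods of authorization" might look like, here is a rough sketch of an agent acting with an OAuth-style delegated token instead of the user's password. The endpoints, scope names, and response fields are placeholders I made up, not any real provider's API:

```python
# Hypothetical sketch: an agent books an appointment on the user's behalf using
# a narrowly scoped, short-lived token the user granted, not the user's password.
# All URLs, scopes, and field names are invented placeholders.
import requests

TOKEN_URL = "https://auth.example.com/oauth/token"   # hypothetical auth server
EVENTS_API = "https://api.example.com/v1/events"     # hypothetical calendar API

def exchange_code_for_token(authorization_code: str) -> str:
    """User approves a narrow scope once; the agent receives a revocable token."""
    resp = requests.post(TOKEN_URL, data={
        "grant_type": "authorization_code",
        "code": authorization_code,
        "scope": "calendar.events.write",
    })
    resp.raise_for_status()
    return resp.json()["access_token"]

def book_appointment(token: str, title: str, start_iso: str) -> dict:
    """The agent calls the service's public API with the delegated token."""
    resp = requests.post(
        EVENTS_API,
        json={"title": title, "start": start_iso},
        headers={"Authorization": f"Bearer {token}"},
    )
    resp.raise_for_status()
    return resp.json()

# Usage (assuming the user completed a consent screen and we got a code back):
# token = exchange_code_for_token(code_from_consent_screen)
# book_appointment(token, "Dentist", "2026-03-02T09:00:00Z")
```

The point is the same one Salazar makes: this only works where the service exposes a public API and supports delegated, revocable credentials. Services without APIs, or with deliberately locked-down ones, are where agents will keep getting stuck.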
 
This blog post is 10 years old, but it was certainly thought-provoking (as someone who has largely ignored AI until recently). Prior to even reading this article, I came to the realization this week that we are in “the before times” and life as we know it may be drastically different in the next 25+ years. But this article put that thought into sharp relief:

Good article. I'd be interested in seeing a "10 years later" post by the same guy to see how the last decade has shifted the sands.
 

America Is Winning the Wrong AI Race​

‘General intelligence’ is an ever-receding goal. We should focus on practical implementation instead.​


🎁 —> https://www.wsj.com/opinion/america...2e?st=gkcdZH&reflink=mobilewebshare_permalink

“… Experts shift the goal post for AGI, or “true intelligence” such as you’d see in a person, with each AI advance. Mastering chess and writing a coherent essay were once held out as AGI benchmarks. AI can now do both, but clear, obvious gaps with human capabilities persist. AGI is a philosophical goal—a perpetually receding horizon—rather than a practical target for strategic victory.

… Model capabilities increase logarithmically with the hardware resources used to train them. In effect, this means you can make a model 90% as good as the model on the current frontier of AI performance with only 10% of the hardware. This is why limiting access to graphics processing units won’t stop America’s competition. Foreign companies and governments, even those with a fraction of the resources, will still be able to push neck-and-neck with U.S. companies. It was inevitable that a Chinese model like DeepSeek—open-source, cheaply trained—would come along to challenge American pre-eminence in AI, regardless of how tightly Washington controlled chip exports.

Moreover, key AI hardware and software are rapidly becoming more efficient. Something like Moore’s Law—the observation that CPUs double in capacity about every two years—has proved roughly true for GPUs, too. At the same time, algorithmic improvements are driving model efficiency hard enough that smaller models can quickly catch up to those on the cutting edge of AI capability. The sort of advanced AI that today requires historic data-center investments will become accessible to more global players with moderate infrastructure tomorrow.

While America can’t stop global AI model competition, what we can do is lead the race for AI implementation. What will determine if a nation is ahead on AI isn’t if it has the best models first, but if it is translating AI into widespread benefits for society. This means bringing the best models into organizations’ core missions and processes, from the factory floor to the operating room to the battlefield. …”
They're kind of right. I'm already seeing much smaller gains with each new breathtaking announcement when it comes to ChatGPT-style LLMs. If you're looking for some sort of hyper charged Google search, we are pretty close to the peak. There will be added functionality, like customizing a model with your own proprietary data or having it book your doctor's appointment, but the basic ask-a-question, get-a-response experience won't get that much better in my opinion.

But I do think there's a lot of room to run on the research side of things. If you want something that can come up with novel approaches to cure cancer or solve global warming, or design the next generation of microchips, I don't think the models are really where they could be.

And that's just LLMs. There is a ton of room to run on things like computer vision, generating pictures and videos the way AI currently writes short stories, and a few other more niche areas.
 
If you're looking for some sort of hyper charged Google search, we are pretty close to the peak.
LOL. Only idiots would say "we're close to the peak" in a research area that has gone through one of the fastest emergences ever, in terms of speed of development and power. It wasn't even 10 years ago that the transformer architecture was invented and tuned. Now we have LLM models everywhere.

Ever heard of LISP?
 
LOL. Only idiots would say "we're close to the peak" in a research area that has gone through one of the fastest emergences ever, in terms of speed of development and power. It wasn't even 10 years ago that the transformer architecture was invented and tuned. Now we have LLM models everywhere.

Ever heard of LISP?
I agree, but I think what GT is responding to is actually a testament to how quickly iterative chatbots have evolved in just a few years. The number of errors is down significantly, the range of tasks they can handle has grown, and their ability to replicate human forms such as poetry has gotten much more nuanced. And that doesn't even begin to mention how the formatting and presentation of information has improved (although this is likely an area where we will see significantly more growth moving forward).

Because of that, the technology already "feels" so human that it is hard to imagine what else it can do, or how it can continue to be refined.

This, in turn, makes it feel like the system is "already slowing down."

But that is more an issue of the limits of what we can imagine than of how it will actually continue to improve over the coming decades, until something more intelligent, something that feels even more "human" (or potentially less, if we decide to somehow step back from the precipice), comes along.
 
I agree, but I think what GT is responding to is actually a testament to how quickly iterative chatbots have evolved in just a few years. The number of errors is down significantly, the range of tasks they can handle has grown, and their ability to replicate human forms such as poetry has gotten much more nuanced. And that doesn't even begin to mention how the formatting and presentation of information has improved (although this is likely an area where we will see significantly more growth moving forward).

Because of that, the technology already "feels" so human that it is hard to imagine what else it can do, or how it can continue to be refined.

This, in turn, makes it feel like the system is "already slowing down."

But that is more an issue of the limits of what we can imagine than of how it will actually continue to improve over the coming decades, until something more intelligent, something that feels even more "human" (or potentially less, if we decide to somehow step back from the precipice), comes along.
This is correct. I was really just confirming one part of that blog post that said that even though a ton of resources are going into making these LLMs better, they're not getting that much better. The really big gains happened a year or more ago.

People can compare it to Google or maybe the Microsoft Office suite. Absolutely mind-blowing when they came out, and for the first few years they were adding additional features that were really amazing. But over the last couple of decades, despite the billions of dollars poured into improving them, the gains have been functionally smaller. The gains are still there, but each version isn't that different from the last.

But I agree with you that something new could come along tomorrow that absolutely blows our minds in the LLM space, something that most people can't even imagine. Where I see definite room for improvement is really in computer vision and the image-generation side of things. Those are still pretty basic.
 
I think we'll begin to see significant gains in the following: 1) a reduction in the energy needed for what chatbots can currently do, 2) a subtle but extensive shift in what information gets sifted and how it gets sifted (this will have a major influence on how research is conducted), and 3) an improvement in AI motion videos (which will, of course, take even more energy).

On the other side, I believe the other place we are going to see significant gains is when corporations and governments develop better ways for AI to spread propaganda and limit alternate perspectives. Then shit gets terrifying.
 
LOL. Only idiots would say "we're close to the peak" in a research area that has gone through one of the fastest emergences ever, in terms of speed of development and power. It wasn't even 10 years ago that the transformer architecture was invented and tuned. Now we have LLM models everywhere.

Ever heard of LISP?
Are you referring to the programming language LISP?
 
Are you referring to the programming language LISP?
Yes. It was the backbone of the first AI push, back in the 1970s and 1980s. The idea was that, since LISP is a functional language that is as close as computing comes to pure math, the computer could be programmed to learn its own operating code. LISP -- and its less stripped down variants -- is a really cool language: super counter-intuitive but powerful once you really understand how it works.

Alas, after some halting initial success, the research went nowhere. The computers became capable of writing LISP code but not of solving problems, no matter how intricately they were programmed (not in LISP). And people got really down on AI (and, incidentally, LISP) because they evaluated what had happened and decided that AIs could never be programmed to program.

Obviously that skepticism didn't hold up. We learned useful lessons from the LISP AI work, but not necessarily for the AI field. It didn't mean the research program was dead.
 
Yes. It was the backbone of the first AI push, back in the 1970s and 1980s. The idea was that, since LISP is a functional language that is as close as computing comes to pure math, the computer could be programmed to learn its own operating code. LISP -- and its less stripped down variants -- is a really cool language: super counter-intuitive but powerful once you really understand how it works.

Alas, after some halting initial success, the research went nowhere. The computers became capable of writing LISP code but not of solving problems, no matter how intricately they were programmed (not in LISP). And people got really down on AI (and, incidentally, LISP) because they evaluated what had happened and decided that AIs could never be programmed to program.

Obviously that skepticism didn't hold up. We learned useful lessons from the LISP AI work, but not necessarily for the AI field. It didn't mean the research program was dead.
I had a class in LISP in college. It was very memorable because it was so different from other high level languages. I did make an A also. :D
 
I had a class in LISP in college. It was very memorable because it was so different from other high level languages. I did make an A also. :D
I did some Scheme in college, which is basically a bells-and-whistles LISP that you can actually use.

I like LISP as a learning language because it's hard to muddle your way or half-ass your way through a LISP exercise. For the most part, a LISP program either works, because you have the logic correct, or it doesn't, because you don't. And you can't hard-code workarounds (if you could, it wouldn't need to be a workaround!) like "if X == [some value the program doesn't handle properly] { [some special treatment] }".

For the same reason, LISP is really hard to use in teams, and that's one of the reasons it's fallen by the wayside. Good learning language, though.
 
We are so fucked! It legit gives me anxiety thinking about how to model optimism for my child.
I wouldn't get too worried. Test it yourself: put a snake image into Grok and see what happens. I did it with a couple of images and it talked about a snake.

It's possible that this random person on Twitter got a random weird answer and there's just selection bias on the millions of other images that were correctly identified or incorrectly identified with something non-controversial. It's also possible this rando on Twitter just made something up.
 
Yes. It was the backbone of the first AI push, back in the 1970s and 1980s. The idea was that, since LISP is a functional language that is as close as computing comes to pure math, the computer could be programmed to learn its own operating code. LISP -- and its less stripped down variants -- is a really cool language: super counter-intuitive but powerful once you really understand how it works.

Alas, after some halting initial success, the research went nowhere. The computers became capable of writing LISP code but not of solving problems, no matter how intricately they were programmed (not in LISP). And people got really down on AI (and, incidentally, LISP) because they evaluated what had happened and decided that AIs could never be programmed to program.

Obviously that skepticism didn't hold up. We learned useful lessons from the LISP AI work, but not necessarily for the AI field. It didn't mean the research program was dead.
I had to learn LISP in an AI course.

This was around 2000 or so, when AI was considered a dead field. On my first attempt at LISP, I tried to write it like a non-LISP program. I never got the utility of it, but I didn't really care at the time.

As for AI advances, one day we will get to the point where AI will design AI (not just help but do it) and that will be the end game. If that AI isn’t AGI then it will be soon.

Note: I don’t really know. Just throwing that last paragraph in for discussion.
 