That is interesting, and the comments are interesting too. But the result sounds gloomy even if it happens in 2040 instead of this decade.
Lots of this stuff is over my head, like his comment "After 2030, AI progress has to mostly come from algorithmic progress." I think he's saying that our computational power is so amazing now that if we don't have AGI yet, we never will without algorithmic progress — but I'd think algorithmic progress is a certainty. There was a time before Retrieval-Augmented Generation (RAG) was a thing, when LLMs had only very specific pools of knowledge to dip into, and RAG opened up some gates. I don't see why new concepts for chaining all these things together, or better ways of chaining them, won't keep coming out of the woodwork.
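To make the RAG point concrete: the core idea is just "retrieve relevant text, then prepend it to the prompt," so the model can draw on knowledge it was never trained on. Here's a toy sketch — the keyword-overlap scoring, the corpus, and the prompt format are all simplified stand-ins, not how any real RAG system scores relevance:

```python
def score(query: str, doc: str) -> int:
    # Crude relevance measure: count of shared lowercase words.
    # Real systems use embedding similarity instead.
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    # Rank documents by overlap with the query and keep the top k.
    return sorted(corpus, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    # Augment the user's question with retrieved context before it
    # ever reaches the language model.
    context = "\n".join(retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}"

# Hypothetical mini-corpus standing in for an external knowledge base.
corpus = [
    "AlphaStar played StarCraft II at a competitive level.",
    "RAG pairs a retriever with a language model.",
    "Pong was one of the earliest video games.",
]
print(build_prompt("How does RAG work with a language model?", corpus))
```

The "gate" it opens is that the model's usable knowledge is now whatever sits in the corpus, not just whatever was frozen in at training time.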
I saw "The Thinking Game" — a documentary about DeepMind, partly a bio of their CEO. The way they moved from training against rules-based games (Pong, then checkers, then Go) to training in make-believe environments (the way baby humans have to learn from a novel world) was fascinating, and the outcome was an AI that can be competitive in games with loose rules like StarCraft.
www.imdb.com