“No one has created human-level artificial general intelligence yet; nobody has a solid knowledge of when we’re going to get there,” Goertzel told the conference audience. “I mean, there are known unknowns and probably unknown unknowns.”

“On the other hand, to me it seems quite plausible we could get to human-level AGI within, let’s say, the next three to eight years,” he added.

To be fair, Goertzel is far from alone in attempting to predict when AGI will be achieved.
Last fall, for instance, Google DeepMind co-founder Shane Legg reiterated his more-than-decade-old prediction that there’s a 50/50 chance that humans invent AGI by the year 2028. In a tweet from May of last year, “AI godfather” and ex-Googler Geoffrey Hinton said he now predicts, “without much confidence,” that AGI is five to 20 years away.
Best known as the creator of Sophia the humanoid robot, Goertzel has long theorized about the date of the so-called “singularity,” or the point at which AI reaches human-level intelligence and subsequently surpasses it.
Until the past few years, AGI, as Goertzel and his cohort describe it, seemed like a pipe dream. But with the large language model (LLM) advances OpenAI has made since it thrust ChatGPT upon the world in late 2022, that possibility seems ever closer — although Goertzel is quick to point out that LLMs by themselves are not what will lead to AGI.
“My own view is once you get to human-level AGI, within a few years you could get a radically superhuman AGI — unless the AGI threatens to throttle its own development out of its own conservatism,” the AI pioneer added. “I think once an AGI can introspect its own mind, then it can do engineering and science at a human or superhuman level.”
“It should be able to make a smarter AGI, then an even smarter AGI, then an intelligence explosion,” he added, presumably referring to the singularity.
Naturally, there are a lot of caveats to what Goertzel is preaching, not the least of which is that even a superhuman AI would not have a “mind” the way humans do. Then there’s the assumption that the technology’s evolution would continue along a linear path, as if in a vacuum, insulated from the rest of human society and the harms we bring to the planet.
All the same, it’s a compelling theory — and given how rapidly AI has progressed in the past few years alone, his comments shouldn’t be dismissed out of hand.