The story of AI is beginning a new chapter. Is the next AI breakthrough around the corner, asks Agilebase’s CTO Oliver Kohll?
In a 13,000-word essay on his blog last week, Anthropic CEO Dario Amodei declared that an AI breakthrough is around the corner.
According to Amodei, we are 14 months away from the development of “Powerful AI.” He defines that as “smarter than a Nobel Prize winner” in fields like biology and engineering.
His essay has caused a stir in the tech world. Unlike other AI boosters such as OpenAI’s Sam Altman or Elon Musk, Amodei is seen as an “AI doomer.” He has spoken about the harm AI can do. His Amazon-backed company Anthropic makes Claude.ai, a Gen AI model that puts safety first. Anthropic was the first Gen AI firm to publish a “Responsible Scaling Policy,” in 2023. (He has just updated it.) Other Gen AI firms felt obliged to follow suit.
AI CEOs have been saying, “major changes are around the corner” for a while now. They all need money to stay competitive in a race to dominate this exciting but wildly expensive market. Techno evangelism boosts the stock price.
Anthropic needs more investment, too. At the time of writing, the firm is seeking an extra $40m. That could be the reason Amodei posted his essay. But before we dismiss him as a mere enthusiast, it’s worth noting that until now, he has been a voice of restraint in the hyperbolic world of AI.
Insiders have noticed that Amodei is scaling up his firm. He clearly believes we are heading into a different world soon. Is this another turning point for AI?
IT’S GOING TO HAPPEN SOON
The most striking thing about Amodei’s essay is his timeline. “Powerful AI”, which others have called “General Intelligence”, could be with us as soon as 2026, says Amodei. That’s 14 months away. How is that possible?
He believes AI is creating a “compressed 21st century.”
“Powerful AI will soon speed up scientific progress to such an extent that humanity experiences a century’s worth of advancements within a mere five to ten years,” he says.
Amodei predicts that AI will revolutionize healthcare. He cites the success of recent AI tools AlphaFold and AlphaProteo, which have made significant contributions to biological research. AI will help develop cures for infectious diseases, he says. It will even cure cancer.
But there’s more. AI will aid in understanding the complex neural mechanisms underlying mental illness, says Amodei. It could lead to more effective interventions to help cure depression, schizophrenia, and addiction.
He goes on to say we can use AI to optimize the distribution of health interventions, particularly in developing countries. This would lead to more fair access to healthcare worldwide. AI could also boost economic growth in developing nations, closing the gap between developed and developing countries.
A world with improved health, reduced poverty, and greater well-being would be more conducive to democratic values and international cooperation.
Amodei can sound grandiose: “It is worth looking at this list and reflecting on how different the world will be if all of it is achieved seven to twelve years from now,” he says. “It would be an unimaginable humanitarian triumph, the elimination all at once of the scourges that have haunted humanity for millennia.”
Amodei’s “manifesto” has been described as the most positive vision for tech to emerge from Silicon Valley in years.
However, he admits that “externalities” may constrain progress. Clinical trials take time, and AI may face a political backlash. He is aware of the risks of AI. Democracies may need to block bad actors from getting their hands on GPUs and semiconductors, he says. They may need to freeze out authoritarian regimes to stop them from developing AI for surveillance and control.
Amodei says we must be at the frontier to have a say in the future. If his firm, Anthropic, can lead in AI, he can push competing firms to ensure their models are safe. He concludes his essay by hinting that AI may be the final technology we ever need to invent.
It is quite a pitch, but is it right?
BRITTLE REASONING
In the week Amodei published his essay, engineers at Apple published one of their own. It came to different conclusions about Gen AI.
Instead of grand claims, the Apple engineers emphasized the current limitations of AI.
They said that while models like OpenAI’s GPT or Claude.ai may exhibit impressive performance on standardised benchmark tests, they rely on probabilistic pattern matching rather than a true understanding of concepts.
To support this claim, Apple engineers conducted a study. They tested Gen AI on a collection of grade-school-level mathematical word problems.
The researchers created two modified datasets of straightforward maths problems. The first replaced specific names and numbers with new values. For instance, a problem about “Sophie getting 31 building blocks” could become “Bill getting 19 building blocks.” The researchers argued that this change shouldn’t affect the model’s ability to solve the problem if it understood the underlying mathematical concepts.
In the second dataset, the engineers introduced irrelevant details. A problem about picking kiwis might include the detail that “five of them were smaller than average.” This aimed to test whether the models could discern relevant information from distractions.
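The two perturbations are simple to picture in code. The sketch below is purely illustrative, not the Apple team’s actual code or data: it builds a templated word problem, swaps in new names and numbers (the first dataset), and appends an irrelevant clause (the second). A model that genuinely understood the arithmetic should be unfazed by either change.

```python
import random

# Hypothetical names and templates for illustration only.
NAMES = ["Sophie", "Bill", "Ava", "Omar"]
template = ("{name} gets {n} boxes of building blocks with {m} blocks "
            "in each box. How many blocks in total?")

def vary_entities(template, rng):
    """Dataset 1: swap names and numbers; the underlying maths is unchanged."""
    name = rng.choice(NAMES)
    n, m = rng.randint(10, 40), rng.randint(2, 9)
    question = template.format(name=name, n=n, m=m)
    answer = n * m  # the ground truth tracks the new numbers
    return question, answer

def add_distractor(question):
    """Dataset 2: append a detail that is irrelevant to the answer."""
    return question + " Five of them were smaller than average."

rng = random.Random(0)
q, a = vary_entities(template, rng)
print(q)
print("expected answer:", a)
print(add_distractor(q))
```

Grading is then mechanical: the perturbed question still has a single correct numeric answer, so any drop in a model’s score comes from the surface changes, not from harder maths.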
The results were disappointing for AI boosters. The models showed a significant drop in test scores: around 10% for the first dataset and as much as 65% for the second.
They found that current AIs excel at pattern matching but fail to demonstrate a true understanding of the concepts involved. They mimic reasoning steps observed in their training data rather than employing formal logical reasoning.
AI expert Gary Marcus called this “brittle reasoning.” He says fundamental improvements in AI reasoning will be needed before AI can tackle complex real-world problems.
A PERSONAL VIEW
Marvin Minsky co-founded the Massachusetts Institute of Technology’s AI laboratory in 1959. Eleven years later, he wrote: “In three to eight years, we’ll have a machine with the general intelligence of an average human being. The machine will begin to educate itself with fantastic speed. It will be at genius level in a few months, and a few months after that, its powers will be incalculable.”
He was more than a few years off in his prediction. But by the same measure, when Willis Haviland Carrier invented the first modern air conditioner in 1902, no one predicted: “Oh wow. This is going to create the modern city of Phoenix, Arizona.” Yet it came to pass.
Clearly, the future is hard to predict. Even so, I welcome Amodei’s positivity, even if it might be self-serving. But I worry that all the hype obscures reality. Given the world’s population, it is likely there are more geniuses on planet Earth at one time than there have ever been before. And yet, we face profound challenges. Adding more geniuses—albeit synthetic ones—may not be the answer.
For me, AI boosterism—or doomerism, for that matter—does AI a disservice. It obscures our understanding of what AI is good at and not good at.
AI’s value is not that it is amazingly clever. It is that it can do something simple that you don’t want to waste your own time on.
My firm, Agilebase, will soon start work with a charity, the Square Food Foundation. Their staff aren’t technical, but they are motivated. They are the perfect people to take advantage of our AI-driven no-code system. In our first meeting, they will use AI to build a back-office system in minutes.
In this way, AI can make the world more equal. In coding and other realms, studies show that AI improves the performance of less accomplished people more than it does more accomplished people. If you are an immigrant trying to write in a new language, for example, AI raises your abilities to average.
This is AI’s value today. What happens tomorrow, no one knows. Not even Anthropic CEO Dario Amodei.