Dario Amodei, the CEO of $19 billion AI startup Anthropic, doesn’t think humanity is in any immediate danger from evolving artificial intelligence models and tools. But he has a problem with the justification some of his peers use to pooh-pooh the risks.
Speaking at an AI conference in San Francisco on Wednesday, Amodei took issue with the idea that AI models are, and will always be, mere chatbots with limited abilities. He criticized in particular a version of this mindset espoused by Marc Andreessen, the famed venture capitalist and champion of unrestricted AI, who has dismissed concerns by arguing that AI is really just math. “Restricting AI means restricting math, software, and chips,” Andreessen tweeted in March.
That logic doesn’t hold up, in Amodei’s view, because everything in the world can be classified as math, he said.
“Isn’t your brain just math? A neuron fires and sums up calculations, that’s math, too,” he said onstage at Eric Newcomer’s Cerebral Valley conference on Wednesday. “Like, we shouldn’t be afraid of Hitler, it’s just math. The whole universe is math.”
Amodei, a former OpenAI vice president who left in 2021 to start rival LLM firm Anthropic, is among a group of AI executives who openly warn of the technology’s potential risks, from rogue models to bad actors. The CEO supports some regulation of the AI industry and Anthropic even backed a controversial California bill to that end, which was ultimately vetoed.
Andreessen, whose VC firm has invested billions of dollars in scores of AI companies including OpenAI and Elon Musk’s xAI, is on the other side: an AI “boomer,” demanding unfettered development of AI technology by individual companies. “’Regulation’ of AI (math) is the foundation of a new totalitarianism,” the VC wrote last year. He also called AI safety critics, or doomers, “a cult.” A representative for Andreessen declined to comment on Amodei’s criticism.
Amodei acknowledged during the conference that the AI models of today “are not smart enough… not autonomous enough” to pose much serious risk to people. But he noted that the technology is evolving quickly, with AI “agents” capable of acting autonomously on a human’s behalf. As those still-nascent tools come to the forefront, Amodei said, the public will gain a deeper sense of what AI is capable of and of its potential harms.
“People laugh today when chatbots say something a little unpredictable,” Amodei said. “But we’re gonna have to do a better job of controlling the agents than that.”
Are you a tech company employee or someone with insight or a tip to share? Contact Kali Hays securely through Signal at +1-949-280-0267 or at kali.hays@fortune.com.