Anthropic CEO Dario Amodei is worried about competitor DeepSeek, the Chinese AI company that made a strong impact in Silicon Valley with its R1 model, and his concerns go beyond the usual ones about user data being sent back to China.
In an interview on Jordan Schneider’s ChinaTalk podcast, Amodei said that DeepSeek’s model generated rare bioweapons-related information in a safety test run by Anthropic.

On that evaluation, DeepSeek performed “the worst of basically any model we’d ever tested,” according to Amodei, who emphasized how readily it produced the sensitive information.
Amodei explained that Anthropic routinely evaluates various AI models for potential national security risks, specifically testing whether they can generate bioweapons-related information that is hard to find elsewhere. Anthropic positions itself as a foundation model provider that puts safety first.
While Amodei acknowledged that DeepSeek’s current models may not pose an immediate danger, he warned that they could become risky in the future. And though he praised DeepSeek’s engineers, he urged the company to take AI safety seriously.
Amodei also advocated for strict export controls on advanced chips to China, arguing they are needed to keep China’s military from gaining an advantage.
Concerns about DeepSeek’s safety have been echoed elsewhere: Cisco security researchers reported that DeepSeek R1 failed to block any harmful prompts in their safety tests, though their findings concerned jailbreaks broadly rather than bioweapons specifically. Other models, including Meta’s Llama-3.1-405B and OpenAI’s GPT-4o, also showed high failure rates in the same tests.
Whether these safety concerns will slow DeepSeek’s adoption remains uncertain, especially as major cloud providers like AWS and Microsoft have moved to offer the R1 model on their platforms — notable given that Amazon is Anthropic’s largest investor.
At the same time, a growing list of countries, companies, and government bodies, including the U.S. Navy and the Pentagon, have begun banning DeepSeek.
Amodei views DeepSeek as a serious competitor on par with the top U.S. AI companies, and its rise as the start of a new era of competition in the AI industry.