Anthropic has released a new cybersecurity-focused AI model called Claude Mythos Preview, a move that could mend its strained relationship with the US government, which soured after the company refused to develop technology for mass surveillance or autonomous weapons. The model represents a strategic pivot toward security applications that may align with government interests, and it highlights the ongoing tension between AI ethics commitments and government demands in national security contexts.
Background
Anthropic is an AI safety company best known for its Claude AI models. It has faced government criticism over its ethical stance on surveillance and autonomous weapons applications, having previously refused to develop technology for mass surveillance or fully autonomous lethal systems.
- Source: The Verge
- Published: Apr 18, 2026 at 04:14 AM
- Score: 6.0 / 10