Anthropic suffered two significant accidental data exposures within a single week: it leaked internal files about an unannounced AI model and exposed nearly 2,000 source code files for its Claude Code tool. The company attributed both incidents to human error in packaging rather than to security breaches, though the exposed files reveal sensitive architectural details. The incidents sit awkwardly alongside Anthropic's public identity as a careful, responsible AI developer.
Background
Anthropic is an AI research company known for its focus on AI safety and responsible development, positioning itself as a cautious alternative to more aggressive AI firms. Claude Code is its command-line tool that uses AI to assist developers with coding tasks.
- Source: TechCrunch
- Published: Apr 1, 2026 at 07:58 AM
- Score: 6.0 / 10