Anthropic identified three separate bugs in the Claude Code harness, rather than in the AI models themselves, that caused quality degradation over the past two months. One notable bug involved session clearing after idle periods: the clear fired repeatedly instead of just once, making Claude appear forgetful. The incident underscores that in agentic systems, harness bugs are a distinct source of quality problems beyond model non-determinism.
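Anthropic has not published the offending code, so the following is only a loose illustrative sketch of how a "clears repeatedly instead of once" bug can arise: if the idle check compares against an activity timestamp that is never refreshed, then once the idle threshold is crossed, the session context gets wiped on every subsequent event. All names (`Session`, `on_event_buggy`, `IDLE_LIMIT`) are hypothetical, not taken from Claude Code.

```python
class Session:
    """Toy session holder illustrating the bug shape; not Anthropic's code."""

    IDLE_LIMIT = 60.0  # seconds of inactivity before context is cleared

    def __init__(self, clock):
        self.clock = clock          # injected time source, so the sketch is testable
        self.context = []
        self.last_active = clock()

    def _idle(self):
        return self.clock() - self.last_active > self.IDLE_LIMIT

    def on_event_buggy(self, msg):
        # Bug shape: last_active is never refreshed, so once the idle
        # threshold is crossed, the context is wiped on EVERY event.
        if self._idle():
            self.context.clear()
        self.context.append(msg)

    def on_event_fixed(self, msg):
        # Fix: refresh last_active after handling the event, so the
        # clear can only fire once per idle gap.
        if self._idle():
            self.context.clear()
        self.context.append(msg)
        self.last_active = self.clock()


# Demo with a fake clock: one long idle gap, then two events.
now = [0.0]
buggy = Session(lambda: now[0])
fixed = Session(lambda: now[0])
buggy.on_event_buggy("a"); fixed.on_event_fixed("a")
now[0] = 100.0  # idle gap longer than IDLE_LIMIT
buggy.on_event_buggy("b"); fixed.on_event_fixed("b")
buggy.on_event_buggy("c"); fixed.on_event_fixed("c")
print(buggy.context)  # the buggy session keeps re-clearing and "forgets"
print(fixed.context)  # the fixed session clears once, then retains context
```

Injecting the clock keeps the sketch deterministic; with a real wall clock the same logic applies, only the idle gap comes from actual elapsed time.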
Background
Claude Code is Anthropic's AI coding assistant, part of the growing ecosystem of AI-powered development tools. A harness is the infrastructure wrapping an AI model that manages sessions, memory, and user interactions.
- Source: Simon Willison
- Published: Apr 24, 2026 at 09:31 AM
- Score: 6.0 / 10