News

Anthropic Reveals Claude Code Quality Issues Stem from Harness Bugs, Not AI Models

April 24, 2026 – Anthropic has addressed recent reports of declining quality in Claude Code outputs, confirming that the high volume of user complaints over the past two months was rooted in genuine problems.

Anthropic's postmortem analysis revealed that the models themselves were not at fault. Instead, three separate and complex issues within Claude Code's "harness" – the framework that integrates the model with external systems – were directly responsible for the problems users experienced.

One particular issue highlighted in their detailed explanation stands out: On March 26, a change was implemented to clear Claude's older "thinking" from sessions that had been idle for over an hour. The intention was to reduce latency when users resumed these sessions. However, a critical bug caused this clearing mechanism to repeatedly activate with every turn for the remainder of the session, rather than just once. This led Claude to appear forgetful and repetitive to users.
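The failure mode described above can be illustrated with a minimal, hypothetical Python sketch. This is not Anthropic's actual code; all names (`Session`, `prune_old_thinking`, the handlers) are invented for illustration. The point is the difference between a cleanup that is gated only on "was this session stale?" (so it re-fires on every turn) and one that also records that the cleanup already ran.

```python
class Session:
    """Hypothetical session holding turns, each optionally with 'thinking'."""

    def __init__(self):
        self.turns = []
        self.pruned_on_resume = False  # the fixed variant sets this once

    def add_turn(self, text, thinking=None):
        self.turns.append({"text": text, "thinking": thinking})

    def prune_old_thinking(self):
        # Drop stored "thinking" from all existing turns.
        for turn in self.turns:
            turn["thinking"] = None


def on_turn_buggy(session, resumed_after_idle):
    # BUG: the condition depends only on the original idle gap, which stays
    # true for the rest of the session, so pruning fires on every turn.
    if resumed_after_idle:
        session.prune_old_thinking()


def on_turn_fixed(session, resumed_after_idle):
    # FIX: prune at most once per resumed session.
    if resumed_after_idle and not session.pruned_on_resume:
        session.prune_old_thinking()
        session.pruned_on_resume = True


def simulate(handler, n_turns=3):
    """Resume a stale session, then take n_turns; count surviving 'thinking'."""
    session = Session()
    session.add_turn("pre-idle turn", thinking="old reasoning")
    for i in range(n_turns):
        handler(session, resumed_after_idle=True)
        session.add_turn(f"turn-{i}", thinking=f"reasoning-{i}")
    return sum(1 for t in session.turns if t["thinking"] is not None)
```

Under the buggy handler, every new turn's thinking is wiped on the next turn, so only the most recent turn retains any; under the fixed handler, pruning happens once at resume and subsequent thinking survives. That matches the reported symptom of Claude appearing forgetful and repetitive for the remainder of the session.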

Many developers, including this article's author, routinely leave Claude Code sessions idle for an hour or longer before returning to them. The author estimates spending more time prompting in such "stale" sessions than in freshly started ones, suggesting the bug had a significant impact on everyday workflows.

For anyone building agentic systems, the report offers crucial insight: even setting aside the inherent non-determinism of the models themselves, the harnesses around them can harbor subtle, hard-to-diagnose bugs. Understanding these integration failure modes is essential for building robust AI systems.
