Anthropic Admits Engineering Errors Behind Claude Code’s Performance Drop Amid User Backlash
AI lab Anthropic reveals missteps causing Claude Code issues, sparking user cancellations and fueling compute shortage rumors

Anthropic, the AI company valued at $380 billion, has confirmed that a series of engineering mistakes caused a significant decline in the performance of its popular Claude Code tool, which led to widespread user dissatisfaction and subscription cancellations over the past month.
The admission follows weeks of mixed messaging from Anthropic, which first blamed users and later attributed the problems to user-focused changes. The shifting explanations have damaged trust and intensified speculation that the company is struggling with compute shortages amid soaring demand.
Engineering Missteps Behind Claude Code’s Decline
Anthropic detailed three key engineering errors responsible for Claude Code’s recent performance issues: a reduction in reasoning effort to cut latency, a bug causing the model to discard its reasoning history mid-session, and a restrictive system prompt limiting response length. All issues were fixed by April 20, and usage limits were reset on April 23.
User Backlash and Trust Erosion
Many users expressed frustration over Anthropic’s initial denial and slow response, accusing the company of gaslighting. Some power users have canceled subscriptions, cybersecurity experts warned of degraded code quality, and industry insiders labeled Claude Code unreliable for complex tasks.
“When you’re paying a lot of money for a product and it actually makes your job harder, to the point where it makes you start questioning the quality of your own work, it really becomes a problem.” —Muratcan Koylan, technical staff at Sully.ai
Compute Constraints and Industry Competition
Anthropic acknowledged that unprecedented demand has stretched its infrastructure, especially during peak hours, and is rapidly expanding compute capacity through partnerships with Amazon and Google. However, rumors persist that the company is facing compute shortages, limiting rollout of its new Mythos model and introducing usage caps.
Rival OpenAI has criticized Anthropic’s compute strategy, suggesting it operates on a smaller scale. Meanwhile, OpenAI recently launched GPT-5.5 and boasts millions of active users, intensifying the competitive pressure on Anthropic.
Impact on Code Quality and Developer Confidence
Independent analyses revealed a notable drop in code quality from Anthropic’s latest Claude models, with increased security vulnerabilities compared to OpenAI’s offerings. Cybersecurity experts warn that novice developers relying on Claude risk introducing serious defects into production code.
“I’m glad they are trying to address this, but a month to get this out is crummy.” —Dave Kennedy, CEO of TrustedSec
Looking Ahead: Rebuilding Trust and Scaling Responsibly
Anthropic has pledged greater transparency and faster communication regarding future updates to Claude Code. The company is focused on scaling compute resources responsibly to meet demand and restore user confidence as it prepares for a potential public offering later this year.
The coming months will be critical for Anthropic to retain its developer base and compete effectively against OpenAI in the rapidly evolving AI landscape.