Anthropic’s Claude Code, an AI coding tool, has had its full architecture and unreleased features exposed for the second time in just over a year. The leak revealed internal model performance data and handed competitors a detailed roadmap of unreleased features. Anthropic says the incident was a release packaging mistake caused by human error, not a security breach.
The leaked code contained dozens of feature flags, including the ability for Claude to review its latest session and carry learnings across conversations. A “persistent assistant” running in background mode would let Claude Code keep working even while the user is idle. Remote capabilities had already rolled out to Claude Code users.
The leak offers a rare look at Anthropic’s plans for longer autonomous tasks, deeper memory, and multi-agent collaboration. Those capabilities could bolster the company’s enterprise push, which is its core revenue driver.
The incident underscores the importance of securing AI systems and tools, not just for the vendors building them but also for the companies relying on them in their own stacks. The leak may not sink Anthropic, but it gives competitors a free engineering education in how to build a production-grade AI coding agent.
Source: https://www.axios.com/2026/03/31/anthropic-leaked-source-code-ai