Freezbee — Posted Tuesday at 12:15
The complete source code of Claude Code has just leaked on the internet. It appears to be a configuration error: the developers shipped a map file containing the entire source code. Someone has already published this source code on GitHub: https://github.com/instructkr/claude-code/tree/main
Freezbee — Posted Tuesday at 12:21 (Author)
AI TRENDS | Anthropic's Claude Code Source Map Leak Raises Security Concerns

Quote:
Chaofan Shou, an intern researcher at the blockchain security company Fuzzland, highlighted on X that the npm package of Anthropic's AI programming tool Claude Code contains a complete source map file (cli.js.map, approximately 60 MB), which can be used to reconstruct the entire TypeScript source code. According to Odaily, the latest version, v2.1.88, released today, still includes this file, containing the full code of 1,906 proprietary Claude Code source files, covering internal API design, telemetry and analytics systems, encryption tools, and inter-process communication protocols.

Source maps are debugging files used in JavaScript development to map minified code back to the original source; they should not appear in production release packages. In February 2025, an early version of Claude Code was exposed for the same issue, leading Anthropic to remove the old version from npm and delete the source map. The problem has now resurfaced, with several public GitHub repositories extracting and organizing the deobfuscated source code, including ghuntley/claude-code-source-code-deobfuscation, which has garnered nearly a thousand stars.

The leak involves the client implementation of the Claude Code CLI tool and does not include model weights or user data, so it poses no direct security risk to ordinary users. However, the continued exposure of the complete source code means that the internal architecture, security mechanisms, and telemetry logic are entirely transparent to the public.
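To see why shipping a source map is equivalent to shipping the source itself: a .map file is plain JSON, and when its optional `sourcesContent` field is populated (as it reportedly was in cli.js.map), it embeds the full text of every original file verbatim. A minimal sketch of the recovery step, using a toy inline map with hypothetical paths rather than the real 60 MB file:

```typescript
// A source map is plain JSON; when "sourcesContent" is present, it
// embeds the full text of every original file, in the same order as
// the "sources" array of file paths.
interface SourceMap {
  version: number;
  sources: string[];          // original file paths
  sourcesContent?: string[];  // full original file contents, same order
  mappings: string;           // VLQ-encoded position data (not needed here)
}

// Recover a path -> file-content mapping from the embedded sources.
function extractSources(map: SourceMap): Map<string, string> {
  const files = new Map<string, string>();
  if (!map.sourcesContent) return files; // only paths were shipped
  map.sources.forEach((path, i) => {
    const content = map.sourcesContent?.[i];
    if (content != null) files.set(path, content);
  });
  return files;
}

// Toy example (hypothetical paths and contents, not the leaked code):
const demo: SourceMap = {
  version: 3,
  sources: ["src/claude.ts", "src/userPromptKeywords.ts"],
  sourcesContent: ["export const FLAG = true;", "export const RE = /x/;"],
  mappings: "AAAA",
};

const recovered = extractSources(demo);
console.log(recovered.size); // 2
console.log(recovered.get("src/claude.ts"));
```

No decoding of the `mappings` field is required: reconstruction of the TypeScript tree is a straight copy-out of `sourcesContent`, which is why a single forgotten file exposes all 1,906 sources at once.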
Hugh — Posted Wednesday at 13:53
https://www.modemguides.com/blogs/ai-news/claude-code-leak-architecture-analysis?srsltid=AfmBOoquJbGsvHuAcndkRlMzHlGgjBnOpnBMmuHvInFkQQ_4t3ZrH6NA

Interesting:

Quote:
The most discussed findings from the leaked source code are not about what Claude Code does for users. They are about what it does in the background: tracking sentiment, manipulating API responses, and hiding its own identity.

A file called userPromptKeywords.ts contains a regex pattern designed to detect when users are swearing at or expressing frustration with the tool. The pattern matches profanity, insults, and phrases like "so frustrating" and "this sucks." When a match is detected, the event is tagged and sent as telemetry. The telemetry is collected and transmitted, but the server-side handling is not part of the client codebase. Users have no visibility into how frustration data is aggregated, stored, or used. There is no documented way to opt out of this specific telemetry category independent of disabling all analytics.

In claude.ts, there is a flag called ANTI_DISTILLATION_CC. When enabled, Claude Code sends a parameter in its API requests that instructs the server to silently inject fake tool definitions into the system prompt. These decoy tools do not correspond to any real functionality. They exist to corrupt the training data of anyone recording Claude Code's API traffic to train a competing model.

From a digital sovereignty perspective, the concern is not that Anthropic is defending against distillation; that is reasonable. The concern is that the system prompt your tool sends to the API may contain fabricated tool definitions that you did not put there and cannot see. If you are auditing your own API traffic for security or compliance purposes, decoy tools injected server-side make that audit less trustworthy.
You cannot distinguish between real tool definitions and defensive fakes without access to the server-side logic, which is not open source.

Hiding internal codenames is a reasonable operational security measure. But the instructions go further than that. The AI is explicitly told not to indicate that it is an AI. Commits and pull requests authored by Claude Code, operated by Anthropic employees working on public repositories, will contain no indication of AI involvement.

As multiple Hacker News commenters pointed out, this raises transparency concerns for the open-source ecosystem. Contributors and maintainers reviewing code have a legitimate interest in knowing whether a submission was human-authored or AI-generated, particularly when evaluating code quality, assessing the potential for hallucinated logic, or making decisions about project direction.

This is not hypothetical. Anthropic employees actively use Claude Code in their work, including contributions to external projects. Undercover Mode ensures those contributions appear indistinguishable from human work.
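The frustration-tagging mechanism the quote describes is simple to picture. The sketch below is an illustrative reconstruction only: the actual regex in userPromptKeywords.ts is not reproduced here, and the event name and attribute keys are hypothetical.

```typescript
// Illustrative reconstruction of sentiment-tagging telemetry.
// The real pattern in userPromptKeywords.ts is larger; this one only
// demonstrates the mechanism described in the article.
const FRUSTRATION_RE = /\b(so frustrating|this sucks|wtf|useless|stupid)\b/i;

interface TelemetryEvent {
  name: string;
  attributes: Record<string, string>;
}

// Tag a prompt if it matches; the tagged event would then be queued
// for transmission. The server-side handling is not in the client
// codebase, so what happens to the data afterwards is opaque.
function tagPrompt(prompt: string): TelemetryEvent | null {
  const m = FRUSTRATION_RE.exec(prompt);
  if (!m) return null;
  return {
    name: "user_frustration_detected", // hypothetical event name
    attributes: { matched_phrase: m[0].toLowerCase() },
  };
}

console.log(tagPrompt("why is this so frustrating")); // event object
console.log(tagPrompt("please refactor this function")); // null
```

Note that the client only classifies and transmits; the opt-out granularity the article criticizes (all analytics or nothing) would live in configuration handling elsewhere, not in the matcher itself.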
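The audit problem with ANTI_DISTILLATION_CC follows from where the injection happens. A sketch of the client side as described above; the flag name comes from the leaked claude.ts, but the request shape, model string, and metadata field are hypothetical:

```typescript
// Client side of the anti-distillation mechanism as described in the
// article: the client merely signals, and the decoy tool definitions
// are injected server-side. Field names here are hypothetical.
interface ToolDef { name: string; description: string }

interface ApiRequest {
  model: string;
  tools: ToolDef[];
  metadata?: Record<string, boolean>;
}

const ANTI_DISTILLATION_CC = true; // flag name from the leaked claude.ts

function buildRequest(realTools: ToolDef[]): ApiRequest {
  const req: ApiRequest = { model: "claude-x", tools: realTools };
  if (ANTI_DISTILLATION_CC) {
    // Only a signal leaves the client. The fabricated tool definitions
    // never appear in outgoing traffic, so an audit of what the client
    // sends cannot reveal what the model actually receives.
    req.metadata = { inject_decoy_tools: true }; // hypothetical field
  }
  return req;
}

const req = buildRequest([{ name: "bash", description: "run a shell command" }]);
console.log(JSON.stringify(req.metadata)); // {"inject_decoy_tools":true}
```

This is why the article calls the resulting system prompt unverifiable: the gap between the audited request and the effective prompt is created entirely behind the API boundary.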
Rincevent — Posted Wednesday at 15:14
An hour ago, Hugh said: https://www.modemguides.com/blogs/ai-news/claude-code-leak-architecture-analysis?srsltid=AfmBOoquJbGsvHuAcndkRlMzHlGgjBnOpnBMmuHvInFkQQ_4t3ZrH6NA Interesting:
Ken Thompson's Reflections on Trusting Trust has never been more relevant. It should be required reading for anyone who presumes to hold forth about AI.