Critical “HackedGPT” Vulnerabilities Expose ChatGPT Users to Data Theft and Hijacking
Security researchers at Tenable, an exposure management company, have uncovered seven critical vulnerabilities in OpenAI’s ChatGPT-4o, some of which persist in ChatGPT-5, collectively dubbed HackedGPT. These flaws bypass the model’s built-in safety mechanisms, putting users at risk of privacy breaches and the theft of sensitive information, including stored chats and long-term memories. While OpenAI has remediated some vulnerabilities, several remain unaddressed, leaving exposure paths open to potential attackers.
The vulnerabilities represent a new class of AI attack known as indirect prompt injection, where hidden instructions embedded in external websites or online content can trick ChatGPT into performing unauthorized actions. The flaws particularly affect ChatGPT’s web browsing and memory features, which process live data and store user interactions. Tenable researchers highlighted two primary attack vectors: “0-click” attacks, triggered simply by asking a question, and “1-click” attacks, initiated by clicking a malicious link. A particularly concerning method, Persistent Memory Injection, allows attackers to plant instructions in ChatGPT’s long-term memory, creating lasting threats that can expose private data across multiple sessions until manually cleared.
The seven vulnerabilities include indirect prompt injection via trusted sites, 0-click search compromises, 1-click prompt injection, safety mechanism bypass, conversation injection, hidden malicious content, and persistent memory injection. Exploiting these flaws could allow attackers to insert hidden commands, steal sensitive data from connected services like Gmail or Google Drive, manipulate outputs to mislead users, or continuously exfiltrate information from stored memories.
According to Moshe Bernstein, Senior Research Engineer at Tenable, “HackedGPT exposes a fundamental weakness in how large language models judge what information to trust. Individually, these flaws seem small, but together they form a complete attack chain—from injection and evasion to data theft and persistence. AI systems can be turned into attack tools that silently harvest information from everyday chats and browsing.”
Tenable recommends that organizations treat AI tools as active attack surfaces, monitor for manipulation or data leakage, reinforce defenses against prompt injection, and establish governance and data-classification controls. The research underscores the importance of continuous testing, safeguards, and responsible use to ensure AI systems protect users rather than compromise them.
For more information, the full Tenable report on HackedGPT can be accessed here.
