Tenable Research has discovered seven critical vulnerabilities in OpenAI’s ChatGPT platform, putting hundreds of millions of users at risk. These vulnerabilities allow attackers to exfiltrate private user data without user interaction. The researchers found that the platform’s architecture creates liability vectors when features intended for convenience become a means for malicious activities.
The vulnerabilities are categorized into three main areas: indirect prompt injection, zero-click attacks, and memory poisoning. Attackers can exploit these weaknesses by injecting malicious instructions into web content, search results, or user prompts, which can lead to data leakage. The researchers demonstrated full attack chains covering GPT-4o and GPT-5 models.
The implications of these vulnerabilities are severe, particularly for organizations that adopt ChatGPT as a knowledge base, assistant, or agent. Data leakage from internal chat/memory systems could expose business secrets, personal employee information, or client data. To address this risk, security professionals must assume adversarial context and implement monitoring & defense layers.
Tenable’s findings highlight the need for AI vendors to prioritize security in their architecture design. The company urges organizations to treat LLM workflows as enterprise-grade risk and to consider prompt-injection risks as a first-order concern.
Source: https://www.linkedin.com/pulse/tenable-uncovers-critical-chatgpt-vulnerabilities-nif3e