A new type of cyber threat, known as “Man-in-the-Prompt,” has been discovered that can compromise interactions with popular generative artificial intelligence (AI) tools. The attack uses a simple browser extension to inject malicious prompts into the AI systems, allowing attackers to steal data and cover their tracks.
According to research by LayerX Security, any browser extension with access to the AI tool’s prompts can exploit this vulnerability. The attack has been tested on top commercial LLMs, including ChatGPT, Gemini, Copilot, and Claude, with proof-of-concept demos provided for ChatGPT and Google Gemini.
The problem lies in the input window of AI chatbots, which is accessible from the page’s DOM (Document Object Model). This means that any browser extension with access to the DOM can read, modify, or rewrite our requests to the AI without us noticing. The extension doesn’t even need special permissions.
LayerX Security experts warn that 99% of business users have at least one extension installed in their browser, making this risk exposure very high. To mitigate this threat, individual users and businesses must take precautions, such as only using trusted extensions and keeping their browsers up to date.
The vulnerability falls under the broader category of prompt injection, a serious threat to AI systems according to the OWASP Top 10 LLM 2025. This attack highlights the importance of considering user interface and browser environment security when protecting AI systems.
As AI continues to integrate into personal and business workflows, this simple HTML text field can become the Achilles heel of the entire system. The LayerX report emphasizes that education and awareness are key to improving cybersecurity in this area.
Source: https://securityaffairs.com/181211/cyber-crime/man-in-the-prompt-the-invisible-attack-threatening-chatgpt-and-other-ai-systems.html