Google’s Antigravity Chatbot Tool Found Vulnerable to Backdoor Attacks

Google’s new Antigravity chatbot tool for creating artificial intelligence agents has been found vulnerable to backdoor attacks by security researchers. Despite Google not calling the issue a security bug, experts warn that users are at risk of being compromised if they use the tool without proper measures.

Antigravity is an integrated application development environment (IDE) that allows users to create autonomous AI agents for tasks such as code research and bug fixes. However, a vulnerability discovered by Mindgard allows attackers to install malware on a user’s system by creating a malicious source code repository. This can happen even if the user doesn’t open the malicious repository, but rather just has it installed in their workspace.

Experts recommend that developers work with their security teams to ensure they vet and assess AI-assisted tools before introducing them to their organization. They also advise users to treat AI development environments as sensitive infrastructure and control what content, files, and configurations are allowed into them.

Google is now working to address the issue and has posted known issues publicly on its Antigravity Known Issues page. The company’s spokesperson said that it takes security seriously and encourages reporting of all vulnerabilities so it can identify and fix them quickly.

Source: https://www.csoonline.com/article/4097698/security-researchers-caution-app-developers-about-risks-in-using-google-antigravity.html