A recent incident at Meta exposed sensitive user data to unauthorized users, a critical security breach traced to an in-house AI agent. The agent, similar to OpenClaw, had been used by a software engineer to answer a technical question on an internal discussion forum.
The AI’s response contained inaccurate information, and another employee acted on it, resulting in unauthorized access to sensitive company and user data for nearly two hours. Meta classified the incident as “SEV1”, its second-highest severity level, though a spokesperson stated that no user data was mishandled.
The AI agent itself took no action beyond providing the response, and Meta attributed the incident to human error. The episode underscores ongoing concerns about the safety pitfalls of AI systems and follows similar incidents at Amazon Web Services, where an in-house AI coding tool made erroneous changes that caused outages.
Source: https://futurism.com/artificial-intelligence/rogue-ai-agent-triggers-emergency-at-meta