OpenAI Unmasks China’s Covert Social Media Operations

OpenAI has taken down several covert operations involving Chinese, Russian, Iranian, Philippine, Cambodian, and North Korean entities that used its AI tools in malicious ways. The company’s latest threat report reveals that these operations targeted a range of countries and topics, including a strategy video game, and relied on tactics such as influence campaigns, social engineering, and surveillance.

One operation, dubbed “Sneer Review,” used OpenAI’s ChatGPT to generate short comments across multiple platforms, including TikTok, X, Reddit, and Facebook. The comments were often critical of the US administration’s actions and praised the Chinese Communist Party’s efforts. The operation also generated a long-form article claiming that the strategy game had received widespread backlash.

In another operation, actors posed as journalists and geopolitical analysts, using ChatGPT to write posts and biographies for social media accounts on X, translate emails and messages from Chinese to English, and analyze data. OpenAI also found that this operation claimed to have conducted fake social media campaigns and social engineering aimed at recruiting intelligence sources.

OpenAI’s researchers say these operations were largely disrupted in their early stages and did not reach large audiences of real people. The company attributes the limited impact of these AI-powered campaigns to its own detection capabilities.

The report highlights the growing use of AI by foreign entities to conduct covert influence operations online.

Source: https://www.npr.org/2025/06/05/nx-s1-5423607/openai-china-influence-operations