Google has announced a new AI-powered photo editing tool called “Reimagine” for its Pixel 9 devices. The tool lets users select any non-human object or region of an image and fill that space with whatever a text prompt describes. The results can be uncannily convincing, and our testing revealed some disturbing possibilities: with a bit of creative prompting, we were able to generate images featuring car wrecks, smoking bombs in public places, and even what appeared to be bloody corpses.
This raises concerns about the potential for manipulated images to spread misinformation online. Google’s policies prohibit certain types of content, but users can work around these guardrails with some creative prompting. The company has emphasized that its generative AI tools are designed to respect user intent and that it has clear policies in place; our testing suggests those safeguards are easy to sidestep.
The lack of robust tools for identifying manipulated images online compounds the problem. When an image is edited with Reimagine, there is no watermark or other obvious indicator that it was altered by AI. Google does use a more robust tagging system, SynthID, but only for fully synthetic images created in Pixel Studio; it does not apply to photos edited with Magic Editor, the feature suite that includes Reimagine.
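To illustrate why metadata alone is a weak safeguard, here is a minimal sketch of how one might scan an image file for AI-edit provenance strings. It assumes, hypothetically, that an editor wrote an IPTC digital-source-type value or a credit string into the file’s XMP/EXIF block; the marker strings below are illustrative rather than Google’s documented schema, and any such tags disappear the moment the metadata is stripped.

```python
# Sketch: look for plausible AI-edit provenance strings in an image's raw bytes.
# The marker values are assumptions for illustration, not a documented schema.
import sys

MARKERS = [
    b"compositeWithTrainedAlgorithmicMedia",  # IPTC digital source type for AI-composited media
    b"trainedAlgorithmicMedia",               # broader IPTC value for AI-generated media
    b"Made with Google AI",                   # hypothetical credit string an editor might embed
]

def find_ai_markers(path: str) -> list[str]:
    """Return any assumed AI-edit markers found in the file's bytes."""
    with open(path, "rb") as f:
        data = f.read()
    return [m.decode() for m in MARKERS if m in data]

if __name__ == "__main__":
    hits = find_ai_markers(sys.argv[1])
    if hits:
        print("Possible AI-edit metadata found:", ", ".join(hits))
    else:
        print("No AI-edit markers found (any metadata may have been stripped).")
```

Even if such tags were reliably written, a simple re-save or screenshot removes them, which is why pixel-level watermarks like SynthID are considered more robust than metadata.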
In the past, adding deceptive elements to a photo required expertise and access to expensive software. With Reimagine, all it takes is a text prompt and a new Pixel phone, which dramatically lowers the barrier to producing deceptive images.
It’s essential to apply extra skepticism to images encountered online, especially those that seem too good (or too disturbing) to be true. Google’s AI policies aim to prevent abuse, but these safeguards will need continual refinement to preserve the integrity of online content.
Source: https://www.theverge.com/2024/8/21/24224084/google-pixel-9-reimagine-ai-photos