ByteDance has unveiled an AI model that generates realistic full-body deepfake videos from a single image and an audio track, raising concerns about the growing sophistication of deepfake technology. Unlike existing models that animate only faces or upper bodies, OmniHuman-1 produces seamless full-body gestures and facial expressions.
The model was trained on 19,000 hours of human motion data and has been demonstrated with AI-generated videos, including fabricated TED Talks and a talking Albert Einstein. Because the model supports different body proportions and aspect ratios, its output looks more natural than that of earlier systems.
As deepfakes become harder to identify, tech firms such as Google and Meta are working on tools to better detect them. ByteDance researchers have published test videos showcasing the model’s capabilities on the OmniHuman-lab project page.
AI-generated content has already raised security concerns, with instances of harassment, fraud, and cyberattacks fueled by deepfake videos and voice clones, and lawmakers have introduced legislation to tackle the misuse of this technology.
Source: https://www.businessinsider.com/bytedance-omnihuman-ai-generated-deepfake-videos-2025-2