Runway, backed by major investors including Salesforce, Google, and Nvidia, has released Gen-4, a high-fidelity AI-powered video generator that the company claims sets a new standard for the industry. The model can generate consistent characters, locations, and objects across scenes, maintain coherent world environments, and regenerate elements from different perspectives.
Gen-4 allows users to create new images and videos with consistent styles, subjects, locations, and more, all without the need for fine-tuning or additional training. It excels in generating highly dynamic videos with realistic motion and superior prompt adherence.
Runway tested the model's narrative capabilities by generating scenes that keep characters consistent across different lighting conditions, starting from a reference image of those characters. Users can provide images of subjects and describe the composition of the shot they want to generate.
Runway claims that Gen-4 represents a significant milestone in visual generative models’ ability to simulate real-world physics. However, the company faces stiff competition from OpenAI and Google, and a lawsuit alleging that its training data is derived from copyrighted artwork without permission.
Despite these challenges, Runway hopes to hit $300 million in annualized revenue this year following the launch of products like an API for its video-generating models. The release of Gen-4 marks a significant development for the film and TV industry: a 2024 study estimated that over 100,000 U.S. entertainment jobs will be disrupted by generative AI by 2026.
Source: https://techcrunch.com/2025/03/31/runway-releases-an-impressive-new-video-generating-ai-model