AI is advancing fast, and the competition is only getting more intense.
It was only back in November 2022 that OpenAI unveiled ChatGPT, a conversational AI model that rapidly captivated the public and marked a pivotal moment in the AI revolution. This groundbreaking development ignited a surge of advancements in AI, leading to innovations across various domains.
Building upon this momentum, Runway has recently introduced 'Gen-4,' a state-of-the-art AI model designed to revolutionize media generation and storytelling.
Gen-4 enables creators to generate consistent characters, objects, and scenes across diverse lighting conditions and locations, all from a single reference image. This capability empowers filmmakers, advertisers, and content creators to craft cohesive narratives with unprecedented ease and precision.
The release of Gen-4 marks a major leap forward in AI-driven video generation, offering highly dynamic videos with realistic motion and superior prompt adherence. By maintaining coherent world environments and preserving distinctive styles and cinematographic elements, Gen-4 sets a new benchmark for AI applications in creative industries.
Today we're introducing Gen-4, our new series of state-of-the-art AI models for media generation and world consistency. Gen-4 is a significant step forward for fidelity, dynamic motion and controllability in generative media.
Gen-4 Image-to-Video is rolling out today to all paid…
— Runway (@runwayml) March 31, 2025
On a page explaining Gen-4 on its website, Runway notes that the model delivers this consistency:
"All without the need for fine-tuning or additional training."
Using visual references, combined with instructions, Gen-4 allows you to create new images and videos with consistent styles, subjects, locations and more. Allowing for continuity and control within your stories.
To test the model’s narrative capabilities, we have put together…
— Runway (@runwayml) March 31, 2025
The idea is that Gen-4 is designed to address key challenges in AI-generated video production.
A primary advancement of Gen-4 is its ability to maintain consistent characters and objects across multiple shots. In previous AI-generated films, scenes often appeared as loosely connected, dream-like sequences, lacking realistic continuity. Gen-4 overcomes this by allowing creators to use a single reference image within Runway's interface to ensure character and object consistency throughout various scenes.
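To make this workflow concrete, here is a minimal, purely illustrative sketch of what reference-driven shot generation could look like against a hypothetical REST endpoint. The URL, payload fields, and response handling are assumptions made for illustration only; this is not Runway's actual API.

```python
# Illustrative sketch only: the endpoint, payload fields, and response shape
# below are hypothetical placeholders, not Runway's actual API.
import base64
import requests

API_URL = "https://api.example.com/v1/generate"  # hypothetical endpoint
API_KEY = "YOUR_API_KEY"                         # hypothetical credential

def generate_shot(reference_path: str, prompt: str) -> bytes:
    """Request one video shot conditioned on a single reference image."""
    with open(reference_path, "rb") as f:
        reference_b64 = base64.b64encode(f.read()).decode("ascii")
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "reference_image": reference_b64,  # same image reused every shot
            "prompt": prompt,
            "duration_seconds": 5,
        },
        timeout=120,
    )
    response.raise_for_status()
    return response.content

shots = [
    "the character walks through a rainy street at night",
    "the character sits in a sunlit cafe, medium close-up",
]
for i, prompt in enumerate(shots):
    video = generate_shot("character_reference.png", prompt)
    with open(f"shot_{i}.mp4", "wb") as f:
        f.write(video)
```

The design point is simply that the same reference image accompanies every request, which is what anchors a character's appearance across otherwise independent shots.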
Runway has showcased example videos where the same character appears consistently across different settings and lighting conditions, and identical objects are portrayed in varied contexts while retaining their appearance.
Additionally, Gen-4 enables filmmakers to capture the same environment or subject from multiple angles within a sequence—a feature that was challenging with earlier models like Gen-2 and Gen-3.
The previous model, Gen-3, released in June 2024, extended video lengths from two to ten seconds and improved coherence over Gen-2.
Gen-4 builds upon these advancements by offering enhanced consistency and control, marking a significant step forward in AI-driven video generation.
The Herd is a short film following a young man being chased through a field of cows at night. It was created using Gen-4 and just a few image references to build out each of the shots of the characters and the misty field of cows. It was then combined with Act-One to bring the…
— Runway (@runwayml) March 31, 2025
It was back in February 2023 that Runway introduced Gen-1, its inaugural AI video synthesis model.
Initially, Gen-1's creations were more experimental, serving as intriguing demonstrations of AI's potential in video generation.
However, through continuous enhancements, the tool has evolved, finding practical applications in real-world creative projects. Gen-1 allows users to generate new videos by applying the composition and style of an image or text prompt to the structure of an existing video, effectively enabling video-to-video synthesis.
As the technology behind AI continues to advance, so do the products built with it.
And here, Gen-4 stands as a notable milestone in generative AI, showcasing the technology's capacity to transform and enhance video content creation.
Gen-4 Image-to-Video is now rolling out to all paid plans and Enterprise customers. References will be available soon.
— Runway (@runwayml) March 31, 2025
"Gen-4 excels in its ability to generate highly dynamic videos with realistic motion as well as subject, object, and style consistency with superior prompt adherence and best-in-class world understanding," the company said.
"Runway Gen-4 represents a significant milestone in the ability of visual generative models to simulate real-world physics."
Gen-4, like all video-generating models, was trained on a vast corpus of example videos, "learning" the patterns within them in order to generate new footage.
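To give a loose intuition for what "learning the patterns" means here, the toy sketch below trains a linear model to predict the next frame of a tiny synthetic video, then rolls it forward to "generate" new frames. This is a deliberately simplified stand-in of my own construction; Runway has not published Gen-4's architecture, and real video models are vastly more complex.

```python
# Toy illustration of "learning patterns in videos": a linear model learns
# to predict the next frame from the current one. Nothing here resembles
# Gen-4's actual (unpublished) design; it only conveys the core idea.
import numpy as np

rng = np.random.default_rng(0)

def make_video(num_frames=32, size=8):
    """Synthetic 'video': a bright dot drifting one pixel right per frame."""
    frames = np.zeros((num_frames, size * size))
    for t in range(num_frames):
        frame = np.zeros((size, size))
        frame[size // 2, t % size] = 1.0
        frames[t] = frame.ravel()
    return frames

video = make_video()
X, Y = video[:-1], video[1:]  # (current frame, next frame) training pairs

# Linear next-frame predictor, fit by gradient descent on squared error.
W = rng.normal(scale=0.01, size=(64, 64))  # 64 = 8x8 pixels, flattened
for _ in range(2000):
    grad = X.T @ (X @ W - Y) / len(X)
    W -= 0.5 * grad

# "Generate new footage": roll the model forward from the first frame.
frame = video[0]
for _ in range(5):
    frame = frame @ W
print("max prediction error vs. true frame 5:", np.abs(frame - video[5]).max())
```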
As AI continues to evolve, Gen-4 competes directly with OpenAI's Sora, as well as Adobe's Firefly Video Model, Alibaba's Wan-2.1, Luma AI's Dream Machine, Kuaishou's Kling AI, CogVideoX from researchers at Tsinghua University and Zhipu AI, MiniMax's Video-01, and ByteDance's OmniHuman-1 and Goku, among others.
Together, they exemplify the transformative potential of artificial intelligence in reshaping how people create and consume media.
These innovations not only enhance creative workflows but also open new horizons for storytelling, enabling narratives that were once beyond imagination.
Learn more at the link below about how our research, product and creative teams worked in close collaboration to build Gen-4 for real-world productions and pipelines. https://t.co/BE5G9Wic7C
— Runway (@runwayml) March 31, 2025
It's worth noting that Runway has faced scrutiny regarding the sources of its training data.
The company has chosen not to disclose specific details about its data sources, citing concerns over competitive advantage and potential intellectual property (IP) issues. This lack of transparency has led to allegations that Runway utilized copyrighted content without permission to train its AI models.
Reports indicate that Runway's Gen-3 model was trained using thousands of YouTube videos and pirated films without obtaining explicit consent from content creators. A leaked internal document revealed that the company collected videos from various YouTube channels, including those of major media outlets and independent creators.
These practices have resulted in legal challenges. Runway, along with other generative AI companies, is facing lawsuits from visual artists alleging unauthorized use of their copyrighted works for AI training. The companies argue that their actions are protected under the fair use doctrine, but the outcomes of these cases remain uncertain.
The controversy surrounding Runway underscores the broader debate in the AI industry about ethical data usage and the balance between innovation and intellectual property rights. As AI continues to integrate into creative fields, addressing these concerns is crucial to ensure fair practices and maintain trust within the artistic community.
Further reading: Midjourney Can Finally Generate Consistent Characters Across Multiple AI-Generated Images