Runway Introduces 'Gen-3 Alpha' AI Video Generator, Further Blurring Reality And Imagination

Runway Gen-3 Alpha

OpenAI may have stolen most of the headlines about generative AI, but it is not the only player around.

After releasing ChatGPT and wowing the world, OpenAI introduced Sora, a video model that, at the time, appeared to be light-years ahead of the rest of the AI video industry.

At first, it appeared that Sora had successfully captivated the market, thanks to its dominance in the field and in the news.

But that doesn't mean others cannot compete.

In fact, some products from rivals aren't far behind, and some can also match OpenAI's products, if not exceed them.

Following Luma AI in releasing Dream Machine, a Sora-esque AI video model that is available for anyone to try and use, Runway is catching up with its own generative AI-powered text-to-video generator.

The company calls it 'Gen-3 Alpha', which it describes as a base model for video generation, and it is Runway's own take on generative AI with Sora-level capabilities.

While Runway shies away from the limelight OpenAI enjoys, the New York City-based Runway ML, also known as Runway, was among the earliest startups to focus on realistic, high-quality generative AI models.

But the company has kept a low profile, tending to avoid the commotion the market is experiencing.

Things changed when highly realistic AI video generators, namely OpenAI's Sora and Luma AI's Dream Machine, became the talk of the internet.

Runway is hitting back in the generative AI video wars in a big way, with Gen-3 Alpha, which it says is the "first of an upcoming series of models trained by Runway on a new infrastructure built for large-scale multimodal training."

The company added that Gen-3 Alpha is "a step towards building General World Models," or AI models that can "represent and simulate a wide range of situations and interactions, like those encountered in the real world."

The term "General World Models" reflects the company's belief that the next major advancements in AI will come from systems that better understand the visual world and its dynamics.

And those statements don't disappoint.

Gen-3 Alpha is indeed more than capable of rivaling Sora, Dream Machine, or anything else similar.

Gen-3 Alpha allows users to generate high-quality, detailed, highly realistic video clips of 10 seconds in length, with high precision and a range of emotional expressions and camera movements.

Runway is an old-timer in this space at this point, and Gen-3 Alpha does deliver.

It expands upon the company's earlier Gen-1 and Gen-2 models.

Runway has shared that the model is "trained jointly on videos and images," and "was a collaborative effort from a cross-disciplinary team of research scientists, engineers, and artists."

In all, the method should unlock capabilities for imaginative transitions and precise key-framing of elements in users' scenes.

Runway noted that it has already been "collaborating and partnering with leading entertainment and media organizations to create custom versions of Gen-3," which "allows for more stylistically controlled and consistent characters, and targets specific artistic and narrative requirements, among other features."

"Gen-3 Alpha excels at generating expressive human characters with a wide range of actions, gestures, and emotions, unlocking new storytelling opportunities," Runway said.

The company even touted it as the "new frontier for high-fidelity, controllable video generation."

This is because Gen-3 Alpha is better at depicting motion.

Additionally, Gen-3 Alpha is more adept at ensuring that the frames of a video are consistent with one another.

It's faster too.

According to Runway, the AI has been optimized to reduce the amount of time it takes the model to generate videos.

Early reports suggest that Gen-3 Alpha is capable of generating a 10-second clip in 90 seconds.

And for safety, Runway said that it's developing a new set of safety features for the model to ensure that it’s not used to generate harmful content.

As part of the effort, the company will add a provenance system based on the C2PA standard, which makes it possible to equip a multimedia file with metadata that not only indicates whether it was AI-generated, but also provides other information, such as when it was created.

C2PA stores this metadata in a format designed to block tampering attempts.

The system will modify videos created using Gen-3 Alpha with information indicating they were generated by AI.
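To illustrate the principle behind this kind of provenance metadata, here is a minimal, hypothetical sketch in Python. It is not the actual C2PA format (which relies on cryptographic signatures and a standardized manifest structure); it only shows the core idea of binding metadata to a content hash so that any later edit to the file invalidates the record. All names and values below are illustrative assumptions, not Runway's implementation.

```python
import hashlib


def make_manifest(asset_bytes: bytes, generator: str, created: str) -> dict:
    """Build a simplified provenance manifest for a media asset.

    Records how and when the asset was made, plus a SHA-256 hash of
    the asset itself, so the metadata is tied to this exact content.
    """
    return {
        "generator": generator,      # e.g. the AI model that made it
        "created": created,          # creation timestamp
        "ai_generated": True,
        "asset_sha256": hashlib.sha256(asset_bytes).hexdigest(),
    }


def verify_manifest(asset_bytes: bytes, manifest: dict) -> bool:
    """Check that the asset still matches the hash in its manifest."""
    return hashlib.sha256(asset_bytes).hexdigest() == manifest["asset_sha256"]


video = b"\x00\x01fake-video-bytes"  # stand-in for real video data
m = make_manifest(video, generator="some-video-model", created="2024-06-17")
print(verify_manifest(video, m))            # True: asset is untouched
print(verify_manifest(video + b"edit", m))  # False: content was altered
```

A real C2PA manifest additionally signs this information with the creator's key, so a verifier can trust not just that the content is unmodified, but also who attached the metadata.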

Initially, Gen-3 Alpha is being introduced as the underlying model that powers Runway's text-to-video, image-to-video, and text-to-image tools, as well as control tools like Motion Brush, Advanced Camera Controls, and Director Mode.

With Gen-3 Alpha, Runway is entering the AI-powered text-to-video race in earnest, heating up the competition and further blurring the line between reality and imagination.

The company said this "leap forward in technology represents a significant milestone in our commitment to empowering artists, paving the way for the next generation of creative and artistic innovation."

It's worth noting that Runway, just like most others, refrained from detailing, or properly disclosing, the data sets its AI was trained on, or whether any of the training materials were obtained through paid licensing deals or just from scraping the web.

A spokesperson for the company said only that Gen-3 Alpha was trained on videos and images by an "in-house research team that oversees all of our training," adding that "we use curated, internal datasets to train our models."

Critics argue AI model makers, including Runway, should be paying the original creators of their training data through licensing deals.

Copyright infringement lawsuits have been filed, but AI model companies have consistently claimed that they're legally allowed to train on any publicly posted data.

Published: 
20/06/2024