
Runway has introduced a feature that lets users send an AI-generated character to join video calls in their place.
The tool works with Zoom, Google Meet, and Microsoft Teams, allowing the character to appear as a full participant rather than a static image or pre-recorded clip. It listens to the meeting audio, processes what others say, recognizes participant names from the video feed, and responds in real time with generated video and audio that includes lip-synced speech, facial expressions, and natural gestures.
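The loop described above (listen, interpret, respond with generated speech and video) can be sketched in miniature. Every name here is a hypothetical stand-in for illustration, not Runway's actual API; the real system presumably streams audio and video continuously rather than turn by turn.

```python
# Hedged sketch of the real-time participation loop: hear -> understand ->
# respond. All function and class names are hypothetical placeholders.
from dataclasses import dataclass


@dataclass
class Utterance:
    speaker: str
    text: str


def transcribe(audio_chunk: bytes) -> Utterance:
    # Stand-in for speech-to-text plus speaker identification from the feed.
    return Utterance(speaker="Elaine", text=audio_chunk.decode())


def generate_reply(utterance: Utterance, persona: str) -> str:
    # Stand-in for the character model (GWM-1 in Runway's description)
    # producing a response consistent with the configured personality.
    return f"[{persona}] responding to {utterance.speaker}: noted."


def meeting_turn(audio_chunk: bytes, persona: str) -> str:
    # One conversational turn; downstream, the reply text would drive
    # lip-synced video and audio synthesis for the character's feed.
    return generate_reply(transcribe(audio_chunk), persona)


print(meeting_turn(b"Any updates on the launch?", "Mochi"))
```

The point of the sketch is only the shape of the loop: each turn fuses what was said and who said it before the character model produces a reply in voice and on camera.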
The setup is straightforward and happens inside Runway's web interface.
Users first select or create a character from the Characters section. These characters start from a single reference image (photorealistic, stylized, human, or even non-human) and can be customized with details like voice style, personality description, and a knowledge base of relevant information.
No extra training is required.
Once the character is ready, the user pastes the meeting link and clicks to join. After a short delay, the character enters the call and begins participating autonomously.
"The whole setup takes about 60 seconds," Runway says in a dedicated documentation page.
You can now send a Runway Character to a video call in your place. Here's how:
Step 1: Choose or Create a Runway Character
Step 2: Paste your video meeting link (Zoom, Google Meet or Teams)
Step 3: Click "Join Meeting"
Try it today at the link below. pic.twitter.com/owcFkURuvp
— Runway (@runwayml) April 14, 2026
A short demonstration video shared by the company shows the feature in action during what looks like a casual team stand-up.
An orange cat character named Mochi, created to represent its owner Elaine, joins three human colleagues. The cat speaks directly to the group, provides updates on recent work, reacts to questions, and even teases a teammate in a sassy tone that matches the personality prompt entered earlier. The other participants see the cat's video feed alongside their own live cameras, and the conversation flows without anyone needing to intervene on the AI’s behalf.
This capability builds on Runway's earlier work in video generation models, particularly those focused on character consistency and real-time interaction.
The characters rely on the company's General World Model, known as GWM-1, which handles the underlying generation of coherent visuals and dialogue.
In practice, the system requires an API key from Runway's developer portal and uses credits that correspond to the length of generated video.
According to its GitHub page, new accounts receive a starter allowance of roughly thirty minutes of generation, enough to test a few short meetings.
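Since credits are said to map to the length of generated video, budgeting the starter allowance is simple arithmetic. The numbers and function below are illustrative assumptions, not Runway's actual billing logic.

```python
# Illustrative budgeting only: credits reportedly correspond to minutes of
# generated video, and new accounts start with roughly thirty minutes.
STARTER_MINUTES = 30.0  # assumed starter allowance (~30 min of generation)


def remaining_minutes(starter: float, meetings: list[float]) -> float:
    """Subtract each meeting's generated-video length from the allowance."""
    used = sum(meetings)
    return max(starter - used, 0.0)


# Three short test meetings of 5, 8, and 6 minutes:
left = remaining_minutes(STARTER_MINUTES, [5, 8, 6])
print(left)  # 11.0
```

Under these assumed numbers, a handful of short stand-ups fits comfortably inside the starter allowance, which matches the company's framing of it as enough for testing.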
The feature sits at the intersection of existing AI video tools and emerging real-time agents.
It extends beyond simple avatars by enabling contextual conversation and visual presence, though it still depends on the quality of the initial character setup and the meeting’s audio clarity.
For now, it is accessed through a dedicated web app or the main Runway platform, and users can mute or remove the character at any time from the control panel.
As video-call software continues to incorporate more AI elements, this kind of autonomous participant represents one practical step toward delegating routine interactions to digital stand-ins.
As this feature matures, its real-world impact will depend less on technical novelty and more on how people choose to integrate it into daily routines.
Early users have already begun testing it for low-stakes internal meetings, status updates, and even casual check-ins, where the presence of a consistent digital stand-in can reduce the friction of constant video appearances without fully removing the participant from the conversation. The system's ability to maintain character continuity across sessions suggests a future in which these AI companions could evolve into longer-term digital colleagues rather than one-off proxies.
At the same time, the introduction of autonomous video participants quietly reframes some of the unspoken rules of remote work.
Meetings have always been about attention, tone, and shared context; delegating any part of that presence to an AI invites new considerations around authenticity, accountability, and the boundaries between human and machine contribution.
For now, the tool remains an optional layer rather than a replacement, available to anyone with a Runway account and a few minutes to set up a character. Its quiet arrival signals a broader shift already underway: the gradual normalization of AI not just as a creative aid or assistant, but as a visible participant in the everyday spaces where decisions are made and relationships are built.