Background

'Reimagine, Perform, Change, Redesign': Luma AI’s ‘Modify Video’ Makes Creativity Editable

Luma Modify Video

In conventional video production, altering the environment around a character isn’t as simple as it sounds.

Want to turn a sunny street into a moody alleyway? Or swap a classroom for a spaceship? People can’t just tweak a few settings—they have to start from scratch. That means reshooting the scene, re-rendering assets, and possibly using entirely new software or tools just to make it work. Every change demands more time, more budget, and more compromises.

It’s a process that’s not only slow and costly, but also creatively limiting.

For these reasons, directors and editors often have to choose between sticking with what they’ve already shot, or facing the uphill battle of redoing it all. As a result, bold ideas are left on the cutting room floor—not because they’re impossible, but because they’re impractical.

Luma AI wants to change this workflow with what it calls 'Modify Video,' a groundbreaking feature within its Dream Machine platform, designed to revolutionize video editing by allowing creators to transform scenes without reshooting.

Powered by Ray2, the tool enables users to reimagine environments, lighting, and textures while preserving the original performance, motion, and camera work.

In a blog post announcing the feature, Luma wrote:

"We believe professional creatives should be able to reimagine environments, lighting, and texture without losing the integrity of the performance, motion, camera or character. With Modify Video, you can keep what matters and evolve everything else."

Luma’s new tool makes it possible to extract full-body, facial, and lip-sync motion from any video clip. This motion data can be used to animate new characters, objects, or even camera movements that stay in sync with the original footage. For example, an actor’s performance can be applied to a CG creature, or objects can follow the same movement path as the original subject.

The tool also supports restyling, retexturing, and swapping environments. Users can change the overall look of a scene—such as lighting, setting, or time of day—while keeping the original motion and framing intact.

Using AI, the tool can also target specific elements within a scene, allowing changes to things like clothing, faces, props, or skies without affecting the entire shot. This reduces the need for traditional techniques like green screens or manual rotoscoping.

Unlike tools that rely solely on prompts or apply simple filters, Modify Video is built to give users detailed control over an entire shot's timeline. It works by analyzing performance signals such as body pose, facial expressions, and scene structure. This allows the system to determine what elements should remain unchanged and what can be modified or reimagined.

No green screens, no rotoscoping, no reshoots.

With Modify Video, users can influence the output with visual references, first-frame images, or text prompts—but the process always centers around the original video as the foundation.

Key capabilities include:

  • Preserves motion and action: Tracks details like pose, facial movement, and lip sync to maintain the integrity of the original performance.
  • Multiple output variants: Generate different versions from the same source motion, useful for testing styles or getting client approvals.
  • Prompt-optional interface: Control the results using images or visual input instead of relying entirely on text-based instructions.
  • Native resolution support: Compatible with common formats like 16:9 at 720p, making it easy to integrate into standard workflows.
  • Structured presets: Offers three preset modes—Adhere, Flex, and Reimagine—each defining how much the original scene is transformed.

The tool does have limitations. Unlike OpenAI's Sora or Google's Veo 3, it cannot generate full videos from scratch; it relies on an input video as its starting point, so users must already have footage in hand before Modify Video can do its work.

Beyond that constraint, Modify Video handles everything from subtle edits to full scene transformations, bridging the gap between what's captured and what's imagined.

Published: 
04/06/2025