
At this year’s Adobe MAX 2025, the spotlight landed on a new wave of generative-AI tools that aim to re-imagine how we edit photos, video and audio.
Under the banner of “Sneaks,” which showcases experimental previews of future creative workflows, Adobe unveiled a set of innovations that make formerly labor-intensive tasks feel effortless and intuitive.
Sneaks aren’t publicly available to use, and they’re not guaranteed to become official features in Adobe’s Creative Cloud software or Firefly apps.
However, Adobe has once again shown what AI is capable of, and how the technology can boost productivity in ways that were previously daunting, expensive, or outright impossible.
First off, Project Frame Forward is essentially a tool that demonstrates how a change to a single frame of a video can ripple through the entire clip.
Rather than painstakingly masking objects frame by frame, editors simply annotate the first frame, for example, selecting and removing a person, and the tool applies the background correction across the full sequence.
Moreover, it supports insertion of new elements by drawing a placement and typing a prompt, e.g., “add a puddle reflecting the cat’s movement,” and the system generates context-aware scenes for the entire video.
This kind of workflow promises to drastically cut editing time and open up new creative possibilities.
Another key innovation is Project Light Touch, which gives creators unprecedented control over lighting in still images.
Want to turn a daytime interior into a moody nighttime scene? Or bend light around an object, change the direction or diffusion of shadows, tweak the color temperature or introduce RGB-style highlights?
Project Light Touch places all these controls in post-production, enabling users to drag a light source, adjust its warmth or angle, and see the environment respond in real time.
Another standout is Project Motion Map, a feature that allows users to bring their illustrations to life.
Motion Map uses AI to analyze static vector graphics and automatically animate them in ways that feel intentional and expressive, with no keyframes or manual rigging required.
On the audio side, Project Clean Take tackles one of the most tedious parts of video and podcast production: correcting speech and cleaning up background noise.
Whether a speaker mispronounces a word or a user wants to shift someone’s tone from flat to enthusiastic, Project Clean Take uses generative AI to replace words or adjust emotion while preserving the voice’s unique character.
It can also separate background audio into individual elements so editors can silence or modify specific tracks.
Then, there is Project Sound Stager, which should help creators design sound like never before.
It uses AI to analyze a video’s visuals, pacing, and emotional tone, then automatically generates layered soundscapes using expert sound design logic. Users can even collaborate conversationally with an AI "sound designer" to tweak the final mix.
Beyond these, the Sneaks slate included tools like Project Surface Swap.
This feature uses AI to swap materials/textures seamlessly, keeping lighting and perspective intact.
Another sneak is Project Turn Style.
This feature allows users to manipulate 2D images as though they were 3D: elements within an image can be rotated, reangled, or repositioned while maintaining their natural texture, lighting, and detail.
After that, there is Project Trace Erase, which makes object removal far easier.
The tool doesn't just erase things from an image. Using AI, it understands the objects meant for deletion, and using diffusion transformer models it can remove both the objects and their shadows, reflections, and even environmental distortions. In other words, the tool enables context-aware edits with almost no manual cleanup.
Another feature is Project Scene It.
The tool blends precision and artistry, letting users control both the structure and style of 3D scenes. Built on Image-to-3D and 3D-to-Image technologies, it supports tagging individual objects with reference images, preserving each object’s unique look while freely moving it in 3D space.
And lastly, there is Project New Depths, which allows users to edit photos with spatial awareness and depth.
It’s worth noting: these tools are experimental.
This means they appear at Adobe MAX because Adobe wants to inspire, test ideas, and gather feedback, and not all of them will necessarily make it into upcoming releases of Adobe’s Creative Cloud or Firefly toolset.
But many of today’s features began as Sneaks, suggesting a strong likelihood that some of these, in some form, will eventually land for creators.
For creators, editors, bloggers or gadget-enthusiasts, this moment matters. It signals a shift: editing is becoming less about manual labor and more about intention and creativity.
Whether users want to remove distractions from footage, reshape the lighting in an on-site shot, or fine-tune spoken audio in a blog video, these tools hint at workflows that are faster, more accessible, and certainly a lot more playful.