Adobe recently announced generative AI video editing tools for Premiere Pro that could supercharge the workflow of any content creator.
Adobe announced that a slew of AI-fueled workflows is coming to Premiere Pro, one of the most popular video editing applications among professionals and amateurs alike. The end goal of all these AI tools is to streamline video editing: they will let users extend a shot and add or remove objects from a scene.
A video is a sequence of static image frames stitched together in a compressed container, so AI video editing is, to grossly oversimplify it, the process of mass-editing sequential images. Current image manipulation tools, such as inpainting in Stable Diffusion, are already very good at extending scenes, adding or removing objects, and more (like changing colors or expressions).
The problem is that doing this over hundreds or thousands of frames while maintaining consistency is a huge ask for any AI image editor. The few video AI tools that can attempt such a task today are very limited: output quality is lower, and training costs are insanely high.
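To see why consistency is the hard part, here is a minimal sketch of the naive approach: running an off-the-shelf image inpainting model over every frame independently. It uses the open-source diffusers and OpenCV libraries (nothing Adobe has published), and the checkpoint, prompt, and mask file are illustrative assumptions:

```python
# Naive per-frame video inpainting -- an illustration of the consistency
# problem, not how Premiere Pro's tools work under the hood.
import cv2
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-inpainting",  # public inpainting checkpoint
    torch_dtype=torch.float16,
).to("cuda")

cap = cv2.VideoCapture("input.mp4")
mask = Image.open("object_mask.png")  # white = region to remove/replace
edited_frames = []

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # OpenCV yields BGR numpy arrays; the pipeline wants RGB PIL images.
    image = Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    edited = pipe(prompt="empty street", image=image, mask_image=mask).images[0]
    edited_frames.append(edited)

cap.release()
```

Because each frame is denoised with no knowledge of its neighbors, the edited region flickers from frame to frame. Fixing that means sharing information across frames (temporal attention, optical-flow guidance, and so on), which is exactly where quality drops and training costs explode.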
Adobe has been working on its own AI tools in a big way, and with these generative video tools it is going to change how video editing works forever. That is mainly because Premiere Pro is mainstream commercial software, and it isn't always running on very advanced hardware like Threadrippers and M3 Max chips. Even with a decent Windows laptop, you can now leverage these AI tools to do the same job as a professional video editor.
And don’t think this is going to be limited to video editing. The announcement makes it clear that audio editing will also improve. Premiere offers extensive audio editing tools (though the main Adobe software for audio work is Audition).
Commercial adoption of these tools, plus the streamlined, stable development timeline that only a big corporation such as Adobe can ensure, cements the use case of AI in everyday video editing and dethrones the half-baked solutions we have today.
Powering all this is not some random model built proprietarily by another organization and merely linked to Adobe with additional fine-tuning. Under the hood, these workflows, like the other AI features across Adobe's application suite, are powered by Adobe Firefly. So far, Firefly's main use has been doing something like Stable Diffusion within Photoshop: text to image, image editing, image replacement, generative fill, and so on. The same tech, now scaled up, will edit hundreds of frames in one go to power the video editing tools.
- Better Videos: Typically, extending a scene means adding frozen frames. Now you can extend clips with proper motion. Likewise, removing and adding props is going to improve almost every kind of video based on real-life footage.
- Non-Destructive: Adobe makes it clear that the edits are non-destructive, meaning even if you’ve made a major change to your footage, you can revert to the original without having to re-import the file (the first sketch after this list shows the idea).
- Third-Party Models: One of the best things is that Adobe is adding support for developers to plug in their own AI models or variations of existing tools. This means that when you’re extending a video, you can use Adobe’s default model or a third party’s (such as OpenAI’s Sora for creating b-roll footage), which might be better tuned for a particular use case; the second sketch below illustrates the architecture.
- Watermarking: Adobe’s commitment to transparency means each video exported with AI tools will carry a label if AI was used (because, if it wasn’t clear to you already, generative AI for video can be used for a lot of bad, bad things), also noting exactly which model was used; the third sketch below shows the kind of record involved.
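Non-destructive editing isn’t unique to Adobe; editors generally implement it by recording edits as instructions that reference the untouched source file. Here is a toy sketch of the idea, with hypothetical names that have nothing to do with Premiere Pro’s actual internals:

```python
# Toy non-destructive editing model: the source file is never modified;
# edits are stored as replayable operations. Hypothetical structure,
# not Premiere Pro's project format.
from dataclasses import dataclass, field

@dataclass
class Edit:
    operation: str   # e.g. "extend_clip" or "remove_object"
    params: dict     # operation-specific settings

@dataclass
class Clip:
    source_path: str                 # original footage, untouched on disk
    edits: list[Edit] = field(default_factory=list)

    def apply(self, edit: Edit) -> None:
        self.edits.append(edit)      # recorded, not baked into the file

    def revert(self) -> None:
        self.edits.clear()           # back to the original, no re-import

clip = Clip("footage/interview.mp4")
clip.apply(Edit("extend_clip", {"seconds": 2, "model": "default"}))
clip.revert()  # the original file was never rewritten
```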
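Adobe hasn’t published what the third-party integration looks like, so purely as a generic illustration: “swap the default model for another one” usually means the editor codes against a common interface and picks the backend at call time.

```python
# Generic model-pluggability sketch -- a hypothetical interface, not
# Adobe's actual third-party model API (which isn't public).
from typing import Protocol

class VideoGenModel(Protocol):
    def extend_clip(self, frames: list, seconds: float) -> list: ...

class DefaultBackend:
    def extend_clip(self, frames: list, seconds: float) -> list:
        ...  # would call the built-in model (Firefly, in Adobe's case)

class ThirdPartyBackend:
    def extend_clip(self, frames: list, seconds: float) -> list:
        ...  # would call an external model (e.g. something like Sora)

def extend(model: VideoGenModel, frames: list, seconds: float) -> list:
    # The editor depends only on the interface, so backends are swappable.
    return model.extend_clip(frames, seconds)
```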
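The labeling builds on Adobe’s Content Credentials system (based on the open C2PA provenance standard), which travels with the file as signed metadata rather than pixels burned into the image. As a rough illustration of the kind of record involved, and emphatically not the real C2PA manifest schema:

```python
# Illustrative provenance record attached at export time. This mirrors
# the idea of Content Credentials, not the actual C2PA manifest schema.
import json

credentials = {
    "tool": "Adobe Premiere Pro",
    "ai_used": True,
    "ai_model": "example-video-model",   # exactly which model was used
    "actions": ["generative_extend"],
}

# In practice this is cryptographically signed and embedded alongside
# the exported file, so viewers can verify how the video was made.
with open("export_credentials.json", "w") as f:
    json.dump(credentials, f, indent=2)
```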