AI-generated videos aren’t just the future: They’re here, and they’re scary. AI companies are rolling out tech that can produce realistic videos from simple text prompts. Adobe is just the latest, and their AI-generated videos are impressive—even if the demos are brief.
Adobe Firefly Video Model
Firefly Video Model is a little different from the others we’ve seen so far. Most AI video generators work like AI image generators: You write a prompt describing what you’d like the model to make, and the model produces an output based on its training set.
That’s still happening here, as you can ask the model to produce a specific video. But Adobe is incorporating more AI video editing tools into the mix than something like OpenAI’s Sora. For instance, Adobe says you’ll be able to use camera controls, like angle, motion, and zoom, to “fine-tune” videos. In one of the demonstrated prompts, Adobe tells the AI to produce a video with a “dramatic dolly zoom camera effect,” while the sidebar shows multiple camera controls, including shot size, camera angle, and motion controls. You could, in theory, generate a video, click the “Handheld” motion option to add a shaky-cam look to the result, and control the intensity of that shake via a slider that appears in this menu.
The company also shows off examples of how this technology can be added to real video content: Adobe says you will be able to extend existing clips in your timeline using AI-generated video, through the Premiere Pro beta. The goal, according to the company, is to fill gaps in your timeline, so if you have a shot that isn’t long enough, AI can lengthen it artificially. The model is also reportedly capable of turning images into videos. If you have a picture or a drawing you want to use as a reference for an AI-generated video, you can use that in place of a text prompt.
You can also use the tool to add 2D and 3D animated effects to your videos. The demo video shows off a 2D motion effect applied to a real video of a person dancing, while another example shows the word “TIGER” made of fur over a field, blowing in the wind.
Adobe makes a point to say the video model is trained on works in the public domain, and is designed to be “commercially safe.” That is, of course, in stark contrast with other players in the AI space, like OpenAI, Midjourney, and Stability AI, many of whom are being sued for allegedly using copyrighted materials to train their AI models.
But any goodwill Adobe may have won from this decision may be cancelled out by the outrage over its policies, which seem to suggest the company can access any work users save to Creative Cloud for the purpose of training non-generative AI programs. Sure, it’s great that Firefly doesn’t steal from artists and isn’t going to get creatives in commercial trouble, but if you need to give up your own creative privacy to use it, is it worth it?
These tools will be available in Creative Cloud, Experience Cloud, and Adobe Express, as well as via firefly.adobe.com. Adobe has a waitlist to be notified when Firefly Video Model is available in beta, which you can sign up for here.
The bottom line
Here’s the thing: The clips in Adobe’s demo video look good. If you were watching the video out of context, you might not realize that most, if not all, of the demo shots presented were, in fact, totally artificially generated.
But Adobe cleverly shows most clips for only a second or two, which makes it difficult to get a sense of how well the generator really works. The quality of the subjects is solid and convincing, but without seeing how well the model replicates motion, or how realistic the outputs remain over the course of, say, a minute, it’s tough to say how this model will stack up against others.
The longest clip I’ve seen from Adobe is this four- or five-second video of a reindeer: It’s pretty darn realistic, and the wide-angle lens with a handheld feel probably helps sell the effect.
It’s possible Adobe has made some breakthroughs in the quality of AI-generated video. It’s also possible these videos are subject to the same flaws as existing generators, and will fall apart under time and scrutiny. Once Adobe shares longer demo videos, or rolls out the beta, we’ll have a better idea.