
Adobe is also developing generative AI video.

Adobe says it is developing an AI model that can generate video. But it won't say when the model will launch, or reveal much about it beyond the fact that it exists.

Later this year, Premiere Pro, Adobe's flagship video editing suite, will incorporate the model, which is part of the company's expanding Firefly family of generative AI products. It amounts to a response to OpenAI's Sora, Google's Imagen 2, and models from a growing crop of startups in the nascent generative AI video space.

Like many generative AI video tools on the market today, Adobe's model creates footage from scratch, working from either a prompt or reference images. It also powers three new features in Premiere Pro: object addition, object removal, and generative extension.

All three are fairly self-explanatory.

With object addition, users can select a segment of a video clip (say, the upper third or lower-left corner) and enter a prompt to insert objects into that segment. In a briefing with Eltrys, an Adobe representative showed a still of a real-world suitcase filled with diamonds generated by Adobe's model.


Object removal erases objects from clips, such as boom mics or coffee cups lingering in the background of a shot.



Generative extension adds a few frames to the beginning or end of a clip (Adobe wouldn't say how many). It isn't designed to create whole scenes; rather, it adds buffer frames to sync a shot to a beat of music or holds on a shot for an extra moment to heighten its emotional impact.



As ever with generative AI tools, deepfakes are a concern. To address it, Adobe is bringing content credentials, metadata that identifies AI-generated media, to Premiere. Content credentials, a provenance-tracking standard backed by Adobe's Content Authenticity Initiative, are already present in Photoshop and in Adobe's Firefly image generation models. In Premiere, they will flag not only which content is AI-generated but which AI model generated it.

I asked Adobe what data, images, videos and so on, was used to train the model. The company wouldn't say, nor would it say how it is compensating the contributors to the data set, or whether it is compensating them at all.

A Bloomberg story last week revealed that Adobe is paying photographers and artists on its stock media platform, Adobe Stock, up to $120 for submitting short video clips to help train its video generation model. The pay reportedly ranges from around $2.62 to around $7.25 per minute of video, with higher-quality footage commanding higher rates.

That would be a departure from Adobe's current arrangement with the Adobe Stock photographers and artists whose work it uses to train its image generation models. The company pays those contributors an annual bonus, not a one-time fee, based on the volume of content they have in Stock and how it's being used. But the formula for the bonus is opaque, and contributors aren't guaranteed to receive it every year.

According to Bloomberg, generative AI video rival OpenAI takes a very different approach, training its models on freely available web data, including videos from YouTube. YouTube CEO Neal Mohan recently said that using YouTube videos to train OpenAI's text-to-video generator would violate the platform's terms of service, which underscores how shaky the fair use argument made by OpenAI and others is.

Companies like OpenAI are arguably violating IP law by training their AI on copyrighted content without crediting or paying the owners. Like some of its generative AI rivals, such as Shutterstock and Getty Images (which also have arrangements to license model training data), Adobe seems determined to avoid that fate, positioning itself with its IP indemnification policy as a demonstrably "safe" choice for enterprise customers.

As for pricing, Adobe isn't saying how much it will charge customers to use Premiere's upcoming video generation tools; presumably, pricing is still being hammered out.

The company did say, however, that the payment scheme will follow the generative credits system it introduced with its first Firefly models.

For Adobe Creative Cloud subscribers, generative credits refresh at the start of each month, with allotments ranging from 25 to 1,000 per month depending on the plan. More demanding workloads, such as generating higher-resolution images or multiple images at once, generally consume more credits.

The big question on my mind is whether Adobe's AI-powered video features will be worth whatever they end up costing.

Firefly's image generation models have been widely panned as underwhelming compared with Midjourney, OpenAI's DALL-E 3, and other rival tools, and with no release date in sight, the video model seems unlikely to escape the same fate. It doesn't help that Adobe declined to show me live demos of object addition, object removal, and generative extension, insisting instead on a pre-recorded sizzle reel.

Adobe says it is also in talks with third-party vendors about integrating their video generation models into Premiere, where they could power tools like generative extension. The move is likely a hedge on Adobe's part.

One of those vendors is OpenAI.

Adobe says it is collaborating with OpenAI on ways to bring Sora into Premiere. (An OpenAI partnership makes sense given the AI company's recent overtures to Hollywood; tellingly, OpenAI CTO Mira Murati will be attending the Cannes Film Festival this year.) Other early partners include Pika, a startup building AI tools for generating and editing video, and Runway, one of the first vendors to market with a generative video model.

An Adobe spokesperson said the company would be open to working with additional vendors in the future.

To be clear, none of these integrations is a finished product; at this stage, they're more of a thought experiment. Adobe stressed to me repeatedly that they're in "early preview" and "research," and that users shouldn't expect to be able to play with them any time soon.

And I think that sums up the mood of Adobe’s video presser.

With these announcements, Adobe is clearly trying to signal that it's thinking about generative video, if only in preliminary ways. It would be foolish to sit out the generative AI race and risk forfeiting a valuable new revenue stream, assuming the economics work out in Adobe's favor; AI models are expensive to train, run, and serve, after all.

But frankly, the concepts it's showing are pretty uninspired. With Sora out in the wild, and more innovations surely on the way, the company has a lot to prove.

Author: Juliet P.

