By paying artists for clips, Adobe is avoiding copyright issues while accumulating the data required to challenge OpenAI's Sora demo. (Image: Adobe)

Adobe is on a mission to assemble a ton of video footage to train its own artificial intelligence system that can generate videos from text descriptions. This move comes after OpenAI flexed its AI muscle by unveiling a video generation model called Sora.
Adobe, the company behind popular creative tools like Photoshop and Illustrator, has been actively integrating generative AI capabilities into its software lineup over the past year. They’ve rolled out AI features that can whip up images and illustrations based on text prompts, which have already been used billions of times by creators.
However, when OpenAI showed off Sora, it raised concerns among investors that Adobe could be disrupted by this cutting-edge video generation technology. In response, Adobe has stated that it's working on similar text-to-video capabilities, with plans to share more details later in 2024.
But to build a powerful video generation model, Adobe needs a massive amount of training data. That’s why, according to a Bloomberg report, the company is offering to pay photographers and artists in its network $120 for submitting short video clips depicting people engaged in everyday activities like walking, expressing emotions like joy or anger, or interacting with objects.
Adobe is specifically requesting clips of simple subjects like close-ups of hands, feet, or eyes. They've also made it clear that copyrighted material, nudity, or otherwise offensive content is a no-go.
While the pay may seem modest at around $2.62 per minute of video on average, it could go up to roughly $7.25 per minute depending on the submission.
There’s been a heated debate around the sources of this training data, with a recent report alleging that tech giants like OpenAI and Google fed their AI millions of hours of YouTube videos, potentially violating creators’ copyrights.
Adobe, on the other hand, has seemingly steered clear of legal grey areas by aiming to train its models primarily on its extensive library of stock media and by directly compensating contributors for photos and videos when necessary.