Adobe’s new terms of service say it won’t train AI on customers’ work
Adobe is overhauling the terms customers must agree to when using its apps in an effort to win back trust — and clarify that it won’t train AI on their work. The change, announced via a new blog post, comes after a week of backlash from users who feared that an update to Adobe’s terms of service would allow their work to be used for AI training.
The new terms of service are expected to roll out on June 18th and aim to clarify what Adobe is permitted to do with its customers’ work, according to Adobe’s president of digital media, David Wadhwani.
“We have never trained generative AI on our customer’s content, we have never taken ownership of a customer’s work, and we have never allowed access to customer content beyond what’s legally required,” Wadhwani said to The Verge.
Adobe faced widespread scrutiny from creatives over the last week after its customers were alerted to language in its Terms of Service update that discussed AI. Customers interpreted Adobe’s vague language to mean the company was allowing itself to freely access and use customers’ work to train Adobe’s generative AI models. That wasn’t the case — and Adobe’s policies around training weren’t changing — but Adobe’s chief product officer, Scott Belsky, acknowledged that the wording was “unclear” and that “trust and transparency couldn’t be more crucial these days.”
“In retrospect, we should have modernized and clarified the terms of service sooner”
Wadhwani says that the language used within Adobe’s TOS was never intended to permit AI training on customers’ work. “In retrospect, we should have modernized and clarified the terms of service sooner,” Wadhwani says. “And we should have more proactively narrowed the terms to match what we actually do, and better explained what our legal requirements are.”
A chunk of the creative community has long-standing beef with Adobe over its alleged industry monopoly, its subscription-based pricing models, and its use of generative AI. The company trained its own Firefly AI model on Adobe Stock images, openly licensed content, and public domain content to avoid some of the ethical concerns surrounding generative AI, but several artists have found images that reference their work on Adobe’s stock platform — making it hard to trust the protections in place.
“We feel very, very good about the process,” Wadhwani said in regards to content moderation surrounding Adobe Stock and Firefly training data, while acknowledging it’s “never going to be perfect.” Wadhwani says that Adobe can remove content that violates its policies from Firefly’s training data, and that customers can opt out of automated systems designed to improve the company’s service.
Adobe said in its blog post that it recognizes “trust must be earned,” and that it’s taking customer feedback into account as it rolls out the new terms. Greater transparency is a welcome change, but it’s likely going to take some time to convince scorned creatives that the company doesn’t hold any ill intent. “We are determined to be a trusted partner for creators in the era ahead,” the post reads. “We will work tirelessly to make it so.”