The controversy began when Adobe updated its terms of use. The revised language reads:
"Our automated systems may analyze your Content and Creative Cloud Customer Fonts (defined in section 3.10 (Creative Cloud Customer Fonts) below) using techniques such as machine learning in order to improve our Services and Software and the user experience."
This vague wording has sparked fears that any user-generated content, including work covered by non-disclosure agreements (NDAs), could be accessed and utilised by Adobe's AI systems for training purposes. Moreover, the changes had reportedly already taken effect in February, with Adobe only now notifying users.
The backlash was swift and fierce. Artists and designers took to social media to voice their concerns. One notable comment came from artist @SamSantala, who posted on X, "I can't use Photoshop unless I'm okay with you having full access to anything I create with it, INCLUDING NDA work?" This sentiment echoes a broader fear that sensitive and confidential projects may no longer be secure.
The main concern is not just about the potential misuse of personal or professional work but also about the implications for privacy and intellectual property rights. For many creatives, the idea that their work could be used as training data for AI, without explicit consent or compensation, is unacceptable.
In response to the uproar, Adobe later issued a statement clarifying that it does not use unpublished user content to train its Firefly AI models. The company emphasised that only content stored in its Creative Cloud, and not content stored locally on users' devices, would be accessed. Furthermore, Adobe highlighted that only public content, such as contributions to Adobe Stock and submissions for Adobe Express, is used to train its algorithms.
Interestingly, Adobe Chief Product Officer Scott Belsky noted that the company has had "something like this in TOS" for over a decade and that the new modifications were minor. He also said that the company's legal team is working to clear up the confusion caused by the ambiguous language.
"The focus of this update was to be clearer about the improvements to our moderation processes that we have in place," explained Adobe in a recent blog post. "Given the explosion of generative AI and our commitment to responsible innovation, we have added more human moderation to our content submissions review processes."
According to the post, Adobe applications access user content only for pertinent functions such as creating thumbnails and previews, developing machine-learning features such as 'Photoshop Neural Filters' and 'Remove Background,' and screening for illegal content such as child sexual abuse material. The company also clarified that it will never assume ownership of a customer's work.
However, the issue of using user content for AI training remains murky. While Adobe assures that only publicly available content is used, the potential for copyrighted material to slip through remains a concern. This issue is not unique to Adobe; other AI tools, such as Midjourney, have faced similar allegations of copyright infringement.
As Adobe navigates this controversy, it faces the challenge of balancing the improvement of its AI tools with the need to respect user privacy and intellectual property. For now, Adobe users are left grappling with the implications of the new terms, and many are reconsidering their use of the software for sensitive and confidential projects.