Adobe has a new tool that makes it easier for creatives to be reliably credited for their work, even if somebody takes a screenshot of it and reposts it across the web. The Content Authenticity web app launching in public beta today allows invisible, tamper-resistant metadata to be embedded into images and photographs to help identify who owns them.
The new web app was initially announced in October and builds on Adobe’s Content Credentials attribution system. Artists and creators can attach information directly to their work, including links to their social media accounts, websites, and other attributes that can be used to identify them online. The app can also record an image’s editing history, and helps creatives prevent AI models from training on their work.
The Content Authenticity web app is “currently free” while in beta, according to Adobe, though the company hasn’t said whether that will change when it becomes generally available. All you need is an Adobe account (which doesn’t require an active Creative Cloud subscription).
Images don’t need to have been created or edited in one of Adobe’s other apps to have Content Credentials applied. While Adobe apps like Photoshop can already embed Content Credentials into images, the Content Authenticity web app not only gives users more control over what information to attach, but also lets up to 50 images be tagged in bulk rather than individually. Only JPEG and PNG files are supported for now, but Adobe says that support for larger files and additional media, including video and audio, is “coming soon.”
Creators can also use the app to apply tags to their work that signal to AI developers that they don’t have permission to use it for AI training. This is far more efficient than opting out with each AI provider directly, which usually requires applying protections to each image individually, but there’s no guarantee that every AI company will acknowledge or honor these tags.
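Content Credentials are built on the open C2PA standard, and the opt-out tag described above corresponds conceptually to C2PA’s “training and data mining” assertion. As a rough, hedged sketch of what such a tag looks like under the hood (the label and entry names below mirror the published C2PA assertion, but this is illustrative only, not Adobe’s implementation or API):

```python
# Hedged sketch: a dictionary approximating a C2PA "training and data mining"
# assertion, the kind of machine-readable opt-out tag the article describes.
# The label and entry names follow the published C2PA assertion; treat the
# exact structure as illustrative rather than Adobe's internal format.
do_not_train_assertion = {
    "label": "c2pa.training-mining",
    "data": {
        "entries": {
            "c2pa.ai_generative_training": {"use": "notAllowed"},
            "c2pa.ai_training": {"use": "notAllowed"},
            "c2pa.data_mining": {"use": "notAllowed"},
        }
    },
}

# An AI crawler that honors the tag would check it before ingesting the image.
entries = do_not_train_assertion["data"]["entries"]
training_allowed = entries.get("c2pa.ai_generative_training", {}).get("use") != "notAllowed"
print(training_allowed)  # False
```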
Adobe says it’s working with policymakers and industry partners to “establish effective, creator-friendly opt-out mechanisms powered by Content Credentials.” For now, it’s one of several protections creatives can apply to their work to deter AI models from training on it, alongside systems like Glaze and Nightshade. Andy Parsons, senior director of content authenticity at Adobe, told The Verge that third-party AI protections are unlikely to interfere with Content Credentials, so creatives can apply both to the same work.
The Content Authenticity app isn’t just for creative professionals, however: it allows anyone to check whether images they find online have Content Credentials applied, much like the Content Authenticity extension for Google Chrome that launched last year. The web app’s inspect tool will recover and display Content Credentials even if image hosting platforms have stripped them out, alongside any available editing history, which can reveal whether generative AI tools were used to create or manipulate the image.
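For those who want to poke at this outside Adobe’s web app, the Content Authenticity Initiative also maintains open-source C2PA tooling. Below is a minimal sketch assuming the CAI’s Python bindings (published as c2pa-python); the class and method names are assumptions and may differ between SDK versions, so check the SDK’s documentation:

```python
# Minimal sketch: reading Content Credentials from a local image with the
# Content Authenticity Initiative's open-source c2pa Python bindings.
# NOTE: Reader and reader.json() are assumed names, not verified against a
# specific SDK release; the bindings' API has changed across versions.
from c2pa import Reader  # assumed import path for the c2pa-python package

def inspect(path: str) -> None:
    try:
        reader = Reader(path)  # assumed constructor: open a manifest reader for the file
        print(reader.json())   # manifest store as JSON, including any recorded edit history
    except Exception as err:
        print(f"No readable Content Credentials found: {err}")

inspect("example.jpg")  # hypothetical local file
```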
Because the Chrome extension and inspect tool don’t depend on platforms supporting Content Credentials themselves, content is easy to authenticate even on sites where images are routinely shared without attribution. And with increasingly accessible AI editing apps making manipulations harder to detect, Adobe’s Content Authenticity tools may also help prevent some people from being misled by convincing deepfakes.