Adobe released Content Credentials as a “nutrition label” for digital content. The system gives viewers a comprehensive view of a piece of content’s components: who created it, when it was made, which tools were used, and what edits came afterward. The strength of Content Credentials is supposed to be their permanence: through metadata and watermarking techniques, the mark of the credentials should survive even when the content is modified. So, at the Ink Kitchen, we put that to the test. You can watch the exploration in the newest installment of °F AI here:
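For a concrete sense of what that “label” is, here is a minimal sketch, not the official C2PA tooling, that checks whether a file still appears to carry a Content Credentials manifest. It relies on one known fact: C2PA credentials are embedded in JUMBF metadata boxes labeled “c2pa”. The file names are hypothetical, and a byte scan is only a rough heuristic, not a manifest parser.

```python
# Rough heuristic: Content Credentials (C2PA) manifests are stored in JUMBF
# boxes ("jumb" box type) whose manifest store is labeled "c2pa". Scanning
# raw bytes for those markers is not a real parser -- use official C2PA
# tools for verification -- but it shows whether the credential data is
# still physically present in the file.

def appears_to_have_content_credentials(path: str) -> bool:
    with open(path, "rb") as f:
        data = f.read()
    return b"jumb" in data and b"c2pa" in data

if __name__ == "__main__":
    # Hypothetical file names for illustration only.
    for name in ("firefly_original.jpg", "after_apply_image.jpg"):
        print(name, "->", appears_to_have_content_credentials(name))
```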

The commitment of significant players in the generative AI space, such as OpenAI and Meta, to disclosure practices for labeling AI-generated images marks a step toward universal adoption, an industry-wide movement aiming for a future where all AI-generated images are appropriately labeled. Sounds great, right? Except our exploration showed that a simple “copy file” or “apply image” is enough to remove the metadata. Opting in is voluntary. And the Content Credentials tool only recognizes Adobe-generated images: I opened one of the “6-fingered” images from the early days of generative AI, which any human at this point can identify as AI-made, and it was not recognized as AI because it came from outside the Adobe world.
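To see why an “apply image”-style re-encode strips the label, here is a short Pillow sketch under one assumption: writing only the pixels out to a fresh file, which is effectively what that operation does, drops ancillary metadata blocks by default, so the C2PA manifest is simply absent from the output. The file names are hypothetical.

```python
from PIL import Image  # pip install Pillow

# A minimal sketch of the "apply image" effect: re-encode just the pixels.
# Building a fresh image from pixel data carries over no metadata at all,
# so any embedded Content Credentials manifest does not survive the save.
src = Image.open("firefly_original.jpg")    # hypothetical input file
clean = Image.new(src.mode, src.size)
clean.putdata(list(src.getdata()))          # copy pixels only, no metadata
clean.save("stripped_copy.jpg", quality=95)
```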

Adobe claims, “As the [generative AI] tools become democratized, low-cost or free, and the quality of what those low-cost or free tools produce becomes indistinguishable from reality, of course, [Content Credentials are] ever more important.” Yet the tool is siloed in the Adobe world, which ultimately does not address the AI issue as a whole. And while other platforms are adopting disclosure methods, at this point they can be skirted simply by switching platforms.

Adobe’s AI models, including those behind Firefly, are trained exclusively on Adobe Stock images and public-domain content whose copyright has expired. Deepa Subramaniam, Adobe’s VP of product marketing, explains the multi-layered approach to responsible AI: “So between those checks, the ability to add that metadata in, and the way that the image model is trained on openly licensed content — we’re not scraping the open internet — the Firefly image model couldn’t create a Mickey Mouse or a Donald Trump because it’s never essentially seen a Mickey Mouse or a Donald Trump.”
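Those input-side “checks” are, conceptually, prompt filtering. Here is a toy, purely hypothetical denylist sketch to illustrate the idea; Adobe’s actual moderation pipeline is proprietary and far more sophisticated, and the blocked terms below are just the two examples from the quote.

```python
# Toy illustration of an input-side prompt check (hypothetical; Adobe's
# real filters are proprietary and far more sophisticated than a denylist).
BLOCKED_TERMS = {"mickey mouse", "donald trump"}  # examples from the quote

def prompt_allowed(prompt: str) -> bool:
    """Return False if the prompt mentions any blocked term."""
    lowered = prompt.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)

print(prompt_allowed("a cheerful cartoon mouse"))       # True
print(prompt_allowed("Mickey Mouse eating breakfast"))  # False
```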

It feels like more of the same: after-the-fact regulation that doesn’t offer a solution. Is it a step in the right direction, though? What do you think?
