In a recent announcement, Snap revealed its intention to incorporate watermarks into AI-generated images on its platform.
A translucent version of the Snap logo with a sparkle emoji will be included as a watermark on any AI-generated image that is exported from the app or saved to the camera roll.
According to the company’s support page, removing the watermark from images violates its terms of use. It remains unclear how Snap will detect when watermarks have been removed. We have reached out to the company for additional information and will update this story once we receive a response.
Other major tech companies, including Microsoft, Meta, and Google, have also taken steps to label or identify images created with AI-powered tools.
At present, Snap lets paying subscribers create or edit AI-generated images with Snap AI. Its selfie-focused feature, Dreams, also lets users enhance their photos with AI.
The company detailed its safety and transparency practices regarding AI in a blog post. It clarified that AI-powered features, such as lenses, are visually indicated by a sparkle emoji marker.
The company has also added context cards to AI-generated images produced with tools like Dreams to give users more information about them.
In February, Snap collaborated with HackerOne to implement a bug bounty program to thoroughly test its AI image-generation tools.
“We strive to ensure that all Snapchatters, regardless of their background, have fair and equal access to all features within our app, including our AI-powered experiences,” the company stated, adding that it is conducting additional testing to reduce potential bias in AI results.
Snapchat has worked to strengthen AI safety and moderation since launching its “My AI” chatbot in March 2023. The chatbot drew controversy at its initial release, as some users were able to engage it in conversations about sensitive topics such as sex, drinking, and other potentially unsafe subjects. The company subsequently added controls to the Family Center that let parents and guardians monitor and limit their children’s interactions with AI.