Meta may apply ‘penalties’ to users who fail to disclose use of generative AI for images

Meta's new policies require users to disclose and label AI-generated content, with the potential for "penalties" if they do not.

Meta will roll out new standards for AI-generated content on Facebook, Instagram and Threads over the coming months, according to a Feb. 6 company blog post.

Content that’s identified as AI-generated, whether through embedded metadata or intentional watermarking, will be given a visible label. Users on Meta platforms will also get the option to flag unlabeled content they suspect is AI-generated.
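As a concrete illustration of what metadata-based labeling can look like, the IPTC photo-metadata standard defines a "digital source type" value, trainedAlgorithmicMedia, that image generators can embed to mark synthetic media. The sketch below is not Meta's detection pipeline; it only scans a file's raw bytes for that marker rather than properly parsing XMP or C2PA structures, purely to show the general idea:

```python
# Illustrative sketch, NOT Meta's detection pipeline: scan an image file
# for the IPTC "trainedAlgorithmicMedia" digital-source-type marker that
# generators can embed in XMP metadata to flag AI-generated media.
import sys

# Real IPTC DigitalSourceType value used to signal generative-AI provenance.
AI_MARKER = b"trainedAlgorithmicMedia"

def looks_ai_generated(path: str) -> bool:
    """Return True if the file's raw bytes contain the IPTC AI marker.

    A production detector would parse the XMP/C2PA structures and verify
    cryptographic signatures; a naive byte scan just shows the concept.
    """
    with open(path, "rb") as handle:
        return AI_MARKER in handle.read()

if __name__ == "__main__":
    for path in sys.argv[1:]:
        verdict = "labeled AI-generated" if looks_ai_generated(path) else "no AI marker found"
        print(f"{path}: {verdict}")
```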

Crowd-sourcing

If any of this sounds familiar, it’s because it mirrors Meta’s early content moderation practices. Before the era of AI-generated content, the company (then Facebook) developed a user-facing system for reporting content that violated the platform’s terms of service.

Fast-forward to 2024, and Meta is once again equipping users across its social networks with tools to flag content, tapping into what may be the world’s largest consumer crowd-sourcing force.

This also means that creators on the company’s platforms will have to label their own work as AI-generated whenever applicable, or face potential consequences.

According to the blog post:

“We’ll require people to use this disclosure and label tool when they post organic content with a photorealistic video or realistic-sounding audio that was digitally created or altered, and we may apply penalties if they fail to do so.”

Detecting AI-generated content

Meta says whenever its built-in tools are used to create AI-generated content, that content receives a watermark and label clearly indicating its origin. However, not all generative AI systems have these guardrails embedded.
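Meta hasn’t disclosed how its invisible watermarks actually work. A classic textbook technique for hiding a signal in an image is least-significant-bit (LSB) embedding, sketched below purely for illustration; the AIGEN marker string and the red-channel encoding are invented for this example and are not Meta's scheme:

```python
# Toy "invisible" watermark via least-significant-bit (LSB) embedding.
# Purely illustrative; Meta has not published its watermarking method.
from PIL import Image  # pip install Pillow

TAG = "AIGEN"  # hypothetical marker string for this example

def embed(img: Image.Image, tag: str = TAG) -> Image.Image:
    """Hide `tag` in the red-channel LSBs of the first pixels."""
    out = img.copy()
    bits = "".join(f"{byte:08b}" for byte in tag.encode())
    for i, bit in enumerate(bits):
        x, y = i % out.width, i // out.width
        r, g, b = out.getpixel((x, y))
        # Clear the red channel's lowest bit, then write one tag bit into it.
        out.putpixel((x, y), ((r & ~1) | int(bit), g, b))
    return out

def extract(img: Image.Image, length: int = len(TAG)) -> str:
    """Read back `length` bytes from the red-channel LSBs."""
    bits = []
    for i in range(length * 8):
        x, y = i % img.width, i // img.width
        r, _, _ = img.getpixel((x, y))
        bits.append(str(r & 1))
    chunks = ["".join(bits[j:j + 8]) for j in range(0, len(bits), 8)]
    return bytes(int(chunk, 2) for chunk in chunks).decode(errors="replace")

if __name__ == "__main__":
    canvas = Image.new("RGB", (64, 64), (200, 150, 100))
    marked = embed(canvas)
    print(extract(marked))  # prints "AIGEN"
```

Real-world schemes are far more robust than this toy, surviving compression, cropping and re-encoding, which is also why detecting them at scale across third-party generators is a harder problem.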

The company says it’s working with other companies via consortium partnerships, including Google, OpenAI, Microsoft, Adobe, Midjourney and Shutterstock, and will continue to develop methods for detecting invisible watermarks at scale.

Unfortunately, these methods may only apply to AI-generated images. “While companies are starting to include signals in their image generators,” reads the blog post, “they haven’t started including them in AI tools that generate audio and video at the same scale.”

Per the post, this means Meta cannot currently detect AI-generated audio and video at scale, including deepfakes.

Related: Meta unveils Artemis chip to boost AI, cut Nvidia ties — Report