Meta has announced new measures to address manipulated media and AI-generated content, following feedback from the Oversight Board and an extensive policy review that included public opinion surveys and expert consultations.
Moving forward, Meta will implement "Made with AI" labels on a wider range of video, audio, and image content when industry-standard AI image indicators are detected or when individuals disclose that they're uploading AI-generated content.
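Meta has not detailed exactly which signals it checks, but one widely cited industry-standard indicator is the IPTC "DigitalSourceType" metadata value of "trainedAlgorithmicMedia" that several AI image tools embed in their output. The sketch below is only an illustration of how such an embedded marker could be detected in a file's metadata; it is an assumption about one possible indicator, not Meta's actual detection pipeline.

```python
# Illustrative sketch only: scan a file's raw bytes for the IPTC
# "trainedAlgorithmicMedia" DigitalSourceType marker that some AI image
# generators embed in XMP metadata. This is NOT Meta's implementation,
# just a hypothetical example of checking one industry-standard indicator.

AI_INDICATOR = b"trainedAlgorithmicMedia"  # IPTC DigitalSourceType for AI-generated media


def looks_ai_generated(path: str) -> bool:
    """Return True if the file's bytes contain the AI-provenance marker."""
    with open(path, "rb") as f:
        data = f.read()
    return AI_INDICATOR in data


if __name__ == "__main__":
    import sys

    for image_path in sys.argv[1:]:
        verdict = "AI indicator found" if looks_ai_generated(image_path) else "no indicator found"
        print(f"{image_path}: {verdict}")
```

In practice a production system would parse the XMP/C2PA metadata properly rather than scanning raw bytes, and would combine such signals with user self-disclosure, as Meta describes.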
This decision aligns with the Oversight Board's recommendation to provide transparency and additional context rather than removing content, thereby avoiding unnecessary restrictions on freedom of speech.
Meta also acknowledges that its previous approach was too narrow, focusing primarily on AI-generated videos. As the technology has evolved, particularly for realistic AI-generated audio and photos, Meta recognizes the need to address a broader range of manipulated media.
These changes aim to give users more information and context about the content they encounter on Meta platforms.
When Will Meta Label AI-Generated Content?
Meta plans to begin labeling AI-generated content in May 2024 and will stop removing content solely on the basis of its manipulated video policy in July 2024. This phased approach gives users time to understand the self-disclosure process and adjust to the new labeling system.
The decision-making process behind these changes involved consultations with more than 120 stakeholders across 34 countries and public opinion research with more than 23,000 respondents in 13 countries.
Most stakeholders supported the implementation of warning labels for AI-generated content depicting people saying things they did not say.