As the world gears up for elections across 50 countries including India, Meta, the parent company of social media giants Facebook and Instagram, has unveiled significant alterations to its policies regarding digitally created content.
These changes aim to tackle deceptive content generated by artificial intelligence (AI) and to mitigate its potential impact on the polls.
Scheduled to roll out in May, Meta’s initiative includes the introduction of “Made with AI” labels for AI-generated videos, images, and audio shared on Facebook and Instagram, a British daily reported.
Monika Bickert, Vice President of Content Policy at Meta, disclosed these updates in a recent blog post.
Additionally, Meta plans to apply distinct and more prominent labels to digitally altered media that poses a heightened risk of materially deceiving the public, irrespective of whether AI was used in its creation.
According to a company spokesperson, Meta will promptly implement the enhanced “high-risk” labels while transitioning its approach towards manipulated content.
Rather than simply removing such posts, Meta aims to keep the content up while giving viewers information about how it was created.
Previously, Meta had revealed plans to identify images generated by external generative AI tools through embedded invisible markers.
However, no definitive start date was provided at that time.
The revised labelling strategy will extend to content posted on Facebook, Instagram, and Threads.
Notably, Meta’s other platforms, including WhatsApp and Quest virtual-reality headsets, are governed by separate rules.
These policy adjustments arrive months ahead of the upcoming US presidential election in November, amid warnings from tech researchers regarding the potential influence of generative AI technologies.
Political campaigns, particularly in regions like Indonesia, have already begun leveraging AI tools, challenging existing guidelines set forth by Meta and leading generative AI provider OpenAI.
Meta’s Oversight Board urged the company to extend its policy to cover non-AI content as well, stressing that such content can be equally misleading.
It also advocated for the inclusion of audio-only content and videos portraying actions or statements fabricated or falsely attributed to individuals.