The Signal


Meta says it will start to label AI-generated images on Facebook and Instagram


Meta announced it will label AI-generated images on Facebook, Instagram and Threads (Photo courtesy of Wikimedia Commons / “Meta Platforms Inc. logo (cropped)” by Meta Platforms. PD text logo. December 1, 2021).

By Janjabill Tahsin
Staff Writer 

Meta, the parent company of Facebook, Instagram and Threads, announced on Feb. 6 that AI-generated images will be labeled to indicate they were created with AI tools. These labels will appear in all languages supported by each app within the coming months, helping users distinguish between what is real and what is not.

The decision comes amid growing pressure on tech companies, both those that build AI software and those that host its outputs, to address the potential risks of AI, from election misinformation to nonconsensual fake nudes of celebrities. Deepfakes, artificial images or videos of fake events, are of increasing concern to regulators and experts alike, especially with upcoming elections around the world. Experts have warned that such synthetic media could hinder voters’ ability to make informed decisions by manipulating information, according to NPR.

Meta and other companies in the industry have been working to develop common standards for identifying AI-generated content. The move also follows an executive order that President Joe Biden signed in October, which pushed for digital watermarking and labeling of AI-generated content, according to AP News.

Meta’s labeling system for AI-generated images includes visible markers on the images themselves, along with invisible watermarks and metadata embedded within the files, to help other platforms identify them. Once companies like Google, Microsoft, OpenAI, Adobe, Midjourney and Shutterstock implement these markers, Meta’s labels will be applied to their content as well. Photorealistic images created with Meta’s AI model are already labeled “Imagined with AI.”

However, other image generators, including open-source models, may never incorporate these kinds of markers. Recognizing this challenge, Meta said that it is developing tools that can automatically detect AI-generated content, even without embedded watermarks or metadata. 

Meta’s labeling system currently focuses on static images because tools to identify AI-generated audio and video are still in development. In the meantime, Meta will start requiring users to disclose when they post realistic, digitally created or altered audio or video, and accounts that fail to do so may be penalized.

“If we determine that digitally created or altered image, video or audio content creates a particularly high risk of materially deceiving the public on a matter of importance, we may add a more prominent label if appropriate, so people have more information and context,” said Nick Clegg, president of global affairs at Meta, in a blog post. 

This expands on Meta’s political ad policy, introduced in November, which requires political advertisers around the world to disclose if they digitally generated or altered images, videos or audio with third-party AI software to depict people and events. 

Last fall, Google said that AI labels were coming to YouTube and its other platforms. Creators must disclose when they post realistic AI-generated content, alerting viewers when they are watching a video made with AI. Under its privacy tools, YouTube will also allow people to request the removal of videos that use AI to simulate an identifiable person, including their face or voice. Along with YouTube, TikTok said it would start testing automatic labels on content that it detects was created or edited with AI.

“As the difference between human and synthetic content gets blurred, people want to know where the boundary lies,” said Clegg. “People are often coming across AI-generated content for the first time and our users have told us they appreciate transparency around this new technology. So it’s important that we help people know when photorealistic content they’re seeing has been created using AI.”



