
Meta Announces Expanded Policy for Labeling AI-Generated Images

Meta, formerly known as Facebook, has unveiled an expanded policy to address the challenges associated with the proliferation of AI-generated content on its platforms, including Facebook, Instagram, and Threads. 

In response to the growing use of generative artificial intelligence, the company will now place warning labels on AI-generated images to inform users of their artificial origins. While the move is seen as a positive step, concerns remain about its effectiveness against malicious actors and about the limitations of current detection technology.

Warning labels on AI-generated images

Meta’s latest policy shift involves tagging AI-generated images, whether created using its own generative AI tools or those provided by other companies, with labels indicating their artificial nature. This move is designed to reduce the risk of misinformation and disinformation being spread by AI-generated images passed off as real photos.

Meta has been labeling images produced by its own AI tools as “Imagined with AI,” primarily by identifying the digital watermark its algorithms embed in the output.
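
At a high level, this kind of detection looks for provenance signals embedded in the image file itself. The sketch below is a minimal illustration of one such signal, not Meta’s actual pipeline: it scans a file’s embedded XMP/IPTC metadata for the IPTC “trainedAlgorithmicMedia” digital source type, a standard marker for AI-created media. The raw byte scan and the file name are simplifying assumptions; invisible watermarks, by contrast, live in the pixels and require the generator’s own decoder.

```python
# Minimal sketch: check an image file for the IPTC "trainedAlgorithmicMedia"
# digital source type, one provenance signal that generators can embed in
# XMP/IPTC metadata. A raw byte scan is an illustrative shortcut; a real
# pipeline would parse the XMP packet properly and also run watermark
# decoders over the pixel data.

from pathlib import Path

# IPTC NewsCodes term declaring that content was created by a trained AI model.
AI_SOURCE_MARKER = b"trainedAlgorithmicMedia"

def looks_ai_generated(image_path: str) -> bool:
    """Return True if the file's embedded metadata declares an AI origin."""
    data = Path(image_path).read_bytes()
    return AI_SOURCE_MARKER in data

if __name__ == "__main__":
    path = "example.jpg"  # hypothetical input file
    verdict = "Imagined with AI" if looks_ai_generated(path) else "no AI marker found"
    print(f"{path}: {verdict}")
```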

Challenges and limitations

Despite Meta’s commitment to implementing these warning labels, significant challenges remain. One notable challenge is that many AI image-generation tools do not watermark their output, making it easy for malicious actors to manipulate and disseminate AI-generated content without disclosure. 

Moreover, the technology to reliably detect AI-generated images and label them is still in development, and no universally deployed standard exists.

The Coalition for Content Provenance and Authenticity (C2PA), an initiative co-founded by Adobe, has tried to align companies’ work on content provenance. However, the technology developed thus far is not foolproof. 
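
The core idea behind C2PA is tamper evidence: a manifest describing who made an asset and how is cryptographically bound to the asset’s hash, so altering either the pixels or the claim breaks verification. The toy sketch below illustrates only that property; real C2PA manifests use X.509 certificate chains and a structured JUMBF container, not a shared HMAC key.

```python
# Toy sketch of the tamper-evidence idea behind C2PA content provenance:
# bind a provenance claim to the asset's hash with a signature, so any
# change to the pixels or the claim invalidates verification. Real C2PA
# uses X.509 certificates and a structured manifest, not a shared HMAC key.

import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key"  # stand-in for a real signing credential

def sign_manifest(asset: bytes, claim: dict) -> dict:
    manifest = {"asset_sha256": hashlib.sha256(asset).hexdigest(), "claim": claim}
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_manifest(asset: bytes, manifest: dict) -> bool:
    body = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(expected, manifest["signature"])
        and body["asset_sha256"] == hashlib.sha256(asset).hexdigest()
    )

image = b"...pixel data..."  # hypothetical asset bytes
m = sign_manifest(image, {"generator": "ExampleAI", "created": "2024-02-06"})
print(verify_manifest(image, m))         # True: intact asset and claim
print(verify_manifest(image + b"x", m))  # False: the asset was altered
```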

Researchers have demonstrated the ease with which watermarks can be broken or added to non-AI-generated images, undermining their effectiveness.
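
Metadata-based markers are the most fragile of these signals: a plain re-encode silently discards them. The sketch below shows that failure mode using Pillow (assumed installed); watermarks hidden in the pixels survive a re-save, but published attacks have removed those too.

```python
# Minimal sketch of why metadata-based provenance is fragile: re-saving an
# image with Pillow writes a new file without the original XMP/IPTC/EXIF
# blocks, erasing any "trainedAlgorithmicMedia" marker it carried.

from PIL import Image  # pip install Pillow

def strip_metadata(src: str, dst: str) -> None:
    """Re-encode the image; Pillow omits metadata unless explicitly passed."""
    with Image.open(src) as img:
        img.save(dst)  # the new file carries pixels only, no provenance tags

strip_metadata("labeled_ai_image.jpg", "laundered_copy.jpg")  # hypothetical paths
```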

Hany Farid, a professor at the UC Berkeley School of Information who advises the C2PA initiative, cautions that bad actors may sidestep labels entirely by using AI tools that do not watermark their output. 

He suggests that a combination of identification methods, such as watermarking and hash-based matching, may be necessary to robustly identify AI-generated content. For media beyond images, such as audio and video, watermarking is even less developed.
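
A minimal sketch of the hash-based side of such a combination appears below, assuming a curated registry of cryptographic digests of known AI-generated files; the registry contents and file names are hypothetical. Exact-hash matching breaks as soon as a file is re-encoded, which is precisely why it would be paired with watermarking rather than used alone.

```python
# Minimal sketch of hash-based identification: compare a file's SHA-256
# digest against a registry of digests of known AI-generated content.
# The registry entry below is hypothetical; exact matching fails after
# any re-encode, so this complements rather than replaces watermarking.

import hashlib

# Hypothetical registry of digests for known AI-generated files.
KNOWN_AI_HASHES: set[str] = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def sha256_of(path: str) -> str:
    """Stream the file through SHA-256 to avoid loading it all into memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def is_known_ai_content(path: str) -> bool:
    return sha256_of(path) in KNOWN_AI_HASHES

print(is_known_ai_content("upload.jpg"))  # hypothetical input file
```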

Expanding policies and oversight

While Meta’s new policies represent a step forward in combating fake content, the problem is not limited to AI-generated material. The company’s Oversight Board, composed of independent experts, recently upheld Meta’s decision to leave up a video of President Joe Biden that had been edited, without AI, to create a misleading impression. 

This decision highlights the need for Meta to revise and expand its rules for “manipulated media” to encompass more than just AI-generated content.

Meta’s spokesperson, Kevin McAlister, acknowledges the need for a broader approach, stating, “While companies are starting to include signals in their image generators, they haven’t started including them in AI tools that generate audio and video at the same scale.” 

As such, Meta is working on adding a feature for users to disclose when sharing AI-generated video or audio so that appropriate labels can be applied.
