Meta Tweaks AI Labeling After Mislabeling Edited Photos as Artificial

Meta is updating its approach to labeling AI-generated content after its “Made with AI” tags were incorrectly applied to some lightly edited photos, confusing users.

Key changes to Meta’s AI labeling policy: Meta is tweaking its AI labeling system in response to user feedback and guidance from its Oversight Board:

  • The “Made with AI” label will be renamed “AI info” across Meta’s apps; users can click the new label for more context
  • Meta is working with industry partners to improve its labeling approach so it better aligns with user expectations

The problem with the previous labeling system: Meta’s AI detection relied heavily on metadata to flag AI content, leading to issues:

  • Photos that were lightly edited in Photoshop were being labeled as AI-made, even if they weren’t fully generated by AI tools like DALL-E
  • At the same time, that metadata could be easily stripped, allowing fully AI-generated images to go undetected (a rough sketch of this kind of metadata check appears below)
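
To make the weakness concrete, here is a minimal, hypothetical sketch of the kind of metadata check described above. It assumes generative tools record an IPTC DigitalSourceType value (such as “compositeWithTrainedAlgorithmicMedia”, which AI-assisted Photoshop edits can embed in an image’s XMP packet) and that a detector simply scans for that marker; Meta has not published its actual detection pipeline.

```python
# Illustrative sketch only; Meta's real detection pipeline is not public.
# Assumption: generative tools write an IPTC DigitalSourceType value into the
# image's embedded XMP packet, and the detector just scans the raw bytes for it.
from pathlib import Path

# Both IPTC values tied to generative AI contain this substring:
# "trainedAlgorithmicMedia" (fully generated) and
# "compositeWithTrainedAlgorithmicMedia" (a real photo retouched with AI fill),
# so a case-insensitive scan catches either.
AI_MARKER = b"trainedalgorithmicmedia"

def looks_ai_edited(image_path: str) -> bool:
    """Return True if the file carries a generative-AI metadata marker.

    This is exactly the weakness described above: a tiny generative fill adds
    the marker (false positive for "Made with AI"), while stripping the
    metadata removes it (false negative for a genuinely AI-made image).
    """
    data = Path(image_path).read_bytes().lower()
    return AI_MARKER in data

if __name__ == "__main__":
    # "retouched_photo.jpg" is a hypothetical file name used only for illustration.
    print("AI marker found:", looks_ai_edited("retouched_photo.jpg"))
```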

Challenges in identifying AI content: There is currently no perfect solution for comprehensively detecting AI images online:

  • Metadata can be a flawed indicator, as it can be added to minimally edited photos or stripped from actual AI images (see the re-save sketch after this list)
  • Ultimately, users still need to be vigilant and learn to spot clues that an image may be artificially generated
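
The flip side, removing the marker, is just as easy. The sketch below, which assumes the Pillow imaging library and uses hypothetical file names, copies only an image’s pixels into a fresh file, leaving any embedded XMP/EXIF metadata (and with it any AI marker) behind.

```python
# Sketch of why metadata is a weak signal: re-encoding the pixels into a new
# file leaves all embedded XMP/EXIF data behind, taking any AI marker with it.
# Assumes the Pillow imaging library (pip install Pillow); file names are hypothetical.
from PIL import Image

def strip_metadata(src: str, dst: str) -> None:
    """Write a metadata-free copy of an image by copying only its pixels."""
    with Image.open(src) as img:
        rgb = img.convert("RGB")            # normalize the mode, detach any palette
        clean = Image.new("RGB", rgb.size)
        clean.putdata(list(rgb.getdata()))  # pixel data only; img.info is not carried over
        clean.save(dst, format="JPEG")

if __name__ == "__main__":
    strip_metadata("ai_generated.png", "laundered_copy.jpg")
```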

Balancing AI integration with transparency: As Meta pushes forward with AI tools across its platforms, it is grappling with how to responsibly label AI content:

  • Meta first announced plans to automatically detect and label AI images in February, also asking users to proactively disclose AI content
  • However, the initial labeling system led to confusion and frustration among users whose legitimately captured and edited photos were tagged as AI

Broader implications:

Meta’s challenges with accurately labeling AI content highlight the complex issues platforms face as AI-generated images become increasingly commonplace online. While Meta is taking steps to refine its approach based on user feedback, the difficulty in distinguishing lightly edited photos from wholly artificial ones underscores the need for a multi-pronged approach.

Technical solutions like metadata analysis will likely need to be combined with ongoing efforts to educate users about the hallmarks of AI imagery. Ultimately, maintaining transparency and trust as AI proliferates will require collaboration between platforms, AI companies, and users themselves.

