
Meta’s New AI Feature Sparks Concerns Over Photo Privacy
Meta has recently begun testing a feature on Facebook that asks users to let its AI access their unpublished photo libraries. As reported by TechCrunch, the move is part of Meta’s broader strategy to improve its generative AI models. The company, which also owns Instagram, has acknowledged using photos and text from public posts dating back to 2007 to train its AI, raising concerns over how personal data is used.
Facebook users have encountered pop-ups requesting permission for “cloud processing” of their photos, which would let the AI “restyle” images, group them by theme, and create personalized content such as travel collages. Meta assures users that this data will not be used for ad targeting, although its AI Terms of Service permit the use of personal information to advance its AI technologies.
Although Meta denies that unpublished photos are used for AI training, opting in to the feature requires accepting those terms. The AI analyzes media and facial features to generate content, taking into account time, location, and thematic elements. The company has neither confirmed nor denied plans to expand the feature beyond Facebook, or whether unpublished data will eventually be used to train its AI models.
U.S. users have no opt-out provision, as Meta uses public data without explicit notification. Europe’s stricter privacy laws allow Instagram and Facebook users there to opt out of data scraping. Artists have long voiced concerns about AI training on publicly available images, fearing that imitation of their styles could undermine their livelihoods. The debate over AI ethics and privacy continues as Meta tests these AI-driven features.