In a series of Threads posts this afternoon, Instagram head Adam Mosseri warns users not to take AI-generated images online at face value. He stresses the importance of considering the source of a piece of content and argues that social platforms should help users assess the credibility of what they see.
Mosseri acknowledges that platforms can't reliably label every piece of AI-generated content, and suggests they should also supply context about who is sharing it so users can judge its trustworthiness for themselves.
He advises users to check claims or images against reputable sources before trusting them, much as they would treat answers from chatbots or AI-driven search engines. Meta's platforms don't currently offer the kind of context Mosseri describes, though the company has hinted at upcoming changes to its content moderation rules.
Mosseri's suggestions echo user-led moderation practices already in place on other platforms, such as X and YouTube, as well as Bluesky's custom moderation filters. Whether Meta will follow suit remains to be seen, but the company has a history of borrowing features from Bluesky.