Meta Under Fire for Hosting ‘Nudify’ Deepfake Ads: CBS News Investigation Reveals
By TRH News Desk
NEW DELHI, June 7, 2025 – Meta has come under scrutiny after a CBS News investigation uncovered hundreds of advertisements promoting “nudify” AI apps — tools used to create sexually explicit deepfakes — running across its platforms, including Instagram and Facebook.
According to CBS News’ MoneyWatch report published on June 6, Meta took action to remove the ads following the investigation. A Meta spokesperson told CBS News via email: “We have strict rules against non-consensual intimate imagery; we removed these ads, deleted the Pages responsible for running them and permanently blocked the URLs associated with these apps.”
The report highlighted that many of the ads were discovered on Instagram’s “Stories” feature, brazenly promoting tools claiming to let users “upload a photo” and “see anyone naked.” Some even featured deepfake images of celebrities, including Scarlett Johansson and Anne Hathaway, in sexually suggestive poses. One ad included the tagline: “How is this filter even allowed?” beneath a nude deepfake image.
In several instances, the ads directed users to third-party websites offering animated deepfakes of real individuals performing sexual acts, CBS News reported.
“The applications marketed on these sites reportedly charged users between $20 and $80 to access exclusive and advanced features. Some redirected users to Apple’s App Store, where similar nudify apps were available for download,” added the report.
A CBS News analysis of Meta’s ad library revealed that these promotions were spread widely across Meta’s suite of platforms — Facebook, Instagram, Threads, Messenger, and Meta Audience Network, the company’s ad network that extends its reach to third-party apps. “The presence of such ads raises serious concerns about consent, user safety, and potential exploitation of minors,” added CBS in its findings.
Alarmingly, CBS News also found that at least one of the websites promoted on Instagram did not ask for age verification before allowing users to upload and alter images.
Citing a March 2025 study by the nonprofit Thorn, CBS News noted that 41% of teens surveyed had heard of “deepfake nudes,” and 10% said they knew someone personally targeted by such content.
The revelations come amid rising concern among educationists that school-going children are exposed to sexually explicit content while using the internet for studies, games, or social media. In the US, the law mandates the takedown of non-consensual sexually explicit content within 48 hours. Similar rules have been introduced in India, yet tech giants routinely sidestep the law of the land by arguing that the content is generated by third parties.
The findings have amplified calls for tighter content moderation, regulation of AI tools, and stronger safeguards for children and vulnerable users online. Critics argue the case highlights the urgent need for tech platforms to take more responsibility in policing harmful and exploitative content made possible by rapidly advancing AI capabilities.