Meta’s Oversight Board said Thursday that the company’s rules were “not sufficiently clear” in barring sexually explicit AI-generated depictions of real people, and it recommended changes to keep such imagery off Meta’s platforms. The decision followed the board’s review of two AI-generated pornographic images of famous women that were posted on Facebook and Instagram. The board operates independently but is funded by Meta.
Meta said it would review the board’s recommendations and provide an update on any changes it adopts.
Citing privacy concerns, the board’s report identified the two women only as public figures from India and the United States. The board found that both images violated Meta’s rule against “derogatory sexualized photoshop,” which the company classifies as a form of bullying and harassment, and said Meta should have removed them promptly.
In the Indian woman’s case, Meta failed to review a user’s report of the image within 48 hours, so the report was closed automatically without action. The user appealed, but Meta again declined to act, reversing course only after the board took up the case. In the American celebrity’s case, Meta’s systems removed the image automatically.
“Restrictions on this content are legitimate,” the board said. “Given the severity of harms, removing the content is the only effective way to protect the people impacted.” The board recommended that Meta update its rule to clarify its scope, noting that the term “photoshop” is “too narrow” and suggesting the prohibition should cover a wide range of editing techniques, including generative AI.
The board also criticized Meta for not adding the Indian woman’s image to a database that enables automatic removals, as was done in the American woman’s case. According to the report, Meta relies on media coverage to decide when to add images to the database, a practice the board called “worrying.”
“Many victims of deepfake intimate images are not in the public eye and are forced to either accept the spread of their non-consensual depictions or search for and report every instance,” the board stated.
Credit: The Associated Press