Facebook and Instagram’s response to deepfake porn is being reviewed by a watchdog

Meta’s Oversight Board is set to assess the company’s handling of deepfake pornography amid growing concerns that artificial intelligence is fueling an increase in the creation of fake explicit imagery as a form of harassment.

The watchdog said on Tuesday that it would review Meta’s handling of two AI-generated explicit images of female public figures, one from the United States and one from India, to assess whether the company has appropriate policies and practices in place to deal with such content, and whether it enforces those policies consistently around the world.

The threat of AI-generated pornography has gained attention in recent months, with victims ranging from celebrities such as Taylor Swift to US high school students and other women around the world. Widely accessible generative AI tools have made such images faster, easier and cheaper to create, while social media platforms allow them to spread quickly.

“Deepfake pornography is a growing cause of online gender-based harassment and is increasingly being used to target, silence and intimidate women – both online and offline,” Oversight Board Co-Chair Helle Thorning-Schmidt said in a statement. “We know that Meta is faster and more effective in moderating content in some markets and languages than others,” said Thorning-Schmidt, who is also the former prime minister of Denmark. “Taking one case from the US and one from India, we want to see whether Meta protects all women around the world in a fair way.”

The Oversight Board is a body made up of experts in areas such as freedom of expression and human rights. It is often described as a kind of Supreme Court for Meta, as it allows users to appeal content decisions on the company’s platforms. The board issues rulings on specific content moderation decisions, as well as broader policy recommendations for the company.

As part of its review, the board will evaluate one example of an AI-generated nude image resembling a public figure from India that was shared to Instagram by an account that “only shares AI-generated images of Indian women.”

A user reported the image as pornographic, but the report was automatically closed after Instagram failed to review it within 48 hours. The same user appealed Instagram’s decision to leave the image up, but that appeal was also closed without review. After the Oversight Board notified Meta of its intention to take up the case, the company determined it had left the image up in error and removed it for violating its bullying and harassment rules, according to the board.

The second case involved an AI-generated image of a nude woman being groped, which was posted to a Facebook group for AI creations. The image was meant to resemble an American public figure, who is also named in the image’s caption.

The same image had previously been posted by a different user, after which it was escalated to policy experts who decided to remove it for violating the bullying and harassment rules, “specifically for ‘derogatory sexualized photoshop or drawings.’” The image was then added to a photo-matching bank that automatically detects when a rule-breaking image is reposted, so the second user’s post was removed automatically.

As part of this latest review, the board is seeking public comments – which can be submitted anonymously – about deepfake pornography, including how such content can harm women and how Meta responds to posts featuring AI-generated explicit imagery. The public comment period closes on April 30.
