Tuesday 16 October, 2018

Facebook sets out what content it forbids

(Image: AP: Facebook logo)

Facebook has for the first time published a detailed explanation of the kind of content it does and does not permit on its site.

The social media giant has in the past shared brief “community standards” about what people can and cannot post, but the regulations published on Tuesday reveal far more comprehensive information about the policy behind the moderation process.

The move, which the company describes as a bid to be more open, comes as Facebook faces unprecedented scrutiny of its data-collection and privacy policies amid the Cambridge Analytica scandal. Its CEO, Mark Zuckerberg, recently faced two days of questioning by Congress over the data leak row.

Facebook currently uses a combination of artificial intelligence and reports from users to identify content that violates its standards. Posts are then reviewed by its 7,500 moderators working around the world.

In a blog post, Facebook’s Vice President of Global Policy Management Monika Bickert said there were two main reasons for publishing the guidelines on which the company bases its moderation decisions.

“First, the guidelines will help people understand where we draw the line on nuanced issues. Second, providing these details makes it easier for everyone, including experts in different fields, to give us feedback so that we can improve the guidelines - and the decisions we make - over time.”

Facebook has been criticised in the past both for removing content and for allowing it to remain.

There was uproar, for example, in 2016 when it censored what is generally regarded as an iconic image from the Vietnam War of a naked child running from a napalm attack. The image was later reinstated.

But Facebook has also been accused of failing to block hate speech against the Muslim minority in Myanmar - something that UN investigators say fuelled ethnic conflict.

Users, meanwhile, have complained of a lack of explanation when posts have been removed.

Bickert said the company was introducing an appeals system so users whose posts had been taken down could challenge the decision, and was working on a way for people to ask why content they objected to remained up.

“We believe giving people a voice in the process is another essential component of building a fair system,” Bickert wrote.

The guide has sections covering subjects that include criminal behaviour, threats, bullying, child safety, sexual exploitation, nudity, hate speech and graphic violence.

It sets out how Facebook defines terrorist and hate groups, and its section on hate speech shows the nuances moderators must consider when assessing posts.

“Sometimes people share content containing someone else's hate speech for the purpose of raising awareness or educating others. Similarly, in some cases, words or terms that might otherwise breach our standards are used self-referentially or in an empowering way. When this is the case, we allow the content… Where the intention is unclear, we may remove the content,” the section reads.

Bickert said the policies would continue to evolve and acknowledged that mistakes would be made “because our processes involve people, and people are fallible”.