Facebook has rejected suggestions that its army of content moderators has to rely on inaccurate and disorganised information in its ongoing fight against fake news and extremist content.
The New York Times reported late last week that Facebook currently uses “ad hoc” and “secretive” methods for content moderation. It claimed that moderators often have to wade through thousands of pages of documents containing inaccurate and out-of-date information to guide their decision-making.
Facebook hit back over the weekend, stating that it has to update its policies regularly due to the fast-changing nature of the issues involved, and said that the debate about how content should be moderated must be centred on facts and nothing else.
A statement from Facebook said: “The Times is right that we regularly update our policies to account for ever-changing cultural and linguistic norms around the world. But the process is far from ‘ad hoc.’”
Facebook said that policies are updated based on trends uncovered by its reviewers, in addition to feedback from within the company and from third parties. It added that a global forum involving “young engineers and lawyers”, held every two weeks, is used primarily to discuss how policies could be changed.
Facebook has been at the heart of the data privacy, fake news and brand safety issues that have become prevalent on social media over the last two years. It now employs around 30,000 people to uphold safety on the platform, with half of that number devoted to reviewing content on a regular basis.