Facebook has been forced to defend its content removal policies after criticism from Israeli government ministers over “inciteful” posts and videos published on the social media platform. Public Security Minister Gilad Erdan claims that Facebook is not doing enough to take down content considered a threat to security.
Facebook did not respond directly to the comments made by Erdan at the weekend, but it issued a statement reaffirming its commitment to removing hateful content, such as direct threats and terrorist speech. It also revealed that it has been working closely with the Israeli government to stamp out threatening posts.
Israel’s Justice Minister, Ayelet Shaked, has urged social networking sites to remove content pre-emptively if it is considered a potential security threat, as she believes that manual reporting by users is not sufficient to tackle such threats. Facebook said that it continues to work with policymakers and safety organisations across the globe to ensure that the platform is safe for everybody to use.
Facebook added in a statement: “We have a set of community standards designed to help people understand what’s allowed on Facebook. We call on people to use our report tool if they find content they believe violates these rules, so that we can examine each case and take quick action.”
Censoring content
The war of words between Israel and the social media giant follows an escalation of violence in the country, which has prompted Prime Minister Benjamin Netanyahu to pursue plans to give the government the power to remove threatening social media content. Israel claims that it has recently flagged more than 70 posts deemed “inciteful,” but only around a third of those have been removed by Facebook.
Content removal policies have become more complex in recent years, as it can often be difficult to determine what exactly constitutes extremist content. Facebook complied with a Turkish court order last year demanding that a page deemed offensive to the Prophet Muhammad be blocked, adding afterwards that its policy is to remove content that breaks the law in a specific country. Google, Twitter and Facebook have also agreed to assess reports made by users in Germany within 24 hours in order to take down hate speech.