Facebook late Wednesday unveiled new steps it is taking to combat hate speech and misinformation in Myanmar, where the platform has fueled ethnic violence against the Rohingya population.
Facebook said in a blog post Wednesday that employees traveled to Myanmar, also known as Burma, over the summer to better understand the situation. The company has also hired more than 60 Myanmar-language experts to review content and plans to increase that number to 100 by the end of the year.
Lawmakers, human rights activists and the United Nations have criticized the role Facebook has played in Myanmar’s crisis. Facebook’s pledge to be more involved is part of its broader defense against the spread of controversial or false information on its network globally. Chief executive Mark Zuckerberg has pledged to hire more staff to review posts for hate speech.
But Facebook product manager Sara Su said that people alone are not able to catch all bad content. Much of Facebook’s effort relies on artificial intelligence, which Zuckerberg has pointed to as a tool that social media companies can use to parse a high volume of posts and flag potential problems.
However, AI is far from capable of reliably monitoring and evaluating hate speech or false information. Zuckerberg has said it will take five to 10 years to train AI to recognize the nuances.
The technology is being tested in Myanmar, where users flag only a small share of posts for potential policy violations. Facebook on Wednesday said that artificial intelligence is now able to flag 52 percent of all the content it removes in Myanmar before it is reported by users.
Facebook did not give an estimate of how many pieces of content it has removed, making it difficult to assess the scale of the problem. But in an independent investigation last week, Reuters found more than 1,000 posts, comments, images and videos calling for violence against the Rohingya people.
The company said it is also enforcing in Myanmar a recently updated policy addressing “credible violence,” which sets standards to remove content that has the “potential to contribute to imminent violence or physical harm.”
Facebook largely hesitates to remove misinformation across its network, preferring to demote false information in the news feed using its algorithms. But it is willing to take a stronger hand in Myanmar due to the violence linked to misinformation. Facebook said it’s undertaking similarly focused enforcement strategies in Sri Lanka, India, Cameroon and the Central African Republic.