Facebook bug might have exposed moderators to suspected terrorist groups

It was recently revealed that the personal information of over 1,000 content moderators was exposed by a security flaw.

Last year, Facebook introduced a bug into its content moderation software that exposed the identities of workers who police content on the social network to the very people being policed, raising the possibility of retribution.

The problem came to light after Facebook moderators started receiving friend requests from people affiliated with the terrorist organisations they were scrutinising. The bug was apparently discovered late last year. One affected moderator, who says he was given just two weeks of training before being asked to investigate reports of terrorist content on Facebook's network, chose to quit his job at the counter-terrorism unit based at Facebook's European headquarters in Dublin, Ireland, and go into hiding in Eastern Europe, according to the Guardian.

"Within the high-risk, six had their personal profiles viewed by accounts with ties to IS, Hezbollah and the Kurdistan Workers Party".

It is believed 40 of these moderators worked in a counter-terrorism unit and have since been flagged as high priority victims after Facebook said their profiles could have been viewed by suspected terrorists.

However, while Facebook insists the bug has been fixed, those working on the content moderation teams are not so sure.


Facebook Inc has hired more than 150 counterterrorism experts and is increasingly using artificial intelligence that can understand language and analyse images to try to keep terrorists from using the social network for recruiting and propaganda.

In an effort to ensure terrorists have no place on any of its apps, the company has begun building systems to take action against terrorist accounts across all its platforms, including WhatsApp and Instagram. In May, Facebook Chief Executive Mark Zuckerberg said the company would hire 3,000 more staff to review content in an attempt to curb violent or sensitive videos. Facebook also offered the affected moderators home security systems, transport to and from work, and counseling.

The Facebook post - by Monika Bickert, director of global policy management, and Brian Fishman, counterterrorism policy manager - did not specifically mention British Prime Minister Theresa May's calls. "We remove terrorists and posts that support terrorism whenever we become aware of them," the post said. The apparent bug caused the personal profiles of Facebook content moderators to appear as notifications in the activity logs of the groups and individuals they had removed from the site. During its investigation, Facebook found that, thankfully, the moderators' profiles had not been viewed by the suspected group admins in question.

A Facebook spokesman confirmed the incident to the Guardian and said the company had made technical changes to fix the glitch.

Facebook says it has "no evidence" of any threat to the affected workers or their families because of the data exposure.
