Revealed: Facebook exposed identities of moderators to suspected terrorists





Powered by Guardian.co.uk. This article, titled “Revealed: Facebook exposed identities of moderators to suspected terrorists”, was written by Olivia Solon in San Francisco for The Guardian on Friday 16 June 2017, 07.09 UTC.


Facebook put the safety of its content moderators at risk after inadvertently exposing their personal details to suspected terrorist users of the social network, the Guardian has learned.


The security lapse affected more than 1,000 workers across 22 departments at Facebook who used the company’s moderation software to review and remove inappropriate content from the platform, including sexual material, hate speech and terrorist propaganda.


A bug in the software, discovered late last year, caused the personal profiles of content moderators to appear automatically as notifications in the activity logs of Facebook groups whose administrators had been removed from the platform for breaching the terms of service. The moderators’ personal details were then viewable by the remaining admins of those groups.
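

To make the mechanism concrete, here is a minimal, purely illustrative sketch in Python. Every name in it (the `Group` class, `remove_admin`, the account strings) is hypothetical; Facebook’s actual internal moderation code is not public. It models only the pattern described above: a tool that acts through a moderator’s personal account, whose name then surfaces in a log readable by the remaining group admins.

```python
# Purely illustrative model of the leak pattern described in the article.
# All names here are hypothetical; Facebook's internals are not public.
from dataclasses import dataclass, field


@dataclass
class Group:
    admins: set
    activity_log: list = field(default_factory=list)  # readable by remaining admins


def remove_admin(group: Group, target: str, acting_account: str) -> None:
    """Ban a group admin for breaching the terms of service."""
    group.admins.discard(target)
    # The flaw: the log entry names the acting account. When moderators act
    # through their personal profiles, remaining admins can see those names.
    group.activity_log.append(f"{acting_account} removed {target}")


group = Group(admins={"admin_a", "admin_b"})

# Vulnerable setup: the moderation tool acts as the moderator's own profile.
remove_admin(group, "admin_a", acting_account="moderator.personal.profile")
print(group.activity_log)  # ['moderator.personal.profile removed admin_a']

# Safer pattern (the article notes Facebook is now testing administrative
# accounts not linked to personal profiles): act through a service identity.
remove_admin(group, "admin_b", acting_account="moderation-service-account")
```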


Of the 1,000 affected workers, around 40 worked in a counter-terrorism unit based at Facebook’s European headquarters in Dublin, Ireland. Six of those were assessed to be “high priority” victims of the mistake after Facebook concluded their personal profiles were likely viewed by potential terrorists.


The Guardian spoke to one of the six, who did not wish to be named out of concern for his and his family’s safety. The Iraqi-born Irish citizen, who is in his early twenties, fled Ireland and went into hiding after discovering that seven individuals associated with a suspected terrorist group he banned from Facebook – an Egypt-based group that backed Hamas and, he said, had members who were Islamic State sympathizers – had viewed his personal profile.


Facebook confirmed the security breach in a statement and said it had made technical changes to “better detect and prevent these types of issues from occurring”.


“We care deeply about keeping everyone who works for Facebook safe,” a spokesman said. “As soon as we learned about the issue, we fixed it and began a thorough investigation to learn as much as possible about what happened.”


The moderator who went into hiding was among hundreds of “community operations analysts” contracted by global outsourcing company Cpl Recruitment. Community operations analysts are typically low-paid contractors tasked with policing Facebook for content that breaches its community standards.


Overwhelmed with fear that he could face retaliation, the moderator, who first came to Ireland as an asylum seeker when he was a child, quit his job and moved to eastern Europe for five months.


“It was getting too dangerous to stay in Dublin,” he said, explaining that his family had already experienced the horrifying impact of terrorism: his father had been kidnapped and beaten and his uncle executed in Iraq.


“The only reason we’re in Ireland was to escape terrorism and threats,” he said.


The moderator said that others within the high-risk six had their personal profiles viewed by accounts with ties to Isis, Hezbollah and the Kurdistan Workers’ Party. Facebook complies with the US state department’s designations of terrorist groups.


“When you come from a war zone and you have people like that knowing your family name you know that people get butchered for that,” he said. “The punishment from Isis for working in counter-terrorism is beheading. All they’d need to do is tell someone who is radical here.”


Facebook moderators like him first suspected there was a problem when they started receiving friend requests from people affiliated with the terrorist organizations they were scrutinizing.


An urgent investigation by Facebook’s security team established that personal profiles belonging to content moderators had been exposed. As soon as the leak was identified in November 2016, Facebook convened a “task force of data scientists, community operations and security investigators”, according to internal emails seen by the Guardian, and warned all the employees and contracted staff it believed were affected. The company also set up an email address, nameleak@fb.com, to field queries from those affected.


Facebook then discovered that the personal Facebook profiles of its moderators had been automatically appearing in the activity logs of the groups they were shutting down.


Craig D’Souza, Facebook’s head of global investigations, liaised directly with some of the affected contractors, talking to the six individuals considered to be at the highest risk over video conference, email and Facebook Messenger.


In one exchange, before the Facebook investigation was complete, D’Souza sought to reassure the moderators that there was “a good chance” any suspected terrorists notified about their identity would fail to connect the dots.


“Keep in mind that when the person sees your name on the list, it was in their activity log, which contains a lot of information,” D’Souza wrote, “there is a good chance that they associate you with another admin of the group or a hacker …”


“I understand Craig,” replied the moderator who ended up fleeing Ireland, “but this is taking chances. I’m not waiting for a pipe bomb to be mailed to my address until Facebook does something about it.”


Facebook CEO Mark Zuckerberg delivers a keynote address. Six moderators were assessed as ‘high priority’ victims of a mistake that shared their personal details with extremist groups. Photograph: Justin Sullivan/Getty Images

The bug was not fixed for another two weeks, on 16 November 2016, by which point it had been active for a month. It had also retroactively exposed the personal profiles of moderators who had censored accounts as far back as August 2016.


Facebook offered to install home alarm monitoring systems and provide transport to and from work for those in the high-risk group. The company also offered counseling through Facebook’s employee assistance program, over and above the counseling offered by the contractor, Cpl.


The moderator who fled Ireland was unsatisfied with the security assurances received from Facebook. In an email to D’Souza, he wrote that the high-risk six had spent weeks “in a state of panic and emergency” and that Facebook needed to do more to “address our pressing concerns for our safety and our families”.


He told the Guardian that the five months he spent in eastern Europe felt like “exile”. He kept a low profile, relying on savings to support himself. He spent his time keeping fit and liaising with his lawyer and the Dublin police, who checked up on his family while he was away. He returned to Ireland last month after running out of money, although he still lives in fear.


“I don’t have a job, I have anxiety and I’m on antidepressants,” he said. “I can’t walk anywhere without looking back.”


This month he filed a legal claim against Facebook and Cpl with the Injuries Board in Dublin. He is seeking compensation for the psychological damage caused by the leak.


Cpl did not respond to a request for comment. The statement provided by Facebook said its investigation sought to determine “exactly which names were possibly viewed and by whom, as well as an assessment of the risk to the affected person”.


The social media giant played down the threat posed to the affected moderators, but said that it contacted each of them individually “to offer support, answer their questions, and take meaningful steps to ensure their safety”.


“Our investigation found that only a small fraction of the names were likely viewed, and we never had evidence of any threat to the people impacted or their families as a result of this matter,” the spokesman said.


Details of Facebook’s security blunder will once again put a spotlight on the grueling and controversial work carried out by an army of thousands of low-paid moderators, many of them based in countries such as the Philippines and India.



The Guardian recently revealed the secret rules and guidelines Facebook uses to train moderators to police its vast network of almost two billion users, including 100 internal training manuals, spreadsheets and flowcharts.


The moderator who fled Ireland worked for a 40-strong specialist team tasked with investigating reports of terrorist activity on Facebook. He was hired because he spoke Arabic, he said.


He felt that contracted staff were treated not as equals to Facebook employees but as “second-class citizens”. He was paid just €13 ($15) per hour for a role that required him to develop specialist knowledge of global terror networks and sift through often highly disturbing material.


“You come in every morning and just look at beheadings, people getting butchered, stoned, executed,” he said.


Facebook’s policies allow users to post extremely violent images provided they don’t promote or celebrate terrorism. This means moderators may be repeatedly exposed to the same haunting pictures to determine whether the people sharing them were condemning or celebrating the depicted acts.


The moderator said that when he started, he was given just two weeks training and was required to use his personal Facebook account to log into the social media giant’s moderation system.


“They should have let us use fake profiles,” he said, adding: “They never warned us that something like this could happen.”


Facebook told the Guardian that as a result of the leak it is testing the use of administrative accounts that are not linked to personal profiles.


Moderation teams were continually scored on the accuracy and speed of their decisions, he said, as well as on other factors such as staying up to date with training materials. If a moderator’s score dropped below 90%, they would receive a formal warning.
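

As a rough illustration of that threshold, here is a hypothetical sketch; the real scoring formula, and how accuracy, speed and the other factors were weighted, is not public.

```python
# Hypothetical sketch of the 90% quality threshold described above; the
# actual metric and its inputs are not public.
WARNING_THRESHOLD = 0.90


def accuracy(correct: int, total: int) -> float:
    """Fraction of moderation decisions judged correct on review."""
    return correct / total if total else 1.0


score = accuracy(correct=440, total=500)  # 0.88
if score < WARNING_THRESHOLD:
    print(f"Score {score:.0%} is below {WARNING_THRESHOLD:.0%}: formal warning")
```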


In an attempt to boost morale among agency staff, Facebook launched a monthly award ceremony to celebrate its top performers. The prize was a Facebook-branded mug. “The mug that all Facebook employees get,” he noted.


Contact the author: olivia.solon@theguardian.com





