Facebook has a fake news 'war room' – but is it really working?
Facebook is promoting a new “war room” as part of its solution to election interference, unveiling a team of specialists tasked with stopping the spread of misinformation and propaganda.
It’s unclear how well it’s working.
The Silicon Valley company, which has faced intensifying scrutiny over its role in amplifying malicious political content, opened its doors to reporters on Wednesday for a tour of a new workspace at its Menlo Park headquarters. Engineers, data scientists, threat investigators and other Facebook experts from 20 teams recently began collaborating inside the so-called “war room”, a term political campaigns typically use to describe their operations centers.
The press briefing offered little new information about Facebook’s specific strategies for combating foreign interference and false news, or about their impact. The corporation has been eager to demonstrate publicly that it takes abuses on its platforms seriously amid an avalanche of scandals, including a vast data breach, government inquiries across the globe, new ad fraud allegations, and a continuing stream of viral fake content and hate speech.
The stakes are high as the US approaches critical midterm elections in November and the 2020 presidential race. WhatsApp, the Facebook-owned messaging service, has been linked to widespread false news stories that have led to violence and mob lynchings in India. Facebook has also struggled to mitigate the harm it has caused in Myanmar, where an explosion of hate speech on the platform has contributed to violence and genocide. American hate groups and far-right activists have weaponized the site as well.
On Wednesday morning, a group of journalists crowded outside a windowless room, snapping iPhone photos of a closed door with a small sign stuck to it that said “WAR ROOM” in red letters. Inside, digital dashboards displayed real-time information about activity on the platform. CNN played in the background, and the wall displayed a large American flag and motivational posters saying “Focus on impact” and “Bring the world closer together”.
Some screens were “off the record” and could not be photographed, Facebook communications representatives said. The names of employees inside the room could not be published.
“We don’t have a crystal ball. We’re not going to be able to predict every tactic,” said Nathaniel Gleicher, cybersecurity policy chief. “Having all these teams in a room together will help.” He said Facebook had seen an uptick in interference efforts surrounding the midterms, but did not provide specifics.
The Facebook executives outlined a range of the company’s tactics, including making political advertising more transparent, reducing the distribution of false news, detecting and taking down coordinated campaigns by “bad actors”, preventing spam and fake accounts, and launching “rapid response” efforts when election misinformation escalates. Ahead of the midterms, Facebook has also adopted new policies banning misinformation about the voting process itself, such as fake stories about long lines or voting requirements that could lead to voter suppression.
In Brazil’s recent elections, for example, the war room discovered and removed a false story claiming the date of the election had changed.
Pressed about evidence of rampant misinformation in Brazil, where a court ordered Facebook to remove links to 33 fake news stories, the executives said there was more work to be done.
The court order was “proof of why we want to have these partnerships and be working with partners across government”, said Katie Harbath, director of global politics and government outreach. “It’s showing that we have set up our systems to be able to react to these things as quickly as we possibly can.”
One study, however, found that of the 50 most widely shared political images on WhatsApp in the lead-up to the Brazilian election, only 8% were fully truthful; many were false, misleading or unsubstantiated. Fake news videos, which do not face the same scrutiny as articles, have also become a growing problem.
Samidh Chakrabarti, Facebook’s director of elections and civic engagement, said WhatsApp had been “doing quite a bit of work to try to stay ahead of any sort of emerging issues”, adding: “They’ve been cracking down on spamming accounts on WhatsApp – and they’ve removed hundreds of thousands.”
Facebook also noted that it has a factchecking partnership with the Associated Press in all 50 states for the midterms. The continuing collaborations with third-party factcheckers, however, have been controversial, with some partner journalists expressing frustration over their seemingly minimal impact.
Asked how Facebook has been measuring the success of the factchecking and if the company had new data on its effectiveness, Harbath told the Guardian that it was “one piece of the puzzle” and cited “automated work” to reduce the reach of “clickbait” and “ad farms”.
The new political ad moderation system has also had major hiccups. Hours after the briefing, USA Today published a report showing that Facebook had removed ads after incorrectly labeling them “political”, simply because they used descriptions like “African-American” and “Mexican” or were written in Spanish.
Asked at the briefing what ways Facebook may be falling short in its efforts, Harbath responded: “This is really going to be a constant arms race. This is our new normal. Bad actors are going to keep trying to get more sophisticated in what they are doing, and we’re going to have to keep getting more sophisticated in trying to catch them.”