Facebook security chief rants about misguided “algorithm” backlash


“I am seeing a ton of coverage of our recent issues driven by stereotypes of our employees and attacks against fantasy, strawman tech cos,” wrote Facebook Chief Security Officer Alex Stamos on Saturday in a reeling tweetstorm. He claims journalists misunderstand the complexity of attacking fake news and deride Facebook for thinking algorithms are neutral when the company knows they aren’t, and he encourages reporters to talk to the engineers who actually deal with these problems and their consequences.

Yet this argument minimizes many of Facebook’s troubles. The issue isn’t that Facebook doesn’t know algorithms can be biased, or that people don’t realize these are tough problems. It’s that the company failed to anticipate abuse of its platform and didn’t work harder to build algorithms or human moderation processes that could have blocked fake news and fraudulent ad buys before they impacted the 2016 U.S. presidential election, rather than after. And the tweetstorm completely glosses over the fact that Facebook will fire employees who talk to the press without authorization.

[Update, 3:30pm PT: I commend Stamos for speaking so candidly to the public about an issue where more transparency is appreciated. At the same time, Facebook holds the information and context he says journalists, and by extension the public, lack, and the company is free to bring in reporters for the necessary briefings. I’d certainly attend a “Whiteboard” session like the ones Facebook has often held for reporters in the past on topics like News Feed sorting or privacy controls.]

Stamos’ comments hold weight because he’s leading Facebook’s investigation into Russian election tampering. He was the Chief Information Security Officer at Yahoo before taking the CSO role at Facebook in mid-2015.

The sprawling response to recent backlash comes right as Facebook starts making the changes it should have implemented before the election. Today, Axios reports that Facebook just emailed advertisers to inform them that ads targeted by “politics, religion, ethnicity or social issues” will have to be manually approved before they’re sold and distributed.

And yesterday, Facebook updated an October 2nd blog post about disclosing Russian-bought election interference ads to Congress to note that “Of the more than 3,000 ads that we have shared with Congress, 5% appeared on Instagram. About $6,700 was spent on these ads”, implicating Facebook’s photo-sharing acquisition in the scandal for the first time.

Stamos’ tweetstorm was set off by Lawfare associate editor and Washington Post contributor Quinta Jurecic, who commented that Facebook’s shift towards human editors implies that saying “the algorithm is bad now, we’re going to have people do this” actually “just entrenches The Algorithm as a mythic entity beyond understanding rather than something that was designed poorly and irresponsibly and which could have been designed better.”

Here’s my tweet-by-tweet interpretation of Stamos’ perspective:

He starts by saying journalists and academics don’t get what it’s actually like to implement solutions to hard problems, even though clearly no one has the right answers yet.

Facebook’s team has supposedly been pigeonholed as naive about real-life consequences, or as too technical to see the human impact of its platform, but the outcomes speak for themselves about the team’s failure to proactively protect against election abuse.

Facebook gets that people code their biases into algorithms, and works to stop that. But censorship that results from overzealous algorithms hasn’t been the real problem. Algorithmic negligence of worst-case scenarios for malicious usage of Facebook products is.

Understanding of the risks of algorithms is what’s kept Facebook from over-aggressively implementing them in ways that could have led to censorship, which is responsible but doesn’t solve the urgent problem of abuse at hand.

Now Facebook’s CSO is calling journalists’ demands for better algorithms fake news, because these algorithms are hard to build without becoming a dragnet that attacks innocent content too.

Content that is totally false might be somewhat easy to spot, but the polarizing, exaggerated, opinionated content many see as “fake” is tough to train AI to catch, because only nuance separates it from legitimate news. That’s a valid point.

Stamos says it’s not as simple as fighting bots with algorithms because…
