We Asked Facebook 12 Questions About the Election, and Got 5 Answers
Alex Stamos, chief security officer, Facebook:
In our April white paper, “Information Operations and Facebook,” we described the activity that we detected from a sophisticated threat actor that was spreading stolen information about specific political targets in the run-up to the U.S. election and using it to feed press stories that they could then amplify. We took steps to disrupt this activity and reported details to the relevant authorities.
In this white paper, we noted the challenge of attributing threat activity to foreign actors ourselves, but we specifically referenced the assessment of the U.S. government that this actor was tied to Russia’s intelligence services. This was an accurate statement of what we knew about this particular actor at the time, and it appropriately relied on the U.S. intelligence community’s public analysis.
We have been forthcoming at every opportunity about what we know about these information operations. In addition to our white paper, last month we disclosed advertising activity on our platform that we believe is linked to the Internet Research Agency, a different group from the one we described in April. We undertook this research on our own, and we named the group based on our best assessment because we weren’t aware of a comparable public report from the government.
2. Related to the above question: In July 2016, WikiLeaks complained that Facebook was censoring links to a page on its website that hosted the hacked D.N.C. emails. Your chief security officer, Alex Stamos, replied to WikiLeaks (in a tweet that has since been deleted) saying that the issue had “been fixed.” Links to WikiLeaks were subsequently restored. Did Facebook’s security team manually override a tool that flagged these fake accounts as suspicious? If so, who was responsible for the decision to restore access to WikiLeaks, despite having detected a suspicious campaign to promote its stolen documents? Did you notify law enforcement that your security team had intercepted a coordinated influence campaign?
Mr. Stamos: The temporary block of some WikiLeaks links by our automated spam-fighting systems had nothing to do with information operations. It was caused by WikiLeaks posting thousands of raw emails — several of which contained links to malicious phishing and spam sites found in industrywide block lists. We removed the block after we determined that the WikiLeaks links themselves were not harmful.
3. You recently announced you were adding 1,000 human moderators to the team that reviews Facebook ads. How many human ad reviewers did Facebook employ in November 2016? And what percentage of political ads that ran on Facebook during the 2016 election cycle did they review?
Joe Osborne, Facebook spokesman: We don’t usually share the sizes of specific teams at Facebook. Our teams review millions of ads around the world each week, and we use a mix of automated and manual processes. We’re not sharing an exact breakdown of the number of manually reviewed political ads.
4. Of the 1,000 human moderators you’re hiring, how many will be based in the United States? Will you be hiring moderators to review ads in non-English languages? What kinds of pre-hire screening will you do to make sure that these moderators are not affiliated with foreign governments, extremist groups, or others looking to influence the American political process?