Facebook plans to tell lawmakers on Tuesday that 126 million of its users may have seen content produced and circulated by Russian operatives, many times more than the company had previously disclosed about the reach of the online influence campaign targeting American voters.

The company previously reported that an estimated 10 million users had seen ads bought by Russian-controlled accounts and pages. But Facebook had been silent about the spread of free content, even as independent researchers suggested it was seen by far more users than the ads were.

Tuesday’s planned disclosure, contained in draft company testimony obtained by The Washington Post ahead of three Capitol Hill hearings this week, comes as Facebook and other tech giants face mounting pressure to fully investigate the Russian campaign to influence American voters and reveal their findings to the public.

Google acknowledged for the first time Monday that it had found evidence that Russian operatives used the company’s platforms to influence American voters, saying in a blog post that it had found 1,108 videos with 43 hours of content related to the Russian effort on YouTube. It also found $4,700 worth of Russian search and display ads.

Twitter also plans to tell congressional investigators that it has identified 2,752 accounts controlled by Russian operatives and more than 36,000 bots that tweeted 1.4 million times during the election, according to a draft of Twitter’s testimony obtained by The Post. The company previously reported 201 accounts linked to Russia.

Although the Russian effort sprawled across many U.S.-based technology platforms, attention has focused most heavily on Facebook, which has faced repeated calls from lawmakers and researchers to dig more deeply into its data and disclose more of what it has found.

There have been similar calls within the company, where debates over what to reveal publicly have yielded cautious compromises that have left members of the company’s security team frustrated, according to people familiar with private conversations among Facebook employees.

Such concerns have focused on forensic evidence the security team collected about Russia’s online influence campaign that was, after months of internal company wrangling, not included in a 13-page “white paper” issued publicly in April, according to people familiar with the negotiations. The report spoke in general terms about “information operations” but included only a single page on the U.S. election and did not at any point use the word “Russia” or “Russian.”

Several independent researchers also say Facebook likely has the ability to search for data that could substantiate allegations of possible collusion between the Russian disinformation operation and the Trump campaign’s social media efforts. The possible sharing of content, the timing of social media posts and other forensic information known only to the company could help answer questions central to congressional investigations and the probe led by Special Counsel Robert Mueller.

“If there was collusion in the social media campaign between the Russians and the Trump campaign, they would have that evidence,” said Philip Howard of Oxford University’s Computational Propaganda Project. “It is a needle in a haystack for us outside researchers.”

The president and his campaign officials have denied colluding in any way with the Russians.

The demand for more information is likely to emerge as an important theme during the congressional hearings Tuesday and Wednesday, when lawmakers plan to press the companies for more details.

“I hope they will be more forthcoming,” said Sen. Mark Warner (D-Va.), the top Democrat on the Senate Intelligence Committee, one of three committees holding hearings on these issues this week. “I think there’s a lot more that Americans deserve to know.”

Facebook’s chief security officer, Alex Stamos, said in a statement to The Post on Monday that the company is doing everything it can to assist investigators.

“By publicly describing our understanding of information operations in April, and by fully cooperating with the various investigations into Russian interference, I’m confident that we are doing everything we can to be helpful and contribute our piece of the broader picture,” Stamos said. He did not directly respond to a question about reports of frustrations on his team.

Facebook spokesman Jay Nancarrow acknowledged the importance of probing company data for the possibility of collusion. “We believe this is a matter that government investigators need to determine, which is why we are fully cooperating with them to help them make their assessment,” he said.

Facebook has said Russia’s efforts to influence the election involved 470 accounts and pages that spent more than $100,000 on 3,000 ads that reached 10 million users. But outside researchers have said for weeks that free posts almost certainly reached much larger audiences – a point that Facebook will concede in its testimony on Tuesday.

Facebook’s general counsel, Colin Stretch, plans to tell the Senate Judiciary Committee that between 2015 and 2017, a single Russian operation in St. Petersburg generated about 80,000 posts and that roughly 29 million people potentially saw that content in their news feeds.

Because those posts were also liked, shared and commented on by Facebook users, the company estimates that as many as 126 million people may have seen material in their news feeds that originated from Russian operatives. The content was crafted to mimic American commentary on politics and social matters such as immigration, African American activism and the rising prominence of Muslims in the United States.

Stretch plans to characterize that content as a tiny fraction of what users see every day in their Facebook news feeds.

The company has long sought to play down the impact of manipulation of its platform during the 2016 campaign. Chief executive Mark Zuckerberg initially dismissed the importance of phony news reports spreading unchecked on Facebook, saying it was “a pretty crazy idea” to suggest that “fake news” could have affected the outcome of the election. He later apologized for the remark.

But from the first days after the election, many employees expressed frustration and dismay that a social media platform they had built helped elect a president many of them disliked deeply, according to current and former employees and others familiar with internal company conversations.

Some Facebook employees also expressed regret that the company had removed human editors from the “trending topics” feature seen by users after allegations surfaced several months before the November election about supposed liberal bias in how stories were selected and portrayed. Company officials, reluctant to be seen as favoring one part of the political spectrum, bowed to demands from conservatives for changes.

Zuckerberg had met with prominent conservative media personalities in the run-up to his decision to remove the human editors, emphasizing that Facebook was not a media company – even though it has since taken a greater role in policing content published by its 2 billion global users.

The potential for gaming Facebook’s algorithm with limited human oversight soon became clear, as demonstrably false news reports spread with increasing speed during the election. The company’s security team identified scores of sites that had spread phony news reports during the campaign – such as one claiming that Pope Francis had endorsed then-candidate Donald Trump. But a December blog post said the company intended to focus only on blocking the “worst of the worst.”

Not all publishers were happy with the effort. At one point, USA Today complained to the FBI that it had lost nearly 40 percent of its Facebook followers after the social network quietly removed millions of accounts.

Internal company debate flared early in the new year, after Facebook’s security team found extensive evidence supporting the conclusion of U.S. intelligence agencies that Russia had engaged in an extensive and well-coordinated campaign to influence the presidential election.

The April white paper included only a general description of this effort and the assertion that Facebook’s data “does not contradict” the conclusions of U.S. intelligence officials.

In the statement Monday, Stamos said, “While we were able to identify the malicious activity itself, we have to be realistic and honest about the challenges of attribution. Ultimately, we decided that the responsible thing to do would be to make clear that our findings were consistent with those released by the U.S. intelligence community, which clearly connected the activity in their report to Russian state-sponsored actors.”

But the compromise prompted grumbling among members of the security team, some of whom complained privately that the majority of their groundbreaking work was kept from reaching the public, according to several people who heard such complaints and spoke on the condition of anonymity to protect their relationships with Facebook.

Others familiar with internal company debates, who also spoke on the condition of anonymity, said pushes for the most detailed possible revelations have repeatedly run into broader concerns from Facebook’s legal and policy teams, along with fears that some kinds of deep data-mining might impinge on the privacy of legitimate users.

“We’re going to have to figure out what it means for this private social network to be democratically accountable,” said Peter Eckersley, chief computer scientist for the Electronic Frontier Foundation, a civil liberties group. “Facebook is going to have to really think about what their order of priorities really are. They are first and foremost a for-profit company, but do they have a responsibility to democracy?”

– Washington Post