Facebook has revealed that it has shut down 5.4 billion fake accounts on its main platform so far this year – 2 billion more than were removed in all of 2018.
The news was announced in the tech giant’s latest transparency report on the removal of harmful content from its site.
The company also pulled 11.4 million hate speech posts and 11.6 million pieces of content exploiting children in just the past three months.
Experts, analysts and online watchdogs are bracing for a tsunami of fake and misleading content to flood social media as the 2020 presidential election nears, and Facebook is no exception.
The firm has been burning the midnight oil to pull fake accounts from the platform, having been bombarded with misinformation from phony accounts during the 2016 election.
CEO Mark Zuckerberg spoke with reporters on a call today, citing the number of fake accounts the firm has removed as evidence of how seriously it is taking the issue, and called on other platforms to follow suit, CNN reported.
‘Because our numbers are high doesn’t mean there’s that much more harmful content. It just means we’re working harder to identify this content and that’s why it’s higher,’ he said.
The report includes Facebook’s removal of two billion fake accounts from January to March that were set up by ‘bad actors’.
In the second quarter of this year, 1.5 billion accounts were removed, and from July through September the social media site disabled another 1.7 billion – the firm noted that about five percent of its user base is likely made up of fake accounts.
Facebook has also been working to remove inappropriate content from the platform.
Guy Rosen, VP of Integrity for Facebook, shared in the report, ‘Over the last two years, we’ve invested in proactive detection of hate speech so that we can detect this harmful content before people report it to us and sometimes before anyone sees it.’
‘Our detection techniques include text and image matching, which means we’re identifying images and identical strings of text that have already been removed as hate speech, and machine-learning classifiers that look at things like language, as well as the reactions and comments to a post, to assess how closely it matches common phrases, patterns and attacks that we’ve seen previously in content that violates our policies against hate.’
Starting in the second quarter of this year, the firm used new technology capabilities to remove hateful posts automatically.
However, this was only possible if the content was identical or near-identical to text or images previously removed by the company's content review team as violating its policy, or if the content very closely matched common attacks that violate that policy.
‘With these evolutions in our detection systems, our proactive rate has climbed to 80%, from 68% in our last report, and we’ve increased the volume of content we find and remove for violating our hate speech policy,’ Rosen explained.
The firm also removed 11.6 million pieces of content depicting child nudity and child sexual exploitation from the site, up from 5.8 million in the first quarter.
Over the last four quarters, the firm noted it proactively detected over 99% of the content it removed for violating this policy.