Facebook has acknowledged that its artificial intelligence systems weren’t prepared to catch the livestream video of the New Zealand mosque shooting, which left 50 people dead.
In the wake of the tragic attack, many have questioned why Facebook’s AI, which works with human moderators to flag content that violates its policies, was unable to detect the shooter’s livestream.
Now, in a new blog post, Facebook’s vice president of integrity Guy Rosen has provided additional details on how its AI systems are trained to detect this kind of content.
He also laid out how Facebook’s AI needs to be improved in order to detect videos like the Christchurch stream in the future.
‘Many people have asked why artificial intelligence (AI) didn’t detect the video from last week’s attack automatically,’ Rosen explained.
‘AI has made massive progress over the years and in many areas, which has enabled us to proactively detect the vast majority of the content we remove.
‘But it’s not perfect,’ he added.
Rosen explained that Facebook’s AI systems are based on ‘training data’ – many thousands of examples of each type of content the systems are meant to detect.
Facebook’s AI has been successful at taking down nudity, terrorist propaganda and some examples of graphic violence.
‘However, this particular video did not trigger our automatic detection systems,’ Rosen said.
‘To achieve that we will need to provide our systems with large volumes of data of this specific kind of content, something which is difficult as these events are thankfully rare.
‘Another challenge is to automatically discern this content from visually similar, innocuous content – for example if thousands of videos from live-streamed video games are flagged by our systems, our reviewers could miss the important real-world videos where we could alert first responders to get help on the ground,’ he added.
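Facebook has not published details of its detection systems, but the limitation Rosen describes – a classifier can only recognise what its training data contains – can be illustrated with a deliberately simplified toy. The data, labels and functions below are all hypothetical, not Facebook’s actual pipeline:

```python
# Toy illustration: a detector "learns" only from labelled examples it has seen.
# All training data and labels here are invented for illustration.
from collections import Counter

def train(examples):
    """Build a word -> label-count table from (text, label) pairs."""
    model = {}
    for text, label in examples:
        for word in text.lower().split():
            model.setdefault(word, Counter())[label] += 1
    return model

def classify(model, text, default="unknown"):
    """Vote over the labels associated with each word; fall back if unseen."""
    votes = Counter()
    for word in text.lower().split():
        votes.update(model.get(word, Counter()))
    return votes.most_common(1)[0][0] if votes else default

training_data = [
    ("explicit nude photo", "nudity"),
    ("propaganda recruitment video", "terror"),
]
model = train(training_data)

print(classify(model, "nude photo upload"))    # matches a trained category
print(classify(model, "live attack footage"))  # no training examples -> unknown
```

The second query falls through to ‘unknown’ because nothing like it appears in the training set – the same gap Rosen describes when he says such events are ‘thankfully rare’.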
Facebook will need to continue to rely on a mix of AI systems and human moderators in its efforts to remove hateful, violent and disturbing content, Rosen said.
He pointed to Facebook’s decision last year to double its human moderator workforce from 15,000 to 30,000 employees, while calling on users to continue reporting content that they believe violates the site’s policies.
In the firm’s efforts to stop this kind of content from spreading, it has been deploying new techniques, such as an ‘experimental audio-based technology’ to identify ‘variants of the video.’
Facebook used this technology, which it had already been developing, to halt the spread of the Christchurch shooting video.
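Facebook has not said how its audio-based matching works. As a rough, hypothetical sketch, the idea of catching re-uploads whose picture has been edited but whose soundtrack is unchanged can be shown with a coarse audio fingerprint (real systems use far more robust perceptual hashing):

```python
# Hypothetical sketch of audio-based variant matching.
# Windows of samples are averaged and rounded, so tiny differences
# in the signal still produce the same coarse signature.

def fingerprint(samples, window=4):
    """Reduce a list of audio samples to a coarse, comparable signature."""
    return tuple(
        round(sum(samples[i:i + window]) / window, 1)
        for i in range(0, len(samples) - window + 1, window)
    )

def same_audio(a, b):
    """Treat two clips as variants if their fingerprints match."""
    return fingerprint(a) == fingerprint(b)

original = [0.1, 0.2, 0.1, 0.3, 0.9, 0.8, 0.7, 0.9]
# A re-upload with the picture cropped or recoloured keeps nearly
# the same audio track (here: slight noise added to every sample):
variant = [s + 0.01 for s in original]

print(same_audio(original, variant))
```

Because the comparison keys on the audio rather than the pixels, visual edits that defeat straightforward video hashing would not defeat a match like this.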
The tech giant is also exploring how it can use AI to moderate livestreams.
Many have suggested that Facebook add a delay to live broadcasts, similar to those used by some live TV channels. Rosen said, however, that Facebook is not currently considering one, as a delay would only complicate the detection process.
‘There are millions of Live broadcasts daily, which means a delay would not help address the problem due to the sheer number of videos,’ he said.
‘More importantly, given the importance of user reports, adding a delay would only further slow down videos getting reported, reviewed and first responders being alerted to provide help on the ground.’
The update follows a previous post from Facebook in which it said the gunman’s live 17-minute broadcast was viewed fewer than 200 times while it was live, and that the first user report didn’t come in until 12 minutes after it ended.
All told, the video was viewed some 4,000 times before it was removed from Facebook.
In the first 24 hours following the attack, Facebook said it took down more than 1.2 million videos of the incident ‘at upload, which were therefore prevented from being seen on our services.’
About 300,000 additional copies were removed after they were posted, Rosen said.