Facebook has teamed up with the Metropolitan Police to prevent live-streaming of terror attacks, six months after the Christchurch mosque gunman streamed his actions using the social network’s Live feature.
From October, the London force will provide Facebook with footage of its firearms training, taken from the body cameras worn by officers.
It will be used to train artificial intelligence systems that Facebook says will be able to detect and automatically remove live-streamed firearms attacks.
“With this initiative, we aim to improve our detection of real-world, first-person footage of violent events and avoid incorrectly detecting other types of footage such as fictional content from movies or video games,” the firm said in a post.
The Metropolitan Police, which was approached by Facebook after the Christchurch attack, welcomed the move.
“The technology Facebook is seeking to create could help identify firearms attacks in their early stages and potentially assist police across the world in their response to such incidents,” said Neil Basu, assistant commissioner for specialist operations, the UK’s top counter-terrorism police officer.
The move is the latest measure from Facebook to limit the use of its live streaming feature in the wake of the attacks in New Zealand, in which 51 people died.
In May, the social media giant announced that anyone sharing “violating content” – such as a statement from a terrorist group without context – would be blocked from using Facebook Live under a “one-strike policy”.
The Metropolitan Police is one of a number of police forces around the world providing imagery to Facebook as part of the initiative.
The force will give its footage to the Home Office, so that it can be shared with other technology companies to develop similar technology.
In the aftermath of the Christchurch attack, Facebook faced heavy criticism for its lack of response to New Zealand officials, who demanded strong action to prevent a repeat of the incident.
New Zealand’s privacy commissioner said the firm’s silence was “an insult to our grief”.
Over the past two years, Facebook says it has removed more than 26 million pieces of content related to terrorist groups such as Islamic State and al-Qaeda.
It has since expanded the techniques it uses to what it calls “a wider range of dangerous organisations”, including white supremacist groups, banning more than 200.