Researchers claim they have developed software that can identify cyberbullies and trolls on Twitter with 90% accuracy.
The tool uses computer programmes called crawlers to gather details from profile pages, and also examines connections between different accounts – such as who follows whom.
Once the information has been collected, an algorithm classifies tweets as either cyberbullying or cyberaggression, and is tuned so that ordinary interactions are not flagged up by mistake.
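The pipeline the article describes – crawl account data, turn it into features, then score the account – could be sketched roughly as follows. The feature names, word list and weights here are invented for illustration; the researchers' actual features and model are not described in the article.

```python
# Illustrative sketch of a crawl-then-classify pipeline.
# The lexicon, features and weights below are assumptions, not the study's.

AGGRESSIVE_WORDS = {"idiot", "loser", "hate"}  # toy lexicon for illustration

def extract_features(account):
    """Turn crawled account data into numeric features."""
    words = [w.lower().strip(".,!?")
             for tweet in account["tweets"] for w in tweet.split()]
    aggressive = sum(w in AGGRESSIVE_WORDS for w in words)
    return {
        # share of the account's words that come from the aggressive lexicon
        "aggressive_ratio": aggressive / max(len(words), 1),
        # a simple network feature, standing in for "who follows whom"
        "followers_per_following": account["followers"] / max(account["following"], 1),
    }

def classify(features, weights, threshold=0.5):
    """Weighted linear score over the features -> 'abusive' or 'typical'."""
    score = sum(weights[name] * value for name, value in features.items())
    return "abusive" if score > threshold else "typical"

# Hypothetical weights: aggressive language pushes towards 'abusive'.
weights = {"aggressive_ratio": 3.0, "followers_per_following": -0.1}
account = {"tweets": ["You are an idiot", "hate you loser"],
           "followers": 10, "following": 200}
print(classify(extract_features(account), weights))
```

A real system would use far richer text, profile and network features, but the shape – crawled data in, per-account features out, a score against a threshold – matches the description above.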
The team behind the machine learning tool, based at Binghamton University in the US, says it could be used by social media platforms to help find and delete abusive accounts.
Computer scientist Jeremy Blackburn explained: “In a nutshell, the algorithms ‘learn’ how to tell the difference between bullies and typical users by weighing certain features as they are shown more examples.”
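Mr Blackburn's description – weights adjusted as the algorithm is shown more examples – is the basic shape of supervised learning. A minimal, toy version of that idea is a perceptron-style update loop; the features and labels below are invented, and this is not the Binghamton team's actual model.

```python
# Toy illustration of "learning by weighing features": a perceptron-style
# training loop. Features and training data are invented assumptions.

def train(samples, lr=0.1, epochs=20):
    """samples: list of (feature_vector, label), label 1 = bully, 0 = typical."""
    w = [0.0] * len(samples[0][0])  # one weight per feature
    b = 0.0                          # bias term
    for _ in range(epochs):
        for x, y in samples:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred  # shift weights only when the guess is wrong
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Assumed features: [fraction of aggressive tweets, fraction of hostile replies]
data = [([0.8, 0.9], 1), ([0.7, 0.6], 1),   # labelled bullies
        ([0.05, 0.2], 0), ([0.1, 0.1], 0)]  # labelled typical users
w, b = train(data)
predict = lambda x: 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
print(predict([0.9, 0.8]), predict([0.0, 0.1]))
```

Each wrong guess nudges the weights towards the correct answer, which is what "weighing certain features as they are shown more examples" amounts to in practice.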
The research comes after a group of celebrities, politicians and campaigners pledged not to publicise the abuse they receive on social media from trolls.
Ex-England striker Gary Lineker, Countdown presenter Rachel Riley and London Mayor Sadiq Khan were among those who made the promise, after research suggested hate speech is inadvertently spread on social media when insults, put-downs or worse are quoted or shared.
Match Of The Day presenter Lineker, with 7.4 million Twitter followers, said he was determined to “show online trolls the red card” after seeing the racist abuse directed at young black footballers.
Chelsea striker Tammy Abraham and Manchester United star Marcus Rashford, both 21, are among those to have been targeted in recent weeks.
Such incidents have seen social media platforms come under increased pressure to do more to protect their users from hateful and harmful content.
Facebook has admitted it needs help to regulate its platforms, and the UK government is considering giving Ofcom increased powers to fine social media firms in a bid to protect youngsters online.
Mr Blackburn said the tech developed at Binghamton, in New York State, would not be able to prevent abusive behaviour taking place, but could play a big role in identifying those responsible.
The university has said the algorithm can “identify abusive users on Twitter with 90% accuracy”.
“Our research indicates that machine learning can be used to automatically detect users that are cyberbullies, and thus could help Twitter and other social media platforms remove problematic users,” Mr Blackburn said.
“However, such a system is ultimately reactive: it does not inherently prevent bullying actions, it just identifies them taking place at scale.
“And the unfortunate truth is that even if bullying accounts are deleted, even if all their previous attacks are deleted, the victims still saw and were potentially affected by them.”