To stop the spread of misinformation, an AI has been trained to flag the social media accounts that tell the most lies.

A cool new artificial intelligence system will start analysing social media accounts and rating them based on how likely they are to spread misinformation.

This is good news for society at large, and bad news for anyone who isn’t quite ready for their Facebook profile to get a -1 rating. One man’s truth is another’s fake news, but back here on planet Earth, we really need to do something about the rapid spread of misinformation.

From conspiracy theories to anti-vaccine misinformation, wildly ill-conceived COVID theories and the notion that the Earth is flat, people make stuff up all the time, either to fuel their own insecurities or to sow public discord that emboldens and enriches corporate vested interests (it's a tangled web).

Given how important science and facts are to people's overall wellbeing, a solution to stop the spread of these falsehoods may be at hand, thanks to a team at the Massachusetts Institute of Technology.

The artificial intelligence system newly developed at MIT could help counter the spread of disinformation. Designed to automatically detect disinformation narratives, as well as the individuals spreading those narratives within social media networks, it combines multiple analytics techniques to create a comprehensive view of where and how disinformation is spreading.
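
The article doesn't detail RIO's pipeline, but here's a minimal Python sketch of what "combining multiple analytics techniques" can look like in practice. Everything in it is a hypothetical illustration, not the RIO design: the toy retweet data, the post counts, and the choice of PageRank as the network signal are all our own assumptions. It blends a content signal (how often an account posts flagged material) with a network signal (how influential the account is in the retweet graph) into a single risk ranking.

```python
# Hypothetical illustration only: this is NOT the published RIO pipeline.
# It combines two analytics signals into one ranking, in the spirit of the
# "comprehensive view" described above:
#   1. a content score: the fraction of an account's posts flagged as
#      matching known disinformation narratives, and
#   2. a network score: PageRank over a retweet graph, so accounts whose
#      content travels furthest score highest.
import networkx as nx

# Toy data: (retweeter, original_poster) edges and (flagged, total) post counts.
retweets = [("a", "b"), ("c", "b"), ("d", "b"), ("a", "c"), ("d", "c")]
posts = {"a": (1, 10), "b": (8, 10), "c": (5, 10), "d": (0, 10)}

graph = nx.DiGraph(retweets)
influence = nx.pagerank(graph)  # rank flows from retweeter to original poster

def risk(account: str) -> float:
    flagged, total = posts[account]
    return (flagged / total) * influence.get(account, 0.0)

# Rank accounts by combined disinformation risk, highest first.
for account in sorted(posts, key=risk, reverse=True):
    print(f"{account}: risk={risk(account):.4f}")
```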

One might suggest that they tune in to Fox News as a starting point.

In the 30 days leading up to the 2017 French elections, the Reconnaissance of Influence Operations (RIO) program team collected real-time social media data to search for and analyse the spread of disinformation. In total, they compiled 28 million Twitter posts from 1 million different accounts. They were then able to detect disinformation accounts with 96 percent precision.
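
That 96 percent figure is a precision score: of all the accounts RIO flagged, 96 percent really were disinformation accounts. Here's a quick sketch of the arithmetic, using invented account labels purely for illustration:

```python
# Precision = true positives / everything flagged. The account names
# below are invented; they only illustrate how the metric is computed.
def precision(flagged: set[str], truly_disinfo: set[str]) -> float:
    if not flagged:
        return 0.0
    return len(flagged & truly_disinfo) / len(flagged)

flagged = {"acct1", "acct2", "acct3", "acct4"}        # accounts the system flagged
truly_disinfo = {"acct1", "acct2", "acct3", "acct5"}  # ground-truth labels

print(f"precision = {precision(flagged, truly_disinfo):.0%}")  # -> 75%
```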

The team envisions RIO being used by both government and industry, and beyond social media too, in newspapers and on TV. They're presently trying to figure out how narratives spread across European media outlets. A follow-on program is also underway to dive into the cognitive aspects of influence operations and how individual attitudes and behaviours are affected by disinformation.

It's not the only project of its kind: RAND Europe was commissioned by the UK Ministry of Defence's (MOD) Defence and Security Accelerator (DASA) to develop a method for detecting the malign use of information online. The study was contracted as part of DASA's efforts to help the UK MOD develop its behavioural analytics capability.

Among the study's key findings was that social media is increasingly being used by human and automated users to distort information, erode trust in democracy and incite extremism.

The truth is out there; it's just a matter of how we best identify it.