Mitigating misinformation on social media

Using innovative AI tools and ‘big data’ analysis, University of Adelaide researchers are protecting the public against harmful online influence.

Social media has become an integral part of our daily lives; it shapes how we connect with friends, work, learn about our world, and entertain ourselves. However, this reliance also exposes us to misinformation, disinformation, and online communities designed to manipulate public perception, influence political action, or deceive. This threat is a key concern in Australia, with ‘Information Warfare’ highlighted as a priority for Australian defence and national security. Fortunately, University of Adelaide researchers are equipping us with the tools we need to navigate these challenges. Through studies of how digital interactions shape real-world behaviour and the development of tools to track false narratives and combat political misinformation, their work aims to better understand—and protect us from—harmful online influence.

Professor Carolyn Semmler, Professor Lewis Mitchell, Dr Rachel Stephens, and Dr Keith Ransom work collaboratively on a wide range of research in this area, including the impact of online interactions on in-person protest activity. In one instance, the team analysed a corpus of online interactions before, during, and after the anti-lockdown protests in Melbourne. The research showed that, in digital conversations, humour was often effective in reinforcing group identity and encouraging agreement, while attacks and challenges often created more conflict and disagreement. It also found that the in-person protesters’ actions were informed by a handful of influential internet users, who used these patterns of agreement and disagreement to their advantage.

"This is an important step toward making the links between online and offline behaviour that has consequences for the understanding and perceptions of social cohesion in Australia," Semmler says.

"It certainly shows that information and influence is directed by central players in online groups who use specific methods for gaining attention and reinforcing ‘in-group’ norms."

The team’s study, which has been funded by the Digi+FAME scheme, the Defence Science and Technology Group, and most recently an ASCA grant, is one of very few to use fine-grained analysis methods together with social network analysis. It’s one of many studies being conducted through the University’s Defence and Security Institute in the online influence space.
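The social network analysis side can be illustrated just as simply. The sketch below is an assumption-laden example rather than the team's actual pipeline: it builds a directed reply graph with the networkx library and uses centrality measures as a rough proxy for which accounts attract the most attention.

```python
# Illustrative only: identify the most 'central' accounts in a reply network.
# The interaction data is invented; real studies work with far larger corpora.
import networkx as nx

# Each tuple is (author_of_reply, author_replied_to).
replies = [
    ("user_b", "user_a"), ("user_c", "user_a"), ("user_d", "user_a"),
    ("user_a", "user_e"), ("user_c", "user_b"), ("user_d", "user_b"),
]

# Directed graph: an edge u -> v means u replied to v.
G = nx.DiGraph()
G.add_edges_from(replies)

# In-degree centrality approximates how much attention an account attracts;
# PageRank additionally rewards attention from well-connected accounts.
in_degree = nx.in_degree_centrality(G)
pagerank = nx.pagerank(G)

for user in sorted(pagerank, key=pagerank.get, reverse=True):
    print(f"{user}: in-degree={in_degree[user]:.2f}, pagerank={pagerank[user]:.3f}")
```

Accounts that score highly on measures like these are candidates for the ‘central players’ described above, although influence in practice depends on far more than network position.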

The team of researchers is also bringing together cognitive science and the large volumes of data generated by social media analyses to detect, model, and prevent false online narratives. They will combine this gathered information with their behavioural findings to create one of the first multidisciplinary tools for monitoring virtual narratives.

"To be able to counter misinformation and false narratives, you first need to be able to identify them through ‘narrative situational awareness’. This is what our research is working to be able to do," Mitchell says.

"So far, we’ve found many interesting insights, including that characteristics such as novelty or ‘truthiness’ are more likely to change peoples’ views. We can detect these characteristics using machine learning and natural language processing."

Another project underway at the University, Project MAGPIE, focuses on developing AI-driven tools to monitor and protect virtual information spaces. One of its tools, SOCRETIS, aims to verify the level of consensus on a given claim on social media.

"The topic is of particular concern because it is easy for a small number of agents to post large volumes of information or for people to get stuck in echo chambers, which may distort the apparent prevailing consensus on particular topics," Stephens says.

"One key finding is that, without help, people may be misled by particular agents sharing large numbers of messages to support or reject a particular claim. But if SOCRETIS is presented to highlight source diversity, people take this information into account."

What’s next?

With innovative tools, extensive data sets, and high-quality behavioural insights already emerging, research into online influence is making meaningful strides. However, Mitchell has identified a key challenge: scale.

"LLMs, deepfakes, and other generative AI tools all give the ability to generate lots of mis- and disinformation, cheaply and at scale, by almost anyone," he says. "And this is only going to be exacerbated by the platforms, which are limiting the data they allow researchers to get access to, making it financially prohibitive for academics to gain access data and study it."

AI is, on one hand, causing the problem, as it can generate unprecedented quantities of disinformation. Fortunately, it can also help with the solution by increasing the pace and effectiveness of the research aimed at combatting it.

Semmler says that, with help from their new ASCA grant, automating their research processes is a plausible next step.

"Automating some of the process without loss of validity in the inferences that can be made from the data is our next challenge," she says.

"This next stage will help us fight disinformation and identify co-ordinated and sophisticated efforts to influence people online with speed and accuracy."
