
Generative AI in the era of 'alternative facts'

The spread of misinformation on social media platforms threatens democratic processes, contributes to massive economic losses, and endangers public health. Many efforts to address misinformation focus on a knowledge deficit model and propose interventions for improving users’ critical thinking through access to facts. Such efforts are often hampered by challenges with scalability and by platform users’ confirmation bias. The emergence of generative AI presents promising opportunities for countering misinformation at scale across ideological barriers. In this paper, we present (1) an experiment with a simulated social media environment to measure the effectiveness of misinformation interventions generated by large language models (LLMs), (2) a second experiment with personalized explanations tailored to the demographics and beliefs of users, with the goal of alleviating confirmation bias, and (3) an analysis of potential harms posed by personalized generative AI when exploited for the automated creation of disinformation. Our findings confirm that LLM-based interventions are highly effective at correcting user behavior (improving overall user accuracy at reliability labeling by up to 47.6%). Furthermore, we find that users favor more personalized interventions when making decisions about news reliability.
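As a rough sketch of how a personalized LLM intervention of this kind might be generated (the prompt wording, user-profile fields, and model choice below are illustrative assumptions, not the paper’s actual implementation):

```python
# Illustrative sketch: asking an LLM to explain a reliability verdict,
# tailored to a user's demographics and beliefs. Prompt structure, profile
# fields, and model name are assumptions for illustration only.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def personalized_intervention(headline: str, verdict: str, profile: dict) -> str:
    """Generate a short, reader-tailored explanation of a reliability verdict."""
    prompt = (
        f"A reader ({profile['age']} years old, self-described as "
        f"{profile['political_leaning']}, interested in {profile['interests']}) "
        f"saw this headline:\n\n\"{headline}\"\n\n"
        f"Fact-checkers rated it {verdict}. In two or three sentences, explain "
        f"why, in terms this reader is likely to find relatable and persuasive."
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # any capable chat model would serve here
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


explanation = personalized_intervention(
    headline="Scientists admit climate data was fabricated",
    verdict="unreliable",
    profile={
        "age": 45,
        "political_leaning": "conservative",
        "interests": "local news and the economy",
    },
)
print(explanation)
```

The same generation loop, pointed the other way, is what makes the dual-use risk in (3) concrete: swapping the fact-checking instruction for a persuasive-falsehood instruction would yield disinformation tailored to the same profile.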

Details

PUBLICATION DATE
27 March 2024
SOURCE
MIT Open Publishing Services