People are less inclined to accurately judge the truth of a headline when asked about sharing or otherwise engaging with it on social media, a new study in Science Advances suggests. The results indicate that content sharing — a key participatory feature of most social media platforms — may inherently produce a mindset that clouds judgment, impeding the ability to discern the truth.
"If in the same newsfeed you see cat memes, your cousin's baby photos, and serious news about global events, it can be particularly hard to think carefully about the news," said Ziv Epstein, a Ph.D. student in the Human Dynamics group at the Massachusetts Institute of Technology (MIT) Media Lab and lead author of the study. "This stands in contrast to other information environments, which focus context on a single mindset, such as a newspaper."
In their online experiment involving 3,157 Americans, Epstein and colleagues gave participants a series of either COVID-19 or political news headlines that may or may not have been accurate. They asked varying sequences of questions about these headlines — sometimes only about their accuracy, or only whether they would share, like, or comment on them; other times, about their accuracy and then whether they would share, and vice versa.
They found that those who were asked about sharing before accuracy were 35% worse at discerning the truth compared with those who were asked only about accuracy. When asked about accuracy before sharing, the participants were still 18% worse at discerning true headlines compared with accuracy-only participants.
Accuracy Competes for Attention
The researchers suggest that their findings are reflective of the mindset that social media produces. In the social media context, users are encouraged to endlessly scroll their feeds and engage, becoming distracted and emotionally stimulated by a vast, fleeting array of content from their networks. "The platforms are creating an attention environment in which people's attention is drawn to other factors," said David Rand, a professor at the MIT Sloan School of Management and co-author of the study, during a press briefing at the AAAS 2023 Annual Meeting. "If you take those same people and you put them in a different context … then they would be much more discerning. But the social media attention ecosystem is one that does not prioritize [accuracy]."
Prior studies have suggested that such environments may breed susceptibility to believing "fake news," and the researchers' findings emphasize a key aspect of this vulnerability. When deciding what content to share on social media, people could be more susceptible to spreading falsehoods that they wouldn't normally believe, simply by being too distracted by other motivations to accurately judge whether the content is true.
"There are social motivations for sharing that crowd out accuracy, such as pleasing friends and followers, or signaling group membership," said Epstein. When asked about opportunities for social media platforms to deprioritize these motivations, Epstein proposed ways of engaging with content that focus less on sharing with followers. "Platforms could emphasize building connections between content rather than directly sharing content with an audience," he said. "For example, platforms like Are.na and Pinterest achieve this by allowing users to connect or pin content to channels."
Could Nudges Reduce the Spread of Misinformation?
Epstein also suggested that accuracy nudges — simple prompts that redirect a user's attention to accuracy — may be a simple but effective solution for social media companies to implement.
Epstein and his colleagues observed that simply asking about accuracy improved truth discernment — making people less likely to share false headlines compared with those who were only asked about sharing. This is a hopeful result, affirming prior findings that suggest accuracy nudges could generally be effective at reducing misinformation. But the onus is on social media companies to implement such features, the researchers said. "To some extent, as individuals, we can try and take action to make ourselves be more vigilant, but really it's a systemic problem that needs to be addressed at the platform level," said Rand.
"The take-home is less that they need to get rid of the social stuff, but they need to add some layers of things that make people think about accuracy," suggested Gordon Pennycook, an associate professor of behavioral science at the University of Regina and co-author of the study. Along with Epstein and Rand, Pennycook has studied the efficacy of accuracy nudges in different social media contexts.
When asked whether accuracy nudges could still be effective when social media is used to communicate about emotionally evocative, evolving developments — such as conflicts, natural disasters, or protests — Pennycook indicated that there may be plenty of situations on social media where accuracy nudges would have limited effect.
"There are some contexts — perhaps many — where accuracy may not be a primary concern," he said. "In cases where the truth is hard to discern or there's a lack of information, such as in the early stages of a natural disaster, it is naturally going to be difficult to prioritize accuracy."
Questions remain as to how effective accuracy nudges can be in different contexts, such as over longer periods of time or within or across certain partisan or geographic groups. Future work will expand on recent studies to understand how the social media mindset might affect different users across the world, the researchers said.
Accuracy and User Satisfaction
Epstein and colleagues highlight that social media companies and policymakers should take note if they want to reduce misinformation — and if companies want to improve the user experience.
"Most people don't like engaging with misinformation," Rand said. "To some extent, users would prefer platforms where there was less misinformation."
Rand suggested that accuracy prompts could be implemented on social media platforms such that they target specific instances of misinformation. Other efforts, such as crowdsourced fact-checking, have gained popularity as potential means to fight misinformation, with variable success in recent pilots on Twitter.
Epstein further suggested that the paradigm of maximum engagement that pervades many social media platforms — and intertwines with motivations to share — should be reassessed. "Thinking more about user satisfaction or long-term metrics … maybe it's less about more myopic engagement, but more about how these things are integrated into people's lives."