
From Deepfakes to Fake News, S. Shyam Sundar Takes On the Internet’s Bad Actors

AAAS Member S. Shyam Sundar.

While so-called “fake news” hasn’t always been a household term, AAAS Member and researcher S. Shyam Sundar’s passion for truth dates back to his days moonlighting as a print journalist while attending college in his native India.

“Fake news has been with us for a very long time,” says Sundar, a professor at Penn State University and co-director of its Media Effects Research Laboratory.

In America, the propagation of falsehoods is as old as the free press itself. Sundar's interest in the topic began at the dawn of internet journalism, decades before former President Donald Trump's rhetoric pushed anti-media buzzwords into mainstream dialogue. In 1994, Sundar, then a doctoral student at Stanford University, set out to understand how the source of online news shapes the way information is perceived.

In one of the first studies of its kind, he mocked up his own news site on which participants viewed the same articles but received different information about how the featured content had been curated. Some were told that professional journalists or computer algorithms made the selections. Others were informed that their peers – users of the news site – picked the stories. The result: people favored other users, viewing peer-selected articles as more newsworthy and higher quality.

“For me, that set off alarm bells in those days,” Sundar says. “You trust your friend because they are a friend, but you don’t stop to think that they don’t have journalistic expertise.”

From this research, Sundar noticed patterns in cognitive heuristics – mental shortcuts that allow people to make quick decisions. These rules of thumb often involve generalizations and bias. One common example involves walking faster when you see a person with a hood up in a dark alley.

On the internet, people often leverage the “bandwagon heuristic,” fueled in part by the widespread availability of consumer product reviews. We are more likely to adopt a popular idea or belief and skip our own processes of evaluation, entrusting others to do the work for us instead. “We have come to rely on this idea that if something is good for someone else, it must be good for me too,” he explains.

The “realism heuristic” – the idea that seeing is believing – proves most dangerous thanks to the emergence of deepfakes, Sundar says.

Deepfakes use a form of artificial intelligence called "deep learning" to generate videos of fake events. In many cases, one person's face is replaced by a computer-generated face that looks and sounds like someone else, usually a famous person. One example is a video of Facebook founder Mark Zuckerberg talking about having "billions of people's stolen data." Recent deepfake videos have also depicted Russian President Vladimir Putin declaring peace with Ukraine.

“These deepfakes are so clever, it’s very hard for the human eye to discern,” says Sundar, adding that the realism heuristic can have deadly consequences.

Between 2017 and 2020, more than 20 people were lynched by mobs in rural India. The victims were presumed to be kidnappers because of a grainy viral video circulating on WhatsApp that appeared to show two men on a motorcycle snatching children. The video, which viewers mistook for actual footage, was a deceptively edited clip from a public service announcement produced in Pakistan. Consequently, the Indian government pressured Meta, which owns WhatsApp, to limit the spread of false information in the messaging app's largest user base.

In 2018, a research team led by Sundar received funding from WhatsApp to study how modality affects misinformation. His research found that a fake story presented as video is judged more credible, and is more likely to be shared, than the same story told through other modalities such as audio or text.

WhatsApp then began limiting the number of times a piece of content can be forwarded, a means of slowing how quickly information moves through the app. Detecting false information remains difficult, though, since WhatsApp uses end-to-end encryption to ensure that nobody, including its own employees, can read or see what is being sent.

Vetting news can be tedious, and in the age of information overload, even the smartest of people can let heuristics lead them astray. Sundar references a 2016 BuzzFeed News analysis that showed Facebook engagement with fake news exceeding engagement with real news.

"Most of us thought that this was some fringe stuff on the Internet. You never think that normal people you know are falling for fake news," he says. "The BuzzFeed article convinced us that this is a big problem, and we need an urgent solution."

Facebook, to its credit, has since made strides in alerting readers to suspicious information on its platform.

“A lot of the efforts social media platforms like Facebook have been putting in can broadly be considered content moderation,” Sundar notes. “It’s all about trying to go in with a fairly soft touch and moderate content that might fall on the edges of real versus fake.”

Facebook, he adds, works with international fact-checking organizations to verify content, often flagging potentially false posts, "downranking" misleading articles by pushing them lower in users' feeds, and outright banning repeat offenders.

Many of these changes came after 2016, when Facebook landed in hot water over the misinformation that had abounded on the platform. Still, Sundar says there is more that other social media platforms can, and should, do to protect readers.

Looking forward, he aspires to improve machine learning methods for flagging fake news, separating it from native advertising, misreporting, commentary and satire. The same signals the algorithms rely on can also help readers spot a fake news article on their own. For example, a recent registration date for a news site's domain might indicate it is not an established news source. If the site's address ends in ".com.co," that could also be suspicious, Sundar points out. As director of Penn State's Center for Socially Responsible Artificial Intelligence, he is currently working with colleagues at the university's College of Information Sciences and Technology to develop more sophisticated algorithms.
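To make those surface signals concrete, here is a minimal Python sketch, not Sundar's or Penn State's actual algorithms, that scores a news URL on the two heuristics mentioned above: a recently registered domain and a deceptive ".com.co"-style ending. The two-year age threshold and the list of suspicious endings are assumptions chosen for illustration, and the domain's registration date is supplied by the caller rather than looked up.

```python
# Illustrative sketch only; not the detection algorithms described in the article.
from datetime import date
from urllib.parse import urlparse

SUSPICIOUS_ENDINGS = (".com.co",)   # assumed list, for illustration
MIN_DOMAIN_AGE_DAYS = 2 * 365       # assumed threshold: roughly two years

def fake_news_flags(url: str, registered_on: date, today: date | None = None) -> list[str]:
    """Return heuristic warning flags for a news site URL.

    `registered_on` is the domain's registration date, supplied by the
    caller (a WHOIS lookup is outside the scope of this sketch).
    """
    today = today or date.today()
    host = urlparse(url).hostname or ""
    flags = []

    # Heuristic 1: very new domains are less likely to be established outlets.
    if (today - registered_on).days < MIN_DOMAIN_AGE_DAYS:
        flags.append("domain registered recently")

    # Heuristic 2: endings like ".com.co" often imitate legitimate ".com" sites.
    if host.endswith(SUSPICIOUS_ENDINGS):
        flags.append("suspicious domain ending")

    return flags

# Example: a look-alike domain registered a few months earlier trips both flags.
print(fake_news_flags("https://abcnews.com.co/story", date(2016, 6, 1), today=date(2016, 11, 1)))
```

A real classifier would combine many more signals than these two, but the sketch shows how the same cues an algorithm checks can double as a quick manual checklist for readers.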

“We need to figure out ways to gatekeep fake news that will help users process information in a way that’s meaningful,” says Sundar.
