Fake news isn't just bad news—it's bad for the bottom line, too

USC expert Kimon Drakopoulos, who studies how fake news spreads, is working on a behavioral experiment to study how people consume and internalize information. Credit: iStock

Note to Mark Zuckerberg: Beware of misinformation

Research from the USC Marshall School of Business makes the case that misinformation is a business risk for social media platforms and proposes information-based interventions to curb the spread of "fake news."

A recent paper suggests that although viral content is good for revenue in the short run (more viewership means more advertising), misinformation ultimately endangers the bottom line. The paper was written by Kimon Drakopoulos, assistant professor of data sciences and operations at USC Marshall, and Ozan Candogan, assistant professor of operations management at the University of Chicago's Booth School of Business.

"Our models show that engagement levels fall when users aren't warned of posts that contain misinformation," Drakopoulos said. "And they don't just fall; they fall to levels lower than when users are warned."

In the researchers' models, clicks fell by more than half when a platform offered no fake news warnings. Failing to intervene can lead to an even greater drop in engagement than warning users would, Drakopoulos said: once users realize they are getting fake news, whether from an external source or by some other means, they lose trust in the site that conveyed it.
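
To see why, consider the toy simulation below. It is a deliberately simple sketch, not the authors' model: the click probabilities, the share of fake posts and the rate at which trust erodes are all invented numbers. In the sketch, a warning label suppresses some clicks on flagged posts, while an unlabeled fake post slowly destroys the trust that every future click depends on.

```python
import random

random.seed(42)

def simulate(warn, rounds=1000, p_fake=0.2,
             warn_cost=0.3, betrayal_cost=0.07):
    """Toy dynamics (not the paper's model): trust scales the click rate."""
    trust, clicks = 1.0, 0
    for _ in range(rounds):
        fake = random.random() < p_fake
        # A warning label suppresses some clicks on flagged fake posts.
        click_prob = trust * (1 - warn_cost) if (fake and warn) else trust
        if random.random() < click_prob:
            clicks += 1
        if fake and not warn:
            # The user eventually learns, from outside the platform,
            # that the unlabeled post was fake; trust erodes.
            trust *= 1 - betrayal_cost
    return clicks

print("clicks with warnings:   ", simulate(warn=True))
print("clicks without warnings:", simulate(warn=False))
```

Under these made-up parameters, the unwarned platform wins a few extra clicks early on, then loses far more as trust collapses, ending well below the warned platform's total.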

Facebook has started placing a "more information" icon on shared posts, which takes users to the news item's original site but leaves it to them to decide whether that source is credible.

Why fake news spreads—engagement and misinformation

Drakopoulos wants to use his findings to understand how to optimize fake news warnings.

"Look at how fake news progresses," he said. "It looks the same as a contagion, but it's different. We want to know why fake news spreads and how we can prevent it.

"We are at the frontier of what can be achieved via different mechanisms of engagement and misinformation."

A key insight: Leverage network structure to contain the spread of fake news.

"Networks have multiplicative effects," Drakopoulos said. "Why not use a network intervention to exploit them?"

Facebook incentives

Moving forward, the researchers suggest, platforms must address the incentive problems in how content is created and monitored. For example, Facebook could reward users who report purveyors of fake news with reputation scores, monetary rewards or other privileges.

"That might make people think twice," Drakopoulos said. "Engagement might go down, but quality will go up leading to a long-term healthy engagement recovery."

The next step in his research agenda is to work with Gad Allon, Jeffrey A. Keswin Professor of Operations, Information and Decisions at the Wharton School, and Vahideh Manshadi of the Yale School of Management to develop a behavioral experiment demonstrating how users actually consume and internalize information, and which aspects of that behavior drive the current phenomena of political polarization and incomplete learning. The researchers expect to finish the experiment by the end of the year.

Too much information

"If the theoretical findings that initiated the project are true," Drakopoulos said, "too much information, surprisingly, leads to incomplete learning. The troublesome phenomena we see is the result of the abundance of information on social media."

The project, partially funded by USC Marshall's Institute for Outlier Research, will build on previous work Drakopoulos has done on the economic considerations of contagion intervention and social policy.

"Decisions are ultimately economic," he said.