Sharing Misinformation Habitual, Not Just Lazy or Biased

TEHRAN (ANA) - The spread of misinformation on social media is often blamed on users, but new findings from the University of Southern California challenge the misconception that political prejudice and a lack of critical thinking are solely responsible.

A new study published in the journal PNAS has found that people habitually share misinformation when seeking the social validation built into the fabric of platforms such as Facebook.

This finding challenges the misconception that political prejudice and a dearth of critical thinking are solely responsible for the spread of false news.

Various theories have been proposed to explain what drives the spread of misinformation, with most placing the blame on the user. Inattention, weak critical thinking, and poor discernment when sharing information online may lead people to overlook the credibility of content.

“The answers from prior research largely involved individual deficits (i.e., an individual-based problem), such as users’ low motivation or ability to discern information accuracy,” said senior author Wendy Wood, a researcher at the University of Southern California in the United States. “However, the broader social media system, especially the incentive structure on social media sites, promotes the formation of habits to share attention-getting news.”

Wood and colleagues explored how misinformation is shared by social media users. To do this, they conducted four studies with people who have a Facebook account; misinformation in these experiments included both false and partisan news.

For the first three studies, the researchers set up the same basic experiment, presenting all participants with a total of 16 true or false news headlines and asking whether they would share each one. Afterward, the subjects responded to a survey that assessed the strength of their social media sharing habits. Based on the survey, strongly habitual sharers were those quick to hit share on Facebook, while weakly habitual sharers were those who shared posts infrequently and cautiously.

In their initial study, the researchers noted that habitual users shared more headlines and were less discerning than weakly habitual users. People with weak habits were nearly four times more discriminating than those with strong habits. Further, the most habitual 15% of users accounted for 37% of the misinformation shared. “Habitual sharers shared on average five to six times more fake news than less habitual sharers,” said the study’s first author, Gizem Ceylan, a researcher at Yale University.

Next, the researchers wanted to see whether adding information on the accuracy of the headlines would lower the spread of misinformation. When participants evaluated the truth of a headline before choosing to share it, the overall spread of false news was reduced. But the most habitual users continued to show little discrimination when sharing news, even after assessing whether it was false. This suggests that habitual users paid little attention and were insensitive to the veracity of information. Weakly habitual users, on the other hand, were nearly twice as discerning as habitual users.

In the third experiment, subjects took into account whether the news was liberal or conservative before choosing to share it. The researchers found that habitual users were less discriminating and shared news even when it did not align with their own political preferences, whereas weakly habitual users were less likely to share news that went against their political beliefs.

In their final experiment, the team directly targeted the reward structure of social media platforms. They incentivized users in two ways: one condition rewarded sharing true news, while the other rewarded sharing false news.

“Incentives for accuracy rather than popularity (as is currently the case on social media sites) doubled the amount of accurate news that users shared,” said Ceylan. “Demonstrating that participants formed habits to share accurate or false information, they continued to share the rewarded information even when they knew they would no longer get rewards.”

If platforms redesign the ways in which they reward users, people may be encouraged to share verified news.

Social media platforms and their algorithms are designed to keep people engaged for as long as possible. As they spend more time on these platforms, habitual users are driven to keep posting and sharing sensational news that gains the most attention from other users, irrespective of its accuracy.

“Users with strong habits respond automatically and share information when cued by the platform,” said author Ian Anderson, a researcher at the University of Southern California. “They don’t consider response outcomes, such as misleading others or even spreading information that conflicts with their own politics.”

To slow the spread of misinformation, individuals who spend a lot of time on social media can monitor their usage. Another option would be to build in a one-minute delay before sharing, which could make users more cautious.

Besides these self-imposed limitations on use, efforts to address the spread of misinformation rest squarely on the shoulders of social media platforms. “Our results are designed to guide the restructuring of social media platforms and their underlying reward structure to promote sharing of accurate information instead of popular, attention-getting material,” said Wood.

Platforms like Twitter and Facebook amplify content that grabs eyeballs, pushing it to the top of people’s social media feeds. Posts that rise quickly through the ranks may often be surprising, provocative, partisan or outright false. But the researchers don’t argue for content moderation. Instead of immediately amplifying the spread of viral content, “social media platforms would do well to introduce an embargo or verification period that is triggered by a high volume of shares,” said Ceylan.

If the content were found to be untrue while under embargo, it simply would not be seen by many people. Limiting the spread of such content would significantly mitigate the damage. A more radical suggestion would be to alter the algorithm to prioritize accuracy, not engagement alone.
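As a rough illustration only (the study does not prescribe any implementation), a combined embargo and accuracy-weighted ranking could look something like the following Python sketch; the share threshold, weights, and field names here are assumptions made for the example, not details from the paper or from any real platform.

# Illustrative sketch: a hypothetical feed-ranking rule that weights verified
# accuracy alongside engagement and holds fast-spreading, unreviewed posts.
# The threshold, weights, and field names are assumptions for this example.

from dataclasses import dataclass

SHARE_EMBARGO_THRESHOLD = 10_000  # assumed share count that triggers a verification hold
ACCURACY_WEIGHT = 0.7             # assumed weight favoring accuracy over raw engagement

@dataclass
class Post:
    shares: int
    likes: int
    accuracy_score: float  # 0.0 (likely false) to 1.0 (verified), e.g. from fact-checkers
    verified: bool         # True once the post has passed a verification review

def needs_embargo(post: Post) -> bool:
    """Hold unverified posts that are spreading quickly until they are reviewed."""
    return post.shares >= SHARE_EMBARGO_THRESHOLD and not post.verified

def ranking_score(post: Post) -> float:
    """Rank by a blend of accuracy and engagement instead of engagement alone."""
    if needs_embargo(post):
        return 0.0  # keep fast-spreading, unreviewed content out of feeds for now
    engagement = min((post.likes + post.shares) / 100_000, 1.0)  # normalized engagement
    return ACCURACY_WEIGHT * post.accuracy_score + (1 - ACCURACY_WEIGHT) * engagement

Under a rule like this, a post that racks up shares quickly is simply held out of feeds until it has been reviewed, and ranked content is rewarded for accuracy as well as popularity.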

Another approach could be to add reaction buttons that are more nuanced than simple likes and shares. Wood suggests including “buttons that can automate truthful sharing such as ‘trust,’ ‘distrust,’ ‘fact-check,’ or ‘skip’”.

These changes might encourage users to ignore fake news and incendiary content. At the very least, they would make users consider more than likes and shares when deciding whether to share content.

In the future, the team hopes to evaluate their findings in a real-world social media environment with more diverse study populations.
