‘Trust/Distrust’ Buttons Might Reduce Misinformation Spread On Social Media

Social media application logos displayed on a smartphone in Arlington, Virginia, on May 28, 2020. (Olivier Douliery/AFP via Getty Images)
Irina Antonova
6/15/2023
Updated: 6/15/2023
Analysis

To trust or not to trust: that is the question many of us face when choosing what information to believe today. The choices can be so overwhelming that science wants to come to the rescue by adding extra buttons to social media posts.

This comes after researchers from University College London (UCL) ran experiments that added ‘trust’ and ‘distrust’ buttons to social media posts, alongside the existing ‘like’ buttons, and found that the change roughly halved the spread of misinformation.

“Over the past few years, the spread of misinformation, or ‘fake news’, has skyrocketed, contributing to the polarisation of the political sphere and affecting people’s beliefs on anything from vaccine safety to climate change to tolerance of diversity. Existing ways to combat this, such as flagging inaccurate posts, have had limited impact,” said Professor Tali Sharot, a co-lead author of the study, in a statement.

Sharot is from the UCL Department of Psychology & Language Sciences, the Max Planck UCL Centre for Computational Psychiatry and Ageing Research, and the Massachusetts Institute of Technology.

“Part of why misinformation spreads so readily is that users are rewarded with ‘likes’ and ‘shares’ for popular posts, but without much incentive to share only what’s true,” she said.

“Here, we have designed a simple way to incentivize trustworthiness, which we found led to a large reduction in the amount of misinformation being shared.”

What Was In The Experiments

The scientists tested 951 participants across six experiments in which they changed the incentive structure of a social media platform.

Participants could share accurate or inaccurate news articles, and those who received the articles could ‘like’ or ‘dislike’ them, as well as choose to ‘trust’ or ‘distrust’ them.

The study found that people used the ‘trust’ and ‘distrust’ buttons more than the ‘like’ and ‘dislike’ ones, and that they began to post more trustworthy information in order to earn ‘trust’ reactions themselves.

When the researchers analyzed the results with computational modelling, they found that the trust/distrust buttons made users more careful about the information they reposted and shared.
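
To make the idea concrete, here is a toy simulation, written for this article and not taken from the study, of how rewarding ‘trust’ reactions rather than ‘likes’ alone might shift what a user chooses to share. The payoff values and the simple learning rule are illustrative assumptions only.

```python
# Toy illustration (not the authors' model): how different reward structures
# might nudge a simulated user toward sharing accurate or inaccurate posts.
# All payoff values and the learning rule are assumptions for this sketch.
import random

def run_sim(reward="likes", rounds=2000, lr=0.05, seed=0):
    rng = random.Random(seed)
    p_share_accurate = 0.5  # initial probability of sharing the accurate post
    for _ in range(rounds):
        shares_accurate = rng.random() < p_share_accurate
        if reward == "likes":
            # Assumption: sensational, inaccurate posts attract slightly more 'likes'.
            payoff = 0.8 if shares_accurate else 1.0
        else:  # reward == "trust"
            # Assumption: 'trust' reactions mostly go to accurate posts.
            payoff = 1.0 if shares_accurate else 0.2
        # Simple reinforcement: nudge the sharing policy toward whichever choice paid off.
        direction = 1 if shares_accurate else -1
        p_share_accurate += lr * payoff * direction
        p_share_accurate = min(max(p_share_accurate, 0.01), 0.99)
    return p_share_accurate

print("likes-only incentive -> p(share accurate) ~", round(run_sim("likes"), 2))
print("trust incentive      -> p(share accurate) ~", round(run_sim("trust"), 2))
```

In this sketch, the ‘likes’ condition drifts toward sharing the inaccurate post while the ‘trust’ condition drifts the other way, mirroring the direction of the effect the study reports.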

In a separate paper published in the journal Cognition, Sharot and her team also found that people were more likely to share information they had seen repeatedly, because repeated information was perceived as more accurate.

Practical Benefits Of The Study

The researchers said their study has practical benefits because the approach could help reduce the spread of misinformation on social platforms.

“Buttons indicating the trustworthiness of information could easily be incorporated into existing social media platforms, and our findings suggest they could be worthwhile to reduce the spread of misinformation without reducing user engagement,” said Laura Globig, co-lead author of the paper and a PhD student at the same university.

“While it’s difficult to predict how this would play out in the real world with a wider range of influences, given the grave risks of online misinformation, this could be a valuable addition to ongoing efforts to combat misinformation,” Globig concluded.

Potential Challenges 

However, the researchers note there are potential challenges to the implementation of ‘trust’ and ‘distrust’ buttons on social platforms, with subjectivity and abuse of the buttons being primary considerations.

This is because determining the threshold for trust or distrust can be challenging due to the subjective nature of content evaluation. Additionally, such buttons may be vulnerable to abuse, where users might exploit them to promote personal biases or engage in targeted harassment.

Algorithmic complexities are another challenge, as implementing ‘trust’ and ‘distrust’ buttons requires robust algorithms to prevent gaming the system. Thus, platforms would need to develop sophisticated algorithms that can distinguish genuine user feedback from malicious or manipulative actions.
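
As a purely hypothetical sketch of what such a safeguard might look like, the snippet below down-weights ‘trust’/‘distrust’ votes from accounts that vote in rapid bursts, one possible sign of coordinated manipulation. The field names, thresholds, and the burst heuristic are invented for this example and are not drawn from the study or any real platform.

```python
# Hypothetical heuristic (not from the study): discount 'trust'/'distrust' votes
# cast in rapid bursts, which can indicate coordinated manipulation.
# Field names, thresholds, and the burst rule are assumptions for this sketch.
from collections import Counter
from datetime import datetime, timedelta

def weighted_tally(votes, burst_window=timedelta(minutes=5), burst_limit=20):
    """votes: list of dicts like {"user": str, "value": "trust" or "distrust", "time": datetime}.
    Returns a Counter of vote values, down-weighting users who vote in rapid bursts."""
    by_user = {}
    for vote in votes:
        by_user.setdefault(vote["user"], []).append(vote)

    tally = Counter()
    for user, user_votes in by_user.items():
        times = sorted(v["time"] for v in user_votes)
        # Flag the user if more than `burst_limit` votes fall inside any short window.
        bursty = any(
            sum(1 for t in times if start <= t < start + burst_window) > burst_limit
            for start in times
        )
        weight = 0.1 if bursty else 1.0  # assumed penalty for burst-like voting
        for v in user_votes:
            tally[v["value"]] += weight
    return tally
```

A real system would combine many such signals, but even a simple weighting like this shows how a raw vote count and a manipulation-aware tally can differ.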

The risk of polarization should not be underestimated either. The introduction of explicit ‘trust’ and ‘distrust’ buttons may exacerbate polarization on social media platforms, potentially leading to echo chambers, propaganda, or the suppression of dissenting opinions.
Irina Antonova holds an M.S. in Genetics (from Bulgaria) and a Ph.D. in Biotechnology (from Australia). Throughout her career, Irina has worked as a scientist in academia and industry and has taught at universities. She enjoys learning about the mysteries of mind, body, life, and the universe.