Survey data combined with qualitative interviews on 732 academic authors affiliated with The Conversation Canada found that toxic online public comments lead authors to self-censor and reduce their efforts to inform the public of research findings. Over 25% of the 732 respondents experienced toxic comments in comment sections, on social media, or in their email accounts. Toxic comments were most commonly ideological (70%), skeptical of expertise (47%), sexist (22%), or racist (16%).
Category: Misinformation 101
-
Does Developing a Belief in One Conspiracy Theory Lead a Person to be More Likely to Believe in Others?
Longitudinal research on participants in Australia, New Zealand, and the UK found small but significant effects indicating that increased belief in one conspiracy theory can lead to increased belief in other conspiracies at a later time. This research contributes to ongoing efforts to test the validity of the “rabbit hole” theory: the idea that believing one conspiracy theory makes a person more receptive to believing others.
-
Mapping, understanding and reducing belief in misinformation about electric vehicles
Electric Vehicle (EV) misinformation includes claims that EVs emit electromagnetic fields harmful to human health and that EVs are more likely to catch fire. Survey research on EV misinformation in Germany, Austria, Australia, and the USA found that participants more often agreed than disagreed with EV misinformation statements. Conspiratorial thinking was the strongest predictor of such beliefs, while education level was not a predictor.
-
Best practices for source-based research on misinformation and news trustworthiness using NewsGuard
Evaluations of NewsGuard, a service providing trust-reliability ratings for popular news sources, overall found the site to offer fairly rigorous assessments of source trustworthiness. When using the site, this research recommends engaging with the database critically by examining, for example, trustworthiness ratings at multiple timepoints (and whether ratings have changed), as well as the methods used to determine the reliability ratings. Researchers are cautioned against assuming that all content from an “untrustworthy” site has accuracy issues.
-
The role of narratives in promoting vaccine confidence among Indigenous peoples in Canada, the United States, Australia, and New Zealand: a scoping review
Results from a scoping review show that vaccine hesitancy among Indigenous peoples can best be addressed by engaging communities and community leaders in the (co)creation of culturally relevant messaging, which can include concise and coherent narratives. Mistrust in health care systems can be addressed through building respectful relationships among all parties.
-
Perceptions and Concerns About Misinformation on Facebook in Canada, France, the US, and the UK
Survey research on the populations of four countries (Canada, France, the UK, and the US) finds that perceptions of and concerns about misinformation on Facebook are strongly correlated overall. France differs from the other three countries in that these perceptions and concerns, while present, have decreased over the past half decade. Perceptions of incivility among those discussing politics add to the misinformation concerns.
-
Psychological booster shots targeting memory increase long-term resistance against misinformation
Misinformation inoculation interventions in the form of text and video can be effective, but their influence diminishes after approximately 30 days. Memory-focused "booster" interventions should incorporate developments in cognitive science to increase the durability, and ultimately the impact, of misinformation-countering interventions.
-
Identifying Misinformation About Unproven Cancer Treatments on Social Media Using User-Friendly Linguistic Characteristics: Content Analysis
New methods exist for identifying cancer misinformation online by building misinformation-detecting algorithms around core linguistic characteristics. These characteristics can include, for example, certain hashtags, expressions using absolutes, and specific URLs. Combined with manual labeling, these algorithms can aid detection efforts.
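To illustrate the approach described above, here is a minimal sketch of rule-based linguistic flagging. The marker lists (`ABSOLUTE_TERMS`, `SUSPECT_HASHTAGS`) and function names are hypothetical placeholders for illustration only; they are not the study's actual features, and any real system would pair such flags with manual review.

```python
import re

# Hypothetical marker lists -- illustrative only, not the study's actual features.
ABSOLUTE_TERMS = {"always", "never", "cure", "cures", "guaranteed", "miracle"}
SUSPECT_HASHTAGS = {"#naturalcure", "#cancercure"}

def extract_features(post: str) -> dict:
    """Collect simple linguistic characteristics from a social media post."""
    tokens = re.findall(r"#?\w+", post.lower())
    return {
        "absolutes": sorted(t for t in tokens if t in ABSOLUTE_TERMS),
        "hashtags": sorted(t for t in tokens if t in SUSPECT_HASHTAGS),
        "urls": re.findall(r"https?://\S+", post),
    }

def looks_suspect(post: str) -> bool:
    """Flag a post for manual review if any marker matches; not an auto-label."""
    features = extract_features(post)
    return any(features.values())
```

In practice, posts flagged by `looks_suspect` would be routed to human labelers rather than classified automatically, consistent with the summary's point that such algorithms aid, rather than replace, manual detection efforts.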
