Dialogues with large language models reduce conspiracy beliefs even when the AI is perceived as human

An experiment with nearly 1,000 participants finds that conversations with artificial intelligence large language models (LLMs) can reduce participants’ confidence in conspiracies and other inaccurate beliefs. Because the effect holds whether participants believe the message is delivered by a human or by an LLM, the finding indicates that what matters most in persuasive discourse is the compelling content of the message, not its source.