4-minute read
Our social networks, the digital companions to our daily lives, are awash with conspiracy theories: some so bizarre they border on the absurd, others built on dangerous chains of logical fallacies.
From the claim that the 1969 Apollo 11 moon landing was directed by Stanley Kubrick, to the notion that America's technological edge stems from reverse-engineered alien tech at Area 51, to more recent theories about a flat Earth, deep state operations, and QAnon, the list of conspiracy theories is endless. The internet has undeniably become a breeding ground for such phenomena. And with the rapid evolution of AI, the proliferation of deepfake techniques that blur the line between reality and fiction through voice and image manipulation has only intensified the problem. These concerns about technological advancement remain challenges we must confront and resolve, including implementing an effective AI strategy to mitigate the spread of misinformation.
In an atmosphere rife with fear of the future, the recent study titled "Durably reducing conspiracy beliefs through dialogues with AI" stands out as a refreshing counter-current. Just by reading the title, doesn't it sound somewhat counterintuitive?
The all-seeing eye—someone is watching you.
Let's delve into what this paper is all about.
The trio of authors, Thomas H. Costello, Gordon Pennycook, and David G. Rand, hail from MIT's Sloan School of Management and Cornell University's Department of Psychology, bringing together rich academic insight from social psychology, behavioral science, management, and behavioral economics. The paper poses an intriguing question: can engaging in dialogue with AI durably reduce the beliefs of conspiracy theorists? The answer is yes.
In this study, the researchers enlisted 2,190 participants, asking them to describe a conspiracy theory they believed in and to elaborate on the evidence or experiences supporting their belief. Participants then engaged in three rounds of dialogue with ChatGPT, which generated responses based on the evidence provided by the participants, to see if it could diminish their belief in the conspiracy theory.
Initially, participants had to describe a conspiracy theory they believed in, and then they began their debate with AI.
Before and after the dialogue with the AI, participants rated their belief in the conspiracy theory so that the AI's impact could be measured. The researchers also followed up 10 days and two months after the experiment to evaluate how long the effect lasted.
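For readers curious how such a setup works mechanically, here is a minimal sketch of a three-round debunking dialogue against a chat-completion API. It assumes the OpenAI Python SDK; the model name, prompts, and function below are illustrative assumptions, not the authors' published code.

```python
# Minimal sketch of a personalized debunking dialogue, assuming the OpenAI Python SDK.
# Model name, prompts, and the function itself are illustrative, not the paper's code.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def debunking_dialogue(conspiracy: str, evidence: str, rounds: int = 3) -> list[str]:
    """Run a short back-and-forth where the model addresses the
    participant's own stated evidence for a conspiracy theory."""
    messages = [
        {"role": "system", "content": (
            "You are a respectful interlocutor. Address the person's specific "
            "evidence with factual, sourced counterarguments; do not mock them."
        )},
        {"role": "user", "content": f"I believe: {conspiracy}\nMy evidence: {evidence}"},
    ]
    replies = []
    for i in range(rounds):
        response = client.chat.completions.create(
            model="gpt-4-turbo",  # a GPT-4-class model, as in the study
            messages=messages,
        )
        reply = response.choices[0].message.content
        replies.append(reply)
        messages.append({"role": "assistant", "content": reply})
        if i < rounds - 1:
            # In the real study the participant types a rebuttal here; we read one from stdin.
            messages.append({"role": "user", "content": input("Your reply: ")})
    return replies
```

The point of the sketch is the structure: unlike a generic fact-check, the model sees the participant's own evidence and responds to it directly across three rounds.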
The study found that after conversing with the AI, participants' belief in conspiracy theories decreased by an average of about 20%, and the effect persisted two months after the experiment. Remarkably, even staunch believers in the conspiracy theory that Donald Trump lost the 2020 election due to electoral fraud were significantly affected, leading them to unfollow related accounts and stop sharing content associated with those theories on algorithm-driven social media platforms.
"I expect AI to be capable of superhuman persuasion well before it is superhuman at general intelligence, which may lead to some very strange outcomes."—Sam Altman, October 25, 2023
We know that the proliferation of conspiracy theories is fueled by social media algorithms and by platforms' reluctance to take responsibility for unverified misinformation, which as a result spreads even faster and wider, continuing the cycle of reinforcing conspiracy theorists' pre-existing views. But how can a large language model (LLM), trained on the same internet data these "believers" consume, manage to reduce their beliefs?
One reason lies in the "dialogue" itself. Conversing with a large language model yields more targeted responses and rebuttals than most general social media posts, offering a more personalized form of interaction. The paper suggests that evidence tailored to the individual is more likely to change the beliefs of conspiracy theorists. Another research paper has found that content generated by large language models is more persuasive than arguments written by humans, and that ChatGPT's debating skills surpass those of humans, more often bringing opponents toward agreement.
However, while such dialogue can induce self-doubt among conspiracy theorists, it is not yet capable of completely eradicating their beliefs.
A second reason may lie in AI's unique "persuasion strategy."
Beyond the vast training data that large language models possess, which allows them to counter conspiracy theories with rich, fact-based information, another factor is AI's unique persuasion strategy, as discussed in author David G. Rand's Twitter thread. Human conversations, especially those about controversial public issues, tend to appeal to emotion, which can exacerbate disagreements. When our views are challenged, we often react defensively, triggering an internal process that reinforces our existing beliefs. In contrast, an emotionless AI offers a noticeably more objective, fact-focused mode of communication, helping to keep the dialogue rational and evidence-based.
Perhaps it's because AI hasn't been anthropomorphized by these participants. As @noobestjohn suggests in a discussion thread: "I guess it might be because people think they are 'doing their own research' when using Google or ChatGPT. They feel like they are changing their own minds rather than somebody else persuading them."
Regardless, the "personality" of AI helps explain the experimental results: we tend to trust AI more than we trust other people. The perceived impartiality of AI, marked by its lack of personal interests and emotional biases, makes it appear more neutral and objective. In contrast to the directional bias of social media, AI's responses are grounded in facts and data, which bolsters people's trust and positions large language models as a useful, neutral source of information.
Ironically, the distinctly human problems of echo chambers and polarization on social media might be resolved with the help of AI. But the fact that AI can be more persuasive than humans raises questions about its potential influence on public opinion. If that is the case, control over these large language models becomes a significant political issue.