Fighting Misinformation with Artificial Intelligence — and Maybe Cash

A new study explores whether money, AI, or a mix of both can help people identify misinformation.

Posted October 31, 2025, by Ryan Nagelhout

A study conducted in part by the Harvard Graduate School of Education is shining new light on how people can be incentivized to spot misinformation online.

The spread of misinformation — intentionally false or inaccurate information — is a growing problem, especially as social media networks value speed and virality over accuracy. But a new study from doctoral student Max Lu and Associate Professor Bertrand Schneider found that offering people incentives to spot misinformation isn’t as straightforward as you might expect.

Schneider, who runs the LIT Lab at HGSE, has studied how artificial intelligence can aid in certain learning environments. Lu, meanwhile, worked in news media at Bloomberg Media and the Los Angeles Times before coming to study at HGSE. Three Harvard researchers — Schneider, Lu, and Wendi Jiang (now at Yale) — worked with researchers from the University of Missouri and the University of Münster to design a study that aimed to “promote more accurate knowledge-building” by offering incentives (money) and informational support (AI) to people as they read news headlines and judged their accuracy.

The study, “Can Money Buy Better Judgment?”, investigates why certain people are susceptible to misinformation and how they might be motivated to improve their ability to detect it in their news consumption.

Motivation and Information

Previous misinformation research focused on two major factors in its spread: a person’s motivation to care whether something is accurate, and whether they were informed well enough to judge its accuracy for themselves. Education and media literacy can play a huge role in a person’s ability to detect misinformation, and the new HGSE study tested how a person’s information and motivation interact.

Schneider and Lu studied 300 participants who were given varying levels of money and AI assistance. Some participants received $1 for taking part, while others were more highly incentivized with a chance to win $250. Participants were asked to read headlines and determine which were real and which were misinformation. Some participants were also given an AI-generated “credibility analysis” to help them determine whether the headlines they were reading were accurate or misleading.

The AI-generated credibility analysis was an attempt to close the “information gaps” that past research has indicated make it difficult for some people to detect misinformation. In other words, it was intended to “supplement” a person’s judgment.
The money, meanwhile, was meant to provide “motivation” to participants, as some studies had found monetary incentives can “enhance analytical reasoning and misinformation discernment.” Those two factors — motivational lapses and information gaps — play a huge part in identifying misinformation, and Lu and Schneider’s study focused on how they interact.

Better Together

Among the major findings, the research showed that offering only motivation (money) could actually make it harder for participants to detect misinformation.

“When individuals were motivated solely by high potential monetary rewards, their ability to detect misinformation actually declined. The incentive acted as a distraction,” say Lu and Schneider. “However, when monetary incentives were combined with AI-generated analysis, participants’ performance improved, indicating that the right combination of motivation and support can enhance discernment.”

The researchers said they were “genuinely surprised that financial incentives backfired.” Participants offered the “high” incentive of a chance to win $250 were just as accurate at spotting false headlines as the control group given no incentives, but they were actually worse at recognizing which headlines were real.

“In other words, increased motivation made them more skeptical, but not more discerning,” say Lu and Schneider. “Their performance, overall, actually dropped.”

Participants given AI access but no monetary incentive also did not see noticeable improvements over the control group. When participants were given both monetary incentives and AI analysis, though, they performed above the baseline.

“Our findings suggest that while both motivation and information play important roles, they should not be examined in isolation,” say Lu and Schneider.

Interestingly, the study also highlighted problems with overall AI literacy and showed that how information is presented to people matters.

The researchers noted that 1 in 4 study participants who had access to the AI tool “never clicked on it,” and nearly half of the responses they collected were made without consulting the AI at all. The AI tool worked, but many people never bothered to try it out.

“When people did use AI, aligning with its suggestion increased the likelihood of a correct answer dramatically,” say Lu and Schneider. “Yet, participants who avoided AI often cited reasons like wanting to rely on their own judgment or distrusting the AI’s analysis.”

The AI tool, the researchers note, was capable of helping participants determine what was accurate and what was misinformation, but only if the participant trusted the AI and the analysis it provided.

According to the researchers, “Improving AI’s capability is just part of the solution. We also need to build users’ trust and literacy around using it. That’s where educators play a vital role: helping people develop the critical thinking and digital literacy skills needed to engage with AI thoughtfully and effectively.”

The researchers are planning the next phase of the study, which will involve in-depth interviews with subjects about how they interacted with — or ignored — the AI tool used to analyze misinformation.

“We want to understand the broader sensemaking processes,” they say.
“How people decide what to trust, what sources they rely on, and how they cope with information overload in everyday life.”