EdCast
Teaching Students to Think Critically About AI
Educators Stephanie Smith Budhai and Marie Heath highlight AI's human biases and urge its critical, intentional, and equitable use in classrooms
Posted October 8, 2025 | By Jill Anderson

When educators talk about artificial intelligence, the conversation often begins with excitement about its potential. But for Stephanie Smith Budhai and Marie Heath, that excitement must be matched with caution, context, and critical awareness.

"AI is a piece of technology. It's not human, but it's also not a neutral thing either," says Budhai, an associate professor in the educational technology program at the University of Delaware. "We have to be intentional and purposeful about how we use technology. So, thinking about why we're using it. So why was the technology created?"

Budhai and Heath, an associate professor of learning, design, and technology at Loyola University Maryland, are the authors of "Critical AI in K–12 Classrooms: A Practical Guide for Cultivating Justice and Joy" from Harvard Education Press. Their research explores how bias is built into artificial intelligence and how these biases can harm students if left unexamined. While bias in technology isn't new — it's been present in tools as old as the camera — both scholars argue that educators and students must learn to approach AI critically, just as they evaluate sources and evidence in other forms of learning.

"What does it mean when we ask children … to partner with or think with a machine that is based in the past, with historical data full of our historical mistakes and also doesn't really explore? It's not looking at the world with wonder. It's looking in this very focused way for the next answer that it can give, the most likely possibility," Heath says. "And I think as learners, that's actually not how we want kids to learn. We want them to explore, to make mistakes, to wrestle with ideas, to come up with divergent creative thinking."

Both Budhai and Heath believe that using AI responsibly in education means grounding teaching in equity and critical engagement. Budhai points to projects like Story AI, which helps young students tell their own cultural stories while revealing bias in generative image tools. Heath's Civics of Technology project encourages "technology audits," helping teachers and students uncover the trade-offs and values embedded in everyday tools.

In this episode of the Harvard EdCast, we explore how to use AI critically in classrooms, and the responsibility of educators to cultivate AI literacy, develop thoughtful policies, and consider broader implications such as environmental impact, equity, and student privacy.

Transcript

JILL ANDERSON: I'm Jill Anderson. This is the Harvard EdCast.

[MUSIC PLAYING]

The AI in students' classrooms may be shaping their learning and their worldviews without anyone noticing. Stephanie Smith Budhai and Marie Heath say the real question isn't what AI can do, but what biases and values are built into it.

Stephanie is an associate professor at the University of Delaware, and Marie is an associate professor at Loyola University Maryland. They are teaching educators to become critical consumers of AI by examining who built it, how it works, and the ethical questions it raises. The stakes are high. They say it's about equity, justice, and creating classrooms where all students can thrive. I asked Stephanie, how can educators begin to spot bias in AI?
STEPHANIE SMITH BUDHAI: AI is a piece of technology. It's not human, but it's also not a neutral thing either. This is something that I've spoken with my students about for over a decade now. When we're using technology, we're not just picking it up. And we shouldn't say, I have to use this technology because it just came out, and the school paid for it. No. We have to be intentional and purposeful about how we use technology. So, thinking about why we're using it. So why was the technology created? That's the whole neutral piece. There's a purpose to it. And the AI isn't picking up skin tones now. We've talked in our research about the camera and how it didn't pick up darker skin tones, and that's still happening now in many ways, in ways that people don't often think about.

A big part of it is looking at technology for what it is, looking at AI for what it is. It's a machine, and it's constantly computing. It's using data. And that's the whole algorithm. And it's looking for patterns so that it could give you the right answer. I have quotation marks up, if you can't see. The "right answer." But the answers that it gives you, the outputs, are biased. Because if it's computing from information from humans who are flawed, people who are biased, it's a reflection of our society.

So, we have to always think about what the technology is, who created it, what was the purpose of it. Where's the information coming from? And how correct is it? We want students to be critical consumers of information. We want them to use sources when they cite their papers, when they're looking at things. So, we want them to be critical consumers of AI, too. We want them to look at it and acknowledge the biases and realize that it might not actually be right the way that we're using it right now. So, Marie, do you have anything to add to that?

MARIE HEATH: Yeah, thanks, Stephanie. And thanks, Jill, for this question. A lot of the focus of my work is on the non-neutrality of technology. And I often push back against that notion that it's just a tool. Again, I'm using the air quotes, too.

You were asking about how seeing bias exist in the design of an older technology, like a camera, can help us identify bias in technologies that exist today. For me, that's kind of twofold. One is, as Stephanie said, technology is created by humans, and we exist with our own individual as well as social constructs. We exist in the times we live in, and we breathe in that air. And so, when we design and create things, Joy Buolamwini says, the algorithms and the data that feed facial recognition software favor what she calls the pale males. The data set represents the folks who tend to dominate computing, which tend to be overwhelmingly White and overwhelmingly male.

So, the data set itself and the designers themselves privilege a particular point of view. So that's one thing to think about. And then the other thing Stephanie also noted was that technologies have a purpose when they're designed. So, this is the McLuhan/Postman influence, that the medium is the message, or, what is it? To a hammer, everything looks like a nail.

So, every technology has some embedded bias or logic built into it. So, a pencil's logic is that you can write with it. You can also make mistakes and erase with it. So, there is intention in it. It's clearly a tool meant for writing. AI technology, generative AI technology, is a tool that has particular values built into itself. Values for efficiency.
It wants to give us an answer, and it wants to give us an answer that is an average of all of the possible answers. It's a probabilistic machine. So, it gives us the most likely answer of all of the possible answers that have ever existed before. So, it also has an encoded bias towards the past. So, it's, again, scare quotes, "imagination," because it's a machine. It doesn't have imagination.

But when you ask it, what do you think I should do for x, y, z? It takes all of its data about what has been done in the past on x, y, z and averages it together and gives you the most likely solution. And so, for educators, to bring this back to education, I wonder, what does it mean when we ask children, and Stephanie said this earlier, to partner with or think with a machine that is based in the past, with historical data full of our historical mistakes and also doesn't really explore? It's not looking at the world with wonder. It's looking in this very focused way for the next answer that it can give, the most likely possibility. And I think as learners, that's actually not how we want kids to learn. We want them to explore, to make mistakes, to wrestle with ideas, to come up with divergent creative thinking. And so, I know that you asked about the camera, but I think it points to two things. One, the bias built in from society. And two, the bias built into the tool itself, because it carries within it its own logic.

JILL ANDERSON: Do you think that most educators, or at least educators who you encounter, have this awareness?

MARIE HEATH: Whenever I do professional development around this work with teachers, they are always shocked that no one has talked about this. Often, as educators, and I do it too, we think, how could I use this in my classroom? This could be wonderful for x, y, z. I could help kids with readability. I could read out loud. I can help students generate ideas or edit their writing, or all of these possibilities. And I think we do that because we are excited about education. But rarely, and in fact, we're not really ever taught this in school anywhere, do we pause and think, how has technology disrupted my life? What are the downsides of technology? We can really easily see the possibilities and uses. And I think it's wonderful that as educators, we do that often. We think about how this will be helpful. But there are, in particular with GenAI, so many ethical considerations to think about. Besides the internal built-in bias, there are environmental issues and justice issues. There are issues of socioeconomic justice. There are labor challenges.

There are a lot of things to think about. Issues of the truth and what is the truth. And fracturing the very foundation of what we all believe is a common reality. So, all of those things that Stephanie and I do in our work and think about, I often find that, just generally, folks who aren't doing that work don't think about. And when we talk about it, they are curious and want to know more. For me, the goal is to make AI a point of inquiry so that we can invite it into our lives the way we want it to be, and into our educational spaces the way that we think would be useful and positive. And not have us be told how we're going to use it and what we're going to do with it.

JILL ANDERSON: When it comes to the day to day, what strategies do you think help teachers use AI without reinforcing inequities?

STEPHANIE SMITH BUDHAI: I think relying on time-tested pedagogies.
I've always taught my students, and when I say my students, they're either pre-service and in-service teachers, coaches, or principals, to teach from a culturally responsive or culturally sustaining framework when you're looking at teaching. If you're teaching that way, then everything that you're doing, you're making sure that it fits with it. You're asking the right questions to make sure that every student is being taught in a way that uplifts their individual person, that doesn't exacerbate inequalities that are just inherent to our educational system. So, some teachers might not even realize. I mean, that's a part of learning about culturally sustaining pedagogies: learning that our education system is just inherently filled with systemic racism, just in the structure of it. It privileges certain students over others.

But beyond teaching from that type of framework, because AI is so different, it can be so harmful, but it also can be transformative in different ways, we also look at other types of pedagogies. I know with the work that Marie and I have done, we've looked at abolitionist pedagogies and fugitive pedagogies as well. So, I think in the day to day, if you're teaching using those pedagogical frameworks, it's an easier way to lean into looking at AI from this critical lens and making decisions day to day.

There's a lot of researchers out there who are doing work in this area and providing examples for educators to do this. We've connected with researchers at UC Irvine, and they actually have something called Story AI. The Story AI platform was built from a culturally responsive pedagogical framework. In short, the AI helps young students tell their own cultural stories. But at the same time, they use this technology to learn about AI. So, for example, one of the students asked the AI to give them an image of a girl with brown hair and brown eyes because, I guess, that was reflective of them. And the AI gave them paler-looking girls with brown hair and brown eyes, but the student was not pale. So, they realized that you had to actually ask the AI for a specific ethnicity or specific skin tone in order for it to generate this image. Because, as Marie and I have already talked about, the AI is biased. It's pulling information. And unfortunately, the default setting is White. So, it just assumes and gives you that sort of thing.

So, I think using tools like Story AI, using these culturally responsive, sustaining pedagogies and teaching about AI, teachers and students need to learn about AI. They need to learn what it does. And you can do that while you're also teaching. So, they're using Story AI. They're learning writing skills. They're writing their own stories, but they're also asking the AI questions and then learning, that's not the answer that I wanted. Or, why are you giving me that? That's not what I wanted. They're learning how the outputs sort of happen.

So, I think that that's one way, in the day to day, that you could use AI in ways that do not reinforce these inequalities. Asking questions. And Marie actually can speak more to this because she was on the writing team. The Kapor Center has a racial justice toolkit with interrogation questions that are amazing, and resources for teachers that they can actually use to help themselves and to help their students learn about AI within everyday lessons.
So, they're also tapping into other areas of learning.

JILL ANDERSON: That was a really great example of how you can shift in the moment when you're doing this work with students, and it's just really interesting. Because it's almost like we're asking educators to be critical, to live in the moment. And you never know what's going to come back when you're working with AI, and it continually shifts depending on what you're asking it to do and so on. So that's really interesting. How can educators be critical and evaluate the tools that they're using?

MARIE HEATH: I think that is a great question. And the focus of the work that I do is how do we help young people, as well as educators, evaluate the technology that is in their lives. I have done work with some folks through the Civics of Technology project, which is a group that I'm part of and help run, which explores those questions. And so, we have a series of different audits, we call them technology audits, where you ask questions. So, you ask things like, what do I give up to use this? Who are the winners and who are the losers of this particular technology? What are the biases within it?

And so, my dream wish is that across disciplines and across grades, just like we teach media literacy across disciplines and grades, we teach thinking about the impact of technology on our lives. So that when something comes up in our class, whether it's around generative AI or social media, or whatever the next technology is that we can't even predict, we have a toolkit of ways to critically interrogate it. And so, for me, it's teaching both educators and students to attend to the effects of technology in our lives in ways that make sense in a social studies classroom, in ways that make sense in a kindergarten classroom.

And Stephanie and I have both done work on sharing different lessons, and I'm using the word lesson to mean lesson plan materials, for classes across grade bands and disciplines to develop those skills. So, it's less about the AI specifically and more about developing lifelong habits of mind to inquire into those effects and make intentional choices around them.

STEPHANIE SMITH BUDHAI: I think also Marie and I, as well as our colleague, Daniel Krutka, did some work on helping teachers to cultivate AI criticality with students through basically resisting AI, refusing AI, but then also reclaiming it. And basically, they can use that framework. Take a deep dive. They don't have to use it. Or they shouldn't have to. It should be their choice. They should be able to say, well, let me think about it. Let me learn about it. Let me see if it's doing what it's supposed to be doing, and let students have a choice. But also let them choose how they're going to use it, how their information is going to be used, how they're going to connect with this technology. But that's another way, too, that really aligns with the technoskepticism audit that Marie's work has really just uplifted.

But yeah, I think a lot of times technology in general, but definitely AI, is being forced on students. And we want them to know that they do have some agency in deciding, because it is their information.

Unless they're using a walled garden, where not only is the information just from the school and what they want, but the model isn't trained on their data. So, some AI tools specifically say, we do not train on your data. But most do not say that. Most are training on it.
So, if you're having students use it, then you're allowing these technology companies to essentially make money off the students' data. So, they should have the right to say, no, I don't want to give that information and data.

MARIE HEATH: And not to be overly dramatic, but if you align yourself with things like what John Dewey said, that schools are a site of democracy, then we have an obligation to model for our younger citizens in our schools ways that you can engage with technologies that uphold and uplift and encourage the continuation of democracy, which includes things like being able to refuse something that is surveilling you, being able to make informed choices, and having folks listen to you about it.

And also, I love Stephanie's note of reclaiming it. I really like the analogy to the Green Book, which is, if you think about the technologies of interstate highways and the ways that cars in the 1950s transformed how we traveled, it wasn't necessarily safe for Black people to travel on those roads in the ways that it was for White people.

And so, in reclaiming that technology, they created the Green Book, which allowed people to share which places are safer, which places are more dangerous. And so, what might that look like now? For technology that is actively harming Black and Brown people, that's being used against them in spaces that are really anti-democratic, what might it look like to reclaim that? And we see very cool examples of folks, like protesters who might use masks or laser pens to disrupt facial recognition technology. So, what are ways that you play with it to remake or reimagine how that technology is used for uplifting justice instead of destroying it?

JILL ANDERSON: We're talking a lot about teachers, and too often so much falls on their shoulders. What about school leaders? What is the role that they play in making sure AI is used in an equity-centered way?

STEPHANIE SMITH BUDHAI: School leaders have a tough job. But you know the saying, heavy is the head that wears the crown. When you're at the top, there are decisions you have to make. But I think school leaders must lead. They have to create policy. They have to create policy because their teachers, their staff, they're going to look to the policy.

And I always tell my students who are the ones in the teaching roles, make sure it aligns with your district and school policy. So even more than teachers, school leaders must be aware of all these things. They're the ones who are signing off on multi-million-dollar contracts for different software licensing. Every tech person is trying to get into big and small school districts to get in front of students because they see students as money. The more students they have, the more licenses they can sell and the more money they can make.

So, leaders have to be extremely cognizant and diligent about AI. And not only that, many existing tools, such as Kahoot! and Canva, that we have been using without AI now have AI in them. So sometimes you don't even know that you're using AI, depending on how your settings are. If you go to Google for a search, depending on how your settings are, Gemini will just be there to give you answers, or it will say, do you want to search with AI? It really depends on how it's set up.

So many tools that did not have AI capabilities before now have AI in them. And then there are tools being created that are just strictly AI tools. And they're trying to get in front of leaders to sell.
So, I think school leaders just have to think about how they want their teachers and students and staff to engage with AI. But first, they need to know these things too. So, leaders have to have this AI literacy that we're talking about.

We hope that all school leaders have an equity-centered framing in the way that they're approaching their leadership, because they are in charge of so many different types of students, so many different types of communities. I think school leaders must find out how the community is being affected right now by AI. There was a town hall recently in Delaware, where my university is located, because they're trying to bring a data center to Delaware. And this summer I taught a class called societal implications of AI. And one of the units was on the environment. A lot of people don't realize how much energy it takes to run these computers, to run these algorithms. And then the data centers, they get really hot. The machines get really hot because people are constantly running things, running things, running things. So, then you need water. You need so much water to cool the machines down. There's an AI faculty footprint that you can do, and you can calculate your own personal footprint, how much water you're using, how much energy you're using just in teaching and creating lessons and searches and videos and things like that. So, I think principals need to be aware of that.

So, if we're going to sign on for this AI tool, what are we using it for? Do we really actually need to use it? What were we doing before the tool existed? We have to be so intentional about when we're using AI. Marie and I, we're not saying don't use AI, because we've been using AI. We talked about Siri and Alexa earlier. AI is going to be here. But we want to use it in ways that are responsible, in ways that don't harm people. And use it because it actually does something to transform teaching and learning and assessment. Because if not, why are we using it?

I often think about the overreliance on ChatGPT and similar systems. And the research is just starting. But what is that doing to our brains? Because if we're not critically thinking, then what is happening if we're just asking a question and getting an answer? We have to use our minds. That's how we stay sharp. What's happening with brain atrophy? There are some new studies right now looking at the connection between AI and Alzheimer's. Principals need to think about all of this. Like, if I'm signing on to use these AI tools, how will this impact my students' and my teachers' ability to be creative, to think critically? How is this impacting the communities that our school is in if we're getting more data centers? They need to create policies with teachers, with students, with family members, with community members, using real data to guide teachers and students. Because teachers are looking for their guidance. So, deciding on how AI will be used in schools is probably the most important decision that leaders today will have to make.

JILL ANDERSON: Well, technology, as we all know, advances quickly while education is known for moving slowly. How do you think about carrying this work forward when AI is evolving so fast?

MARIE HEATH: I like this question, and I also have just a gentle push, from my perspective, on the words advance and evolve. When we say things like technology advances or it evolves, there's this underlying assumption that that's a positive trajectory. I totally agree that it changes really quickly.
I don't know if the direction of that change has a positive or negative to it. I agree that education changes more slowly. Certainly. And I think that, instead of thinking of it as advancing, because then it feels like there is that pressure. If technology is advancing and schools aren't "advancing," quote unquote, with it, then we're falling behind. That narrative has been used for hundreds of years to sell technology to schools, and it has not really done anything except privatize education. It hasn't necessarily transformed education. I would probably use the word change. And so, if technology is changing quickly and education is known for not changing quickly, what do we do? I return to my point — I'd rather us teach about the impacts of technology than try to keep up with the rapid changes of technology.

And I haven't really seen, through my work, an advantage to trying to keep up as a way to change education. We have pretty clear research on what will improve education, which includes things like more funding, more teachers, more time, more equitable practices for students, community schools.

None of those things, by the way, have the word AI in them. They all take a level of political will and commitment that is not necessarily related to technology. So, for me, the way to address the changes of technology is not to try to always include them, but rather to teach about the ways technology intersects with society so that we can make powerful decisions about that. And then if there are things we want to do in education, I think we should work on finding the political will for that, instead of hoping that the next technological invention will be the silver bullet that will suddenly fix our social and educational problems.

STEPHANIE SMITH BUDHAI: I go back to where I started in special education. And when we had to figure out what assistive technology could support a student, maybe they had an augmentative and alternative communication device that they needed to use. Whatever it was, we used something called the SETT model, S-E-T-T, for Students, Environment, Tasks, and Tools.

You start with the student first, not with the technology. So, who is the student? What are their strengths? What are some areas where they need additional supports? And then environment. What environment will they be in? Because years ago, we used to use an AAC device like the DynaVox, and you had to plug it in. That's before they had iPads, where you could just walk around and have the app. So, we had to say, what environment are they going to be learning in? Because is there an outlet? Is it outside? How can we make sure that the technology is going to power on, or whatever the tool is?

And then after the student and the environment, the task. What are the students doing? What do they have to accomplish? What are the academic tasks or the life skills? What are the tasks that the students will need to do? And then finally, the tool. What tool could actually help them do that? But the whole point is that the technology is always last. So how do we address technology changing? We just let it go by, like every other technology does. We don't chase it. We stay focused on the student and their learning.

And then, whatever technology is available, we can see, will this actually help, or will this stifle their critical thinking and their creativity? How can we use it to help this particular student in this particular environment complete this task to advance their learning?

So, the tool is always last. That's what I always teach.
And when you do use it, it has to be purposeful. It has to be for a reason. It can't just be, oh, we're just going to use it because we have signed this three-year license for this tool.

JILL ANDERSON: No, I love it. And I think that is a fantastic way for educators to think. If they're not already thinking in that way, maybe now they will be. So, I wanted to just thank both of you for coming on and talking about such a timely and relevant topic on the minds of lots of folks today.

STEPHANIE SMITH BUDHAI: Thank you so much for having us.

MARIE HEATH: Thank you for having us. It was really lovely to be able to think with both of you.

JILL ANDERSON: Stephanie Smith Budhai is an associate professor in the educational technology program at the University of Delaware. Marie Heath is an associate professor of learning, design, and technology at Loyola University Maryland. They are the authors of "Critical AI in K–12 Classrooms: A Practical Guide for Cultivating Justice and Joy." I'm Jill Anderson. This is the Harvard EdCast, produced by the Harvard Graduate School of Education. Thanks for listening.