Usable Knowledge

Developing AI Ethics in the Classroom

Harvard guidance and a new tool from the Center for Digital Thriving can help educators color in the gray areas

Posted July 9, 2025
By Ryan Nagelhout

The proliferation of artificial intelligence (AI) tools such as large language models and chatbots has considerably changed the academic landscape. Teachers have had to adapt on the fly, walking the line between harnessing these tools for educational gain in the classroom and preventing their potential misuse.

AI's rapid ascendance has left a lot of ethical gray areas in its wake for teachers and students to navigate. That is why a new tool called Graidients from the Center for Digital Thriving — a research and innovation lab housed at Project Zero — aims to provide a framework for making these ethically unclear areas of generative AI visible to educators in the classroom.

Gray Areas and Blue Books

While some classrooms may embrace AI fully, other educators are combating its proliferation and potential academic dishonesty with a return to oral or written exams. The Wall Street Journal recently reported that sales of "blue books" for written essays have risen dramatically, while schools and organizations have developed their own guidelines for using AI with academic integrity.

The Harvard Graduate School of Education, for example, "encourages responsible experimentation with generative AI tools," but warns there are "important considerations" regarding information security, data privacy, copyright issues, and the trustworthiness of content and its impact on academic integrity.
The guidance is part of an overall student policy the school released for the 2025–26 academic year, which notes that individual instructor rules may offer more specific guidance for students in certain contexts.

Finding Our Lines

Even with guidance from teachers and schools, gray areas remain when it comes to AI. Graidients aims to make the "lines" we are or are not willing to cross more visible by turning a conversation about acceptable uses of AI tools into a valuable learning experience.

The tool helps educators scaffold a conversation with students around how to use AI to support learning for a given classroom assignment. Teachers pick a specific assignment — the center cites a five-page paper about The Great Gatsby as an example — then ask students to brainstorm how they could use AI to help with that assignment.

Using a digital whiteboard tool like Miro, or even physical sticky notes, the class can work together to come up with a web of creative ways AI could help them generate ideas, answer questions about the topic, or even do the whole assignment for them. The center stresses that this type of brainstorming should be "a no judgment zone," where there are no bad ideas.

Once the brainstorm is over, though, students are asked to reflect on those ideas and sort them into five categories based on how they feel. A "totally fine" idea and one that "crosses the line" into feeling unethical or unfair sit at the far ends of the spectrum. In the middle are three gray-area categories — "mostly OK," "not really sure," and "feels sketchy" — which house ideas students aren't sure about ethically.

Taking a Step Back

Graidients aims to visually represent questions and thoughts about AI that students can sort out in real time.
Using Graidients, teachers can take a step back and do a "gallery walk" of each student's map to notice patterns and trends, and to ask questions about specific areas where students think using the technology is or isn't acceptable.

"Some of the things that you make visible are uses that are really supportive of learning. You will be able to talk about what is and isn't OK for a given assignment, and you'll be able to see that your line for AI usage may be different than others," says Beck Tench, senior researcher and designer at the Center for Digital Thriving. "As an educator in the room with their learning goals in mind, you can talk about those learning goals and use that as justification for what is and what isn't crossing the line."

Once students have mapped their lines and discussed how they feel, educators can set official expectations for acceptable use of AI on that project.

Making AI Thinking Visible

The center is still prototyping what's next for Graidients, but it suggests educators try the tool on different assignments and give feedback on their results. Since launching Graidients in January, Tench and the team have also released a voting application so students can sort through their ethical continuum in real time by voting via QR code.

The center hopes to develop more prototype tools that make AI thinking visible and help educators and students find positive uses for technology as it continues to evolve.