EdCast

The Impact of AI on Children's Development

AI designed with certain principles in mind can benefit children's growth and learning, says Assistant Professor Ying Xu, but AI literacy is essential

The explosion of artificial intelligence brings many benefits and challenges for children, especially in educational and social contexts.

“The big question becomes whether children can benefit from those AI interactions in a way that is similar to how they benefit from interacting with other people,” says Ying Xu, an assistant professor at the Harvard Graduate School of Education. “So if we talk about learning first, my research, along with that of many others, shows that children can actually learn effectively from AI, as long as the AI is designed with learning principles in mind.”

For instance, AI companions that ask questions during activities like reading can improve children's comprehension and vocabulary, Xu says. However, she emphasizes that while AI can simulate some educational interactions, it cannot fully replicate the deeper engagement and relationship-building that come from human interaction, particularly when it comes to follow-up questions or personalized conversations that are important for language and social development.

“There is the excitement that AI has the potential for personalized learning and to help students develop skills for this AI-driven society. But like many of you, I share the same concerns about the outlook of this, what we call the ‘AI generation,’” she says. “There are so many questions we don't have answers to yet: whether children can still find answers and learn on their own, and whether using ‘hey’ to command or activate AI makes kids forget about politeness. And perhaps the most worrisome to a lot of people is whether children would become more attached to AI than to the people around them.”

In this EdCast episode, Xu shares what we know so far about how AI impacts children’s development and the importance of AI literacy, where children are taught to understand the limitations and potential misinformation from AI, as well as the need for both developers and educators to promote critical evaluation of AI-generated content.

Transcript

JILL ANDERSON: I'm Jill Anderson. This is The Harvard EdCast. 
[MUSIC PLAYING]

Ying Xu knows artificial intelligence is reshaping children's lives. Whether it's asking Siri to solve a math problem or having Alexa play a favorite song, AI is everywhere and it's transforming the way kids learn, engage and even socialize. Part of Xu's work is studying how this new technology influences children's academic and social development.

She has a lot of insight about how AI can be used to support learning, and the challenges it creates for social behavior. But she also insists that educators and parents can guide children's AI interactions in ways that foster critical thinking and healthy boundaries. I wanted to better understand the risks and benefits of raising an AI savvy generation. First, I asked her how much we know about AI's impact on children's development.


YING XU: It is actually very important to first think about what drives children's development. It is obviously a very complicated process. But one key factor is children's social interactions with others-- typically with the people around them, like parents, teachers and peers.

But with the rise of AI, children now have a new kind of interaction-- one with AI agents like Siri, Alexa or ChatGPT. So the big question becomes whether children can benefit from those AI interactions in a way that is similar to how they benefit from interacting with other people. So if we talk about learning first, my research, along with that of many others, shows that children can actually learn effectively from AI, as long as the AI is designed with learning principles in mind.

For example, over the past few years I have developed and tested AI companions that engage children during activities like reading books, watching television and storytelling. To give you a more precise idea of how this works, just imagine a young child reading a picture book with their parents. And in my studies, a smart speaker steps in to simulate that role, reading the story aloud and pausing to ask questions. Questions like what problems the characters are facing, how the characters feel, and what the child thinks will happen next. And the AI listens to the child's response and offers small hints, just as a caregiver or a teacher would if the child needs help.

So what we've found is that children who engage in this type of interactive dialogue with AI comprehend the stories better and learn more vocabulary, compared to those who just listen to the stories passively. And in some cases, the learning gains from interacting with AI were even comparable to those from human interactions. I do have a caveat here. So even though studies show learning benefits from AI, it is important to remember that AI can't fully replicate the unique benefits of real conversations with other people.

JILL ANDERSON: Right.

YING XU: Just one example. In my studies, while children were quite talkative with AI, they were actually even more engaged when speaking with a human.

JILL ANDERSON: Yeah.

YING XU: And when children talk to a human, they're more likely to steer the conversation, ask follow-up questions and share their own thoughts. And those are all elements that are critical for language development. And these kinds of child-driven aspects of conversation are where AI still falls short.

So another very important aspect of research is to look at how children perceive AI, how they feel about their interactions with AI. Younger children might initially treat AI as human-like. But many of them also recognize that AI might look, or talk, or act like a person, but it actually lacks shared experiences and genuine empathy. And the important part is students thrive when they engage with someone they can relate to, and also someone who can relate to them. So since it is uncertain whether AI can form this kind of deep connection with children, we should be very cautious about using AI as a source of companionship.

JILL ANDERSON: Right.

YING XU: Because, after all, conversations aren't just about exchanging information. They're about building relationships, and those aspects are very crucial for children's development.

JILL ANDERSON: In your research, you're looking at relatively younger children, correct?

YING XU: Right. So I mostly focus on preschool all the way to upper elementary school kids.

JILL ANDERSON: It's interesting, because I think about AI as something that has really exploded over the past couple of years. And I suspect a lot of that has to do with ChatGPT coming out, and it's just getting huge in a relatively short amount of time. But the reality is, you mentioned some things like Siri and even Alexa, things like that. Kids have been interacting with those AI tools for a really long time. Are there some other common ways young kids are interacting with AI?

YING XU: Yeah, you're absolutely right that AI has been a part of children's lives long before ChatGPT became popular just a couple of years ago. So, in fact, AI is actually more prevalent than most children, or even us, realize.

JILL ANDERSON: Yeah.

YING XU: A great example is YouTube's auto recommendation system. Its AI algorithms suggest the next videos to play, based on a child's viewing history. So this is what I'd call a hidden, or unnoticed, way children interact with AI.

So there is another type of AI interaction which is children's direct engagement with AI agents, and this is currently at the center of the ongoing heated debate. So the most common reason children interact with AI agents is to ask questions and seek information. So there are a lot of different types of questions students ask AI. Some children ask curiosity-driven questions, like "what is inside a volcano?"

JILL ANDERSON: Right.

YING XU: And other children use AI for homework help, "how would you solve this math problem?" Or request practical information, like recipes or the weather today. I actually view this as an area where AI can have a positive impact on children, because it significantly broadens children's access to knowledge and information. However, a critical factor is that children must be able to engage critically with the information, and be aware of the potential for mis- or disinformation.

JILL ANDERSON: We'll get back to that engaging critically in a few minutes. But I think about this, and I have an elementary-aged child who loves all of these AI tools, thinks they're phenomenal and fun. And just as you mentioned, loves asking questions to these things. I had ChatGPT open on my computer one day, and they were just unbelievably fascinated by the idea that they could ask a question.

YING XU: Yeah, I can relate to that. So I have two kids, and they are actually heavy users of AI agents right now. So I've seen a lot of instances of them asking questions. And also how they feel amazed, but also puzzled, by AI's ability to generate a similarly intelligent response to their questions.

JILL ANDERSON: You already mentioned a little bit about how we know this can be somewhat beneficial for learning, because it's better than a child just doing something on their own. There is some interaction there, that it seems like there is some positive response. But it's not the same as a human, obviously. And so I'm wondering, what about the implications on social development?

YING XU: So there is a lot we could say about how AI might impact children's social development. And one area is social etiquette, saying "thank you," "excuse me," and things like that. Children learn social etiquette through interactions with others who model the socially appropriate behaviors. But AI does not always follow our social norms, or encourage the use of polite language. So we've observed instances where children give demands, or even insult AI.

[LAUGHTER]

And the concern is that these behaviors could carry over into children's real-life interactions with people. We actually don't have conclusive evidence for this yet, but there is some evidence suggesting that children can pick up linguistic habits from their conversations with AI and use this language subsequently when they interact with others. However, it's still uncertain whether children are doing this out of playfulness, because it's fun and silly, or whether it reflects a real change in behavior.

Regardless, it is enough of a concern that tech companies have started implementing strategies to prevent AI from diminishing children's politeness. For example, Amazon's Echo Dot has introduced a "polite mode," where if a child says "please" when asking a question, Alexa will respond with "thank you for asking so nicely." So it could be a step in the right direction. But it also poses a risk of obscuring, from the children's perspective, the boundaries between AI and humans.

JILL ANDERSON: That's really fascinating. Did you have any additional thoughts about the implications on academic learning?

YING XU: Yeah. So for academic learning, when children turn to AI like ChatGPT for help with homework and assignments, the key question is, are they actually engaging in the learning process or are they bypassing it by getting an easy answer from the AI? We actually have pretty robust evidence suggesting that access to AI tools can improve students' task performance. So, for example, when writing an essay, students who use ChatGPT as an assistant tend to produce higher quality essays. However, the question here is, can students still write better essays when they no longer have access to ChatGPT?

JILL ANDERSON: Huh.

YING XU: So this is a focus of many current studies. An emerging theme suggests that if AI is designed as a scaffolding tool to guide children in developing their writing skills, rather than simply providing direct answers, it could have a more lasting impact on learning. But there is still the question of how much scaffolding AI should provide. 
Sometimes it is not necessarily a bad thing for students to get stuck on a problem and work through it for an extended period. This is the kind of struggle we call "productive struggle." It's helpful for learning, as it creates space and time for students to construct their own understanding. So for future studies, what we really want to think about is how research on tools like ChatGPT should move beyond short-term performance improvements to ask whether these tools can foster deeper, long-term learning.

JILL ANDERSON: Hm. What do we know about children's ability to evaluate information they're getting from AI?

YING XU: This is a great question. I think even adults are having trouble evaluating AI-generated information. So children actually do encounter a vast amount of information every day, and it's not just an AI issue.

Since the rise of the internet and social media a couple of decades ago, children have increasingly faced challenges in evaluating credible information sources. But generative AI tools like ChatGPT actually make things a little more complicated. So if you think about when you search on Google, you get a wide range of sources with citations, right? But ChatGPT, or generative AI, combines and remixes everything for you. You actually can't tell where the information comes from.

So this lack of transparency can sometimes discourage children from questioning the credibility of the information. And additionally, if you think about how you engage with ChatGPT, it presents information in a conversational style, much like how humans communicate, which can blur the line between human knowledge and machine-generated content.

So how do children actually evaluate information from AI? There aren't actually a lot of studies on this yet. But we can gain some insights from research that has explored children's trust in robots-- particularly those that could also talk and answer questions.

So this research found that children use similar judgment strategies to assess information provided by both humans and robots. Often their judgment is based on whether the robot or human has given accurate information in the past, and their perception of the human or robot's expertise. But still, we see that some children are better at utilizing these strategies to calibrate their trust, and some children tend to trust blindly whatever information AI provides.

So this ability is likely influenced by their background knowledge in the subject area-- if they actually know something about the topic they're discussing-- and also their understanding of how AI works. We often call it "AI literacy." And research has shown that children even as young as preschool-aged can be taught AI literacy, which helps them more effectively assess the strengths and limitations of AI.

JILL ANDERSON: Do you know whether there is a lot of this AI literacy happening yet?

YING XU: So there are actually quite a number of programs specifically targeting school-aged children to teach them the underlying mechanisms of how AI works. But I'm not sure if many are commercially available already, because most of what I have seen exists in the research world, in research projects. But one thing that I'm very interested in doing is developing interventions that are embedded in the AI tools children are actually interacting with, as a way to provide ongoing support when they engage with AI information.

So, for example, we could provide prompts to encourage students to reflect on the credibility of the information before they proceed to share or trust it. This strategy has been found helpful in encouraging middle and high school students to critically evaluate information from social media, because that pause actually creates a space for students and young people to think about whether they should trust the information.

JILL ANDERSON: So it sounds like there's a lot of opportunities for educators to create AI literacy tools and incorporate these types of tools into their classrooms, but we just haven't really gotten there yet, at least to the masses, to do this.

YING XU: Yeah, I see that it should be a collective effort-- not just on the educators' part, but also from the developers of the AI tools. On the one hand, educators and schools could organize AI workshops and initiatives and programs that directly teach students knowledge about AI and strategies to help them evaluate AI information.

But on the other hand, developers and those who design AI tools should also incorporate those reflection prompts and warning messages in their AI products to signal, or specifically instruct, students that AI might distribute misinformation, and that it's very important to pause and think about whether a piece of information is trustworthy.

JILL ANDERSON: It's fascinating to think about how quickly there was some kind of response to this politeness factor. Like they started to build it into certain components of these tools. And so it does make you wonder if this will be kind of a newer path that we see AI go in-- for kids, at least-- taking these pauses to have them question and think critically about what they're accessing.

YING XU: Yeah, so this is interesting. So if you look closely at your ChatGPT interface, there is actually one very small warning sign at the bottom. "ChatGPT can make mistakes. Check important information."

JILL ANDERSON: I was on ChatGPT the other day and I noticed that.

YING XU: Yeah. So I would say that the research community and the developers are actually moving pretty quickly, responding to a lot of the challenges that arise from people's engagement with these generative AI tools.

JILL ANDERSON: People were very fearful or very excited about all of these new tools. How do you think parents and educators can help guide their children's interactions with AI?

YING XU: So I have definitely seen both enthusiasm and also skepticism when it comes to AI. So there is the excitement that AI has the potential for personalized learning and to help students develop skills for this AI-driven society. But like many of you, I share the same concerns about the outlook of this, what we call the "AI generation."

There are so many questions we don't have answers to yet: whether children can still find answers and learn on their own, and whether using "hey" to command or activate AI makes kids forget about politeness. And perhaps the most worrisome to a lot of people is whether children would become more attached to AI than to the people around them.

I think that both hopes and concerns are valid. But the question is, how should we approach AI amidst this uncertainty? My view is that we should embrace AI that is well designed and child-centered as a valuable tool to support children's development-- not as a replacement of human interaction, but as a complement to the interactions children already have with their families, teachers and peers.

Here are two suggestions for how we might be able to achieve that. First, I think we need to help children maintain healthy boundaries with AI by being transparent about AI's nature. It is essential that they understand they're interacting with a program, not a person. This clarity helps prevent confusion and also strengthens their ability to engage with AI more effectively. And the AI literacy programs we just talked about could be important for empowering children to engage with AI confidently and thoughtfully. 
And my second suggestion would be that AI should be designed to encourage human connections. In a recent study, our team developed an AI that allowed a Sesame character to engage with children while reading a book together. But what is different from most AI products is that this AI not only provides prompts to the children, but also provides prompts to get parents to stay involved in the discussion with their child.

And we have found that this not only supports the children's language development, but also strengthens family connections through the shared activities. I'm actually very optimistic that if we could adopt some of these strategies, we could embrace the new learning opportunities AI offers while preserving the irreplaceable human connections that are so crucial to children's development.

JILL ANDERSON: So much of this reminds me of the response to screens in kids' [INAUDIBLE]. It echoes a lot of similar ideas, that screens aren't necessarily bad. There was a movement for a while demonizing television, and then demonizing phones and things like that. But from what I understand, a lot of that is how do you get involved with kids to be a part of the games, or part of the shows? Or how do you ask them questions? And it just seems very similar with AI.

YING XU: Yeah, so we've had a lot of similar discussions and similar fears and enthusiasm in the past. And I think one important thing for us to consider, especially when we think about the impact these technologies have on children, is children exist in an ecosystem. So there are many sources of support that could help them learn and develop-- and technology is not the only source. 
And if you think about a technology's impact, it either helps or hinders children's development not just through children's direct engagement with the technology, but as mediated by a lot of the other factors surrounding them. So, for example, even the same television program will have different effects on different children. And sometimes it depends on whether the child actually has a family member sitting next to them--

JILL ANDERSON: Right.

YING XU: --and helping them digest and comprehend what is being taught on the program. So there are many different factors we need to think about as we approach this question more holistically.

JILL ANDERSON: Well, there's a lot here, a lot of opportunities, especially for educators, to come up with tools and curriculums and ways to embrace AI with kids. So thank you so much.

YING XU: Thank you.

JILL ANDERSON: Ying Xu is an Assistant Professor at the Harvard Graduate School of Education. I'm Jill Anderson. This is The Harvard EdCast, produced by the Harvard Graduate School of Education. Thanks for listening. 
