EdCast

Educating in a World of Artificial Intelligence

Chris Dede discusses how education can evolve to work with — rather than fight against — artificial intelligence

Senior Researcher Chris Dede isn't overly worried about growing concerns over generative artificial intelligence, like ChatGPT, in education. As a longtime researcher on emerging technologies, he has seen decades of new technologies that promised to upend the field. Instead, Dede says, artificial intelligence requires educators to get smarter about how they teach in order to truly take advantage of what AI has to offer. “The trick about AI is that to get it, we need to change what we're educating people for, because if you educate people for what AI does well, you're just preparing them to lose to AI. But if you educate them for what AI can't do, then you've got IA [Intelligence Augmentation],” he says. Dede, the associate director of research for the National AI Institute for Adult Learning and Online Education, says AI raises the bar and has the power to impact learning in powerful ways.

In this episode of the Harvard EdCast, Dede talks about how the field of education needs to evolve and get smarter in order to work with — not against — artificial intelligence.


TRANSCRIPT

Jill Anderson: I'm Jill Anderson. This is the Harvard EdCast. 

Chris Dede thinks we need to get smarter about using artificial intelligence in education. He has spent decades exploring emerging learning technologies as a Harvard researcher. The recent explosion of generative AI, like ChatGPT, has been met with mixed reactions in education. Some public school districts have banned it. Some colleges and universities have already tweaked their teaching and learning.

Generative AI raises questions for educators.


Chris Dede: I've actually been working with AI for more than half a century. Way back when I was a graduate student, I read the first article on AI in education, which was published in 1970. And the author confidently predicted that we wouldn't need teachers within five or six years because AI was going to do everything. And of course, we still see predictions like that today. 

But having lived through nine hype cycles for AI, I'm both impressed by how much it's advanced and wary about elaborate claims for it. And there is a lot of excitement now about what people are calling generative AI, which includes programs like ChatGPT. It includes things like DALL·E that are capable of creating images. It really includes AI on its own doing performances that we previously would have thought were something that people would have to do. 

But it's interesting to compare ChatGPT to a search engine. People don't remember this, but there was a time before search engines when people really struggled to find resources, and there was enormous excitement when search engines came out. And search engines are, in fact, AI. They are based on AI at the back end, coming up with lists of things that hopefully match what you typed in. In fact, the problem with a search engine becomes not trying to find anything, but trying to filter everything to decide what's really useful. 

So you can think of ChatGPT as the next step beyond a search engine where instead of getting a list of things and then you decide which might be useful and you examine them, you get an answer that says, this is what I think you want. And that is really more the AI taking charge than it is the AI saying, I can help you. Here's some things that you might look at and decide about. That makes me wary because AI is not at a stage where it really understands what it's saying. 

And so it will make up things when it doesn't know them, kind of like a not-very-good student seeing if they can fake out the teacher. And it will provide answers that are not customized to somebody's culture or somebody's reading level or somebody's other characteristics. So it's really quite limited. 

I know that Harvard has sent some wording out that I've now put into my syllabi about students being welcome to use whatever tools they want. But when they present something as their work, it has to be something that they wrote themselves. It can't be something that somebody else wrote, which is classic plagiarism. It can't be something that Chat AI wrote that they're presenting as their work and so on. I think that what Chat AI does is it raises the bar for human performance. 

I know a lot about what people are going through now in terms of job interviews because my older daughter is an HR manager, and my younger daughter just graduated. And she's having a lot of job interviews. And in contrast to earlier times, now, job interviews typically involve a performance. 

If you're going to be hired for a marketing position, they'll say bring in a marketing plan when we do our face-to-face interview on this, and we'll evaluate it. Or in her case, in mechanical engineering, they say when you come in, there's this system that you're going to have a chance to debug, and we'll see how well you do it. Those employers are going to type the same thing into Chat AI. And if someone comes in with something that isn't any better than Chat AI, they're not going to get hired because why hire somebody that can't outcompete a free resource? 

Jill Anderson: Oh interesting. 

Chris Dede: So it raises the bar for human performance in an interesting way. 

Jill Anderson: Your research looks at something called intelligence augmentation. I want to know what that means and how that's different from artificial intelligence. 

Chris Dede: Intelligence augmentation is really about the opposite of this sort of negative example I was describing where now you've got to outthink Chat AI if you want to get a job. It says, when is the whole more than the sum of the parts? When do a person and AI working together do things that neither one could do as well on their own? 

And often, people think, well, yeah, for a computer programmer there might be intelligence augmentation, because I know that machines can start to do programming. What they don't realize is that it applies to a wide range of jobs, including mine as a college professor. So I am the associate director for research in a national AI institute funded by the National Science Foundation on adult learning and online education. And one of the things the institute is building is AI assistants for college faculty. 

So there's question-answering assistants to help with student questions, and there's tutoring assistants and library assistants and laboratory assistants. There's even a social assistant that can help students in a large class meet other students who might be good learning partners. So now, as a professor, I'm potentially surrounded by all these assistants who are doing parts of my job, and I can be deskilled by that, which is a bad future. You sort of end up working for the assistant, where they say, well, here's a question I can't answer, so you have to do it. Or you can upskill, because the assistant is taking over routine parts of the job, and in turn you can focus much more deeply on personalization to individual students, on bringing in cultural dimensions and equity dimensions that AI does not understand and cannot possibly help with. The trick about AI is that to get it, we need to change what we're educating people for, because if you educate people for what AI does well, you're just preparing them to lose to AI. But if you educate them for what AI can't do, then you've got IA. 

Jill Anderson: So that's the goal here. We have to change the way that we're educating young people, even older people at this point. I mean, everybody needs to change the way that they're learning about these things and interacting with them. 

Chris Dede: They do. And we're hampered by our system of assessment because the assessments that we use, including Harvard with the GRE and the SAT and so on, those are what AI does well. AI can score really well on psychometric tests. So we're using the wrong measure, if you will. We need to use performance assessments to measure what people can do to get into places like Harvard or higher education in general because that's emphasizing the skills that are going to be really useful for them. 

Jill Anderson: You mentioned at the start artificial intelligence isn't really something brand new. This has been around for decades, but we're so slow to adapt and prepare and alter the way that we do things that once it reaches kind of the masses, we're already behind. 

Chris Dede: Well, we are. And the other part of it is that we keep putting old wine in new bottles. I mean, this is — if I had to write a headline for the entire history of educational technology, it would be old wine in new bottles. But we don't understand what the new bottle really means. 

So let me give you an example of something where I think generative AI could make a big difference, could be very powerful, but that I'm not seeing discussed in all the hype about generative AI. And that is evidence-based modeling for local decisions. So let's take climate change. 

One of the problems with climate change is that let's say that you're in Des Moines, Iowa, and you read about all this flooding in California. And you say to yourself, well, I'm not next to an ocean. I don't live in California. And I don't see why I should be that worried about this stuff. 

Now, no one has done a study, I assume, of flooding in Des Moines, Iowa, in 2050 based on mid-level projections about climate change. But with generative AI, we can estimate that now. 

Generative AI can reach out across topographic databases, meteorological databases, and other related databases to come up with: here are the parts of Des Moines that are going to go underwater in 2050, and here's how often this is going to happen if these models are correct. That really changes the dialogue about climate change, because now you're talking about, wait a minute. You mean that park I take my kids to is going to have a foot of water in it? So I think that kind of evidence-based modeling is not something that people are doing with generative AI right now, but it's perfectly feasible. And that's the new wine that we can put in the new bottle. 

Jill Anderson: That's really a great way to use that. I mean, and you could even use that in your classroom. Something that you said a long, long time ago was that — and this is paraphrasing — the idea that we often implement new technology, and we make this mistake of focusing on students first rather than teachers.
 
Chris Dede: In December, I gave a keynote at a conference called Empowering Learners for the Age of AI that has been held the last few years. And one of the things I talked about was the shift from teaching to learning. Both are important, but teaching is ultimately sort of pouring knowledge into the minds of learners. And learning is much more open-ended, and it's essential for the future, because every time you need to learn something new, you can't afford to go back and get another master's degree. You need to be able to do self-directed learning. 

And where AI can be helpful with this is that AI can be like an intellectual partner, even when you don't have a teacher that can help you learn in different ways. One of the things that I've been working on with a professor at the Harvard Business School is AI systems that can help you learn negotiation. 

Now, the AI can't be the person you're negotiating with. AI is not good at playing human beings — not yet and not for quite a long time, I think. But what AI can do is to create a situation where a human being can play three people at once. So here you are. You're learning how to negotiate a raise. 

You go into a virtual conference room. There's three virtual people who are three bosses. There's one simulation specialist behind all three, and you negotiate with them. And then at the end, the system gives you some advice on what you did well and not so well. 

And if you have a human mentor, that person gives you advice as well. Rhonda Bondie, who was a professor at HGSE until she moved to Hunter College, and I have published five articles on the work we did for HGSE's Reach Every Reader project on using this kind of digital puppeteering to help teachers practice equitable discussion leading. So again, here's something that people aren't talking about, where AI on the front end can create rich, evocative situations, and AI and machine learning on the back end can find really interesting patterns for improvement. 

Jill Anderson: You know, Chris, how hard is it to get there for educators? 

Chris Dede: I think, in part, that's what these national AI institutes are about. Our institute, which is really about adult learning with a workplace focus, is looking at that part of the spectrum. There's another institute whose focus is middle school and high school, developing AI partners for students where the student and the partner are learning together in a different kind of IA. There's a third institute that's looking at narrative and storytelling as a powerful form of education and how AI can help with narrative and storytelling. 

You can imagine sitting down. Mom and dad aren't around. You've got a storybook like Goldilocks and the Three Bears, and you've got something like Alexa that can listen to what you're reading and respond. 

And so you begin, and you say, Goldilocks went out of her house one day and went into the woods and got lost. And Alexa says, why do you think Goldilocks went into the woods? Was she a naughty girl? No. Or was she an adventurous girl, or was she deeply concerned about climate change and wanting to study ecosystems? 

I mean, I'm being playful about this, but I think the point is that AI doesn't understand any of the questions that it's asking. But it can ask the questions, and then the child can start to think deeper than just regurgitating the story. So there's all sorts of possibilities here that we just have to think of as new wine, instead of asking how AI can automate our older thinking about teaching and learning. 

Jill Anderson: I've been hearing a lot of concern about writing in particular -- writing papers where young people are actually expressing their own ideas, and concerns about plagiarism and cheating, which have long existed as challenges in education and aren't really new. Does AI really change this? And how might higher ed, or any educator, really look at this differently? 

Chris Dede: So I think where AI changes this is it helps us understand the kind of writing that we should be teaching versus the kind of writing that we are teaching. So I remember preparing my children for the SAT, and it used to have something called the essay section. And you had to write this very formal essay that was a certain number of paragraphs, and the topic sentences each had to do this and so on. 

Nobody writes those kinds of essays in the real world. They're just an academic exercise. And of course, AI now can do that beautifully. 

But any reporter will tell you that they could never use Chat AI to write their stories, because stories are what they write. They write narratives. If you just turn in a description, you'll be fired from your reporting job, because no one is interested in descriptions. They want a story. 

So give students a description and teach them to turn it into a story, or to turn it into something else that has a human and creative dimension to it. How would you write this for a seventh-grader who doesn't have much experience of the world? How would you write this for somebody in Russia? Building on the foundation of what AI gives you and taking it in directions that only people can: that's where writing should be going. 

And of course, good writing teachers will tell you, well, that's nothing new. I've been teaching my students how to write descriptive essays. The people who are most qualified to talk about the limits of AI are the ones who teach what the AI is supposedly doing. 

Jill Anderson: So do you have any helpful tips for educators regardless of what level they're working at on where to kind of begin embracing this technology? 

Chris Dede: What AI can do well is what's called reckoning, which is calculative prediction. And I've given some examples of that with flooding in Des Moines and other kinds of things. And what people do is practical wisdom, if you will, and it involves culture and ethics and what it's like to be embodied and to have the biological things that are part of human nature and so on. 

So when I look at what I'm teaching, I have to ask myself: how much of what I'm teaching is reckoning, where I'm preparing people to lose to AI? And how much of what I'm teaching is practical wisdom? 

So for example, we spend a lot of time in vocational technical education and standard academic education teaching people to factor. How do you factor these complex polynomials? 

There is no workplace anywhere in the world, even in the most primitive possible conditions, where anybody makes a living by factoring. It's an app. It's an app on a phone. Should you know a little bit about factoring so it's not magic? Sure. 

Should you become fluent in factoring? Absolutely not. It's on the wrong side of the equation. 
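Dede's point that factoring "is an app" is easy to demonstrate: even a few lines of code can handle the quadratics students drill on. Here is a minimal sketch in Python (a toy integer search written for this transcript, not a real computer-algebra app, which would use a full symbolic-math system):

```python
def factor_quadratic(b, c):
    """Factor x^2 + b*x + c into (x + p)(x + q) over the integers.

    Returns (p, q) with p*q == c and p + q == b, or None if no
    integer factorization exists.
    """
    if c == 0:
        return (0, b)  # x^2 + b*x = x * (x + b)
    # Search every integer divisor p of c and check the pair sum.
    for p in range(-abs(c), abs(c) + 1):
        if p != 0 and c % p == 0:
            q = c // p
            if p + q == b:
                return (p, q)
    return None  # irreducible over the integers

print(factor_quadratic(5, 6))    # x^2 + 5x + 6 = (x + 2)(x + 3) -> (2, 3)
print(factor_quadratic(-1, -6))  # x^2 - x - 6 = (x - 3)(x + 2) -> (-3, 2)
print(factor_quadratic(1, 1))    # x^2 + x + 1 is irreducible -> None
```

A phone app does the same thing at scale, which is exactly why fluency in hand-factoring sits on the "reckoning" side of Dede's ledger.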
So I think just teachers and curriculum developers and assessors and stakeholders in the outcomes of education need to ask themselves, what is being taught now, and which parts of it are shifting over? And how do we include enough about those parts that AI isn't magic? But how do we change the balance of our focus to be more on the practical wisdom side? 

Jill Anderson: So final thoughts here — don't be scared but figure out how to use this to your advantage? 

Chris Dede: Yeah, don't be scared. AI is not smart. It really isn't. People would be appalled if they knew how little AI understands what it's telling you, especially given how much people seem to be relying on it. But it is capable of taking over parts of what you do that are routine and predictable and, in turn, freeing up the creative and the innovative and the human parts that are really the rewarding parts of both work and life. 

EdCast: Chris Dede is a senior research fellow at the Harvard Graduate School of Education. He is also a co-principal investigator of the National AI Institute for Adult Learning and Online Education. I'm Jill Anderson. This is the Harvard EdCast, produced by the Harvard Graduate School of Education. Thanks for listening. 
[MUSIC PLAYING] 
