
Harvard EdCast: Applying Education Research to Practice

Senior Lecturer Carrie Conaway on how to effectively bridge the gap between education research and practice.

Senior Lecturer Carrie Conaway, an expert on making use of data and research to improve education, knows education research can truly be useful for education leaders — some leaders just may need to be enlightened as to how.

One way to prevent the research from being disconnected from practice, says Conaway, is keeping educators from getting bogged down in statistical details, and instead helping them use their own common sense and experience to apply the research. “Part of our professional obligation as educators is to learn from our work," she says. "If we're not incorporating learning from what we're doing as we're going, we're not doing ourselves a service and we're not using our own common sense. We're just sort of blindly moving around and trying to hit a target and we're not actually being intentional and strategic.”

In this episode of the Harvard EdCast, Conaway discusses how school leaders can make education research work for them, as well as implement their own evidence-based research.

TAKEAWAYS:

  • Find new sources for education research beyond your usual “go-tos.” Make sure your mix includes sources grounded in research as well as sources grounded in practice.
  • Conduct the research yourself as part of a school improvement plan.
  • Ask deeper questions, like: What am I trying to see? What do I want to learn from my practice, not just about the impact, but how did I accomplish it? What did we do, and how could we improve that work over time?

TRANSCRIPT:

Jill Anderson: I'm Jill Anderson. This is the Harvard EdCast. There's a lot of education data out there, but it's not always easy for school leaders to use it. Harvard's Carrie Conaway has spent her career figuring out how to take research and apply it to education in ways that improve outcomes and make a difference. She says part of the problem is educators and researchers don't connect enough to ask the questions that really matter.

Carrie Conaway: Well, I think right now too frequently the problem is that the research we have, the evidence we have, is designed for researchers and not for practitioners. So it's answering questions that are of general interest, or that build generalizable knowledge, or that are easy to answer, especially when you're talking about causal evidence, meaning evidence that a particular program caused an impact on something else. It's a lot easier to answer a question about impact when people are admitted by lottery, so it's random assignment or something like that.

And there's lots and lots and lots of questions, real practical questions practitioners have that don't lend themselves to that type of analysis. And so if that's all the evidence that practitioners are seeing, it's not answering the questions they actually have about their own practice.

Jill Anderson: And so in the end, it's very difficult for practitioners to read a study, take something from it, and actually implement it in their schools, right?

Carrie Conaway: Yeah, it can be, if that's the type of evidence you're looking for. It can be missing a lot of the key information that a practitioner would immediately ask for. Too many studies are just, "Here's the impact of this program." And what a practitioner wants to know is, "Well, that's great. How much did it cost? How many teachers did you need to train? What sort of context are you in, and what enabled you to be successful with this program? Do I have those same conditions in place in my district?"

There's too little of that information about context and implementation that is actually what practitioners need to know if they're going to actually implement that idea.

Jill Anderson: What could education leaders do in order to get more from the research that exists and also to sort of create their own evidence-based research?

Carrie Conaway: To get more from the research that already exists, I think one piece of it is just getting a little more research in your reading diet. I think everyone as a professional has their go-to sources for where they're getting ideas from and professional knowledge from. So getting a few more places that are more based in research in there as well as the ones based in practice, I think is one piece that's a fairly easy shift for people to make.

But I actually think there is no better way to get people to use research than to do it on yourself. Because then you're automatically engaged in the answer to the question. That's a real problem you're trying to solve and you're trying to answer yourself. So I think a big piece of what could really help drive greater research use is more districts and more states taking on doing research as part of their improvement strategy, which is really what my role was at the State Department of Ed.

My job was to help us figure out how to do our work better. And I never had to worry about my colleagues in the program offices reading the work we were doing, because they were engaged in it to begin with. They helped design the questions. They helped us interpret the answers. They helped frame the agenda that we were working from. So that solves a lot of the problem.

Jill Anderson: I mean, how hard is it to implement that on the ground in a school where you might have limited resources and may not have someone specifically working on research in your district?

Carrie Conaway: I think everyone can be a learner. It's a funny conversation to me, because in education we care about learning — that is literally our profession and our job — and research is just a structured way of learning. So this should be a pretty easy sell, and we should all be able to find ways to do this work. Whether you're a classroom teacher doing this as part of your own self-reflection and professional growth, you're reflecting on: What did I try with this particular group of students? And importantly, who can I compare them to that might be a good comparison group for what would have happened had I not done that intervention?

That's something I think we tend to forget about. It's easy to just do a pre-post: the kids improved. But what if everybody else improved at the same rate, so they didn't really gain anything extra? So getting better at asking those kinds of questions without having to be super fancy or statistics-oriented about it. Just a little bit more depth: Who can I compare this to, relative to what I'm trying to see? What do I want to learn from my practice, not just about the impact, but how did I accomplish that? What did we do, and how could we improve that work over time?

Jill Anderson: Do you think that part of the challenge is just you get a little caught up in this idea of numbers and statistics and having that knowledge in order to execute research?

Carrie Conaway: Yeah, I do think people get bogged down in that and they think they need to be more formal and more fancy than they actually need to be. I mean, really I think we could get pretty far in education with simply asking ourselves how much improvement did I see in the group I was working with and how much improvement did I see in some other group of students that is roughly similar to them? And we don't need to get fancier than that. It'd be great if we did randomized control trials for everything, but a lot of things don't lend themselves to that.

And we can't just be like, "Oh well, we're not going to learn from that." I would have dropped out 90% of the questions that my colleagues had at the agency if I limited myself only to things where I could get a really strong estimate of the impact. Some information is better than none. And seeing some improvement relative to another group is a good place to start for a lot of educators.

Jill Anderson: You reference a term called common-sense evidence. What do you really mean when you say common-sense evidence?

Carrie Conaway: Yeah. I think it has a few dimensions. One is this sort of, don't let the perfect be the enemy of the good. Learning something from your work is important. Don't get bogged down in the technical details of the statistics, and do your best. But I also think part of our professional obligation as educators is to learn from our work. And that dimension to me is common sense as well.

That if we're not incorporating learning from what we're doing as we're going, we're not doing ourselves a service and we're not using our own common sense. We're just sort of blindly moving around and trying to hit a target and we're not actually being intentional and strategic.

Jill Anderson: Looking at education leaders, they're tasked with looking at what exists within their district or their school system, making decisions, and implementing some kind of change. But I imagine that has to be very hard to do because it's overwhelming. And there's a lot of different things at play; a study from some other part of the country might not offer insight into your unique problems. So I'm wondering, where is a good place to start with beginning to do this work on your own as an education leader?

Carrie Conaway: Well, when I started at the State Department of Ed, I did not start by attempting to implement some giant research agenda across the entire agency. You have to kind of get some quick wins. It's a change management strategy, which leaders have lots of experience with. Good leaders, part of what they're doing is leading change. And so it's just applying those same strategies toward leading the change of using more research.

So in my case, I started with a combination of a couple of things: basically my boss told me we should get such-and-such a study going, and working with the leadership to understand what the priorities of the agency were at the time, and picking a couple of those strategically. To give a concrete example, this is in 2007, there was a big policy initiative in the state to give our very lowest performing schools some additional autonomy from district policies around curriculum and instruction and the time of the school day, that sort of thing.

And it was a huge priority of our state board director. I'm looking around, and I'm like, "Is anybody collecting any data on this as we're going?" And at the time nobody had really planned anything systematic. And so I went to the woman who was running that part of the agency and said, "It seems like this would be a great opportunity to collect some baseline data before we start this, and then to collect information along the way." We did interviews with the school leaders and people involved on the program side. And we looked at some data just to sort of capture and document as we're going.

Because we know the board director is going to come back around in a year and be like, "How'd that initiative go?" Wouldn't it be better if we planned ahead? So that sort of starting small, with some stuff that you know is going to have an impact because you know people are going to ask you how it mattered, I think is a great place to start. The other place you can start, I would say, is if you know something's coming down the pike a year or two from now, like you're planning to take on a big initiative or you know your state legislature's likely to pass a law.

Planning ahead for that means asking, in that lead-up time: What do we already know from existing research that might be useful here? Do we want to collect any baseline data so that we can measure the change? If this is an intervention, how are we selecting students to participate? Is there a way we could introduce an element of randomness into the assignment, even if it's not literally a lottery, to help get a better answer on the impact?

There's lots of little small things like that. Just thinking a little farther ahead can really, I think, pay off big benefits later on in terms of being able to really understand and learn from what you've done and tweak it and improve it over time.

Jill Anderson: One of the things that I was reading in your work was this idea that leaders often start with the problem and researchers start with the question. And it's important for education leaders to think about how to turn a problem into a good line of questioning. Can you talk a little bit more about that idea of coming up with questions and lines of questioning as you work through a certain problem you're trying to solve?

Carrie Conaway: I mean, that's another way I could have answered your question about what the fundamental issue is here: educators have problems and researchers answer questions. And so someone has to shift to get to the point that we're on the same page about what we're trying to solve. When I think about questions, I think of three buckets of questions that I have found very useful in my own practice. The first two are questions of implementation and questions of impact.

So one question is: How much impact is this program that we've just implemented, or are about to implement, having on student outcomes? That's a great question to ask. And then there's a bunch of, to me, lead-up questions that are about how the implementation went. Did this turn out the way we expected it to? Where were the weak links in the chain, from the idea in the superintendent's head down into the classroom? Where did things fall apart, or where did they have the potential to fall apart, so you can catch those later? Those are two very practical kinds of questions that I think educators bring to the table that can guide a lot of that work.

And then the third piece is a little bit harder. It's thinking about a question of diagnosis. So you're getting more at: What is the problem we actually have? For whom and when is this problem worse? And that will help figure out what the policy solution is, or the practice solution, depending on the level of granularity. So again, to give an example, my colleague Nate Schwartz in Tennessee did a fantastic report a few years ago, where he looked at the reasons why kids were not successful in Tennessee on AP tests at the high school level.

It turns out that the answer is different depending on the school. In some schools, they didn't have a whole lot of kids scoring very high on grade nine and 10 tests, so there weren't a lot of kids who were academically prepared in 11th and 12th grade. In other schools, lots of kids were well prepared, but the schools weren't offering AP classes. In other schools, they were offering them, but only the more advantaged kids were taking them. And you could imagine there's five or six reasons.

But for each school, it really matters which circumstance you're in. If you're in the circumstance where no one's scoring well enough earlier on, nobody's prepared, you've got to tackle that problem. Whereas if your kids are prepared but you're not offering AP, then that's a different solution. And so really asking yourself to think carefully about for whom and when this problem is worse can help you dig from the problem space into the question space, and get a better set of questions that hopefully guide your practice better.

Jill Anderson: How hard is it to actually implement this way of thinking and approach into your work?

Carrie Conaway: Certainly every educator right now is under a tremendous amount of stress and strain, and this is probably not the best time to bring on a brand new thing. But it turns out, again, that learning from your work actually is the work. So if you're doing your job well as a leader, what you should be working toward is building an organization that can learn and iterate and improve. And this is a way of doing that. So it fits very well into strategic planning processes.

My job at the State Department of Ed in Massachusetts was research and planning for that reason because they fit together quite well. Districts are already doing strategic planning. So this isn't really adding that much extra. And the other thing I would say is an aspiration on its own probably isn't going to get you anywhere. You need a little bit of space in somebody's time to do this, but it doesn't have to be the superintendent.

I mean, if you're fortunate and you're in a larger district, you might be able to hire some staff. But even if you can't, having someone part of whose role is to help figure out what the key questions are that we need to answer this year, and who can help us answer them, along with perhaps some other set of duties or some sort of buyout for part of their time, I think can go a long way. So it's sort of like, yes, it takes some time, but lots of things that are worth doing take time. And if you don't have a system to learn from your work, in the end that's far more costly.

Jill Anderson: I imagine there will be tons of research coming out in the next probably decades about everything that's been happening.

Carrie Conaway: It's already coming, yes.

Jill Anderson: How do you find really good evidence-based research that you can use versus maybe something that's a little bit lower quality? How do you differentiate between the two and know something that you can actually really use as a leader?

Carrie Conaway: In general, reading single studies is probably not the way to go, because in general, that's a particular moment in time in a particular context. And what you really want to know is: What in general do we know, across lots of studies, about the impact of a dual enrollment program at the high school level, or of a tiered intervention system at an elementary school? You don't want to know exactly how it worked in Chicago in 2018. Unless it's your district that did the study, you want that broader picture. Because any given study could be kind of pro or con.

It's sort of like nutrition studies. In general, you will find that studies that look at the impact of high cholesterol will find a negative impact on later health outcomes, but some of them won't, and that's just because of how research works. So I would look towards summaries of research that are done by credible organizations. The U.S. Department of Education has the What Works Clearinghouse. That's a great example. There's other evidence clearinghouses that try to summarize and not just show individual studies. So that's one piece.

And then I think you do need to build a little bit of facility for what makes a better or worse comparison group. That's really mainly what it comes down to when you're talking about studies of whether something, quote, "worked" or not: How good is the comparison group? Researchers will say the supposed gold standard is a randomized controlled trial, because you have randomly assigned some students to get something and some not to, and so they should be the same except for the treatment. There are lots of other ways you can answer questions just as credibly, but building a little bit of facility with what makes for stronger or weaker methods in that regard is probably the one piece you would need to add on to build your judgment.

And then finally, as an educator, the thing only you can bring is your knowledge of your kids. So the research will tell you some stuff, and it'll hopefully tell you a little bit about the context that happens in, and then you have to use your professional judgment to know: Is that similar enough to the circumstances I'm in that this is likely to be relevant? So if you haven't thought about the relevance of the work, who cares what the impact is, right? If it's not relevant to you, don't even look at it.

It's an interesting challenge. One challenge I had working with district people when I was at the state was that everyone believes their district to be special, and they are all special. They're all very special, but they are not necessarily so different from one another as people think they are. And so it's pushing yourself to think a little bit about, "Okay, well, yeah, we are different from that town down the road, but how different are we really? Is it different enough that it would make a difference for this study to be relevant for us or not?"

Jill Anderson: I have to ask a little bit about the other side of the coin, which is education research. And how does that start to look a little bit different in a way that it's more useful for practitioners?

Carrie Conaway: Yeah, this is such a huge challenge, but also an opportunity. In the 13 years that I've worked in education research, it has improved tremendously in terms of the relevance of research to practice. First of all, I would say nobody goes into education policy research or education intervention research wanting their work to be irrelevant. You would pick a different topic if you didn't care about it having an impact. And so people are fundamentally, I think, in general motivated to want their work to have an impact. And it's more a matter of both building the skill of how to do that and building the incentives in higher ed institutions so that it counts as part of their work.

So in terms of the skill, I see tremendous hope in the younger generation of scholars coming out of doctoral programs right now. I see a lot more work that is closely collaborative with practitioners, not just what people think is interesting but what is also of value to practice. And I think increasingly we are starting to figure out what incentives might help there. I think some organizations are doing better at that than others. But I'm optimistic that over time the work it takes to do partnership work well will be recognized and valued as part of what makes research good, which is really what it comes down to.

Jill Anderson: Well, thank you so much for enlightening us on education research and what education leaders can do to make a difference in their own practice.

Carrie Conaway: Wonderful. I'm happy to join. Thank you for inviting me.

Jill Anderson: Carrie Conaway is a Senior Lecturer at the Harvard Graduate School of Education. She's the coauthor of Common-Sense Evidence: The Education Leader's Guide to Using Data and Research. I'm Jill Anderson. This is the Harvard EdCast produced by the Harvard Graduate School of Education. Thanks for listening.
