Making Feedback Count

A new survey tool — free and open source — goes beyond the basics to shed light on what really matters in the classroom

October 1, 2014

Student feedback can be a crucial way to evaluate teaching, assess a new curriculum, and improve classroom achievement. Or it can be next to useless. “You have to ask the right questions in the right way,” says Hunter Gehlbach, a survey methodologist at the Harvard Graduate School of Education. And that’s something most standard questionnaires don’t do.

Gehlbach spent the last year collaborating with Panorama Education on an entirely new approach to student surveys. The result is a scientifically rigorous, reliable set of survey questions that helps educators measure perceptions of teaching and learning and assess a dozen hard-to-quantify classroom dynamics, such as engagement, interest in the subject matter, grit, and relationships between students and teachers.


Now the result of that work is ready for schools to use. The Panorama Student Survey is a free, downloadable, open-source tool organized as a set of scales: groups of related questions, one for each of the dozen key topics.
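To make the scale idea concrete, here is a minimal sketch of how a scale might be scored. This is not Panorama's actual implementation; the item names, the "teacher-student relationships" example, and the 1–5 Likert response format are all illustrative assumptions. The core idea is simply that a scale score summarizes a student's answers to that topic's related questions, for instance by averaging them.

```python
# Sketch only: scoring one survey scale by averaging a student's
# Likert responses (assumed 1-5) across that scale's related items.
# Item names below are hypothetical, not from the actual survey.

def scale_score(responses, items):
    """Average a student's responses across one scale's items.

    Skipped items (None or missing) are ignored; returns None if the
    student answered none of the scale's items.
    """
    answered = [responses[i] for i in items if responses.get(i) is not None]
    return sum(answered) / len(answered) if answered else None

# Hypothetical three-item "teacher-student relationships" scale:
relationship_items = ["respects_me", "cares_about_me", "listens_to_me"]
student = {"respects_me": 4, "cares_about_me": 5, "listens_to_me": None}

print(scale_score(student, relationship_items))  # → 4.5
```

Averaging only the answered items is one common convention for handling skipped questions; schools analyzing real survey data would also want to track how many items each student answered.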

How can educators use the survey? We asked Gehlbach to share several common approaches (from among many potential applications):

  • To get an overall baseline of where the school is. In particular, many schools are interested in how different groups within the school compare. Do boys perceive the classroom climate in their math and science classes more positively than girls do? Do English language learners feel as strong a sense of belonging at the school as native English speakers?
  • To get a sense of the changes that occur over time. Do the teacher-student relationships get more positive or more negative from the beginning of the school year to the end? Do students become grittier or less gritty as they advance through middle and high school?
  • To assess the efficacy of interventions. If a new curriculum has been introduced for social studies, do students value the subject matter more and show more interest in these courses? Do those students who participate in a peer tutoring program develop greater facility in applying learning strategies to their schoolwork?

Gehlbach and his research team — HGSE students Molly Cahen, Bryan Mascio, Joe McIntyre, Beth Schueler, and Julianne Viola — wanted to be meticulous in crafting survey items that would yield valid, actionable data. For each of the 12 key topics, they conducted a comprehensive literature review, interviewed teachers and students, and developed survey questions according to best methodological practice. They then asked academic experts to evaluate the quality of the survey items, re-interviewed students to make sure the questions still aligned with how they articulated their learning experiences, and finally piloted the surveys in 13 districts across the country. Panorama, the data analytics startup that has already worked with more than 5,000 schools to gather and assess usable data in innovative ways, provided the grant that made the work possible.

“One of the things I hope these scales will do is begin to help schools build a culture of evaluation for everyone,” says Gehlbach. “It’s not just singling out the teachers and focusing on their performance; it’s looking more broadly at the many factors that influence learning. If you work for a company that does 360 evaluations twice a year, it’s very normal for everyone to be evaluated; people get used to it and their performance improves as a result of it. So I don’t think principals or students should be absolved of being evaluated either.”

Another key goal, he says, is to “expand our sense of what outcomes we care about. Everyone is concerned about test scores, and grades are a close second, and college going — but there is a lot more that we should be really invested in, in terms of what we’re evaluating.”

For more information about Gehlbach’s survey methodology, see “Measure Twice, Cut Down Error: A Process for Enhancing the Validity of Survey Scales,” published in the Review of General Psychology.
