Do Value-Added Estimates Identify Causal Effects of Teachers and Schools?

A new paper by Professor Tom Kane talks of the outpouring of new research in education and its implications for future policy
As part of Brookings' ongoing series, The Brown Center Chalkboard, Professor Tom Kane's paper "Do Value-Added Estimates Identify Causal Effects of Teachers and Schools?" appeared on October 30, 2014.

In 25 years as an education researcher, I have never witnessed such a rapid outpouring of new research in education as we have seen in recent years. In what may be the most important byproduct of the No Child Left Behind Act and the Institute of Education Sciences' grants to states, school agencies have been linking students to teachers and schools and tracking their achievement over time. Researchers across the country have been using those data to study the value of traditional teacher certification, the degree of on-the-job learning among teachers, the impact of charter schools, the effectiveness of teacher preparation programs, and more. Yet much of that work depends on a simple, often unstated, assumption: that the short list of control variables captured in educational data systems (prior achievement, student demographics, English language learner status, eligibility for federally subsidized meals or for gifted and special education programs) includes the relevant factors by which students are sorted to teachers and schools. If it does, then researchers can effectively control for differences in the readiness to learn of students assigned to different teachers or attending different schools.
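To make the assumption concrete, here is a minimal sketch of the kind of value-added regression this approach implies: current test scores regressed on prior achievement and the demographic controls listed above, plus teacher indicators. This is an illustration, not Professor Kane's actual specification; the file and column names (score, prior_score, ell, frpl, gifted, sped, teacher_id) are hypothetical.

```python
# Illustrative value-added regression (hypothetical data and column names).
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical longitudinal file: one row per student per year.
df = pd.read_csv("student_year_records.csv")

# Regress current scores on prior achievement, demographic controls,
# and teacher indicators.
model = smf.ols(
    "score ~ prior_score + C(grade) + ell + frpl + gifted + sped + C(teacher_id)",
    data=df,
).fit()

# The coefficients on the teacher indicators are the value-added estimates.
# They carry a causal interpretation only if these controls capture how
# students are sorted to teachers -- the assumption at issue here.
teacher_effects = model.params.filter(like="teacher_id")
print(teacher_effects.head())
```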

But do they? The answer carries huge stakes, and not just for teachers. Where the assumption appears justified, our understanding of the effects of various education interventions will continue to expand rapidly, since the existence of longitudinal data systems dramatically lowers the cost of new research and allows for widespread replication in disparate sites. If, however, the assumption frequently proves unjustified, then the pace of progress will necessarily be much slower, as researchers are forced to shift to expensive and time-consuming randomized trials. When should we believe program impacts based on statistical controls in the absence of a randomized trial? Though it might seem like an arcane statistical debate, nothing less than the pace and scope of U.S. education renewal is at stake.

Read the full paper here.