Harvard Graduate School of Education

Eric Taylor

Assistant Professor of Education

Degree:  Ph.D., Stanford University (2015)
Email:  [javascript protected email address]
Phone:  617.496.1232
Vitae/CV:  Eric Taylor.pdf
Office:  Gutman 469
Faculty Assistant:  Talicha Vaval

Profile

Eric Taylor is an assistant professor at the Harvard Graduate School of Education. He studies the economics of education, with a particular interest in employer-employee interactions between schools and teachers — hiring and firing decisions, job design, training, and performance evaluation. His work has been published in the American Economic Review, Journal of Human Resources, and Journal of Public Economics; and featured in Slate, Time, The Washington Post, and Education Week. Taylor was a Spencer Dissertation Fellow in 2014, and was recognized for Excellence in Teaching and Mentoring by the Stanford Graduate School of Education in 2013.

See a full list of Eric Taylor's courses.

Sponsored Projects


Does teacher effort respond to evaluation? Evidence from a policy allowing low-scoring students to be retested (2017-2019)
Jacobs Foundation

The focus of our research is whether and how the incentives of evaluation and accountability programs affect teachers’ job decisions and thus student learning. In the proposed project, our primary objective is empirical: to estimate the causal effect of one change in accountability policy — allowing schools to retest students who initially fail the exam — on the test score growth of all students. This retest policy bears on two teacher decisions critical to student learning: (i) how teachers allocate their time, effort, and attention across different students; and (ii) whether teachers choose to use “teach to the test” strategies which increase test scores without an equal rise in learning.
We propose to study more than a decade of data from North Carolina, where teachers and schools are held accountable for the proportion of their students who pass end-of-year exams. From 2009 to 2012, schools retested students who initially failed the exams, and only the higher of the original and retest scores was used in the accountability measure. We will estimate the effect of the retesting policy on all students across the distribution of achievement, not just the effect on the students who are retested. One hypothesis, for example, is that retesting reduces the pressure to unduly focus teaching effort on the marginal students — students who may or may not pass — in the months before the initial test, and thus teachers can give relatively more effort to the infra-marginal students, including those high-achieving students at little risk of failing.


Using Teacher Evaluation Data to Drive Instructional Improvement: Evidence from the Evaluation Partnership Program in Tennessee (2015-2020)
IES

The Evaluation Partnership Program (EPP), currently being scaled statewide by the Tennessee Department of Education (TDOE), is designed to produce lasting improvements in teacher job performance by pairing teachers in year-long working partnerships focused on addressing the performance feedback received in formal, classroom-observation-based evaluation. In brief, a teacher with low evaluation scores in a particular area of instructional practice (e.g., “Questioning” or “Lesson Structure and Pacing”) is matched to a partner teacher in the same school who has high evaluation scores in the same area of practice. The partners are encouraged to discuss evaluation results, observe each other’s teaching, share strategies and tactics, and follow up with each other. EPP was designed in a partnership between the TDOE and researchers (John Papay and John Tyler at Brown University, and Eric Taylor now at HGSE). Results from a pilot test of EPP are encouraging. In schools randomly assigned to EPP, students’ test scores improved (0.08 student standard deviations in reading and math), teachers’ observed practices improved, and teachers’ attitudes about performance evaluation improved.

The current grant proposal will expand the experimental evaluation of EPP, examining the effects of the program at statewide scale and testing hypotheses about mechanisms. The random-assignment evaluation will be extended to include all approximately 1,800 schools in Tennessee (half assigned to implement EPP, the other half to a business-as-usual control). The project also aims to inform broader questions about the scale-up of promising practices to statewide programs, including questions about why schools do or do not choose to participate. Finally, the grant proposal explicitly involves continuing and expanding partnerships between TDOE and researchers at Brown, HGSE, and Vanderbilt, in part to build research and evaluation capacity at the TDOE.
