Thomas Kane

Walter H. Gale Professor of Education and Economics

Degree:  Ph.D., Harvard University (1991)
Email:  [javascript protected email address]
Phone:  617.496.4359
Fax:  617.495.2614
Vitae/CV:  Thomas Kane.pdf
Office:  50 Church Street, 4th Floor
Faculty Assistant:  Sheila Griffin


Thomas Kane is an economist and Walter H. Gale Professor of Education at the Harvard Graduate School of Education. He is faculty director of the Center for Education Policy Research, a university-wide research center that works with school districts and state agencies. Between 2009 and 2012, he directed the Measures of Effective Teaching project for the Bill & Melinda Gates Foundation. His work has spanned both K-12 and higher education, covering topics such as the design of school accountability systems, teacher recruitment and retention, financial aid for college, race-conscious college admissions and the earnings impacts of community colleges. From 1995 to 1996, Kane served as the senior economist for labor, education, and welfare policy issues within President Clinton's Council of Economic Advisers. From 1991 through 2000, he was a faculty member at the Kennedy School of Government. Kane has also been a professor of public policy at UCLA and has held visiting fellowships at the Brookings Institution and the Hoover Institution at Stanford University.

Awards

Professor of the Year, UCLA School of Public Affairs, Public Policy Department (2006)

Sponsored Projects


COVID-19 Development of Tools to Reduce Chronic Absenteeism (2020-2022)
Bill and Melinda Gates Foundation

The primary outcome of this grant is the development of a more self-guided option for using Proving Ground's approach, expertise, and tools, significantly increasing the number of partners the program can work with. School systems engaging in this modified approach will use it either as a lower-cost entry point into the Proving Ground Network or as a way to maintain fidelity to the approach after their full-service contract with Proving Ground has ended.


Validation Network (2019-2023)
Bill and Melinda Gates Foundation


The Long-Run Influence of Charter Schools – Phase I (2018-2020)
Laura and John Arnold Foundation

The Long-Run Influence of Charter Schools (Phase I) project aims to create a charter school lottery data clearinghouse, making it possible for other researchers to store data, propose projects, and gain secure access to the data for their own research. The first phase of the project (the phase covered by this grant) proposes a large-scale data collection effort in a handful of urban centers and will focus on two goals:
• Assembling admission lottery data from past cohorts of charter school applicants in order to estimate impacts on long-term outcomes, such as earnings, college attendance, and home ownership (all based on tax records).
• Assembling admission lottery results from current cohorts of charter school applicants, before those data are lost or destroyed, in order to lay the groundwork for estimating impacts on both medium-term outcomes (e.g., student achievement and college-going rates) and long-term outcomes (e.g., wages) in the future.
Three categories of variables will be used to facilitate the analysis of long-term outcomes from the charter schools:
• Linking variables (e.g., student name, gender, parent name, parent address, student date of birth, student grade level, and the school a student is currently attending);
• Lottery outcome variables (e.g., the randomized ordering used, any priority categories, whether the student was initially offered a slot, and whether the student was eventually offered a slot as the school worked through the waiting list); and
• Whether the student enrolled in the charter school in subsequent school years.
Phase two of the project (not included in this grant) would merge the data onto IRS records.


Proving Ground – Networks for Continuous Improvement (2017-2018)
Carnegie Corporation of New York

Proving Ground provides an infrastructure that makes it faster, easier, and cheaper for agencies to pilot, evaluate, and modify initiatives. Proving Ground has been designed around three core principles:
• Evidence is most powerful when generated locally.
• To be useful, evidence must be generated quickly and cheaply.
• Piloting and testing should be a natural part of the way that school agencies make decisions.
Currently, Proving Ground serves three large districts and 10 charter management organizations (CMOs). This grant from the Carnegie Corporation supports the addition of four school districts to the Proving Ground networks.


Never Judge a Book by its Cover: Use Student Achievement Instead (2016-2017)
William and Flora Hewlett Foundation

The Center for Education Policy Research (CEPR) is planning to examine the efficacy of elementary math textbooks and instructional materials in use during the springs of 2016 and 2017. In this proposal, we seek supplemental funds to support a teacher survey that will allow us to explore three main questions pertaining to textbook use:
1) Do teachers report using the same instructional materials as school leaders and/or administrative records?
2) How do teachers use textbooks and accompanying instructional materials in their classrooms?
3) What additional materials besides textbooks do teachers use in their classrooms?
The answers to these questions will allow us to identify the range of implementations for a given textbook and how these relate to the measured efficacy of a textbook.


Never Judge a Book by its Cover: Use Student Achievement Instead (2016-2018)
Bill and Melinda Gates Foundation

In February and March of 2015, as teachers were preparing for the first administration of the end-of-year assessments from the Partnership for Assessment of Readiness for College and Career (PARCC) and the Smarter Balanced Assessment Consortia (SBAC), the Center for Education Policy Research at Harvard surveyed a representative sample of 1,500 teachers and 142 principals across five states. After controlling for the baseline achievement of students, peers, and the teacher's own value-added in prior years, we estimated the variance in student outcomes associated with the textbooks used. (Our sample size allowed us to identify how much variation in achievement was associated with textbooks, but not necessarily to evaluate the effectiveness of each textbook individually. Only a few textbooks had a sufficient sample size in our original study to measure their effects reliably.) The variance in achievement associated with textbook effects was substantial, especially in math. In 4th and 5th grade math classrooms, we estimated that a standard deviation in textbook effectiveness was equivalent to .10 standard deviations in achievement at the student level. If all schools could be persuaded to switch to one of the top-quartile textbooks, student achievement would rise overall by roughly .127 student-level standard deviations. Such a boost in the average teacher's effectiveness would be larger than the improvement the typical teacher experiences in their first three years on the job. The choice of textbook indeed appears to matter.

Obviously, alignment is critical. However, an aligned textbook may be used ineffectively for many different reasons: the approach is too foreign to students, the training materials for teachers are insufficient, etc. A measure of efficacy would be an important complement to measures of alignment. Ideally, districts would be purchasing textbooks that are both aligned and effective.

Unfortunately, outside of the above study, there is little to no way for districts to evaluate the efficacy (and not just the alignment) of textbooks for teachers implementing the Common Core State Standards. The prior federally funded studies of instructional materials, such as textbooks, focused on earlier standards, not the CCSS. Our work would generate much-needed empirical evidence about the relative effectiveness of instructional materials that both complements and goes beyond existing "experts' review"-based textbook examinations. Such evidence could help inform purchasing decisions by states and school districts, the largest of which spend tens of millions of dollars each on textbooks annually. Such evidence could also help encourage and guide publishers to improve their offerings.

Because textbooks change and districts change adopted books with relative regularity, this type of data must be updated frequently to have a lasting impact. We therefore believe this should not be a one-off study, but one repeated every one or two years.


Faster, Cheaper Evidence-Gathering for U.S. Education (2015-2020)
Bill and Melinda Gates Foundation

In education, most new ideas will fail. The process by which children and adults learn is just too complex to anticipate every obstacle. Ultimately, the rate of progress of U.S. education will not be determined by the quality of the initial reform ideas, but by the speed with which we can identify the subset of those ideas that are effective. We believe that a primary reason for our slow progress is our inability to learn what works in a timely and efficient manner and to engage state and local leaders in that learning process.

The Center for Education Policy Research at Harvard University is working with a network of school agencies to launch a system of low-cost, rapid-cycle, ongoing trials. The work is guided by seven design principles:
• State and local decision makers (not other researchers, nor the federal government) must be the primary audience.
• The evidence must illuminate interim outcomes and implementation, not simply provide a summative verdict on effectiveness after the fact.
• Whenever feasible, solutions should be piloted in a subset of schools and classrooms and rolled out broadly only when the early results are promising.
• The reporting cycle must be less than one year.
• To keep costs low, the system must rely primarily on existing data (such as student achievement, grade retention, attendance, graduation, college-going, and student log files from educational software providers).
• We focus on policies and services that could plausibly be scaled up quickly.
• Reports are public and distributed to network leadership.


Partnering in Education Research (PIER): A Predoctoral Interdisciplinary Training Program (2015-2020)

Over the past decade, districts, states, and the federal government have invested heavily in education data systems. Those data are a vital national resource, with the potential to illuminate many longstanding debates in education. Yet the data remain vastly underutilized by the education research community. Through the Partnering in Education Research (PIER) fellowship, we will train the next generation of education scholars to collaborate with school agencies to put the data to use. To succeed, they will need more than the quantitative methodological skills taught in graduate programs. We have designed the program to provide the other soft and hard skills they will need to work successfully with school agencies: translating practitioners' questions into tractable research questions, understanding when and where randomization and quasi-experimental designs are feasible, negotiating the fine points of a data security agreement, and preserving relationships while reporting results with rigor and objectivity.

While on campus, the fellows will work with a community of prominent social scientists from across the Harvard campus (specifically, the Harvard Graduate School of Education (HGSE), the Harvard Graduate School of Arts and Sciences (GSAS), and the Harvard Kennedy School (HKS)). In addition, the fellows will have access to the unique network of more than 60 school agencies around the country with whom the Center for Education Policy Research at Harvard currently works.

Each fellow will:
• Serve as a research apprentice on education projects for a PIER core faculty member;
• Be placed in an agency internship with one of our partner education agencies, during which they will design and execute a research project in collaboration with agency leadership;
• Attend an annual Partnership Discovery Event connecting researchers with education leaders to design research projects on particular themes, such as "blended learning" or "teacher evaluation";
• Take part in a pro-seminar in education research. The seminar series will include "Director's Cut" sessions given by leading researchers, in which they will describe how their research project came about and the challenges overcome along the way.
Approximately 28 doctoral students will receive two- or three-year PIER fellowships. Across all cohorts, the PIER fellowship will support 68 fellowship years.


Evaluation of the Match Rookie Teacher Academy (2015-2019)
Laura and John Arnold Foundation

The purpose of this project is to perform the evaluation portion of a randomized controlled trial designed to assess the impact of The Match School Foundation, Inc.'s teacher training program for novice teachers on outcomes such as achievement growth of teachers' students, principal ratings, and retention, with the ultimate goal of improving K-12 education.

The Center for Education Policy Research (CEPR) has partnered with The Match School Foundation, Inc. ("Match") to evaluate the efficacy of Match's teacher training program. CEPR runs and evaluates the Match Teacher Launch Project ("TLP"), a training program for first-year teachers. TLP uses the Match Teacher Residency ("MTR") methodology and coursework to prepare more effective first-year teachers. CEPR and Match target "regular" pre-service teachers, those receiving a degree from a traditional education school ("TLP Participants"), to receive this distilled version of MTR programming. Over the course of two months, TLP Participants take three courses that are each thirty to forty hours long ("Summer Training"), and receive additional coaching every week for the first four months of teaching ("Fall Coaching"). TLP is designed as a randomized controlled trial, piloted in 2015 ("Year 1") and fully run in 2016 and 2017. CEPR leads the research and evaluation effort and measures effectiveness via outcomes including TLP Participant achievement, attitudes, beliefs, practices, and job satisfaction, principal ratings, and overall retention.


TN study of Pre-College math remediation programs (2014-2017)
Bill and Melinda Gates Foundation

This project evaluates math options in Tennessee for high school seniors whose previous achievement in math would require them to take developmental math in college. Chief among these options is SAILS, a computer-enabled developmental math program for high school seniors. The program was developed in 2012 by Chattanooga State Community College and is now being scaled across the state. Students are assigned to SAILS based on an ACT math cut score, and those who successfully complete the program's modules are not required to take developmental math in community college and can enroll in credit-bearing college math courses immediately. The evaluation examines whether the SAILS model improves student achievement in math and results in better long-term postsecondary outcomes than other approaches to delivering remediation currently used in Tennessee. It also compares the short- and long-run costs and savings associated with SAILS to those of other approaches and uses qualitative methods to identify successful implementation practices for SAILS.


Publications

Thomas J. Kane, David Blazar, Hunter Gehlbach, Miriam Greenberg, David Quinn, Daniel Thal. "Can Video Technology Improve Teacher Evaluations?: An Experimental Study" Education Finance and Policy, Vol. 15, No. 3, pp. 397-427.,(2020)

David Blazar, Blake Heller, Thomas J. Kane, Morgan Polikoff, Douglas Staiger, Steve Carrell and Michal Kurlaender. "Curriculum Reform in the Common Core Era: Evaluating Elementary Math Textbooks Across Six U.S. States" Journal of Policy Analysis and Management, Vol. 39, No. 4, pp. 966-1019.,(2020)

Mark Chin, Thomas J. Kane, Whitney Kozakowski, Beth E. Schueler, Douglas O. Staiger. "School District Reform in Newark: Within- and Between-School Changes in Achievement Growth" Industrial and Labor Relations Review, Vol. 72, No. 2, pp. 323-354.,(2018)

Thomas J. Kane, "Develop and Validate--then Scale: Lessons from the Gates Foundation's Effective Teaching Strategy." Education Next, October 15, 2018.,(2018)

Thomas J. Kane, "Making Evidence Locally: Rethinking Education Research under the Every Student Succeeds Act" Education Next, Vol. 17, No. 2, pp. 52-58.,(2017)

Thomas J. Kane, "Shooting Bottle Rockets at the Moon: Overcoming the Legacy of Incremental Education Reform" Brookings Chalkboard May 29, 2014.,(2014)

Thomas J. Kane, Kerri Kerr and Robert Pianta (eds.) Designing Teacher Evaluation Systems: New Guidance from the Measures of Effective Teaching Project (Jossey-Bass). (Kane was the PI.),(2014)

David J. Deming, Justine S. Hastings, Thomas J. Kane, Douglas O. Staiger. "School Choice, School Quality and Postsecondary Attainment" American Economic Review Vol. 104, No. 3, pp. 991-1013.,(2014)

Thomas J. Kane, Daniel F. McCaffrey, Trey Miller and Douglas O. Staiger, "Have We Identified Effective Teachers? Validating Measures of Effective Teaching Using Random Assignment" (Seattle, WA: Bill & Melinda Gates Foundation). (Kane was the PI.),(2013)

Jonah Rockoff, Douglas Staiger, Thomas J. Kane and Eric Taylor. "Information and Employee Evaluation: Evidence from a Randomized Intervention in Public Schools" American Economic Review Vol. 102, No. 7, pp. 3184-3213.,(2012)

Atila Abdulkadiroglu, Josh Angrist, Susan Dynarski, Thomas J. Kane and Parag Pathak. "Accountability and Flexibility in Public Schools: Evidence from Boston's Charters and Pilots" Quarterly Journal of Economics, Vol. 126, No. 2, pp. 699–748.,(2011)

Thomas J. Kane and Douglas O. Staiger, "Volatility in School Test Scores: Implications for Test-Based Accountability Systems" in Diane Ravitch (ed.) Brookings Papers on Education Policy (Washington, DC: Brookings Institution, pp. 235-283).,(2002)

Thomas J. Kane and Douglas 0. Staiger. "The Promise and Pitfalls of Using Imprecise School Accountability Measures" Journal of Economic Perspectives, Vol. 16, No. 4, pp. 91-114.,(2002)

Thomas J. Kane, The Price of Admission: Rethinking How Americans Pay for College (Washington, DC: Brookings Institution).,(1999)

Thomas J. Kane and Cecilia Rouse. "Labor Market Returns to Two-Year and Four-Year College" American Economic Review, Vol. 85, No. 3, pp. 600-614.,(1995)

News Stories