
Thomas Kane

Walter H. Gale Professor of Education and Economics


Degree:  Ph.D., Harvard University (1991)
Phone:  617.496.4359
Fax:  617.495.2614
Vitae/CV:   Thomas Kane.pdf
Office:  50 Church Street 4th Floor
Faculty Coordinator:  Sheila Griffin


Thomas Kane is an economist and Walter H. Gale Professor of Education at the Harvard Graduate School of Education. He is faculty director of the Center for Education Policy Research, a university-wide research center that works with school districts and state agencies. Between 2009 and 2012, he directed the Measures of Effective Teaching project for the Bill & Melinda Gates Foundation. His work has spanned both K-12 and higher education, covering topics such as the design of school accountability systems, teacher recruitment and retention, financial aid for college, race-conscious college admissions and the earnings impacts of community colleges. From 1995 to 1996, Kane served as the senior economist for labor, education, and welfare policy issues within President Clinton's Council of Economic Advisers. From 1991 through 2000, he was a faculty member at the Kennedy School of Government. Kane has also been a professor of public policy at UCLA and has held visiting fellowships at the Brookings Institution and the Hoover Institution at Stanford University.


Awards

Professor of the Year, UCLA School of Public Affairs, Public Policy Department (2006)

Sponsored Projects


COVID Catch-Up: Road to Recovery Study (2022-2023)
America Achieves

CEPR, CALDER, and NWEA (the research partnership) will work with a consortium of 11 mid-to-large-sized school districts, which collectively serve over 700,000 students, to understand the efficacy of tutoring and other COVID academic recovery initiatives. In particular, using interim assessment data, we can provide districts with information at multiple points during the year so they can make course corrections in their recovery strategies as needed. In addition, we will publish nationally released reports on lessons learned from these districts to help guide recovery efforts in the broader field. In short, our focus is on providing both partner districts and the country as a whole with timely evidence that can be used to make spending on recovery efforts more efficient and effective.


Re-Assessing Three Decades of U.S. Education Reform (2021-2023)
Walton Family Foundation

Critics and proponents of education reform efforts over the last 30 years have created a narrative that there has been no progress and that achievement gaps are widening. Of course, their motivations differ. Reformers have found it expedient to lament the pace of improvement and widening gaps to make the case for more aggressive action. Their opponents have been making the same complaint, only to argue that past policies such as test-based accountability are ineffective. Over time, the cumulative effect has been to undermine the willingness of policymakers to take the inevitable political risks required for reform. The collective sense of futility both sides have created has itself become an important barrier to continued efforts. But has progress actually been slow, and have achievement gaps truly continued to widen? In this project we will take a fresh look at the state-by-state gains that have been made on the National Assessment of Educational Progress, the long-term outcomes (e.g., earnings, educational attainment) for those born in states with the largest increases, evidence on income-based gaps in achievement, and the role of state policy (such as test-based accountability or school finance reform) in driving the NAEP trends. To the extent that we can provide a more accurate picture of where progress has been made (and not made) and the role of state policy in driving those changes, we believe the project will have important impacts on the "atmospherics" of the education policy debate, both at the state level and at the federal level.


PIER2: Partnering in Education Research (PIER), An Interdisciplinary Pre-doctoral Training Program (2020-2025)

Because of the rich data they control, state and local education agencies are alluring partners for education researchers. Yet all researchers—especially young ones—must be mindful of the potential hazards: how to maintain objectivity, how to deliver bad news, how to negotiate a data use agreement and respect student privacy, how to convert an agency leader’s concern into a tractable research question. By renewing the PIER fellowship, we hope to train the next generation of education scholars to navigate such hazards and produce rigorous research that informs practitioner decisions. We will focus on the content area of Improving Education Systems and the methodological area of Efficacy. Our proposed program includes ten core elements: (i) the coordinated curriculum requirements, (ii) the public seminar with a private “director’s cut” seminar, (iii) the PIER proseminar course, (iv) the PIER faculty mentor, (v) the research apprenticeship, (vi) the policy internship, (vii) the PIER Summit, (viii) the PIER student seminar, (ix) independent research, and (x) job market support. Together, these elements are designed to provide students with the mix of hard skills, soft skills, and relationships required for successful researcher-practitioner partnerships in education.

Aside from our track record of training and placing many of the top young quantitative education researchers in the field today, our proposal has two primary strengths: (1) Through our Strategic Data Project, we have built a national network of agencies, having trained more than 350 data analysts in more than 125 state and local education agencies and non-profits. Moreover, with our recent IES-funded National Center for Rural Education Research Networks, that network now includes 50+ rural school districts in NY and OH. The size and breadth of the network allow us to accommodate a wide range of student interests and to foster direct relationships between fellows and agency leaders that are not dependent on a faculty member’s project. Fellows take those relationships with them as they start their careers. (2) In our first training grant in 2015, we included a policy internship, placing fellows in 10-week summer residencies in a partner agency. Rather than starting from scratch, we are building on that experience and improving the existing policy internship: extending it to a yearlong engagement and adding a faculty coordinator of PIER partnerships (Carrie Conaway) to oversee student placements and partner engagement.

Principal Investigator Tom Kane has extensive experience collaborating with school agencies, including leading the $60 million Measures of Effective Teaching project. Our twenty-six faculty affiliates are drawn from three schools at Harvard (HGSE, HKS, and GSAS), and two more are close collaborators drawn from other Boston-area institutions. Number of PIER fellows trained: approximately 30 doctoral students will receive two- or three-year PIER fellowships. (The exact number depends on cohort size across each of the three years and when students apply during their graduate careers.) Across all cohorts, the PIER fellowship will support 68 fellowship years.


Validation Network (2019-2023)
Bill and Melinda Gates Foundation

Over the last decade, public agencies have invested heavily in testing and in longitudinal data systems that track the outcomes of individual students over time and connect students to teachers and programs. (The teacher-student links are critical, since so many interventions are delivered through the actions, skills, and expertise of teachers.) Despite this investment, however, the federal Institute of Education Sciences and philanthropies such as BMGF still generally use a traditional model of one-off, customized evaluations. This is inefficient and unnecessarily costly. For each new study, the data must be cleaned and assembled, data use agreements negotiated, and a team of researchers spends months testing different statistical models—but after the report is written, all that work is discarded. Just as NASA transitioned from single-use spacecraft to the multi-use Space Shuttle in the 1980s, we seek to build a standing network of districts and charter schools, with data use agreements and analysis files already in place, which can be used to evaluate multiple interventions over multiple years. Such a network would dramatically lower the incremental cost of each new study.

Moreover, given the decentralized nature of decision-making in U.S. education, it is necessary to engage a broader network of educators in the excitement of discovery: proposing solutions, choosing interventions to test, and piloting and evaluating them. The time to build an audience is before an intervention is launched, rather than after the results are released. Too often, educators have been viewed as passive recipients of “evidence” after the fact; they are truly an afterthought. No wonder educators are not eager to change what they are doing in response to the latest research!

Our long-term vision is to build a mechanism for engaging a broader audience from the start: soliciting the feedback of teachers, principals, and district/network leaders to establish design parameters for interventions to be developed, recruiting teachers and schools to participate in pilots for specific interventions, keeping the network informed of pilots underway, and sharing results as soon as the pilots are completed. To do that, we will also build a mechanism for collecting data on additional outcomes and on implementation challenges, beyond test scores alone. To provide that, the network will develop a platform for surveying participating teachers and students in treatment and comparison classrooms. Once the infrastructure is in place, we believe the field will be able to make much more rapid progress in determining what works, at a fraction of current costs. As importantly, such an infrastructure will make it easier for those developing interventions (either the developers themselves or those who fund them) to evaluate efficacy in a reasonable timeframe and incorporate feedback from the field. Finally, schools, teachers, and students—the “end users”—will, over time, be faced with an array of tested products rather than unfounded claims.

Although the model will require philanthropic support to start, we believe it could be self-sustaining once it is up, running, and at scale. We expect that intervention funders and developers will be willing to pay the costs of the validation network for two reasons: first, the costs (in both time and money) will be substantially below what rigorous evaluations currently cost; and second, partly as a result of the increased information available due to lower evaluation costs, schools and districts will be more willing to demand validated solutions.


National Center on Rural Education Research Networks (2019-2024)

The Center for Education Policy Research (CEPR) is proposing to launch the National Center for Rural Education Research Networks (NCRERN) under the topic of National Research and Development Centers. Over the last 13 years, CEPR has been a leader in building the data capacity of school agencies, having recruited and trained 250+ analysts working in more than 100 SEAs and LEAs. For the last 30 months, we have worked with a network of urban districts and CMOs to design and conduct pilots for educational software and chronic absenteeism through our Proving Ground initiative. We plan to draw from that experience to launch a new network for rural schools. NCRERN proposes to support a network of 60 rural school districts in NY and OH, and to work with three replication states (IA, NM, and WY), to address challenges in three key content areas: chronic absenteeism, college readiness (especially Advanced Placement (AP) courses), and college enrollment.

The focused program of research will consist of four distinct types of studies. First, historical analyses will explore trends and differences in outcomes by school and subgroup, with the goal of generating hypotheses regarding possible solutions; we will also conduct predictive analyses to identify students most likely to benefit from targeted interventions (e.g., those at risk for chronic absenteeism, those best prepared for AP success, or those needing additional support to enroll in college). The second type of analysis will consist of carefully designed causal analyses of interventions chosen by our partners in each of the three topic areas. These studies will use randomized and quasi-randomized designs to monitor implementation, evaluate immediate impacts, and refine the interventions and their implementation on a rapid-cycle basis; we will support the partners in monitoring their implementation and refining the interventions over time. The third study will be a replication evaluation in which SEAs in the replication states (IA, NM, and WY) present to their rural districts the subset of interventions that proved effective in NY and OH. Finally, we will conduct an overall evaluation of the impact of participation in NCRERN by comparing changes in a wide range of student outcomes between the participating districts and comparison districts.

NCRERN will be led by Co-PIs Thomas Kane (HGSE), Douglas Staiger (Dartmouth), and Christopher Avery (HKS). We will use regional experts with long experience working with rural schools to serve as network facilitators: John Sipple (Cornell) in New York and Mike Fuller (Muskingum Valley Educational Service Center) in Ohio. We will host regular in-person regional meetings and an annual convening for NCRERN partners. A Center Director will oversee the day-to-day activities and overall work of the center, and a State Network Manager will be placed in both New York and Ohio to manage each state’s network of rural districts. Additionally, a Research Manager will lead the full analytics team.

We believe our proposal has several distinctive strengths: our history of building district capacity through our Strategic Data Project; our recent experience running a continuous improvement network through our Proving Ground project; the combined expertise of our project team in rural education, research methods, and change management; the iterative approach to piloting and testing interventions that we envision; the feasibility of the overall evaluation to yield convincing impact estimates of our model; and, if successful, our connections with state regional organizations and state ESSA plans, which will simplify replication.


The Long-Run Influence of Charter Schools – Phase I (2018-2024)
Laura and John Arnold Foundation

The Long-Run Influence of Charter Schools (Phase I) project aims to create a charter school lottery data clearinghouse, making it possible for other researchers to store data, propose projects, and gain secure access to the data for their own research. The first phase of the project (the phase covered by this grant) proposes a large-scale data collection effort in a handful of urban centers and will focus on two goals:

• Assembling admission lottery data from past cohorts of charter school applicants in order to estimate impacts on long-term outcomes, such as earnings, college attendance, and home ownership (all based on tax records).
• Assembling admission lottery results from current cohorts of charter school applicants, before those data are lost or destroyed, in order to lay the groundwork to estimate impacts on both medium-term outcomes (e.g., student achievement and college-going rates) and long-term outcomes (e.g., wages) in the future.

Three categories of variables will be used to facilitate the analysis of long-term outcomes from the charter schools:

• Linking variables (e.g., student name, gender, parent name, parent address, student date of birth, student grade level, and the school a student is currently attending);
• Lottery outcome variables (e.g., the randomized ordering used, any priority categories, whether the student was initially offered a slot, and whether the student was eventually offered a slot at a charter school as the school worked through the waiting list); and
• Whether the student enrolled in the charter school in subsequent school years.

Phase two of the project (not included in this grant) aims to merge the data onto IRS records.


Thomas J. Kane, David Blazar, Hunter Gehlbach, Miriam Greenberg, David Quinn, Daniel Thal. "Can Video Technology Improve Teacher Evaluations?: An Experimental Study" Education Finance and Policy, Vol. 15, No. 3, pp. 397-427, (2020).

David Blazar, Blake Heller, Thomas J. Kane, Morgan Polikoff, Douglas Staiger, Steve Carrell and Michal Kurlaender. "Curriculum Reform in the Common Core Era: Evaluating Elementary Math Textbooks Across Six U.S. States" Journal of Policy Analysis and Management, Vol. 39, No. 4, pp. 966-1019, (2020).

Thomas J. Kane, "Develop and Validate--then Scale: Lessons from the Gates Foundation's Effective Teaching Strategy." Education Next, October 15, 2018.

Mark Chin, Thomas J. Kane, Whitney Kozakowski, Beth E. Schueler, Douglas O. Staiger. "School District Reform in Newark: Within- and Between-School Changes in Achievement Growth" Industrial and Labor Relations Review, Vol. 72, No. 2, pp. 323-354, (2018).

Thomas J. Kane, "Making Evidence Locally: Rethinking Education Research under the Every Student Succeeds Act" Education Next, Vol. 17, No. 2, pp. 52-58, (2017).

Thomas J. Kane, Kerri Kerr and Robert Pianta (eds.) Designing Teacher Evaluation Systems: New Guidance from the Measures of Effective Teaching Project (Jossey-Bass). (Kane was the PI.), (2014).

Thomas J. Kane, "Shooting Bottle Rockets at the Moon: Overcoming the Legacy of Incremental Education Reform" Brookings Chalkboard May 29, 2014.

David J. Deming, Justine S. Hastings, Thomas J. Kane, Douglas O. Staiger. "School Choice, School Quality and Postsecondary Attainment" American Economic Review Vol. 104, No. 3, pp. 991-1013, (2014).

Thomas J. Kane, Daniel F. McCaffrey, Trey Miller and Douglas O. Staiger, "Have We Identified Effective Teachers? Validating Measures of Effective Teaching Using Random Assignment" (Seattle, WA: Bill & Melinda Gates Foundation). (Kane was the PI.), (2013).

Jonah Rockoff, Douglas Staiger, Thomas J. Kane and Eric Taylor. "Information and Employee Evaluation: Evidence from a Randomized Intervention in Public Schools" American Economic Review Vol. 102, No. 7, pp. 3184-3213, (2012).

Atila Abdulkadiroglu, Josh Angrist, Susan Dynarski, Thomas J. Kane & Parag Pathak. "Accountability and Flexibility in Public Schools: Evidence from Boston's Charters and Pilots" Quarterly Journal of Economics, Vol. 126, No. 2, pp. 699–748, (2011).

Thomas J. Kane and Douglas O. Staiger, "Volatility in School Test Scores: Implications for Test-Based Accountability Systems" in Diane Ravitch (ed.) Brookings Papers on Education Policy (Washington, DC: Brookings Institution, pp. 235-283), (2002).

Thomas J. Kane and Douglas O. Staiger. "The Promise and Pitfalls of Using Imprecise School Accountability Measures" Journal of Economic Perspectives, Vol. 16, No. 4, pp. 91-114, (2002).

Thomas J. Kane, The Price of Admission: Rethinking How Americans Pay for College (Washington, DC: Brookings Institution), (1999).

Thomas J. Kane and Cecilia Rouse. "Labor Market Returns to Two-Year and Four-Year College" American Economic Review, Vol. 85, No. 3, pp. 600-614, (1995).

News Stories