Usable Knowledge is a trusted source of insight into what works in education — translating new research into easy-to-use stories and strategies for teachers, parents, K-12 leaders, higher ed professionals, and policymakers. Usable Knowledge is produced at the Harvard Graduate School of Education by Bari Walsh (senior editor) and Leah Shafer (staff writer). Contact us at email@example.com.
Focus on Evidence
Four points on what works, and what doesn’t, in evaluating education policy
Harvard Graduate School of Education Professor Thomas Kane, a Senior Fellow with the Brookings Institution's Brown Center on Education Policy, recently published "Frustrated with the pace of progress in education? Invest in better evidence" on the Brookings website. An excerpt follows.
The primary obstacle to faster progress in U.S. education reform is hard to put your finger on, because it's an absence, not a presence. It is not an interest group or a manifest social problem. It is the infrastructure we never built for identifying what works, and the organizational framework we have not yet constructed for building consensus among education leaders across the country about what is working. Before you roll your eyes at another call for more research by a self-interested researcher, consider the following argument:
In education as in medicine, most new ideas will fail.
For the largest pharmaceutical companies, more than 80 percent of Phase II clinical trials failed between 2008 and 2010. Do we have any reason to believe that educational interventions will have a higher success rate? Student learning and the process of adult behavior change in schools are just as complex as the typical disease process — and probably less well understood. It is impossible to anticipate every obstacle and complication. We should anticipate that most new ideas will fail and develop the infrastructure for testing a large number of them on a small scale first.
The concept of a clinical trial — the small-scale deployment of a promising idea, with a comparison group for measuring efficacy — is foreign to education. As with Race to the Top, we tend to roll out reforms broadly, with no comparison group in mind, and hope for the best. Just imagine if we did that in health care. Suppose drug companies had not been required to systematically test drugs, such as statins, before they were marketed. Suppose drugs were freely marketed and the medical community simply stood back and monitored rates of heart disease in the population to judge their efficacy. Some doctors would begin prescribing them. Most would not. Even if the drugs were working, heart disease could have gone up or down, depending on other trends such as smoking and obesity. Two decades later, cardiologists would still be debating their efficacy. And age-adjusted death rates for heart disease would not have fallen by 60 percent since 1980.
Sound far-fetched? That's exactly how our ancestors ended up practicing bloodletting for 2,500 years. It's been six years since the Race to the Top Initiative, and there's still no consensus on whether the key ideas behind those reforms are producing progress. How are we ever going to generate momentum for an education reform agenda without systematically testing the various components in limited ways before rolling them out broadly?
Read the full article here.