When results of big international education assessments are released, the rankings are greeted with fanfare, generating headlines about how one country or region is dominating and why another is falling behind. But like the “best colleges” lists that gain similar attention, these global rankings are more noise than signal — creating misperceptions, risking ill-conceived policy decisions, and diverting attention from more nuanced (and effective) uses of the data.
That’s according to an article published today in Science by Judith Singer of Harvard University and Henry Braun of Boston College. The paper grew out of their work on a National Academy of Education steering committee, chaired by Singer, that studied the purposes, methods, and policy uses of so-called international large-scale assessments, or ILSAs — tests like the Programme for International Student Assessment (PISA) or the Progress in International Reading Literacy Study (PIRLS).
View the steering committee’s report, issued today, here. And access summaries, related papers, and videos here.
The Problem with Rankings
The report’s most significant takeaway, Singer and Braun argue, is the need to de-emphasize rankings when assessments are released. Although ILSAs provide a valuable framework for understanding how a jurisdiction’s education system is performing, and for motivating further investment in education, the rankings so dominate the releases that “they become the statement of truth, and the data that are underlying the rankings get lost,” says Singer.