School districts around the country have more data available to them than ever before, but figuring out what to do with all that information isn't always easy.
The P.S. 175 Data Wall commands a central place in Principal Cheryl McClendon's office at the Henry Highland Garnet School for Success in Harlem, N.Y. It's color-coded, with green and blue signifying proficiency in state math and English language arts exams. Yellow and red indicate failure to meet benchmarks.
McClendon acknowledges there is far too much yellow on the English scores, but she remains optimistic. The school's schedule now includes back-to-back reading and writing periods, providing more time for literacy training. And according to the sophisticated metric in New York City schools that includes student progress as well as qualitative measures on the school environment, P.S. 175 was rated B — a score better than 51 percent of city schools.
"We progressed from a C to a high B," says McClendon. "It was really, really hard work."
The preeminence of the display on McClendon's wall reflects the burgeoning role that statistical data plays in the U.S. educational system. Over the past 20 years, the accountability movement's reliance on data to quantify student learning has transformed pedagogical practice and opened schools to public scrutiny, showing how well students are achieving — or not — in their public schools.
Statewide standardized tests, mandated by the federal government under No Child Left Behind, have provided mounds of data for educators to analyze. Yet these piles of numbers have left many educators paralyzed, unable to figure out how to use them. Meanwhile, the federal Race to the Top competition has sparked initiatives across the country that link teacher evaluations in part to student achievement, pushing the data-driven system into new frontiers.
Sarah Glover, executive director of the Strategic Data Project, under the Ed School's Center for Education Policy Research, says districts are hungry for strategies to make use of the data stacking up at district headquarters.
"We want to take advantage of the mounds of data that are accumulating and apply analytic methods to be more predictive, and help understand how we can keep things moving forward," she says. "If we want kids to graduate from high school on time, what are the markers they need to hit in the K–12 career to do so, and what are the practices that will get them there?"
The project, which now includes 10 school districts, a charter management group, and two state education departments, is the latest example of data's primacy in 21st-century education and of the growing influence of economists in education policy. It wasn't always that way. Back in the 19th century, clergy held sway in education circles. By the early 20th century, psychologists had burst on the scene as they began to measure learning. Then came the sociologists and lawyers of the 1960s and 1970s, who were concerned with equal opportunity and equity in the schools.
Economists, meanwhile, began to investigate education in the 1950s, with University of Chicago free-market proponent Milton Friedman asserting that schools would run more efficiently if governments separated school finance from school operations. His work led to the development of school voucher programs.
Labor economists used longitudinal models that had been developed to determine the impact of social programs on worker income to analyze student performance on standardized tests. The economists substituted test scores for wages to see progress — or the lack thereof.
Other economists used their analytical tools in lawsuits arguing that states failed to provide adequate education for all students, while still others developed models showing that spending public money on dropout-prevention programs would actually save taxpayer money by reducing the odds that those students would later be incarcerated.
Proponents of public investment in early childhood education have relied on the work of Nobel laureate James Heckman, whose studies have shown the positive results of early childhood investments, based on higher earnings, less crime, and lower unemployment among adults who had been enrolled in high-quality preschool programs as children.
Raj Chetty, a professor of economics at Harvard, in 2010 published a study estimating that having an above-average kindergarten teacher will generate about $320,000 more in total lifetime earnings for a classroom of 20 students, compared with the same class taught by a below-average teacher. Chetty analyzed data from randomized experiments involving 11,500 students conducted in 79 Tennessee elementary schools from 1985 to 1989. Chetty's team tracked down 95 percent of those tested to see if students who scored well on kindergarten tests were earning more than their classmates by the time they reached their mid-20s.
He found that high kindergarten scores predicted a wide variety of outcomes for the students: They were more likely to attend college, have retirement savings, be homeowners, and live in better neighborhoods.
"Economists have a huge tool kit through econometrics," says Chetty. "And we are now applying those tools to education. With today's focus on test scores and achievement, there's a tremendous amount of good data."
The reliance on student data to justify certain public investments and drive instruction in the classroom has attracted scores of economists to the field. Ed School Professor Richard Murnane, a research associate at the National Bureau of Economic Research (NBER) who helped start the school's Data Wise Project (see sidebar, page 32), recalls that in the 1980s, only a handful of economists at the bureau were involved in K–12 educational research. Today, he says, about 120 NBER economists focus on education.
"The data is there, and the economists are finding support to carry out their experiments," says Murnane, coauthor of the 2011 book, Methods Matter: Improving Causal Inference in Education and Social Science Research. "There are lots of policy levers, and we are able to look at the consequences of these policies."
While economists are not the only ones using — or creating — data, critics decry the omnipresence of statistics. They say the overreliance on data has harmed education by narrowing curricula and focusing on test preparation to ensure that students pass mandated tests in math and English language arts. The recent scandals surrounding cheating by test administrators in cities like Atlanta and New York have also called into question the validity of test results.
"This test mania has gotten completely out of control," says Diane Ravitch, research professor of education at New York University and author of the 2010 book, The Death and Life of the Great American School System: How Testing and Choice Are Undermining Education. "Education is so much more than data about reading and math, and some of the data today is utterly untrustworthy."
That data, however, can't be ignored, says J.D. LaRock, Ed.M.'04, a senior analyst at the Organisation for Economic Co-operation and Development in Paris, who served as senior education advisor to Senator Edward Kennedy (D-MA) from 2006 to 2008, and policy director for the Massachusetts Executive Office of Education from 2008 to 2010. LaRock, working toward his doctorate in the Ed School's program in administration, planning, and social policy, says those jobs on Beacon Hill and in Washington, D.C., trained him to use data as the centerpiece for improving schools.
When he ran successfully for school board in the Boston suburb of Melrose in 2009, he made academic achievement, and the use of data to promote it, the top issue in his campaign.
"I think when people understand what the numbers mean, it can light a fire under them," says LaRock. "It wasn't so sophisticated what we did. But the simple act of ensuring that the committee looked closely at performance data on a regular basis gave life to new conversations that weren't taking place before."
The analysis led to a better-defined plan to narrow achievement gaps, with specific goals and a deeper articulation of the strategy to reach them, LaRock says.
"Essentially, you can't argue with the facts," he says. "It put academic achievement front and center in what we do."
Having the facts, though, isn't enough as educators wrestle with reams of data and spreadsheets that extend out far beyond their computer screens. The Strategic Data Project helps districts decipher the data in ways that can improve instruction.
Nathan Kuder, a fellow with the project working in Boston's Office of Accountability, has helped the district make sense of its data concerning the delivery of services to English language learners. His charge: to help the department understand where these students were enrolled, what services they were receiving, and where the district needed to train teachers to serve those students.
Problems arose because teacher data was kept by the district's human resources department while student data was kept in a separate system. Kuder's initial report had 27 different categories of service delivery.
"The first reports were overwhelming," recalls Kuder. "There was too much information. So we simplified it, made a school-by-school report for each of our 125 schools."
That report, now circulated monthly, has guided the district to more effectively assign teachers certified to teach English language learners. By the end of the 2010–11 school year, Boston saw a 35 percent increase in students receiving complete services.
Now Kuder is working with Boston educators to design a metric that draws on additional data to help drive instruction, beyond results from the Massachusetts Comprehensive Assessment System, known as the MCAS, whose tests are taken in March but whose results aren't available until September, six months later.
"We'd like to get beyond MCAS," says Kuder. "We're trying to identify a new metric, so we are looking out the front window, instead of always looking behind us with old information."
While Kuder has developed a system to better allocate teaching resources in Boston, Ed School doctoral student Tom Tomberlin, Ed.M.'06, another Strategic Data Project fellow working with Charlotte-Mecklenburg Schools in North Carolina, is developing a teacher evaluation system. That system, like many under development around the country, will be based in part on how a teacher's students perform on statewide standardized tests.
That project began with an exploration of merit pay, using a value-added model, to measure student progress during their time with individual teachers. Tomberlin says opposition arose because additional measures of teacher effectiveness in the schools — in addition to test scores — had yet to be developed.
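The value-added idea can be sketched concretely. In its simplest form, a value-added model regresses students' end-of-year scores on their prior scores and reads each teacher's average residual — how far the teacher's students land above or below what their prior scores predict — as that teacher's contribution. The following is a minimal illustration in Python, not Charlotte-Mecklenburg's actual model, and the data and names are entirely hypothetical:

```python
import numpy as np

def value_added(pre, post, teacher_ids):
    """Crude value-added score per teacher.

    Fits post = a + b * pre by ordinary least squares, then averages
    each teacher's students' residuals: a positive mean residual means
    the teacher's students outperformed what their prior scores predict.
    """
    X = np.column_stack([np.ones_like(pre), pre])
    coef, *_ = np.linalg.lstsq(X, post, rcond=None)
    residuals = post - X @ coef
    return {t: residuals[teacher_ids == t].mean()
            for t in np.unique(teacher_ids)}

# Hypothetical data: six students split between two teachers.
pre  = np.array([50., 60., 70., 50., 60., 70.])
post = np.array([58., 68., 78., 52., 62., 72.])
tid  = np.array(["A", "A", "A", "B", "B", "B"])

# Teacher A's students beat the prediction; teacher B's fall short.
print(value_added(pre, post, tid))
```

Real systems add many refinements — multiple prior years, student demographics, classroom-level shrinkage — but the residual-averaging logic above is the core, which is also why critics note that noise and non-teacher factors can leak into the estimates.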
Now Tomberlin is working with teachers on several areas that could be included in the evaluation system: content pedagogy, participation in professional learning communities, student surveys, teacher work product, teacher observation, student learning objectives, and value-added measures to determine if students have achieved a year's work in their subject.
First, Tomberlin's group is conducting a literature review to determine whether research supports the importance of these factors in instruction. Then they will work out how each factor can be measured through standardized observation and scoring. That will generate new data — a crucial element of America's data-driven education world.
"We've gone back to the drawing board to develop our multiple measures of effectiveness before we start talking again about compensation," says Tomberlin. "So we're engaging with our faculty to hash it out."
— David McKay Wilson is a New York–based freelance writer who has focused on President Obama's overhaul of the student financial aid system.