Usable Knowledge

Nation's Report Card Shows Widening Gaps

Results of first digitally based NAEP show slight progress in eighth-grade reading, much variety across states and urban districts

Posted April 10, 2018
By Bari Walsh

The release of the 2017 National Assessment of Educational Progress (NAEP) in reading and mathematics shows widening gaps between the highest- and lowest-achieving students in America, underscoring a continuing need for investment in efforts to make education systems more equitable. Compared to 2015 results, 2017 scores in reading and mathematics were higher for eighth-grade students who performed in the 75th and 90th percentiles of test takers, and lower for fourth-graders in the 10th and 25th percentiles.

Those stubborn and widening gaps are one of the major findings of this year's NAEP results, also referred to as the Nation's Report Card. The gaps persist even as the data also show that overall scores in both reading and math for fourth- and eighth-graders have risen since the assessments were first given in the early 1990s, and that gaps between white and black and between white and Hispanic fourth-graders have narrowed.

Read the full 2017 NAEP findings.
Despite widening gaps between highest- and lowest-scoring students, average scores in reading and mathematics were essentially flat from 2015 to 2017, with the exception of eighth-grade reading, where the percentage of proficient students increased by two percentage points.

In reading, 37 percent of fourth-grade students and 36 percent of eighth-grade students scored at or above proficient. In mathematics, 40 percent of fourth-grade students and 34 percent of eighth-grade students scored at or above proficient.

Results show significant variation in performance between states and between the large urban districts that volunteer to participate in NAEP, showing how, for U.S. students, achievement remains strongly correlated with geography. In Florida, average math scores in fourth and eighth grade rose from 2015; in 10 other states, they declined. In Charlotte, North Carolina, 41 percent of eighth-graders were at or above proficient in mathematics; in Detroit, only 5 percent were.

The Shift to Digitally Based Assessments

One important change in the 2017 testing is that students were primarily assessed digitally, using tablet computers. Scores from these digitally based assessments were then calibrated, through careful research, to ensure a fair and consistent measure of educational progress, according to Andrew Ho, a psychometrician and member of the National Assessment Governing Board.

“That 'P' in NAEP stands for 'progress,' and that is important,” says Ho, who teaches at the Harvard Graduate School of Education and uses NAEP in his research. “What a lot of people pay attention to in any given release year are state rankings — who’s got the greatest percent of proficient students, which states are doing the best. But the central purpose of NAEP is measuring progress.
That requires a consistent measure of essential knowledge and skills, across the country, across states, and over time.”

That consistency is particularly important now, he says, since “over the past eight years — the so-called Common Core era — there has been dramatic and unprecedented volatility in state testing programs. Most states today cannot answer the question of whether or not their students are doing better now than they did in 2009. Only NAEP can.”

Without stable, comparable state testing programs, NAEP needed to ensure that it could continue to measure progress consistently, even amid the transition from paper-based to digitally based assessments, says Ho. NAEP had to “thread the needle between measuring what’s relevant and measuring consistent progress. And that’s an interesting tension," he says. "There’s an old mantra that says, ‘If you want to measure change, don’t change the measure.' But, of course, we always change the measure in practice. We have to, because what’s important in schools changes, and if you want to measure what kids are learning, you have to adapt to the times.

“Digitally based assessment is a big step toward flexibility," he continues. "We can connect to the past trends — we’ve got 25 years of data about educational progress that tells a powerful story — and we can be flexible and forward thinking about the kinds of skills that will be important to measure in the future. And these skills are increasingly being taught and learned using digital tools, and in digital environments.”