Despite widening gaps between the highest- and lowest-scoring students, average scores in reading and mathematics were essentially flat from 2015 to 2017. The one exception was eighth-grade reading, where the percentage of students at or above proficient increased by two percentage points.
- In reading, 37 percent of fourth-grade students and 36 percent of eighth-grade students scored at or above proficient.
- In mathematics, 40 percent of fourth-grade students and 34 percent of eighth-grade students scored at or above proficient.
Results show significant variation in performance across states and across the large urban districts that volunteer to participate in NAEP, a sign that achievement among U.S. students remains strongly correlated with geography. In Florida, average math scores in fourth and eighth grade rose from 2015; in 10 other states, they declined. In Charlotte, North Carolina, 41 percent of eighth-graders scored at or above proficient in mathematics; in Detroit, only 5 percent did.
The Shift to Digitally Based Assessments
One important change in the 2017 testing was that students were assessed primarily in digital form, on tablet computers. Scores from these digitally based assessments were then calibrated, through careful research, to ensure a fair and consistent measure of educational progress, according to Andrew Ho, a psychometrician and member of the National Assessment Governing Board.
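Ho’s point about calibration refers to score linking, a standard psychometric practice for keeping a scale comparable when the test changes. As a rough illustration only (not NAEP’s actual procedure, and with invented scores), the sketch below shows mean-sigma linear linking, one of the simplest variants: scores from the new digital mode are rescaled so that their mean and standard deviation match those observed on the paper-based scale in a comparable group of students.

```python
import statistics

def linear_link(new_mode_scores, old_mode_scores):
    """Mean-sigma linear linking (illustrative only).

    Maps scores from a new test mode onto the scale of an old mode,
    assuming the two score lists come from randomly equivalent groups
    drawn from the same population.
    """
    mu_new, sd_new = statistics.mean(new_mode_scores), statistics.stdev(new_mode_scores)
    mu_old, sd_old = statistics.mean(old_mode_scores), statistics.stdev(old_mode_scores)
    # Linking function y = a*x + b that matches the new mode's mean
    # and standard deviation to the old mode's.
    a = sd_old / sd_new
    b = mu_old - a * mu_new
    return lambda x: a * x + b

# Hypothetical data: the same skill measured on tablets vs. on paper.
digital_scores = [251, 263, 240, 270, 255, 248, 266, 259]
paper_scores = [254, 266, 245, 272, 258, 250, 268, 261]

to_paper_scale = linear_link(digital_scores, paper_scores)
print(round(to_paper_scale(260), 1))  # a digital score expressed on the paper scale
```

In practice, large assessments rely on much richer designs, such as administering both modes to randomly assigned groups and linking through item response theory models, but the goal is the same: score differences should reflect what students know, not which device they used.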
“That ‘P’ in NAEP stands for ‘progress,’ and that is important,” says Ho, who teaches at the Harvard Graduate School of Education and uses NAEP in his research. “What a lot of people pay attention to in any given release year are state rankings — who’s got the greatest percent of proficient students, which states are doing the best. But the central purpose of NAEP is measuring progress. That requires a consistent measure of essential knowledge and skills, across the country, across states, and over time.”
That consistency is particularly important now, he says, since “over the past eight years — the so-called Common Core era — there has been dramatic and unprecedented volatility in state testing programs. Most states today cannot answer the question of whether or not their students are doing better now than they did in 2009. Only NAEP can.”
Without stable, comparable state testing programs, NAEP needed to ensure that it could keep measuring progress consistently amid the transition from paper-based to digitally based assessments, says Ho. NAEP had to “thread the needle between measuring what’s relevant and measuring consistent progress. And that’s an interesting tension,” he says. “There’s an old mantra that says, ‘If you want to measure change, don’t change the measure.’ But, of course, we always change the measure in practice. We have to, because what’s important in schools changes, and if you want to measure what kids are learning, you have to adapt to the times.
“Digitally based assessment is a big step toward flexibility,” he continues. “We can connect to the past trends — we’ve got 25 years of data about educational progress that tells a powerful story — and we can be flexible and forward thinking about the kinds of skills that will be important to measure in the future. And these skills are increasingly being taught and learned using digital tools, and in digital environments.”