A Closer Look At Learning Loss

  • Tuesday, October 6, 2020

Many think tanks and politicians are making dire predictions of student learning loss triggered by COVID-19. Certainly, some students across the nation did experience learning loss. However, some of these projections are worthless speculation, given the inconsistent back-to-school approaches being used nationwide. Indeed, inequities were likely magnified for low-income students, students with disabilities, and students who lacked distance learning capabilities, compared to students who were better supported.

We looked into the unsubstantiated numbers here in Tennessee and believe the learning loss statistics Commissioner Penny Schwinn shared are not supported by the data. Schwinn and the Tennessee Department of Education project an estimated 50 percent decrease in proficiency rates in third-grade reading and a 65 percent decrease in third-grade math.

Those on both sides of the political aisle understand that improper use of projected statistics and predictions about student proficiency could damage the challenging work our hard-working educators are currently engaged in. Determining cause and effect during a pandemic would be nearly impossible. Further analysis, grounded in actual evidence, is needed to support any claims of proficiency declines.

In general, we know that higher-achieving students lose less during prolonged school breaks; in some grades, they may actually gain knowledge. In Tennessee, proficiency rates were already low, and the students who tend to gain during school closures are overwhelmingly the already-proficient ones, which means relatively few Tennessee students fall into that group.

Ms. Schwinn cited NWEA and CREDO studies as the basis for her claimed student proficiency losses. NWEA's study is based on its own assessment, MAP. CREDO applied a set of assumptions about school closures and learning loss to historical data to predict losses in terms of standard deviations, but it never translated those findings into proficiency rates. The assumptions themselves also leave plenty of room for debate. For example, did all students lose the same amount due to school closures?
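
To see why that missing translation matters, consider a minimal sketch. This is our own illustration, not CREDO's model or Tennessee data: the 0.25 standard-deviation loss and the baseline proficiency rates below are invented numbers, and the calculation assumes test scores are roughly normally distributed. Even under those generous assumptions, the same standard-deviation loss implies very different proficiency declines depending on where the proficiency cut score falls.

```python
# Illustration only: invented numbers, not CREDO's model or Tennessee data.
# Shows that a fixed standard-deviation loss maps to different proficiency
# declines depending on the baseline proficiency rate, assuming normal scores.
from scipy.stats import norm

sd_loss = 0.25  # hypothetical average learning loss, in standard deviations

for baseline in (0.20, 0.35, 0.50):  # hypothetical baseline proficiency rates
    cut = norm.ppf(1 - baseline)           # cut score implied by the baseline
    new_rate = 1 - norm.cdf(cut + sd_loss)  # rate after scores shift down
    pct_decline = (baseline - new_rate) / baseline * 100
    print(f"baseline {baseline:.0%} -> {new_rate:.1%} proficient "
          f"({pct_decline:.0f}% relative decline)")
```

In this toy setup, even a sizable quarter-standard-deviation loss yields relative declines of roughly 20 to 31 percent, not 50 or 65. The point is not that these invented numbers are right; it is that headline proficiency figures depend entirely on unstated assumptions about the score distribution and the cut score.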

The Tennessee Department of Education shared no specifics to justify its third-grade predictions. The department claims 40,000 checkpoints have now been completed, but it never stated how many of those were third-grade math and third-grade reading. Also, note these counts are tests, NOT students: a student who took both the math and the reading checkpoints counts twice. Likely about 3,000 third-grade checkpoints were administered to arrive at the estimate. Checkpoints are not validated for measuring learning loss. Most important, previous literature distributed by the Tennessee Department of Education itself states: “Checkpoint is not predictive of, or comparable to, summative TCAP results.”

Checkpoints are not validated for the purpose Ms. Schwinn used them for. We lack enough information to know whether they are valid for anything. Were they created using a standard, robust assessment-development process, or did someone simply pull previous questions and drop them into a test? The checkpoints were also taken by a non-representative sample of students. Three thousand assessments can be a sufficient number to draw conclusions from; the issue is that we know nothing about the demographic and prior-achievement distribution of these students. The assessment is optional, and those who take it are non-random, so their performance is not indicative of the state as a whole (even if the assessment were valid for this type of claim, which, again, it is not). For all we know, these are the lowest-performing students in the state.

If Tennessee is indicative of other states, learning loss is going to be used in future policy battles. The public deserves to know where the data used to formulate these projections came from. Overzealous data mining can seriously harm confidence in public education and create privacy concerns if individual data is compromised. What was the sample size? Are the data reliable? Are they valid? Reliability refers to the consistency and accuracy of data; reliability problems in education often arise when researchers overstate the importance of data drawn from too small or too restricted a sample. Validity refers to the essential truthfulness of a piece of data: does it measure or reflect what is claimed? Were the projections based on students from the current academic year? How did they account for the variable of a pandemic, something not seen on this scale since 1918?

As policy debates and discussions continue into the 2021 academic year, remember that taking advantage of any crisis, whether actual or manufactured, is a familiar tool of politicians. Some students across the nation did experience learning loss. But any analysis is probably best used to compare schools within a community and similar school districts within a state, not to set statewide or national policy.

JC Bowman
Executive Director of Professional Educators of Tennessee
