A Closer Look At Learning Loss

  • Tuesday, October 6, 2020

Many think tanks and politicians are making dire predictions of student learning loss triggered by COVID-19. Certainly, some students across the nation did experience learning loss. However, some of these projections are worthless speculation, considering the inconsistent back-to-school approaches being used nationwide. Indeed, inequities were likely magnified for low-income students, students with disabilities, and students who lacked distance-learning capabilities, compared to students who were better supported.

We looked into the unsubstantiated numbers here in Tennessee and believe the learning loss statistics Commissioner Penny Schwinn shared are not supported by the data. Schwinn and the Tennessee Department of Education project an estimated 50 percent decrease in proficiency rates in 3rd-grade reading and a 65 percent decrease in proficiency in math.

Those on both sides of the political aisle understand that improper use of projected statistics and predictions about student proficiency could damage the challenging work our hard-working educators are currently engaged in. Determining cause and effect during a pandemic is nearly impossible. Further analysis, backed by actual evidence, is needed before claims of proficiency declines can be supported.

In general, we know that higher-achieving students lose less ground during prolonged school breaks, and in some grades they may actually gain knowledge. In Tennessee, proficiency rates were already low, and the students who tend to gain during school closures are overwhelmingly those who were already proficient.

Ms. Schwinn cited NWEA and CREDO studies as the basis for her claimed student proficiency losses. NWEA's study is based on its own assessment, MAP. CREDO did use historical data, applying a set of assumptions about school closures and learning loss to predict losses in terms of standard deviations; it never translated those findings into proficiency rates. The assumptions themselves also leave plenty of room for debate. For example, did all students lose the same amount due to school closures?
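To see why a loss stated in standard deviations cannot simply be read off as a drop in proficiency rates, consider a minimal sketch in Python. It assumes normally distributed scale scores, and the baseline proficiency rate and loss figure below are illustrative assumptions, not numbers taken from CREDO or the department:

```python
from scipy.stats import norm

# Illustrative only: these numbers are assumptions, not figures from CREDO
# or the Tennessee Department of Education.
baseline_proficiency = 0.35   # share of students at or above the cut score
loss_in_sd = 0.25             # hypothetical average loss, in standard deviations

# If scale scores are roughly normal, the proficiency cut sits at this z-score:
cut_z = norm.ppf(1 - baseline_proficiency)

# Shift the whole distribution down by the assumed loss and re-read the
# share of students still above the cut.
new_proficiency = 1 - norm.cdf(cut_z + loss_in_sd)

drop = (baseline_proficiency - new_proficiency) / baseline_proficiency
print(f"Proficiency falls from {baseline_proficiency:.0%} to "
      f"{new_proficiency:.0%}, a {drop:.0%} relative decrease")
```

Rerunning the sketch with a different baseline rate yields a different relative drop for the same standard-deviation loss, which is one reason translating CREDO's figures into a specific percentage decline requires assumptions the department has not disclosed.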

The Tennessee Department of Education shared no specifics to justify its third-grade predictions. The department claims 40,000 checkpoint tests have now been completed, but it never stated how many of those were 3rd-grade math or 3rd-grade reading. Note also that these counts are tests, NOT students: a student who took both the math and the reading tests counts twice. Likely only about 3,000 3rd-grade checkpoints were administered to arrive at the estimate; if, for example, the 40,000 tests were spread evenly across six tested grades and two subjects, that would work out to roughly 3,300 per grade and subject. Checkpoints are not validated for measuring learning loss. Most important, previous literature distributed by the Tennessee Department of Education itself states: “Checkpoint is not predictive of, or comparable to, summative TCAP results.”

Checkpoints are not validated for the purpose Ms. Schwinn used them for. We lack enough information to know whether they are valid for anything. Were they created using the standard, robust assessment-creation process, or did someone simply pull previous questions and drop them into a test? The checkpoints were also taken by a non-representative sample of students. Three thousand assessments would be a sufficient number to draw conclusions from; the issue is that we know nothing about the demographic and prior-achievement distribution of these students. We know the assessment is optional and that those who take it are non-random, which means their performance is not indicative of the state (even if the assessment were valid for this type of claim, which again it is not). For all we know, these are the lowest-performing students in the state.
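A short simulation makes the point concrete. The population size, true proficiency rate, and opt-in bias below are all illustrative assumptions, not Tennessee figures; the sketch only shows that a self-selected sample of 3,000 can badly miss the true rate even when a random sample of the same size would not:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative simulation only; all numbers are assumptions.
# Population of 70,000 "3rd graders" with a true proficiency rate of 35%.
population = rng.random(70_000) < 0.35

# A truly random sample of 3,000 estimates the rate well...
random_sample = rng.choice(population, size=3_000, replace=False)
print(f"Random sample estimate: {random_sample.mean():.1%}")

# ...but an opt-in sample that over-draws struggling students does not.
# Here non-proficient students are three times as likely to be tested.
weights = np.where(population, 1.0, 3.0)
weights /= weights.sum()
optin_idx = rng.choice(len(population), size=3_000, replace=False, p=weights)
print(f"Opt-in sample estimate:  {population[optin_idx].mean():.1%}")
```

Sample size fixes the noise; it does nothing about the bias.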

If Tennessee is indicative of other states, learning loss is going to be used in future policy battles. Does the public deserve to know where the data used to formulate these projections came from? Overzealous data mining can seriously harm confidence in public education and create privacy concerns if individual data is compromised. What was the sample size? Is the data reliable? Is it valid? Reliability relates to the accuracy of the data; reliability problems in education often arise when researchers overstate the importance of data drawn from too small or too restricted a sample. Validity refers to the essential truthfulness of a piece of data: does it measure or reflect what is claimed? Were the projections based on students from the current academic year? How did they account for the variable of a pandemic, the likes of which has not occurred since 1918?
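For a sense of what sample size alone buys, here is a minimal sketch of the sampling margin of error for an estimated proficiency rate, assuming a simple random sample (which the optional checkpoints were not); the 35 percent rate is an illustrative assumption:

```python
import math

# Standard error of a proportion under simple random sampling.
p = 0.35  # assumed proficiency rate (illustrative)
for n in (100, 500, 3_000):
    se = math.sqrt(p * (1 - p) / n)
    print(f"n={n:>5}: 95% margin of error ~ +/-{1.96 * se:.1%}")
```

The margin shrinks as n grows, which is reliability; it says nothing about whether the instrument measures what is claimed or whether the sample represents the state, which is validity.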

As policy debates and discussions continue into the 2021 academic year, remember that taking advantage of any crisis, whether actual or manufactured, is a familiar tool of politicians. Some students across the nation did experience learning loss. But any analysis done is probably best used for comparisons between schools within a community or between similar school districts within a state, not to create statewide or national policy.

JC Bowman
Executive Director of Professional Educators of Tennessee
