May 21, 2019; Wall Street Journal, The Root, and Slate
In what The Root calls a “terrible-ass” idea, The College Board, the nonprofit that oversees the SAT, announced it was adding an “adversity score” to identify deserving students who might otherwise be left behind. Its motivation for taking this step is a recognition that the SAT itself has traditionally exhibited its own bias.
The Environmental Context Dashboard (ECD) is meant to capture the social and economic background of each student who takes the SAT. It comprises a variety of factors:
- Neighborhood environment
  - Crime rate
  - Poverty rate
  - Housing values
  - Vacancy rate
- Family environment
  - Median income
  - Single parent
  - Education level
  - ESL
- High school environment
  - Undermatching
  - Curricular rigor
  - Free lunch rate
  - Advanced Placement opportunity
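To picture how factors like these could collapse into a single number, here is a minimal sketch in Python. It is purely illustrative: the College Board has not disclosed its method, so every factor value, the equal weighting, and the two-stage averaging below are invented assumptions, not the actual ECD algorithm.

```python
# Hypothetical factor values, each normalized to a 0-100 scale where a
# higher value indicates greater adversity relative to national averages.
# All numbers and weights are invented for illustration only.
factors = {
    "neighborhood": {"crime_rate": 62, "poverty_rate": 48,
                     "housing_values": 55, "vacancy_rate": 40},
    "family": {"median_income": 58, "single_parent": 70,
               "education_level": 45, "esl": 30},
    "high_school": {"undermatching": 52, "curricular_rigor": 66,
                    "free_lunch_rate": 74, "ap_opportunity": 61},
}

def adversity_score(factors):
    """Average each environment's factors, then average the environments
    (assumed equal weights -- the real weighting is secret)."""
    env_scores = {env: sum(vals.values()) / len(vals)
                  for env, vals in factors.items()}
    return sum(env_scores.values()) / len(env_scores)

print(round(adversity_score(factors), 1))  # 55.1 for these invented inputs
```

Even this toy version shows the critics' point: the output is one opaque number, and a student looking at "55.1" has no way to tell which factor drove it or how changing any input would move the score.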
But critics of the program worry that the whole approach will only make the system worse. Because the College Board will keep the details of the score and how it is computed secret, students won’t know how they have been rated, and the algorithm that renders the data into a single number won’t be open to public scrutiny. As Jane C. Hu of Future Tense observes in Slate:
Universities that use the ECD for admissions are essentially trusting the researchers at the College Board to be the arbiters of what it means to face adversity, and whether the metrics they’ve chosen are a good representation of that. To the College Board’s credit, it has consulted with education researchers while creating the ECD, but by keeping the algorithm’s parameters a secret, it’s missing out on opportunities for feedback from a broader swath of experts and the students affected by these scores.
Finally, some see the system as suspect because the College Board is suspect. The TopTier Admissions blog asked, “Is it fair that the College Board, the group that has designed a test that has proven to be unfair and biased towards black and [Latinx] students and those from low income backgrounds, is now telling everyone that they have a secret score that somehow mitigates the discrimination?”
Hu writes that there is no reason to trust this latest version of an automated people-sorting device, and that this is no way to approach the problem:
The ECD is just the latest attempt to outsource nuanced decision-making to an algorithm at the risk of exacerbating societal biases. Many facets of our lives are ruled by such algorithms—which people charged with crimes must await trial in jail, whether police keep special watch over our neighborhood, which Facebook ads we see, whether our children are taken from us if someone calls child services—yet we have limited insight into how they work. Even the people who code these algorithms cannot necessarily predict what their creation learns as it’s fed data; often, models pick up the same biases that pervade our society. Amazon’s hiring model was scrapped after engineers discovered it discriminated against women; a model that taught itself English ended up with biases against women and black people.
She quotes a number of other experts about the larger implications of submitting students to such a process.
Given the racist history underlying the development of standardized tests and the gatekeeper role the College Board has played over decades of college admissions, many are skeptical of the nonprofit’s motivation and ability to craft an equitable admissions tool. “Who gets to decide whether these students have faced adversity or not?” asks Blanca Vega, an assistant professor of higher education at Montclair State University. Students won’t know what’s part of the score or how each component is weighted—and “that’s part of the inequity,” says Vega. On Twitter, mathematician Cathy O’Neil pointed out that the opacity of the algorithm has the power to harm students, who will remain in the dark about how this mysterious score affects their shot at admissions. “Categories of students that are not well understood by the scores—and they will exist—will see their entire college application experience get worse,” she wrote.
The adversity score was beta-tested in 50 schools last year. In the next cycle, the pilot will expand to 150 schools and, absent strong opposition, the score will become a standard component of the SAT process.—Martin Levine