March 29, 2018; Slate

Behold the mighty algorithm. It helps decide who receives welfare benefits, who gets access to housing, and when child protective services intervene. These calculations ostensibly make government and social services more efficient. But what if, from your perspective, algorithms are making an already flawed system worse?

A recent article in Slate examined that question. Virginia Eubanks, author of Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor, explores the impact of algorithms on decision-making in social services. Her interview offers a number of lessons for nonprofits.

Eubanks’s framework examines, concretely, how these decisions affect people’s lives, rather than looking only from a larger, abstract perspective.

Lessons learned:

Lesson #1: If you automate inefficiencies and inequalities, you will have an automated system that continues those inefficiencies and inequalities. “We have a tendency to talk about these tools as if they’re simple administrative upgrades,” says Eubanks. “We often believe that these tools are neutral and unbiased. But, like people, technologies have bias built right into them.”

To see this in practice, watch the TED talk by MIT researcher Joy Buolamwini, who discusses her work on racial bias in algorithms and machine learning and demonstrates how facial recognition software can fail to detect darker-skinned faces.

How does this affect nonprofits? Embedded bias can shape who gets hired or granted a loan, and whether people are recognized or ignored by Facebook bots.
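To make that concrete, here is a toy sketch in Python of how a rule “learned” from biased historical decisions simply reproduces those decisions. Every name and number below is fabricated for illustration; this is not drawn from Eubanks’s book or any real lending system.

```python
# Toy sketch: a rule learned from biased historical decisions
# reproduces that bias. All data below is fabricated for illustration.

from collections import Counter

# Fabricated historical loan decisions, keyed by neighborhood.
history = (
    [("north", True)] * 80 + [("north", False)] * 20 +
    [("south", True)] * 30 + [("south", False)] * 70
)

def train(records):
    """Learn a per-group rule: approve a group if it was approved
    more often than not in the historical data."""
    approved, total = Counter(), Counter()
    for group, outcome in records:
        total[group] += 1
        approved[group] += outcome
    return {g: approved[g] / total[g] >= 0.5 for g in total}

model = train(history)
print(model)  # {'north': True, 'south': False} -- the old disparity, automated
```

The point of the sketch is not the arithmetic but the pattern: nothing in the code is overtly discriminatory, yet the output simply automates the inequality already present in the records it was trained on.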

Lesson #2: Attempts to automate work processes are likely to have unintended consequences. No matter how hard an organization like a nonprofit tries to be effective, efficient, or equitable, every decision rests on value trade-offs: time versus money (the standard choice), convenience versus service (think cellphones), or broad volunteer and community input versus the need to take definitive action now (the means versus the ends).

An example comes from Indiana, where the state partnered with IBM and ACS to automate and privatize eligibility determination for its welfare program. State administrators identified the relationship between caseworkers and the families applying for services as a potential opening for fraud. As a result, the technology solution was built to minimize the caseworker-based system and maximize a task-based one.

The new system was call-center-based, with caseworkers responding to a list of computer-prioritized tasks. Eligibility was determined by long, detailed, complicated forms, and every time a client called in, they reached a different caseworker. Denials rose dramatically in the program’s first three years. Complaints from clients, caseworkers, and lawmakers mounted, and after three years the state of Indiana and IBM ended up suing each other. Substituting computers and phone calls for personal caseworkers and their relationships with clients may have been more efficient, but it proved ineffective.

Nonprofits can alter or prevent such outcomes by asking, in advance, what unintended consequences might follow from automating a process. Understanding the organization’s values, such as prioritizing relationships over efficiency, can lead to better technology decisions.

Lesson #3: Tools are not disrupters as much as amplifiers. Tools developed for use in a public assistance program can have a punitive perspective built in, as in the Indiana example above, where the value of preventing fraud was placed above the value of relationships with clients.

One successful example is mRelief, which is both a tool and a nonprofit. Applicants who want to know whether they qualify for food stamps answer a few questions online. mRelief, as in “mobile relief,” submits the data to the relevant state system and then tells the potential recipient whether they are likely eligible. The algorithm is built on a value orientation that says people should receive all the resources they are eligible for, and with dignity. mRelief’s values are empathy, privacy, accessibility, and delivery. The applicant experiences a minimum of disruption to their life while determining eligibility.
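To illustrate the shape of such a values-first screener, here is a minimal sketch in Python. The questions, income limits, and messaging are placeholders invented for this example; they are not mRelief’s actual rules or code, nor any state’s SNAP criteria.

```python
# Hypothetical sketch of a dignity-first eligibility pre-screen.
# All thresholds and wording are illustrative placeholders, not real rules.

from dataclasses import dataclass

@dataclass
class Applicant:
    household_size: int
    monthly_income: float  # gross monthly income in dollars

# Placeholder gross-income limits by household size (invented numbers).
INCOME_LIMITS = {1: 1580, 2: 2137, 3: 2694, 4: 3250}
PER_EXTRA_PERSON = 557  # invented increment for larger households

def likely_eligible(a: Applicant) -> bool:
    """Decide only whether to encourage a full application;
    the state system of record makes the real determination."""
    limit = INCOME_LIMITS.get(
        a.household_size,
        INCOME_LIMITS[4] + PER_EXTRA_PERSON * (a.household_size - 4),
    )
    return a.monthly_income <= limit

def respond(a: Applicant) -> str:
    # Values in the messaging: never accuse, always offer a next step.
    if likely_eligible(a):
        return "You likely qualify. We can help you submit a full application."
    return ("Based on these answers you may not qualify, but rules change; "
            "here are other local food resources available today.")

if __name__ == "__main__":
    print(respond(Applicant(household_size=3, monthly_income=2100.0)))
```

The design choice worth noticing is that the same screening logic could be wrapped in punitive messaging or dignity-first messaging; the code encodes the organization’s values as much as its eligibility math.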

The takeaway from these three lessons is that algorithms, and the accompanying machine learning, can be used to minimize bias or to continue and amplify existing systemic biases. Nonprofits can and do design data systems that reflect their values, but doing so requires an intentional focus when developing, buying, or adapting data systems, and a careful examination of the consequences.—Jeanne Allen