How do we, as social change agents, engage the growth of algorithms and artificial intelligence? At an Edge Leadership event on creating infrastructure to support leaders of color, Kristen Caballero Saucedo raised this question. It was not taken up at the time, seemingly outside the scope of our considerations, but it has stayed with me.
One of the reasons is that in Assembly, Michael Hardt and Antonio Negri identify algorithms as the source of wealth in the current market. Prioritizing engagement with them is core to their argument about what is needed for transformative social change now. Their proposal goes like this:
- Wealth now is mainly created through our social interactions—we produce value through cooperation, social knowledge, and care.
- Because of this, labor is being transformed to focus on the creation of social relationships.
- This focus on the creation of social relationships interacts with growing machine knowledge and takes the form of algorithms.
- Capitalists’ production has shifted to focus on the extraction of this value.
- A key task for the multitude—or civil society—is to reappropriate this value to serve the common good.
The best known and most profitable algorithm is Google’s PageRank, which determines the rank of a page on the internet by the number and quality of links to it. “PageRank is thus a mechanism for gathering and incorporating the judgment and attention value given by users to Internet objects.” (118) Social media has also learned how to use these mechanisms.
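As a rough illustration of how such an algorithm aggregates human judgment, here is a minimal, textbook-style sketch of the PageRank idea in Python. It is not Google's implementation; the pagerank function, the damping value, and the toy_web example are all invented for illustration.

```python
# Minimal, textbook-style sketch of the PageRank idea (not Google's actual
# implementation): a page's rank is built from the ranks of the pages linking to it.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    rank = {p: 1 / len(pages) for p in pages}          # start with equal rank
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / len(pages) for p in pages}
        for page, outgoing in links.items():
            if not outgoing:                           # dangling page: spread rank evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / len(pages)
            else:
                for target in outgoing:                # pass rank along each outgoing link
                    new_rank[target] += damping * rank[page] / len(outgoing)
        rank = new_rank
    return rank

# Toy web: every link is a small act of human judgment the algorithm aggregates.
toy_web = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
print(pagerank(toy_web))
```

Even in this toy, Hardt and Negri's point is visible: the code itself is short and simple; the value it produces comes entirely from the accumulated linking decisions of users.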
What private companies like Google do is expropriate this social production from the commons. Hardt and Negri write, “Whereas industrial capitalists discipline and exploit labor for profit, the rentier extracts the common and accumulates wealth with little involvement in its production.” (169–170)
The authors explain why understanding the role and power of algorithms is a key to social change right now.
Computer algorithms employed by giant corporations like Google and Facebook exact a kind of violence on all users through the expropriation of intelligence and social connection. The Google PageRank algorithm…tracks the links that users construct and on that basis creates hierarchies for web searches. Each link is a small expression of intelligence, and the algorithm, even without users being aware, extracts and accumulates that intelligence in the form of fixed capital. Machinic fixed capital, however, is not just a neutral force: it is wielded by the owners of property as a means to control and command living labor. If we were to reappropriate fixed capital, to take back what was taken from us, we could put the machines that have accumulated knowledge and intelligence in the hands of living labor and free them from the commands of dead capital…. Biopolitical weapons, such as digital algorithms, might in fact be the most important focus of contemporary struggle. (273)
It is with this in mind that I read the recent Wired article by Tom Simonite, “What Really Happened When Google Ousted Timnit Gebru.” Gebru was the co-leader of a group at Google studying “the social and ethical ramifications of artificial intelligence,” or algorithmic bias and fairness in machine learning. She was fired last year for a paper she co-authored on “the known pitfalls of so-called large language models.” Simonite points out that “Google’s own version of the technology was now helping to power the company’s search engine.”
The context for this is an AI world that is overwhelmingly white and male.
Simonite writes,
The company has been dogged in recent years by accusations from employees that it mistreats women and people of color, and from lawmakers that it wields unhealthy technological and economic power. Now Google had expelled a Black woman who was a prominent advocate for more diversity in tech, and who was seen as an important internal voice for greater restraint in the helter-skelter race to develop and deploy AI.
…
Google had now fully decapitated its own Ethical AI research group.
Before she left, Gebru sent an email to a company listserv for women who worked in Google Brain, “the company’s most prominent AI lab and home to Gebru’s Ethical AI team,” accusing Google of “silencing marginalized voices” and dismissing “Google’s internal diversity programs as a waste of time.”
As the disputed paper circulated and was largely seen as uncontroversial, sympathy for Gebru grew among AI researchers, many of whom signed a public letter castigating Google. The message Gebru’s ouster sent the AI research field was “AI is largely unregulated and only getting more powerful and ubiquitous, and insiders who are forthright in studying its social harms do so at the risk of exile.”
Simonite recounts how Gebru and Margaret Mitchell, the other co-leader of the Ethical AI team, built up their group “while parrying the sexist and racist tendencies they saw at large in the company’s culture.”
He widens the scope of the story by recounting Gebru’s path from refugee of the war between Ethiopia and Eritrea to center of the tech industry and rising star of the incipient AI movement, “before she became one of its biggest critics.” He notes, “Gebru’s career mirrored the rapid rise of AI fairness research, and also some of its paradoxes.”
Before her stint at Google, Gebru was at Apple. By the time she joined the lab of Fei-Fei Li, “a computer vision specialist who had helped spur the tech industry’s obsession with AI,” in 2013, she was beginning to see the technology’s harms. One of Gebru’s first projects in deep learning (as it was called) correlated a database of 70,000 cars in a sample of Google Street View images with census and crime data. “Her results showed that more pickup trucks and VWs indicated more white residents, more Buicks and Oldsmobiles indicated more Black ones, and more vans corresponded to higher crime.”
Simonite notes,
This demonstration of AI’s power positioned Gebru for a lucrative career in Silicon Valley. Deep learning was all the rage, powering the industry’s latest products (smart speakers) and its future aspirations (self-driving cars). Companies were spending millions to acquire deep-learning technology and talent, and Google was placing some of the biggest bets of all. Its subsidiary DeepMind had recently celebrated the victory of its machine-learning bot over a human world champion at Go, a moment that many took to symbolize the future relationship between humans and technology.
A refugee and a Black woman, Gebru has another passion: social justice. As she moved to the center of the AI field, she was seeing its role in the growth of injustice. Her response to her growing awareness of the harms was, “I’m not worried about machines taking over the world. I’m worried about groupthink, insularity, and arrogance in the AI community. If many are actively excluded from its creation, this technology will benefit a few while harming the great many.”
In 2014, the first annual Fairness, Accountability, and Transparency in Machine Learning (FATML) event was held, “motivated by concerns over institutional decision-making.” As Simonite warns, “If algorithms decided who received a loan or awaited trial in jail rather than at home, any errors they made could be life-changing.”
A significant case is COMPAS, a recidivism-risk algorithm used in courtrooms across the US. A 2016 ProPublica story revealed that it produced false positives, wrongly flagging people as likely to reoffend, at a far higher rate for Black people than for white people. (I wrote about this in “The Color(ing) of Risk.”)
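ProPublica's finding is, at bottom, a disparity in error rates, which can be made concrete in a few lines. The sketch below uses invented records, not ProPublica's data; it simply computes, for each group, how often people who did not reoffend were nonetheless flagged as high risk (the false positive rate).

```python
# Illustrative sketch (made-up records, not ProPublica's data): measuring how often
# a risk score wrongly flags people who did not reoffend, broken out by group.
from collections import defaultdict

# Each record: (group, predicted_high_risk, actually_reoffended)
records = [
    ("Black", True, False), ("Black", True, True), ("Black", False, False),
    ("Black", True, False), ("white", False, False), ("white", True, True),
    ("white", False, False), ("white", False, True),
]

flagged_no_reoffense = defaultdict(int)   # false positives per group
no_reoffense = defaultdict(int)           # everyone in the group who did not reoffend

for group, predicted_high_risk, reoffended in records:
    if not reoffended:
        no_reoffense[group] += 1
        if predicted_high_risk:
            flagged_no_reoffense[group] += 1

for group in no_reoffense:
    fpr = flagged_no_reoffense[group] / no_reoffense[group]
    print(f"{group}: false positive rate = {fpr:.0%}")
```

ProPublica reported exactly this pattern: Black defendants who did not reoffend were roughly twice as likely as white defendants to have been labeled high risk.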
While the FATML event grew, researchers generally avoided talking “about how economic pressures or structural racism might shape AI systems, who they work best for, and whom they harm.”
In 2017, Gebru’s own PhD thesis, presented “to members of Silicon Valley’s elite,” showed “how algorithms could predict factors like household income and voting patterns just by identifying cars on the street.” This excited them: “The way Gebru had extracted signals about society from photos illustrated how the technology could spin gold from unexpected sources—at least for those with plenty of data to mine.” But she also concluded: “One of the most important emergent issues plaguing our society today is that of algorithmic bias.”
Simonite highlights that while Gebru could have easily gone into “building moneymaking algorithms for tech giants,” she instead sought to contain the power of the technology.
Mitchell was also seeing this bias while in her role at Microsoft, before she went to Google. In 2015, she was working on Seeing AI, a program for blind people that “spoke visual descriptions of the world.” While this would have helped many, Microsoft didn’t want to invest in it. She also noticed that the machine-learning systems were describing people with pale skin as a “person” and those with dark skin as “Black person.”
These were the issues that Gebru and Mitchell were working on at Google. But Google was hesitant to “publicly venture into a discourse on the discriminatory potential of computer code.” Yet even seemingly benign projects quickly ran into performance differences for people of different races and genders.
In fact, even programs developed by people of color were discriminatory. Inioluwa Deborah Raji, a young Nigerian-Canadian coder, helped create a machine-learning system that detected photos containing nudity or violence. “But her team discovered it was more likely to flag images of people of color, because they appeared more often in the pornography and other materials they’d used as training data.” When she realized this, she said, “I built this thing, and it was actively discriminatory in a way that hurt people of color.”
Gebru was meeting these lone people of color in tech work and adding them to her slowly growing “Black in AI” list. And research showing bias kept coming in.
In 2018, Gebru and Joy Buolamwini—“a Ghanaian American MIT master’s student who had noticed that the algorithms designed to detect faces worked less well on Black people than they did on white people”—were part of a project that showed
Services offered by companies including IBM and Microsoft that attempted to detect the gender of faces in photos were nearly perfect at recognizing white men, but highly inaccurate for Black women. The problem appeared to be rooted in the fact that photos scraped from the web to train facial-recognition systems overrepresented men as well as white and Western people, who had more access to the Internet.
The project was a visceral demonstration of how AI could perpetuate social injustices—and of how research like Gebru’s could hold companies like her own employer to account.
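The method behind that finding is straightforward to describe: instead of reporting one overall accuracy, score the classifier separately for each intersection of gender and skin type. A minimal sketch with invented predictions (not the study's actual benchmark) shows how a respectable overall number can hide a group that is failed almost entirely.

```python
# Minimal sketch of an intersectional accuracy audit (invented labels, not the
# study's actual benchmark): a single overall accuracy can look respectable while
# one subgroup is failed almost entirely.
from collections import defaultdict

# Each record: (skin_type, true_gender, predicted_gender)
predictions = [
    ("lighter", "male", "male"), ("lighter", "male", "male"), ("lighter", "male", "male"),
    ("lighter", "female", "female"), ("lighter", "female", "female"), ("lighter", "female", "female"),
    ("darker", "male", "male"),
    ("darker", "female", "male"), ("darker", "female", "male"),
]

correct, total = defaultdict(int), defaultdict(int)
for skin, truth, guess in predictions:
    total[(skin, truth)] += 1
    correct[(skin, truth)] += int(truth == guess)

print(f"overall accuracy: {sum(correct.values()) / len(predictions):.0%}")
for key in sorted(total):
    print(f"  {key[0]}-skinned {key[1]} faces: {correct[key] / total[key]:.0%}")
```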
As someone who moved between the tech giants, Gebru was starting to see how AI systems created and perpetuated discrimination. For example, while at Apple, she worked on rigorous data projects in the development of the iPhone and learned that the AI field “had no equivalent culture of rigor around the data used to prime machine-learning algorithms.” In fact, it had a lax attitude about data, generally grabbing the most easily available and largest datasets.
Gebru pointed to this lax data culture as a root cause of the bias infesting machine learning. She then created the first transparency tools for implementing fairness, starting with Datasheets for Datasets, a framework for AI engineers to document the patterns and contents of their data.
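Datasheets for Datasets is, in essence, a structured questionnaire that travels with a dataset. As a rough sketch of what a machine-readable version might look like, the field names below follow the framework's broad categories (motivation, composition, collection process, uses, and so on), while the Datasheet class and all example entries are invented for illustration.

```python
# Sketch of a machine-readable datasheet (invented example entries; the section
# names follow the broad categories of the Datasheets for Datasets framework).
from dataclasses import dataclass, field, asdict
import json

@dataclass
class Datasheet:
    motivation: str                 # why the dataset was created, and by whom
    composition: str                # what the instances are, and what they represent
    collection_process: str         # how the data was gathered, and with whose consent
    preprocessing: str              # cleaning, labeling, and filtering applied
    recommended_uses: list = field(default_factory=list)
    uses_to_avoid: list = field(default_factory=list)
    known_gaps: list = field(default_factory=list)   # populations under- or over-represented

# Hypothetical example for a web-scraped face dataset:
sheet = Datasheet(
    motivation="Benchmark face detection for a research prototype.",
    composition="120,000 face crops scraped from public web pages, 2016-2018.",
    collection_process="Automated crawl; no consent obtained from pictured individuals.",
    preprocessing="Near-duplicate removal; gender labels assigned by crowdworkers.",
    recommended_uses=["research benchmarking"],
    uses_to_avoid=["surveillance", "identity verification in high-stakes settings"],
    known_gaps=["darker-skinned faces under-represented", "skews toward North America"],
)

print(json.dumps(asdict(sheet), indent=2))
```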
It was these data sheets that got Gebru noticed at Google and invited by Mitchell to join the Ethical AI team there. Gebru was hesitant to join Google, having been warned about the company’s hostile environment for people of color and women and having experienced it firsthand at tech events. But she was offered the co-lead role, and she took the opportunity to see what she could do there.
She soon found herself in the middle of high-level conversations about “the situation of women at Google.” But, Simonite writes, “Gebru and Mitchell’s work didn’t fit easily into Google’s culture, either. The women and their team were a relatively new breed of tech worker: the in-house ethical quibbler.”
By this point, tech giants were beginning to make “commitments to practice restraint in their AI projects.” Google announced seven principles that would guide its AI work. Microsoft had its six principles.
Despite this, in-house quibblers were excluded and punished. In the case of Gebru and Mitchell, resources were not made available for fairness work. They were excluded from key meetings. They were denied recognition for their work when it had impact. And they were framed as confrontational and inappropriate for challenging power, even though their role was precisely to make power accountable.
They had to find creative ways to advance their ideas and make interventions. For example, they created “a system for cataloguing the performance limits of different algorithms” and shopped it outside the company, positioning it within a wider ecosystem than Google so that it could function as a form of disclosure, “like a nutrition label.”
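The “nutrition label” comparison suggests a standardized report attached to each model: what it is for, where it breaks down, and how its performance differs across groups. The sketch below is an invented illustration of that idea, not the system the team actually built; every name and number in it is hypothetical.

```python
# Sketch of a "nutrition label" for a model (invented names and numbers, not the
# team's actual system): intended use, known limits, and performance broken out
# by the groups the model is known to treat differently.

model_card = {
    "model": "toy-face-attribute-classifier-v1",      # hypothetical model name
    "intended_use": "Research demos; not for identification or surveillance.",
    "out_of_scope_uses": ["law enforcement", "employment screening"],
    "evaluation_data": "Internal test set, 2019 snapshot (hypothetical).",
    "performance_by_group": {                          # the disclosure that matters
        "lighter-skinned men": 0.99,
        "lighter-skinned women": 0.93,
        "darker-skinned men": 0.88,
        "darker-skinned women": 0.65,
    },
    "known_limitations": [
        "Accuracy drops sharply for darker-skinned women.",
        "Training data skews North American and web-scraped.",
    ],
}

# Print it like a label a downstream user could read before deploying the model.
for section, value in model_card.items():
    print(f"{section}: {value}")
```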
Simonite’s detailed account concludes that “Over time, the team seemed to show how corporate quibblers could succeed. Google’s Ethical AI group won respect from academics and helped persuade the company to limit its AI technology.” For example, it led Google to limit its face-recognition services to well-vetted customers, preventing their use by law enforcement, in contrast to Microsoft and Amazon.
Ethical AI created a refuge culture at Google that valued diversity as key to spotting “problems or opportunities that Google’s largely white male workers might overlook.”
Its impact was to widen the “technical approach to fairness” to include “questions about how AI replicated or worsened social inequalities” and whether some of it should be off-limits. It did this by creating frameworks and tools to advance fairness work, drawing on critical race theory to propose to the tech industry that it reconsider its “obsession with building systems to achieve mass scale.”
But the field continues to grow in this direction. It has moved from its initial focus on image recognition to large language models, which are trained on ever larger volumes of text and are making automated writing possible. Simonite notes, “Some investors and entrepreneurs predicted that automated writing would reinvent marketing, journalism, and art.”
These new language systems are too large to “sanitize,” so they “could also become fluent in unsavory language patterns, coursing with sexism, racism….” Simonite points out that it is “an extreme example of the problem Gebru had warned against with her Datasheets for Datasets project.”
Big tech companies compete to build the largest language models, which adds enormous energy consumption to the technology’s propensity for bias and compounds its social impact.
With Gebru ousted and Mitchell fired soon after, Google has “built up a handful of other teams working on AI guardrails tied to the company’s priorities.” It has effectively “sketched out a more locked-down future for in-house research probing.” It calls this new approach “Responsible AI.”
This recent history has defined a split between the AI work done inside tech companies and that done by nonprofits like New York University’s AI Now Institute, which partners with other nonprofits, including the ACLU.
But, Simonite cautions, “any such divide is unlikely to be neat, given how the field of AI ethics sprouted in a tech industry hothouse. The community is still small, and jobs outside big companies are sparser and much less well paid.… Government and philanthropic funding is no match for corporate purses, and few institutions can rustle up the data and computing power needed to match work from companies like Google.”
Raji says, “Everyone’s now aware that true accountability needs to come from the outside.” That’s us, civil society.
How do we answer Hardt and Negri’s call to reappropriate the value of AI for the people? How can AI be used to advance justice? These are critical and timely questions.