
Editors’ note: This piece is from Nonprofit Quarterly Magazine’s winter 2024 issue, “Health Justice in the Digital Age: Can We Harness AI for Good?”
In this in-depth conversation about the effects of artificial intelligence on our society and our planet, Tonie Marie Gordon, NPQ’s senior health justice editor emerita, and Professor Shannon Vallor, Baillie Gifford Chair in the Ethics of Data and Artificial Intelligence at the Edinburgh Futures Institute, University of Edinburgh, and author of Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting (Oxford University Press, 2016) and The AI Mirror: How to Reclaim Our Humanity in an Age of Machine Thinking (Oxford University Press, 2024), discuss best practices for navigating AI’s potential and risks.
Tonie Marie Gordon: Your recent book is framed around the metaphor of AI as a mirror—a conceptual framework that I found to be very apt.1 Can you elaborate on that?
Shannon Vallor: I think it’s important to recognize that AI is not one technology but many. AI is not a clean, tidy, scientific label that we can attach to this kind of software and not this other kind. It’s more of a marketing term than a scientific term, at present. There are many different kinds of tools that use machine learning and rule-based programming—and other kinds of algorithms—to develop applications. People call it AI when it’s an algorithm that does something that they think is clever enough, or new enough, to be able to market as AI.
There are many different types of AI, and the different types have different structures, different capabilities, different limitations, different risks, different benefits. I wrote The AI Mirror to talk about one large class of AI technologies that are built using machine-learning techniques to create models of us, basically—models of human speech, models of human writing, models of human image making, models of human decision-making—by using large volumes of our data to then create a reflected image of us, which is how I came to the metaphor of the AI mirror for this class of AI technologies. So the starting point is to realize what kind of AI I’m talking about when I use the metaphor of a mirror. For example, if you think about a machine-learning algorithm like DeepMind’s AlphaFold, which is a model of protein structures that is used to predict certain kinds of biological and biomedical possibilities for protein folding—an important feature for research into medical treatments, pharmaceuticals, things like that—that kind of research doesn’t really fit the metaphor of the AI mirror.2 Because what’s being reflected in that mirror doesn’t look anything like us. If it’s a mirror of anything, it’s a mirror of protein structures. So I’m not really thinking about AI tools that deal with data from somewhere else in nature other than our own speech, thought, and action.
But—and here’s the first qualification—these AI tools that are built to generate reflections of human intelligence don’t reflect all of us. They don’t reflect all of society, because the training data that are used to produce these mirror images are like the light falling on a mirror. And a mirror can only reflect the light that reaches it. It can’t reflect the light that falls outside the edge of the mirror. We don’t have digital data for a lot of human intelligence, because people weren’t valuing the people who produce “those” books, or the people who produce “that” art—it didn’t get digitized, didn’t get celebrated, didn’t receive comments on Reddit or Facebook. So the training data that we use to build models of human intelligence, like ChatGPT, which are not themselves intelligent but simply reflect the human intelligence that trained them, are actually a very small selection of the data that we have, and those data are a small selection of the outputs of human intelligence on the planet. The data heavily overrepresent men. They heavily overrepresent English speakers. They heavily overrepresent cultural artifacts produced in the Global North. They heavily overrepresent cultural artifacts produced by the wealthy and people who had access early on to digital technologies and the bandwidth to digitize their activities online. So there are billions of people on this planet who don’t get to be represented in these mirrors, and their intelligence is like the light that falls outside the mirror.
That important qualification notwithstanding, we have centuries of data collected because of how much work we’ve done digitizing the past—digitizing old paintings, digitizing old books, digitizing old historical records—which we’ve also been using to train these models. So, these models are reflections of our past—and again, reflections of a certain select subgroup of humanity’s past. But what they do is very much like a mirror. So, if you think about a mirror, you’ve got a glass surface, and the properties of the mirror’s coating determine how much light it will reflect, how well it will reflect it, whether the image that results will be distorted or magnified. So you can create different kinds of mirrors by using different kinds of coatings and glass properties. Machine-learning algorithms differ in the same sort of way. That is, we can change the algorithm in order to change what we want to show—what we want the algorithm to magnify, what we want it to minimize, what we want it to exclude, what we want it to center, what we want it to amplify. So we build algorithms not to just neutrally reflect data—just as there’s no mirror that’s neutral, there is no neutral algorithm. AI models are mirrors that are manufactured to produce certain kinds of reflections of human intelligence.
“We don’t have digital data for a lot of human intelligence, because people weren’t valuing the people who produce ‘those’ books, or the people who produce ‘that’ art.”
And the final thing is that the outputs of these AI models—what you get out of something like ChatGPT or what you get out of any large language or image or video model—is not a thought or a work of art or even a human sentence that has been spoken. What you get is a reflection of that—something that looks very much like it but that doesn’t contain or encompass what stands behind it. Think about when you look in the bathroom mirror—you know there’s nobody on the other side of that mirror. You see a body in the mirror, but you know that it’s just a reflection, that it’s shallow. There’s no depth to it. You can’t press deeper into it. There’s nothing behind that glass. But when we talk to an AI tool like ChatGPT, it’s very easy to be deceived, or to fool ourselves, into thinking that we’re talking to another mind, that there’s another mind over there on the other side of the prompt window that’s speaking back to us. But that would be like confusing the image in your bathroom mirror with another person and thinking that you’re having a conversation when actually you’re just talking to yourself.
That’s the other important thing about this metaphor—it helps us to understand what AI tools are and what they are not. Because what they are not are other minds. They are not machine minds. They are not machine intelligences. They are mirror reflections of our collected minds and intelligence, and they’re very shallow in the way that a reflection is, and that means we can’t rely on them to do all the things that real minds do. Yet these tools are being marketed as replacements for us, as if they were not our mirror reflections but rather a new kind of mind that has been built to compete with ours. And that’s the kind of illusion that I’m trying to puncture in the book.
TMG: You were just talking about how, through pulling in centuries of data, a lot of these algorithms are reflections of the past. They reflect these very large volumes of data that come from past thoughts and past interactions. They also reflect the long arc of what we know is a history with a lot of inequality baked into and embedded within those thoughts, those actions. That basis in past mistakes, past brutality, and marginalization of a lot of different folks is what makes it particularly problematic in terms of how AI is functioning in our society today. It affects how people look to AI in terms of making future decisions or thinking about our immediate and long-term future.
SV: Yes, the book focuses a lot on that: the fact that we’re being told that these technologies are the future, but the only capacity they actually have is to sniff out the patterns of the past in our data and use them to make predictions—which is essentially a way of saying we’re going to do things exactly as we’ve done them before, only more so.
We know that we live in a world that is unjust, increasingly unequal, increasingly fractured and divided, and increasingly unsustainable—environmentally, politically, economically—and yet what we’re doing with AI tools—not because we have to use them this way, by the way, but because we’re choosing to use them this way—is to reproduce all of the unsustainable patterns of the past that have gotten us into what many people have called the permacrisis, where all of our institutions and systems seem to be increasingly stressed and under threat of fracture and collapse.3 And if the patterns that led you into unsustainable ways of life are reproduced in the machines that you build to automate society, that’s essentially the same as seeing that you’re heading over a cliff and then pressing on the accelerator. A lot of these AI tools are designed to do just that, because it’s easy and because it seems like the rational and efficient thing to do.
Take policing, for example. If we build a tool to do automated distribution of policing resources across the city, and we train that model on the decisions that were made in the past about where to send a police presence, that will result in sending police out primarily to surveil and arrest people in poor and minoritized neighborhoods rather than wealthier or Whiter neighborhoods, unless we deliberately engineer it to do something different. Or take public benefits. It has been very common for governments to request AI models that can identify fraud in a public benefits system—for instance, in the applications they receive for childcare benefits or unemployment benefits. Well, those models have failed spectacularly—and routinely—in many different countries. A failure of that sort brought down the Dutch government in 2021: so many innocent lives were ruined that the government had to express a ceremonial apology that included resigning, in order to acknowledge the harm that had been done.4 These things happen because we train those models on all of the biased human decisions about benefits that were made before—the decisions that were biased against single parents, the decisions that were biased against disabled people, the decisions that were biased against immigrants—and that treated these groups with greater suspicion than other groups of people. When we build the algorithms and train them with data from past human decisions, all of those biases get automatically baked in, whether we were conscious of those biases—and named them—or not.
There is the famous case from 2018 of Amazon building a hiring algorithm because they didn’t think they were getting the highest-quality hires, due to their recruiters’ biases with respect to race, gender, class—things that don’t strictly correlate with engineering ability.5 They wanted an algorithm that might be better than the humans; but all their data were historical data on, presumably, who humans had hired before, how humans ranked their applications before, who had been promoted in the company before, who had succeeded in engineering roles, who had stayed at the company in engineering roles, and so forth. And all those data reflect the same sorts of biases—against women engineers, against people from minority or lower-economic-class backgrounds. Biases were therefore baked right into the algorithm trained on that data, and what the algorithm did was things like downgrade a woman’s CV simply because one of the items on her resume was that she was the president of a women’s chess club.
Now, normally, if you’re an engineer, having been the president of your chess club is indicative of a certain kind of leadership. It’s also indicative of a certain kind of skill: chess correlates quite well with the kinds of mathematical and planning abilities that engineers need. If you were the president of the women’s chess club at your university, that should count extra for you, right? But what the algorithm was doing was downranking any application where the word woman or women’s appeared, or any proxy for that. So if you went to a college that was traditionally associated with women, it might downrank your application.
So they had to scrap the algorithm, because they couldn’t trust it not to reproduce the same biases that had been ingested from the training data. And all this happened without anyone telling the algorithm even who the applicants were! The applicants weren’t even classified by gender. But gender-associated terms had appeared in the training data in ways that were associated with negative outcomes because of the bias in the system. Algorithms don’t have the intelligence to discern between patterns that we don’t want to reproduce and those we want to carry forward. So not only do algorithms pick harmful patterns up and carry them forward, they can also strengthen them.
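To make the proxy-bias mechanism described above concrete, here is a minimal, hypothetical sketch in Python. It trains a simple classifier on invented resume data whose historical “hired” labels are biased against a gender-associated term; the model is never told anyone’s gender, yet it learns a negative weight for the proxy feature. The data, feature names, and numbers are illustrative assumptions, not a reconstruction of Amazon’s system.

```python
# Minimal, hypothetical illustration of proxy bias (not Amazon's actual model).
# A classifier trained on historically biased hiring labels learns to penalize
# a feature that merely correlates with gender, even though gender itself is
# never provided as an input. All data here are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Features the model sees: an engineering-skill score and a flag for whether
# the resume mentions a gender-associated term such as "women's".
skill = rng.normal(0, 1, n)
mentions_womens = rng.integers(0, 2, n)

# Historical "hired" labels reflect past human decisions that undervalued
# otherwise equally skilled candidates whose resumes carried that term.
p_hired = 1 / (1 + np.exp(-(skill - 1.5 * mentions_womens)))
hired = rng.binomial(1, p_hired)

model = LogisticRegression().fit(np.column_stack([skill, mentions_womens]), hired)
print("learned weights [skill, mentions_womens]:", model.coef_[0])
# The second weight comes out strongly negative: the model has absorbed the
# historical bias and will downrank any resume containing the proxy term.
```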
“[AI tools] are not machine intelligences. They are mirror reflections of our collected minds and intelligence…. Yet these tools are being marketed as replacements for us.”
And we see that. We’ve seen algorithms perform this way in the healthcare space, for instance. There was an American hospital algorithm that was designed to better triage patients with respect to who needed the most medical attention, especially as regards sudden worsening of a condition.6 The model was actually boosting White patients over Black patients who were more critically ill and more likely to have sudden worsening—exactly the opposite of what the model was designed to do—because a piece of the training data they had included was the expected amount of healthcare dollars that would be spent on that person, naively assuming that that was a good proxy for how much healthcare that person might need. But if we know anything about the American healthcare system, which is where the data came from, Black people in America get much less money spent on their care than an equivalently ill White patient. So the model was simply looking at how many healthcare dollars could be predicted to be spent on a patient, and it turns out that in America, that has as much to do, or more to do, with the color of your skin than it does with what’s going on with your body.
So the model just reproduced that bias, and made it worse—because that model was being deployed in many hospitals, and the biased medical decisions were being amplified by computer decisions. Now, once they discovered it, of course they were able to revise the model.7 But that’s just the tip of the iceberg. Similarly designed models are operating all over society in ways that are amplifying those old historical biases that we have decided are illegitimate and unjust, and in many cases—like in the Amazon case—we are actively trying to root out. If we naively rely on these algorithms to be objective predictors of what should happen, we will only reproduce the past that we’re trying to change.
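The cost-proxy failure works the same way, and a short, hypothetical sketch can show it. If “predicted healthcare spending” stands in for “healthcare need,” a group that historically receives less spending at the same severity of illness gets flagged for extra care less often, and only when much sicker. The groups, numbers, and threshold below are invented for illustration; this is not the deployed hospital model.

```python
# Toy, hypothetical illustration of a bad label choice (not the real hospital
# model): using predicted spending as a stand-in for medical need penalizes a
# group that historically receives less care at the same level of illness.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

group = rng.integers(0, 2, n)                       # 1 = historically underserved group
illness = rng.gamma(shape=2.0, scale=1.0, size=n)   # true severity of illness

# Historical spending tracks illness, but the underserved group receives
# systematically less care for the same severity.
spending = illness * np.where(group == 1, 0.6, 1.0) + rng.normal(0, 0.1, n)

# Triage by the cost proxy: flag the top 10% of predicted spenders for extra care.
flagged = spending >= np.quantile(spending, 0.9)

for g in (0, 1):
    mask = flagged & (group == g)
    print(f"group {g}: flagged {int(mask.sum()):4d}, "
          f"mean illness of flagged {illness[mask].mean():.2f}")
# Far fewer members of group 1 are flagged, and those who are must be much
# sicker than flagged members of group 0, analogous to the disparity
# Obermeyer et al. documented.
```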
The really important point I want to stress here is that better design of AI systems can have the reverse effect. We can design AI systems to identify and correct for or root out these kinds of biases. We can use an AI tool to identify unfair patterns in a decision-making process, such as what was happening with the Amazon case and the public benefits case. We could automate flagging of applications that are in danger of being rejected for the wrong criteria. But instead, we do the reverse. We allow algorithmic models to be used to deny people benefits to which they are entitled. We allow them to be used for unfair political and economic decisions. And sometimes, the way these models are used is driven or affected by just pure laziness or naivete—and all of that is avoidable.
TMG: Another of the things that I loved about your book is how you talk about human virtues at their best, and what we need in order to meet all of the pressing, seemingly intractable problems that we’re facing—problems that we have decided only AI can solve. Can you speak to some of these virtues and their current place within society, and how AI is shaping or influencing our ability to express them? Some of the ones that I picked up on were love, courage, collective wisdom, our capacity to care for one another. AI does a really poor job, of course, of enabling these things. In your book, you talk about AI as kind of eating away at our capacity to express these things. That was something that I hadn’t really contemplated before.
SV: That aspect of the book is built upon my first book, Technology and the Virtues, which explores the relationship between technologies—not just AI—and our moral strengths, that is, our virtues.8 Technology and the Virtues focuses on how our virtues are shaped by our technologies, and that our technologies can either help us express our virtues more capably and consistently or they can degrade our ability to do that. And the reason is that virtues—character traits that we admire and approve of in one another and encourage ourselves and one another to develop—courage, love, honesty, generosity, fairness, justice, responsibility, creativity, care, and service, and so forth—are not things we are born with but rather must be developed gradually, and expressed wisely.
The first philosopher to talk at length about this was Aristotle, who focused on the fact that virtues must be cultivated through conscious effort and habit—that we learn to be honest, for example, by building up a pattern of telling the truth. And the more we tell the truth, the better we get at telling the truth, the better we know when to do it and how to do it right—because there are certain ways of telling the truth that actually do a lot of harm, right? So even if you want to be honest, there’s a way of being honest that is good for the situation. That’s what virtue is. It’s both the character trait and knowing how to express it wisely in your present situation. For example, there are questions a child will ask you that you have to be very careful about how you answer, but you also don’t want to lie. It can be very tricky to figure out what version of honesty in any given situation suits this person’s needs and is right for the relationship. And that kind of knowledge comes from experience—it comes from doing. You aren’t born knowing those things.
I use the child example, because one of the ways that people can best relate to this is the experience of parenting, because parenting is one of the hardest things that people do. Most people don’t feel like they know what they’re doing at the beginning, because it requires you to exercise your virtues in a totally new situation. Yet with practice, and also with support from one’s family and community, one can get better at expressing patience with a child, honesty with a child, compassion with a child. So it requires that commitment, but that’s there in all our relationships. We have to build that when we relate to each other as friends or coworkers or fellow citizens. All those virtues are still just as important in those other kinds of relationships. And we work just as hard, even though we’re not always realizing it, at trying to be good people for others all the time. But in order for that to happen, we have to have the opportunity to make decisions about how we interact with one another.
“Algorithms don’t have the intelligence to discern between patterns that we don’t want to reproduce and those we want to carry forward. So not only do algorithms pick harmful patterns up and carry them forward, they can also strengthen them.”
I now have an email app that, if I choose to use it, will automate any response that I want to send to something that has landed in my inbox. Of course, I can edit it if I like, but the tempting thing is to just look at it, think, Okay, that looks good, and hit “Send.” Now, when I automate my responses to you, I miss the opportunity to ask myself, In this situation, is this a caring way of speaking to you? Is this an honest way of speaking to you? Is this a compassionate and courageous way of speaking to you? I don’t ask those questions if I’m just looking at the words that a large language model threw up as an auto reply. And frankly, if I’m just responding to a marketing email or something that’s not significant—then sure, that’s fine. But the temptation, of course, is to push us to be more productive, meaning less thoughtful and less prudent in our interactions, because that takes time and judgment. To be more productive, we’re being encouraged to automate the way we speak to each other, to automate the decisions we make about each other, to automate the decisions we make about ourselves, to automate the decisions about what habits we will take up.
So I think what happens over time, then, is we lose the habit of making ourselves who we are, of consciously choosing to become certain sorts of people, and instead we become whomever the algorithm automates us to be. We become—we form—whatever habits the algorithm has led us into. And we do so without any kind of conscious awareness of how that’s affecting our relationships with other people, and who it’s helping, who it’s harming. It’s just about efficiency and productivity without thinking about what the broader goal of it all is. So, what I encourage in that book is a new relationship with our technologies, AI included, that looks at the kinds of people that we want to become, and then asks, How could technologies help us cross that gap between who we are today and who we would like to be? And sometimes they can’t. Sometimes it’s just still on us to do the hard, manual labor of making ourselves who we are. So the idea is not that technology has to always be the answer, but rather, when we use it, we should have a good reason for it. And that reason should be: this technology can help me become who I want to be, who I need to be for others, in a way that I can’t do on my own.
I am not antitech. I am not anti-AI. Our societies are so large and so complex that we can’t run them without technology. I think these technologies have an important role to play in any society as large and complex and dynamic and unpredictable as ours. If all decisions were brought down to human decision-making speed, society would grind to a halt. I understand that. But we’re not using technologies wisely and selectively right now in the areas where they’re most needed. We’re using them instead in ways that someone else can make money from most quickly. And unfortunately, that’s damaging the reputation of AI. So you see a lot of backlash against AI right now, because you see artists who have been ripped off by AI platform companies that have scraped their artwork and used it without compensation or credit.9 And frankly, the world was not desperate for an AI plagiarism machine. One of the great needs of society five years ago was not, Hey, we need something to produce machine-generated photographs and paintings and novels. That was not on the list of great needs. That’s what we got though, right? But if you’d asked me five years ago—if you’d asked yourself five years ago—What are five problems that human beings and our institutions don’t seem to be able to solve because we don’t have the speed or scale or analytical capacity to manage them? What are the five biggest problems that we need to make progress on rapidly? You would have named things like finding quicker paths to clean energy, finding ways to make crops more resilient to climate stress. You would have thought of finding ways to get drugs and food more efficiently to parts of the world where we’re not able to equitably meet people’s basic needs. You would have had a long list of things that people need.
And yet we’re not using AI to focus on those things. We’re not using AI to focus even on things like fighting corruption, which is an application I talk about in the new book that I think is just wildly underexplored. We have a vast problem of political corruption, where outside forces turn governments and institutions against the interests of those they represent, through bribery and extortion and various other kinds of criminal activity. And it’s extremely easy to use AI tools to find patterns in that kind of activity. In fact, we do know that it’s being used by criminal investigators and national security services to identify patterns in organized-crime networks and things like that. But somehow, it never quite gets back to the level of political corruption, where the people in power—the people who currently have status—are the ones under the AI microscope. Somehow that never comes to pass.
“To be more productive, we’re being encouraged to automate the way we speak to each other, to automate the decisions we make about each other, to automate the decisions we make about ourselves, to automate the decisions about what habits we will take up.”
And, of course, we know why. AI is currently being deployed in the ways that serve those who already have a disproportionate amount of power and wealth in society. It’s used to consolidate their power, it’s used to grow and consolidate and protect their wealth, and it’s used to ensure that power and wealth become ever more concentrated in their hands. AI is not inherently a tool that must be used to consolidate wealth and power. It’s our current regulatory and legal failings that allow that to happen. But it’s very possible to imagine a world where the regulatory environment and the laws are such that the incentives are to use AI to actually strengthen our institutions and make them more transparent, more accountable to the people they serve. That’s completely within our capacity.
TMG: Another main thread of The AI Mirror is calling attention to a lot of the alarmist, fearmongering ideas about AI—that it is an existential risk—and how that is conceptualized and put forward in the public arena. And you talk about reshaping this idea of risk. What would be a more constructive and productive discussion about AI and risk than this kind of “AI is going to become AGI [artificial general intelligence] and the computers are going to replace us” rhetoric?10
SV: I think when you see that those narratives are being pushed most aggressively by the people who are profiting the most from the AI boom, that should be the first red flag that you examine with a critical eye. Because the existential risk narrative is a very useful one for people who are heavily invested in AI, for two reasons. First, the existential risk narrative tends to be focused on long-term horizons, which diverts people from shorter-term concerns. Although sometimes they’ll bring that horizon forward to scare people into action, in reality, many of those scenarios are ones we are not likely to confront for 50 or 100 years—if then. We are nowhere near having the ability to build conscious machines. We are nowhere near having the ability to build AGI. Now, we might, in the near future, have some radical jump in computing capacity that brings us to that point, but no one actually knows how to make that jump happen.
It’s not that it’s irrational to be worried about this possibility. Even though there is no scientific evidence that we are heading toward building conscious machines with desires of their own, more powerful AI machines could still do a lot of damage even if they remain mindless. But when we redefine AI risk as AGI risk, what we are saying is, “You can ignore what’s happening now. That’s nothing in comparison to what could be coming.” But it’s not nothing, and unlike the entirely speculative AGI risks, today’s AI risks are already being realized, hurting actual people and communities. When the existential risk narrative pushes for governments to invest in safety and regulation for future AGI but not today’s AI, that sounds to me like refusing to call the fire department to extinguish your burning roof, because you want to install a better burglar alarm first. The burglar’s not here and may never arrive. The fire is already lit, and someone needs to make sure it doesn’t hurt anyone else. A lot of companies like OpenAI have been promising us that these technologies are what is going to get us to intelligent machines that think like humans do.11 And even companies like OpenAI now admit that that’s not where we are, and that no one really knows when that’s coming.12 These technologies are not the thing that creates the Terminator. But if you are heavily invested in AI, and if you can get people, particularly in government, to worry about those longer-term scenarios, those more hypothetical and speculative risks that lie far in the future, you can get them to stop asking questions about what should be happening today with the harms that AI is causing right now.
“AI is currently being deployed in the ways that serve those who already have a disproportionate amount of power and wealth in society.”
Another reason that is also quite attractive for people who are heavily invested in the AI narrative is that it imbues them with a great sense of personal power. If you’re the person who created the thing that is either going to save or destroy humanity, then either way it goes, you’re the most powerful human who ever lived, right? You’re the most historically important human who ever lived if you believe that you have created the technology that will either destroy human civilization or save it. So that’s why you have these doom narratives right alongside these “AI is going to save us all” narratives. Neither of these things is true. These are reflections of a certain kind of technologist’s ego that wants to believe that they’ve created the thing that’s going to overtake, that’s going to succeed, humanity.
But if you strip away the ego, and if you strip away the kind of cynical incentive to push people’s attention farther down the road, what you see is actually a technology that is pretty impressive in some ways but is not a great mystery, is not a mind, is not anything like the kinds of AI that we encounter in science fiction—even though we can interact with it superficially for a little while and pretend that that’s what’s happening. But very soon, you realize you’re not talking to anything that understands you or the world around you. So we’re dealing with tools that are predictive word and sentence generators that can find complex patterns in our language and repeat them back to us—and that’s it. They’re all dark inside. There’s no awareness. There’s no desire. There’s no need to come after us for anything. These are just tools that predict words and pixels and other kinds of representations that we feed into them. Most machine-learning researchers who aren’t seeking celebrity know all this and will tell you this is so, if you ask. Most machine-learning researchers I work with are very frustrated by the AI hype-and-doom cycles, because they recognize that both of them are a distortion of what’s really happening. And most machine-learning researchers actually value honesty about what they’re building. It’s the celebrity ones and the ones who are heavily invested in power and the wealth that can be created by these tools who are pushing the hype-and-doom narratives.
But that said, there is indeed a more immediate existential risk here—two, actually—that I point to in The AI Mirror. One is existential in the sense of what existential risk has largely come to mean, which is something that could destroy human existence. And that risk is the immense carbon appetite of large AI models, and the ways in which the explosive growth in AI development may prevent us from effectively responding to the climate crisis and meeting our targets for reducing dependence on fossil fuels. They also consume a great deal of water, which is another resource that we need to worry about. And frankly, from a climate standpoint, all of the current models of climate change are pointing to a future that is even more concerning than climate scientists were worried about 10 years ago. The worst-case scenarios are turning out to look more likely than the scenarios positing a slower acceleration of climate change. The train is picking up speed in a lot of ways, and we’re seeing concerns now about disruption of the Atlantic circulation current, also known as the Atlantic Meridional Overturning Circulation, that could, if disrupted even in the slower phase scenarios, bring about a new ice age in Europe while leaving the rest of the planet in extreme heat.
So, we would have massive challenges to deal with from a standpoint of agriculture and basic human survival. That’s an existential risk that is right in front of our faces and that we need to be managing right now. It is not speculative, it is not hypothetical, we can see it starting to happen. And AI is potentially contributing to it and making it worse. The environmental cost of AI is hitting us now, and the climate risk is right here in front of us, not something that might come. We’re already locked into some pretty rough years for the human family because of climate change, and what we need to do is conserve all of the ability we have to manage those climate stresses and keep human civilization together. And we need to be able to use AI responsibly and very selectively to do so.
The other more immediate existential risk is in the more philosophical sense. It is what the existentialists of the 20th century named as “existential” concerns: human purpose and human freedom, and our awareness of our purpose and freedom. One of the things that I am concerned about is that we’re increasingly being encouraged to give up on ourselves, to give up on each other, to think that human intelligence isn’t actually worth much, to believe that we’re destined to be improved upon by machines, to believe that humans can’t be trusted to solve our own problems. And I’m telling you, and I’m telling your readers: The people who have an interest in you believing that are the people who already have their hands on the wheel and want to make sure that you don’t grab for it. They want you to believe that AI is at the wheel and that all you can do is go and sit down in the back and hope that the ride takes you somewhere nice. But in fact, it’s still people with their hands on the wheel—powerful people, powerful companies, powerful governments—shaping the future in a direction from which they profit substantially in the short term, even though everyone loses in the long term if we stay on the current road. And what they want most of all is for the passengers not to stand up and grab the wheel.
“That’s our existential task right now: carry the human family and the communities of life on this planet that we depend on—and that depend on us—safely into the future. And we have to become the kinds of beings who can do that—and I don’t think AI is the answer for that.”
So, the narrative that AI is smarter than you, that AI is more trustworthy than you, that AI is more objective and rational than you—that is a marketing ploy. But it’s also a political ploy to cause people to give up their political agency. To surrender their sense that they have power, that they have the right to determine what sort of societies they live in. So the more that you push automation as a way of replacing human judgment, the more people are encouraged to let go of their individual responsibility and our collective responsibility for ourselves. To me, that’s an existential risk in the sense that existentialism pointed out that humans are just animals plus freedom and responsibility. Animals who somehow got the ability to ask ourselves if we want to be this way or if we want to be something different. That’s presumably a question a cow doesn’t ask, right? What kind of cow should I be today? Should cows be like this or like that in the future? Humans can ask those questions. Should humanity and our societies be organized this way or that way? Should they be democratic or authoritarian? Should they be equitable or inequitable? Should they be compassionate, or should they be cruel? We can ask those questions, and we can answer them, but not if we give up on the power of humans to actually drive our own future. And that’s what some of the AI narrative is trying to do: to lull us into a sense of passivity, into a loss of confidence in ourselves and one another. I see that having a very powerful effect.
My new book is, in a way, trying to restart that engine of human confidence and a sense of human empowerment, and the fact that we are entitled—all of us—to have a voice in how our lives go and the sorts of societies that we share and the sorts of futures that we and our children will have. We all have the power and the right to have a say in where the human family goes. And we can only use that right if we believe that it’s in our power to do so—believe that it’s in our power to be wise, to be courageous, to be compassionate. Humans will never be perfect. We will never have perfectly just societies. We will never be perfectly kind. We will never be perfectly wise. But we have, over thousands of years of human history, learned to make ourselves more wise, to make ourselves more kind, to make ourselves more far seeing and more far reaching in our concern and compassion. Five hundred years ago, it was unthinkable that you would have groups of humans who were concerned for the fate of, and actively working to preserve good conditions for, humans and nonhumans on the other side of the planet whom they will never meet. It was beyond our capacity five hundred or a thousand years ago to spend one’s life working for the rights of people in a place where you don’t live; or trying to protect the lives of other sentient, sensitive creatures, or our biosphere.
These are things that were not part of the collective human experience of morality or justice back then. Yet we discovered these ideas, and we built them together, and we pushed them forward. And we did all of that without AI. And we need to do that again now. We need to look to the future and say, The future, because of climate change and some other difficult challenges, is going to require more of us collectively than we are able to achieve today, so how do we make ourselves into the kinds of beings that can carry the human family to safety? That’s our task. That’s our existential task right now: carry the human family and the communities of life on this planet that we depend on—and that depend on us—safely into the future. And we have to become the kinds of beings who can do that—and I don’t think AI is the answer for that.
TMG: A lot of the people we’re speaking to—our readers—are people who, of course, work in nonprofits, work in NGOs, work in government agencies, or lead their own mission-focused organizations. Given how ubiquitous and diffuse AI has become, people in those positions now have to wrestle with deciding whether or not to use the technology and, if they do use it, how to do so responsibly. But we haven’t equipped people very well to make those decisions. Is there any practical advice you can offer about the kinds of things people should be weighing when it comes to AI in such settings?
SV: That’s a great question. I think the first thing to ask oneself is, What is the problem that I’m trying to solve, and why is AI the right tool to solve it? And then ask, Do I have/know the solution, or am I just assuming that AI provides a solution for any problem? Because it doesn’t. There are some problems that AI is not very good at solving or helping you with. The first thing to ask, put more deeply, is, What is the problem, and what about AI makes it the right and best approach? What makes this the most effective and, from a climate standpoint, responsible approach to take? What are the alternatives? Because it might be, for example, that the problem could be solved with AI but only with an inordinate amount of computing power that would be both economically costly and environmentally damaging. At that point, you need to ask another question: Is there a smaller, more efficient AI solution? Or is there a non-AI solution that may take a little bit more time but that can be done without a huge climate impact? So, thinking about the problem, why AI is the right tool, and whether the cost of using the tool is justified by the nature of the problem. Sometimes the answer is yes. Sometimes the answer is: the good that can come from this will result in lives being saved and resources elsewhere being used so much more efficiently than they are currently that this cost can be justified. But a lot of times, people don’t ask those questions at all. They just move straight to, How can I use AI? Let me find a problem to hit with it. To a human with a hammer, everything looks like a nail. Do not be the human with a hammer. Do not make everything into a nail for AI.
“[I]f you’ve crossed that first step where you have a good reason to use AI and the cost of it is justified, you still have to have human responsibility at every point in its use.”
The second thing to ask is, Who are the people who are most vulnerable to either the misuse of this technology or the use of it not going as planned? So assuming that you don’t want to do harm, these tools can cause harm even when used in a well-intentioned way. You need to know what the risks are of this tool doing harm, and who is endangered by those risks. And then you need to make sure that, if at all possible, you have a way of consulting with those people and seeing if you can mitigate those risks by bringing them into the design process. For example, if you’re a nonprofit that works with disabled people, and you’re exploring how to use AI to help those you serve gain access to information and to public services more efficiently, you will need to be concerned about such things as the risks that a generative AI model you might use could create false information—because these models often do that, right? They create fabricated content—some people use the word hallucinated, but I prefer not to use that mentalizing term for AI—and these fabrications can be false, and mislead people. Well, you have to consider that risk. And you should be consulting with your audience, with the people you’re serving, and make sure you know what their interests and needs are and whether the risks are ones that they’re willing to accept, and whether they can work with you to design a way that the technology can be used that’s safer or fairer. We call this coproduction, or participatory design—it’s the idea that the people whom you’re presumably affecting with the use of this technology should be part of the process. Their knowledge and their lived experiences should inform, and even lead, what you’re doing—and from the start, not at the end. It’s not enough to just ask people for consent once you’ve already built the application or the tool and are now deploying it on them. Do not wait for that. You want their voices in the conversation from the beginning, understanding where human oversight and accountability are in any process.
So again—if you’ve crossed that first step where you have a good reason to use AI and the cost of it is justified, you still have to have human responsibility at every point in its use. Who is responsible for making sure it’s designed properly? Who is responsible for making sure that the data that are used to train it are acquired ethically, managed correctly, and of high enough quality to do the job that you want? Who is responsible for testing the trained model? Who is responsible for determining whether or not it is fit for purpose and ready for deployment? Who is accountable for monitoring it after it has been deployed and making sure it is working as intended and not having unintended effects? And who is accountable for doing something about it if it turns out that it is having harmful effects? You should have a whole life cycle plan for the AI tool, from conception to after deployment, where it’s clear where human responsibility for the outcomes is all the way through the process. And you should be making sure that the entire process is guided by values, like trust and service, that are vital to the nonprofit sector. There’s a reason that nonprofits exist, and it has to do with the value they provide in society that would otherwise be lacking—so it is critical to make sure that those values, the values of service and care, are the ones driving the AI process.
Notes
1. Shannon Vallor, The AI Mirror: How to Reclaim Our Humanity in an Age of Machine Thinking (Oxford, UK: Oxford University Press, 2024).
2. John Jumper et al., “Highly accurate protein structure prediction with AlphaFold,” Nature 596 (July 2021): 583–89.
3. Alexandria Herr, “A Look Back on Life in Permacrisis,” Atmos, December 20, 2022, atmos.earth/permacrisis-word-of-the-year-2022-climate-crisis-change/.
4. Jon Henley, “Dutch government resigns over child benefits scandal,” The Guardian, January 15, 2021, theguardian.com/world/2021/jan/15/dutch-government-resigns-over-child-benefits-scandal.
5. Amazon started the algorithm in 2014; by 2015, Amazon knew it wasn’t working—and they disbanded the effort in 2017. See Jeffrey Dastin, “Insight—Amazon scraps secret AI recruiting tool that showed bias against women,” Reuters, October 10, 2018, www.reuters.com/article/world/insight-amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK0AG/.
6. Ziad Obermeyer et al., “Dissecting racial bias in an algorithm used to manage the health of populations,” Science 366, no. 6464 (October 2019): 447–53.
7. Tomas Weber, “Rooting Out AI’s Biases,” Hopkins Bloomberg Public Health, November 2, 2023, magazine.publichealth.jhu.edu/2023/rooting-out-ais-
8. Shannon Vallor, Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting (Oxford, UK: Oxford University Press, 2016).
9. George Denison, “AI data scraping: ethics and data quality challenges,” Prolific, August 13, 2024, www.prolific.com/resources/ai-data-scraping-ethics-and-data-quality-
10. See Émile Torres, “The Madness of the Race to Build Artificial General Intelligence,” Truthdig, March 14, 2024, www.truthdig.com/articles/the-madness-of-the-race-to-build-artificial-general-intelligence/.
11. Will Douglas Heaven, “Now we know what OpenAI’s superalignment team has been up to,” MIT Technology Review, December 14, 2023, technologyreview.com/2023/12/14/1085344/openai-super-alignment-rogue-agi-gpt-4/.
12. Sharon Goldman, “In Davos, Sam Altman softens tone on AGI two months after OpenAI drama,” VentureBeat, January 17, 2024, venturebeat.com/ai/in-davos-sam-altman-softens-tone-on-agi-two-months-after-openai-drama/.