April 17, 2019; Vox
A month ago, the nonprofit OpenAI announced a radical structural change. In order to fulfill its mission of ensuring “that artificial general intelligence (AGI)…highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity,” it needed to transform itself into a for-profit organization. Weeks later, we are learning more about the limitations of the nonprofit structure that, in the organization’s view, prompted it to abandon that structure after only a few years of operation. We’re also learning about the risks they, and we, face if this transition goes wrong.
As computing systems become faster, smarter, and more powerful, their potential for extreme outcomes, both good and bad, grows as well. When OpenAI launched in late 2015, its founders sought to share knowledge broadly as a check against the harm bad actors might cause if technological capability were kept secret. As NPQ reported at the time, “Everything OpenAI learns will be made freely available to permit the world’s innovators to serve as a counterbalance to the huge investments in AI being made by a few behemoths, namely Google, Microsoft, and Facebook. While their work is often shared in research papers, commercial interests guide their advancements.”
OpenAI saw an open nonprofit structure as best able “to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.”
Since our research is free from financial obligations, we can better focus on a positive human impact. We believe AI should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as possible.
Three years later, OpenAI’s leaders have concluded that being a nonprofit organization, and being open, would make it harder to accomplish this mission. Ruth McCambridge, writing in NPQ, said that the change was prompted by their understanding of the fiscal limitations of a nonprofit organization: “The cited reason for the change is a need to raise ‘billions of dollars’ and attract the best and the brightest with massive signing bonuses.”
OpenAI needed a structure that could both retain its mission and attract profit-seeking investors. In a recent interview with Vox’s Kelsey Piper, OpenAI CTO Greg Brockman shared more of the thinking behind the shift.
A nonprofit is just great for having a pure mission that’s very clear how it works. But you know the sad truth is that not enough gets done in a nonprofit, right? And in a for-profit—I think too much gets done there.
The new hybrid structure “promises to pay shareholders a return on their investment: up to 100 times what they put in. Everything beyond that goes to the public. The OpenAI nonprofit board still oversees everything.” Vox notes that this profit-sharing limit is quite significant in the world of emerging technology: “Jeff Bezos reportedly invested $250,000 in Google back in 1998; if he held onto those shares, they’d be worth more than $3 billion today. If Google had adopted OpenAI LP’s cap on returns, Bezos would’ve gotten $25 million—a handsome return on his investment—and the rest would go to humankind.”
Beyond concluding that more investment is needed to accomplish their technological goals, OpenAI’s leaders also concluded that safeguarding the public good would require less openness.
Let’s repeat that for emphasis: reconsidering the context and content of their work, OpenAI’s leaders concluded that safeguarding the public good would require less openness.
According to Brockman, “OpenAI is about making sure the future is going to be good when you have advanced technologies. The shift for us has been to realize that, as these things get really powerful, everyone having access to everything isn’t actually guaranteed to have a good outcome.” In its current iteration, the organization will closely control which technology it shares openly, keeping the responsibility for ensuring its work benefits the common good on the shoulders of its corporate board, not the community.
OpenAI decided that nonprofit structures cannot tackle the work its leaders believe must be done for the well-being of society. They’ve also concluded that “normal” for-profit structures won’t let them fulfill their mission. They know what will not work, not yet what will. “One gap in our current structure that we do want to fill is representative governance. We don’t think that AGI should be just a Silicon Valley thing. We’re talking about world-altering technology. And so, how do you get the right representation and governance in there? This is actually a really important focus for us, and something we really want broad input on.”
As the story is told, the power of artificial intelligence, for good and ill, motivated OpenAI’s funders to launch the organization as a control rod, protecting the public from runaway technology. They’ve since learned that their funders are not willing to donate at a level that can keep them in the game, so they’ve turned to a new source of funding, hoping they can attract investors and still stay on mission. Taking this leap without a structure in place to protect mission and principles into the future is questionable enough that the organization should legitimately give up its nonprofit status: it holds no assurances that the work it is doing will serve the public good, as the public defines it.—Martin Levine