March 11, 2019; Financial Times
In the face of advances in artificial general intelligence that alternately terrify and delight futurists, OpenAI, the nonprofit collaboration started in 2015 by Elon Musk, among others, to carry the banner for the human race, has decided it will cease being a nonprofit and become what’s described as a chimerical entity called a “capped profit.” Having apparently switched sectors willy-nilly, what does that mean for the mission intentions it has so freely flung about?
Most of OpenAI’s future work will be done under the name OpenAI LP. But, never fear, its intentions are pure, says the release, citing the fact that OpenAI LP will be governed by the OpenAI nonprofit board, only some of whose members will be allowed to have a financial interest—maybe we should call this “partial inurement”?
OpenAI Nonprofit governs OpenAI LP, runs educational programs such as Scholars and Fellows, and hosts policy initiatives. OpenAI LP is continuing (at increased pace and scale) the development roadmap started at OpenAI Nonprofit, which has yielded breakthroughs in reinforcement learning, robotics, and language.
The cited reason for the change is a need to raise “billions of dollars” and attract the best and the brightest with massive signing bonuses. But, aside from any divergence in basic motivation, OpenAI says it’s still following the same mission and will limit the amount investors and workers can make from it. (That’s the aforementioned “cap.”) In fact, investors in the first round are “only” allowed to earn up to 100 times their initial investment.
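The arithmetic of that cap is simple enough to sketch. What follows is a hypothetical illustration, not OpenAI’s actual legal formula: the investor keeps returns up to 100 times the stake, and anything above that flows to the nonprofit. The dollar figures below are invented for the example.

```python
# A toy illustration of the "capped profit" idea as described in the
# release: first-round investors earn at most 100x their investment,
# with any excess going to the nonprofit. Figures are hypothetical.

def split_returns(investment, gross_return, cap_multiple=100):
    """Split a gross return between the investor (up to the cap)
    and the nonprofit (everything above the cap)."""
    cap = investment * cap_multiple
    to_investor = min(gross_return, cap)
    to_nonprofit = max(gross_return - cap, 0)
    return to_investor, to_nonprofit

# A $10M stake that returns $1.5B: the investor keeps $1B (the 100x cap),
# and the remaining $500M goes to the mission.
print(split_returns(10_000_000, 1_500_000_000))
```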
The group put its need for capital down to the massive computing resources needed to run its data-crunching algorithms, as well as a desire to build its own AI supercomputers. In a demonstration of how sheer computing brawn can bring big advances, OpenAI last month demonstrated a language-producing system it had built that can construct coherent-sounding text from any prompt. The system works by analyzing mountains of text and then guessing which word is most likely to come next in any situation, turning writing into a statistical guessing game.
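The “statistical guessing game” described above can be shown in miniature. OpenAI’s system is a large neural network trained on mountains of text, but the core idea, predicting the likeliest next word from statistics over what came before, can be sketched with simple bigram counts over a toy corpus (the corpus and function names below are invented for illustration).

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, how often each following word appears."""
    words = text.lower().split()
    counts = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        counts[current][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the word most frequently seen after `word`, or None."""
    followers = counts.get(word.lower())
    if not followers:
        return None
    return followers.most_common(1)[0][0]

# Tiny stand-in for the "mountains of text" a real system analyzes.
corpus = ("the cat sat on the mat . the cat ate . "
          "the dog sat on the rug .")
model = train_bigrams(corpus)
print(predict_next(model, "the"))  # "cat" follows "the" most often here
```

A real language model does the same kind of prediction over far longer contexts, which is what lets it produce the coherent-sounding paragraphs described above.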
Here is what ValueWalk has to say about one reaction to OpenAI’s humanitarian work:
With artificial intelligence taking a major leap in recent years and estimates showing it will likely grow even more, we have seen many discoveries which could do more harm than good. One such example is the text-generator developed by OpenAI. The machine learning algorithm can turn only a small portion of text into lengthy and convincing paragraphs. Now MIT has collaborated with IBM’s Watson AI lab to develop a machine learning algorithm to fight AI-generated text like that generated by OpenAI’s algorithm.
Language models have now improved dramatically, leaving plenty of room for manipulation. In other words, people with malicious intents could use text generators to spread propaganda or false information.
Musk left the board of OpenAI Nonprofit in February 2018 and is not formally involved with OpenAI LP. Remaining are employees Greg Brockman (Chairman & CTO), Ilya Sutskever (Chief Scientist), and Sam Altman (CEO), and non-employees Adam D’Angelo, Holden Karnofsky, Reid Hoffman, Sue Yoon, and Tasha McCauley.
To quote Kurt Vonnegut, “So it goes.”—Ruth McCambridge