OpenAI's board fired Sam Altman. What does it mean?
The most surprising CEO ouster in decades could have big implications for AI tools and safety.
Yesterday, OpenAI’s board fired CEO Sam Altman. He’s the most successful CEO of the decade: he’s responsible for the earthquake that ChatGPT sent through not just tech but the entire global economy, and he has built one of the world’s most valuable companies in just a few short years.
Note: If you haven’t been following the story closely, I suggest scrolling down and reading the Context section at the bottom of this newsletter first.
What happened? We don’t know yet for sure. But to cut to the chase, the story that makes the most sense to me is:
There was a growing rift on the board between a mission-driven “slow down before AI kills us all” faction and a more traditional VC/profit-driven “growth 4eva” faction. (See my most recent newsletter for more info about AI existential risk.)
Somehow, the slower-downers got a short-term upper hand, probably triggered by some medium-level screwup on Sam Altman’s part (as yet unknown to the rest of us).
They seized that opportunity to oust Altman and to push his fellow-growther Greg Brockman off the board, thus cementing their majority.
Why does any of this matter?
My guess is that we will look back at this moment in a couple of years and see major impacts in at least three different ways:
A huge test for nonprofit-led, mission-driven corporate structures. The two leading generative-AI-first companies in the world are OpenAI and Anthropic. Both were founded, and have been led, by people who say that AI poses existential risks to humanity. Both have unusual corporate structures (Anthropic’s here; more on OpenAI’s in the Context section below) in which a mission-oriented board has the power to remove the CEO of the for-profit entity, specifically to prevent the company from pursuing profits at the expense of the good of humanity. Whether or not OpenAI’s board made the correct decision for its mission in this moment, it does seem likely that they were trying to. If the lesson Silicon Valley takes away is “never again, no more capital for weird mission-driven corporate structures,” that is probably a bad thing for the world.
Determining who wins the AGI race. Instability at OpenAI (or an intentional shift toward more caution) makes it more likely that one of the Big 5 tech companies will reach artificial general intelligence (AGI) first. Ironically, I didn’t really trust OpenAI’s governance structure before. My trust that they can make hard decisions against the profit motive has now risen; but for the same reason, they are less likely to win the AGI race, so that trust matters less.
Effects on the AI tool landscape. This is less important in the big picture. But for clients of mine who have been considering big investments in enterprise-level AI tools, one of the barriers has been that the landscape is very volatile. It was already hard to tell which basket (ChatGPT Enterprise? Writer.com? Microsoft Copilot?) was the right place to put your eggs. Now we have yet another demonstration of how unpredictable the tool landscape still is; my guess is that in the parallel universe where the board didn’t fire Sam Altman, the tool landscape a year from now would look quite different than it will in ours.
Who will be added to the board?
A final thought I haven’t seen anyone else talking about: OpenAI is down to 4 board members now; presumably they will add more. I don’t think it’s hyperbole to say that the future of humanity could hinge on who gets added to the board.
Microsoft, which owns 49% of the for-profit subsidiary, will presumably be pressuring them very hard to add its own candidates, who are of the profit- and growth-focused school of thought. I’d really prefer that they instead add smart, savvy, mission-driven operators who understand both the peril and the promise of AI.
Three candidates I’d be thrilled to see considered, all of whom I’ve had the privilege of working directly with over the past year:
Alondra Nelson, former senior White House staff member with an AI focus; now with roles at the Institute for Advanced Study and the Center for American Progress.
Katherine Maher, CEO of Web Summit and former Executive Director of the Wikimedia Foundation.
Tom Perriello, former Congressman and head of US programs at the Open Society Foundations; now studying AI with a fellowship at Stanford and the Carnegie Endowment for International Peace.
One downside to my short list: they are all American. OpenAI is going to be making decisions that affect the entire world; its board should reflect and represent the entire world. I’d love to hear your suggestions in the comments!
Context, if you don’t already have baseball cards of every board member
OpenAI has an unusual corporate structure in which the for-profit is a subsidiary of a nonprofit. This diagram’s list of board members is out of date, but otherwise it’s the best visual aid I’ve found:
The OpenAI nonprofit’s stated mission is “to ensure that artificial general intelligence—AI systems that are generally smarter than humans—benefits all of humanity.”
Before yesterday, OpenAI had 6 board members, 4 of whom remain:
Employees:
CEO Sam Altman
President Greg Brockman (also previously chair of the board)
Chief Scientist Ilya Sutskever
Non-employees:
Helen Toner, of the Georgetown Center for Security and Emerging Technology
Tasha McCauley, CEO of small tech company GeoSim (yes, her Crunchbase page is the best bio of her online)
Adam D’Angelo, CEO of Quora
Microsoft (and other investors and tech industry bigshots) reportedly had no idea that any of this was coming. Microsoft especially has to be PISSED; it has tens of billions of dollars at stake in OpenAI, and Altman was doing his level best to make it an enormous profit. It’s hard to believe that any replacement CEO has as good a shot at getting Microsoft its 100x payoff.