Why OpenAI’s board fired CEO Sam Altman
To say it’s been a chaotic few days for the folks at OpenAI, a heavyweight in artificial intelligence development, would be an understatement. That certainly includes its now-former CEO, Sam Altman.
Here’s a quick recap. On Friday, the company’s board announced it had let Altman go, citing a lack of confidence in his “ability to continue leading OpenAI.” Several staff members then resigned and hundreds of others threatened to do the same if Altman wasn’t reinstated as CEO.
That option is pretty much moot now that Microsoft — a major OpenAI investor — has hired Altman to lead a new AI research team, along with former OpenAI President Greg Brockman, who resigned in solidarity.
Marketplace’s Lily Jamali spoke with Reed Albergotti, tech editor at Semafor, about what the dramatic ouster was really all about.
Reed Albergotti: The disagreement was really around AI safety. Based on conversations that I’ve had (and there’s been a lot of reporting out there about this), Altman was really the driving force behind commercializing this technology. And there are a lot of people who believe that this is potentially dangerous technology, that AI could one day, you know, threaten all of humanity. Ultimately, that’s what this was about.
Lily Jamali: Absolutely. And there’s this delicate balance, struck very much on the fly, I might add, between scale, which Altman and other AI developers want, and that safety piece, which you just mentioned. So did the pendulum swing one way or the other this weekend when it comes to that?
Albergotti: Yeah, I mean, ironically, and this is based on what I hear from the AI safety crowd — what some people call the AI doomers — it seems that in trying to oust Sam Altman in the name of safety, this board may have actually accomplished the opposite: it made the AI safety crowd look kind of, you know, foolish and bumbling, because this obviously was not a thought-out plan. And I think you’re seeing prominent people in the tech industry really speak out against this and specifically call out this movement, Effective Altruism, which, if you haven’t heard of it, is a popular movement within the tech industry. It started with giving to charity based on data and trying to be, you know, smart about how you give your money. But a really big part of that movement has become AI safety, this sort of existential risk posed by AI. And so that movement, at least for a big chunk of Silicon Valley right now, has really just become kind of a bad word at this point.
Jamali: Yeah. And two of the board members who were involved in ousting Sam Altman had been associated with that Effective Altruism movement. Let’s talk about oversight at OpenAI, because the board there belongs to a nonprofit that can decide the leadership of a for-profit entity. What’s weird about that is that investors don’t have the formal avenues they usually have to exert their influence. Why is the company structured this way?
Albergotti: Yeah, I mean, OpenAI started in 2015 as a nonprofit. It was actually co-founded by Elon Musk, with the idea that OpenAI was supposed to research AI and figure out how to make it safe, in order to stop, I guess, irresponsible development of AI. And it sort of changed, right? What happened was Elon Musk was really disappointed with the progress of OpenAI, and he walked away. And when he walked away, his money, his funding of this project, stopped. So at some point around that time, around 2018, Sam Altman and other people at OpenAI realized that the future was large language models, which is the technology behind ChatGPT. They’re so big, they require so much [computing] power, it’s incredibly capital-intensive, so they needed money. And Altman’s idea was, let’s start a for-profit company that, you know, reports to the board of the nonprofit, and that way we can raise money but still be true to our original mission. That’s kind of the original sin here: the structure was not really set up for a high-growth startup.

The other interesting wrinkle is that Altman did not take any equity. As far as I can tell, the reason was that he had invested in so many hot startups, and, you know, [spent] his time running this accelerator called Y Combinator, that he’d already made a bunch of money. And he also knew it would look bad to take equity in this for-profit when he had started it as a nonprofit. So actually, a lot of venture capitalists did not invest in OpenAI because they were worried that a CEO without any skin in the game [meant OpenAI] wouldn’t be incentivized to work in the interest of shareholders.
Jamali: It is remarkable. You know, when I first started reporting on Silicon Valley, Sam Altman was very well known there but not really anywhere else, and he is now the global face of AI. He’s the face on every article we see about it, it seems. So, the board that ousted Altman wrote that his lack of transparency in his interactions with the board undermined its ability to effectively supervise the company in the manner it was mandated to. So I want to ask you, Reed, what is the bigger takeaway here? Under Altman, was there too much attention centered on the business side and not enough on the implications of pushing deeper into AI? The safety implications, the implications for all humanity, if you will?
Albergotti: That is definitely the thing that got him ousted. And I think what Altman realized is that as soon as ChatGPT was launched and the world just became absolutely infatuated with this — it was the fastest-growing consumer internet product in history — that started a race. And then every company from Google to Microsoft to Amazon, to even Meta, was now racing to develop this technology. And so either the company is going to try to win that race and stay ahead and keep its lead, really, or, you know, it’s going to fall behind. And ultimately, the view of Altman, and I think a lot of people here, is that you have to be ahead in order to develop this technology in a safe way, right? Because it’s only the people who are on the cutting edge of developing the technology who will be able to kind of shape and steer its direction.
Jamali: Was OpenAI’s board “usurped” by its investors?
Albergotti: Well, I don’t know if it was usurped, but it was kind of sitting there watching this happen. And I think this had been bubbling up over time. Even the release of ChatGPT was kind of controversial within the company, right? Because people weren’t really sure whether it was ready yet. They weren’t sure how safe it was. And then you have the Developer Day, which now seems like eons ago — it was just a little over a week ago, or two weeks ago, sorry. They announced this “GPT store” where people could take their own idea for a chatbot, shape ChatGPT around it, and then put it up on the store for anyone to download and use. And I think even that was viewed by some people within the company as almost irresponsible, because who knows what’s going to happen? What are people going to do with that technology?
Jamali: Yeah. So I wonder, where does this leave OpenAI then? Microsoft has exclusive rights to use OpenAI’s models. So, as best you can tell, what is the dynamic now between Microsoft and OpenAI?
Albergotti: The dynamic is that Microsoft holds all the cards, really, after hiring Brockman and Altman. If the OpenAI staff moves over to Microsoft (and it seems likely the majority of the team will), then obviously it’s basically over for OpenAI. Another possibility is that OpenAI just continues on like it was before, developing this technology, acting in good faith and working with Microsoft. But I think if you play that out, what happens over the years is that nobody’s going to give OpenAI any more money. Obviously, they’re not acting in the interest of investors, and Microsoft’s really not going to make any additional big commitment. So over time they’ll just be burning cash, they’ll have to cut staff, and then, you know, eventually it will kind of fizzle out. So unless there’s some X factor that we don’t know about, there really is no long-term future for OpenAI at this point.
Jamali: Yeah, it sounds like this does not bode well for them. No doubt Microsoft didn’t want to lose Sam Altman from its ecosystem, and Brockman too, plus all these other people who were working at OpenAI. How competitive would you say the market is for AI executives at that level and, you know, for developers in the trenches?
Albergotti: Well, executives like Altman and Brockman could have done, honestly, whatever they wanted. The fact that they went to Microsoft is more because Microsoft has built this whole infrastructure around AI, and that takes a lot of time to build. So it’s a much quicker way to get to where you’re going. That investment in all the cloud computing and graphics processing units they’ve had to install in their data centers, that stuff has taken years. As far as the AI researchers go, the top researchers who really understand deep learning and can push this technology forward can name their price at any of these top companies. I was at Google last year just talking about how they recruit these people. I mean, Sundar Pichai, the Google CEO, is personally going in and recruiting these people at Stanford. It’s that big of a deal. So there aren’t a lot of them. There might be a lot of software engineers, but the people who can think in these almost philosophical, mathematical ways to push these algorithms forward are few and far between.
Jamali: We’re coming off the conviction of Sam Bankman-Fried, which was, you know, an attempt to bring about accountability, or, if you’re more cynical, perhaps the optics of accountability, for some of the tech industry’s past mistakes. So is there a sense that that’s what OpenAI’s board was after here?
Albergotti: You know, obviously, I haven’t spoken with these people so — I mean, I’ve spoken with some of them in the past, but not since this all happened. My read is that these are actually well-meaning people. They see this as a moral stand. And if you read the letter from employees that leaked, they talked about the board being willing to let the company collapse. I think that’s really interesting. I mean, they basically were saying, we would rather see OpenAI collapse than have it move forward in a way that we think is dangerous to humanity. They could be totally wrong, but I think they are actually genuine in their conviction.
Wild to think that it hasn’t even been a year since ChatGPT was released to the public. The launch anniversary is next week.
Wired has a piece on the company’s origins. Building artificial general intelligence — a machine that could do what the human brain can do — and making it safe for humanity was the original goal, Steven Levy wrote.
The people who work, or worked, at OpenAI assumed that “AI’s trajectory would eventually surpass whatever peak biology can attain.” Levy noted that the company’s financial documents even stipulate a kind of exit contingency if and when AI wipes out our whole economic system.