“The thing people forget about human babies,” mused Sam Altman, the entrepreneur whisperer turned artificial intelligence diviner, to The New Yorker’s Tad Friend in 2016, “is that they take years to learn anything interesting.” Tough but fair, although any babies reading this oughtn’t feel too embarrassed: Altman pointed out elsewhere in the piece, which was titled “Sam Altman’s Manifest Destiny,” that we grown-ups aren’t too quick on the uptake ourselves.
“There are certain advantages to being a machine,” said Altman, the 38-year-old who—with the hectic exception of the past five-ish days; more on that in a moment!—has been the high-profile and highly influential CEO of OpenAI since 2019. His company recently flirted with an implied valuation of $80 billion; it is behind products like the smart image generator DALL-E and the beguiling large language model chatbot ChatGPT. “We humans are limited by our input-output rate—we learn only two bits a second, so a ton is lost,” Altman told Friend. “To a machine, we must seem like slowed-down whale songs.”
It’s easy to imagine a tech leader like Altman sympathizing with the plight of such a bot. When you’re a guy who likes to operate not just on a different wavelength from most other mortals, but in a whole nother realm of consciousness—one in which the goal of achieving AGI, or “artificial general intelligence,” is considered possibly world saving or world ending, depending on who is doing the extrapolations—those brisk whirs of industry tend to resonate better than humanity’s low, musical moans.
Why, just the other day—last Thursday, to be specific—Altman sat onstage at the APEC summit and described a recent experience that had left him positively vibrating with wonder. “On a personal note,” he told interviewer Laurene Powell Jobs and the rest of the APEC audience, “just in the last couple of weeks, I have gotten to be in the room when we sort of, like, push the veil of ignorance back and the frontier of discovery forward.” I’ve heard people use this kind of language to describe, like, the glory of childbirth, but in Altman’s case, he was describing the arrival of a different little bundle—lines of code on a computer that could go on to change the world.
And yet, even the sleekest, purringest, many-billion-dollar flywheel can get smoked by a dumb, sudden bird strike; even the deepest-dwelling whales can surface at random and upend a vessel. Why, just the other day—last Friday, to be specific—the OpenAI board of directors abruptly decided it would be prudent to fire its CEO into the sun. And so, without telling anyone, including its publicly traded partner and mega-investor Microsoft, it went ahead and did it, with a ruthlessness that might have pleased the machines if everything hadn’t turned out so aggressively, humanly awkward instead.
It’s always jarring when a real story feels fake, when no one quite buys what you’re telling them. Sometimes, the very people most familiar with a story are the ones most moved to try to explain things via shared fiction.
Even among the techno journos and cyber doomers and network statists and “See, corporate governance matters!” nerds who have been glued to the sudden goings-on and votings-out at OpenAI—even among those of us who are terminally online enough to have tuned in eagerly last Friday to a highly speculative and information-light Twitter Spaces event about Altman’s odd ousting cohosted by Martin Shkreli; ask me how I know—we couldn’t help but notice that the past five days have unfolded like something you’d find on TV.
Like an episode of Succession! Like a whole season of Succession, I should say, with enough rapid twists and U-turns in the power struggle timeline to make GoJo seem slo-mo by comparison. On Wednesday morning, when I woke up to the news that we’d reached a finale and Altman was coming back to OpenAI as CEO, my rotted brain could only think about Tom Wambsgans saying to Kendall Roy: “I’ve seen you get fucked a lot, and I’ve never seen Logan get fucked once.” And when I learned that Altman’s return involved a board of directors shake-up that installed both former Salesforce co-CEO Bret Taylor and former jetsetting Harvard president and compulsive opiner Larry Summers (?!?) … I mostly thought about how ol’ “Lawrence of Absurdia” would have been quite the character on Silicon Valley. (“Larry sucks up, and he bullies down” has the makings of a Russ Hanneman motivational speech, you know?)
But mostly, all this time, I’ve thought about Survivor: specifically, one of those humdingers where the tribal council has started but there are still 24 minutes left in the episode. Just consider that, between the close of the stock market’s trading hours at 4 p.m. Eastern time Friday and the opening of the stock market’s trading hours at 9:30 a.m. Monday, all of the following happened:
- OpenAI’s board of directors—a riddle wrapped in a mystery inside an enigma, which sits on the original nonprofit side of the organization but has absolute control over the newer for-profit side too, due to a once-idealistic, now-unusual corporate governance structure—announced that Altman was out. It informed him of this decision via Google Meet; it informed most of the rest of the world via a press release that cryptically described Altman as having been “not consistently candid in his communications with the board.”
- Into this absence of information flowed many theories. The abruptness of the decision suggested the worst. On the Twitter Spaces event I joined, Shkreli posited that perhaps it had something to do with a recent New York magazine story titled “Sam Altman Is the Oppenheimer of Our Age,” in which Altman’s sister, Annie, spoke out about her estrangement from her brothers and followed up on past accounts of familial abuse. (The hosts of the Twitter Spaces event concluded that this explanation for Sam’s ouster seemed less likely once big names in Silicon Valley began speaking out with public statements of support for him.)
- Greg Brockman, OpenAI’s president and a member of the board who was also blindsided by a vote of removal, bid adieu in protest.
- Another theory behind the decision began to take hold around social media and hasn’t quieted since: that the board of directors had fired Altman out of some sense of moral duty because members felt or knew that he was being too cavalier, or maybe too commercial, with the technology’s rate of veil-lifting, frontier-pushing growth. Was this an attempt to keep OpenAI from breaking with its nonprofit origins and expanding its for-profit operations? Was it a way of keeping the company from iterating its way into the brave new world of actual AGI too soon? It’s not unusual for a board of directors to make decisions based on an organization’s mission or first principles or founding charter. But when that mission is related to the very future of mankind, the stakes are slightly raised.
If it’s true that they torched an $80 billion company cause they thought they were too close to building God, then that’s orders of magnitude the most punk rock thing I’ve ever heard of.— Joe Weisenthal (@TheStalwart) November 18, 2023
- Terms like “doomers” (used to describe fretful people who regard the potential of AGI with dread), “safetyists” (self-explanatory), and “decels” (people who think we should just sloooow down, man, before someone gets hurt) were all over my timeline, deployed with varying amounts of derision or respect.
People make fun of academic jargon, but the phrase "decel safetyist" is currently being uttered by dozens of perfectly respectable people in the worlds of business and tech. Everybody is ridiculous.— Phillip Maciak (@pjmaciak) November 20, 2023
- An October tweet from board member Ilya Sutskever, who was said to have delivered the news to Altman, resurfaced and was widely analyzed for clues: “if you value intelligence above all other human qualities,” he had written, “you’re gonna have a bad time.”
- Altman posted “I love you all” on Twitter; followers with big Swiftie energy pointed out that the first letters of each word spelled out ILYA.
- Elon Musk, who cofounded and named OpenAI in 2015 and had served on the board for a time (along with Shivon Zilis, a former Yale hockey goalie who has worked at Tesla and Neuralink and who is also the mother of one of Musk’s sets of twins), stoked the existential crisis flames. He retweeted Sutskever’s quote; “I am very worried,” Musk added. “Ilya has a good moral compass and does not seek power. He would not take such drastic action unless he felt it was absolutely necessary.”
- OpenAI’s chief operating officer, Brad Lightcap, wrote an internal memo viewed by several media outlets that explained all the reasons that weren’t behind Altman’s firing: The move “was not made in response to malfeasance or anything related to our financial, business, safety, or security/privacy practices,” Lightcap wrote. “This was a breakdown in communication between Sam and the board.” About what, he did not say.
- The Verge and other outlets reported that Altman was in talks to return to the company. Soon after, he posted a photo of himself wearing an OpenAI guest badge. An OpenAI employee posted a photo of Altman taking said selfie, as proof of life.
first and last time i ever wear one of these pic.twitter.com/u3iKwyWj0a— Sam Altman (@sama) November 19, 2023
- OpenAI made an announcement confirming that Altman would not be returning as CEO—because the company had made a new indefinite-term hire. A warm welcome to onetime Microsoft intern Emmett Shear: the former CEO of Twitch, a noted Harry Potter fan, and one hell of a reply guy. Shear, a self-described safetyist/doomer who also seemed not to know exactly why his predecessor had gotten got, vowed to launch an investigation immediately.
- Microsoft CEO Satya Nadella released a scorcher of a corporate communication around midnight Pacific time Sunday, expressing optimism about the company’s many-billion-dollar investment in OpenAI and adding that, oh, by the way, he had decided to hire Altman and Brockman into Microsoft directly so they could start a new in-house artificial intelligence research team and also that, oh, by the way, Big Clippy would be happy to hire any of the hundreds of OpenAI employees who sought to follow their former leaders to the BigCo. This was bonkers stuff. (Also, something about the glint of “We look forward to getting to know Emmett Shear” made my blood run cold.)
- OpenAI employees loyal to Altman—including Mira Murati, who had ever so briefly been interim CEO—flooded Twitter with heart emoji and the line, “OpenAI is nothing without its people,” which sounds precisely like the kind of thing a scheming AI would say to butter us tenderhearted humans up. (I did see someone on Twitter joke that maybe if he joined in and tweeted the line too, he could slip right into a seven-figure job at Microsoft undetected.)
- ILYA TWEETED THAT HE DEEPLY REGRETTED HIS PARTICIPATION IN THE BOARD’S ACTIONS AND WROTE THAT HE NEVER INTENDED TO HARM OPENAI. (?????) (!!!!!!!!)
- SAM RETWEETED ILYA’S TWEET AND ADDED SOME HEART EMOJI.
- As reported by Kara Swisher, a petition went around imploring the remaining board holdouts—one of whom was the CEO of Quora, because of course—to step down or face the mass resignation of what would eventually be something like 95 percent of OpenAI employees. ILYA SIGNED THE PETITION. (It’s unclear whether he was clad in a hot dog suit at the time.)
And that accounting of the weekend absurdity doesn’t include the most Silicon Valley detail of them all, the one so on the nose it seemed scripted, but only because it happened after the opening bell:
- The CEO of a “smart mattress” company called Eight Sleep tapped into the mainframe and emerged with some data: Few people in San Francisco got a good night of sleep on Sunday. Maybe it’s because they’re being surveilled by their mattresses?
Breaking news: The OpenAI drama is real.— Matteo Franceschetti (@m_franceschetti) November 20, 2023
We checked our data and last night, SF saw a spike in low-quality sleep. There was a 27% increase in people getting under 5 hours of sleep. We need to fix this.
Source: @eightsleep data
The breakneck pace of updates continued once the workweek got under way: There were lots of reports about meetings; more OpenAI employees signing the petition; wives doing work; Salesforce’s Marc Benioff getting roasted; Shear trying and failing to learn why Altman got sacked in the first place; things of that nature. For a time, Altman existed in a sort of quantum state, employed (though not quite yet) by Microsoft and fired from (but still looming over) OpenAI. On Tuesday night, a New York Times story noted the deep rift between Altman and some of the members of the board—one of whom, Helen Toner, had criticized OpenAI in an academic paper she wrote and had also said that the company, and the mission, and humanity, could be better off without Altman.
I fell asleep thinking this might last for a while, feeling sorry for tech reporters whose Thanksgiving might be ruined. And when I woke up, Sam was back.
I know some readers might be thinking: What’s up with all the Sams? And you know what? They’re right to wonder. Because there really are a number of similarities between Altman and another Sam of recent yore—Bankman-Fried—whose fraud trial I spent my October observing.
Both have totally aptronymic last names, if you think hard about it, man. Bankman-Fried had a disagreement with a business partner named Tara Mac Aulay that led to a professional schism; Altman had a disagreement with a now-former board member named Tasha McCauley that led to Friday’s professional schism. (As a side note, McCauley married Joseph Gordon-Levitt in 2014, which I understandably have no parallel for, but it feels essential to mention.) Both had game-changing moments while on hikes just outside San Francisco: Bankman-Fried charmed Michael Lewis into writing a book, while Altman “relinquished the notion that human beings are singular” and began thinking more deeply about the power and might of simulating intelligence. (So, like, same, except exactly the opposite.) Bankman-Fried named his investment firm Alameda Research in an attempt to sound less crypto-y; Altman had an early entity he called Hydrazine, named after the compound used in rocket fuel.
And both Sams ultimately became well-known and willing avatars for their respective nascent industries, always ready to don those little nude nub microphones they hand out at tech conference panels and opine about P values and the future of crypto or AI. They may not have written the code underlying their ventures, but they sure spoke the media’s lingua franca. (Wait, were they the personality hires?!) In their own ways, they cultivated press relationships: Bankman-Fried’s attention to his own narrative was so deliberate that the prosecution used it against him in court, while Altman’s rapport with some reporters may have helped him this weekend, as one of them opined.
But the other quirky Samilarity is that both of their ascents had ties to effective altruism, the rationalist-adjacent worldview that seeks to define, quantify, and ultimately encourage the actions that can do the most good for all of humanity—both now and in the future. For Bankman-Fried, effective altruism was, at least nominally, an ethical framework that compelled him to seek greater and greater sums of money and encouraged him to take bigger and unwieldier financial swings. (He struck out.)
Altman’s engagement with EA is murkier. On Twitter, a coalition of shitposters, venture capitalists, and chaos slurpers—whoa, everything really IS (a) securities fraud and (b) college football—have started half-jokingly calling themselves “effective accelerationists,” or “e/acc,” of late, a salvo against what they consider to be the gloomier-and-doomier EA types. Altman offers glimpses of futures that both EA believers and e/acc trolls want, and some in the latter group have interpreted his reverse-Grandpa Simpson as a sign that perhaps he shares their merrier approach to AI R&D. Whether he actually does is something I assume we’ll find out when our strawberry overlords come to town.
While the Altman drama was in full flux, much of Silicon Valley hearkened back to its most notorious founder ousting of all time: that of Apple’s Steve Jobs, a farewell so famous that Uber’s Travis Kalanick later tried to turn the breakup into a verb. “If only twitter had been around during the john sculley / steve jobs conflict,” wrote Founders Fund principal Delian Asparouhov (recently described as “the man speed-running the new space race”). “History is so much more interesting when you watch it play out live on a timeline.”
It wasn’t just the firing of Jobs that was relevant to Altman’s situation, though. It was the way his eventual return only enhanced his power and influence.
Walter Isaacson’s biography of Jobs quotes him in 1983, two years before the split with Apple, when he recruited Sculley away from PepsiCo with this winning pitch: “Do you want to spend the rest of your life selling sugared water, or do you want a chance to change the world?” But in the real world, Jobs and Sculley clashed over the disappointing sales of, among other products, the Macintosh. An attempt by the Apple cofounder to appeal to the board of directors following a demotion led instead to his departure from the company. “I am but 30 and want still to contribute and achieve,” Jobs wrote in a parting letter to the company’s vice chairman.
A little over a decade later, at the end of 1996, Apple was floundering, and Jobs was brought (and bought) back into the fold. At the time, I was a teenage employee of an online chat company with Apple roots (one that counted Sculley as a board member and investor), as well as a huge Apple dweeb who handled the return of Jobs like a Marvel fan glimpsing a bygone fave in a mid-credits scene. When Apple debuted its “Think different” campaign in the fall of 1997, I downloaded a grainy QuickTime of the ad and watched it again and again.
By then, the company was back on the rise. Earlier that summer, I had attended the Macworld expo in Boston, where Jobs went on stage and made a pivotal announcement about a big, stabilizing $150 million investment from … Microsoft. Jobs was but 42, and still had a whole lot to contribute and achieve; to doom and bless the world with. Or, as he might’ve put it, he had a few more one more things up his sleeve.
Altman’s exile, depending on whether you calculate the end of it as his show of support from Microsoft or his return to OpenAI specifically, lasted roughly between one-twentieth and one-tenth of 1 percent as long as Jobs’s did. But it included a larger, undefined number of heart emoji tweets, that’s for sure. Like Tom Sawyer and Huckleberry Finn sneaking into their own funeral service, both Altman and Brockman got to observe an enormous amount of employee support for their leadership. Now, back atop the company, they get to figure out what to do with it, and how to ensure that all this goodwill doesn’t break bad.
Now that the will-they-or-won’t-they nature of this story has given way to (some?) clarity, this is the biggest focal point surrounding OpenAI’s future. In a pair of televised appearances Monday evening, Microsoft’s Nadella had amiably and CEO-ishly hedged about whether he thought Altman would wind up in-house at Microsoft or whether he’d be able to return to OpenAI. “I’m open to both options,” he said on CNBC. “One thing I will not do is stop innovating.” (He’s running!) Over the past few days, Microsoft served as an important backstop for OpenAI, a sort of employer-of-last-resort during what felt like the HR version of a bank run. In exchange for Nadella’s trouble, it stands to reason that OpenAI’s new board—which, at the moment, consists of just three people: Taylor, Summers, and the Quora CEO Adam D’Angelo, who already had a seat—will have a much friendlier and likely more commercial relationship with the company that provides all that computing power in addition to capital. And Microsoft will presumably at some point want to push for a board seat of its own.
There are two other parts of the 2016 New Yorker story that feel especially relevant today. The first is a quote from the venture capitalist Paul Graham, a longtime Altman colleague and advocate with a long track record of finding Altman to be formidable. “Sam is extremely good at becoming powerful,” Graham told Friend in the story. It echoed something that Graham wrote back in 2008, linking to a video of Altman presenting his Gossip Girl–approved app, Loopt, at an Apple developers conference while wearing two polo shirts with popped collars: “Sam Altman has it. You could parachute him into an island full of cannibals and come back in 5 years and he’d be the king.”
The second resonant part of that New Yorker story is an anecdote about a leading AI researcher from Google visiting Altman and Brockman. The researcher asks them—I mean really asks them—how they would define OpenAI’s goal. Brockman’s answer is classic Silicon Valley (the show), and classic Silicon Valley (the place). “Our goal right now … is to do the best thing there is to do,” he declares. “It’s a little vague.”
What isn’t as vague is that, going forward, OpenAI is well and truly Altman’s baby—a baby with a much scarier and speedier learning curve than our human ones have. These past few days have been filled with everyone talking over one another—investors, founders, and observers alike. But to the machines, it was all just background noise, some distant hum of human discord. Sometimes you eat the whale, and sometimes the whale eats you.