The Mission Bay Incident

While the race to catch up with GPT-4 continued to accelerate, an unexpected event would shake OpenAI to its core and expose the internal tensions that had been quietly building inside the leading organization.

Let’s move to November 2023. On Friday the 17th, OpenAI’s board of directors did the unthinkable: they removed Sam Altman as CEO, on the grounds that they had “lost confidence” in his leadership and that Sam had not been “consistently candid” in his communications. The implications, motives, and consequences are enough to fill an entire movie. In fact, one is already being made.

Among the most frequently cited rumors was that Sam Altman wanted to iterate on the product at a faster pace, sidelining the safety work on which Ilya Sutskever placed much of his focus. According to unconfirmed accounts, Mira Murati had been secretly leaking chats, emails, and internal information to Ilya for over a year—details about actions Sam was taking that both of them considered misguided. Over time, they convinced themselves—and eventually the board—that removing Sam was the right decision. Even as late as 2025, new details continue to surface about what happened behind the scenes to trigger the incident.

The news spread around the world and caught the entire community off guard, spawning a wide range of opinions, speculation, and memes—my personal favorite being: “What did Ilya see?”

But to summarize what were days of absolute chaos: after hundreds of OpenAI employees threatened to resign en masse and Microsoft offered to build Sam, along with his entire entourage, a whole organization and lab inside Microsoft itself, the board was left with no real choice but to reinstate him just five days after his dismissal, only to be replaced itself shortly thereafter.

This episode demonstrated something many of us had long suspected but were not yet ready to accept: OpenAI was no longer that non-profit organization devoted to developing AI for the benefit of humanity, governed by a board free of conflicting interests and guided by what was best for the world. Instead, it had become a company like any other, subject to power dynamics among founders and the pressure of investors.

SSI (Safe Superintelligence): Ilya’s Lab

The consequences of the November incident did not take long to materialize. Several months later, in May 2024, the first of two major splinterings within OpenAI occurred: the resignation, or more precisely the indirect dismissal, of Ilya Sutskever, co-founder and chief scientist of the company. We speak of an “indirect dismissal” because, according to multiple sources, after the failed attempt to remove Sam Altman, Ilya was effectively isolated within the company, with reduced access to resources and virtually no influence.

Ilya’s response was to create his own startup, Safe Superintelligence Inc., or simply SSI, with a single stated objective: to build superintelligence safely, without any intermediate products. SSI is arguably the most secretive lab in the industry to date.

Although the specific details of their work remain undisclosed, some directions can be inferred from interviews Ilya has given over the past couple of years. Unlike the industry’s dominant approach of ever-larger, increasingly general models, SSI appears to be pursuing something fundamentally different: not absolute intelligence, nor a hyper-versatile generalist like today’s LLMs, but a system (model and learning algorithm) that behaves more like a human teenager. That is, a system that, without being an expert in everything from the start, can be deployed in different environments and learn to perform specific tasks in the real world. In this vision, Ilya has repeatedly emphasized the importance of a powerful value function (the mechanism that allows a model to evaluate which actions bring it closer to its goals) as the key element underlying human learning capability.
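
As a rough reference point, and only that (this is the textbook reinforcement-learning formulation, not anything SSI has published), a value function estimates how much cumulative reward an agent can expect if it keeps acting from a given state under its current policy:

$$V^{\pi}(s) = \mathbb{E}_{\pi}\left[\sum_{t=0}^{\infty} \gamma^{t}\, r_{t} \,\middle|\, s_{0}=s\right]$$

The better this estimate, the sooner the agent can tell whether a line of action is promising or doomed, which is exactly the kind of self-evaluation Ilya points to as central to how humans learn.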

SSI’s opacity is striking. Other details about the startup remain tightly guarded: its stage of development, the exact size of the team, the specific techniques being explored. The few data points that have surfaced are fragmentary: the team is small (estimated at a couple dozen researchers), much of it comes from Israel, they are using Google TPUs for their compute infrastructure, and their valuation, according to 2025 reports, has already reached 30 billion dollars.

Some say—rightly so—that if the history of AI were a movie, Ilya Sutskever would be its protagonist.

Thinking Machines Lab: Mira’s Project

The second major split would arrive just over a year after the Sam Altman incident. In February 2025, Mira Murati, who had served as OpenAI’s CTO until her departure a few months earlier, unveiled her own venture: Thinking Machines Lab.

Unlike Ilya’s quiet departure, Mira’s exit was a sizable exodus. She took with her a significant number of key OpenAI researchers, as well as select talent from other competing labs.

Without having launched a single product, Thinking Machines Lab secured an estimated initial valuation of 10 billion dollars. In the age of AI, pedigree and promises are worth almost as much as results.

But Mira and her team would not remain as mysterious as Ilya for long. Months after raising their initial capital, they began publishing a series of blog posts sharing genuinely interesting research: they explored the deep causes of non-determinism in LLMs (that seemingly random behavior that persists even at temperature 0), introduced a novel optimization algorithm variant called Manifold Muon, developed advanced techniques using LoRA (Low-Rank Adaptation, a method for efficiently adapting large models) for LLM fine-tuning, and shared new approaches to model distillation.
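
To make the LoRA idea concrete: instead of updating a full weight matrix during fine-tuning, you freeze it and learn only a small low-rank correction on top of it. A minimal PyTorch-style sketch of that pattern (illustrative only; not Thinking Machines’ actual code, and the rank and scaling values are arbitrary):

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a linear layer: frozen base weight W plus a trainable low-rank update B @ A."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # the original weights stay frozen
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)  # low-rank factor A
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))        # B starts at zero, so training begins from the base model's behavior
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scale

# Only A and B receive gradients, a tiny fraction of the original parameter count.
layer = LoRALinear(nn.Linear(4096, 4096), rank=8)
trainable = [p for p in layer.parameters() if p.requires_grad]
```

Because only the two small matrices are trained, the memory and compute cost of fine-tuning drops dramatically, which is what makes serving many customized variants of a single base model economically viable.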

Who would have thought? They ended up being more transparent and more “open” than today’s OpenAI.

However, Thinking Machines Lab’s most significant launch was neither a paper nor a model, but a commercial service: Tinker. This is an API-accessible platform that allows companies and developers to perform customized fine-tuning of LLMs, still in private beta as of late 2025. You control the algorithm and the training data; they provide all the computational infrastructure and scalability.
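
In spirit, the workflow looks something like the sketch below: the customer owns the data and the training decisions, while the provider owns the hardware. To be clear, the names here are hypothetical stand-ins, not Tinker’s actual API, which is only documented to its private-beta users:

```python
# Hypothetical fine-tuning-as-a-service client; illustrative only, not the real Tinker SDK.

class RemoteFinetuneClient:
    """Stand-in for a hosted service: it receives data and hyperparameters
    and would run the heavy lifting on the provider's GPUs/TPUs."""
    def __init__(self, base_model: str):
        self.base_model = base_model

    def submit(self, examples: list[dict], lora_rank: int = 8, epochs: int = 1) -> dict:
        # A real service would stream batches to remote accelerators;
        # this mock simply records the request.
        return {
            "base_model": self.base_model,
            "num_examples": len(examples),
            "lora_rank": lora_rank,
            "epochs": epochs,
            "status": "submitted",
        }

# The customer controls the training data and the knobs; the provider runs the job.
examples = [
    {"prompt": "Translate to French: hello", "completion": "bonjour"},
    {"prompt": "Translate to French: cat", "completion": "chat"},
]
client = RemoteFinetuneClient(base_model="some-open-weights-llm")
job = client.submit(examples, lora_rank=16)
print(job["status"])  # -> "submitted"
```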

And in the AI world of 2025, a handful of blog posts and a working API are enough to open the doors to the next funding round. According to estimates, this round would place Thinking Machines Lab at a valuation of 50 billion dollars.


Be that as it may, this would not be the first time a seed fallen from OpenAI’s tree has germinated and flourished spectacularly in its own soil.

Right, Dario?


Author’s Note:

  1. The final question refers to Dario Amodei, co-founder of Anthropic and former Vice President of Research at OpenAI, who left the company in 2021 along with other senior researchers to found Anthropic.