How OpenAI Showed Silicon Valley’s Ugly Side
Flashback: What happened at OpenAI in November
If you are one of the fortunate few who happened to be away hiking in Patagonia that weekend, here’s a quick summary:
Ilya Sutskever, one of the big-brains behind OpenAI, sided with OpenAI board member Helen Toner on her concerns about Altman’s leadership, which were serious enough to convince the whole board - minus Greg Brockman - to push him out.
In the mind-bending series of events that followed: Brockman stepped down in solidarity with Altman, and through coordinated heart-posts on X (and an open letter) pretty much the entire team signalled that they would do the same. Altman and Brockman briefly floated the idea of a new venture until Satya Nadella, CEO of Microsoft, swept in and offered to scoop them all up.
Finally, the smoke cleared, and it seemed like Altman had secured a win. The problematic board was cleared out (with the exception of Ilya, who now expressed regret for his initial position) and replaced with a group of ‘more serious people’.
This is where the story ended. The engineers went back to work, and the EAs and the e/accs went back to bickering about exactly how to address the impending supremacy of AI.
If you come at the king…
Like the final season of Game of Thrones, the end of this saga was underwhelming, confusing, and full of unanswered questions.
Wasn’t the corporate structure of OpenAI designed to prevent exactly this kind of scenario? In years past, Altman had used the board’s authority to reassure skeptics about his position in the organisation. “The board can fire me, and I think that’s important,” he told an interviewer at a Bloomberg conference.
What caused Ilya’s change of heart? What assurances was he given about the future of the company, the direction of its research, or its overall strategy?
Why did Ilya and Helen want to remove Altman in the first place? A few very generic statements were made to rule out the obvious causes, but nothing that actually points to a reason.
What about Altman’s co-founder and donor, Elon Musk, who has been a vocal critic of OpenAI’s recent direction? Why was he unusually reluctant to weigh in on the debate?
Unfortunately, we don’t know the answers to any of these questions, which is in itself a huge red flag.
What we saw unfold over that weekend should trouble anyone who believes in OpenAI’s stated mission. Not only are Musk’s concerns about OpenAI’s closed, for-profit pivot valid, but it also turns out that - when necessary - Altman can overpower the board with his leverage over employees and shareholders. He can do so from the shadows, with no explanation, no investigation, no transparency.
Hooked on acceleration
Follow the money, and you’ll find the incentives that explain the actions.
An observer might ask why OpenAI, as a non-profit foundation, ever took money from venture capitalists. The capped-profit entity owned by the foundation explains it in part, since it allows OpenAI to operate as a business… but a capped return is still inconsistent with the outsized, fund-returning investments that VCs look to make. Clearly, those early backers, like Andreessen Horowitz and Khosla Ventures, expected OpenAI to become a monster long before it was clear to the public.
Similarly, you might ask why OpenAI issues such huge grants ($500k) of ‘profit participation units’ (a form of equity compensation) to employees. If this were meant to be a non-profit, largely R&D-driven endeavor, why make ‘profit’ the primary metric of success and reward for your workforce?
Unfortunately for Toner, who tried to carry out her duty as a board member after seeing OpenAI go wildly off mission, these new profit- and growth-driven incentives had already spread their roots throughout the company. Any threat to Altman made in the name of ‘more responsible development’ threatened that growth, and with it the financial future of every employee and shareholder.
In general, these incentives are a net positive. Were this any other company, they would contribute to a fast-moving, motivated organisation under Altman’s respected leadership. What is inexplicably weird is that this was allowed to develop under the guise of a non-profit, funded by donors.
The future of AI governance
We can hope that a stronger board of more engaged figures will help keep OpenAI on the right track and deliver an outcome that satisfies both the cautious and the keen. A board with some real heavyweights from the world of tech, with the clout to guide Altman. A board like the OpenAI board of 2019, but with firmer commitment. Even then, it would operate under a ‘Sword of Damocles’: Altman’s willingness to walk away with the team, and Nadella’s willingness to provide them a home.
The ultimate solution for a company with the funding, mission, and responsibility of OpenAI might be more drastic; there’s a very good argument that OpenAI should be a public company.
Over the years, Musk has expressed regret for taking Tesla public and flirted with the idea of reversing the move. Mostly that has to do with short sellers, a phenomenon unique to public companies. The other obligations, like publishing quarterly statements and reporting to shareholder meetings, have all helped build trust in the Tesla brand through added transparency. Those additional duties certainly don’t appear to have slowed Tesla down.
In fact, the argument that OpenAI should go public to ensure transparency was raised years ago. At the time, the counter-argument was that public companies have a duty to ‘maximize shareholder value’ by generating profit, which should not be a priority for OpenAI. Today, it appears that no such conflict exists anyway, as OpenAI is already focused on maximizing revenue. Nor is there any real legal precedent binding OpenAI to maximize profit, should it decide to change strategy.
No matter how arcane the corporate structure, ambitious private companies with private investors are fundamentally biased towards profit and rapid growth. Public companies generally face less extreme growth expectations and are held to standards of transparency and accountability. That seems like a far more robust and comprehensive solution to OpenAI’s dilemma, and a possible future for all organisations like it.