
by Matt Marshall — @mmarshall, VentureBeat — OpenAI’s announcement last night apparently resolved the saga that has beset the company for the last five days: It is bringing back Sam Altman as CEO, and it has agreed on three initial board members, with more to come. However, as more details emerge from sources about what set off the chaos at the company in the first place, it’s clear the company needs to shore up a trust issue that may bedevil Altman as a result of his recent actions.

It’s also not clear how OpenAI intends to clean up remaining thorny governance issues, including its board structure and mandate, which have become confusing and even contradictory. For enterprise decision makers watching this saga and wondering what it means for them, and for the credibility of OpenAI going forward, it’s worth looking at the details of how we got here. After doing so, here’s where I’ve come out: The outcome, at least as it looks right now, heralds OpenAI’s continued shift toward a more aggressive stance as a product-oriented business. I predict that OpenAI’s position as a serious contender in providing full-service AI products for enterprises, a role that demands trust and optimal safety, may diminish. However, its language models, specifically ChatGPT and GPT-4, will likely remain highly popular among developers and continue to be used as APIs in a wide range of AI products.

More on that in a second, but first a look at the trust factor that hangs over the company, and how it needs to be dealt with. The good news is that the company has made strong headway by appointing some very credible initial board members, Bret Taylor and Lawrence Summers, and putting some strong guardrails in place. The outgoing board has insisted on an investigation into Altman’s leadership, has blocked Altman and his co-founder Greg Brockman from returning to the board, and has insisted that new board members be strong enough to stand up to Altman, according to the New York Times.

Altman’s criticism of board member Helen Toner’s work on AI safety

One of the main spark points for the board’s wrath against Altman reportedly came in October, when Altman criticized one of the board members, Helen Toner, because he thought a paper she had written was critical of OpenAI, according to earlier reporting by the Times. In the paper, Toner, a director of strategy at Georgetown University’s Center for Security and Emerging Technology, included a three-page section offering a detailed and earnest account of how OpenAI and a major competitor, Anthropic, approached the release of their latest large language models (LLMs) in March of 2023. OpenAI chose to release its model; Anthropic, in contrast, chose to delay its model, called Claude, because of concerns about safety.

The most critical paragraph (on page 31) in Toner’s paper carries some academic wording, but you’ll get the gist: “Anthropic’s decision represents an alternate strategy for reducing ‘race-to-the-bottom’ dynamics on AI safety. Where the GPT-4 system card acted as a costly signal of OpenAI’s emphasis on building safe systems, Anthropic’s decision to keep their product off the market was instead a costly signal of restraint. By delaying the release of Claude until another company put out a similarly capable product, Anthropic was showing its willingness to avoid exactly the kind of frantic corner-cutting that the release of ChatGPT appeared to spur.” After complaining to Toner about this, Altman messaged colleagues saying he had reprimanded her because the paper was dangerous to the company, especially at a time when the FTC was investigating OpenAI’s usage of data, according to a source quoted by the Times.

Toner then reportedly disagreed with the criticism, saying it was an academic paper that researched the complexity, in the modern era, of how companies and countries signal their intentions in the market. Senior OpenAI leaders then discussed whether Toner should be removed, but co-founder Ilya Sutskever, who was deeply concerned about the risks of AI technology, sided with other board members to instead oust Altman for not being “consistently candid in his communications with the board.” All of this came after previous board frustrations with Altman over his moving too quickly on the product side, with other accounts suggesting that the company’s recent DevDay was also a major frustration for the board. Altman’s stand-off with Toner was not a good look, considering the company’s founding mission and board mandate: to create safe artificial general intelligence (AGI) to benefit “humanity, not OpenAI investors.”

This background helps to explain how the company came to its decision last night about the conditions of bringing Altman back. After days of back and forth, Toner and another board member, Tasha McCauley, agreed yesterday to step down from the board, the Times’ sources said, because they agreed the company needed a fresh start. The board members feared that if all of them stepped down, it would suggest the board was admitting error, even though they believed they had done the right thing.

A board primed for growth mission

So they decided to keep the one remaining board member who had stood by the decision to oust Altman: Adam D’Angelo. D’Angelo did most of the negotiating on behalf of the board with outsiders, including Altman and Emmett Shear, the interim CEO until last night.

The other two initial board members announced by the company, Taylor and Summers, have impressive credentials. Taylor is as Silicon Valley establishment as you can get: he sold a $50 million business to Facebook, where he became CTO, also served at Google, and later became co-chief executive of Salesforce. Summers is a former U.S. Treasury secretary with an excellent track record for steering the economy.

Which brings me back to the point about where this company is headed, or at least seems to be headed given the outcome so far: toward an awesome product company. You can’t really start with a more rock-star board than this, when it comes to growth orientation. D’Angelo, an early CTO of Facebook and co-founder of Quora, and Taylor have stellar product chops. Given the various cards each player had in this game, the outcome appears to have a certain logic to it, despite the appearance of a very messy process and apparent incompetence.

Jettisoning the two board members who had most espoused the philosophy of effective altruism (EA) also appears to have been a necessary outcome here for OpenAI to proceed as a viable company. Even one of the most prominent backers of the EA movement, Skype co-founder Jaan Tallinn, recently questioned the viability of running companies based on the philosophy, which is also associated with a fear of the risks AI poses to humanity. “The OpenAI governance crisis highlights the fragility of voluntary EA-motivated governance schemes,” Tallinn told Semafor. “So the world should not rely on such governance working as intended.” Whether Tallinn is actually correct on this point isn’t exactly clear. As the example of Anthropic shows, it may be possible to run an EA-led company. But in OpenAI’s case, at least, there was enough friction that something needed to change.

Diversity required

In its statement last night, the company said: “We are collaborating to figure out the details. Thank you so much for your patience through this.” The deliberation is a good sign, as the next steps will require that the company put together an expanded board of directors that is as credible as the first three, if the company expects to stay on its massive success trajectory. A reputation for fairness and thoughtfulness is critically important when it comes to AI safety. And diversity, of course: As a reminder, Summers was forced to resign as president of Harvard because of comments he made about the reasons for the under-representation of women in science and engineering (including the possibility that there exists a “different availability of aptitude at the high end”).

Conclusion

We’ll see over the next few days how the company puts the remaining pieces together, but for now it looks set to move toward a more established, for-profit, product direction. From our reporting over the last few days and months, OpenAI appears headed in the direction of working at scale for hundreds of millions of people, with general-purpose LLMs that millions of developers will love and that will be good at many tasks. But its LLMs won’t necessarily be capable of, or trusted with, the task-specific, well-governed, safe, unbiased, and fully orchestrated work that enterprise companies will need AI to do. There, many other companies will fill the void.