Sam Altman’s Return to OpenAI Is Another Warning

Sam Altman returned as CEO of OpenAI last week. (Creator: Justin Sullivan | Getty Images)

Last week, the tech and AI world was shaken by the drama surrounding Sam Altman, who was ousted as CEO of OpenAI on Friday, only to return a few days later. The initial ousting shocked the tech and AI industry, as Altman had famously led OpenAI in developing programs such as ChatGPT and DALL-E. The move rattled employees inside the company as well: more than 700 of OpenAI's roughly 770 employees threatened to resign over the board's "inexplicable" actions.

The company was on the brink of collapse as pressure and questions poured in on the OpenAI board from its employees and from Microsoft, whose $13 billion investment in OpenAI gives it an effective 49 percent stake in the company. Altman briefly appeared headed to a role at Microsoft, and his days with OpenAI seemed over. However, the drama took an unexpected turn when he made a shocking return five days later. On November 22nd, OpenAI announced on X (formerly known as Twitter), "We have reached an agreement in principle for Sam Altman to return to OpenAI as CEO with a new initial board of Bret Taylor (Chair), Larry Summers, and Adam D'Angelo."

The drama raised questions across the industry that go far beyond one company's hiring and firing decisions: Who were the people making these consequential choices that could determine so much of OpenAI's future? And what were their reasons for forcing Altman out in the first place?

To give some background, OpenAI has an unusual structure in terms of governance and power. It was founded as a nonprofit in 2015 by Sam Altman, Elon Musk, and others, with the goal of using artificial intelligence to "benefit humanity as a whole," as stated on OpenAI's website. Despite this mission, the organization created a for-profit subsidiary in 2019 as it began taking in billions of dollars from investors. OpenAI claimed this subsidiary would remain controlled by the nonprofit board, whose duty would be to "humanity, not OpenAI investors," but that arrangement has proved fragile.

OpenAI's investors have no formal way to influence decisions by the OpenAI board, which is highly irregular for a company of its size. Thus, when four members of the board––including OpenAI's co-founder and chief scientist, Ilya Sutskever––suddenly ousted Sam Altman, claiming that he no longer aligned with their mission, chaos was bound to follow.

The board's frustration grew with Altman's actions throughout the year, with Mr. Sutskever arguing that Altman needed to be reined in and kept honest as the company grew far beyond what anyone had expected. Gradually, the company became divided into two conflicting sides: the board versus Altman.

According to The New York Times, one October episode exemplifies this division. Altman rebuked a research paper co-written by Helen Toner, a board member, claiming that it criticized OpenAI's efforts to keep its technology safe. Mr. Sutskever unexpectedly and openly disagreed with Altman, taking the side of Ms. Toner and two other board members: Adam D'Angelo, chief executive of Quora, and Tasha McCauley, a senior management scientist at the RAND Corporation.

With each such episode, tensions increased, and board members worried that Altman's actions were self-serving. Mr. Sutskever claimed that Altman was not always honest in his dealings with the board. Other members worried that Altman focused primarily on expanding the company, while they wanted to balance growth with AI safety.

Ultimately, doubt over Altman's leadership rose to the point that several researchers wrote to the board warning of a significant AI breakthrough, according to Reuters. To address such concerns, the board has said it will examine ways to potentially change the company's unusual structure over the next six months.

All of this points to a bigger picture about AI's influence on the future. In a world where artificial intelligence carries more influence than ever, controlling this technology makes one among the most powerful figures in the tech world. Altman, having reclaimed the leadership of OpenAI after all his troubles, is in effect one of the most powerful men in AI. But the questions that circulated among OpenAI's members remain unanswered: How much power over AI should rest with Altman, and is it wise for the future of this life-changing technology to be accountable to just a few people?

Mr. Sutskever and the other board members must have pondered the same questions when they made their decision. They surely knew that ousting Mr. Altman would jeopardize the company for a time, but they also knew the threat that can follow if AI is not handled properly by leaders like Altman. With Altman's return, where does the future point? The answer may be uncertain, but one thing is clear.

AI is coming, and those handling the technology must work with care, caution, and ethics. Not simply for themselves, but for humanity as a whole.
