What went on with OpenAI executives this week?

25 November 2023 03:30

The dramatic management feud at OpenAI, the tech company behind tools such as ChatGPT, serves as a case study in the complexities and challenges inherent in developing and governing AI technologies. The Economist has reported on the calls for a broader discussion of the role of corporate structures and the need for effective regulatory frameworks. Caliber.Az reprints this article.

"Five very weird days passed before it seemed that Sam Altman would stay at OpenAI after all. On November 17th the board of the maker of Chatgpt suddenly booted out its chief executive. On the 19th it looked as if Mr Altman would move to Microsoft, OpenAI’s largest investor. But employees at the startup rose up in revolt, with almost all of them, including one of the board’s original conspirators, threatening to leave were Mr Altman not reinstated. Between frantic meetings, the top brass tweeted heart emojis and fond messages to each other. By the 21st, things had come full circle.

All this seems stranger still considering that these shenanigans were taking place at the world’s hottest startup, which had been expected to reach a valuation of $90 billion. In part, the weirdness is a sign of just how quickly the relatively young technology of generative artificial intelligence has been catapulted to fame. But it also holds deeper and more disturbing lessons.

One is the sheer power of AI talent. As the employees threatened to quit, the message 'OpenAI is nothing without its people' rang out on social media. Ever since ChatGPT’s launch a year ago, demand for AI boffins has been white-hot. As chaos reigned, both Microsoft and other tech firms stood ready to welcome disgruntled staff with open arms. That gave both Mr Altman and OpenAI’s programmers huge bargaining power and fatally undermined the board’s attempts to exert control.

The episode also shines a light on the unusual structure of OpenAI. It was founded in 2015 as a non-profit research lab aimed at safely developing artificial general intelligence (AGI), which can equal or surpass humans in all types of thinking. But it soon became clear that this would require vast amounts of expensive processing power, if it were possible at all. To pay for it, a profit-making subsidiary was set up to sell AI tools, such as ChatGPT. And Microsoft invested $13 billion in return for a 49% stake.

On paper, the power remained with the non-profit’s board, whose aim is to ensure that AGI benefits everyone, and whose responsibility is accordingly not to shareholders but to 'humanity'. That illusion was shattered as the employees demanded Mr Altman’s return, and as the prospect loomed of a rival firm housed within profit-maximising Microsoft.

The chief lesson is the folly of policing technologies using corporate structures. As the potential of generative AI became clear, the contradictions in OpenAI’s structure were exposed. A single outfit cannot strike the best balance between advancing AI, attracting talent and investment, assessing AI’s threats and keeping humanity safe. Conflicts of interest in Silicon Valley are hardly rare. Even if the people at OpenAI were as brilliant as they think they are, the task would be beyond them.

Much about the board’s motives in sacking Mr Altman remains unknown. Even if the directors did genuinely have humanity’s interest at heart, they risked seeing investors and employees flock to another firm that would charge ahead with the technology regardless. Nor is it entirely clear what qualifies a handful of private citizens to represent the interests of Earth’s remaining 7.9bn inhabitants. As we published this article, new reports suggested that a replacement board would in time be appointed that included Mr Altman and a representative from Microsoft. However, personnel changes are not enough: the firm’s structure should also be overhauled.

Fortunately for humanity, there are bodies that have a much more convincing claim to represent its interests: elected governments. By drafting regulation, they can set the boundaries within which companies like OpenAI must operate. And, as a flurry of activity in the past month shows, they are paying attention to AI. That is just as well. AI technology is too important to be left to the latest corporate intrigue."

Caliber.Az