
Katharina Pistor shows why OpenAI's efforts to preserve its founding non-profit mission never stood any chance


Already on a long winning streak, capital has just scored another big victory in the clash over the ethics of artificial intelligence. In the drama over OpenAI CEO Sam Altman’s sudden firing and rehiring, a non-profit company with a mission to prioritise AI safety over profits has failed spectacularly to keep its for-profit offspring on a leash.

OpenAI, Inc. was founded in 2015 with the goal of ensuring that artificial general intelligence – autonomous systems that can outperform humans in all or most tasks – does not become uncontrollable, if and when it is ever achieved. AGI’s potential raises the same dilemma that Mary Shelley introduced in Frankenstein. Our creation might destroy us, but who can stop anyone from pursuing the fame, power, and wealth that “success” would confer? The Altman saga offers one answer: We cannot count on ethical rules, corporate-governance structures, or even principled governing board members to keep us safe. They tried, much to their credit, but it wasn’t enough.

Originally, OpenAI, Inc. sought to raise enough funds through donations to keep pace in a fast-developing and fiercely competitive field. But with only $130 million raised in three years, it fell far short of its $1 billion goal. It would need to turn to private capital while trying to preserve its original mission within an elaborate governance structure.

That meant creating two for-profit subsidiaries, with one wholly owned LLC serving as the general (managing) partner of its sibling within a limited partnership. Since limited partners do not have voting rights, OpenAI, Inc., acting through its wholly owned general partner, exercised all control over the partnership, at least in theory. The limited partnership then established its own LLC, OpenAI Global LLC, to attract private capital, including a $13 billion investment from Microsoft, which did not wield formal control rights. Finally, the original mission was secured by having several board members of the original non-profit double as employees of OpenAI Global LLC, including Altman as chief executive.

What could possibly go wrong? Everything, as it turned out. When the board decided to fire the CEO of its sub-subsidiary – apparently for what a majority of its members saw as conflicts between his ambitions and the company’s mission – the entire structure collapsed. Microsoft swooped in and offered to hire Altman and anyone willing to join him. That put OpenAI’s financial future at risk. As it had warned in its operating agreement, “Investing in OpenAI Global, LLC is a high-risk investment. Investors could lose their capital contribution and not see any return.”

That warning was no deterrent for Microsoft, which was less interested in dividends than in OpenAI’s products and the people developing them. Though Altman has since been reinstated at OpenAI, together with a new board that seems more likely to do his bidding, it is safe to assume that Microsoft will be the one ultimately calling the shots. After all, Altman owes Microsoft his job and the future of the company he runs.

For all the media coverage that this drama generated, it does not represent anything new. Historically, capital usually wins out when there are competing visions for the future of an innovative product or business model.

Consider all the ambitious promises that private companies have made to address climate change (presumably in the hope of avoiding regulation or worse). In 2022, Larry Fink, the CEO of BlackRock, the world’s largest asset manager, predicted a “tectonic shift” toward sustainable investment strategies. But he soon changed his tune. Having since downgraded climate change from an investment strategy to a mere risk factor, BlackRock now prides itself on ensuring “corporate sustainability.” If the board of a non-profit with a firm commitment (in writing) to AI safety could not protect the world from its own CEO, we should not bet on the CEO of a for-profit asset manager to save us from climate change.

Likewise, consider the even longer-running saga of broken promises for the sake of profits in private money-creation. Money is all about credit, but there is a difference between mutualised credit, or state money, and privatised credit, or private money. Most money in circulation is private money, which includes bank deposits, credit cards, and much more. Private money owes its success to state money. Without the state’s willingness to maintain central banks to ensure the stability of financial markets, those markets and the intermediaries populating them would fail frequently, bringing the real economy down with them. States and banks are the oldest example of “public-private partnerships,” promising to benefit bankers and society alike.

But winners like to take all, and banks are no exception. They have been granted the enormous privilege of running the money mint, with the state backing the system in times of crisis. As other intermediaries have figured out how to join the party, few states have been willing to reassert control, lest they trigger capital flight. As a result, the financial system has grown so large that no central bank will be able to resist the call for yet another bailout the next time a crisis looms. The party always continues because sovereigns dance to capital’s tune, not the other way around.

It is no surprise that OpenAI failed to stay on mission. If states cannot protect their citizens from the depredations of capital, a small non-profit with a handful of well-intentioned board members never stood a chance.


Katharina Pistor, Professor of Comparative Law at Columbia Law School, is the author of The Code of Capital: How the Law Creates Wealth and Inequality (Princeton University Press, 2019). Copyright: Project Syndicate, 2023, and published here with permission.


2 Comments

Once again, if we're not careful, the future will be less Star Trek and more Elysium, Player Piano, Squid Game, and They Live. Stephen Hawking warned as much when asked:

"Have you thought about the possibility of technological unemployment, where we develop automated processes that ultimately cause large unemployment by performing jobs faster and/or cheaper than people can perform them? Some compare this thought to the thoughts of the Luddites, whose revolt was caused in part by perceived technological unemployment over 100 years ago. In particular, do you foresee a world where people work less because so much work is automated? Do you think people will always either find work or manufacture more work to be done?"

His reply:

"The outcome will depend on how things are distributed. Everyone can enjoy a life of luxurious leisure if the machine-produced wealth is shared, or most people can end up miserably poor if the machine-owners successfully lobby against wealth redistribution. So far, the trend seems to be toward the second option, with technology driving ever-increasing inequality."


They are on the wrong track towards AGI anyway. Amazing things have been and will be done with code running on computers with an IQ of zero, trained on vast amounts of data we have captured over decades. But impressive tricks like that don't cut it for AGI.

I am convinced the way to AGI is through hardware wired up to inputs that emulate an animal's nerves, letting intelligence build up in layers on top of those primitive drives. I can't see much progress in that direction. The fast money is in selling chatbots and things that write essays, etc.
