OpenAI: when philosophy enters the boardroom

Photo by Brian Ach/Getty Images for TechCrunch - Flickr

This week saw a remarkable example of how not to conduct boardroom business, with the dramatic dismissal and reinstatement of Sam Altman, CEO of OpenAI.

With some arguing that the outsized influence of Effective Altruism was behind this, Lucy Thompson unpacks the philosophy that has taken Silicon Valley by storm.

There is no prize for guessing which company is going to have the most awkward Christmas party this year. OpenAI, the darling of Silicon Valley, has had quite the week with the swift expulsion and reinstatement of CEO Sam Altman in just five days.

The case of OpenAI is an interesting one, not only because of the complex structure of its board, but also because of the controversial philosophy that some say is at the root of the company’s split.

The split in question is between the commercially driven ambitions of Altman and OpenAI’s investor Microsoft on one side, and the board members who pushed Altman out over his ‘aggressive push for faster AI progress’ on the other.

Some argue that Effective Altruism (EA) was behind the actions of the board and is at the root of their restraint over AI advancement. EA is fundamentally a utilitarian movement that aims to use research and reasoning to solve the world’s greatest challenges – AI among them – for the benefit of the maximum number of people.

It is grounded in the philosophy of Peter Singer, who is regarded as EA’s originator, and his famous 1972 essay ‘Famine, Affluence, and Morality’. The essay revolves around a philosophical scenario of a person stumbling across a toddler drowning in a shallow pond. Most people would agree that the person has a moral obligation to wade in and rescue the child, even if it meant ruining their £100 shoes. The point Singer makes is that we are all in the situation of the person passing the shallow pond – even if these children are in different countries and circumstances. We are spending money on superfluous goods (like £100 shoes) when we have a moral responsibility to use this money to save children who would otherwise die.

This has inspired the rise of ‘effective altruists’, a term coined by a small group of Oxford philosophers, including Toby Ord and William MacAskill. They seek to maximise the impact of charitable giving and are committed to the practice of ‘earning to give’, which involves giving a significant portion of your salary away. Since the more people earn, the more they can give away, MacAskill argues that if you want to have an impact, you would be better off becoming a millionaire banker or trader than a lowly paid aid worker, or even a doctor who may well save hundreds of lives.

This is the advice he gave FTX founder Sam Bankman-Fried, who was at the time a brilliant maths student looking to leave the world in a better place. What happened next to Bankman-Fried, whose charitable donations FTX’s administrators sought to claw back after the exchange collapsed, exposes one of the gaping flaws of the movement.

As Tyler Cowen of Bloomberg observes, there is a danger in judging acts only by their consequences: “Obligations to be honest, to be just, to be loyal, to respect property rights, and many more – count only to the extent that they bear on the happiness calculation. Effective Altruists are therefore obliged to say: Yes, stealing to give to the poor might be good”. Ultimately, the dogmatic nature of the movement can lead to extreme actions and the pursuit of dicey, high-risk bets like Bankman-Fried’s.

This movement has been thrust into the spotlight because of the significant backing it has among the billionaires in Silicon Valley. The support it has in this group is in many ways unsurprising. The movement does little to understand how power works and with its call to ‘earn more to give’ it is promoting the profit-making power structures that give rise to billionaires and Silicon Valley titans like Elon Musk (another endorser of the movement). Going back to Singer’s analogy, there is no suggestion that the pond the child has fallen into should have fences, with moral responsibility entirely falling to the passerby and the depth of their pockets.

So how did this radical philosophy infiltrate OpenAI, and why has artificial intelligence become a focus for the movement? Effective altruists have a binary view of what will happen when we reach AI superintelligence – it will either be a utopia, or humans will be wiped off the face of the earth. This comes under a branch of effective altruism called ‘long-termism’, which stresses the moral worth of future people and a responsibility to protect their interests.

Broadly speaking the movement does not believe AI advancement should be halted, but it has helped bankroll the “AI pause” letter calling for a moratorium on “giant AI experiments”, and has been a key influence on the UK government as it ramps up its focus on technology’s threats.

With its mission to “ensure that AI benefits all of humanity” and non-profit structure, OpenAI was supposed to be the counterweight to big tech’s profit-driven ideals and ensure that AI ultimately benefits people.

Many of the board’s members have ties to the EA movement, and the company’s controlling shareholder remains the nonprofit OpenAI Inc., governed by its board of directors. It is this unusual structure that enabled the board to oust Altman without investor input.

The irony of this situation is that the profit-making structures that effective, or seemingly ineffective, altruism fails to challenge can win out against its overall mission to keep AI development safe. Rightly or wrongly, at OpenAI the money and might of Microsoft ultimately trumped a philosophy with a strong focus on individual agency.

We seldom change institutions and practices on our own, and today Sam Altman is in the very same position he held at the start of the week thanks to the backing of 700 out of his 770 employees.

All this being said, there is room for philosophies in the boardroom. We see the workings of Aristotle in Paul Polman’s book Net Positive and his efforts to find Unilever’s ‘golden mean’ – the desirable middle between the extremes of excess and deficiency. We can see the traces of Immanuel Kant in the shaping of business practices that prioritise fairness, honesty, and transparency, and in the development of corporate social responsibility.

There are principles we can draw from EA – we should promote more rational donations to causes that benefit many, and even if it is not doomsday, we must be wary of the rise of AI for profit – but it is about balance.

In the case of OpenAI, this balance has not been found yet, raising serious questions about how purpose and profit will co-exist when it comes to AI. Profit has won today, but what does this mean for the AI of tomorrow?


By Lucy Thompson, Senior Associate at Audley
