by Andrew Grant
The OpenAI Debacle! What happens when your innovation vision and strategy are misaligned?
I was preparing an opening talk for Amazon Web Services at the time the sudden shakeup of the tech giants OpenAI and Microsoft erupted. The high-profile upheaval had resulted in confusion and chaos between two companies that are set to shape the future of our world.
In my opening presentation, I was planning to include a commentary on the breaking news that the OpenAI CEO Sam Altman had been fired. But each time I thought I had locked down the talk, an explosive new development emerged. After Altman was fired by the board, the vast majority of his team threatened to resign, Microsoft moved to poach them, the board was replaced, and Altman was reinstated at OpenAI! I had to wake up early that day to make sure I had caught the latest news and was fully up to date before committing to my opening spiel. Communication was so slow that not even Satya Nadella (CEO of Microsoft) seemed to fully know what was going on at the time.
Reflecting on this event, three key issues stand out that disrupted even the biggest of players. These issues relate to innovation vision and strategy, and they stem from a lack of understanding of the following three different yet interrelated topics:
- How innovation tensions threaten sustainable growth
- The need for an aligned innovation vision and strategy
- The need for clear communication about the innovation vision and strategy
I go through each of these challenges in this article, identifying what the specific core issues are and providing a metaphor for each to help explain the problem in more detail.
If it can happen to them, it can happen to any leadership team – no one is immune. Whilst this news hit the tech world with a bang, the fallout continued well after the initial shock. Our research shows that this tension is not a one-off and will always be present. The real question is, “How can this tension be managed so leaders can enable businesses to innovate sustainably?”
Debacle 1: Team “Speed” vs. Team “Slow”
How Innovation Tensions Threaten Sustainable Growth
The root cause of the initial crisis was the tension between two camps: Team ‘Speed’ (led by Sam Altman, the CEO) and Team ‘Slow’ (led by the board). Both camps had valid reasons for moving at the speed they advocated, but the strategies were neither adequately connected to the company’s Vision, Mission, and Values (VMV) nor addressed and discussed effectively. This was not a one-off event. In the following months, several other key leaders departed from the team tasked with ensuring that AI stayed aligned with the goals of its makers, rather than acting unpredictably and harming humanity. Jan Leike left OpenAI, stating that “safety culture and processes had taken a backseat to shiny products.”
This tension is not unique to OpenAI or the tech industry. In fact, our research shows that these tensions exist in every company, between those who want to race ahead and explore breakthrough innovations and those who are more risk-averse and prefer slower, more incremental change. This tension can either rip a team apart or, if managed effectively, fuel sustainable innovation. Leading innovation requires a very different skill set from being an innovator (or from being skilled at coding or running a business). My partner Dr. Gaia Grant has studied innovative leaders to identify how they can sustain innovation, and she has found that successful innovation leaders are ‘ambidextrous’: they can manage both the need for rapid, novel responses to challenges and the need for a slow, sustainable approach that builds a solid foundation for growth.
The metaphors of “building an airplane in the air” and “move fast and break things” are often used in the tech industry to describe its approach to innovation. In the case of AI, however, this approach to the ‘innovation race’ is becoming very dangerous. OpenAI tested ChatGPT on the public: on more than a million of us. No government would allow a drug company to do this. Those who build these systems are mostly a group lacking cognitive diversity: highly skilled technologists with minimal connection to the larger global population, and largely missing backgrounds in ethics, history, politics, or philosophy. So what has emerged has largely been raw and untested. Ideally a board would help to moderate this and check for risks, so that the tension between the two could be kept in check.
Innovation vs. Regulation: Accelerationists vs. Doomers
A never-ending discussion now ensues, involving businesses, governments and media, all attempting to comprehend the rapid expansion of AI. Camps are being set up between those who want to race ahead and explore techno-utopias and those who are being referred to as “doomers.” Perhaps the most important first step is to acknowledge this paradox and measure it. And maybe, if there was enough respect, then the tension between both parties could be used to pull a company into innovating sustainably.
The OpenAI team, happy again after the return of their hero, pictured here in a Silicon Valley office. This group might just be responsible for the future direction of the planet!
Debacle 2: Purpose vs Profit
The need for an aligned innovation vision and strategy
OpenAI was founded in 2015 as a non-profit organization with a noble mission: to ensure that artificial intelligence (AI) is aligned with human values and can benefit all of humanity. However, in 2023 the startup faced a major crisis that threatened its existence and reputation. What went wrong?
Despite its initial non-profit status, as costs grew OpenAI had to raise funds from investors who expected returns on their capital. In 2019, the startup created a for-profit entity called OpenAI LP, which aimed to commercialize its research and products. This move created a conflict of interest between the original mission and the profit motive.
OpenAI’s mission statement was vague, broad and conflicting, leaving room for interpretation and disagreement. What does it mean to align AI with human values? Whose values? How do you measure the impact and safety of AI? How do you balance the trade-offs between openness and secrecy, collaboration and competition, innovation and regulation?
These questions appear not to have been adequately answered by the founders, board and leaders of OpenAI, leading to confusion and frustration among stakeholders. As Scott Galloway, a professor of marketing at New York University, has written: “Serving ‘all of humanity’ was adorable until $90 billion distractions showed up and the management team and investors began avoiding eye contact with the original mission. Altman and the board were supposed to straddle that divide, but it proved impossible. If this was a battle between capital and (concern for) humanity, capital smothered humanity in its sleep.”
The metaphor of an out-of-date fire extinguisher on the wall illustrates the importance of having a clear, measurable and tested Vision, Mission, and Values (VMV). A VMV needs to be tested and updated regularly, especially in a fast-changing and uncertain field like AI. OpenAI failed to anticipate and prepare for the scenarios where its VMV would clash with reality, such as when its researchers created a powerful language model, GPT, that raised ethical and social concerns. Instead of relying on a VMV that had become outdated and untested, OpenAI should have kept it current and used it as a guide and a tool for decision-making and communication.
Debacle 3: A Lack of Transparency
The need for clear communication about the innovation vision and strategy
The lack of clear communication between Sam Altman and the board of directors no doubt contributed to the OpenAI crisis. Altman was fired by the board because, in its words, he was “not consistently candid in his communications.” Microsoft’s CEO Satya Nadella, who partners closely with OpenAI, was caught unaware when the crisis hit, so it was apparent the board had not been communicating with its key stakeholders.
The metaphor of crisis communication on Antarctic expeditions helps to explain why a lack of clear communication leads to challenges. Researchers who study the diaries of Antarctic expedition teams say that poor communication is the number one issue threatening the success of these teams, more than any other factor. In these contexts, a lack of clear communication can easily lead to dangerous misunderstandings with deadly consequences.
The lesson learned from the OpenAI debacle is that communication is not a luxury but a necessity for any organization. Communication needs to be considered at all stages of any process, and it should be considered critical to strategic planning and implementation. The ‘how, who, when, where, and why’ should all be considered. Sending a tweet into the stratosphere is not enough. Communication needs to be thoughtful, clear, detailed and transparent. It should engage with and involve all stakeholders, including the board, the CEO, the employees, the partners, and the public. Communication should become not just a tool but a cultural practice that needs to be cultivated and practiced every day.
OpenAI may have survived that potential collapse, but it is worth noting that the vast majority of the workforce threatened to resign and follow Altman when he was offered a job at Microsoft. The OpenAI debacle is worth noting for any leadership team.
Companies that want to ensure sustainable innovation should have a solid and consistent vision and strategy in place, and the messaging around it must be communicated clearly. Believing that a good idea, the right personality to deliver it to market, and great tech will suffice is a mistake. Sustainable innovation requires strong innovation leadership and consistent purpose, from the leadership team through to the board.
— — —
WE UNDERSTAND THE NEED FOR AN ALIGNED INNOVATION VISION & STRATEGY PLUS CLEAR COMMUNICATION
This case study is a reminder of how important and how closely integrated these core topics are. At Tirian we have been offering engaging, tailored solutions (workshops, keynotes, research, etc.) addressing all these core challenge areas for over 25 years, and we are happy to discuss how we can help address your organisation’s specific needs. Dr Gaia Grant’s research into this AI-driven paradox can measure the tension leaders face and help facilitate a process whereby all parties can learn to innovate sustainably. This can include a facilitated discussion on where AI fits into a company’s strategy, addressing dual competing demands, and how leaders can manage the process. https://tirian.com/key-topic-suites/
Innovate, Communicate, Narrate