Many have suggested that OpenAI, the golden child of the generative AI industry, is currently experiencing a brain drain.
But the recent departures of CTO Mira Murati and AI safety advocate Miles Brundage reflect more than internal discord—they reveal a deeper shift in OpenAI’s priorities.
The organization started with a mission to build technology for the public good, but it is now running a race to dominate the market. As the company accelerates commercialization, human-powered governance—the oversight, creativity, and judgment only people can provide—is being sidelined. This shift risks not only misaligned technology but also the very trust required to innovate sustainably.
I have 35 years of experience advising the C-suite of Fortune 500 companies, and I believe OpenAI’s leadership crisis ultimately demonstrates that innovation without human oversight is a dangerous game. Here’s why:
From mission-driven to market-driven
OpenAI’s pivot from a nonprofit research lab to a commercial enterprise has reshaped its culture—and alienated key leaders. Murati, who once championed the mission to “build technology that benefits people,” recently left the company, reflecting growing frustration with OpenAI’s shift to product-first priorities. And Brundage’s departure further underscores this tension. On his way out, he urged employees to resist groupthink, a reminder that innovation thrives on diverse perspectives, not conformity.
These departures highlight a deeper challenge: The rush to commercialize AI often puts governance at risk. OpenAI’s disbanding of its “AGI Readiness team”—tasked with managing the risks of artificial general intelligence (AGI)—raises serious concerns. AGI systems, which could eventually act autonomously across industries, may seem distant, but the time to prepare is now. Failing to address these risks early makes it harder to implement safety protocols down the line.
Disbanding the AGI Readiness team signals a worrying shift: Without proactive oversight, safety becomes an afterthought. Prioritizing speed over safety opens the door to unintended harms that will inevitably erode trust in both the technology and the company.
Technology needs human oversight to thrive
AI systems excel at processing data, but they lack the empathy, ethical reasoning, and contextual understanding that only humans can provide. As powerful as AI tools are, they cannot independently navigate complex questions of fairness, privacy, or social responsibility. It takes human-powered leadership to ensure AI serves society’s needs without perpetuating harm.
The risks of unchecked technology are not hypothetical. Biased hiring algorithms and flawed facial recognition systems have already caused real-world harm, exposing companies to public backlash and regulatory scrutiny. If OpenAI continues to marginalize safety experts and dismiss dissent, it risks building powerful technologies that solve technical problems but create social ones.
Leaders are responsible for facilitating this human oversight, and there are several considerations they need to keep in mind.
Accountability drives innovation
Leaders must recognize that accountability drives innovation rather than stifling it.
Organizations like IBM and Microsoft offer valuable lessons in balancing governance and innovation, particularly in the realm of AI. IBM has developed AI governance frameworks, such as AI FactSheets, to ensure transparency in how algorithms operate. Its decision to abandon facial recognition technology over bias concerns reflects a willingness to sacrifice market opportunities for ethical responsibility.
Similarly, Microsoft’s AETHER Committee, alongside the Office of Responsible AI, embeds ethics into engineering processes, ensuring that oversight is built into product development from the ground up. Through multi-stakeholder efforts, these structures ensure that governance is treated as a core business function rather than an afterthought.
However, both companies have also faced challenges: IBM has been criticized for the lack of explainability in some of its AI-powered healthcare tools, such as Watson, and Microsoft has encountered privacy concerns over data handling practices in its cloud services and ethical dilemmas regarding the use of AI in defense projects.
These examples show that governance is not a one-time achievement—it’s an ongoing effort that requires continuous adaptation and prioritization by leaders. OpenAI’s leaders must embrace similar transparency and external oversight to ensure its technology aligns with public trust.
Trust is the currency of innovation
The case of OpenAI also demonstrates how vital trust is to innovation.
In the world of AI, trust is not optional—it’s essential. Companies that lose public confidence—like Meta, which faced backlash for privacy violations and misinformation—often struggle to recover. With the European Union advancing its AI Act, companies face increased pressure to demonstrate accountability. Without that trust, technologies like OpenAI’s may fail to gain the adoption and regulatory support needed to succeed.
OpenAI’s leadership turmoil threatens to erode the trust it has built. If talent continues to leave and safety concerns remain unresolved, the company risks becoming a cautionary tale: a reminder that even the most advanced technology can fail without responsible leadership that advocates for the humans that power it.
Governance is technology’s best ally
And ultimately, I believe OpenAI’s leadership crisis demonstrates that governance can be the ally, rather than the adversary, of technology.
The departures of Murati, Brundage, and others signal a critical moment—not just for OpenAI, but for the entire tech industry. Innovation without human oversight is reckless. To lead the future of AI, OpenAI must reintegrate human-powered governance into its strategy, embedding safety into every layer of development and fostering a culture where diverse perspectives are valued.
The future of AI won’t be defined by the speed of innovation—it will be shaped by the integrity, courage, and accountability of the people guiding it. I believe OpenAI still has the chance to lead, but only if it embraces governance not as a box to check, but as the compass that ensures technology serves humanity, not the other way around.