Supremacy Lessons: How Sam Altman and Demis Hassabis Rewired the Race for AGI
Vision ignites the journey — but only adaptability, timing, and narrative control decide who survives. What builders can learn from two of AI’s fiercest architects as ambition collided with survival.
1. Introduction: Why Supremacy Matters to Builders
The race to build Artificial General Intelligence (AGI) is not merely a technological competition — it is a struggle for control over the architecture of the future itself.
AGI promises to reshape global economies, redefine political power, and transform the fundamental fabric of human life.
Who builds it first — and how they navigate the brutal pressures of survival, governance, and public perception — will determine who dominates the next century.
Supremacy, written by veteran technology journalist Parmy Olson, offers an unparalleled look inside this high-stakes race through the stories of two pivotal figures: Sam Altman and Demis Hassabis.
Both started with soaring missions to create AI for the good of humanity.
Both tried, at least initially, to resist the gravitational pull of Big Tech.
And both, in very different ways, ended up confronting the brutal reality that ambition alone is not enough.
This review focuses squarely on what Altman and Hassabis’s diverging paths reveal for today’s builders — not merely as historical curiosities, but as living case studies in the hardest challenges of frontier innovation:
How missions drift under pressure,
How speed and timing can redefine industries,
How public narrative control becomes as important as technical execution,
And how governance failures can destabilize even the most idealistic organizations.
Here’s what their rivalry teaches us — and why every founder, product builder, and strategist operating in high-stakes environments should be paying attention.
2. The Architects: Sam Altman and Demis Hassabis
The story of AI supremacy begins with two profoundly different founders — shaped by their personal histories, motivations, and visions for what intelligence could be.
2.1 Sam Altman: Architect of Survival and Narrative Control
Early Life and Formative Experiences
Sam Altman grew up in a middle-class Jewish family in St. Louis, Missouri.
His father instilled in him a public-spirited ethos: the importance of helping others.
As a high school student, Altman displayed early leadership traits — captaining the water polo team, editing the yearbook, and coming out as openly gay at a time when it was not widely accepted.
He built early networks of support and learned how to align with power while still challenging authority.
Education and Early Entrepreneurial Ventures
At Stanford University, Altman’s interests spanned far beyond computer science, including humanities and creative writing.
He honed his skills in psychological strategy at the poker table, funding much of his college life with his winnings.
Dropping out to pursue entrepreneurship, Altman co-founded Loopt, a location-based social networking app, funded through Y Combinator.
Though Loopt raised significant funding and secured carrier deals, it ultimately failed to achieve breakout success.
Altman internalized two key lessons:
Emotional detachment from outcomes.
Facing uncomfortable truths openly — such as discussing Loopt’s privacy problems with the press.
Rise at Y Combinator
After selling Loopt, Altman became a part-time partner and later president of Y Combinator.
As YC president, he expanded its ambitions, pushing founders to pursue "hard tech" startups and billion-dollar ideas.
Altman cultivated a personal network across Silicon Valley’s most powerful figures, investing through his fund Hydrazine Capital.
Obsession with AI and the Founding of OpenAI
Altman became increasingly captivated by the potential of AI to replicate — and surpass — human intelligence.
Concerned about AI being monopolized by players like Google, he co-founded OpenAI in 2015 with Elon Musk and others.
OpenAI’s mission was initially framed as: build AGI and share its benefits with humanity, organized explicitly as a nonprofit to resist financial incentives.
Relationship with DeepMind and Hassabis
A mini "cold war" developed between Altman and Demis Hassabis at DeepMind.
Altman viewed Hassabis as "uncooperative" and insufficiently concerned about AI’s existential risks.
Altman aggressively recruited DeepMind engineers, personally interviewing candidates to align them to OpenAI’s mission.
Evolution of OpenAI and Strategic Pragmatism
After Elon Musk’s departure and the loss of major funding, Altman pivoted OpenAI’s structure:
Creating a capped-profit entity ("OpenAI LP") to allow external investment while preserving a nonprofit shell.
He secured a massive partnership with Microsoft, fundamentally transforming OpenAI’s financial base and operational control.
Leadership Style and Narrative Mastery
Altman is described as charming, strategic, and emotionally detached.
He specialized in identifying what mattered most to people — and delivering it to win loyalty.
He carefully controlled information flow, using controversy and selective disclosures to generate mystique and strategic leverage.
Views on AI Risk, Altruism, and the Future
Altman publicly acknowledged the existential risks posed by AGI, aligning himself loosely with the effective altruism movement.
He advocated that AGI could unlock immense global wealth — projecting visions of "$100 trillion in new value" to be shared with humanity, even though specifics remained vague.
Privately, he took steps to prepare for global catastrophes and existential threats, building personal resilience plans.
The November 2023 Power Struggle
In November 2023, Altman was briefly fired by OpenAI’s board — citing issues around transparency and trust.
Massive internal staff support, investor pressure, and Microsoft’s backing led to his rapid reinstatement — and the ousting of the board members who had opposed him.
This incident underscored the immense de facto power Altman had built, despite OpenAI’s supposed nonprofit structure.
2.2 Demis Hassabis: Architect of Scientific Ambition
Early Life and Influences
Demis Hassabis grew up in North London, born into a half-Cypriot, half-Singaporean family of creative professionals.
A child prodigy in chess, Hassabis was defeating adults by age four and ranked second in the world for under-fourteens.
He became obsessed with video games as simulations of real life, drawing deep inspiration from "god games" like Populous.
At seventeen, he helped design the commercially successful simulation game Theme Park, showing early entrepreneurial drive.
Philosophically, Hassabis was fascinated by the search for underlying truths — blending interests in religion, physics, and the question of whether science could uncover the mysteries of life and reality.
Reading Dreams of a Final Theory by Steven Weinberg reinforced his ambition to use AI to unlock the universe’s fundamental secrets.
Early Career and Elixir Studios
Hassabis pursued his passion for gaming, joining Bullfrog Productions under Peter Molyneux.
He later founded Elixir Studios, aiming to create complex, AI-driven simulation games like Republic: The Revolution.
Despite technical ambition, Elixir struggled commercially:
Republic was seen as too complex and not fun for players.
Hassabis realized that building better AI — not more intricate games — was his true calling.
Founding and Leading DeepMind
In 2010, Hassabis co-founded DeepMind with Shane Legg and Mustafa Suleyman.
While Legg was motivated by merging humans with machines, and Suleyman by solving societal problems, Hassabis’s primary goal was scientific discovery — perhaps even finding "God" through understanding intelligence.
Hassabis believed in Turing’s concept that the brain is essentially a machine — and that if we could simulate it, we could unlock profound truths.
DeepMind’s Mission and Culture
DeepMind operated under intense secrecy and scientific rigor, aspiring to be the Bell Labs of AI.
The company's motto: "Solve intelligence, and then solve everything else."
Hassabis emphasized simulated environments as training grounds for AI, rather than public deployment.
Successes like AlphaGo and AlphaFold reinforced his belief that a slow, methodical approach would lead to true breakthroughs.
Acquisition by Google and the Push for Independence
DeepMind was acquired by Google for $650 million, under conditions including:
A prohibition on military use,
And a commitment to establishing an independent ethics and safety board.
Hassabis later attempted to turn DeepMind into a "global interest company," a nonprofit-like structure inside Alphabet — but was ultimately overruled as Google’s commercial interests intensified.
Rivalry with OpenAI
Hassabis quietly resented OpenAI’s approach to releasing language models publicly, fearing it risked unsafe proliferation.
He initially dismissed OpenAI’s emphasis on language generation, believing simulation-based training was a superior path to AGI.
However, after ChatGPT’s explosive success, DeepMind was forced to pivot and race to build its own large language model: Gemini.
Leadership Style and Company Culture
Hassabis was described as deeply charming but more distant than Altman — often secluded in his office or high-level meetings.
DeepMind maintained a strictly hierarchical, academically prestigious culture, heavily emphasizing peer-reviewed publications and internal secrecy.
Shift Toward Business and Realism
With the merger of Google Brain and DeepMind into Google DeepMind in 2023, Hassabis assumed leadership of the combined entity.
He acknowledged that earlier dreams of independent ethics boards had been "slightly too idealistic," accepting Google's internal governance structures.
While Hassabis still believes AGI could transform society — unlocking vast scientific, economic, and social advancements — he now frames these ambitions as more of a "hobby" outside of his day-to-day role running Google’s AI empire.
3. Strategic Divergence: How Their Paths Shaped the AI Race
Although Sam Altman and Demis Hassabis shared a common goal — building Artificial General Intelligence (AGI) — their approaches diverged sharply across mission structuring, organizational culture, technology bets, and strategic risk appetite.
These early differences ultimately determined who would set the pace of the AI revolution.
3.1 Mission vs. Survival: How Early Ideals Eroded Under Pressure
Both OpenAI and DeepMind began with high-minded missions centered on safety, benevolence, and human flourishing.
Yet both ultimately had to compromise — restructuring their organizations to survive and scale.
DeepMind’s Idealism and Its Erosion
DeepMind’s founding mission was clear: "Solve intelligence, and then solve everything else," with strong ethical safeguards.
The company negotiated an ethics and safety board as a condition of its Google acquisition, intended to provide independent oversight of AGI development.
However, in practice, the board was largely powerless — composed of Google executives, lacking external authority.
Over time, DeepMind was increasingly integrated into Google's commercial operations, prioritizing shorter-term product applications over fundamental scientific research.
OpenAI’s Idealism and Its Transformation
OpenAI launched in 2015 as a nonprofit committed to building AGI safely and sharing its benefits with the world.
Its founding philosophy stressed freedom from financial obligations — a bulwark against commercialization pressure.
But after Elon Musk’s departure and the loss of key donors, OpenAI faced serious financial constraints.
Altman led a restructuring to form OpenAI LP, a "capped-profit" entity that allowed external investment — notably securing $1 billion from Microsoft.
Deep technical integration with Microsoft followed, as OpenAI licensed its models to support Azure’s growth, raising questions about the company’s evolving independence.
Strategic Takeaway for Builders:
Mission statements alone don't guarantee resilience.
Embedding a mission into governance, legal structures, and funding terms is essential to preserving idealistic goals under survival pressure.
3.2 Speed vs. Perfection: Why Timing Beat Technical Rigor
One of the clearest divergences between OpenAI and DeepMind was their attitude toward deployment speed.
DeepMind’s Methodical Approach
DeepMind maintained a scientific mindset, preferring to deploy technologies like AlphaGo and AlphaFold only after years of careful validation.
Hassabis's focus was publishing peer-reviewed breakthroughs, not rushing products to public release.
DeepMind insisted on training AI in controlled simulated environments, avoiding early exposure to real-world unpredictability.
OpenAI’s Iterative Culture
OpenAI, under Altman, fostered a startup-like culture of rapid experimentation.
The team often described their process as "throwing spaghetti on the wall" to discover viable applications.
Engineering decisions prioritized speed and public feedback over academic perfection.
This philosophy led to early launches of models like GPT-2 (limited release) and ChatGPT (public release) — even while recognizing internal flaws like hallucinations and bias.
Strategic Takeaway for Builders:
In fast-moving technology markets, shipping early often outweighs waiting for technical perfection.
Capturing user mindshare and iterative feedback can create first-mover advantages too large for slower, more cautious competitors to overcome.
3.3 Narrative Control as a Strategic Moat
OpenAI’s success was not just technical — it was narrative.
OpenAI’s Mastery of Public Storytelling
Even as it deepened its ties to Microsoft and shifted toward commercialization, OpenAI continued positioning itself publicly as a safety-first, humanity-driven lab.
Sam Altman's public statements framed OpenAI as a force for good, committed to distributing AGI wealth broadly — even if internal transparency and openness diminished over time.
Despite growing secrecy, OpenAI maintained public credibility as an ethical innovator, softening regulatory threats and enhancing its talent pipeline.
DeepMind’s Scientific Focus, Public Reluctance
Hassabis largely avoided crafting public narratives, preferring DeepMind’s breakthroughs to speak for themselves.
While this preserved scientific credibility, it meant DeepMind often failed to shape broader cultural and regulatory narratives around AI.
Strategic Takeaway for Builders:
Narrative control is not optional in frontier industries.
Public trust, regulator goodwill, and cultural positioning can become moats just as powerful as technological lead time.
4. Breakout Technologies: How Innovation Choices Built Advantage
While leadership styles and strategic philosophies shaped OpenAI and DeepMind’s trajectories, it was their technical bets — and how they deployed them — that crystallized their advantages or exposed their vulnerabilities.
The story of transformers, GPT-2, and ChatGPT shows how organizational culture, risk appetite, and speed translated into real-world market leadership.
4.1 The Transformer Revolution: Google's Missed Opportunity, OpenAI’s Seizure
The divergence between OpenAI and Google over transformer technology began with two radically different cultures — and visions for its potential.
The Early Context
At Google’s Mountain View headquarters, transformer architecture was treated cautiously, tucked away in a "metaphorical cupboard."
Meanwhile, at OpenAI’s grayer, colder San Francisco office, researchers were giddily excited about its potential for language generation.
At Google, the transformer was first applied to incremental improvements — better translation, better search indexing.
At OpenAI, figures like Ilya Sutskever immediately recognized a chance to revolutionize machine language understanding and generation.
OpenAI’s Decoder-Only Innovation
Initially, OpenAI researcher Alec Radford saw Google's publication of the transformer paper as a crushing blow — assuming Google would dominate.
But Radford and Sutskever quickly realized that Google had no immediate plans to commercialize transformers for generative AI.
OpenAI made a radical design choice:
Eliminate the encoder.
Build a decoder-only model that could both understand and generate text in a single streamlined system.
This "decoder-only" architecture became the foundation for all future GPT models.
Scaling Up: "Can You Make It Bigger?"
Sutskever’s simple but powerful strategy:
Feed the model more data.
Add more parameters.
Scale compute relentlessly.
Radford reported making more progress in two weeks under this new philosophy than in his previous two years.
This culminated in the first Generative Pre-trained Transformer: GPT.
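As a rough illustration of what "make it bigger" means in practice, the sketch below uses a standard back-of-the-envelope approximation for decoder-only transformers (about 12 x n_layers x d_model^2 weights in the attention and feed-forward layers, plus a vocab_size x d_model embedding matrix). The configurations are the commonly reported GPT-1 and GPT-2 sizes, included purely as an illustration rather than figures taken from the book.

```python
# Back-of-the-envelope parameter count for a decoder-only transformer.
# Rule of thumb: ~12 * n_layers * d_model^2 weights in attention + MLP layers,
# plus a vocab_size * d_model embedding matrix (shared with the output head).

def approx_params(n_layers: int, d_model: int, vocab_size: int) -> int:
    attention_and_mlp = 12 * n_layers * d_model ** 2
    embeddings = vocab_size * d_model
    return attention_and_mlp + embeddings

# Commonly reported configurations (illustrative assumptions, not book data):
gpt1 = approx_params(n_layers=12, d_model=768, vocab_size=40_000)
gpt2 = approx_params(n_layers=48, d_model=1600, vocab_size=50_257)

print(f"GPT-1-scale: ~{gpt1 / 1e6:.0f}M parameters")    # ~116M
print(f"GPT-2-scale: ~{gpt2 / 1e9:.2f}B parameters")    # ~1.55B
print(f"Scale-up factor: ~{gpt2 / gpt1:.0f}x")          # ~13x
```

The arithmetic shows why "Can you make it bigger?" was a coherent research strategy: quadrupling the width and depth multiplies the parameter count by more than an order of magnitude, and the same recipe can simply be run again.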
Meanwhile at Google
Google used transformers to improve Google Translate and build BERT, a breakthrough in search query understanding.
Noam Shazeer, co-inventor of the transformer, developed Meena, a chatbot he believed could replace Google Search entirely.
However, Google’s leadership hesitated:
Releasing Meena could cannibalize Google’s profitable search business model.
Sundar Pichai and senior executives pulled back, prioritizing business protection over transformational risk.
The Aftermath
OpenAI surged ahead, turning transformers into the core of generative AI’s global explosion.
Inside Google, frustration mounted:
All eight original inventors of the transformer left.
Many founded startups (like Character.ai) now collectively valued at over $4 billion.
Strategic Takeaway for Builders:
Inventing a technology isn’t enough.
Organizations willing to disrupt themselves — and move fast — often outpace the inventors who hesitate to act.
4.2 GPT-2: Power, Secrecy, and the Birth of Mystique Marketing
If transformers gave OpenAI its architectural foundation, GPT-2 showed OpenAI how to control public perception — and use secrecy strategically.
Development and Breakthroughs
GPT-2 was more than ten times larger than GPT-1, with 1.5 billion parameters compared with GPT-1's roughly 117 million.
GPT-2 was trained on forty gigabytes of internet text — vastly broader and noisier than GPT-1's BooksCorpus.
This scaling leap allowed GPT-2 to generate more complex, believable human-like text.
Concerns and Limited Release
OpenAI leadership recognized that GPT-2 was powerful — but also risky.
In February 2019, OpenAI took a rare cautious step:
Releasing only a small version of GPT-2 publicly.
Publishing a blog post, "Better Language Models and Their Implications," highlighting the risks of misinformation.
This level of caution would be rare in OpenAI’s future commercial phase.
Newfound Secrecy
GPT-2 marked the start of new secrecy at OpenAI:
The training dataset, WebText, was scraped from Reddit-linked webpages with three or more upvotes.
Later investigations revealed it included over 272,000 documents from unreliable news sources and 63,000 posts from extremist-linked subreddits.
This raised early alarms about bias, misinformation, and ethical blind spots baked into future models.
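The WebText construction rule described above is simple enough to sketch. The snippet below is a purely hypothetical illustration of the filtering idea (the data and field names are invented, and it does not call any Reddit API or reproduce OpenAI's actual pipeline): keep an outbound link only if the submission that posted it earned at least three karma, treating upvotes as a crude proxy for content humans found interesting.

```python
# Hypothetical illustration of WebText-style link filtering (invented data;
# not OpenAI's pipeline). Rule: keep an outbound URL only if the Reddit
# submission linking to it received at least 3 karma.

KARMA_THRESHOLD = 3

submissions = [                                   # (url, karma), purely invented
    {"url": "https://example.com/longread", "karma": 57},
    {"url": "https://example.com/spam",     "karma": 1},
    {"url": "https://example.org/essay",    "karma": 3},
]

kept = [s["url"] for s in submissions if s["karma"] >= KARMA_THRESHOLD]
print(kept)   # ['https://example.com/longread', 'https://example.org/essay']
```

The obvious limitation, which the later audits of WebText underline, is that karma measures popularity rather than reliability, so unreliable or extremist sources can clear the bar just as easily as careful journalism.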
Public Reaction and Hype
The decision to hold back GPT-2 created intense press attention:
Wired: "The AI Text Generator That's Too Dangerous to Make Public"
The Guardian: "AI Can Write Just Like Me. Brace for the Robot Apocalypse"
OpenAI released select demos (e.g., a fake news story about unicorns) but withheld full model access.
Mystique marketing took hold:
Secrecy + hints of danger amplified public fascination.
Access to the full model became "like trying to get into an exclusive nightclub."
Full Release and Lessons
By late 2019, OpenAI fully released GPT-2 — after determining there was "no strong evidence of misuse."
But the playbook of managing narrative through selective disclosure was now firmly part of OpenAI’s DNA.
Strategic Takeaway for Builders:
Perception control can be as powerful as technical release.
Crafting mystique around capabilities can drive public and industry attention even faster than open access.
4.3 ChatGPT: Speed, Public Adoption, and Strategic Dominance
If GPT-2 demonstrated OpenAI’s ability to manage perception, the release of ChatGPT showed that strategic speed and public accessibility could fundamentally reshape the AI landscape — much faster than anyone, even OpenAI, expected.
Development and Underlying Technology
ChatGPT was built on GPT-3.5, an evolution of the transformer-based GPT-3 architecture.
The "T" in ChatGPT still stood for "transformer," the 2017 breakthrough that allowed machines to model language generation at scale.
Beyond its role as a public demo, the underlying OpenAI models were also integrated into products such as Microsoft's Copilot, embedding OpenAI's technology deep into software infrastructure.
Launch Decision and Internal Debate
In early November 2022, OpenAI management informed staff of plans to launch a chatbot based on GPT-3.5.
It was framed internally as a "low-key research preview," not a major strategic release.
However, several employees voiced serious concerns:
The model still hallucinated — confidently stating false information.
Attempts to tune the model for greater factual caution degraded the user experience, so the released version prioritized sounding authoritative over being accurate.
Despite these internal worries, Sam Altman championed moving forward, arguing that humanity needed time to adjust to AI’s growing capabilities — likening it to "dipping your toes into a cold swimming pool."
Public Release and Immediate Reaction
On November 30, 2022, OpenAI quietly launched ChatGPT with a blog post and an 11:30 a.m. San Francisco time tweet from Altman inviting the public to try it.
The response was explosive:
Over one million users within the first week.
Thirty million registered users within two months — making it one of the fastest-growing consumer internet products in history.
Users marveled at its fluency and adaptability:
It could generate code, answer complex questions, write poetry, imitate famous literary styles.
Many described it as "like talking to a highly educated adult" — a far cry from previous clunky chatbots.
Mainstream media reacted quickly:
The New York Times declared ChatGPT "the best artificial intelligence chatbot ever released to the public."
Strategic Consequences: Code Red for Competitors
ChatGPT’s success shocked the tech industry into action:
Google declared a "Code Red," recognizing ChatGPT as an existential threat to its core search business.
Google, despite having internal models like LaMDA, had hesitated to launch public-facing chatbots, fearing reputational risk and revenue disruption.
Microsoft, meanwhile, accelerated its integration of OpenAI models into Bing, making AI-enhanced search a new battleground.
DeepMind, which had traditionally focused on scientific milestones over commercial products, found itself behind strategically — forced to pivot toward language model competition with the Gemini project.
Emerging Risks and Early Criticism
While ChatGPT was lauded publicly, real concerns surfaced rapidly:
Bias and Hallucination
ChatGPT inherited the biases of its training data:
It could reflect racial and gender stereotypes.
It sometimes generated problematic or inaccurate content.
Hallucination rates — generating fabricated information — were estimated at around 20%, causing real-world errors and even lawsuits.
Over-Reliance and Critical Thinking Risks
Critics warned that reliance on ChatGPT could erode critical thinking, encouraging users to accept plausible-sounding but unverified answers.
Privacy and Data Issues
ChatGPT raised concerns about long-term data retention and profiling risks, as users fed increasingly personal information into conversational systems.
Regulatory Response
ChatGPT’s popularity also triggered accelerated regulatory action:
The European Union updated its draft AI Act, proposing stricter oversight and potential liabilities for general-purpose models like ChatGPT.
Altman initially threatened to withdraw OpenAI from Europe in response — before softening his stance.
Internal Frictions at OpenAI
The rapid commercialization of ChatGPT exacerbated internal tensions:
Safety researchers warned that OpenAI was "frantically cutting corners" to beat competitors.
A research paper co-authored by board member Helen Toner, which criticized OpenAI's speed-first culture, privately infuriated Altman.
These frictions ultimately contributed to Altman's temporary firing in November 2023 — a culmination of disagreements over mission, pace, and governance.
Strategic Takeaway for Builders:
Shipping early and scaling fast doesn’t just win users — it can redefine market categories and reshape competitor strategies.
But neglecting governance, safety, and internal alignment creates strategic vulnerabilities that compound over time.
5. Power Struggles: The November 2023 Altman Firing and Return
The dramatic firing — and rapid reinstatement — of Sam Altman in November 2023 exposed the deep structural fragilities inside OpenAI.
It revealed how mission-driven ideals, commercial pressures, founder power, and weak governance collided — with survival and control of one of the world's most important AI companies hanging in the balance.
5.1 The Sudden Firing
While in Las Vegas, Sam Altman received a message from Ilya Sutskever, OpenAI's chief scientist, requesting a conversation the next day.
During a Google Meet call, the OpenAI board — excluding chairman Greg Brockman — informed Altman he was being fired, effective immediately.
Minutes later, Altman was locked out of his company accounts.
The board issued a cryptic public explanation, citing that Altman "was not consistently candid in his communications," but providing little additional context.
5.2 Brockman’s Departure and the Staff Revolt
Greg Brockman, a co-founder and chairman of the board, was removed from his leadership position but was offered the chance to stay at OpenAI.
He resigned immediately in solidarity with Altman.
Within hours, three of OpenAI’s top researchers also resigned, triggering a wave of internal panic.
Altman tweeted: "i love openai employees so much," a message quickly retweeted with heart emojis by dozens of staff — signaling massive internal support.
Meanwhile, confusion and outrage erupted across Silicon Valley.
Observers speculated whether this was a coup led by "decelerationists" ("decels") — those advocating for slowing AI development due to existential risk concerns.
5.3 Microsoft’s Reaction and Strategic Countermove
Satya Nadella, Microsoft’s CEO — whose company had invested $13 billion in OpenAI largely on the strength of Altman’s leadership — was furious.
Microsoft shares began falling after the news broke.
Nadella quickly opened negotiations with the OpenAI board to reverse the firing. As a backup, Microsoft announced early Monday morning that:
Altman, Brockman, and any OpenAI employees who wished could join a new advanced AI research team inside Microsoft.
This public show of confidence boosted Microsoft’s stock price, signaling that if OpenAI collapsed, Microsoft would catch its talent and momentum.
5.4 Reasons Behind the Board’s Decision
Independent board members Helen Toner and Tasha McCauley — both affiliated with effective altruism organizations — played key roles in Altman’s ouster.
The board's concerns included:
Altman's external ventures, including secretive fundraising discussions with Jony Ive for an "iPhone of AI" project, and his push to raise large sums for an AI chipmaking company — moves perceived as leveraging OpenAI’s technology for personal benefit.
Growing mistrust over Altman's internal communications, including accusations that he gave different stories to different people.
Frustration over Altman's anger toward Helen Toner's co-authored research paper criticizing OpenAI’s "frantic corner-cutting" during the ChatGPT launch.
Fundamentally, the board feared that Altman was compromising OpenAI’s mission to build AGI "for the benefit of humanity" by pushing the company toward aggressive commercial expansion.
5.5 Staff Threats and Investor Pressure
OpenAI's executive leadership team quickly pressured the board to reinstate Altman, warning that the company would not survive without him.
Nearly all of OpenAI’s 770 employees signed a letter threatening to resign and join Microsoft unless Altman was reinstated and the board resigned.
Key financial stakes also played a role:
Altman's firing jeopardized a pending secondary sale of employee stock that would have valued OpenAI at approximately $86 billion — potentially making hundreds of employees millionaires.
Microsoft reportedly threatened to pull its cloud credits, which were essential to OpenAI’s operations.
Facing existential collapse, the board found itself increasingly isolated.
5.6 Appointment of Emmett Shear and Further Rebellion
In an attempt to stabilize the company, the board appointed Emmett Shear, former CEO of Twitch, as interim CEO.
However, the rebellion continued:
Many OpenAI employees refused to attend Shear’s emergency all-hands meeting.
Some even sent Shear middle-finger emojis, signaling open disdain.
5.7 Sutskever’s Change of Heart
Over the tense weekend, Ilya Sutskever — who had helped orchestrate Altman's firing — changed his mind.
After "intense conversations" with OpenAI leaders and an emotional appeal from Greg Brockman’s wife, Sutskever flipped sides:
He signed the staff letter demanding the board’s resignation.
He publicly tweeted that he "deeply regretted" his role in Altman’s firing and would "do everything to reunite the company."
5.8 Altman’s Reinstatement and Board Overhaul
Altman negotiated conditions for his return:
He demanded changes in OpenAI’s governance structure.
He required formal absolution of any wrongdoing.
A new board was formed, chaired by Bret Taylor and including Larry Summers.
Helen Toner and Tasha McCauley, the independent directors who had spearheaded Altman's ouster, stepped down.
Microsoft gained a non-voting observer seat on the new board — increasing its influence.
5.9 Aftermath and Strategic Implications
The entire episode revealed the profound tensions between OpenAI’s nonprofit mission and its billion-dollar commercial reality.
Although Altman had publicly claimed that the board could fire him, the revolt made it clear:
Founder charisma, internal loyalty, and external investor pressure often matter more than formal governance structures.
The firing also raised broader questions:
Can a nonprofit truly govern a multibillion-dollar AI company pursuing AGI?
Or does scale inevitably bend missions toward survival and power consolidation?
Meanwhile, the two women who had stood up to Altman — Toner and McCauley — faced public backlash.
Sutskever retained a leadership position despite his part in the chaos.
Some industry observers argued that the events strengthened the case for open-source AI — as a necessary counterweight to concentrated AI power inside private companies.
6. Ethical Reckoning: The Stochastic Parrots Critique
While OpenAI and DeepMind raced to scale capabilities, another front opened in the AI world — one focused not on how powerful language models could become, but on how dangerous, biased, and opaque they already were.
The "Stochastic Parrots" critique, led by Emily Bender, Timnit Gebru, and Margaret Mitchell, exposed critical ethical fault lines that the builders of AI could no longer ignore.
6.1 The "Stochastic Parrots" Critique
Bender coined the term "stochastic parrots" to describe large language models (LLMs):
Systems that statistically predict the next word without genuine understanding of meaning, context, or truth.
LLMs can infer associations (e.g., umbrellas with rain) but do not comprehend the concepts behind them (e.g., wetness, weather patterns).
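To see what "statistically predicting the next word" means in miniature, here is a toy, hypothetical sketch of the idea: a bigram model trained on a few invented sentences, vastly simpler than a real LLM. It reproduces word patterns from its training text without any representation of what the words refer to, which is the behavior the "parrot" label points at.

```python
# Toy "stochastic parrot": a bigram model that predicts the next word purely
# from co-occurrence counts in its training text. It has no concept of rain,
# wetness, or umbrellas; it only parrots statistical patterns it has seen.
import random
from collections import Counter, defaultdict

training_text = (
    "when it rains people carry an umbrella . "
    "when it rains the streets get wet . "
    "people carry an umbrella when clouds gather ."
)

# Count how often each word follows each other word.
counts = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    counts[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Sample the next word in proportion to how often it followed `word`."""
    options = counts[word]
    return random.choices(list(options), weights=options.values())[0]

# Generate a short continuation, one word at a time.
random.seed(0)
sequence = ["when"]
for _ in range(6):
    sequence.append(predict_next(sequence[-1]))
print(" ".join(sequence))   # e.g. "when it rains people carry an umbrella"
```

A modern LLM replaces the bigram table with billions of learned parameters and far longer context, but the training objective remains the same kind of pattern completion, which is precisely the critique's point.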
The paper synthesized growing evidence that LLMs were:
Amplifying societal biases,
Underrepresenting non-English languages, and
Becoming increasingly secretive about the origins of their training data.
Crucially, the critique pushed back against the notion that more fluent output equals higher intelligence — challenging hype cycles claiming LLMs were approaching human-like cognition.
6.2 Ethical Concerns Surrounding Large Language Models
Bias and Reinforcement of Stereotypes
LLMs were trained on vast internet corpora filled with humanity’s worst stereotypes.
Specific examples of embedded bias include:
GPT-3:
More likely to associate professional occupations with men,
More likely to use negative language when referencing Black individuals,
More likely to associate Islam with violence.
ChatGPT:
Generated stereotyped code, e.g., describing professors with "salt-and-pepper beards,"
Reinforced gendered career roles for boys and girls.
DALL-E 2:
Consistently generated "CEO" images as white men, and "nurse" images as women.
Because these biases are inherent to the training data, companies like OpenAI struggle to fully eliminate them — no amount of post-hoc alignment easily fixes biased priors.
Lack of Transparency
Unlike GPT-1’s more open disclosure, later models became increasingly opaque:
OpenAI refused to fully disclose what data was used for DALL-E 2 and its newer language models.
Secrecy made it difficult to audit models for bias, misuse, or even compliance with copyright laws.
Misinformation and Hallucination Risks
The Stochastic Parrots critique warned about the potential for LLMs to generate misinformation at scale:
OpenAI itself had flagged this danger when limiting GPT-2’s initial release.
ChatGPT’s hallucination rate (~20%) exacerbated these concerns — leading to real-world consequences, including a defamation lawsuit.
The internet taught GPT models "what matters" — but also taught them to prioritize clickable, trending, emotionally charged material, compounding risks of amplifying dangerous content.
Impact on Critical Thinking
LLMs' confident fluency risks eroding human critical thinking:
Users might increasingly defer to AI-generated outputs without independently verifying facts or logic — a serious societal risk.
6.3 Reception and Cultural Impact
Initial Suppression, Then Viral Spread
The Stochastic Parrots paper was first put through Google's internal publication review, where it met strong executive resistance.
Leadership deemed it too negative, emphasizing harms over positive impacts.
After internal disputes and the firings of Gebru and Mitchell, the paper was leaked, triggering the Streisand effect.
Public Influence
"Stochastic parrot" became a catchphrase for criticizing LLMs’ limitations.
Even Sam Altman later publicly referenced the concept, acknowledging its relevance.
The incident exposed the cultural gap:
Between those building AI systems to scale,
And those warning that such scaling amplified real-world risks faster than they could be mitigated.
6.4 The Role of Bender, Gebru, and Mitchell
Timnit Gebru and Margaret Mitchell, demoralized by Google’s lack of concern, joined forces with Emily Bender to summarize and elevate the growing body of risk evidence.
Bender, a computational linguist, had long argued that LLMs simulate language without true understanding — and warned that language generation could mislead both users and creators.
Together, they pushed for:
Greater transparency in how training data is sourced and used.
Rigorous auditing for biases, inaccuracies, and systemic harms embedded in AI models.
Their work highlighted a painful reality:
Far more resources were being poured into scaling up LLMs than into governing or auditing them.
6.5 Google's Response and Broader Tech Industry Implications
Google insisted that its language models were "engineered to avoid" the harms highlighted in the Stochastic Parrots paper.
However, its handling of the critique — and the high-profile firings — triggered widespread backlash.
Many viewed it as a case study of tech companies suppressing critical research when it conflicted with commercial ambitions.
The broader tech industry faced uncomfortable questions:
Could large, commercial AI labs be trusted to police their own models?
Or were external pressures — public scrutiny, regulation, activist researchers — necessary to prevent harm?
6.6 AI Safety vs. AI Ethics: Diverging Agendas
The book draws an important distinction between two competing frameworks for thinking about AI risk: AI Safety and AI Ethics.
Focus and Definition
AI Safety concentrates on long-term existential threats — the fear that a future rogue Artificial General Intelligence (AGI) could escape human control and cause catastrophic global harm.
This perspective is championed by figures like Eliezer Yudkowsky and the broader "AI doomer" movement, which advocates for extreme caution to prevent runaway technological risks.
By contrast, AI Ethics focuses on immediate, tangible harms already present in today's AI systems, including bias, discrimination, misinformation, and privacy violations.
Researchers such as Timnit Gebru, Margaret Mitchell, and Emily Bender argue that the dangers of AI are not distant hypotheticals — they are impacting real people, especially marginalized communities, today.
Funding Disparities
The book highlights a sharp imbalance in resource allocation between the two camps. AI Safety initiatives, especially those centered on mitigating future existential risks, have attracted vast and growing investments from tech philanthropists, venture firms, and corporate labs.
Meanwhile, AI Ethics research — focused on present-day harms — remains comparatively underfunded, despite its urgent societal relevance.
Critiques of Focus
Critically, some observers argue that the overwhelming emphasis on hypothetical future threats serves to deflect attention from immediate, solvable problems.
By focusing public narratives on speculative "rogue AI" scenarios, tech companies and institutions can avoid addressing the biases, injustices, and operational harms already embedded in today’s large-scale AI systems.
The book notes that while billions have flowed into AGI existential risk mitigation, very little has been invested in solving measurable, present-day harms caused by current generation language models.
Some critics suggest that emphasizing "doomsday" scenarios conveniently shifts public scrutiny away from the systemic biases and societal damages already unfolding today — disproportionately affecting vulnerable and marginalized groups.
6.7 Conclusion: The Real Challenge Builders Must Face
The Stochastic Parrots critique reshaped public understanding of LLMs:
It made clear that technical capability and ethical responsibility must evolve together — or else AI risks amplifying and entrenching existing societal injustices.
It exposed that bias, misinformation, and secrecy are not accidents — they are inevitable outcomes of scaling models trained on the raw internet without adequate safeguards.
For builders today, the challenge is not just inventing the next model.
It’s designing systems that scale safely, transparently, and ethically — before regulatory, reputational, or societal backlash forces reactive, painful corrections.
7. Final Reflection: What Builders Should Learn from Supremacy
At its core, Supremacy is not just the story of Sam Altman and Demis Hassabis.
It is the blueprint of a deeper, recurring reality:
In frontier markets, ideals ignite the journey — but survival, structure, and power dynamics ultimately determine who shapes the future.
The race to build AGI, as Parmy Olson documents, mirrors every major technological revolution in history:
Visionaries begin with transformative missions.
Early designs prioritize openness, safety, and universal benefit.
But as stakes escalate — as billions in capital, organizational inertia, and competitive pressures mount — early ideals get tested, bent, and often rewritten.
Both OpenAI and DeepMind started with benevolent visions:
OpenAI: Build AGI safely and share its benefits broadly.
DeepMind: Solve intelligence, and then solve everything else — for humanity’s good.
Yet both were ultimately pulled toward survival strategies that compromised original missions:
OpenAI’s pivot to a capped-profit entity.
DeepMind’s deeper integration into Google’s commercial engines.
The pivotal decisions — around technology bets (transformers, decoder-only architectures), product strategy (speed vs. perfection), narrative control (safety-first framing), and governance (founder loyalty vs. nonprofit boards) — all converged into outcomes shaped less by raw technical brilliance and more by structural strategy, survival trade-offs, and narrative mastery.
The ethical debates (exemplified by the Stochastic Parrots critique) further expose that scaling capability without scaling responsibility can create fragilities — reputational, regulatory, societal — that eventually threaten the very ecosystems builders hope to dominate.
Strategic Lessons for Builders
Here are the hard-earned lessons Supremacy offers to today's founders, technologists, and decision-makers:
✅ Mission statements are not enough.
Without embedding ideals into legal structures, governance, and funding mechanics, survival pressures will quietly bend missions toward power accumulation.
✅ Speed matters more than perfection — especially in frontier markets.
Iterative deployment, even with imperfect models, can generate cultural momentum and user mindshare too strong for technically superior but slower competitors to overcome.
✅ Public narrative is a strategic moat.
Controlling how your company is perceived — by regulators, by talent, by the public — can protect you as much as technical lead time.
✅ Founders must design governance for resilience — or become vulnerable to it.
If founders don't proactively structure power, they risk being undone by the very boards or investors meant to guide them.
✅ Ethical reckoning isn't optional — it's a strategic necessity.
Scaling AI without scaling transparency, bias mitigation, and safety creates vulnerabilities that competitors, regulators, or public sentiment will eventually exploit.
✅ Inventing the future requires building systems of control, not just products.
Who controls the training data, the deployment levers, the narrative framing, and the governance models will define the next era — not simply who builds the first great model.
Closing Thought
In the race for AI supremacy, technical breakthroughs matter.
But Supremacy shows that structural strategy — how you survive, adapt, and control the terms of the race — matters even more.
Founders and builders today stand at a similar crossroads:
Build fast, but build with foresight.
Scale ambition, but scale governance.
Capture opportunity, but respect the future you are shaping.
The greatest technologies of our time will not be remembered solely for their codebases or architectures.
They will be judged by the systems they empowered — and the world they helped build.
Choose your structures wisely. The future you create depends on them.
8. Top Quotes From The Book
"While Google’s hometown had T-shirt weather, you needed a jacket in OpenAI’s urban microclimate. Another big difference: the researchers at OpenAI were giddily excited about the transformer technology that Google’s management wanted to keep in a metaphorical cupboard."
"Just feed it more and more data. Sutskever started asking people the same thing when he walked around the office, according to someone who worked there at the time: 'Can you make it bigger?'"
"Thanks to the transformer, Radford was making more progress with his language model experiments in two weeks than over the previous two years."
"OpenAI had a ‘fiduciary duty [to] humanity,’ and that it would not use its AI to help ‘concentrate power.’ Most companies famously had a fiduciary or trusted legal duty to their shareholders and investors, but here OpenAI emphasized it was going against the grain. It was for the people."
"Strategic partnership is a handy one that companies frequently use to cover a wide range of corporate relationships that could put them at arm’s length or on a tight leash. It could mean sharing money and technology between two firms or setting up a licensing agreement. The term was ambiguous enough to hide the true nature of an awkward corporate relationship..."
"They decided to release a smaller version of the model, warning in a blog post in February 2019 that it could be used to generate misinformation on a large scale. It was a startlingly honest admission and approach, an approach that OpenAI would rarely take afterward. ‘Due to our concerns about malicious applications of the technology, we are not releasing the trained model,’ the post said."
"But then a funny thing happened that signified how buzzy OpenAI’s approach to AI could be. GPT-2 received a flood of press attention, and many of the articles focused on the dangers of this new AI system that OpenAI was pointing to. Wired magazine published a feature titled ‘The AI Text Generator That’s Too Dangerous to Make Public,’ while The Guardian printed a column breathlessly titled ‘AI Can Write Just Like Me. Brace for the Robot Apocalypse.’"
"Altman had learned over the years to be counterintuitive. If you held back on details, you could create more fanfare. Lean into controversy—such as when Altman sent a long list of Loopt’s risks to a Wall Street Journal reporter—and you could disarm your critics."
"‘It seems to really understand concepts,’ he said in one interview, ‘which feels like intelligence.’ DALL-E 2 was so magical that it could make skeptics of AGI start taking the idea seriously, he added."
"Machines weren’t just learning statistical correlation in text, Sutskever said in one interview. ‘This text is actually a projection of the world.… What the neural network is learning is more and more aspects of the world, of people, of the human condition, their hopes, dreams and motivations, their interactions and the situations that we are in.’"
"But now Google executives didn’t have much choice. In one meeting, which was recorded and shared with the New York Times, a manager pointed out that smaller companies like OpenAI seemed to have fewer concerns about releasing radical new AI tools to the public. Google had to jump in and do the same, or it risked becoming a dinosaur."
"‘Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die,’ he wrote."
"That same month, an open letter signed by Elon Musk and other technology leaders called for a six-month ‘pause’ on AI research because of the risks to humanity. ‘Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us?’ said the letter... ‘Should we risk loss of control of our civilization?’"
"And it seemed like the more Sam Altman talked about the threat of OpenAI’s technology—telling Congress, for instance, that tools like ChatGPT could ‘cause significant harm to the world’—the more money and attention he attracted."
"Safety-first framing had made Anthropic sound like a nonprofit, with its mission to ‘ensure transformative AI helps people and society flourish.’ But OpenAI’s smash hit with ChatGPT had shown the world that the companies with the grandest plans could also be the most lucrative investments."
"Bender couldn’t stand the way GPT-3 and other large language models were dazzling their early users with what was, essentially, glorified autocorrect software. So she suggested putting ‘stochastic parrots’ in the title to emphasize that the machines were simply parroting their training."
"The Stochastic Parrots paper hadn’t been all that earth-shattering in its findings. It was mainly an assemblage of other research work. But as word of the firings spread and the paper got leaked online, it took on a life of its own. Google experienced the full Streisand effect... while ‘stochastic parrot’ became a catchphrase for the limits of large language models. Sam Altman would later tweet, ‘I am a stochastic parrot and so r u’ days after the release of ChatGPT."
"But today, machines are generating articles, books, illustrations, and computer code that seem indistinguishable from the content created by people. Remember the “novel-writing machine” in the dystopian future of George Orwell’s 1984 and his “versificator” that wrote popular music? Those things exist now, and the change happened so fast that it’s given the public whiplash, leaving us wondering whether today’s office workers will have jobs in the next year or two."
"Their story is one of idealism but also one of naivety and ego, and of how it can be virtually impossible to keep an ethical code in the bubbles of Big Tech and Silicon Valley."
"In class, Thrun taught his students about machine learning, a technique that computers used to infer concepts from being shown lots of data instead of being programmed to do something specific. The concept was critical in the field of AI, even though the term learning was misleading: machines can’t think and learn as humans do. Thrun noticed that the serious kid from St. Louis was interested in the possibility of unintended consequences in AI. What would happen if a machine learned to do the wrong thing?"
Related Books
Genius Makers: The Mavericks Who Brought AI to Google, Facebook, and the World
In Genius Makers, Cade Metz reveals the fierce dreamers and daring rebels who ignited the artificial intelligence revolution. From humble university labs to the towering headquarters of Google and Facebook, these pioneers—part scientists, part visionaries—raced to shape a technology that would change the world. This is the thrilling inside story of ambition, rivalry, and the unstoppable quest to create machines that think.
Hello World: Being Human in the Age of Algorithms
In Hello World, mathematician and storyteller Hannah Fry takes us on a lively journey through the hidden world of algorithms—the invisible systems quietly shaping everything from healthcare to criminal justice to love. With wit and clarity, she unpacks the promises, flaws, and surprising quirks of the technology steering modern life, reminding us that the future of AI isn’t just about machines—it’s about us.
Prediction Machines: The Simple Economics of Artificial Intelligence
In Prediction Machines, three leading economists reveal a simple but powerful idea: at its core, artificial intelligence is a tool for making predictions—and cheaper, faster predictions are reshaping industries everywhere. Updated with fresh insights for today’s AI-driven world, this smart and accessible book shows how understanding AI as an economic force, not just a technological one, is the key to navigating the future.