A riveting new read tells the story of the race to develop artificial general intelligence and how it has already changed our world

Parmy Olson’s Supremacy offers a gripping exploration of the race to develop Artificial General Intelligence (AGI), delving into the ambitions, ethical quandaries and societal upheavals shaping this revolutionary technology. As a Bloomberg technology columnist, Olson deftly combines sharp analysis with compelling storytelling, spotlighting the people and philosophies driving the AI arms race.
At the heart of the book are two intriguing personalities: Demis Hassabis, co-founder of DeepMind, and Sam Altman, chief executive of OpenAI. Olson paints vivid portraits of these competitive, egotistical, yet undeniably brilliant individuals. Hassabis, a former neuroscientist and game developer, is portrayed as an academic visionary driven by a near-spiritual quest to “solve intelligence” and thereby unlock all other scientific mysteries. Altman is depicted as a charismatic idealist with a bold mission: to use AI to create affluence for everyone. However, Olson’s narrative reveals how both figures grapple with the collision of idealism and profit motives.
The book’s title, Supremacy, reflects the fierce competition between not only individuals, but also nations and corporations. Google’s acquisition of DeepMind and its subsequent applications, such as enhanced advertising algorithms, exemplify the commercialization of AGI. OpenAI’s transformation from a non-profit organization championing transparency to a more secretive, profit-driven entity highlights the tension between altruistic aspirations and economic realities. Olson critiques the role of supervisory boards and corporate structures in steering AI development, asking whether profit motives – both individual and corporate – ultimately eclipse idealistic goals.
Olson does not shy away from addressing the risks posed by AI. She weighs the validity of existential fears that AGI could one day wipe out humanity, and tackles more immediate concerns too. These include AI’s role in amplifying existing societal problems such as social media’s echo chambers, entrenched prejudices and widening inequalities. AI trained on biased data sourced from social media risks reinforcing racism, sexism and other systemic injustices.
Regulation emerges as a key battleground in Olson’s narrative. She discusses the European Union’s AI Act: will such regulations stifle innovation? She also highlights the geopolitical implications of regulatory disparities, with cautious jurisdictions like the EU potentially ceding ground to less restrained competitors, such as the US and China.
The opacity of AI systems adds significant complexity. Olson explores the challenges of auditing models that function as ‘black boxes,’ producing outputs that even their creators struggle to understand. This makes it difficult to identify and address biases – which raises questions about whether the ethical AI teams tasked with mitigating harm are merely embedding different forms of bias.
The race for AI supremacy, Olson argues, is not just a technological competition but also a cultural one, shaped by the biases and priorities of its creators. Yet she pays relatively little attention to approaches emerging from China and other non-Western nations. By underrepresenting their contributions, Supremacy misses an opportunity to explore how differing political, economic and cultural frameworks are influencing AI’s development.
Ultimately, Supremacy is a stimulating and essential read for anyone interested in the future of technology and its societal impact. AI is already transforming business, and Big Tech players are the big winners in terms of money, power and influence. Olson is surely correct in predicting that the widespread deployment of AI will have unforeseen effects. As she writes in her conclusion: “Now to find the price.”
Piers Cain is a management consultant