Adrian Josele Quional

Book Review: "The Big Nine" by Amy Webb

The Big Nine companies hold the balance of AI's future

This book strongly reminded me of AI Superpowers (which I've written about here), particularly in how it frames Artificial Intelligence (AI) as a geopolitical force rather than a purely technical achievement. Like Kai-Fu Lee, Amy Webb situates AI within the growing rivalry between the United States and China. However, The Big Nine extends this discussion in a way I had not fully appreciated before reading it: by arguing that it is not nations alone, but a small group of powerful technology companies — the "Big Nine" — that effectively steer the trajectory of AI worldwide.

While AI research undeniably takes place across the globe, especially in universities and research institutes, the reality is that the largest technology companies possess unparalleled computational resources, access to massive datasets, and the ability to deploy AI systems at global scale. In the United States, these companies — Google, Microsoft, Amazon, Facebook, IBM, and Apple (collectively referred to as G-MAFIA) — operate largely under market-driven incentives. In China, Baidu, Alibaba, and Tencent (collectively known as BAT) function in closer alignment with state objectives. Together, these nine entities have the practical power to define how AI is developed, deployed, and ultimately experienced by billions of people.

This concentration of power is unsettling precisely because AI is not a neutral tool. As Webb emphasizes, AI systems increasingly make decisions on our behalf — what we see, what we buy, how we navigate the world, and eventually how institutions function. From voice assistants like Alexa and Siri to algorithmic recommendation systems, AI is already embedded in everyday life. While such systems promise efficiency and convenience, they also reflect the values, incentives, and blind spots of the organizations that create them. If most future AI systems originate from the Big Nine, then humanity is, in effect, outsourcing moral and social choices to a narrow set of actors with competing interests.

The contrast between the American and Chinese approaches to AI development further amplifies this concern. In the United States, there is no unified national AI strategy; instead, innovation is largely left to the private sector, driven by competition, consumerism, and profit. China, on the other hand, has articulated a long-term national vision for AI, explicitly aiming to become the global leader by 2030. This centralized strategy allows for coordination and rapid deployment — but at a significant cost to individual freedoms, particularly in terms of data privacy and surveillance. Webb warns that as China exports its AI technologies through diplomatic and economic partnerships in regions such as Southeast Asia, Africa, and Latin America, it may also export the underlying values embedded in those systems.

One of the book's most thought-provoking sections is Webb's exploration of three possible AI futures: optimistic, pragmatic, and catastrophic. The optimistic scenario envisions global cooperation, ethical alignment, and responsible AI governance. While appealing, it felt somewhat idealized to me — more aspirational than plausible given current political and economic realities. The pragmatic scenario, which assumes only incremental improvements to today's systems, appears far more realistic, though ultimately unsatisfying in the long term. The catastrophic scenario is the most unsettling, not because of a science-fiction AI uprising, but because it imagines a future in which geopolitical dominance — particularly China's victory in the AI race — reshapes global norms in profound and irreversible ways. Regardless of which scenario one finds most convincing, they serve as valuable thought experiments that force readers to confront uncomfortable possibilities.

In response to these risks, Webb proposes several ambitious solutions. These include forming an international AI alliance, fundamentally rethinking how governments — especially the U.S. government — engage with AI, and embedding AI expertise across public institutions. She also argues that the Big Nine themselves must be restructured to place ethics, transparency, and safety at the core of their missions rather than treating them as secondary concerns. Notably, Webb does not advocate heavy-handed regulation, warning instead that poorly designed regulations could stifle innovation — a position I largely agree with.

The book also places significant responsibility on universities, which serve as the primary pipeline of talent into the Big Nine. Webb calls for ethics to be integrated throughout AI education rather than confined to a single course, as well as for greater diversity among AI faculty and stronger incentives for educators who meaningfully engage with ethical questions in technical instruction. Given how deeply values shape technological outcomes, this emphasis feels both necessary and overdue.

Ultimately, The Big Nine makes a compelling case that AI's future will not be determined by algorithms alone, but by human choices — economic, political, and cultural. The values, beliefs, incentives, and identities of those who design AI systems inevitably shape the technologies themselves. Whether driven by consumerism, state power, or personal worldviews, these forces will leave their imprint on AI. The question the book leaves us with is not whether AI will shape humanity, but whose vision of humanity it will reflect.