Chapter 19: The Questions We Should Be Asking
You have now seen the system.
You have seen the chips fabricated on a contested island, the energy flowing through straits that missiles can close, the data pulsing through cables that anchors can sever. You have seen the minerals processed in facilities that one government can shut down with a memo. You have seen the capital flowing from Gulf sovereign wealth into American data centers, the gases once refined in Ukrainian steel mills, the rare earth magnets mined in Inner Mongolia, the Arctic mining claims staked by billionaires. You have seen the map, the proxy patterns, the feedback loops, the boardroom where the playbook is written. You have seen the asymmetry and the three futures it might produce.
This chapter does not offer answers. It offers questions. Not because answers are unimportant — they are desperately important — but because the right questions, asked clearly and directed at the right people, are the most powerful tool available to anyone who has seen what this book describes.
Questions About Infrastructure
What happens when two or more critical nodes fail simultaneously?
This is the question the system is not designed to answer, because no single institution is responsible for asking it. The CHIPS Act addresses chips. The Department of Energy addresses power. NATO has begun addressing cables. The Commerce Department addresses trade. Nobody addresses the system.
In 2008, no single financial regulator was responsible for systemic risk. AIG structured its activities to fall in the gaps between agencies. The crisis cost $22 trillion. After it, Congress created the Financial Stability Oversight Council — an institution whose entire purpose is to watch the spaces between the silos.
Where is the equivalent for AI infrastructure? Who is monitoring the simultaneous stress on Hormuz, the Red Sea, the Taiwan Strait, and Chinese rare earth facilities? Who has the mandate to say: these are not four separate situations — this is one situation, and it is approaching a threshold?
The CHIPS Act has awarded $30.9 billion across forty projects. Twenty-four of 161 milestones have been met in three years. Companies expect to complete all projects by 2033. Is this pace sufficient? The question is not rhetorical — it is quantifiable. The answer depends on when the next disruption hits Taiwan, and nobody knows when that will be.
Can Arctic infrastructure — the cables, the mines, the data centers — be built fast enough to provide meaningful redundancy before the current chokepoints fail? The cable timelines stretch to 2030. The mining timelines stretch to 2040. The chokepoints are stressed now.
Questions About Power
Who authorized this?
It is a simple question, and it has no satisfying answer. No electorate voted for a world in which ninety percent of advanced AI chips are manufactured on a single island. No legislature debated whether Chinese facilities should process ninety percent of the world's rare earths. No constitutional process produced the system in which a handful of technology companies — and the individuals who control them — hold more leverage over the future of artificial intelligence than any government on Earth.
The system was not designed. It assembled itself, through decades of market optimization, globalization, and the relentless logic of cost reduction. Nobody chose this architecture. But someone is benefiting from it — and the beneficiaries are not the same people who bear the risk.
When the Commerce Department took a 9.9 percent equity stake in Intel — becoming the company's largest shareholder for $8.9 billion — it crossed a line that has not been publicly discussed. It then took equity positions in at least eight additional firms, committing over $10 billion in six months across semiconductors, critical minerals, and nuclear energy. Is the Commerce Department a regulator or a market participant? Can it be both? Who is watching?
Chapter 15 documented the structural proximity of private wealth to state power — the inaugural seating, the policy-to-beneficiary timeline, the feedback loop between political spending and government contracts. The question is not whether any individual intended to capture the policy apparatus. The question is whether the system produces outcomes indistinguishable from capture regardless of intent. If it does, the remedy is structural — not personal.
The Gilded Age ended not because Rockefeller and Carnegie and Morgan became better people. It ended because a political movement — the Progressives — built institutions designed to check concentrated economic power: the Sherman Act, the Clayton Act, the Federal Trade Commission, the Interstate Commerce Commission. Those institutions were imperfect. They were captured and weakened and rebuilt over a century. But they existed. They provided a counterweight.
What is the twenty-first-century equivalent? What institution checks the power of actors who control the platforms, the compute, the government contracts, the campaign contributions, the employment, and the technical infrastructure simultaneously? When a single individual can build the rockets, own the satellites, run the social platform, advise the president, and access federal databases — what is the institutional counterbalance?
If the answer is "there isn't one," then that is itself the answer. And it is not acceptable.
Questions About Governance
Every governance framework for AI — the EU AI Act, the Bletchley and Seoul and Paris declarations, the UN Scientific Panel, the proposed international AI agency — addresses the software layer. Models. Applications. Safety guardrails. Risk classifications.
Not one addresses the hardware layer.
Not one addresses who controls the chips, the energy, the cables, the minerals, and the capital that make AI physically possible. The physical substrate of artificial intelligence — the thing this entire book is about — exists entirely outside the governance conversation.
Why? Is it because the hardware layer is genuinely too complex for any one government to govern, or because the actors who control it prefer that it stay ungoverned?
The answer is probably both. The hardware layer crosses jurisdictions in ways the software layer does not — a single chip touches Japan for photoresists, the Netherlands for lithography machines, China for rare earths, Taiwan for fabrication, Malaysia for packaging, and the United States for design before it enters a data center. No single government can govern that chain. But the absence of any multilateral framework for even monitoring the chain is not an accident of complexity. It is a choice. The same corporations that engage with software-layer regulation they can shape actively resist hardware-layer transparency that would reveal the concentration, fragility, and geopolitical dependency of their supply chains.
Eighty-three percent of Americans believe AI companies should be regulated. Seventy-two percent of UK citizens say laws and regulation would increase their comfort with AI. The democratic mandate for governance exists. The governance itself does not — at least not for the infrastructure that matters most.
What would an International AI Infrastructure Agency look like? Not a model safety board — those are beginning to exist. An institution responsible for monitoring the physical supply chain: the concentration of chip fabrication, the vulnerability of cable routes, the dependency on specific mineral processing facilities, the energy requirements and their geopolitical implications. An institution that could see the system whole and sound the alarm when it approaches a threshold.
The International Atomic Energy Agency was created because nuclear technology was too dangerous to leave ungoverned. Is AI infrastructure less consequential?
Questions About Democracy
The decisions being made about AI infrastructure — the $500 billion Stargate initiative, the Arctic mining claims, the chip export controls, the rare earth retaliation, the military AI deployments — operate at a scale that traditional democratic mechanisms were not designed to reach.
Voting, public comment periods, protest — these tools were built for national policy debates within sovereign states. The AI infrastructure war is a global system managed by transnational corporations and bilateral trade negotiations, with consequences that fall on populations who have no seat at the table. What voice does the fisherman on the Bab el-Mandeb have in a system that routes data cables through his strait? What voice does the Inuit hunter in Greenland have in a system that wants the minerals beneath her land? What voice does the factory worker in Hsinchu have in a world that depends on the chips she fabricates but cannot agree on how to protect the island she lives on?
The deliberative democracy experiments offer a glimmer. Stanford's Global Coalition for Inclusive AI Governance has engaged over ten thousand citizens worldwide in structured deliberation about AI policy. California has launched a new deliberative democracy program specifically for AI governance. The African Union is prioritizing citizen participation in AI policy decisions. These are small, but they are real.
The deeper question is whether democratic governance can operate at the speed and scale that AI infrastructure demands — or whether the gap between technological pace and democratic deliberation has become a structural vulnerability in itself. The answer may be more encouraging than it appears. The AI models change monthly. But the physical infrastructure — the fabs, the cables, the mines, the power plants — operates on timescales of five to fifteen years. This is precisely the timescale where democratic deliberation works best. The problem is not speed. The problem is visibility. You cannot govern what you cannot see.
This book is an attempt to make it visible.
Questions for You
You have read this far. You have seen the system. The question is what you do now.
It would be easy to close this book and feel overwhelmed. The scale is vast. The actors are powerful. The system is complex enough to make any individual feel irrelevant.
But more than eighty percent of your fellow citizens want AI regulated, and the infrastructure that matters most is entirely ungoverned. That gap between public will and institutional reality is not a sign of powerlessness. It is a sign of unfinished work.
What you can do is refuse to look away.
The hidden war is hidden because it operates beneath the surface of public attention — in supply chains and trade negotiations and corporate boardrooms and intelligence briefings. It stays hidden because the people who benefit from the current arrangement have no incentive to illuminate it. Every reader who understands that the AI wars are not coming but are already here — that the conflicts in Iran, Ukraine, the Red Sea, Taiwan, and the Arctic are not separate stories but one story — makes the hidden visible. And what is visible can be governed.
Ask your representatives: Who is monitoring the AI supply chain as a system? Ask the companies whose products you use: Where are your chips fabricated, and what happens if that source is disrupted? Ask the strategists who talk about "winning the AI race": Winning for whom? At whose expense? With what accountability?
Ask the question that sits beneath all the others: If AI is the meta-capability that amplifies all other capabilities — military, economic, scientific, cultural — then whose values is it amplifying? Is it amplifying the values of the handful of individuals who control the infrastructure? Or is it amplifying something broader, something more durable, something that belongs to the eight billion people whose lives depend on systems they did not build and cannot yet see?
This book began with a CEO who told the Pentagon he would not remove the safety guardrails from his systems, and a government that treated that act of conscience as a national security threat. It mapped the physical infrastructure of artificial intelligence across eight contested fronts, three continents, and two oceans. It showed how those fronts interconnect into a single system that no nation can fully secure and no institution is designed to govern.
It has not told you what to think. It has tried to show you what to see.
The AI wars of the twenty-first century are not coming. They are here. They are being fought over the materials and energy and connectivity and capital and knowledge that make machine intelligence possible. They are being fought in places most people have never heard of, by people who never chose to be combatants, for stakes that will shape the century.
The war is no longer hidden. You have seen the system.
The question is what we do now. And that question — the most important one this book can ask — does not belong to strategists or oligarchs or algorithms.
It belongs to you.