Chapter 19: The Fracturing Consensus
The AI research community once shared assumptions. No longer.
For decades, researchers had operated within a loose consensus: AI was good, progress was inevitable, the rising tide of capability would lift all boats. Disagreements existed (about methods, about timelines, about which problems mattered most) but the field's fundamental orientation was shared. Build smarter systems. Publish results. Let the world benefit.
The consensus shattered along multiple fault lines. Open versus closed. The Global North versus the Global South. Different governments pursuing incompatible regulatory strategies. The fractures were not just intellectual; they were institutional, political, economic. The AI industry that had seemed unified in its ambitions revealed itself as a contested terrain where different visions of the future competed for dominance.
Yann LeCun had spent his career building the foundations of modern AI. His work on convolutional neural networks had been vindicated by the deep learning revolution. As Meta's chief AI scientist, he had become one of the most influential voices in the field.
And he was worried about closure.
"The danger of this concentration of power through proprietary AI systems," LeCun argued, was "a much bigger danger than everything else." If AI development remained in the hands of a few companies, releasing only what they chose to release, then "all of our information diet is controlled by a small number of companies." The risk was not superintelligence escaping human control. It was humans losing control to corporations.
His target was explicit. He accused Sam Altman, Demis Hassabis, and Ilya Sutskever of "massive corporate lobbying and attempting to regulate the AI industry in their favor under the guise of safety." The safety concerns were real, he acknowledged, but they were being weaponized to justify closure—to keep AI capabilities locked behind corporate gates while smaller players were regulated out of existence.
Meta had practiced what LeCun preached. When OpenAI produced GPT-3, Meta produced OPT—a similarly powerful model, but completely open source. Anyone could download it, study it, build on it. OpenAI, despite its name, never released GPT-3 openly. The contrast illustrated the divide.
In 2024, LeCun signed a letter with Andrew Ng, Julien Chaumond of Hugging Face, and Brian Behlendorf of the Linux Foundation, identifying three benefits of openness: greater independent research and collaboration, increased public scrutiny and accountability, and lower barriers to entry for new participants. Openness democratized AI; closure concentrated it.
But even Meta began to shift. By 2025, the company was "rethinking its strategy about open source," moving toward the closure that the rest of the American industry had embraced. OpenAI had stopped being open long ago. Anthropic never was. The irony was bitter: China, meanwhile, was "going open source all the way." The best open source models in the LLM world, by some measures, were Chinese.
At the time of this writing, in late 2025, LeCun had left Meta after 12 years. The gap between his vision and the company's strategy had grown too wide. He founded Advanced Machine Intelligence, arguing that "we are not going to get to human-level AI just by scaling LLMs," a position that put him at odds with the dominant paradigm.
The open-versus-closed debate was not just about technology. It was about power: who would control the systems that increasingly mediated human knowledge and interaction.
Governments, meanwhile, were constructing incompatible regulatory frameworks.
The European Union moved first with comprehensive legislation. The AI Act, which entered into force in August 2024, established a risk-based classification system. High-risk AI systems faced mandatory requirements: documentation of training data, copyright compliance, human oversight, and transparency about AI-generated content. Prohibited practices included social scoring systems and real-time biometric identification in public spaces. The goal was not to stop development but to ensure it occurred within boundaries that protected European values.
China's approach was different in substance but equally comprehensive. The Interim Measures for Generative AI Services, effective August 2023, made China the first country with binding regulations for generative AI. Security assessments were required for public-facing services; large language models had to be filed with the government. In September 2025, labeling measures went further, requiring visible labels on chatbot output, AI-written content, and synthetic voices. The substantive requirements reflected different values: generative AI had to align with "socialist core values" and avoid "subverting state power." Content control, not capability limits, was the primary concern.
The United States fragmented. The Biden administration's AI Executive Order in October 2023 established reporting requirements, but comprehensive legislation remained elusive. California's SB-1047 was vetoed. The regulatory landscape became a patchwork of FTC consumer protection, NIST voluntary standards, and sector-specific rules from FDA and SEC. No unified framework emerged.
The United Kingdom positioned itself as pro-innovation, favoring sector-specific regulation over comprehensive law. The AI Safety Institute established pre-deployment testing contracts but focused on technical assessment rather than binding rules.
The result was regulatory divergence on a global scale. AI companies faced different requirements in different jurisdictions, with incentives to locate operations where rules were weakest. The gaps between frameworks created spaces where harm could accumulate before any regulator responded.
The Global South experienced AI primarily as a deployment target.
The systems were designed in California, Beijing, and London, then deployed to communities that had no voice in their creation. Predictive policing spread widely across Asia and Latin America, regions that showed "greater acceptance" than Europe or North America. China's Digital Silk Road initiative expanded the reach of Chinese AI vendors into Africa, Latin America, and Southeast Asia, offering turnkey surveillance infrastructures bundled with concessional financing.
The dynamic was colonial in structure if not in name. "This form of colonization operates on predictive control rather than territorial occupation." AI systems embedded in smart city programs and facial recognition networks were "not merely technical imports; they reflect epistemic impositions about how security should be defined, policed, and prioritized."
The specific applications varied. In Xinjiang, surveillance systems monitored Uyghur Muslims. In Ecuador and Honduras, predictive policing from US-based firms claimed to anticipate crime. Facial recognition spread through African urban centers, often with minimal public debate about deployment terms or data retention policies. Credit scoring algorithms trained on Global North data were applied to Global South populations, with biases that could amplify discrimination rather than reduce it.
Not all developments were negative. In Chile, advocacy groups secured transparency requirements for predictive policing algorithms. The African Union developed a continental AI strategy and a Digital Transformation Strategy that offered alternative frameworks. The United Nations passed resolutions on inclusive AI and established a High-Level Panel to develop global governance principles.
But the fundamental asymmetry remained. Those building AI systems were not those living under AI surveillance. Those setting the rules were not those subject to algorithmic decisions. The Global South was being integrated into AI systems on terms it did not choose.
The fractures mapped onto deeper divides.
The open-versus-closed debate was about control: who would have access to AI capabilities, who would shape their development, and who would profit from their deployment. LeCun saw closure as a corporate power grab; others saw openness as enabling misuse. Both were partly right.
The regulatory divergence reflected different political systems and different values. The EU prioritized human rights and risk management. China prioritized social stability and state control. The US prioritized innovation and market freedom. Each framework encoded assumptions about what mattered, and those assumptions were incompatible.
Deployment in the Global South revealed the colonial residue in technological systems. AI was built from resources extracted globally but controlled from a few centers of power. Training data came from everywhere; profits flowed to a few jurisdictions. The pattern was old even as the technology was new.
The consensus that once held the AI community together had fractured beyond repair. The field that had seemed unified in its ambitions was revealed as a contested terrain. Different actors wanted different things: open access versus competitive advantage, safety versus capability, global governance versus national sovereignty, present harm versus future risk.
The future would be shaped by which fractures healed and which deepened. The outcome was not determined. But the consensus was gone, and what would replace it remained to be built.