Chapter 14: The Great Acceleration
After 2012, the dam broke.
Deep learning did not merely work on images. It worked on speech. On translation. On games. On drug discovery. On anything with sufficient data. The question shifted, almost overnight, from "Does deep learning work?" to "What can it not do?"
The years between 2012 and 2017 were vertiginous. Researchers who had spent careers in obscurity found themselves courted by the world's wealthiest companies. Techniques that had seemed promising became dominant paradigms. National governments rewrote strategic priorities. And at the same time, quieter voices began noticing what the acceleration was leaving behind, or actively harming.
This is the story of five years when everything changed faster than anyone could process.
The institutional response came first.
Demis Hassabis had founded DeepMind in November 2010, two years before AlexNet. He and Shane Legg had met at the Gatsby Computational Neuroscience Unit at University College London; Mustafa Suleyman was a family friend. Their approach was interdisciplinary from the start, drawing on machine learning, neuroscience, engineering, and mathematics, and it aimed at something that sounded grandiose: artificial general intelligence.
They trained their systems on old video games from the 1970s and 1980s. Breakout. Pong. Space Invaders. The neural networks received only pixels and score as input (no rules, no human guidance) and learned to play through trial and error. By 2013, DeepMind had a paper showing that their system could learn multiple Atari games, surpassing a human expert on several of them. The same architecture, the same learning algorithm, applied across different tasks. It was a small but suggestive step toward generality.
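For readers who want the mechanics, the core of that trial-and-error learning can be sketched in a few lines. What follows is an illustrative fragment, not DeepMind's code: the network's estimate of an action's value is nudged toward the score it just received plus its own best guess about what comes next, the temporal-difference recipe at the heart of deep Q-learning. The names q_net, target_net, and the layout of batch are assumptions made for the sketch.

    import torch
    import torch.nn.functional as F

    def dqn_loss(q_net, target_net, batch, gamma=0.99):
        # One trial-and-error update: pull Q(state, action) toward
        # observed reward + gamma * best estimated value of the next state.
        states, actions, rewards, next_states, dones = batch
        q_taken = q_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
        with torch.no_grad():
            next_best = target_net(next_states).max(dim=1).values
            target = rewards + gamma * next_best * (1 - dones)
        return F.smooth_l1_loss(q_taken, target)

Nothing in this loss refers to any particular game, which is the point: the same update rule, fed only pixels and score, was applied to Breakout, Pong, and Space Invaders alike.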
The early investors included Peter Thiel, Elon Musk, and Jaan Tallinn. On January 26, 2014, Google announced the acquisition for a price reported between $400 million and $650 million, the company's largest European purchase to date. Facebook had reportedly been in negotiations; Google had won the bidding war. DeepMind would remain in London, would continue its research agenda, would now have Google's resources behind it.
OpenAI emerged from different concerns. On December 11, 2015, Sam Altman and Elon Musk announced a new artificial intelligence research organization with a starting commitment of one billion dollars. The stated goal was to "advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return."
The founders had expressed, in various forums, deep anxiety about AI's potential for catastrophe. Musk had called it "the greatest threat to humanity." The logic of OpenAI was to ensure that the most powerful AI systems were developed in the open, not behind corporate walls: to democratize access, share results, publish code. One Google employee said he was willing to leave for OpenAI "partly because of the very strong group of people and, to a very large extent, because of its mission." Wojciech Zaremba, who had reproduced AlexNet at Google, turned down offers of two to three times his market value to join.
The irony would emerge later. By 2019, OpenAI had shifted from nonprofit to "capped-profit" status. Its GPT models became increasingly closed. Microsoft invested over ten billion dollars. The organization founded to counter corporate concentration had become, arguably, an instrument of it. But in 2015, the mission still seemed coherent. The acceleration was making such questions urgent.
Meanwhile, China was not waiting.
Baidu had established its Institute of Deep Learning in 2013. In 2014, Andrew Ng, who had led Google's "cat neurons" project, joined as Chief Scientist. Robin Li, Baidu's CEO, made AI central to the company's identity. PaddlePaddle, Baidu's deep learning framework, launched in 2016; Alibaba and Tencent followed with their own AI initiatives.
But the most dramatic emergence was SenseTime. Founded in October 2014 by Tang Xiao'ou, a professor at the Chinese University of Hong Kong, with 11 graduate students, the company focused on computer vision and facial recognition. In 2014, Tang's team published the first facial recognition system to exceed human accuracy on standard benchmarks. By 2015, SenseTime had nine papers accepted at CVPR. By 2016, they had 16 papers and won first place in ImageNet competitions for object detection, video object detection, and scene analysis.
The Chinese government noticed. The Thirteenth Five-Year Plan, released in 2016, set a goal: China would become a global AI leader by 2030. State investment followed. SenseTime was designated one of China's "AI champions" along with Cloudwalk, Megvii, and Yitu, the "four little dragons" of Chinese AI.
The research output reflected the investment. In 2015, approximately thirty percent of papers at major computer vision conferences had Chinese authors. By 2019-2020, that share had risen to forty percent, overtaking American authorship. A field once dominated by a handful of Western universities and corporations was becoming genuinely global.
But the Chinese AI rise carried shadows. In November 2017, SenseTime set up a "smart policing" venture with a surveillance equipment supplier operating in Xinjiang. By 2019, the New York Times reported SenseTime software was being used in the surveillance of Uyghur populations. American sanctions followed. SenseTime denied the allegations, but the pattern was established: the same facial recognition capabilities that won ImageNet competitions could enable mass surveillance. The technology did not choose its applications.
The achievements kept coming.
In March 2016, AlphaGo, DeepMind's Go-playing system, faced Lee Sedol, one of the world's greatest players, in Seoul. Go was supposed to be decades away from computer mastery. The game's vast search space and reliance on intuitive pattern recognition seemed to require something closer to human intelligence than brute-force calculation.
AlphaGo won four games to one.
The moment captured something that benchmarks could not. A machine had mastered a game that humans had played for thousands of years, that was woven into East Asian culture, that seemed to require creativity and strategic wisdom. The image of Lee Sedol's face as he realized what was happening (the shock, the recalibration) became iconic.
Behind the headline, quieter breakthroughs accumulated. In 2014, Ian Goodfellow and colleagues introduced generative adversarial networks, systems that could generate new images by having two neural networks compete against each other. In 2015, ResNet won ImageNet with a network 152 layers deep, enabled by "skip connections" allowing gradients to flow through very deep architectures. In 2017, a team at Google published "Attention Is All You Need," introducing the Transformer architecture that would eventually underlie GPT and every large language model to follow.
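To give a flavor of the second of these ideas, here is a minimal residual block in the spirit of ResNet, offered as an illustrative sketch rather than the actual 152-layer architecture; the layer sizes and the use of PyTorch are choices made for the example. The forward pass adds the block's input back onto its output, so that even in a very deep stack the gradient has a direct path through the additions.

    import torch.nn as nn

    class ResidualBlock(nn.Module):
        # Illustrative sketch of a skip connection: the block learns a
        # correction F(x) and returns x + F(x), so gradients can flow
        # through the identity path even in very deep networks.
        def __init__(self, channels):
            super().__init__()
            self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
            self.bn1 = nn.BatchNorm2d(channels)
            self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
            self.bn2 = nn.BatchNorm2d(channels)
            self.relu = nn.ReLU(inplace=True)

        def forward(self, x):
            out = self.relu(self.bn1(self.conv1(x)))
            out = self.bn2(self.conv2(out))
            return self.relu(x + out)  # the skip: identity plus residual

Stacking dozens of such blocks is what made networks of a hundred or more layers trainable at all; without the identity path, the learning signal faded before it reached the early layers.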
Each month brought new benchmarks surpassed. Translation quality improved. Speech recognition approached human accuracy. Image generation became photorealistic. The researchers were often as surprised as anyone. Deep learning was not supposed to work this well, this consistently, across this many domains.
But some researchers were noticing something else.
Timnit Gebru had arrived at Stanford's AI Lab to work under Fei-Fei Li, the same researcher who had created ImageNet. In 2015, she pointed out what should have been obvious: the field lacked diversity. The same community building systems to recognize faces and make decisions about humans was overwhelmingly white and male.
In 2017, she co-founded Black in AI and joined Microsoft Research's FATE lab (Fairness, Accountability, Transparency, and Ethics). With Joy Buolamwini of MIT's Media Lab, she conducted research that would become known as Gender Shades. They audited commercial facial analysis systems and found what the benchmarks had hidden: on the worst-performing system, darker-skinned women were misclassified nearly 35 percent of the time, while the error rate for lighter-skinned men was under one percent.
"Computer scientists long assumed that AI systems would become more accurate and objective as they gathered more data," one account noted, "but Gebru soon challenged that theory." The data itself was biased. The benchmarks measured the wrong things. The systems that passed tests with flying colors failed differently on different populationsâand the failures fell along predictable lines of race and gender.
Meredith Whittaker had spent 13 years at Google, where she founded the Open Research group. In 2017, she and Kate Crawford established the AI Now Institute at New York University, "a leading university institute dedicated to researching the social implications of artificial intelligence." The founding aim was to produce analyses grounded in empirical data, focusing on AI's current deployments rather than hypothetical future scenarios.
The distinction mattered. While some worried about superintelligent systems destroying humanity decades hence, Whittaker and her colleagues documented harms happening now: algorithmic bias in hiring systems, predictive policing reinforcing existing patterns of discrimination, facial recognition deployed without consent or oversight.
The critics were initially marginalized. The field was celebrating its achievements; who wanted to hear about failures? But the evidence accumulated. And in 2018, the tensions became impossible to ignore.
The military thread had been there from the beginning.
In 2004, DARPA's Grand Challenge sent autonomous vehicles into the Mojave Desert. No vehicle completed the 150-mile course; Carnegie Mellon's entry traveled farthest before getting stuck on a rock. IEEE Spectrum described the assembled machines as "the motleyest assortment of vehicles assembled in one place since the filming of Mad Max 2."
But DARPA persisted. In 2005, five vehicles completed a 132-mile course through southern Nevada. Stanford's "Stanley," led by Sebastian Thrun, won the two-million-dollar prize. "It was truly the birth moment of the modern self-driving car," Thrun later said.
The 2007 Urban Challenge pushed further: autonomous vehicles navigating simulated city traffic, obeying regulations, avoiding obstacles. The explicit purpose was military. "The longer-term aim," DARPA stated, "was to accelerate development of the technological foundations for autonomous vehicles that could ultimately substitute for men and women in hazardous military operations."
The personnel flowed from challenge to industry. Thrun led Google's self-driving car project, which became Waymo. Challenge veterans founded or joined autonomous vehicle companies across Silicon Valley. By the mid-2010s, the dream of driverless cars had become a corporate raceâbut its origins were in military logistics, supply convoys through hostile territory.
In 2017, Google took on a Pentagon contract called Project Maven, using AI to analyze drone surveillance footage. When employees learned about it, thousands signed an open letter demanding that the company withdraw from the business of war, and roughly a dozen resigned in protest. Meredith Whittaker was among the organizers of the opposition; later that year she also helped lead a walkout of some 20,000 Google workers.
Google eventually withdrew from Project Maven. But the episode revealed the tensions latent in the acceleration. The same neural networks that recognized cats in YouTube videos could identify targets in drone footage. The same facial recognition that unlocked phones could enable surveillance. The same language models that answered questions could generate disinformation.
The technology did not choose. But the institutions deploying it did, and increasingly, the researchers building it were demanding a voice in those choices.
By 2017, deep learning was the dominant paradigm in artificial intelligence. The connectionists had won. The winters were over. Neural networks, trained on massive datasets using GPU computing, had surpassed every alternative on every benchmark that mattered.
But the seeds of conflict were already planted.
OpenAI's mission would drift toward closure and profit. Google's ethics board would form and collapse. The researchers who had questioned bias and demanded accountability found themselves pushed out: Gebru was forced out of Google in 2020; Whittaker left in 2019 after reporting retaliation. The critics would be proven right about harms they had identified years earlier, harms that the field's metrics had never measured.
The acceleration continued. It had to: too much money was flowing, too many possibilities opening, too many careers depending on it. Each year brought new capabilities, new applications, new concerns. The field that had celebrated its breakthroughs was now being asked to reckon with their costs.
The Great Acceleration was never just about what AI could do. It was about what institutions would form to develop it, what priorities would guide it, what harms would be accepted as the price of progress. Those questions, posed in the vertiginous years between 2012 and 2017, would define the decade to come.
The dam had broken. The flood was just beginning.