Chapter 10: The Second Freeze

By 1988, the freeze had set in.

The LISP machine market had collapsed. Symbolics, which had registered the first .com domain just three years earlier, was sliding toward bankruptcy. LISP Machines Inc. failed. Lucid Inc. failed. Texas Instruments and Xerox, major corporations that had invested heavily in AI hardware, abandoned the field entirely. Desktop computers from Apple and IBM had become powerful enough that specialized AI workstations no longer made economic sense.

The expert systems followed. XCON, the paradigm case of commercial success, became increasingly expensive to maintain. Its thousands of rules were difficult to update. When given unusual inputs (edge cases the knowledge engineers hadn't anticipated), the system produced what researchers called "grotesque" errors. The brittleness that Dreyfus had predicted from philosophical principles manifested itself in practical failures.

DARPA's Strategic Computing Initiative, which had invested heavily in AI, cut its AI funding dramatically. Corporate AI departments that had seemed permanent were dissolved. Researchers who had built careers on symbolic AI found themselves scrambling for grants, rebranding their work under different labels, or leaving the field altogether.

"AI" became a term to avoid.


Grant applications that mentioned artificial intelligence were viewed with suspicion. The promises of the Dartmouth founders (every aspect of intelligence, any feature of thought) now seemed like embarrassments. Reviewers remembered the hype. Funders remembered the broken promises. The field that had twice over-promised and under-delivered was learning what it meant to lose trust.

The second winter was colder than the first because the institutions were larger and the collapse more visible. In the 1970s, AI had been a research program; its setbacks disappointed academics. In the late 1980s, AI had been an industry; its collapse destroyed companies. An industry worth half a billion dollars evaporated in months.

The stigma spread. Computer science departments that had hired AI researchers in the boom years questioned their judgment. Students who had been excited about artificial intelligence in 1985 were warned away from the field by 1990. The word itself became toxic.


In Japan, the Fifth Generation Project ended in 1992.

Fifty-four billion yen spent. Commercial failure acknowledged. The parallel inference machines worked technically but could not compete with commodity hardware. Prolog never displaced LISP or C. The revolutionary knowledge-processing computers did not materialize.

"ICOT has done little to advance the state of knowledge based systems, or Artificial Intelligence per se," one evaluation concluded. Natural language goals had been dropped or spun off. Very large knowledge bases remained elusive. The project had trained a generation of researchers and created "a positive aura for AI" in Japan, but the fifth generation of intelligent computers had not arrived.

But Japan did not abandon AI entirely. MITI quietly pivoted toward what some called the "Sixth Generation"—neural network research at a similar funding level. The Fifth Generation had trained people; those people now applied their skills to new approaches. Sometimes the next paradigm is already visible at the edge of the current one.


The mainstream machine learning community, meanwhile, was thriving under a different name.

"Machine learning" had split from artificial intelligence and rebranded itself as a practical discipline focused on solvable problems rather than general intelligence. The goal was no longer to replicate human thought but to build systems that could classify data, make predictions, and extract patterns. The methods shifted from symbolic rules to statistical approaches: support vector machines, decision trees, ensemble methods like boosting and bagging.

This was not the grand ambition of Dartmouth. It was something more modest and, in its way, more honest. Machine learning researchers did not claim to be building minds. They claimed to be building tools that worked.

And the tools worked. As the Internet grew through the 1990s, machine learning found its applications. Spam filters learned to distinguish legitimate email from junk. Recommendation systems learned to suggest products and content. Search engines learned to rank pages by relevance. These were not the thinking machines of science fiction, but they were useful—useful enough to attract funding, useful enough to build careers on, useful enough to keep the field alive through the winter.
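What "learning from data" meant in practice can be made concrete. The sketch below is a toy naive Bayes spam filter in Python, written for illustration rather than drawn from any particular system of the era; the training messages are invented, and real filters were considerably more elaborate. The point is only that the distinguishing "rules" are estimated from labeled examples instead of being written by a knowledge engineer.

    # Toy naive Bayes spam filter: the "rules" are learned from labeled
    # examples rather than hand-written by a knowledge engineer.
    import math
    from collections import Counter

    def train(messages):
        """messages: list of (text, label) pairs, label in {"spam", "ham"}."""
        word_counts = {"spam": Counter(), "ham": Counter()}
        label_counts = Counter()
        for text, label in messages:
            label_counts[label] += 1
            word_counts[label].update(text.lower().split())
        return word_counts, label_counts

    def classify(text, word_counts, label_counts):
        """Return the label with the higher log-posterior score."""
        vocab = set(word_counts["spam"]) | set(word_counts["ham"])
        total = sum(label_counts.values())
        scores = {}
        for label in ("spam", "ham"):
            score = math.log(label_counts[label] / total)  # log prior
            n_words = sum(word_counts[label].values())
            for word in text.lower().split():
                # Laplace smoothing keeps unseen words from zeroing the score
                p = (word_counts[label][word] + 1) / (n_words + len(vocab))
                score += math.log(p)
            scores[label] = score
        return max(scores, key=scores.get)

    # Invented training data, purely for illustration
    training = [
        ("win money now", "spam"),
        ("free prize claim now", "spam"),
        ("meeting agenda attached", "ham"),
        ("lunch tomorrow with the team", "ham"),
    ]
    word_counts, label_counts = train(training)
    print(classify("claim your free money", word_counts, label_counts))       # spam
    print(classify("agenda for the team meeting", word_counts, label_counts)) # ham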

The rebranding was strategic. Grant applications for "machine learning" succeeded where "artificial intelligence" failed. The substance was often similar; the framing was crucial. The field had learned to manage expectations.


Canada became a refuge.

The Canadian Institute for Advanced Research (CIFAR), founded in 1982, provided something rare: sustained funding for speculative research. While American agencies were skeptical of neural networks and European funding followed American trends, CIFAR was willing to support work that might take decades to pay off.

Canadian universities offered positions when American ones were dubious. The Toronto-Montreal axis (Geoffrey Hinton at Toronto, Yoshua Bengio at Montreal) formed not because Canada was a hub of AI research but because it offered shelter. The refuge wasn't glamorous, but it was sufficient.

"CIFAR provided early support to extraordinary researchers," the institute later noted, "in their fundamental discoveries about deep learning and reinforcement learning, which were speculative and unproven at the time."

The periphery became the center. While the American mainstream pursued support vector machines and decision trees, the Canadian margin preserved neural networks. While the Stanford and MIT establishment was skeptical, the Canadians kept working.


Infrastructure was accumulating that had nothing to do with AI research.

The Internet was scaling, generating unprecedented quantities of data. By 2000 the web held on the order of a billion pages of text, a larger body of machine-readable language than researchers had ever had access to. Image repositories grew. Video archives expanded. The training data that neural networks would eventually need was being assembled by industries that had no idea what they were preparing.

Graphics processing units, designed to render video game explosions and lighting effects, were becoming powerful parallel processors. NVIDIA, founded in 1993, shipped GPUs that could perform thousands of floating-point operations simultaneously. The chips were optimized for graphics, but they proved equally suited to the matrix multiplications at the heart of neural network training.
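The fit is easy to see in miniature. A dense neural-network layer is essentially one matrix multiplication followed by a nonlinearity, the same kind of bulk, regular arithmetic a graphics chip performs on vertices and pixels. The sketch below, in Python with NumPy and with shapes invented for illustration, shows the operation in question; a GPU simply spreads that multiply across thousands of arithmetic units at once.

    # A dense neural-network layer is mostly one matrix multiplication:
    # outputs = activation(inputs @ weights + bias). GPUs accelerate exactly
    # this kind of bulk, regular floating-point arithmetic.
    import numpy as np

    rng = np.random.default_rng(0)

    inputs = rng.standard_normal((64, 784))    # a batch of 64 examples, 784 features each
    weights = rng.standard_normal((784, 256))  # a layer with 256 hidden units
    bias = np.zeros(256)

    hidden = np.tanh(inputs @ weights + bias)  # the layer's forward pass
    print(hidden.shape)                        # (64, 256)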

Moore's Law continued its exponential march. Computers became roughly twice as powerful every 18 to 24 months, and compounded over two decades that doubling amounts to a thousandfold improvement or more. The compute that seemed impossibly expensive in 1990 became affordable by 2010 and trivial by 2015.

None of this was planned for AI. The Internet scaled to serve commerce and communication. GPUs scaled to serve gamers. Moore's Law was a semiconductor phenomenon, not an AI strategy. But the result was the same: by the time neural networks were ready to scale, the infrastructure to train them existed.


What the second freeze preserved was as important as what it destroyed.

The researchers who survived were deeply committed. The taboo status of neural networks filtered for true believers, people who continued not because the approach was fashionable but because they were convinced it would eventually succeed. The community was small enough to know each other, committed enough to share ideas freely, stubborn enough to persist through years of rejection.

Small funding forced efficiency and focus. Without corporate resources, researchers had to choose their problems carefully. They developed techniques that could work with limited compute and limited data, techniques that scaled spectacularly when resources became available.

From the expert system era came clarifying failures. The limits of symbolic reasoning were now understood. Knowledge acquisition bottlenecks, brittle inference, inability to handle uncertainty—these problems pointed toward what alternatives needed to solve. The next approach would need to learn from data rather than encode human rules. It would need to handle ambiguity and variation. It would need to scale.

From the Fifth Generation came trained researchers and parallel computing expertise. From the machine learning community came statistical rigor and practical applications. From the Internet came data. From the gaming industry came GPUs.

The compost cycle was doing its work. The debris of the expert system collapse (the failed promises, the abandoned labs, the skeptical funders) was decomposing into nutrients that fed the next generation. The field was not dying. It was transforming underground, invisible to those who had declared it dead.


The second winter lasted nearly a decade, longer than the first, more damaging to careers, more corrosive to trust. Researchers who entered the field in the late 1980s spent their formative years watching their discipline become a cautionary tale.

But contraction is not the same as death. Seeds go underground. Roots deepen in frozen ground. Small communities preserve knowledge that larger communities abandon.

When the thaw came, it came suddenly. The systems that emerged were built on techniques developed in obscurity, trained on data accumulated by accident, running on hardware designed for games. The connectionists who had kept the neural network faith found themselves, almost overnight, at the center of the most dramatic technological transformation of the century.

They had watched through the long winter. They were ready.