Chapter 6: The Summer Where It Began

The proposal was audacious.

"We propose that a 2-month, 10-man study of artificial intelligence be carried out during the summer of 1956 at Dartmouth College in Hanover, New Hampshire. The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it."

Every aspect. Any feature. In principle. One summer.

The four men who drafted this proposal—John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon—were not naive. They were mathematicians and engineers who understood complexity. But they shared a conviction: the problem of intelligence was ripe for attack, and they were the ones to attack it.

In September 1955, they submitted their proposal to the Rockefeller Foundation. The following summer, in a math building on the Dartmouth campus, the field of artificial intelligence would be born—or at least, named.



The legend says it was a founding moment. Ten brilliant minds gathered for two months to launch a new science.

The reality was more scattered.

Ray Solomonoff arrived around June 18, 1956, possibly with Tom Etter. McCarthy was already in Hanover; he had an apartment there. Minsky came. Shannon attended for parts of the summer. But most participants drifted in and out for days or weeks rather than staying the duration. Trenchard More came for three weeks to replace Rochester. Two invitees, Donald MacKay and John Holland, never showed up at all.

Of the participants, only three stayed the full time: Solomonoff, Minsky, and McCarthy himself.

There was no unified agenda. The group had the top floor of the Dartmouth mathematics building to themselves, and most weekdays someone would lead a discussion of their ideas—or, more often, conversation would meander freely. It was, in the words of one account, "essentially an extended brainstorming session."

One day, Oliver Selfridge, Minsky, McCarthy, Solomonoff, and Trenchard More gathered around a dictionary on a stand to look up the word "heuristic." They thought it might be useful. It was that kind of summer: informal, exploratory, catching ideas as they surfaced.

No breakthrough papers emerged. No systems were built. No grand manifesto was produced.


And yet something happened at Dartmouth that mattered enormously. It was not a discovery. It was a name.

Before 1956, researchers working on thinking machines used various terms. "Automata studies." "Complex information processing." "Machine intelligence." Norbert Wiener had already claimed "cybernetics" for his broader science of feedback and control. The new field needed its own banner.

McCarthy chose "artificial intelligence."

The phrase was a claim, not a description. Nothing artificial had yet demonstrated intelligence in any meaningful sense. But McCarthy understood that names shape perception. "Artificial intelligence" declared ambition. It announced that the goal was not merely to simulate some narrow function—calculation, pattern-matching—but to replicate intelligence itself. The full thing. Every aspect. Any feature.

The term also accomplished something strategic: it drew a boundary around the new field, separating it from cybernetics. Wiener's science was broader, more interdisciplinary, more concerned with feedback loops and embodiment. "Artificial intelligence" would focus on symbolic reasoning, on programs that manipulated representations according to rules. The name implied a methodology.

Some participants reportedly resisted. "Machine intelligence" was more modest and perhaps more accurate. But McCarthy's term stuck. By the time the summer ended, "artificial intelligence" had entered the vocabulary. A field had a name before it had many results.


The optimism at Dartmouth was breathtaking by later standards.

The proposal had suggested that "a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer." Problems like language understanding, learning, creativity, abstraction. Two months, ten people.

This was not cynical grant-writing. The founders genuinely believed the core problems of intelligence might yield to concerted attack. Herbert Simon, who visited Dartmouth and would soon become a central figure in AI, would go on to declare that within twenty years machines would be capable of doing any work a human could do.

The confidence rested on a philosophical commitment: that intelligence was, at root, symbol manipulation. If you could represent knowledge in formal structures and define rules for transforming those structures, you could replicate thought. The brain was mysterious, but the mind—the logical operations of reasoning—could be captured in programs.

This was the physical symbol system hypothesis, though it would not be named until later. It was an article of faith at Dartmouth, and it would guide the field for decades. It would also, eventually, run into walls that required rethinking.
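
The commitment is concrete enough to sketch. What follows is a deliberately tiny illustration in modern Python, not a reconstruction of anything written in 1956: facts are symbolic structures, and a single rule, modus ponens, transforms them into new ones. The facts and names are invented for the example.

    # A toy symbol system: knowledge as symbolic structures, reasoning
    # as a rule that transforms them. Illustrative only.
    def forward_chain(facts, implications):
        """Apply modus ponens (from p and p -> q, derive q) to a fixpoint."""
        derived = set(facts)
        changed = True
        while changed:
            changed = False
            for premise, conclusion in implications:
                if premise in derived and conclusion not in derived:
                    derived.add(conclusion)  # a newly derived structure
                    changed = True
        return derived

    facts = {"socrates is a man"}
    implications = [("socrates is a man", "socrates is mortal")]
    print(forward_chain(facts, implications))
    # prints a set containing both the given fact and the derived one

On the Dartmouth view, what separated such a toy from genuine thought was only scale: richer structures, more rules, and heuristic search over the space of derivations.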


Within a few years of Dartmouth, the money began to flow.

In 1963, MIT received a $2.2 million grant from the Advanced Research Projects Agency (ARPA, later DARPA), the Pentagon research agency founded in 1958. The money funded Project MAC, which absorbed the AI group that Minsky and McCarthy had built. Similar grants went to Carnegie Mellon (then the Carnegie Institute of Technology), where Newell and Simon worked, and to Stanford, where McCarthy had established a new laboratory.

The funding was remarkably unrestricted. J.C.R. Licklider, who directed ARPA's Information Processing Techniques Office, believed in funding people rather than projects. Researchers could pursue whatever directions interested them; military applications were not required, or even particularly expected. Licklider wanted to nurture fundamental research, trusting that useful applications would eventually emerge.

This freedom would not last. The Vietnam War shifted priorities. The Mansfield Amendment of 1969 required that defense-funded research show a "direct and apparent relationship to a specific military function," and in 1972 ARPA added "Defense" to its name. The freewheeling exploration of the 1960s gave way to targeted projects: autonomous tanks, battle management systems, military applications with clear objectives.

But the original entanglement was set at Dartmouth and its aftermath. Artificial intelligence emerged from military funding, and military interests shaped what the field chose to pursue. The emphasis on symbolic reasoning over embodied robotics, on command-and-control systems over distributed intelligence, on automation over human augmentation—these priorities were not neutral. They reflected what the funders hoped to achieve.

Wiener had warned about this. The cybernetic elder, who had refused military funding since Hiroshima, saw the new field taking a path he found troubling. His warnings went largely unheeded. The money was too good, the problems too interesting.


There is another way to read the Dartmouth story: as a record of absences.

In 1946, 10 years before the summer workshop, six women programmed ENIAC, the first general-purpose electronic computer. Fran Bilas, Jean Bartik, Ruth Lichterman, Kay McNulty, Betty Snyder, and Marlyn Wescoff developed foundational programming techniques without prior languages or software to guide them. They invented their tools as they went.

When ENIAC was presented to the press, the six women operators were hidden from view. Programming was still considered clerical work, "subprofessional" women's labor, not worthy of recognition.

None of them were at Dartmouth.

Grace Hopper had developed the first compiler in the early 1950s, translating human-readable code into machine instructions; her work helped make practical the high-level programming languages on which AI research would come to depend. She was not at Dartmouth.

Dorothy Vaughan, an African-American mathematician who led a computing section at NACA (NASA's predecessor), would soon master the programming of the IBM mainframes on which early AI systems ran. She was not at Dartmouth.

The men who gathered in Hanover were theorizing about intelligence while the women who made their computers work remained invisible. This was not unique to AI; it characterized computing broadly, and much of science besides. But it meant that the founding mythology of the field excluded the people who had built its material foundations.

The pattern would persist. Women would do programming and implementation, often uncredited. Later researchers like Karen Spärck Jones would make fundamental contributions (her work on inverse document frequency underlies modern search engines) and receive recognition only belatedly. The critique of AI's blind spots would come substantially from women: Timnit Gebru, Emily Bender, Joy Buolamwini, and others who noticed what the field's founders had not thought to ask.
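
To give a sense of what that contribution was: inverse document frequency weights a query term by how rare it is across a collection of documents, so distinctive words count for more than ubiquitous ones. A minimal sketch in Python, using the standard textbook form rather than Spärck Jones's original notation:

    # Inverse document frequency (Spärck Jones, 1972): terms appearing
    # in few documents get higher weight. Textbook form; real search
    # engines use many variants.
    import math

    def idf(term, documents):
        df = sum(1 for doc in documents if term in doc)  # document frequency
        return math.log(len(documents) / df) if df else 0.0

    docs = [{"machine", "intelligence"}, {"machine", "learning"}, {"summer"}]
    print(idf("machine", docs))  # common term, lower weight (~0.41)
    print(idf("summer", docs))   # rare term, higher weight (~1.10)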

The absences at Dartmouth were not a conspiracy. They were the ordinary operation of a society that did not consider the women behind these contributions worth inviting to the table where the future was being named. The founders saw themselves as launching a science. They did not see the labor that made their science possible.


Dartmouth's importance lies less in what happened there than in what happened after.

The personal connections formed that summer would shape the field for decades. McCarthy went to Stanford and built an AI laboratory. Minsky stayed at MIT and built another. Newell and Simon anchored Carnegie Mellon. The three centers that would dominate AI research for a generation were seeded by relationships forged in that Hanover summer.

The shared vocabulary—"artificial intelligence" itself, but also "heuristics," "search," "representation"—gave researchers a common language. They could publish in the same journals, attend the same conferences, compete for the same grants. A field requires boundaries, and Dartmouth drew them.

And the optimism, even when misplaced, drove effort. If you believe the problem is nearly solved, you work harder. The early AI researchers threw themselves at language and learning and reasoning with an intensity that produced real advances, even as the grand goals receded.

Twenty years after Dartmouth, Simon's prediction had not come true. Machines could not do everything humans could do intellectually. They could not understand language with human fluency, or learn from experience as children learn, or reason about the world with common sense. The problems that had seemed ripe for a summer's attack turned out to be problems for generations.

But the field endured. The name stuck. The institutions grew. And the dream—every aspect of intelligence, any feature, in principle describable, in practice buildable—continued to animate researchers long after the founders had died.

The summer where it began was only a beginning. The work was just starting.