Chapter 5: The Cybernetic Circle

Before there was artificial intelligence, there was cybernetics.

The word comes from the Greek kybernetes, steersman, helmsman. Norbert Wiener chose it in 1948 to name a new science: the study of feedback, control, and communication in animals and machines. But the ideas had been gathering for years, in conversations that crossed every disciplinary boundary the academy had erected.

Between 1946 and 1953, a remarkable series of conferences convened in New York. The Macy Conferences on Cybernetics brought together neurophysiologists and anthropologists, mathematicians and psychiatrists, engineers and ecologists. They came to ask a question that seemed to belong to no single field: how do systems regulate themselves?

The answers they found shaped the twentieth century. And then, strangely, they were forgotten.



The roster of the Macy Conferences reads like a roll call of mid-century genius.

Warren McCulloch, the neurophysiologist who chaired the meetings, had already collaborated with the mathematician Walter Pitts to produce a landmark 1943 paper showing that networks of simplified neurons could compute any logical function. Norbert Wiener came from MIT, bearing his new science of feedback and control. John von Neumann, architect of the stored-program computer, attended regularly. Claude Shannon, who had just invented information theory, was there. So were Margaret Mead and Gregory Bateson, anthropologists who saw in cybernetics a language for understanding culture and mind together.

The conferences met ten times between 1946 and 1953, almost always in New York. The first meeting, in March 1946, was titled "Feedback Mechanisms and Circular Causal Systems in Biological and Social Systems," a mouthful that captured the core insight. Causation could be circular. Outputs could become inputs. Effects could modify their own causes.

This was not a new idea in engineering. Thermostats had used feedback for decades. But the Macy participants saw that the same principles applied to nervous systems, to societies, to minds. A creature that could sense the results of its own actions, and adjust accordingly, was fundamentally different from one that merely responded to stimuli. Feedback made learning possible. It made adaptation possible. It made intelligence—in some sense yet to be defined—possible.
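The circle is easy to sketch. Below is a minimal paraphrase of the thermostat case in Python, offered only as an illustration; the names, temperatures, and thresholds are invented for the example, not drawn from any particular device.

```python
# A minimal sketch of a negative-feedback loop in the spirit of the thermostat
# example. All values and names here are illustrative assumptions.

def thermostat_step(set_point: float, current_temp: float) -> bool:
    """Decide whether the heater should run this cycle."""
    error = set_point - current_temp      # compare the system's output to its goal
    return error > 0.5                    # act only when the error is large enough


def simulate(set_point: float = 20.0, hours: int = 12) -> None:
    temp = 12.0                           # starting room temperature
    for hour in range(hours):
        heating = thermostat_step(set_point, temp)
        # The room's new temperature feeds back into the next decision:
        temp += 1.5 if heating else -0.5
        print(f"hour {hour:2d}: {temp:4.1f} °C  heater {'on' if heating else 'off'}")


if __name__ == "__main__":
    simulate()
```

The room's temperature is at once the output of the heater and the input to the next decision. That closed loop, rather than the heater or the sensor taken alone, was what the Macy participants wanted a science of.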


In Bristol, England, a neurologist named William Grey Walter was building the implications into metal and wire.

Grey Walter's tortoises—Elmer and Elsie, he called them—were small wheeled robots covered in plastic shells. Each contained a light sensor, a touch sensor, two motors, and a brain of exactly two vacuum tubes. No digital computer, no stored program, no symbolic reasoning. Just analog circuits responding to the world.

And yet they behaved.

In darkness, a tortoise would wander, its steering motor turning the front wheel through slow circles while the propulsion motor pushed it forward. When its light sensor detected a moderate glow, the tortoise would turn toward it and approach. But if the light was too bright (dazzling rather than inviting) the robot would retreat and resume its wandering elsewhere. If it bumped into an obstacle, a touch sensor in the shell would trigger evasive maneuvers. When its battery ran low, it would seek out its charging station, a "hutch" marked by a light, and connect itself to recharge.

Grey Walter called this behavior "free will," with evident pleasure in the provocation. The tortoises were not following stored instructions. They were responding to their environment through continuous feedback loops, and the result looked remarkably like intention.
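What those loops amounted to can be loosely restated as a ranking of responses. The sketch below is a modern paraphrase in Python, not a description of the original hardware: Grey Walter's machines were analog circuits with no program at all, and the sensor names and thresholds here are invented for illustration.

```python
# A loose, hypothetical restatement of the tortoise's priorities as a
# sense-act loop. Thresholds and sensor names are illustrative assumptions.

import random
from dataclasses import dataclass


@dataclass
class Senses:
    light: float      # 0.0 (darkness) to 1.0 (dazzling)
    bumped: bool      # shell touch sensor triggered
    battery: float    # 0.0 (flat) to 1.0 (full)


def choose_action(s: Senses) -> str:
    if s.bumped:
        return "back up and turn away"          # obstacle avoidance overrides everything
    if s.battery < 0.2:
        return "steer toward the hutch light"   # seek the charging station when low
    if s.light > 0.8:
        return "retreat from the glare"         # too bright is repellent
    if s.light > 0.3:
        return "approach the light"             # a moderate glow is attractive
    return "wander in slow circles"             # default exploratory behavior


# One pass through the loop with made-up sensor readings:
print(choose_action(Senses(light=random.random(), bumped=False, battery=0.9)))
```

The point of the paraphrase is what it leaves out: there is no plan, no map, no model of the world, only a ranking of responses to whatever the sensors report at each moment.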

In 1951, Elmer and Elsie performed daily at the Festival of Britain, drawing millions of visitors. Grey Walter placed a light on the "nose" of a tortoise and watched as it observed itself in a mirror, caught in a loop of sensing and responding to its own illumination. The effect, he wrote, was "like a clumsy Narcissus."

Grey Walter stressed that analog electronics—continuous, graded, responsive—better simulated brain processes than the digital approach his contemporaries were pursuing. He was swimming against the tide. Turing and von Neumann had placed their bets on discrete symbols and logical operations. But Grey Walter's tortoises demonstrated something the digital approach would later have to rediscover: complex behavior could emerge from simple circuits interacting with a real environment. You did not need to program every response. You needed to build a system that could learn from the world.


Meanwhile, in Moscow, cybernetics was being denounced as a bourgeois pseudoscience.

The Stalinist suppression of cybernetics is one of the stranger episodes in the history of science. In 1950, Boris Agapov, science editor of the Soviet Literary Gazette, published a scornful attack on Western enthusiasm for "thinking machines." The idea of using computers to process economic information was mocked. Norbert Wiener was singled out for contempt. Over the following years, Soviet publications called cybernetics a "reactionary pseudoscience," a "tool of capitalist reaction," and, in one memorable phrase, "capitalism's whore."

The attack had nothing to do with the actual content of cybernetics. Soviet ideologues needed American targets to meet propaganda quotas, and cybernetics—new, technical, associated with computers—fit the bill. The 1954 Dictionary of Philosophy labeled it definitively: reactionary pseudoscience.

Then Stalin died.

Within two years, a military computer scientist named Anatoly Kitov stumbled upon Wiener's Cybernetics in a restricted library and realized instantly that "cybernetics was not a bourgeois pseudo-science, as official publications considered it at the time, but the opposite—a serious, important science." Kitov and the mathematician Alexey Lyapunov began touring research institutes, quietly evangelizing. In 1955, an article in Questions of Philosophy officially rehabilitated the field.

Under Khrushchev, cybernetics became fashionable—indeed, became an umbrella for other suppressed research. "Physiological cybernetics" sheltered non-Pavlovian neuroscience. "Cybernetic linguistics" protected structural linguistics. "Biological cybernetics" gave cover to genetics, still recovering from the Lysenko disaster. The Council on Cybernetics, established in 1959, coordinated research across the Soviet Union.

But the suppression had cost precious years. By the time Soviet cybernetic euphoria peaked in the 1960s, it had already subsided in the West. The delay, some historians argue, contributed to the Soviet Union's failure to develop its computer industry competitively. The pattern would repeat: ideological interference in science produced not immediate catastrophe but slow, accumulating disadvantage.


Norbert Wiener watched all of this with growing unease—not about the Soviets, but about his own country.

During the war, Wiener had worked on automated anti-aircraft systems. His expertise in random processes and prediction proved essential for developing mechanisms that could track enemy planes and aim guns ahead of their flight paths. It was important work, and he did it well.

Then Hiroshima changed everything.

In the autumn of 1945, Wiener drafted a resignation letter to MIT's president. He intended to leave science entirely, he wrote, because scientists had become "the armorers of the military" with no control over how their work was used. He did not send the letter. But he never forgot the impulse.

In 1947, a guided-missile company requested access to Wiener's wartime research. He refused—and publicized his refusal. In an article for the Atlantic Monthly titled "A Scientist Rebels," he declared that he would no longer allow any of his work to be used by "irresponsible militarists" to create weapons. He urged other scientists to join him.

This was remarkable. Wiener had helped found cybernetics; now he was warning that cybernetics could be dangerous. His books—Cybernetics (1948), The Human Use of Human Beings (1950), God and Golem, Inc. (1964)—all carried warnings about the social consequences of the technologies he had helped enable.

He foresaw automation displacing workers. He worried about machines influencing human relationships. He argued that feedback and control, applied to society, could erode human autonomy and dignity. "Cybernetics has unbounded possibilities for good and evil," he wrote.

Near the end of his life, Wiener told an interviewer that scientists and engineers should stop being "amoral gadget worshipers" and instead imagine the consequences of their work far into the future. When he died in 1964, Newsweek remembered him as "the father of cybernetics, and the watchdog of automation."

The watchdog role eventually passed to others. But Wiener had been first.


McCulloch and Pitts had shown that networks of neurons—simplified, idealized—could compute any logical function. This was profound: it suggested that brains were, in some sense, computers. But it also raised a question that would divide the emerging field. Should we study the neurons themselves, or the logic they implement?
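The 1943 result is easier to feel than to state abstractly. A McCulloch-Pitts unit simply fires when its weighted inputs cross a threshold, and with a handful of such units the familiar logical functions fall out. The sketch below is a modern toy version; the particular weights and thresholds are illustrative choices, not values from the paper.

```python
# A miniature version of the McCulloch-Pitts idea: a "neuron" outputs 1 when
# its weighted inputs reach a threshold. Weights and thresholds below are
# illustrative choices, not taken from the 1943 paper.

def neuron(inputs, weights, threshold):
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

def AND(a, b):  return neuron([a, b], [1, 1], threshold=2)
def OR(a, b):   return neuron([a, b], [1, 1], threshold=1)
def NOT(a):     return neuron([a],    [-1],   threshold=0)

# No single unit can compute XOR, but a small two-layer network can:
def XOR(a, b):  return AND(OR(a, b), NOT(AND(a, b)))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", AND(a, b), OR(a, b), XOR(a, b))
```

Stack enough of these units together and, as McCulloch and Pitts showed, any logical function can be built.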

The cybernetic tradition leaned toward the neurons. Grey Walter built analog circuits. Wiener studied feedback in continuous systems. The Macy Conferences included physiologists and psychiatrists alongside mathematicians.

But a different approach was gathering momentum. If neural networks implement logic, why not study logic directly? Why not build systems that manipulate symbols according to rules, bypassing the messy biology altogether? This was the approach that would, in 1956, get a name: artificial intelligence.

The split was not absolute. Neural network research continued through the 1950s and 1960s, though it would later enter a long winter. But institutionally, "AI" won. It was easier to define, easier to teach, easier to fund. ARPA money flowed to AI laboratories at MIT and Stanford. Cybernetics, with its sprawling interdisciplinarity, struggled to find a bureaucratic home.

By the 1970s, cybernetics had largely disappeared as a distinct field. Its insights scattered into systems theory, cognitive science, control engineering, complexity studies. The Macy Conferences became a historical curiosity.


What was lost?

Cybernetics understood feedback—circular causation, systems that sense their own effects and adjust. Early AI focused on feed-forward processing, inputs transformed into outputs without loops.

Cybernetics understood embodiment. Grey Walter's tortoises moved through real space, sensing and responding to actual environments. AI retreated into disembodied symbol manipulation, worlds of pure logic divorced from physics.

Cybernetics understood context. Margaret Mead and Gregory Bateson brought anthropology to the table, insisting that intelligence was always situated in culture and environment. AI often treated intelligence as a property of isolated minds.

Cybernetics understood emergence. The Macy participants studied how complex behavior could arise from simple interactions—how order could self-organize without central control. AI, at least in its classical form, sought explicit programming of knowledge.

These emphases would have to be rediscovered. In the 1980s and 1990s, researchers like Rodney Brooks—directly inspired by Grey Walter's tortoises—would argue that AI needed bodies and environments. The study of emergence would become complexity science. The critique of disembodied cognition would drive embodied and situated approaches.

The cybernetic circle had understood things that would take decades to resurface. Its eclipse was not a triumph of better ideas but of better funding, clearer boundaries, and a name that promised more than feedback loops and steering mechanisms.

The word "intelligence" carries weight. "Cybernetics" sounds like plumbing.


In New York, the Macy Conferences met for the last time in 1953. The participants dispersed to their separate disciplines. The transcripts gathered dust.

In Bristol, Grey Walter continued building increasingly sophisticated machines. His Machina docilis, the "teachable machine," learned Pavlovian conditioning, associating sounds with lights until it would move toward a whistle even in darkness.

In Cambridge, Massachusetts, Wiener kept writing his warnings, though fewer and fewer listened.

And in Hanover, New Hampshire, a young mathematician named John McCarthy was beginning to think about what he called, in a 1955 grant proposal, "artificial intelligence."

The name stuck. The field formed. The cybernetic vision (holistic, interdisciplinary, cautionary) faded into the background.

But ideas have a way of returning. The questions the cybernetic circle asked—about feedback and embodiment, about emergence and ethics, about the systems we build and the systems that build us—would prove harder to escape than the founders of AI imagined.

The steersmen were waiting to be remembered.