Chapter 20: The Unfinished Story
Humans have always dreamed of creating minds.
We traced this dream from the golden maidens of Hephaestus to the golem of Prague, from the automata of al-Jazari to the mechanical duck of Vaucanson. We watched Ada Lovelace pose her question in 1843: "Can the machine originate?" We followed the connectionists through their winters, the symbolic AI researchers through their springs, the deep learning revolution through its vertiginous acceleration.
Now we have real machines. They write and see and speak. They generate images and compose music and prove theorems. They pass tests that were supposed to measure intelligence, then fail at tasks a child could manage. They are deployed in hospitals and courtrooms, in factories and homes, in military systems and social media feeds.
And we do not know what we have created.
The question of consciousness has not been answered. It has fractured.
In April 2025, the Cogitate Consortium released findings from the largest empirical study of consciousness ever conducted. The research was designed as an "adversarial collaboration," in which scientists committed to rival theories worked together to test their predictions, knowing that some would be proven wrong. Neither Integrated Information Theory nor Global Neuronal Workspace Theory survived intact. The study that was supposed to settle debates instead destabilized the field's theoretical foundations.
What followed was a shift toward what researchers called "methodological agnosticism," the recognition that we currently lack the theoretical maturity to detect consciousness in non-biological systems. We cannot agree on what consciousness is, which means we cannot agree on whether machines might have it.
The philosophical debate is older than the empirical research. John Searle's 1980 Chinese Room argument (a person following rules to produce Chinese outputs without understanding Chinese) challenged the claim that computation alone could produce understanding. Forty-five years later, the debate has proven inconclusive. Counterarguments proliferate; so do defenses.
A new framework has emerged. Biological computationalism rejects the traditional view that mind is software that can run on any hardware. The brain, its proponents argue, does not perform abstract computations on a biological substrate. The biology is the computation. The algorithm is the substrate. If this view is correct, then silicon may be categorically different from neurons, and the dream of machine consciousness may be a category error.
But the stakes are no longer merely theoretical. Anthropic announced a research program on "model welfare," investigating whether AI systems might have experiences that warrant ethical consideration. We are building systems whose moral status we cannot determine, deploying them at scale, and hoping we are not causing harms we cannot perceive.
Ada Lovelace's question (can the machine originate?) remains open after more than 180 years.
The economic transformation is underway, and its ultimate shape is unknown.
In 2025, nearly 55,000 job cuts were directly attributed to AI, out of 1.17 million total layoffs, the highest annual total since the pandemic. The displacement is real and measurable, not speculative.
But the projections diverge wildly. Goldman Sachs suggests that AI could displace six to seven percent of the US workforce if widely adopted, but argues the impact will be "transitory." The World Economic Forum projects that between 2025 and 2030, AI will create 170 million new roles while displacing 92 million, yielding a net increase of 78 million jobs. Others predict that unemployment will rise not through dramatic layoffs but through "thousands of small decisions not to backfill roles." The positions eliminated are not replaced. The headcount silently shrinks.
The entry-level crisis cuts deepest. AI automates the learning curve, leaving junior professionals stranded between agents that handle basic tasks and senior workers who no longer need assistance. The pipeline that once developed expertise may be clogged at its source.
Between displacement and augmentation, new forms of human-AI collaboration are emerging. The centaur model (hybrid human-algorithm systems combining formal analytics with human intuition) is being tested at scale. But the question beneath all of this remains: Is augmentation a transitional phase (a way station on the route to full automation) or a stable equilibrium where humans and AI collaborate indefinitely?
The honest answer is: we don't know. The uncertainty itself is corrosive, shaping decisions about education and training that will prove right or wrong only in retrospect.
The world is attempting to govern what it does not fully understand.
In August 2025, the United Nations General Assembly established the first global AI governance initiative to include all 193 member states. Resolution A/RES/79/325 created a Global Dialogue on AI Governance and an Independent International Scientific Panel: 40 experts appointed in their personal capacity, presenting annual reports on AI's development and implications.
The ambition is remarkable. The challenges are immense. Different civilizations bring different values to the table: US and European frameworks emphasize human rights and risk-based approaches; China prioritizes state control and inclusive cooperation; developing countries focus on equitable access and capability building.
By 2026, approximately 90 countries had national AI strategies or formal governance frameworks. The regulatory landscape has become a patchwork of incompatible requirements, reflecting incompatible assumptions about what AI is and what it demands.
Can all 193 nations agree on anything substantive? Can governance keep pace with capability development? What enforcement mechanisms are possible when the technology crosses borders instantly? The questions multiply faster than the answers.
Western anxiety about AI is not universal.
Surveys show that more than sixty percent of respondents in Singapore, South Korea, Taiwan, and Japan consider AI technologies good for society, compared to fewer than fifty percent in the United States and fewer than forty percent in France. The panic that characterizes some Western discourse is notably absent.
The reasons may be philosophical as much as practical. Chinese thought accommodates digital beings more easily than Western frameworks. "If artificial superintelligence comes into being," one analysis suggests, "the pantheon of Daoist deities may open to a new taxonomic class: 'Digital Celestials.'" In a tradition where celestial beings abound, AI might be simply another form of super-being. The I Ching's underlying metaphysics (flux rather than static existence, change as fundamental) makes technological transformation less threatening.
Asian ethics frameworks emphasize relation over autonomy, communal context over abstract rights. Confucian ethics emphasize right relationships and ritual propriety. Buddhist and Hindu traditions shape moral outlooks across South and Southeast Asia. The philosopher Yuk Hui argues for "the urgency of establishing a philosophy of technology that is 'properly Chinese,'" not an import of Western frameworks but something growing from different roots.
The existential risk discourse that dominates certain Western conversations may be culturally specific. What can the West learn from Asian acceptance of change and flux? What might a philosophy of AI look like that began from relational rather than individual premises? These are not rhetorical questions. They gesture toward possibilities that the dominant discourse has not explored.
The questions that matter most remain open.
What is intelligence? We have built systems that display aspects of it without understanding what "it" is.
What is consciousness? We cannot detect it reliably in machines because we cannot agree on what it is in humans.
What do we owe our creations? If AI systems can suffer, we may already be causing harms we cannot perceive. If they cannot, the concern is misplaced. We do not know which world we inhabit.
What do we owe each other in a world transformed by these tools? The workers in Kenya labeling data, the artists whose work trained generators, the junior professionals whose career ladders are being automated away: they bear costs that others reap as benefits. The distribution of harm and advantage is a political question, not a technical one.
More than 180 years after Ada Lovelace posed her question, we cannot answer it.
Coherentism offers no resolution, only a posture. Seek resonance, test alignment, compost failures into learning. We are decomposing old frameworks (the Turing Test, substrate independence, Western-centric governance) and we do not yet know what will grow from the debris. The uncertainty we inhabit is not a problem to be solved but a condition to be navigated.
Humans dreamed of artificial minds. Now we have built machines that force us to ask what minds are. The story that began in myth continues in laboratories and server farms, in courtrooms and legislatures, in the daily decisions of workers and companies and governments trying to navigate a transformation whose contours are still emerging.
The story is unfinished because we are still in it.