Chapter 18: The Workers Behind the Curtain
Every AI system depends on human labor. The question is: whose labor, under what conditions, for what pay?
The companies that build large language models speak of neural networks that learn from data, of algorithms that improve through training, of systems that become more capable through scale. The language is technical and abstract, evoking processes that happen inside computers. But at every stage of the pipeline, human hands touch the work. Someone collected the data. Someone labeled it. Someone ranked the outputs. Someone filtered the content that would poison the model if it remained. The artificial intelligence that seems to emerge from pure computation is, in fact, produced by a vast and largely invisible human workforce.
They work in Nairobi and Manila, in Lahore and Caracas and Bogotá. They are often well-educated, often unemployed before they found this work, often grateful for any income at all. They earn two dollars an hour while the companies they serve are valued in the billions. They are the human infrastructure of artificial intelligence, and their story has only recently begun to be told.
Sama, the company formerly known as Samasource, operated from the San Francisco Bay Area and employed over 3,000 workers in Kenya. Its clients included OpenAI, Meta, and other major AI developers. The work varied: data labeling, image classification, text annotation. Workers sat at computers for hours, sorting information into categories that trained machine learning models.
But some of the work was darker.
Workers were assigned to train AI to recognize pornography, hate speech, and extreme violence. This meant sifting through the worst content the internet could produce, hour after hour, day after day. As one worker described it: "People being slaughtered, people engaging in sexual activity with animals." The goal was to make AI systems safe, to ensure they would not reproduce the horrors they had been shown. The cost of that safety was borne by workers in Kenya earning less than two dollars per hour.
Documents reviewed by CBS's 60 Minutes revealed the economics. OpenAI agreed to pay Sama $12.50 per hour per worker. The workers themselves received $1.50 to $2.00 an hour. Sama's defense was that this wage was "fair for the region," a formulation that acknowledged the disparity while justifying it.
Scale AI, now valued at $14 billion, operated similarly through its subsidiary, Remotasks. A 2023 investigation by The Verge found that data labelers in Kenya completing tasks for Remotasks often had no idea which company they were actually working for. Workers in the Philippines earned $1.32 to $1.44 per hour after taxes; in Venezuela, rates ranged from ninety cents to two dollars per hour.
The jobs offered no stability. Some contracts lasted only days. Workers reported that their accounts would be closed just before payday, with the company claiming policy violations. As one worker put it: "When it gets to the day before payday, they close the account and say that you violated a policy." The work was precarious by design.
The psychological toll was documented in investigation after investigation.
TIME interviewed four Sama workers who described feeling "mentally scarred" by the work they had performed. The content they reviewed (the violence, the abuse, the material that cannot be described in detail) had left marks that would not fade. These were not hypothetical concerns. They were documented injuries, as real as any workplace hazard.
A group of approximately 200 workers, including Wambalo and Berhane Gebrekidan, filed suit against Sama and Meta over "unreasonable working conditions" that had caused them psychological problems. The workers had undergone psychiatric evaluation. "We have gone through a psychiatric evaluation just a few months ago," one testified, "and it was proven that we are all sick, thoroughly sick."
The irony was bitter. The work that made AI "safe," that filtered out the content users would never see, was making humans sick. The trauma was transferred, not eliminated. It moved from the dataset to the worker, lodging in their minds, surfacing in nightmares and flashbacks and the particular numbness that comes from sustained exposure to horror.
Nerima Wako-Ojiwa, a Kenyan civil rights activist, described the operations as "AI sweatshops with computers instead of sewing machines." She called it "modern-day slavery," a term that seemed hyperbolic until you examined the conditions: wages that barely sustained life, work that destroyed mental health, contracts that could be terminated without recourse, and a system designed to extract maximum value from workers with minimal obligations.
The workers were not passive victims. They organized.
In 2024, nearly 100 data labelers and AI workers from Kenya who had worked for companies including Facebook, Scale AI, and OpenAI published an open letter to President Biden. "Our working conditions," they wrote, "amount to modern day slavery."
The letter was an assertion of presence. These workers, invisible by design, demanded to be seen. They named themselves, described their conditions, and called for action from the country where their employers were headquartered. The AI industry had treated them as interchangeable inputs to a production process. They insisted they were people with claims.
A group of 184 Sama moderators filed a lawsuit alleging unfair termination and poor working conditions. The legal process was long, the outcome uncertain, but the suit itself was significant. It established that the exploitation could be contested, that courts might hold companies accountable, that the structures protecting workers in wealthy countries might eventually extend to those who labored in their shadow.
When workers attempted to unionize, they faced resistance. Scale AI reportedly engaged in union busting in Kenya in 2024. The companies that preached innovation and progress reverted to the oldest tactics of capital when confronted with organized labor. The message was clear: workers were valuable as individuals, as atomized inputs to the AI production process, but dangerous as collectives with shared interests and collective power.
The economic structure was extractive by design.
The entire LLM development pipeline required human labor at every stage. Data collection involved not just automated web scraping but manual curation: humans deciding what to include, what to exclude, how to organize the raw material. Data labeling meant classifying and tagging millions of examples, creating the ground truth that models would learn to approximate. RLHF (reinforcement learning from human feedback) required workers to rank model outputs, providing the positive and negative signals that shaped the AI's behavior. Content moderation meant filtering both training data and outputs, ensuring that the system would not reproduce its worst inputs. Quality assurance meant checking performance, catching errors, verifying that the models worked as intended.
At each stage, human judgment was essential. At each stage, that judgment was purchased cheaply.
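To make that purchased judgment concrete, here is a minimal sketch, in Python, of what one unit of RLHF ranking work might look like. The record schema, field names, and helper function are illustrative assumptions for this chapter, not the actual tooling of Sama, Scale AI, or any other vendor.

```python
from dataclasses import dataclass

# A hypothetical record of one unit of RLHF ranking work.
# The schema is illustrative only; it is not any vendor's actual format.
@dataclass
class PreferenceRecord:
    prompt: str       # the input shown to the model
    response_a: str   # first candidate output
    response_b: str   # second candidate output
    preferred: str    # "a" or "b": a human rater's judgment

def to_training_pair(record: PreferenceRecord) -> tuple[str, str]:
    """Turn a rater's choice into a (chosen, rejected) pair.

    A reward model is typically trained to score `chosen` above
    `rejected`; aggregated over millions of such pairs, these human
    judgments become the signal that shapes the model's behavior.
    """
    if record.preferred == "a":
        return record.response_a, record.response_b
    return record.response_b, record.response_a

# One task: a few seconds or minutes of paid human attention.
record = PreferenceRecord(
    prompt="Explain photosynthesis to a ten-year-old.",
    response_a="Plants use sunlight, water, and air to make their own food.",
    response_b="Photosynthesis is the biochemical reduction of CO2 via ATP.",
    preferred="a",
)
chosen, rejected = to_training_pair(record)
print("chosen:  ", chosen)
print("rejected:", rejected)
```

A record like this takes a rater seconds or minutes to produce. At the wages described above, a two-dollar hour buys dozens of these judgments, and the models were trained on millions of them.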
The wage differentials told the story. A worker in Kenya received two dollars for an hour's labor. The contractor that employed them billed $12.50 for that same hour, so the worker kept at most 16 percent of the contract rate. The company that contracted with the contractor was valued at tens of billions of dollars. The value flowed upward, extracted at each layer, until the workers who created it received almost nothing.
Low wages in the Global South effectively subsidized AI development for wealthy corporations. The $14 billion valuation of Scale AI rested partly on labor purchased for two dollars per hour. The breakthrough products (ChatGPT, Claude, the language models that captivated the world) depended on this hidden subsidy. Without cheap labor to label data, rank outputs, and filter content, the systems could not have been built.
The rhetoric of automation obscured this dependence. AI was marketed as replacing human labor, as achieving capabilities that transcended human limitations. The reality was different: AI required more human labor than ever, just relocated to places where it was cheap and invisible. The phrase "the rhetoric of automation hiding ongoing exploitation" captured the contradiction precisely.
The regulatory frameworks had not caught up.
Kenya had labor laws, but they were designed for traditional employment and did not address the specific conditions of digital labor. Workers who logged on to platforms, completed tasks, and were paid by the piece fell through the gaps. They were neither employees with protections nor independent contractors with negotiating power. They were something new: a global workforce, accessible through the internet, dispensable at will.
The same dynamics played out in the Philippines, in Venezuela, in India and Pakistan and Colombia. The platforms operated across borders; the regulations did not. Workers could be hired and fired without notice, paid or denied pay at the company's discretion, and exposed to traumatic content without adequate mental health support, while the legal systems that might have protected them had no clear jurisdiction.
What would accountability look like? Worker advocates suggested possibilities: supporting the formation of digital labor unions and cooperatives rather than busting them; establishing minimum standards for pay and working conditions; requiring transparency about the human labor that made AI systems possible; and extending workplace safety protections to cover psychological as well as physical harm.
None of these reforms had been implemented at scale. The AI industry continued to operate on cheap labor while celebrating its transcendence of human limitations.
The workers behind the curtain are not footnotes to the AI story. They are central to it.
The language models that generate text, the image generators that create pictures, the chatbots that answer questions—all of these depend on the labor of people who have been systematically undervalued and often harmed. The technology is not autonomous. It is produced by human effort, and the conditions of that effort matter.
Wambalo and Berhane Gebrekidan and the 184 workers who filed suit, the 100 who wrote to President Biden, the thousands who continue to labor in Nairobi and Manila and Lahore—they are not victims only. They are workers asserting their dignity, organizers building collective power, litigants holding corporations accountable. Their fight is not separate from the story of AI. It is essential to it.
The human infrastructure of artificial intelligence was built on exploitation. Whether it will remain so is not yet determined. The workers are organizing. The lawsuits are proceeding. The letters are written and sent. The question is whether the industry that talks so much about the future will reckon with the human cost of building it.