The rapid rise of artificial intelligence has promised a future of innovation and efficiency, yet much of its foundation rests on the hidden, grueling labor of workers in places like Kenya.
These individuals, often earning just a few dollars a day, perform the essential but invisible tasks of data labeling, content moderation, and even simulating intimate conversations for AI systems. They annotate images, refine outputs, and sift through disturbing material (pornography, violence, and graphic content) for hours on end to train models used by some of the world's most valuable companies.
In Nairobi, workers like Michael Geoffrey Asia have shared harrowing accounts that highlight the human cost. For months, Asia spent eight-hour shifts annotating explicit videos frame by frame, then took on additional work creating personas for AI-driven sex chatbots.
The role demanded constant creativity and quick adaptation, which eventually took a severe toll on his personal life and relationships.

After months of this work, Asia, like other data labelers, developed insomnia, PTSD, and sexual dysfunction.
"It fractured a lot of things for me. My body is like, not functioning at all," he said.
The task was crucial for companies like OpenAI.
Its earlier models, like GPT-3, could already generate remarkably fluent text. However, they weren't ready for widespread use because they sometimes produced harmful outputs like violent, sexist, or racist language. This stemmed from how they were trained: on enormous volumes of text gathered from across the internet.
While that data enabled its impressive language skills, it also carried the biases and toxicity present online.
Filtering such a massive dataset manually would have been nearly impossible, even with a large human team working for years. The solution was to develop an additional AI-driven safety layer to reduce these risks and make the system suitable for everyday users.
To create that safeguard, OpenAI followed an approach similar to what social media platforms like Facebook had done: training systems to recognize and filter harmful content.
The idea was straightforward: by exposing an AI to labeled examples of hate speech, violence, and sexual content, it could learn to identify and flag similar material in real-world use.
This detection system would then be integrated into ChatGPT, helping prevent it from reproducing harmful patterns from its training data and improving the quality of future datasets.
Generating those labeled examples required human beings.
So, starting in late 2021, tech companies outsourced the work to firms employing workers in countries like Kenya.
In a separate interview, Ephantus Kanyugi, another data labeler from Kenya, described the work:
"[Labelers] need to spend time with these images, zoom into the wounds of dead people."
Kanyugi likens this kind of work to "modern slavery."
"Then despite working so many hours, you only get poor pay, and you might also not get paid."
Sama, once known as Samasource and prominently based in Nairobi's Sameer Business Park, markets itself as an "ethical AI" company and claims to have helped lift more than 50,000 people out of poverty.
In reality, it has faced repeated criticism and lawsuits over the harsh conditions its workers endure.
Workers were asked to review and categorize tens of thousands of text excerpts, many of which came from the most disturbing corners of the internet. These included explicit descriptions of child abuse, bestiality, murder, suicide, torture, self-harm, and incest.
Workers also report low wages (data labelers employed by Sama on behalf of OpenAI, for example, took home between roughly $1.32 and $2 per hour depending on seniority and performance), inadequate mental health support, opaque algorithmic management that acts as an unfeeling boss, and strict non-disclosure agreements that silence complaints.
OpenAI is not alone: other tech giants, including Meta, Google, and Apple, benefit indirectly from this labor, which powers their AI tools and contributes to valuations soaring into the trillions, while the workers see none of that wealth.
This dynamic echoes historical patterns of exploitation, with speakers at recent gatherings drawing direct parallels to colonialism.
One worker described the modern multinationals as the "British imperialist companies of today," dominating supply chains and treating local labor as disposable products at the bottom of the hierarchy.
Kenya has become a key hub for this work due to its English proficiency, tech ecosystem, and high unemployment, making it an attractive spot for outsourcing. Yet the promise of tech jobs as a path to opportunity often turns into a trap of precarious contracts and psychological strain.

In response, workers are beginning to push back through collective action.
The Data Labelers Association (DLA), a growing organization in Kenya, has emerged as a voice for change. With Asia now serving as its secretary general and authoring a testimony titled "The Emotional Labor Behind AI Intimacy," the group organizes events like a recent gathering at the Nairobi Arboretum to recruit members, share stories, and demand improvements.
"Africa is at the bottom of the supply chain of AI. But right now, the fact that we are all here and most of you are data labelers—you are the people who supply the labor. When we think of the whole AI ecosystem, [we picture] an engineer, and maybe that's the image of AI that the majority of the world has," said Angela, a speaker at the event.
"And that’s actually very intentional. To make [your labor] invisible, to make AI look like this shiny object that no one understands, it’s very automatic and beautiful and tech. That’s the intentionality of hiding the labor and the behind the scenes of AI."

Their calls include fairer pay, access to mental health resources, the elimination of restrictive NDAs, and better overall benefits for a workforce that forms a substantial part of the country's tech scene.
These efforts extend beyond immediate workplace demands, fostering broader solidarity against corporate dominance and linking labor rights to environmental and anti-imperialist concerns.
As AI continues to reshape economies and societies, the organizing in Kenya underscores a critical truth: the intelligence hailed as revolutionary often depends on African intelligence and African labor that have long been undervalued and overlooked.
The fight for recognition and fairness signals a potential shift, where the people powering the technology refuse to remain in the shadows.