A Black Man in profile under a dim light, with colorful lines of coding behind him.
Image credit: Beth Tate and Markus Spiske on unsplash.com

Last May, more than 150 content moderators—whose work powers artificial intelligence (AI) systems at tech giants including Meta, OpenAI, and TikTok—sat eagerly awaiting the results of their union election. Light chatter and lingering frustration filled the conference room in Nairobi’s Mövenpick Hotel; everyone could feel the apprehension in the air. But when the announcement came, revealing the decision to register a union for the AI moderators, the crowd erupted in laughter and applause. Confetti floated in the air and fell to the stage. Kenyan music came on, and the cheers got even louder.

It’s not every day that African tech workers decide to protest tech titans through unions—in fact, it’s never been done before. But a committee of six former employees, agitated by their exploitation and tired of being ignored by their workplaces, organized hundreds of other affected workers to vote for a union. Instead of speaking out individually—which could cost them their jobs—they opted for a collective approach. This led to the formation of the African Content Moderators Union (ACMU).

It’s not every day that African tech workers decide to protest tech titans through unions—in fact, it’s never been done before.

Their concerns, as James Oyange, the committee’s secretary, told NPQ, were steeped in a critical need for change and the pursuit of a just and equitable working environment. After the vote, there was finally a glimpse into what normalcy could look like for workers who had, for years, been exposed to the undervaluation and ill-treatment that come with moderating AI on platforms like Facebook, TikTok, and ChatGPT. But the fight to change things is ongoing.

A Worker’s Nightmare 

When the content moderators decided to unionize, their efforts were widely reported—published by TIME and other Western outlets. The stories revealed what Oyange, the committee, and all the other workers were up against: OpenAI, Meta, and TikTok parent company ByteDance’s debasement of their workers.

The earlier version of OpenAI’s tool ChatGPT, GPT-3, was based on an algorithm replete with racism, sexism, and violence because it had been trained on billions of words from all parts of the internet. OpenAI knew it would take far too long to rid the technology of its bigotry through human effort alone, so to carry out the purge quickly and more effectively, the company decided to build another AI system. The idea was to feed the tool labeled examples of graphic violence, sexual abuse, sexist speech, and racism so that it would learn to flag those forms of toxicity when it detected them.

Meta had already successfully built a similar AI tool using workers it hired through a Kenyan outsourcing company, Sama. So OpenAI contacted Sama, too, sending thousands of texts pulled from the most toxic corners of the internet. “Some of it described situations in graphic detail like child sexual abuse, bestiality, murder, suicide, torture, self-harm, and incest,” TIME reported.

The public coverage was the first time that many people heard about Big Tech’s exploitation in Africa, and how AI depends on the lack of protective labor laws.

One of the workers who was responsible for reading and labeling texts for OpenAI told TIME he was traumatized by incessant visions after reading highly graphic descriptions. “That was torture. You will read a number of statements like that all through the week. By the time it gets to Friday, you are disturbed from thinking through that picture,” he said in the TIME report.

This was the kind of content the Kenyan workers were exposed to through outsourcing companies Sama and Majorel—while getting paid $1 to $2 per hour and battling debilitating mental health conditions.

Their meager salaries were hardly enough to live on and were peanuts compared to what their Western counterparts earned. The public coverage was the first time that many people heard about Big Tech’s exploitation in Africa, and how AI depends on the lack of protective labor laws.

What Has Changed?

It’s been nearly a year since the AI moderators’ decision to protest their ill-treatment by Big Tech broke into the online news cycle. Still, progress has been very limited, Oyange tells NPQ. Like the ongoing struggles faced by African drivers’ unions against Uber and Bolt, workers continue to face corporate intransigence. As Oyange said, “The heat our union announcement generated may have brought in a slight improvement, but that only happened to a specific market and for a while. Now, working conditions have generally worsened, especially after the merger between Majorel and Teleperformance, the BPOs [Business Process Outsourcing companies].” Teleperformance also faces lawsuits over the mistreatment of its workers in Barcelona.

Over the past year, Oyange and other union members reflected on how the loss of their privacy cost them far more than they bargained for. As a new generation of union reformers, they are not just dealing with multinational companies like OpenAI, Meta, and ByteDance, but also fighting local outsourcing partners. Oyange stresses that the process has been tiring, and that their union involvement has made finding jobs arduous. “Our names and faces are everywhere, and we’re automatically associated with rebellion by companies when all we want is better working conditions.”

Oyange recounts a job-hunting situation in 2023 that made him realize he had probably been blacklisted by recruiters. Getting the job seemed relatively easy at first, but then he suddenly stopped hearing back. He had checked the right boxes, reached the interview stage, and gotten greenlit by the recruiter, who was visibly impressed with his work—but he still didn’t get the job. Oyange soon learned from a few friends within the company’s leadership why his candidacy was rejected. “One of the people I reached out to told me that because my name was all over the internet as a key figure in advocating for workers’ rights, being part of their team would really complicate things for them, so they couldn’t move forward with me.”

These hostile responses from workplaces and constant scrutiny have been hard to face, Oyange says. Still, he is setting that aside to focus on fighting for the union’s registration process, and against the people trying to stop it.

“It’s like fighting the three-headed dragon in the Godzilla series.”

Fighting a Formidable Alliance

Oyange and the team had predicted pushback from the tech giants and outsourcing companies they were protesting. But they were sure they had mapped out strategies to counter the companies’ schemes. What they didn’t anticipate was going up against a full-blown coalition: a trio of the tech giants, the outsourcing companies, and the Kenyan government.

This unforeseen collaboration between the government and corporate interests has formed a united front against the union of content moderators. Once the union campaign started, the companies began their own vigorous whitewashing campaigns. “Once our stories started circulating online—within national and international media—the companies started to [release] statements that insinuated that we were deliberately trying to soil their image, and that we were enemies of development. It was a ploy to gain public sympathy,” Oyange said.

But when this didn’t work, the corporations turned to their friends in the government, proposing acts of goodwill that seemingly came out of nowhere after years of operations within the country. “It didn’t take [much] time before the managerial heads of the outsourcing companies started posting pictures with some of the governmental cabinet secretaries after proposing new developments we’ve never seen before. That changed things for us.”

Now, Oyange says, “It’s like fighting the three-headed dragon in the Godzilla series. Imagine fighting a three-headed being that can spit fire, fly, and wreak havoc while all you have is a 15-inch sword. It’s a difficult win.”

When the union announced its campaign in 2023, the widespread assumption was that it would soon be officially recognized and its demands addressed. But that is not how things went, and many hurdles emerged. Things can only begin to move once the union is registered with the Kenyan government—a process that has been repeatedly delayed. “The government thinks we’re fighting against good. They view these companies as investors who are creating jobs for youths and enriching the people,” Oyange said. “What we’re doing with the union looks too drastic and will send potential investors off.”

Even though the content moderator union is taking all of the right steps—from writing its constitution to solidifying its mission and processes—Oyange worries that government delays may make their efforts go to waste. “We have submitted our promoter certificate, drafted the constitutions and laws that’ll govern the union, and built the foundation of what we intend to achieve. Yet, we haven’t gotten a response from the ministry.”

It’s been a long back-and-forth struggle since last year—and the content moderator union has faced a lot of dead ends. But Oyange assures NPQ that they won’t back down now. “This is the thing I can do best. I’ve lived it and I’ve experienced it, so I’ll fight it.”