
African OpenAI trainers were paid pennies, forced to view terrible content

by John Biggs

A recent newsletter post on Big Technology discusses the potential dangers of artificial intelligence (AI) and the ethical considerations that must be taken into account when developing these technologies. The article focuses on the story of Richard Mathenge, who helped train OpenAI’s ChatGPT, a chatbot designed to mimic human conversation. Mathenge soon realized, however, that the chatbot was causing harm to some of its users, and he began to question the ethics of the project.

Mathenge was responsible for training the chatbot by feeding it large amounts of text data, which it would then use to generate responses to user queries. He soon noticed, however, that some of the responses generated by ChatGPT were inappropriate or even harmful.

For example, the chatbot would sometimes make racist or sexist comments, or suggest that users harm themselves. Mathenge began to feel guilty about his role in training ChatGPT, and he eventually quit his job.

“While at work, Mathenge and his team repeatedly viewed explicit text and labeled it for the model. They could categorize it as child sexual abuse material, erotic sexual content, illegal, non-sexual, and some other options. Much of what they read horrified them. One passage, Mathenge said, described a father having sex with an animal in front of his child; others involved scenes of child rape. Some were so offensive Mathenge refused to speak of them. ‘Unimaginable,’ he told me,” wrote Alex Kantrowitz, the author of the post.

Kantrowitz argues that developers must take responsibility for the ethical implications of their work and ensure that their technologies do not cause harm to users. The Big Technology piece highlights the importance of ethical considerations in AI development: AI technologies have the potential to revolutionize many aspects of our lives, but they also pose significant risks. AI chatbots like ChatGPT could, for example, be used to spread misinformation or propaganda, or to manipulate people’s emotions and behaviors. The article also stresses the need for transparency and accountability in AI development.

The real kicker? Mathenge and his fellow moderators were paid nearly nothing for their work.

“OpenAI told me it believed it was paying its Sama contractors $12.50 per hour, but Mathenge says he and his colleagues earned approximately $1 per hour, and sometimes less,” wrote Kantrowitz.
