Women in rural communities describe the trauma of moderating violent and pornographic content for global tech companies. "Content moderation belongs in the category of dangerous work, comparable to any lethal industry," says sociologist Milagros Miceli.
Anuj Behal
On the veranda of her family’s home, with her laptop balanced on a mud slab built into the wall, Monsumi Murmu works from one of the few places where the mobile signal holds. The familiar sounds of domestic life come from inside the house: clinking utensils, footsteps, voices. On her screen, a very different scene plays: a woman is pinned down by a group of men, the camera shakes, there is shouting and the sound of breathing. The video is so disturbing that Murmu speeds it up, but her job requires her to watch to the end.
Murmu, 26, is a content moderator for a global technology company, logging on from her village in India’s Jharkhand state. Her job is to classify images, videos and text that have been flagged by automated systems as possible violations of the platform’s rules.
On an average day, she views up to 800 videos and images, making judgments that train algorithms to recognise violence, abuse and harm. This work sits at the core of machine learning’s recent breakthroughs, which rest on the fact that AI is only as good as the data it is trained on. In India, this labour is increasingly performed by women, who are part of a workforce often described as “ghost workers”.
“The first few months, I couldn’t sleep,” she says. “I would close my eyes and still see the screen loading.” Images followed her into her dreams: of fatal accidents, of losing family members, of sexual violence she could not stop or escape. On those nights, she says, her mother would wake and sit with her. Now, she says, the images no longer shock her the way they once did. “In the end, you don’t feel disturbed – you feel blank.” There are still some nights, she says, when the dreams return. “That’s when you know the job has done something to you.”
Researchers say this emotional numbing – followed by delayed psychological fallout – is a defining feature of content moderation work. “There may be moderators who escape psychological harm, but I’ve yet to see evidence of that,” says Milagros Miceli, a sociologist leading the Data Workers’ Inquiry, a project investigating the roles of workers in AI.
“In terms of risk,” she says, “content moderation belongs in the category of dangerous work, comparable to any lethal industry.” Studies indicate content moderation triggers lasting cognitive and emotional strain, often resulting in behavioural changes such as heightened vigilance. Workers report intrusive thoughts, anxiety and sleep disturbances.
A study of content moderators published last December, which included workers in India, identified traumatic stress as the most pronounced psychological risk. The study found that even where workplace interventions and support mechanisms existed, significant levels of secondary trauma persisted…
