THE RISE OF INEQUALITIES IN THE AGE OF AI
Florencia Alvarado
This story explores how emerging technologies deepen existing inequalities, featuring an in-depth conversation with professor and researcher Milagros Miceli. The article is accompanied by evocative imagery from New York–based visual artist Florencia Alvarado, whose work adds a powerful visual dimension to the discussion.
1. Prof. Miceli, thank you for being with us. Could you introduce your research on the ethics of AI?
My name is Milagros Miceli. I am a computer scientist and sociologist. My research is about data work, the labor that goes into producing the data used to train AI systems and into maintaining them. I have been working with communities of data workers for almost a decade now, showing how the work they do is precarious and exploited, and how all of that affects not only the data they produce, but also the AI systems trained on it.
2. If you had to distill your mission into a single driving question, what would it be?
I would say, “How do we create AI systems that really belong to us?”—where “us” stands for the users whose data and whose bodies are being surveilled and used to produce data for these systems; the workers whose labor contributes to and fuels AI systems; and the people—the inhabitants of the places whose environments are being exploited to power these systems.
3. What drew you—intellectually, politically, personally—to work at the intersection of artificial intelligence, power, and justice?
Before becoming a computer scientist, I was a sociologist. In fact, when I first started working in this field many years ago, I was a labor sociologist with no prior background in technology. My research then focused on workplace injustice and structural inequities.
When I was hired at the Weizenbaum Institute in Berlin to join a newly founded group working on AI ethics, I was the only social scientist among computer scientists. I knew nothing about AI, so I began learning from my colleagues by observing their work. What struck me immediately was that they were doing excellent research—but all centered on technical aspects: bias mitigation, algorithm design, explainability techniques, fairness metrics. They were deeply focused on tools, methods, and artifacts. As a sociologist, however, I couldn’t help asking: Who is looking at the people behind this? Who is talking to the workers involved in these processes? Because clearly, somewhere in this supply chain, there were people whose labor made AI possible.
That’s what actually led me to start looking into the work that goes into AI. I soon learned about data workers, visited them, listened to their stories, and discovered their situations and lived experiences. I felt the intellectual and political urge to study this. Very few people were looking into it; it was almost entirely absent from mainstream narratives. Someone needed to do that.
4. You address invisible infrastructures. How do you navigate the difficulty of making these issues visible?
If you had asked me this a few years ago, I would have said that visibility was one of my biggest concerns. That’s why I started the ongoing Data Workers Inquiry project, which grew out of a need to reach a broader audience and make my research findings legible to people other than researchers. My team and I have been very successful in making some of these issues visible—maybe not to everyone, but the fact that I’m here speaking with you shows that the core message—that data workers exist and play a crucial role—has arrived and gone places.
At this stage, I think the real challenge is no longer visibility itself, but what follows—once we acknowledge that data workers are underpaid, exploited, and exposed to mental health issues because of their tasks. Do we simply express pity, saying “poor people,” and then carry on with business as usual, as we often do with many injustices? Or can research like mine prompt deeper conversations and, hopefully, action? For me, that’s the most important part: the discussions and responses that visibility can generate.
5. What role do race, gender, and geography play in how AI is built—and whom it serves?
It’s important to acknowledge that AI is deeply racist, sexist, and discriminatory. It is a technology created and designed by those who have historically held power, often drawing on pseudoscientific ideas that reinforce white supremacy, maintain gender inequities, and uphold regressive politics. Because it is designed by and for those in power, its purpose is to serve and perpetuate the same concentration of influence—primarily wealthy, white men in the Global North. It is not built to serve us. The real challenge, then, is to create technology that does serve us. That’s exactly what we are trying to do here.
6. You critique how tech companies use terms like “fairness” or “ethics.” What does true ethical AI practice look like?
I critique the way companies use terms like “fairness” and “ethics” because, in many cases, these words have been co-opted. Through so-called “ethics-washing” initiatives—tiny technical band-aids for huge problems—companies have been able to claim: “Our AI is ethical.” But if we’re really talking about ethics, I want to know about their business model and labour practices.
When we focus only on technical topics—bias mitigation, fairness metrics, explainability—we risk being distracted from the real issues: who is actually behind these systems, who is being exploited, and who is being harmed. In the same way, I see existential risks—the fear that AI might one day dominate or destroy us—as another distraction from really looking at those who suffer right now. I’m talking about data workers, about those who are affected by surveillance drones and border biometrics, about the people who are being bombed right now in Gaza with the help of AI systems. Those are the urgent realities that we need to confront.
This is why I’m not a big fan of the labels “ethical AI” or “AI ethics” at all. I prefer to talk about justice. An example can be found in JustAI, which was created to serve the needs of a specific community. Its technologies belong to that community—not to some rich person in the Global North. They are definitely not technologies based on the exploitation of people, workers, or natural resources. Nor are they designed to reinforce concentrated power.
7. You challenge the idea that AI systems are “neutral” or “objective.” Why is it dangerous to keep repeating this myth?
It is very dangerous because AI systems are world-creating systems. They are trained on specific worldviews and values, as Abeba has also explained in her work. And those values usually belong to privileged groups in the Global North, often rooted in beliefs tied to white, male supremacy. When we ask ChatGPT a question, the answer it generates sounds plausible. Many people take that as truth. The same happens with classification systems: take facial recognition, where the system might output a result about a person, like “this person matches this identity.” In most cases, that result is accepted as fact. Or think about credit-scoring systems: when an algorithm decides whether I qualify for a loan, it directly creates a reality in which I either get to pay my debts—or I don’t.
This is why it’s crucial to recognize that these realities are not neutral. They are built on specific ideologies, intentionally embedded in order to preserve the status quo and maintain elite power. Believing that these systems are unbiased or objective is extremely dangerous, because it makes their decisions appear unquestionable—when in fact they should be constantly questioned.
8. What do you think most people misunderstand about datasets—where they come from, who curates them, and what values are embedded within them?
Most people misunderstand datasets because the data and the labor behind them are intentionally hidden. The engineers who work on a project are celebrated when big tech companies like Google or OpenAI present a new product, but nobody talks about the data work, which is equal to—or greater than—the work that goes into modeling these systems.
This is not accidental. Datasets give us a unique window into how these systems are created—what values, preconceptions, and worldviews are embedded within them. Researchers like Alex Hanna and Emily Denton, for example, have shown how analysing a dataset can reveal a great deal about the system trained on that data. That is precisely why companies prefer to keep datasets hidden. They don’t want you to know how the datasets work or how they have been tweaked, because that would expose the underlying assumptions and choices shaping the system.
9. Are there projects, collectives, or initiatives you admire that are doing the slow, necessary work of changing how AI is made?
I deeply admire worker-led collectives, and I’m in awe of how data workers have taken the initiative to organize and collectivize. We see it in many forms: the content moderators’ union in Africa, workers’ and company councils in Europe, smaller unions like FIST in Spain, or the Data Labelers Association in Kenya. Many of them are our partners, and we collaborate with others to support their initiatives.
On the research side, still worker-led, I want to mention the Workers Observatory in Scotland and the work of Rafael Grohmann in Brazil. These inquiries may be slow, but they are crucial, because they reveal what workers feel and need—which is indispensable for transforming the AI industry.
If I come back to organising: I believe these initiatives—workers striking, organising, collectivising, as we see right now in Berlin—are the ones most likely to bring real change in the AI industry. Much more than any of us could do alone. Because, let’s face the truth, without these workers, AI doesn’t work. They are irreplaceable and crucial for the AI industry. And recognising this breaks the myth that AI is autonomous or even sentient.
10. Do you think meaningful change in tech can come from within companies—or does it need to be demanded from the outside?
Absolutely not. Meaningful change will never come from within companies. Whatever comes from within is just tiny band-aids, as I said before—ethics-washing. Real demands and real change have to come from the outside. And that means from many places, but especially from workers and consumers. We can all make change—by choosing not to use these systems, by boycotting many of them. You don’t need ChatGPT to do everything for you. So yes, change comes from the outside, where the outside is us—the workers, the people, the communities that are organising around this.
11. Has there been a moment in the last few years—a conversation, a turning point, a small victory—that reminded you why you do what you do?
Absolutely—there have been many moments, especially with my current project, the Data Workers Inquiry Project. Even though the impact is small in the grand scheme—data work involves hundreds of millions of workers worldwide—there have been moments where we’ve been able to change the realities of some of them.
One of these moments was taking a panel of five data workers to testify before the European Parliament. Seeing their views incorporated into the Platform Workers Directive was a moment we celebrated. On a more practical level, a psychologist and data worker we collaborated with developed a mental health intervention plan for data workers. When we secured funding and could tell her that her plan would actually be implemented, that was another of those moments. So was the time when migrant women data workers, stranded in Nairobi after being dismissed by a data-work company, were able to return home thanks to funding we secured for their departure.
Moments like the formation of the Data Labelers Association or the African Content Moderators Union, when workers voted to unionize, are equally powerful. Those are the things that, despite the psychological distress and the mental health toll of this line of work, make all of it worth it.
12. How do you imagine AI five or ten years from now?
I wish I could be more optimistic, but if AI continues to evolve the way it is now, the future will look deeply dystopian—something we’re already seeing. There will be more wars fueled by technology, more tools designed to destroy and massacre, to commit genocide. More surveillance. More control over our thoughts, our opinions, our movements. More restrictions on how we live and express ourselves.
All of this driven by a small technocratic elite—exactly what we’re seeing unfold right now in places like the US. Maybe I sound too pessimistic, but I see it as both a warning and a call to action. That future is possible, but it doesn’t have to be inevitable. We need to be ready to resist, to organize, and to fight for a different path.
13. Do you feel academia is keeping up with the urgency of these issues?
No, academia is not keeping up with the urgency of these issues. In fact, it’s doing a lousy job—probably because academia was never designed to fight injustice. Quite the opposite: it can be a deeply exploitative space. It ends up mirroring the very negative patterns of the AI industry. Research that aims to expose exploitation frequently repeats those same patterns in how it deals with workers, in how it extracts people’s knowledge, and in its obsession with the publication cycle rather than with revealing and fighting injustice.
That’s why we started the Data Workers Inquiry. It’s not just about documenting the realities of data workers, but also about proving that research can be done differently. And I think, in many ways, we’ve shown that another path is possible.
14. If you could redesign the tech industry from the ground up, what would be the first thing you’d change?
Well, I would do away with private property. These technologies are built on all of our data and all of our work. The other day someone, I don’t know who, I think it was some executive from OpenAI, said that it is impossible to build AI systems, especially so-called generative AI systems, without using copyrighted material. And I’m like, well, if you’re going to use my work to train your AI, then your AI belongs to me too. So that’s what I would eliminate from the very beginning: private property and private ownership over these technologies.
15. What does a just technological future look like to you?
I think I answered that question throughout, but I will try to give it another go. A just technological future is one in which technologies are created with the needs of specific communities in mind and are created for those communities specifically. We are talking about smaller, way smaller AI: not systems created to serve everyone and to sell as much as possible, but systems designed from and within communities that have specific needs. These are also technologies that address needs, not ones that create needs we didn’t have before.
As I said about ChatGPT before: we didn’t need ChatGPT to answer all of our questions. We have each other. We can ask a friend or an elder or a parent. We can ask our teachers and our mentors.
And now it feels inevitable that we need these technologies for everything. Before, we had artists and we could have art; now it’s as if we need image generators, and without image generators we are nothing. We don’t need all that. And we also need to unlearn that need, which has been created and imposed on us.
16. What should younger researchers, designers, or activists know before stepping into this space?
I think the first thing they should know is that there’s probably already someone doing that work. Especially for younger researchers, it’s important to understand that communities don’t need a researcher to teach them what they need. We’re not needed everywhere, and parachuting ourselves into communities often just repeats harmful patterns.
What we can do is support what’s already happening on the ground. Instead of a designer assuming what Community X needs and building something for them, they could start by asking: do you even need me here? If yes, what kind of support or design do you actually need? And then collaborate to make that happen.
The same goes for researchers and activists. In most cases, there are activists already working on specific issues within specific communities, often underfunded and almost invisible. Our role could be to give them a platform and amplify their voices, rather than trying to put ourselves in the spotlight all the time.
Florencia Alvarado
Florencia Alvarado (Maracaibo, Venezuela) is a visual artist, photographer, and designer based in New York. She is a lens-based artist working with photography, digital art, collage, scanners, and still life photography. Alvarado is one of the co-founders of WMN Zine, a Lesbian publication that has twice been awarded the Queens Funding for the Arts Grant.