Disruption Network Lab: How AI promotes discrimination against migrants

"Technologies are never neutral – they replicate existing biases and power differentials in our society and create new risks." Petra Molnar, speaker at the Disruption Network Lab conference "Smart Prisons", has investigated how AI is used in the surveillance of migrants.

Smart Prisons logo © Disruption Network Lab

Project partner

Disruption Network Lab

The Disruption Network Lab is a Berlin-based non-profit organization founded in 2014 to examine the intersection of politics, technology, and society, investigating projects in the areas of digital culture, information technology and political activism. The strength of the Lab is its transversality: it brings together experts ranging from investigative journalists, political activists and lawyers to human rights advocates, artists and whistleblowers to share and discuss their work.

The overarching aim of the Disruption Network Lab is to discover and provide new paths of social and political action within the framework of digital culture, activism and social justice. The program takes shape through a series of international conferences, panels and meetups in Berlin that expose systems of power and injustice. Allianz Foundation supports the 2023 SMART PRISONS conference, which investigates systems of surveillance around prisons, detention centers and borders.

"Technologies are never neutral"

An interview with researcher Petra Molnar, speaker at the Smart Prisons Conference of the Disruption Network Lab in March 2023. An internationally recognized expert on migration and technology, she talks about how AI is used as a tool to surveil migrants and what needs to be done to stop human rights violations.

What are migration technologies, and what role do they play in European border control? You mentioned in this context the ecosystem of migration management technologies – can you explain that to us?

Border and migration technologies can impact a person at every point in their journey. Before you even cross a border, you may be subject to predictive analytics used in humanitarian settings or to biometric data collection. At the border, you can see drone surveillance, sound cannons, and thermal cameras. If you are in a European refugee camp, you will interact with algorithmic motion detection software, various surveillance tools, and biometrics, and even if you have the chance to claim asylum, you may be subject to projects like voice printing technologies and the scraping of your social media records.

Borders themselves are also shifting and changing, as surveillance and new technologies expand our understanding of the European border beyond its physical limits, creating a surveillance dragnet that reaches as far as North and Sub-Saharan Africa and the Middle East. These experimental and high-risk technologies are deployed in an environment where technology is presented as a viable solution to complex social issues, creating the perfect conditions for a lucrative, multi-billion-euro border industrial complex.

What is the human rights impact of the use of artificial intelligence and automated technologies in migration control?

Technologies are never neutral – they replicate existing biases and power differentials in our society and create new risks. When they are used in spaces like borders and migration applications, which are simultaneously high-risk and very opaque and discretionary, a vast array of human rights violations can occur. We know that facial recognition and algorithmic decision-making can discriminate against people of color, women, and people who are differently abled. Indiscriminate sharing of people's sensitive personal information with law enforcement, or even with the repressive governments they are trying to flee, is not only dangerous but also infringes on people's right to privacy. Using surveillance technologies at land and sea borders and preventing people from reaching European territory not only contravenes international refugee law but can also impact people's right to life, liberty, and security of the person. These are just some of the many human rights risks of migration control technologies, yet these projects continue to be largely unregulated and non-transparent.

“Technologies are never neutral – they replicate existing biases and power differentials in our society and create new risks.”

What are your experiences with these technologies in the field? You've talked to many people who are affected. What are they saying?

Since 2018, as a lawyer and anthropologist, I have been spending time with people who are at the sharpest edges of technological innovation. From the Arizona desert at the US/Mexico border to the Kenya-Somalia border to various refugee camps in the EU, I have time and again seen first-hand the impacts of border surveillance and automation on people's lives. While it is never my job to generalize or to speak for people, some of the themes I notice again and again are feelings of being constantly surveilled, of being reduced to data points and fingerprints.

Many also point out how strange it seems that vast amounts of money are being poured into high-risk technologies in places like refugee camps while they cannot get access to a lawyer or psychosocial support, or in some cases even adequate food and water. There is also a misapprehension at the centre of many border tech projects – that somehow more technology will stop people from coming. But that is not the case, as I have seen first-hand. Instead, people will be forced to take more dangerous routes, leading to even more loss of life at Europe's borders.

What are your key recommendations to stop these human rights violations?

Little regulation exists to govern the development and deployment of high-risk border tech. The EU's proposed AI Act is a promising step, as it will be the first regional attempt in the world to regulate AI. However, the act currently does not go far enough to adequately protect people on the move. A moratorium or a ban on high-risk border technologies – like robo-dogs, AI lie detectors, and predictive analytics used for border interdictions – is a necessary step in the global conversation.

We also need more transparency and accountability around border tech experiments, and people with lived experiences of migration must be foregrounded in any discussions. Because in the end, it is not really about technology. What we are talking about is power – and the power differentials between actors like states and the private sector who decide on experimental projects, and the communities who become the testing grounds in this high-risk laboratory. Why are we developing AI lie detectors to test out on refugees when we could be using AI to root out racism at the border? These are all deliberate choices. In these uncertain times, what can we collectively do to shift these narratives?

About

Petra Molnar focuses on carceral technologies of immigration detention. Practices of border violence increasingly rely on high-risk technological experiments. Voice printing, phone surveillance, and electronic ankle shackles are just some of the more recent tools that states use to detain people on the move. Certain places, like refugee camps and detention centres, serve as testing grounds for new technologies because regulation and oversight are limited and an 'anything goes' frontier attitude supports a growing and lucrative multi-billion-euro border industrial complex.

Petra Molnar © Kenya-Jade Pinto