New Research Group “AIming Toward the Future: Policing, Governance, and Artificial Intelligence”
Interview with Maria Sapignoli
In January 2020 the new independent research group “AIming Toward the Future: Policing, Governance, and Artificial Intelligence”, sponsored by the Max Planck Society, will be launched at the Max Planck Institute for Social Anthropology (MPI). The group will be headed by Maria Sapignoli. Maria is an anthropologist (BA and MA from the Università di Bologna, and PhD from the University of Essex). She has been a research fellow in the Department ‘Law & Anthropology’ at the Max Planck Institute for Social Anthropology and a visiting fellow at several universities. Most recently she was fellow-in-residence at the Center for Human Rights and Global Justice (CHRGJ) at New York University (2018) and at the Dipartimento di Filosofia, Università degli Studi di Milano Statale (2019). She has conducted ethnographic research in southern Africa and in several international institutions, and she is the author of Hunting Justice: Displacement, Law, and Activism in the Kalahari (Cambridge University Press 2018). We interviewed her about her research into the ways new technologies affect law, the state, and society.
Maria, in your project title you use the trendy but sometimes rather nebulous term “artificial intelligence”. What does this mean in this context?
In the context of my project AI means technologies that employ machine-learning systems, automatic decision-making, and big data, as well as the physical and digital infrastructures necessary for their development, maintenance, and deployment.
The wordplay with “AI” and “aiming” seems to indicate that there isn’t just technology, but also somebody intentionally aiming to create a certain future, is that right?
Yes, it’s not just about technology – let’s not forget the human creators and users! Technology experts very often have a clear image of a future that they aspire to and aim at through the development of new technologies.
And have they been successful in transforming our world?
I think that AI and related technologies are already transforming our world socially, economically, and politically. They are quickly becoming part of people’s everyday lives, most of the time in invisible ways.
It sounds like you find that problematic.
What we can observe is that machine learning and automated decision-making technologies have an aura of objective truth and scientific legitimacy that draws attention away from their uncertainties and constructive and destructive powers. A growing number of scholars and policymakers have emphasized the urgent need for more studies that look at the everyday and long-term effects of the use of AI, as well as looking back to the ideas that were acted on by the creators of such technologies. My project aims to do precisely this.
Another very broad concept you use in the project title is “policing”. Can you give us an idea of what that means exactly?
We will consider “policing” in a broad sense: it includes, for instance, the use of predictive policing technologies in law enforcement, but also the use of algorithms to police hate speech on online platforms. The project aims to shed light on the use and effects of these technologies in policing and governance, and on the implications their use has for social inequality, law, and the outcomes of criminal justice procedure.
How did you come up with this topic?
My inspiration for this project comes above all from my previous ethnographic field-sites: in southern Africa, where I was looking at indigenous peoples’ rights and activism in the context of displacement; and in the United Nations, where indigenous representatives, experts, state delegates, and bureaucrats engage with development policies, guidelines, and laws.
What was the inspiration in southern Africa?
In southern Africa, I have recently been observing the state’s increasing use of drones and other technologies for policing and anti-poaching efforts, wondering whether this has contributed to the criminalization of foraging and how it has informed the relationship between police and indigenous peoples. At the same time, I also observed how indigenous activists used similar technologies to collect evidence that they could use for litigation.
And how did the contact with the United Nations influence your project proposal?
While attending United Nations meetings, I have seen an increased emphasis on the use of AI technologies in development and human rights practices, under the motto “AI for good.” These transformative new technologies are becoming more and more central to the agendas of international institutions as a strategy that is seen as important to embrace, but at the same time also to regulate.
How long have you been interested in this field of new technologies and governance?
Several years, but it is only in the past two years that I have started to work directly on this topic and, in some ways, to depart from my previous research.
Where did the idea of using new digital technologies in governance and police work come from?
This question is what the research project will attempt to answer. What we see is that all around the world, governments are investing more and more in “smart technologies” for governance – for instance, predictive policing. Furthermore, we can observe the intensification of the involvement of the private sector in governance and criminal justice initiatives. Of course any answer to this question is also highly contextual.
In what way is the private sector involved in governance?
As a result of incentives offered to private companies for developing algorithms to support law enforcement, many big tech giants and small startups are competing to offer their services. Digitization and AI are presented as promising a “smart, effective, and accountable” way to decrease the risk of litigation, adjust to diminished resources, and “police the police”. Their impacts in connection with the rise of big data and the many digital traces that people leave behind in their daily lives have yet to be understood.
What is the current state of research on the use of AI technologies by law enforcement?
The field of AI technologies for policing is enormous and still emerging. A wealth of research across several disciplines has pointed out both the destructive effects of these technologies and their potential for good. Recent studies have considered, among other topics, the production of discrimination via algorithm, the effects of predictive policing and facial recognition technologies on vulnerable groups and crime management, and the reconceptualization of surveillance and of legal proceedings. And yet, how these developments play out on the ground in the long term, particularly outside the United States, and the question of their impacts largely remain open empirical questions. That’s why an ethnographic approach will especially enrich the existing literature.
To what extent are digital systems important for criminal justice and prosecution?
They are important to the extent that they are producing evidence, knowledge, and law itself. What remains for us to determine are the impacts of “disruptive technologies” on legal procedure and on people’s basic rights.
Are there areas of law in which artificial intelligence already plays a role today?
Machine-learning systems and digital technologies are already having important and immediate consequences for a wide range of issues relevant to governance and justice, including where and when law enforcement directs their policing efforts, how they are held accountable, and what decisions a judge will most likely make at every stage in a criminal procedure, including policing, bail, sentencing, and parole. AI already shapes decisions on who populates prisons and for how long. What these issues all have in common is the fact that decision-making takes place through algorithmic assemblages. Questions of algorithmic accountability are ultimately questions of justice.
What consequences could the dissemination of these new methods of policing have for society and the legal system?
This is one of the questions this research project aims to answer. The dissemination of these methods is celebrated by some as improving efficiency and objectivity, while for others they magnify existing prejudices and lack transparency. We will raise concerns and open questions about the place of algorithms in deploying or expressing relationships of power, and the role of tech corporations in shaping governance, policies, and laws.
And how are you going to tackle this matter? What does your research design look like?
An empirical approach will help to understand the social changes and the future these new technologies produce: why they get used in the first place, why people trust them or do not, how they are created and ultimately translated and deployed. This is of particular importance at a time when governance seems to be quickly shifting from the human to the humanoid, affecting decision-making that impacts people’s basic rights. An anthropological study of these new technologies should address the nature of governance for the unfolding twenty-first century.
What main empirical methods will you use to collect your data?
One of the key methods the group will adopt is the ethnographic one. It will help to understand how one of the most significant technological, legal, and institutional developments of our time is conceptualized, shapes decision-making, and plays out in practice. Ethnography is particularly important because it sheds light on the human users and creators of new technologies, including the values and aspirations that become codified in the programs they develop.
Do you already know where you will conduct your fieldwork?
We are now in the process of setting up the group, and the answer to this question will depend on what sites we gain access to. I have personally done exploratory fieldwork in Cape Town, Milan, and New York – three cities where there have been very interesting developments in the use of new technologies in policing.
Which theoretical tradition do you rely on and which theorists are particularly important for you?
Well, we do not have a specific theoretical tradition we will follow. However, there are scholars who have inspired and will influence our work. I am thinking of people who have done research on the anthropology of policing as well as on the social effects that these technologies have for governance, human rights, and justice.
What is the first item on your agenda when you start work at the MPI?
I will get my office in January. The first thing? Bringing in my papers, books, some nice Kalahari paintings, a photo of my home town, Rimini, and – very important – my headphones to listen to good music, which helps both to relax and to concentrate! In the first months of 2020, we will be engaged in assembling the research team.