Anthropology of AI in Policing and Justice


Applications of machine learning and artificial intelligence are transforming policing and justice. From the analysis of large datasets by intelligence agencies to the growing use of facial recognition by public and private actors and the adoption of predictive policing and risk scores, algorithmically mediated socio-technical assemblages are changing governance and decision-making. Building on the research trajectory set out by "AIming Toward the Future: Policing, Governance, and Artificial Intelligence" (a former Max Planck Independent Research Group headed by Maria Sapignoli), the group considers the development and application of new technologies being tested and put to work in policing and justice.

These technologies, promoted as improving security, efficiency, and impartiality, and their logics of prediction and anticipation often purport to structure the actions of their users. Of particular interest is the contrast between how experts imagine these technologies (in companies, universities, and special units within state agencies) and how they work, or fail to work, in practice.

The research ethnographically considers AI and related technologies at various stages from development through to use, situating them in their local contexts, whilst also analysing their global circulation. The process of technology production provides a unique opportunity for critical inquiry into developers’ underlying assumptions before they become widely accepted as normative values.

The project pays particular attention to how new technologies interact with legal frameworks and with civil society push-back and support. It is guided by questions about how these technologies transform security practice, produce unexpected consequences, and interact with existing inequalities and injustices.
