
Image by Jamillah Knowles & Reset.Tech Australia / © https://au.reset.tech/ / Better Images of AI / Detail from Connected People / CC-BY 4.0

  • Led by Professor Marion Oswald MBE, Northumbria University

This project focuses on how ethical scrutiny can improve the responsibility and legitimacy of AI deployed by the police. It involves working with the West Midlands Police and Crime Commissioner and West Midlands Police data ethics committee.

The deployment of AI and emerging technologies by the police, while promising more effective use of data for the prevention and detection of crime, brings with it considerable threats of disproportionality and interference with fundamental rights. The West Midlands Police and Crime Commissioner and West Midlands Police data ethics committee aims to bridge the gap between ethical reflection, scientific rigour and a focus on human rights, thereby contributing to responsible AI in policing. Democratic legitimacy and public trust in West Midlands Police’s use of AI depend in part on the ethics governance in place and the public assurances that are made. To avoid undermining that legitimacy and trust, research can help us understand whether the committee is delivering on the assurances being given, as we set out below.


This project brings together a diverse team of researchers in Law, Computer Science, Social Innovation, and Policing, with extensive experience of the theory and practice of real-world ethical approaches in sensitive contexts. The partnership with the West Midlands Police and Crime Commissioner (WMPCC) presents a unique opportunity to analyse the operationalisation of AI tools in policing and the impact of advice from its data ethics committee. A specific focus is on the effects on the human rights of marginalised groups, and on deploying an intersectional lens to investigate the impacts of policing AI.


The project is designed to address six specific research challenges via four work packages. We will investigate the influence of the data ethics committee on technology design, the identification of human rights concerns and the incorporation of the interests of marginalised groups. We will consider the potential of other frameworks to improve the process, and the challenges that could shape future research. Our methodology will Review, Observe, Understand and Communicate. The outcomes will not only reveal currently unknown and unqualified practices, but will also employ state-of-the-art analytical methods and thus serve as a valuable test of their fitness for purpose. The project does not address ethics in the abstract, but is grounded in the real challenges of real applications of AI tools in policing. We will focus not only on outcomes but also on processes that may generate trust or fairness by exercising and displaying good governance, and by continually looking, learning, changing and improving.


A key output will be an evidence-based typology, which will have wide-ranging implications across policing. Dissemination of all results across the whole policing ecosystem will be possible through the diverse research networks of the project team, which include regulatory bodies.


Our team is experienced in integrated interdisciplinary research. We combine the expertise in law, computer science, criminal justice, social innovation and participatory methodology needed to ensure that the research is robust, insightful and impactful. This project is deliberately ambitious and will prepare the groundwork for a full demonstrator project on responsible AI in policing and sensitive contexts.