Scoping projects

Our scoping projects are designed to enable teams of researchers across the UK to undertake scoping and preparation work, build partnerships and map parts of the AI ecosystem. The goal is to define what responsible AI looks like across different sectors of society, such as education, policing and the creative industries. These projects will result in early-stage research and recommendations that inform future work in the field.

Our current active projects, which began in February 2024 and run for six months, are listed below.


[Image: the collaborative process of building an assurance case, a participatory process involving stakeholder engagement and co-design by project team members.]

Trustworthy and Ethical Assurance of Digital Twins (TEA-DT)

  • Led by Dr Christopher Burr, The Alan Turing Institute
This project will conduct scoping research and engagement to develop the Trustworthy and Ethical Assurance platform into an open-source, community-driven tool. The platform helps developers of digital twins and AI systems address ethical challenges and establish trust with stakeholders ...
[Image: an array of colourful, fossil-like data imprints representing the static nature of AI models, laden with outdated contexts and biases.]

Creating a dynamic archive of responsible ecosystems in the context of creative AI

  • Led by Professor Lydia Farina, University of Nottingham
This project seeks to develop an understanding of what constitutes responsible AI in the context of creative AI. It examines the ethical and moral tensions arising between the concepts of creativity, authenticity and responsibility ...

iREAL: inclusive requirements elicitation for AI in libraries to support respectful management of Indigenous knowledges

  • Led by Dr Paul Gooding, University of Glasgow
iREAL will develop a model for responsible AI systems development in libraries seeking to include knowledge from Indigenous communities, specifically Aboriginal and Torres Strait Islander communities in Australia ...

CREAATIF: Crafting Responsive Assessments of AI and Tech-Impacted Futures

  • Led by Professor David Leslie, Queen Mary University of London
This project engages with creative workers to co-develop impact assessments that address fundamental rights and working conditions in the context of generative AI. It ensures that workers have a voice in the development of these technologies and corresponding labour policy ...
[Image: a collage of cars, smoke, a bird, a policeman, a person lying down, and buildings, connected by blue marker, annotated with the phrases "Their actions and existence has impact", "They can physically interact" and "They have power over people".]

AI in the street: scoping everyday observatories for public engagement with connected and automated urban environments

  • Led by Professor Noortje Marres, University of Warwick
This project will explore divergences between principles of responsible AI and the messy reality of AI as encountered in the street, in the form of automated vehicles and surveillance infrastructure. The aim is to ground understandings of AI in lived experiences ...
[Image: seventeen multicoloured post-it notes on a whiteboard, each with a hand-drawn sketch answering the prompt "AI is ...": patterns representing data, cartoons, data centres, and stick-figure drawings of the people involved.]

FRAIM: Framing Responsible AI Implementation and Management

  • Led by Dr Denis Newman-Griffis, University of Sheffield
This project will work with four partner organisations across public, private, and third sectors to build shared learning, values and principles for responsible AI. This will enable best practice development, help organise information and support decision making ...

Ethical review to support responsible AI in policing: a preliminary study of West Midlands Police’s specialist data ethics review committee

  • Led by Dr Marion Oswald, Northumbria University
This project focuses on how ethical scrutiny can improve the responsibility and legitimacy of AI deployed by the police. It involves working with the West Midlands Police and Crime Commissioner and West Midlands Police data ethics committee ...

Towards embedding responsible AI in the school system: co-creation with young people

  • Led by Professor Judy Robertson, University of Edinburgh
This project will investigate what generative AI could look like in secondary education. It involves working with young people as stakeholders whose right to be consulted and engaged with on this issue is a key tenet of responsible AI ...

Shared post-human imagination: human-AI collaboration in media creation

  • Led by Dr Szilvia Ruszev, Bournemouth University
The project will investigate responsible AI in the context of media creation, focusing on collaboration, creativity and representation. This includes concerns about copyright, job security and other ethical and legal challenges ...

Museum visitor experience and the responsible use of AI to communicate colonial collections

  • Led by Dr Joanna Tidy, The University of Sheffield
This project will work with the Royal Armouries to investigate the use of AI to enhance the museum visitor experience, specifically in relation to biases in AI that stem from the colonial history of museum collections ...

(Header Image Credit: Alexa Steinbrück / Better Images of AI / Explainable AI / CC-BY 4.0)
