What are the scoping projects?
Our scoping projects are designed to enable teams of researchers across the UK to undertake scoping and preparation work, build partnerships and map parts of the AI ecosystem. The goal is to define what responsible AI looks like across different sectors of society, such as education, policing and the creative industries. These projects will result in early-stage research and recommendations to inform future work in the field.
Beginning in February 2024 and running for six months, our current active projects are listed below.
Trustworthy and Ethical Assurance of Digital Twins (TEA-DT)
This project will conduct scoping research and engagement to develop the Trustworthy and Ethical Assurance platform into an open-source, community-driven tool that helps developers of digital twins and AI systems address ethical challenges and establish trust with stakeholders.

Creating a dynamic archive of responsible ecosystems in the context of creative AI
This project seeks to develop insight into what might actually constitute responsible AI in the context of creative AI. It involves examining the ethical and moral tensions arising between the concepts of creativity, authenticity and responsibility.

iREAL: inclusive requirements elicitation for AI in libraries to support respectful management of indigenous knowledges
iREAL will develop a model for responsible AI systems development in libraries seeking to include knowledge from Indigenous communities, specifically Aboriginal and Torres Strait Islander communities in Australia.

CREAATIF: Crafting Responsive Assessments of AI and Tech-Impacted Futures
This project engages with creative workers to co-develop impact assessments that address fundamental rights and working conditions in the context of generative AI. It ensures that workers have a voice in the development of these technologies and corresponding labour policy.

AI in the street: scoping everyday observatories for public engagement with connected and automated urban environments
This project will explore divergences between principles of responsible AI and the messy reality of AI as encountered in the street, in the form of automated vehicles and surveillance infrastructure. The aim is to ground understandings of AI in lived experiences.

FRAIM: Framing Responsible AI Implementation and Management
This project will work with four partner organisations across the public, private and third sectors to build shared learning, values and principles for responsible AI. This will enable best practice development, help organise information and support decision making.

Ethical review to support responsible AI in policing: a preliminary study of West Midlands Police’s specialist data ethics review committee
This project focuses on how ethical scrutiny can improve the responsibility and legitimacy of AI deployed by the police. It involves working with the West Midlands Police and Crime Commissioner and the West Midlands Police data ethics committee.

Towards embedding responsible AI in the school system: co-creation with young people
This project will investigate what generative AI could look like in secondary education. It involves working with young people as stakeholders whose right to be consulted and engaged with on this issue is a key tenet of responsible AI.

Shared post-human imagination: human-AI collaboration in media creation
This project will investigate responsible AI in the context of media creation, focusing on collaboration, creativity and representation. This includes concerns about copyright, job security and other ethical and legal challenges.

Museum visitor experience and the responsible use of AI to communicate colonial collections
This project will work with the Royal Armouries to investigate the use of AI to enhance the museum visitor experience, specifically in relation to biases in AI that stem from the colonial history of museum collections.
(Header Image Credit: Alexa Steinbrück / Better Images of AI / Explainable AI / CC-BY 4.0 / Faded and cropped from original)