Anticipating Today: Co-creating techno-moral tools for responsible AI governance

  • Led by Dr Federica Lucivero, University of Oxford
  • Partnered with Ada Lovelace Institute

This fellowship will develop and pilot tools that help policymakers anticipate the broader societal implications of emerging AI technologies. Created through a co-design approach in collaboration with the Ada Lovelace Institute and the Nuffield Council on Bioethics, these tools will foster policymakers' "moral imagination" and exploratory thinking around future AI technologies.

The challenge: Responsible governance of emerging AI technologies requires us to anticipate their broader societal implications and explore alternative futures. This is difficult in current policymaking contexts, as technology foresight rarely accounts for imaginative thinking. There is a lack of practical tools, feasible and sustainable for use in a policy context, that foster policymakers' "moral imagination" and exploratory thinking around the social and ethical implications of future technologies. To address this challenge, this project will develop a set of tools for the UK governance ecosystem that bring imaginative thinking into responses to emerging AI technologies.

Objectives:

  1. To investigate enablers of and barriers to imaginative thinking in the UK AI governance ecosystem;
  2. To develop a set of practical tools that build on the concept of moral imagination to foster creative and thorough ethical assessments of emerging AI technologies;
  3. To test these tools in three governmental sites/groups where anticipation of AI and technological impacts is required and refine them based on feedback;
  4. To share the tools in relevant non-academic contexts;
  5. To reflect on the process of, and the need for, translating this work into practical governance tools.

Key project partners:
This project will be delivered in collaboration with the Ada Lovelace Institute and Nuffield Council on Bioethics (NCOB).

How will we do this?
Our approach utilises methods and concepts from the social sciences alongside concepts developed in philosophy/ethics of technology.

  1. We will conduct interviews with relevant stakeholders, particularly those in the AI policy ecosystem and in institutionalised foresight contexts, to gain a richer understanding of the complexities and nuances of policymaking contexts.
  2. We will assess how the concept of moral imagination can be understood, applied and operationalised in the context of innovation governance.
  3. We will use a co-design method that brings together arts and humanities researchers and practitioners to create a set of practical tools, which will then be tested in three governmental sites/groups and revised based on feedback.

What do we hope to achieve?
The project’s main output will be a set of supporting materials that can be used effectively in practical contexts where policymakers need to make decisions about emerging technologies. These tools will aim to stimulate policymakers’ creative, ethical, imaginative and alternative-future thinking. This should lead to more robust anticipation of how technologies influence and shape societal practices and value systems, and ultimately help ensure that AI innovation contributes to societal and moral flourishing.
