Trustworthy and Ethical Assurance of Digital Twins (TEA-DT)

  • Led by Dr Christopher Burr, Alan Turing Institute

This project will conduct scoping research and engagement to develop the Trustworthy and Ethical Assurance platform into an open-source, community-driven tool that helps developers of digital twins and AI systems address ethical challenges and establish trust with stakeholders.


In recent years, considerable effort has gone into defining “responsible” AI research and innovation. Though progress is tangible, many sectors still lack the tools and capabilities for operationalising and implementing ethical principles. Furthermore, many project teams find it challenging to know how to achieve goals such as fairness or explainability, and to communicate to other stakeholders or affected users that these goals have been realised. If ignored, these gaps could hamper efforts to build public trust in AI technologies or amplify existing societal harms and inequalities caused by biased and non-transparent sociotechnical systems.

The Trustworthy and Ethical Assurance of Digital Twins (TEA-DT) project will develop an existing open-source platform, known as the Trustworthy and Ethical Assurance (TEA) Platform, which has been designed to help users navigate the process of addressing the aforementioned challenges.

The TEA platform helps users and project teams define, operationalise, and implement ethical principles as goals to be assured, and also provides means for communicating how these goals have been realised. It achieves this by guiding individuals and project teams to identify the relevant set of claims and evidence that justify their chosen ethical principles, using a participatory approach that can be embedded throughout a project’s lifecycle. The output of the platform, a user-generated assurance case, can be co-designed and vetted by various stakeholders, fostering trust through open, clear, and accessible communication.
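To make the claims-and-evidence structure concrete, the sketch below models an assurance case as a top-level ethical goal supported by claims, each backed by evidence. This is a simplified illustration of the general concept only; the class names, fields, and the `is_fully_evidenced` check are assumptions for this example, not the TEA platform’s actual schema.

```python
from dataclasses import dataclass, field


@dataclass
class Evidence:
    """A single piece of supporting evidence, e.g. an audit report."""
    description: str


@dataclass
class Claim:
    """A property the system is claimed to have, backed by evidence."""
    statement: str
    evidence: list[Evidence] = field(default_factory=list)


@dataclass
class AssuranceCase:
    """An ethical goal (e.g. fairness) justified by a set of claims."""
    goal: str
    claims: list[Claim] = field(default_factory=list)

    def is_fully_evidenced(self) -> bool:
        # Minimal check: every claim must cite at least one piece of evidence.
        return all(claim.evidence for claim in self.claims)


case = AssuranceCase(
    goal="Fairness",
    claims=[
        Claim("Training data was audited for demographic bias",
              [Evidence("bias audit report")]),
        Claim("Model performance is comparable across subgroups",
              [Evidence("disaggregated evaluation results")]),
    ],
)
print(case.is_fully_evidenced())  # True: every claim cites evidence
```

In practice, an assurance case like this would be co-designed with stakeholders rather than assembled by one developer; the data-structure view simply shows why gaps (claims without evidence) are easy to surface and discuss.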

The TEA platform consists of three main elements: 1) an online tool for crafting well-reasoned arguments about ethical goals, 2) user-friendly guidance to foster critical thinking among teams and organisations, and 3) a supportive community infrastructure for sharing and discussing best practices.

Although the platform is designed for a wide range of applications, the TEA-DT project will specifically focus on digital twins: virtual duplicates that are closely coupled to their physical counterparts to enable access to data and insights that can improve and optimise the way their real-world versions operate. More specifically, the project team will carry out scoping research on the assurance of digital twins within three different contexts: health, natural environment, and infrastructure.

Although digital twins promise vast societal benefits in these areas, their increasing reliance on various forms of AI and their frequent operation in safety-critical settings mean that several challenges must be addressed to ensure their ethical and trustworthy development. For instance, in health, questions about data privacy and ownership arise; environmental applications must tackle bias and fairness issues, complicated by global scales and differing laws; and in infrastructure, technical challenges concerning uncertainty communication give rise to additional needs for transparency and explainability.

In collaboration with key partners and stakeholders, the TEA-DT project will carry out scoping research to co-develop exemplary assurance cases and enhance the platform’s features to make it more user-friendly and integrated into workflows. By committing to open research and community-building principles, the project aims to a) systematically share best practices and standards, b) make the operationalisation of ethical principles more accessible and inclusive, and c) integrate the project sustainably with existing networks and communities.
