Regulatory guidelines informing the societal and ethical factors shaping medical AI adoption

  • Led by Dr Beverley Townsend, University of York
  • Partnered with Microsoft Research

This fellowship investigates how AI-powered medical devices can better meet the needs of diverse populations, and how those needs should inform regulation. The project will use an NHS AI triage tool as a case study.

Software and Artificial Intelligence as medical devices (‘AIaMDs’) – either standalone or embedded within hardware – present huge opportunities to promote human health and well-being. However, responsible development and deployment require that these devices align with human values and with user expectations and requirements. To reduce health inequalities, AIaMDs must perform well across all populations within the intended use of the device, serve the needs of diverse UK communities, and take account of the lived experience of individual users.

The project will elicit and capture emerging societal-ethical requirements from user-participants, which will inform the regulation of these technologies. Through stakeholder engagement, the UK public (including persons from marginalised groups) will have a voice in identifying these varying and pressing societal-ethical expectations and concerns, and how they should be regulated. The primary aim of the project is to drive human-centric regulation by identifying user values and expectations; a second, complementary aim is to propose practical policy guidelines and recommendations demonstrating how specific emerging societal-ethical expectations and unique AI-related concerns can be addressed by regulators.

The challenge for the Medicines and Healthcare products Regulatory Agency (‘MHRA’) is to follow the UK’s pro-innovation approach to AI regulation while remaining true to its mandate to prevent public harm and ensure patient safety, given that novel risks may arise from the digital and data-dependent components of the device. These technologies must not only be safe and effective but must also address crucial societal-ethical expectations and challenges, such as algorithmic injustice, unwanted bias, data quality and underrepresentation, misinformation, and accessibility. The underlying premise is that better alignment with values and expectations supports user experience, promotes trustworthiness in the system, and, ultimately, increases system uptake.

Microsoft Health Futures was selected as the non-academic stakeholder because of its strong regulatory, compliance, and policy background across medical devices, responsible AI, and data governance; its contribution is critical to the success of the project. The NHS A&E triage diagnostic AI-supported agent (‘DAISY’), under development at the University of York, will serve as a case study in which end user-participants progress through the entire AIaMD lifecycle to elicit important non-technical societal-ethical requirements. The project aims to better understand the tension between users’ societal-ethical requirements and system design specification, clinical performance, and utility, and to inform regulators (in the UK and around the world) of regulatory gaps and of non-technical requirements for inclusion in AIaMD policy guidelines. The work will directly benefit developers, designers, manufacturers, and deployers of these devices, including the NHS and end-users.
