Muted registers: A feminist intersectional (re)figuring of red-teaming

  • Led by Dr Alex Taylor, University of Edinburgh
  • Partnered with Microsoft Research

This fellowship seeks to understand the practice of red-teaming to produce actionable insights for policy and governance. The focus is on collaboration, dissemination of tools, and broad public engagement.

As an emerging practice, red-teaming raises fundamental questions about its use in building safe and responsible generative AI systems. Who, exactly, works in these ‘red teams’? What conditions do red teams operate in, and what are the lasting individual and structural consequences of these conditions? And what taken-for-granted logics, norms and values are being applied in the drive to mitigate harms and improve AI safety?

This fellowship will seek a deeper understanding of, and critical engagement with, red-teaming in order to inform both organisational governance and national policies linked to Responsible AI (RAI). Grounded research will draw on scholarship in the humanities, philosophy, Black Studies and English literature – loosely grouped under the umbrella term feminist intersectionality – to make two analytical contributions:

  1. It will provide a means to think with/against the harms that are both targeted by and arising in red-teaming. Feminist intersectionality will enable a critical examination of red teams’ exposure to trauma in efforts to ‘responsibilize’ AI, and of how this risks displacing harm and amplifying it in wider sociotechnical systems and structures. A central question motivating this analysis will be: “What are the individual and structural consequences of displacing harm through red-teaming, and can these consequences amount to a responsible practice?”
  2. It will be motivated by imaginative and hopeful ways of responding to situated, ethical and epistemic challenges. This perspective will be driven by a feminist intersectionality committed to listening to others and always provoked by the expansive question, ‘What other worlds are possible?’ With respect to red-teaming, this will involve attuning to the ways harm is reduced to a technical problem that can somehow be neutralised by people working in difficult conditions. The research will gather stories and accounts of work in these conditions to foreground ‘muted registers’ (Hustak & Myers 2012) – those stories we don’t often hear or that are elided. Such registers will be used to redirect attention in RAI away from neutralising harms and, instead, towards genuinely engaging with and empowering those who feel the effects of irresponsibility in AI.

This project is committed to impacting governance within commercial organisations and national policy-making. The programme of work for the fellowship has been designed in close collaboration with Microsoft Research (MSR) and the BRAID team to translate the findings for governance and policy audiences, and to develop new ways of improving working conditions for those routinely confronted with the harms reproduced by generative AI.
