Image: Responsible AI (R-AI) encompasses a spectrum of values and goals: accountable, sustainable, ethical, just, fair, secure, trustworthy, safe and transparent.

A Perilous Juncture for Responsible AI

Responsible AI in the UK is at a perilous juncture. AI is rapidly spreading into every part of our lives, and into every one of the institutions meant to govern, educate, shelter and defend us. Yet this is happening without the long-promised regulatory guardrails and widely adopted standards of ‘responsible’ AI development. With the UK government’s recent pledge to ‘mainline’ AI into the nation’s veins, one would expect a simultaneous rise in strong safeguards and firm commitments from industry and government alike to ensure that publics do not come to harm from the powerful set of technologies that is AI.

Instead, we have witnessed the watering down of many government and industry efforts to deliver on the promise of ‘Responsible AI’ (R-AI). Google DeepMind, Twitter/X, and Microsoft have all disbanded or eliminated R-AI teams; Meta now proposes to turn this work over to AI itself. Earlier this year, the US and UK refused to sign the Paris AI Action Summit statement that called for AI to be sustainable, open, inclusive, transparent, ethical, safe, secure and trustworthy. The US and UK have both rebranded their once-championed AI Safety Institutes to signal a newly restricted focus on national security and economic competitiveness rather than safety, fairness, accountability and reliability. Meanwhile, the EU’s AI Act faces stiff implementation headwinds and calls to weaken its protections.

Ironically, all this comes at a time when the scope and depth of Responsible AI expertise in the UK and globally are reaching new peaks. This is thanks in part to major investments by industry, governments, universities and philanthropists in leading-edge research to map the anticipated risks and benefits of AI, while developing and testing the very governance and safety guardrails we desperately need. In the UK, this has meant over £100 million of investment in Responsible AI research and development, including our own programme, BRAID (Bridging Responsible AI Divides), funded in 2022 by the Arts and Humanities Research Council and extended through 2028, in partnership with the Ada Lovelace Institute and the BBC.

As our Responsible AI community gathers in Manchester on 18 June to celebrate BRAID’s positive impact on the AI landscape halfway through this journey, and to hear exciting news about what comes next, we also reflect on this perilous, uncertain moment for the future of a Responsible AI ecosystem in the UK and abroad. How do we chart our road ahead?


Image: Implementing R-AI is often challenging, with competing stakeholder objectives, making it a non-linear process.

The BRAID Landscape Study: Lessons from the First Wave of Responsible AI

At moments of great uncertainty in a journey, when the road forward looks less clear and more hazardous than ever, it can be useful to pause and take stock of the lessons one has learned on the journey so far. This is particularly important when the coordinates of the destination you are aiming for have still not been precisely identified or agreed upon by all those travelling.

Looking back at the twisting road that has led us to this juncture, and identifying the key lessons we can learn from it to guide us on the rest of the journey, is the goal of the BRAID Landscape Study, the first phase of which we publish today to coincide with our community gathering. A second and final publication in 2026 will enhance these initial insights and deepen the study with perspectives drawn directly from R-AI leaders, practitioners and impacted publics.

The study published today achieves three things:

  • Mapping the contested territory and key divisions within the R-AI ecosystem;
  • Charting the historical path of Responsible AI as an idea, an ambition and a practice, and the vital role of the arts and humanities in shaping this path;
  • Identifying seven vital lessons that we have learned from the first wave of R-AI, lessons that can guide us as we continue this work with BRAID and beyond.

The desirability of ‘Responsible’ AI (R-AI) has never been controversial, but it has never been entirely obvious what it does, or should, mean. ‘Responsible’ is not just a simple term of praise, but a contested idea that stands for a number of different and at times contradictory things. It sometimes refers to a growing field of critical AI research, at other times a corporate AI governance ambition, and at other times a process for producing desirable and trustworthy AI products. Responsible AI research often criticises the corporate R-AI governance agenda; meanwhile, many proposed ways to build responsible AI products lack buy-in from AI industry leaders. Thus, the ambiguity comes from the many ways the term has been attached to various practices in the AI space, with different stakeholders deploying it in different ways, often to achieve very different ends.

Our BRAID Landscape Study addresses these ambiguities by providing a historical and conceptual mapping of ‘Responsible AI’. The lack of a coherent, shared and clear understanding, both of the content and the goals of R-AI, creates the impression of an uncoordinated, confused mess. But that impression may come from expecting to find a single functional system, rather than a diverse and sprawling ecosystem. Our Landscape Study surveys the deep tensions within this ecosystem, but rather than seeking to eliminate these tensions, we detail how they can be calibrated to find a healthy equilibrium for the ecosystem as a whole. No single aspect of this ecosystem is completely isolated from the rest, and so these component parts need to be mapped, understood, and steered in a way that supports flourishing across the whole system.

Finally, we draw seven lessons from this rich, yet still incomplete, history; lessons that we can take with us on the remainder of the journey to ensure the responsible configuration of AI in society.

We hope that you’ll want to read the full report – but if you’re looking for some quick insights, here are the seven lessons in brief:

1. The ‘AI’ in R-AI is an elusive and rapidly moving target

A significant insight from our study was that there is no single technology or method that we can call ‘AI’. Rather, AI is a sprawling and diverse set of methods, products, services, and knowledge, ranging from bespoke tools for narrow research projects to commercial systems deployed at global scale. These different systems give rise to distinct risks, ethical concerns, and methods of evaluation. This means we need to be flexible and sensitive to social and cultural context in how we think about the responsible rollout of these systems. What follows from this lesson, then, is that R-AI must grow beyond fixed interventions such as ethical ‘frameworks’ and technical ‘toolkits’ to more flexible approaches such as post-deployment incident reporting and monitoring.

2. R-AI must expand stakeholder engagement to reach and include impacted communities

Another key takeaway from the first wave of R-AI is that vulnerable groups and affected communities, those who are most likely to be impacted by AI, need to be more fully brought into the fold of this work. The recent outcry over the irresponsible rollout of Generative AI (GenAI) systems, and the impact this has had on creative communities, is a case in point. No set of corporate R-AI guidelines had the teeth to align the deployment of this technology with the interests of artists, designers, and writers, and early scoping work by the BRAID-funded project CREAATIF suggests that these tools are already doing more harm than good for creatives in the UK.

3. Narrowly technical approaches to R-AI do not work

It has become increasingly clear that R-AI needs to be seen as a sociotechnical enterprise. This follows from the acknowledgement that AI is a powerful social tool, capable of shaping our social and economic realities, not simply a technical product. Overly technical approaches often overlook more qualitative harms and reinforce techno-deterministic beliefs that AI development is inevitable and unalterable. Interdisciplinary perspectives, such as those informed by the arts and humanities, offer ways of understanding social impact, imagining alternative futures, and countering passive acceptance of harmful technologies. Several BRAID-funded fellowships, including ‘Anticipating Today’ and ‘Human-Centred AI for the Equitable Smart Energy Grid’, are positioning the arts and humanities as central to R-AI, alongside and directly engaged with technical expertise.

4. Public trust is essential to a sustainable R-AI ecosystem

Public trust is not a given; it must be earned. Distrust in science and technology has undermined socially important interventions from climate action to vaccine uptake, and AI faces similar risks, especially as public scepticism grows. Restoring trust requires interrogating the democratic legitimacy of AI and making sure there is effective public engagement in the design and deployment of AI systems. Initiatives like the BRAID-funded ‘Inclusive Futures’ and ‘Medical AI and Sociotechnical Harm’ fellowships show how marginalised communities can shape more inclusive and trustworthy visions for AI. Crucially, this trust is not about blind faith in technology, but about inclusive processes that ensure AI is developed and governed in ways that respect public values and democratic oversight.

5. Good intentions are not enough for R-AI

Even the most well-intentioned developers cannot compensate for misaligned incentives and weak accountability structures. Organisational priorities such as speed to market or profit often override ethical concerns, undermining individual efforts toward responsible practices. Thus, R-AI must shift focus from personal virtue (or vice!) to systemic change. Cultures of responsibility must be built and sustained through structural supports, such as regulation, sustainable funding, and shared community values. BRAID-funded projects like ‘Muted Registers’, ‘Machining Sonic Identities’, and ‘Sustainable AI Futures’ aim to broaden R-AI’s vision beyond harm reduction, toward imagining hopeful, transformative, well-governed uses of AI.

6. R-AI must address questions wider than ethics and legality

Ethics and legality, while crucial, are insufficient for guiding AI’s societal role. R-AI must confront the full spectrum of political, cultural, economic, and environmental forces that shape AI development and deployment. The field must integrate not just legal and ethical reasoning, but also cultural narratives, aesthetic imagination, and political critique. Projects such as ‘AI Art Beyond the Gallery’ and ‘CREA-TEC’ demonstrate how the arts can both challenge and guide AI innovation. These disciplines offer the cultural insight necessary to envision and build better AI futures, and while ethics and law are important parts of this future, there are myriad other social and material powers that shape the possibilities presented and the values amplified by AI.

7. R-AI is not a problem to be solved but an ecosystem to be tended

Finally, R-AI is not a problem with a singular solution, but an evolving ecosystem that must be continually nurtured. Guidelines and policies are necessary but insufficient without strong communities of practice and shared wisdom. Sustainable R-AI depends on stewarding technical, moral, and creative expertise toward collective goals. It requires high-level vision and everyday discipline, working together to embed responsibility into the very fabric of AI development. BRAID’s strategy exemplifies this holistic approach, facilitating the exchange of ideas and practices across sectors to weave a resilient, inclusive, and forward-looking AI ecosystem.

That ecosystem must be safely shepherded onto a trajectory of sociotechnical maturity and sustainability. Given the speed at which AI is transforming the very social fabric that supports it, the diverse and sprawling global community of R-AI researchers, practitioners, creators and leaders must widely and effectively share the lessons they have learned already, and apply them to the new challenges for R-AI already on our doorstep.


Image: Seven ‘lessons learned’ from the first waves of Responsible AI.

Download the BRAID Landscape Study here:
The Responsible AI Ecosystem: A BRAID Landscape Study

 

Fabio Tollon
Postdoctoral Researcher, BRAID (Bridging Responsible AI Divides)
The University of Edinburgh

Shannon Vallor
Co-Director of BRAID (Bridging Responsible AI Divides)
Baillie Gifford Professor of the Ethics of Data and Artificial Intelligence
Director, Centre for Technomoral Futures at the Edinburgh Futures Institute
The University of Edinburgh

 

Image credit: Ian Vickers, Eureka! Design Consultants Ltd.