After three whirlwind years, the BRAID programme has reached its midpoint and we’re pleased to share our plans for the future in a new blog post written by BRAID co-directors Ewa Luger and Shannon Vallor.

BRAID Fellows take part in an enrichment event in North Berwick in May 2025. Photo credit: Ryan Warburton
When BRAID was launched three years ago, our mission was to meaningfully integrate Arts and Humanities (AH) knowledge within the UK’s Responsible AI ecosystem. 76 collaborative cross-sectoral projects later, we have held community events, brokered fellowships, written collaborative policy responses, published papers and a landscape study, and run thematic workshops, all with a view to demonstrating the value of AH knowledge in areas of AI application such as education, creative industries, heritage, sustainability, energy, policing and inspection. We’ve been part of 128 events, engaging with around 10,000 participants. Over the summer, we also supported seven exciting artists to explore responsibility narratives in AI through our Tipping Point exhibition, creating a showcase as part of the Edinburgh Art Festival and Edinburgh Festival Fringe.
We have been proud to support and amplify AH voices and to demonstrably land some of our community’s core messages around responsibility, accountability and justice in the AI ecosystem. By framing AI innovation in ways that amplify values like creativity, resilience, equity, and humane knowledge, we drew together a new, truly multidisciplinary community that gathered at our Manchester event in June. Grown from a small core team, BRAID is now a dynamic and exciting community of researchers embedded in and informing the AI ecosystem.

BRAID Fellows gather at the Edinburgh Futures Institute in Nov 2025. Photo credit: Chris Scott.
BRAID at the midpoint
So, what next for us? Reflecting from the BRAID midpoint, we find ourselves in a different world from the one in which the programme was conceived. In a race to adopt AI systems fuelled by FOMO and hype, some of the core messages of, and early commitments to, responsibility in AI are being lost. Over the past three years AI adoption has become a driving force within the global economy, and core to our governments’ social and economic agendas.
Generative AI products have also captured the public imagination, placing powerful tools in the hands of anyone with an internet-connected device. This has been achieved through an unprecedented model of monetisation at scale, in which AI companies have extracted value from the work of others with few protections and no clear means of recourse or routes to repair of economic and societal harms. Unrequested AI features have also become commonplace within platforms, software and services, presenting us with the kinds of synthetic content that previously could only have been convincingly developed by a human, though arguably not a wise one, or one you would necessarily trust in a high-stakes setting.
Such AI tools have also enabled people with limited creative skills to produce a range of content at low or no cost, in many cases bypassing the need for professional human creativity or skill at the point of use. This has disproportionately impacted creative practitioners, and these tools are now actively enabling misinformation, disinformation and new forms of gender violence at scale, further amplified by new text-to-video applications. Meanwhile, promises of AI-enabled energy efficiencies are set against generative AI’s own rapidly ballooning energy demands, putting global and UK commitments to Net Zero, and our path to sustainable futures, at risk.
We are living in a world where many see AI innovation as an unequivocal good, hard-coded into the capitalist growth model. Yet when the public is surveyed, as the Ada Lovelace Institute did in 2023 and most recently in 2025, we find that while public perceptions have remained broadly stable, levels of concern have increased. Specific concerns have also emerged among groups such as artists, creatives, and educators, leading to an increasing polarisation of views around the true value of AI.
In the background, questions around how the AI ecosystem should be governed, who should do it, and where responsibility lies have become increasingly pressing, particularly within public institutions, even as AI systems are rolled out.

(S)Low-Tech AI by Studio Above&Below (installation shot) seeks a shift towards slower, smaller and more grounded AI systems. Part of the Tipping Point exhibition August 2025. Photo credit: Chris Scott
Our exhibition, Tipping Point, was named to articulate a moment in time when AI has seeped into the waters of the infosphere to the extent that, if we fail to act in ways that protect what matters to us, the ongoing pollution will impact all areas of life.
Research tells us that 95% of businesses deploying AI are getting zero return on their investment,¹ and speculation is increasing as to how long AI companies, such as OpenAI, can continue to draw down investment without turning a profit. Despite these realities, narratives around AI’s promise remain buoyant, though public voices and needs still sit at the periphery, if anywhere at all.
In response to these developments, we are reformulating the BRAID programme. For the coming three years we have restructured our work from the original four themes into three workstreams which, we believe, address some of the most pressing challenges arising from AI: understanding diverse publics’ and communities’ needs and supporting their capability to respond to AI challenges; exploring the mechanisms and conditions necessary for people to seek recourse and repair where AI harms occur; and promoting the use of AI in public media that supports, rather than undermines, democratic health.
Workstream 1: Community and Capability
Through this workstream we aim to focus on overlooked communities and publics by extending our convening and coordinating power to boost responsible and critical AI literacy skills provision and enhance the capacity of smaller organisations to evaluate responsible AI outcomes. This work is underpinned by two mechanisms for public voice:
- Understanding the changing nature of public attitudes to AI through two further iterations of the national survey led by the Ada Lovelace Institute, and
- The establishment of a stakeholder forum enabling different publics to feed into debates about how the responsible AI landscape can be shaped, and to better amplify and embed the voices and needs of underrepresented groups in our work.
From this foundation we will lead two research projects and one exhibition. The first project will explore the extent to which AI investment has yielded genuine net social return and create an open database of methods used to measure AI’s social value. This resource will be open for use by communities and publics seeking to understand the social value of adopting AI, enabling a fuller picture of its costs and contributions to our lives.
The second project, working closely with the BBC, will seek to understand how best to embed critical literacies within essential AI skills provision, to better utilise existing resources and support those with lower digital skills or those most likely to be subject to AI-generated harms.
Building on the success of Tipping Point as a means of engaging the public in AI debates, we will run another round of funding for creative practitioners, extending the focus to include creative practice more broadly, for example Music and Design, and culminating in a second exhibition.
Workstream 2: Recourse and Repair
This workstream will focus on identifying barriers to citizen and public recourse for AI-driven harms, potential pathways for repair of institutional, environmental and community damage, and resources needed to restore and maintain the balance of accountability in a healthy RAI ecosystem. In this line of work, we focus on two oft-neglected aspects of AI development and deployment: mechanisms of recourse for documented harms, and processes for repair of social and material injury.
Centering recourse and repair for AI-enabled harms, as well as the need to maintain these mechanisms and the systems delivering AI benefits, brings to light the temporal aspect of Responsible AI, one that extends beyond isolated design, deployment and governance decisions. Our research will begin with an analysis and critical review of existing mechanisms, capabilities and practices that extend responsibility for AI over time, such as post-incident reporting, legal and financial instruments for recourse, repair and maintenance, and community activism and advocacy. We will also look at the role of media coverage in securing recourse and repair of AI harms to communities.
Workstream 3: Public Media and Democracy
This strand of work builds upon prior BRAID research and is a direct response to sectoral concerns and needs, aligning with work currently being launched at the Responsible Innovation Centre within BBC R&D. It will address AI’s impact on the public media ecosystem, with a particular focus on public interest journalism. The workstream will explore how public media can navigate growing dependencies on AI infrastructures, better evaluate AI in newsrooms, and contribute to public interest AI that builds epistemic resilience in society. Research has already commenced with a stakeholder workshop, which will frame future activities and focus. This work centres on our democratic system, aiming to ensure that AI supports, rather than undermines, the mechanisms that underpin public interest and democratic health in the UK and beyond. This strand also includes the development of a public AI literacy curriculum/toolkit to support media organisations.

BRAID Community Gathering in Manchester, May 2025. Photo credit: Ryan Warburton
Supporting and expanding our community
We will, as ever, continue to support and expand our community, rolling out smaller-scale funding for impact, further fellowships, and ongoing community events, as well as new communications, which we will talk about more over the coming months.
BRAID is a community effort, and our first three years have shown that we are always better together. We will therefore continue to look for ways to further expand our community and make new connections. We will be advertising specific opportunities to connect shortly but, in the meantime, if you would like to collaborate please do reach out to us at braid@ed.ac.uk. We look forward to another three years of bringing the creative power and vision of the arts and humanities into the UK’s Responsible AI ecosystem.
Ewa Luger
Co-Director of BRAID (Bridging Responsible AI Divides)
Professor of Human-Data Interaction
Co-Director, Designing Responsible NLP Centre for Doctoral Training
Edinburgh College of Art
Shannon Vallor
Co-Director of BRAID (Bridging Responsible AI Divides)
Baillie Gifford Professor of the Ethics of Data and Artificial Intelligence
Director, Centre for Technomoral Futures at the Edinburgh Futures Institute
The University of Edinburgh