7–31 August 2025 | Inspace Gallery, Edinburgh
In the summer of 2025, Bridging Responsible AI Divides (BRAID) staged an exhibition of newly commissioned artworks on the theme of Responsible AI by artists Louise Ashcroft, Julie Freeman, Wesley Goatley, Identity 2.0, Rachel Maclean, Kiki Shervington-White, and Studio Above&Below. We invited Alasdair Milne, a BRAID community member who visited the exhibition, to share his reflections.

Exhibition entrance (installation shot) with graphic by Cate Bebe Sutton.
How might artists develop strategies for the production and use of artificial intelligence (AI) technologies?
The title ‘Tipping Point’ gestures to the term as it is used in climate science: a threshold which, once crossed, cannot be reversed, and which may trigger a cascade of further consequences. The exhibition asks what artists can do ‘to help us more wisely respond to the present realities and near-future horizons of artificial intelligence’. Rather than treating the increasing ubiquity of AI technologies as inevitable and beyond our control, it presents the efforts of a cohort of contemporary artists and creatives to develop a set of propositions for steering AI development in new and divergent directions.
In commissioning new works, BRAID supports not just their presentation to the public but the resource-intensive labour of producing them in the first place. ‘The artworks present new ways of thinking about today’s AI, the futures we want and the communities needed to build it.’ As BRAID co-investigator Beverley Hood puts it, the aim is to move beyond critique towards direct strategies for intervention. The exhibition provides a physical site at Inspace Gallery where audiences can interact with each of the seven new commissions, which individually and together forge new research collaborations and communities that might regulate, and self-regulate, through presentation and discursive exchange.
Upon entering the gallery, visitors first encounter Julie Freeman’s Models of Care, which takes its title from the field of medicine. Freeman repurposes this framework, expanding its relevance from the individual in need towards the collective project of planetary custodianship, particularly in relation to the impacts of creative practices that employ resource-intensive technologies.
The work manifests as two bentwood ‘sonic sculptures’, their form inspired by the topography of Svalbard in the far north of Norway. The larger of the two emits sonic resonances composed by Freeman and Torben Snekkestad from field recordings of the Arctic; glacial rumbles convey geological timescales. The smaller sculpture, by contrast, resonates with the outputs of a ‘low resource AI model’ trained on glacial data. Their vibrations can be felt when the sculptures are sat on or held. While the work concerns itself with generation, it also attends to the question of prediction, a mode of thinking that has long been central to climatic and meteorological research. In doing so, Models of Care troubles the distinction between the two.

Models of Care by Julie Freeman (installation shot) explores the relationship between AI, climate change, and human agency.
Not all works concerning AI need employ it in the production process. This proposition is underlined by Identity 2.0’s double zine installation, AI – Z. Identity 2.0, like Freeman, define generative AI as ‘a prediction machine’. Working with activists Sophia Luu, James Reeves, Issey Gladston and Armes, their approach starts from the position that AI should be resisted, and draws out a set of strategies for doing so.
In ‘thinking of this zine as a seed’, its provocations become a starting point for future action. In the gallery the zine sits within a library of other zines, a context also reflected in the breadth of the contributors’ domains of expertise. Identity 2.0 expand their commission further with a second zine, Ungoogleable Knowledge, a zine about the main zine, which moves to ‘open up the black box’ of their own working process, offering their own transparency as a counter-example to the approach of seemingly opaque technology companies. Quoting Toni Cade Bambara (‘The role of the artist is to make the revolution irresistible’), they draw attention to the enduring problem that the capacity of ‘artists’ is mobilised by all sorts of actors aiming at revolutions of many different kinds. For Identity 2.0, this revolution is opposed to AI as it currently exists.

AI – Z by Identity 2.0 (installation shot) harnesses the act of zine-making and the resistance tactics of activists to promote responsible AI and resist the pervasive inclusion of GenAI in our daily lives.
Louise Ashcroft’s Real Stupidity responds deftly to the context in which Inspace Gallery sits, at the epicentre of Edinburgh’s Fringe Festival, by collaborating with comedians John Luke Roberts, Ella Golt, Frankie Thompson and Ben Target to produce short video advertisements for speculative AI products. A novel kind of creative partnership, also encompassing curator and AI researcher Rebecca Edwards, this playful approach produces uninhibited results, implicitly critiquing the seriousness of mainstream technology culture as a limiting factor on divergent innovation. Each blueprint attests to the possibility of playfulness in speculative design and practice. From an AI model that ‘tailors speech to your audience so you can communicate to your dad better’ to a proposal that ‘everyone’s heating is set to the temperature of the coldest home, until equality’, each provocation, while very serious, is brought to life through humour, a fundamentally human quality.

Real Stupidity by Louise Ashcroft (installation shot) asks comedians to create a series of ‘Speculative Gadgets’, a range of wearable AI devices that tackle contemporary societal issues.
Wesley Goatley’s tripartite installation A Harbinger, A Horizon, A Hope: Three Heralds of Possible AI Futures presents three hacked-hardware AI voice assistants. Like Identity 2.0, Goatley is critical of the way AI is framed, suggesting that the very act of naming it grants it a power and status it does not deserve. First, the Harbinger is an operative voice assistant imagined for an NHS GP surgery in a world where its development and maintenance have been outsourced to Palantir. As Goatley points out, there is a difference between ‘automating the labour of’ a person and ‘replacing’ them; the Harbinger plays on this tension. Like each of Goatley’s three speculative works, the model runs on a local server rather than relying on a data centre, thus minimising resource consumption. In response to BRAID’s provocation to move beyond critique, Goatley’s work is explicitly propositional since, as he puts it, ‘any good critique has to be propositional’. The Horizon considers a different, branched future: an implicit collapse scenario in which resources are scarce. The resultant assistant is hacked together from whatever is to hand and runs on a small solar panel. This work speaks to human resilience, but also to a view of technology that does not rely on consistent corporate supply chains.
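To make the resource point concrete, here is a minimal sketch of what ‘runs on a local server’ can mean in practice, assuming the open-source llama-cpp-python library and a small quantised model file on disk; this illustrates the general approach only, not Goatley’s actual stack, which is not documented here.

```python
# Minimal sketch of a fully local assistant loop, assuming the
# llama-cpp-python library and a small quantised GGUF model on disk.
# This is an illustration of the general approach, not Goatley's
# actual implementation; the model path and prompt are invented.
from llama_cpp import Llama

# Everything loads from local disk and runs on local hardware;
# no network connection or data centre is involved at any point.
llm = Llama(model_path="models/small-assistant.gguf", n_ctx=2048, verbose=False)

def reply(user_text: str) -> str:
    # A single completion call against the locally hosted model.
    out = llm(
        f"User: {user_text}\nAssistant:",
        max_tokens=128,
        stop=["User:"],
    )
    return out["choices"][0]["text"].strip()

if __name__ == "__main__":
    print(reply("When is the surgery open on Saturdays?"))
```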
The third device, the Hope, takes on the most challenging problem of the three: the integrated smart system, sometimes discussed as ubiquitous computing (or ‘ubicomp’) because its hardware becomes so closely meshed with the architecture of the built environment. Unlike the Horizon, which considers the response to adversity at the scale of the individual, the Hope imagines a future in which such technology remains a collectively held resource. Utopian visions of ubicomp systems often fall at the first hurdle: the more integrated the system, the more effective its co-option as a surveillance apparatus. Goatley recasts this problem as one of power rather than of design alone; reframed this way, it becomes an opportunity for a community. As a prototype, its concept both critiques existing approaches to surveillance and proposes a critical alternative.

A Harbinger, a Horizon, and a Hope by Wesley Goatley (installation shot) uses world-building and critical practice to create voice-enabled devices from distinct and possible near-future scenarios for AI technologies.
Rachel Maclean’s Eye Yours! They’ve Ggetuo previews the first in a new body of work which expands beyond Maclean’s established medium of film to incorporate sculpture and painting. Maclean has trained AI models on her fifteen-year back catalogue of film work, in which she features as the sole actor; the sculpture marks an effort to use what is sometimes called ‘style transfer’ in machine learning: the capturing of a specific style by a model so that it can be reproduced endlessly. Here, however, this is taken as a constructive strategy, in which the artist captures her own style and is able to expand, build upon, and make productive use of it, rather than allowing such capture to be imposed by a third-party technology company training on scraped data. Further, Maclean aims to use AI as an ampliative tool, making work that would not otherwise have been possible.
In this respect, Maclean’s work both uses AI and is about AI. Drawing parallels with the extractive practices of the British Empire and the first Industrial Revolution, the work’s warping glass morphology recalls the impact of the microscope and the new scales of exploration it made possible. The artist’s own face is superimposed upon a ‘gentleman bust’, extending this historical reference; her role is contested, implying that being both creator and controller of a machine cannot be taken for granted.

Eye Yours! They’ve Ggetuo by Rachel Maclean (detail) interrogates the tension between what AI is and what it feels like to interact with it.
Studio Above & Below’s installation (S)Low-Tech AI, as the name suggests, aims to demonstrate the possibility of more minimally impactful AI technologies. Dual-channel footage of geological formations and its ambient soundtrack are altered by the audience, who interact via an interface of interchangeable stones originating from the same locations in the Scottish Highlands and Lowlands as the audiovisuals. The interface is not driven by a ‘heavy’ AI model but is instead inspired by modes of computation distinct from machine learning and its connectionist tradition, opting for rules-based systems. This decision, like Goatley’s choice to run models locally, reduces the total resource expenditure of the work. Further, it highlights that in many cases a machine learning model is superfluous: a simpler algorithm can often achieve the same results.
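By way of illustration only (the installation’s actual software is not documented here, so every name and mapping below is an assumption), a rules-based interface of this kind can be as simple as a fixed lookup table: placing a stone triggers a predetermined audiovisual state, with no trained model or inference step involved.

```python
# Hypothetical sketch of a rules-based interface in the spirit of
# (S)Low-Tech AI. All stone identifiers, filenames and mappings are
# invented for illustration; they are not taken from the installation.

# Each stone is identified (for example, by an RFID tag) and mapped
# directly to an audiovisual state by fixed rules; no model inference,
# training data or GPU is involved.
STONE_RULES = {
    "granite_highlands": {
        "video": "highlands_01.mp4",
        "ambience": "wind_low.wav",
        "playback_rate": 0.8,
    },
    "sandstone_lowlands": {
        "video": "lowlands_03.mp4",
        "ambience": "river.wav",
        "playback_rate": 1.0,
    },
}

# State shown when no recognised stone is placed on the interface.
DEFAULT_STATE = {"video": "idle.mp4", "ambience": "silence.wav", "playback_rate": 1.0}


def state_for(stone_id: str) -> dict:
    """Return the audiovisual state for a placed stone.

    A constant-time dictionary lookup: the whole 'computation' costs
    microseconds on a single-board computer, a fraction of the energy
    budget of querying a neural network for the same mapping.
    """
    return STONE_RULES.get(stone_id, DEFAULT_STATE)


if __name__ == "__main__":
    print(state_for("granite_highlands"))
    print(state_for("unknown_stone"))  # falls back to the idle state
```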

(S)Low-Tech AI by Studio Above&Below (installation shot) seeks a shift towards slower, smaller and more grounded AI systems.
With Closer to Go(o)d?, Kiki Shervington-White brings together archival footage and insight gathered from a series of ‘participatory workshops with Black and ethnically diverse communities in Birmingham’ into a film which contests the objectivity of AI systems, in particular facial recognition algorithms. The film shows a facial recognition algorithm’s attempts, and struggles, to categorise faces from archival footage, revealing its limits from both a technical and a historical perspective.
In an effort to broaden the range of voices given a say in urgent questions of technical development, the film platforms perspectives from within this community, whose members share their own ideas about what would make AI more transparent. Among the many provocations the work contains is the suggestion that using the machine or algorithm as a buffer against accountability, in the context of machine vision and facial recognition systems but also more broadly, poses a threat to institutional systems of justice. The work then builds upon this critique, pushing forward the debate around equitable development by suggesting that intervention in technical development must move beyond representation and towards building real power.

Closer to Go(o)d? by Kiki Shervington-White (installation shot) brings community perspectives to bear to challenge the myth of AI neutrality and foreground human agency in the age of AI.
The works presented here are divergent in their approaches; as a cohort, this divergence is their strength, but it also places them beyond simple generalisation. Studio Above & Below, Freeman and Goatley all explicitly attempt to use AI tools in ways the artists deem more responsible. Shervington-White interrogates what ‘responsible’ might mean in the first place, whilst also positing some concrete strategies for moving in that direction. Identity 2.0 offer a more critical counterpoint, opposing AI outright and offering strategies for supporting human empowerment. Across the projects, a wide discursive space is populated with provocations, insights, concrete prototypes and strategies. In presenting this suite of innovations to the public, the artists seek not only to bring insight to responsible AI research, but also to inspire the public to engage creatively and critically with their own uses of AI. They prompt us to consider how we as a society collectively talk about, and make sense of, this moment of technological change.
Alasdair Milne
August 2025
Alasdair Milne recently completed his PhD with Serpentine Galleries’ Creative AI Lab and King’s College London. His work focuses on the collaborative systems that emerge around new technologies, synthesising critical and analytic philosophical approaches to assess them through ‘cultural systems analysis’.
Image credit: Chris Scott