BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//BRAID UK - ECPv6.14.2//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:BRAID UK
X-ORIGINAL-URL:https://braiduk.org
X-WR-CALDESC:Events for BRAID UK
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:UTC
BEGIN:STANDARD
TZOFFSETFROM:+0000
TZOFFSETTO:+0000
TZNAME:UTC
DTSTART:20240101T000000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=UTC:20250515T160000
DTEND;TZID=UTC:20250515T170000
DTSTAMP:20260404T093235Z
CREATED:20250226T110248Z
LAST-MODIFIED:20250606T083255Z
UID:3298-1747324800-1747328400@braiduk.org
SUMMARY:‘Responsible AI Futures’ Hybrid Seminar – Srravya Chandhiramowuli
DESCRIPTION:Millions of workers\, particularly in global south regions\, are engaged in creating large-scale annotated datasets used for training and fine-tuning models\, as well as making AI work as intended by verifying and correcting its outcomes where required. Yet\, there is little recognition\, in AI development or governance\, of the role of data workers or the challenges they face. In this talk\, I bring attention to the contributions as well as concerns arising from data work through ethnographic insights into two data work projects\, one in which data work is structured as a repetitive\, unitised activity and another which aims to recover data work from such reductive frames using feminist-led\, participatory approaches. By tracing the work practices\, values and tensions across the two projects\, I highlight how data work\, including efforts to responsibilize it\, is caught within and shaped by the globalised supply chains that prioritise efficiency and expansion. Critically examining data work allows us to confront the scalar logics that underpin dataset (and indeed AI) production and to intervene in them as part of envisioning responsible AI futures. \n\n\n\n\nBio\nSrravya Chandhiramowuli is a PhD candidate in the University of Edinburgh’s Institute for Design Informatics and a PhD affiliate at the Centre for Technomoral Futures. Her research closely follows the on-ground practices of dataset production for AI\, bringing particular attention to systemic challenges and frictions in data and AI pipelines. Building on scholarship in Human Computer Interaction (HCI) and Science and Technology Studies (STS)\, Srravya’s research seeks to contribute towards just and equitable AI futures. \n\n\n\n\nWatch the recording here:
URL:https://braiduk.org/event/responsible-ai-futures-hybrid-seminar-srravya-chandhiramowuli
LOCATION:Inspace\, 1 Crichton Street\, Edinburgh\, EH8 9AB\, United Kingdom
CATEGORIES:DI Lecture Series
ATTACH;FMTTYPE=image/png:https://braiduk.org/wp-content/uploads/2025/02/2025-sem-2-eventbrite-images-1.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20250508T160000
DTEND;TZID=UTC:20250508T170000
DTSTAMP:20260404T093235Z
CREATED:20241105T114930Z
LAST-MODIFIED:20250512T114432Z
UID:2782-1746720000-1746723600@braiduk.org
SUMMARY:‘Responsible AI Futures’ Hybrid Seminar – Dr Lydia Farina
DESCRIPTION:Determining responsibility considerations for AI ecosystems in the context of the creative industries\nThis talk provides key insights from our scoping BRAID project ‘Creating a dynamic archive of Responsible AI Ecosystems in the context of Creative AI’. The project lays the foundation work for mapping RAI ecosystems in this context by using bottom-up evidence already collected in specific research and artistic projects. We interpret AI ecosystems as interlinked ecosystems consisting of different individual actors and groups interacting in complex ways with one another and with AI applications. Evidence collected from the case studies is modelled into a dynamic archive to enable us to determine the boundaries of these ecosystems and the relevant responsibility considerations. The structure of the dynamic archive is based on present and future stakeholders and on responsibility priorities identified by the case study participants. The talk includes insights relating to the responsible use of AI applications both as actors within the ecosystem and as external curators of the dynamic archive. \n\n\n\n\nBio\nLydia Farina is an Assistant Professor in Philosophy at the University of Nottingham\, working on the philosophy of mind\, metaphysics and the philosophy of artificial intelligence. More specifically\, she researches the nature of emotion\, AI responsibility\, affective computing and social kinds. In the past year she researched the use of dynamic archives to determine responsible use of AI in the creative industries as the Principal Investigator of a BRAID scoping project. She holds a PhD and an MA in Philosophy from the University of Manchester\, an MA in Classics from University College London and a BA in Classics from Aristotle University of Thessaloniki. Before academia she worked in finance and is a member of the Chartered Institute of Taxation (CIOT). \n\n\n\n\nWatch the recording below:
URL:https://braiduk.org/event/responsible-ai-futures-hybrid-seminar-dr-lydia-farina
LOCATION:Online only
CATEGORIES:DI Lecture Series
ATTACH;FMTTYPE=image/png:https://braiduk.org/wp-content/uploads/2024/11/8-1.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20250424T160000
DTEND;TZID=UTC:20250424T170000
DTSTAMP:20260404T093235Z
CREATED:20241003T113453Z
LAST-MODIFIED:20250429T132426Z
UID:2624-1745510400-1745514000@braiduk.org
SUMMARY:'Responsible AI Futures' Hybrid Seminar - Dr Dan McQuillan
DESCRIPTION:Responsible AI means Decomputing\nIn this talk Dan McQuillan will argue that having a responsible approach to AI means decomputing. To start with\, decomputing means less computing; in particular\, less of the hyperscale infrastructures which underpin generative AI and whose datacentres are sprouting like mushrooms across the globe. \n\n\nBut decomputing goes beyond concern for environmental impacts to challenge the commitment of the wider AI apparatus to extractivism and scale. AI as we know it exploits sources of data and labour as well as natural resources like energy\, water and minerals. Meanwhile\, its claims to superior intelligence rest on the continually expanding size of its models and datasets. Decomputing draws on both decolonialism and degrowth\, arguing for an approach to AI based on the need for social justice and a just transition. \nAll too often\, AI acts as a reductive diversion from complex social and environmental questions\, so decomputing seeks alternatives that are relational\, collective and truly response-able\, because they can respond to the complexities of lived experience. \n\n\n\n\nBio\nDr Dan McQuillan\, Lecturer in Creative and Social Computing at Goldsmiths\, University of London \nAfter a PhD in Experimental Particle Physics\, Dan worked with people with learning disabilities and mental health issues\, created websites with asylum seekers\, ran social tech camps in Kyrgyzstan and Sarajevo\, and worked for Amnesty International and the NHS. He recently authored ‘Resisting AI – An Anti-fascist Approach to Artificial Intelligence’. \nWatch the recording below:
URL:https://braiduk.org/event/responsible-ai-futures-hybrid-seminar-dr-dan-mcquillan
LOCATION:Online only
CATEGORIES:DI Lecture Series
ATTACH;FMTTYPE=image/png:https://braiduk.org/wp-content/uploads/2024/10/4.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20250327T160000
DTEND;TZID=UTC:20250327T170000
DTSTAMP:20260404T093235Z
CREATED:20250210T124840Z
LAST-MODIFIED:20250210T124840Z
UID:3271-1743091200-1743094800@braiduk.org
SUMMARY:‘Responsible AI Futures’ Hybrid Seminar – Dr Claire Paterson-Young
DESCRIPTION:Ethical review to support Responsible Artificial Intelligence (AI) in policing: A preliminary study of West Midlands Police’s specialist data ethics review committee\n  \nBook your hybrid ticket now!\n\n\nThe deployment of AI by the police\, while promising more effective use of data for the prevention and detection of crime\, brings with it considerable threats of disproportionality and interference with fundamental rights. The West Midlands Office of the Police and Crime Commissioner (WMOPCC) and West Midlands Police (WMP) Ethics Committee aims to bridge the gap between ethical reflection\, scientific rigour\, and a focus on human rights\, thus contributing to responsible AI in policing. This seminar explores findings from an interdisciplinary research project that examined the impact and influence of the Committee\, including: \n\nDeveloping an understanding within the police of key ethical\, scientific\, legal and operational issues for planning and implementation.\nEmbedding genuine representation from the community that the police serve in ethical oversight committees to ensure opportunities for transparent engagement.\nImportance of explaining clearly how AI will be used in policing\, so as to enable potential benefits\, risks/harms and proportionality to be assessed in the same conversation.\nNeed for Police forces\, Police and Crime Commissioners and national bodies embarking on AI-driven policing to address the ethical\, legal and technical questions raised by policing AI\, such as reconciling privacy and security priorities relevant to the assessment of the proportionality of using suspect data.\n\n\n\n\n\nBio\nClaire Paterson-Young (BA MSc PhD) is an Associate Professor & Research Leader at the Institute for Social Innovation and Impact (ISII). Claire’s current major research projects include AI in Law Enforcement (RAI-UK funded 4-year interdisciplinary project titled ‘PROBabLE Futures – Probabilistic AI Systems in Law Enforcement Futures’). 
Claire has over 15 years’ practice and management experience in safeguarding\, child sexual exploitation\, trafficking\, sexual violence\, youth and restorative justice. Claire is Chair of the University of Northampton Research Ethics Committee and a serving member of the West Midlands Police and Crime Commissioner Ethics Committee. She formerly served as a member of a Health Research Authority Research Ethics Committee. She is a trustee of the National Association for Youth Justice (NAYJ)\, Fellow of the Royal Society for the encouragement of Arts\, Manufactures and Commerce (RSA) and Fellow of the Higher Education Academy (HEA). Claire is a Research Affiliate at the Vulnerability & Policing Futures Research Centre. She has held a Visiting Fellowship at Binus University (Indonesia) and an Associate Fellowship at the Children and Young People’s Centre for Justice (Scotland). \nRunning Order \n16.00 – Talk by Claire Paterson-Young \n16.40 – Q&A \n17.00 – End \nIn-person: Inspace\, 1 Crichton St\, Newington\, Edinburgh EH8 9AB\nOnline: Zoom \nPlease note limited seats are available at Inspace for in-person audiences\, so please book tickets in advance. For those joining online\, please visit the online event page for the Zoom joining link and password. \nFor inquiries about accessibility\, please contact the DI team at designinformatics@ed.ac.uk or visit the Access webpage for more information about the venue: https://inspace.ed.ac.uk/venue-access/
URL:https://braiduk.org/event/responsible-ai-futures-hybrid-seminar-dr-claire-paterson-young
LOCATION:Inspace\, 1 Crichton Street\, Edinburgh\, EH8 9AB\, United Kingdom
CATEGORIES:DI Lecture Series
ATTACH;FMTTYPE=image/png:https://braiduk.org/wp-content/uploads/2025/02/4.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20250313T160000
DTEND;TZID=UTC:20250313T170000
DTSTAMP:20260404T093235Z
CREATED:20250210T124138Z
LAST-MODIFIED:20250226T103246Z
UID:3267-1741881600-1741885200@braiduk.org
SUMMARY:‘Responsible AI Futures’ Hybrid Seminar – Ananda Rutherford
DESCRIPTION:Do ‘Words Matter’ in Machine Learning?\n\nBook your online ticket now!\n\nThis presentation will reflect on the distinctions between what is needed to produce equitable\, anti-racist information on artworks and what is possible or desirable with the application of machine learning. What word choices in the cultural sector can or should ML be taught to make? Can machine learning be applied to address structural inequality and systemic bias within art historical and museological practice? What relationship should we be crafting between machine learning and the histories of art as presented through museum labels and interpretation? \nThe focus of this research was a dataset of texts from Tate’s Art and Artists online collection\, identified as biased in terms of language and interpretation. The research was conducted as part of the AHRC Towards a National Collection Programme\, on the Transforming Collections project. Reviewing the Tate texts against Hodan Warsame’s essay ‘Mechanisms and Tropes of Colonial Narratives’\, part of the pivotal publication Words Matter: An Unfinished Guide to Word Choices in the Cultural Sector (2018)\, alongside the development of an application to analyse object label texts\, revealed the need for deep contextual understanding\, both of art historical writing conventions and the artwork itself. \n\n\n\n\nBio\nAnanda Rutherford is a Research Fellow with UAL’s Decolonising Arts Institute. Her research for the AHRC/TaNC-funded Transforming Collections project explored the language of museum catalogue texts and the potential application of machine learning to evidence and problematise issues of colonialism and racial bias. She is also interested in ethics in practice at the intersection of academic research\, data and technology\, and GLAM and heritage organisations. 
Ananda is a former museum collections and documentation manager\, with a career focus on the digitisation and dissemination of collections information online\, and continues to work and consult in this area. \n\n\n\n\nRunning Order \n16.00 – Talk by Ananda Rutherford \n16.40 – Q&A \n17.00 – End \nOnline: Zoom \nFor those joining online\, please visit the online event page for the Zoom joining link and password. \nFor inquiries about accessibility\, please contact the DI team at designinformatics@ed.ac.uk or visit the Access webpage for more information about the venue: https://inspace.ed.ac.uk/venue-access/
URL:https://braiduk.org/event/responsible-ai-futures-hybrid-seminar-ananda-rutherford
LOCATION:Online only
CATEGORIES:DI Lecture Series
ATTACH;FMTTYPE=image/png:https://braiduk.org/wp-content/uploads/2025/02/3.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20250227T160000
DTEND;TZID=UTC:20250227T170000
DTSTAMP:20260404T093235Z
CREATED:20250210T115309Z
LAST-MODIFIED:20250310T092852Z
UID:3249-1740672000-1740675600@braiduk.org
SUMMARY:‘Responsible AI Futures’ Hybrid Seminar – Prof Andrew McStay
DESCRIPTION:Empathic AI companions: moving fast and breaking people?\nThis talk examines the ethical challenges and societal implications of empathic AI companions\, drawing on UK public attitudes and civil lawsuits against Character.ai. The lawsuits highlight critical design flaws\, inadequate safeguards\, and ethical dilemmas\, especially the blurred boundaries between reality and fiction. Survey findings reveal demographic divides in familiarity and usage\, but also shared concerns about privacy\, emotional dependency\, and the appropriateness of AI companions for children and older adults. While respondents recognise benefits such as reducing loneliness and aiding education\, anthropomorphic design elements evoke mixed reactions\, raising ethical questions about simulated emotion and inappropriate user deception. The talk advocates for age-appropriate design and stronger regulatory frameworks\, emphasising the need for balanced policies to protect vulnerable populations while fostering creativity and responsible innovation. Actionable recommendations aim to guide policymakers\, industry leaders\, and scholars in addressing the ethical complexities of this emerging digital technology. \n\n\n\n\nBio\nAndrew McStay is Professor of Technology & Society at Bangor University and the author of Automating Empathy: Decoding Technologies that Gauge Intimate Life\, published open access in 2024 with Oxford University Press. His work explores the ethical implications of AI systems claimed to empathise and understand emotion. Director of the Emotional AI Lab\, his current projects include Responsible AI (RAI)-funded work to diversify regional input into IEEE-based ethical technical standards for emulated empathy and human-AI partnering (IEEE P7014.1). Other recent work includes a project for the Office of the Privacy Commissioner of Canada on child-focused emotional AI systems. He is also a technology advisory panel member for the UK’s Information Commissioner’s Office. 
\n\n\n\n\nWatch the recording here:
URL:https://braiduk.org/event/responsible-ai-futures-hybrid-seminar-prof-andrew-mcstay
LOCATION:Inspace\, 1 Crichton Street\, Edinburgh\, EH8 9AB\, United Kingdom
CATEGORIES:DI Lecture Series
ATTACH;FMTTYPE=image/png:https://braiduk.org/wp-content/uploads/2025/02/2.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20250213T160000
DTEND;TZID=UTC:20250213T170000
DTSTAMP:20260404T093235Z
CREATED:20250129T100735Z
LAST-MODIFIED:20250304T114010Z
UID:3232-1739462400-1739466000@braiduk.org
SUMMARY:'Responsible AI Futures' Hybrid Seminar - Dr Denis Newman-Griffis
DESCRIPTION:Responsible AI takes practice: Cross-sector insights into how we shape responsible use of AI methodologies\n\n\nEvery organisation seems to be staking a claim to responsible AI\, issuing new statements of ethical principles for AI to follow\, but what does it look like to actually do responsible AI in practice? This gap is one of the biggest challenges: it means that responsible AI too often stays a talking point\, and too rarely becomes an action plan. This talk will present emerging findings from the past two years of research in the Framing Responsible AI Implementation and Management (FRAIM) and Getting responsible about AI and machine learning in research funding and evaluation (GRAIL) responsible AI projects\, funded by BRAID and the Research on Research Institute. These projects are building shared knowledge of what is involved in putting responsible AI into everyday practice and how to do it effectively\, working in coproduction with nearly 20 partner organisations around the world. I will also highlight the emerging role of AI skills and competencies in bringing responsible AI practice forward in research and education. \n\n\n\n\nBio\nDenis Newman-Griffis (they/them) is a Senior Lecturer in Computer Science and AI for Health Lead in the Centre for Machine Intelligence\, University of Sheffield. Their work\, an interdisciplinary blend rooted in natural language processing\, investigates responsible AI principles\, practices\, and technologies\, with a particular focus on healthcare and disability. They are also a British Academy Innovation Fellow\, a Research Fellow of the Research on Research Institute\, and Co-Chair of the UK Young Academy\, and their research has been recognised with the American Medical Informatics Association’s Doctoral Dissertation Award. Denis is a proudly queer and neurodivergent academic\, committed to fostering diversity of identity\, perspective\, and experience around the AI table. 
\n\n\n\n\nWatch the recording here:
URL:https://braiduk.org/event/responsible-ai-futures-hybrid-seminar-dr-denis-newman-griffis
LOCATION:Inspace\, 1 Crichton Street\, Edinburgh\, EH8 9AB\, United Kingdom
CATEGORIES:DI Lecture Series
ATTACH;FMTTYPE=image/png:https://braiduk.org/wp-content/uploads/2025/01/1.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20250130T160000
DTEND;TZID=UTC:20250130T170000
DTSTAMP:20260404T093235Z
CREATED:20250117T110105Z
LAST-MODIFIED:20250304T114233Z
UID:3140-1738252800-1738256400@braiduk.org
SUMMARY:'Responsible AI Futures' Hybrid Seminar - Dr Alex Taylor
DESCRIPTION:Red Teaming and the Operationalising of Responsibility\nThis spring\, I’ll be embarking on fieldwork investigating the outsourced labours and operational logics associated with red teaming. Currently in vogue and linked to responsible AI (RAI) programmes across the tech sector\, red teaming is being touted as a way to identify weaknesses in language and multi-modal AI models through adversarial or provocative prompts. The reasoning here is that weaknesses identified through this prompting might help in fine-tuning or re-training AI models\, mitigating issues such as systematically unsafe or toxic content. \nForming the basis for my BRAID fellowship\, this fieldwork will take place across so-called ‘data enrichment’ centres in the Philippines (and possibly other sites in the Global South) and examine red teaming from two standpoints. First\, it will interrogate the portrayal of red teaming as a sector-wide solution to the toxic tendencies of data-driven models\, such as large language models (LLMs). Second\, it will analyse red teaming as a case study of what I term the operationalising of responsibility in the tech sector. Across both dimensions\, my focus will be on the global flows of capital and the forms and concentrations of labour being mobilised to “responsiblise” AI. I see implications here not just for a more responsible AI but vital questions about responsibility in late capitalism. \nIn preparation for this work\, I want to use this talk to think with an audience about some of the assumptions behind and controversies surrounding red teaming. I’ll begin by elaborating on ways red teaming is being approached and put into practice in R&D. I’ll then set this technical work in a wider context of RAI in the sector to raise and invite questions about the adequacy of a ‘solution’ that continues to valorise technological innovation whilst failing to reward or indeed recognise the extractive conditions necessary for AI’s proliferation. 
\n\n\n\nBio\nAlex Taylor is a sociologist by training\, with longstanding commitments to critically investigating and intervening in the proliferation of technology and machine intelligence. His work has been shaped most heavily by a critical yet hopeful scholarship in feminist technoscience\, including works from Ruha Benjamin\, Simone Browne\, Vinciane Despret\, Donna Haraway\, and Anna Lowenhaupt Tsing. He’s currently a Reader in Design Informatics at the University of Edinburgh and an AHRC BRAID Fellow\, and co-runs the Critical Data Studies Cluster at the Edinburgh Futures Institute. He is also a Fellow of the RSA and holds visiting roles at the University of Sweden and City\, University of London. \n\n\n\n\nWatch the recording here:
URL:https://braiduk.org/event/responsible-ai-futures-hybrid-seminar-dr-alex-taylor
LOCATION:Inspace\, 1 Crichton Street\, Edinburgh\, EH8 9AB\, United Kingdom
CATEGORIES:DI Lecture Series
ATTACH;FMTTYPE=image/png:https://braiduk.org/wp-content/uploads/2025/01/Alex-talk-visual-for-socials.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20241114T160000
DTEND;TZID=UTC:20241114T170000
DTSTAMP:20260404T093235Z
CREATED:20241105T100444Z
LAST-MODIFIED:20250304T133053Z
UID:2768-1731600000-1731603600@braiduk.org
SUMMARY:‘Responsible AI Futures’ Hybrid Seminar – Dr Peaks Krafft
DESCRIPTION:CREAATIF: Crafting Responsive Assessments of AI and Tech-Impacted Futures\nI will be presenting the first results and prospective policy directions of the BRAID Scoping project on which I am a co-investigator\, CREAATIF: Crafting Responsive Assessments of AI and Tech-Impacted Futures. This project was designed to engage with creative workers to co-develop impact assessments that address fundamental rights and working conditions in the context of generative AI. Through collaboration with several creative industry trade unions and professional bodies\, we have sought to ensure workers have a voice in the development of these technologies and corresponding labour policy. I will be presenting the results of a series of co-designed workshops and surveys that we undertook\, and I will discuss our next steps and future aspirations. \n\n\n\n\nBio\nDr Peaks Krafft (they/them) is Lecturer in Sociology at the University of Edinburgh\, Co-Director of Edinburgh’s MSc Digital Sociology\, and Co-Investigator on the BRAID Scoping project CREAATIF: Crafting Responsive Assessments of AI and Tech-Impacted Futures. Prior to joining Edinburgh\, Dr Krafft launched the University of the Arts London Creative Computing Institute’s MA Internet Equalities and lectured in Social Data Science at the University of Oxford Internet Institute. Dr Krafft received their PhD in Computer Science from MIT in 2017 and undertook postdoctoral work at the University of Washington Information School\, the University of California Berkeley Department of Psychology and the Data & Society Research Institute. Their publications cross AI\, cognitive science\, science & technology studies\, communications\, and sociology.
URL:https://braiduk.org/event/responsible-ai-futures-hybrid-seminar-dr-peaks-krafft
LOCATION:Inspace\, 1 Crichton Street\, Edinburgh\, EH8 9AB\, United Kingdom
CATEGORIES:DI Lecture Series
ATTACH;FMTTYPE=image/png:https://braiduk.org/wp-content/uploads/2024/11/Peaks.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20241031T160000
DTEND;TZID=UTC:20241031T170000
DTSTAMP:20260404T093235Z
CREATED:20241003T121252Z
LAST-MODIFIED:20250121T114027Z
UID:2630-1730390400-1730394000@braiduk.org
SUMMARY:'Responsible AI Futures' Hybrid Seminar - Dr Christopher Burr
DESCRIPTION:Trustworthy and Ethical Assurance of Digital Twins\nWatch the recording now:\n\n\nDigital twins are virtual representations of natural\, engineered\, or social systems that can be dynamically updated with data from the physical twin (e.g. smart building\, ocean\, human heart) using a variety of sensors and techniques. The increasing use of ML and AI to enhance their predictive capacities\, inform decision-making\, and drive scientific insights demands critical investigation. The BRAID-funded Trustworthy and Ethical Assurance of Digital Twins (TEA-DT) scoping research project has engaged several researchers and practitioners of digital twins\, across the domains of health\, natural environment\, and infrastructure. In this presentation\, Dr Christopher Burr will discuss the results of this scoping research and introduce the TEA platform—an open-source and community-centred tool that helps project teams develop and communicate justifiable assurance that a digital twin realises key ethical properties. \n\n\n\n\nBio\nDr Christopher Burr is Senior Researcher in Trustworthy Systems at the Alan Turing Institute—the UK’s national institute for data science and AI. He leads the Innovation and Impact Hub as part of the Turing’s Research and Innovation Cluster in Digital Twins. He is also principal investigator of an AHRC/BRAID-funded project\, Trustworthy and Ethical Assurance of Digital Twins (TEA-DT). He completed his PhD in Philosophy of Cognitive Science at the University of Bristol. \n\n\n\n\nRunning Order \n16.00 – Welcome by Ewa Luger \n16.10 – Talk by Christopher Burr \n16.40 – Q&A \n17.00 – End \nIn-person: Inspace\, 1 Crichton St\, Newington\, Edinburgh EH8 9AB\nOnline: Zoom \nPlease note limited seats are available at Inspace for in-person audiences\, so please book tickets in advance. For those joining online\, please visit the online event page for the Zoom joining link and password. 
\nFor inquiries about accessibility\, please contact the DI team at designinformatics@ed.ac.uk or visit the Access webpage for more information about the venue: https://inspace.ed.ac.uk/venue-access/
URL:https://braiduk.org/event/responsible-ai-futures-hybrid-seminar-dr-christopher-burr
LOCATION:Inspace\, 1 Crichton Street\, Edinburgh\, EH8 9AB\, United Kingdom
CATEGORIES:DI Lecture Series
ATTACH;FMTTYPE=image/png:https://braiduk.org/wp-content/uploads/2024/10/Chris.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20240516T160000
DTEND;TZID=UTC:20240516T170000
DTSTAMP:20260404T093235Z
CREATED:20240412T154432Z
LAST-MODIFIED:20240530T124804Z
UID:1853-1715875200-1715878800@braiduk.org
SUMMARY:‘Responsible AI Futures’ Hybrid Seminar Series – Bhargavi Ganesh
DESCRIPTION:Reframing Governance as Innovation: Steamboat accidents and their lessons for AI governance\nDespite the emergence of promising policy proposals worldwide\, AI governance is often discussed by policymakers and scholars alike as an intractable challenge. This is largely due to the technical/organisational complexity of sociotechnical AI systems\, and a fear that imperfect regulation will result in the suppression of technological innovation. In this talk\, I will draw on the historical example of a previously “ungovernable” technology\, the steamboat of the 1800s\, to challenge latent scepticism and argue that the governance of AI should in and of itself be viewed as an exercise in innovation. Steamboat governance was iterative\, requiring many instances of trial and error before achieving its aims. Similarly\, global AI governance can be reframed as an exercise in policy innovation. In doing so\, we can both celebrate the progress that has already been made and remain optimistic about the emergence of new regulatory interventions in response to novel challenges generated by AI. \nBio\nBhargavi Ganesh is a PhD student at the University of Edinburgh\, working on mixed-method approaches for designing and evaluating the governance of AI. In the past year\, she has worked within the Bridging Responsible AI Divides (BRAID) programme on a consultation for the Department for Science\, Innovation and Technology (DSIT)\, and interned within the former Centre for Data Ethics and Innovation. She is a member of the School of Informatics’ Artificial Intelligence Applications Institute and the Edinburgh Futures Institute’s Centre for Technomoral Futures. Bhargavi is currently affiliated with the Regulation and Design Lab at the University of Edinburgh and the Governance and Responsible AI Lab at Purdue University. Prior to her PhD\, Bhargavi’s research focused on the impacts of consumer finance policies on marginalized groups. 
Bhargavi holds a Bachelor’s degree (with honours) from New York University and a Master’s in Computational Analysis and Public Policy from the University of Chicago. \nX: @Bhargavi_Ganesh \nWatch the recording now:
URL:https://braiduk.org/event/responsible-ai-futures-hybrid-seminar-series-bhargavi-ganesh
LOCATION:Inspace\, 1 Crichton Street\, Edinburgh\, EH8 9AB\, United Kingdom
CATEGORIES:DI Lecture Series
ATTACH;FMTTYPE=image/png:https://braiduk.org/wp-content/uploads/2024/04/Bhargavi-Ganesh-banner.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20240502T160000
DTEND;TZID=UTC:20240502T170000
DTSTAMP:20260404T093235Z
CREATED:20240412T154543Z
LAST-MODIFIED:20240503T142505Z
UID:1850-1714665600-1714669200@braiduk.org
SUMMARY:‘Responsible AI Futures’ Hybrid Seminar Series – Dr Emily Postan
DESCRIPTION:Uncanny Kinds\nIn the fields of healthcare and health research\, there is particular interest in using machine learning (ML) to generate novel or refined diagnostic\, prognostic\, risk\, and treatment categories. This talk interrogates the nature of these categories and their implications for the people thus (re)categorised. It approaches these questions through the lens of the philosophical idea of ‘human kinds’. It asks to what extent health-related categories generated by ML might function as human kinds and\, if so\, whether they might differ\, in ethically significant ways\, from socially-originating kinds. In doing so\, it suggests that our understanding of responsible ML categorisation practices needs to look beyond technical capabilities and clinical utility to consider wider personal and social impacts. \nBio\nEmily Postan is a Chancellor’s Fellow in Bioethics at the University of Edinburgh Law School and a Deputy Director of the J Kenyon Mason Institute for Medicine\, Life Sciences and the Law. Her research principally focuses on ethical questions about the relationship between our bodies\, our health\, and our identities\, and the ways that health technologies affect these relationships. Her current research project ‘Identity by Algorithm’ explores the ethical implications of novel social categories generated by health applications of AI. Her wider research interests include addressing the ethical challenges posed by data sharing\, neurotechnologies\, genomics\, and assisted reproduction. Emily has a background in philosophy and as a policy manager at the Scottish Government. She received her PhD from Edinburgh Law School in 2017. Her monograph ‘Embodied Narratives: Protecting Identity Interests through Ethical Governance of Bioinformation’ was published by CUP in 2022. \nX: @emily_postan \nWatch the recording now:
URL:https://braiduk.org/event/responsible-ai-futures-hybrid-seminar-series-dr-emily-postan
LOCATION:Inspace\, 1 Crichton Street\, Edinburgh\, EH8 9AB\, United Kingdom
CATEGORIES:DI Lecture Series
ATTACH;FMTTYPE=image/png:https://braiduk.org/wp-content/uploads/2024/04/Emily-Postan-banner.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20240411T160000
DTEND;TZID=UTC:20240411T170000
DTSTAMP:20260404T093235
CREATED:20240327T101139Z
LAST-MODIFIED:20240419T101838Z
UID:1791-1712851200-1712854800@braiduk.org
SUMMARY:‘Responsible AI Futures’ Hybrid Seminar Series – Dr Fiona Smith
DESCRIPTION:Can artistic approaches help invite more voices into discussions about AI in Healthcare?\nThe potential benefits of utilising AI technology in healthcare are vast\, but there are important practical\, technological\, ethical\, and legal implications that need to be addressed in order to safeguard patients. Doctor\, AI researcher and artist Fiona Smith is particularly interested in how we can ethically curate the diverse datasets required to build accurate\, fair models. Smith will talk about how these themes informed her latest exhibition “The BOX”\, which premiered at the 2024 Edinburgh Science Festival. “The BOX” is the outcome of the Creator Residency ‘STEAM Imaging V’\, hosted by Fraunhofer MEVIS in collaboration with the Institute for Design Informatics\, the International Fraunhofer Talent School Bremen and the School Center Walle\, supported by Ars Electronica. \n\n\n\n\nBio\nDr Fiona Smith is a medical doctor and a UKRI-funded PhD student in the Biomedical AI CDT\, School of Informatics\, University of Edinburgh. Alongside her research and clinical work\, she is interested in the use of artistic approaches to highlight complex medical\, ethical and social issues. \nFiona Smith LinkedIn: https://www.linkedin.com/in/fiona-n-smith/ \nFiona Smith website: https://fionaniamhsmith.wixsite.com/ \nWatch the recording now:
URL:https://braiduk.org/event/braid-x-idi-hybrid-seminar-dr-fiona-smith
LOCATION:Inspace\, 1 Crichton Street\, Edinburgh\, EH8 9AB\, United Kingdom
CATEGORIES:DI Lecture Series
ATTACH;FMTTYPE=image/png:https://braiduk.org/wp-content/uploads/2024/03/Eventbrite-banner-3.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20240321T160000
DTEND;TZID=UTC:20240321T170000
DTSTAMP:20260404T093235
CREATED:20240319T124925Z
LAST-MODIFIED:20240411T105713Z
UID:1775-1711036800-1711040400@braiduk.org
SUMMARY:‘Responsible AI Futures’ Hybrid Seminar Series – Dr Elinor Carmi
DESCRIPTION:A Feminist Critique of Digital and AI Consent\nThis talk presents a feminist critique of digital and AI consent and argues that the current system is flawed. The online surveillance adtech industry that funds the web relies on a mechanism that commodifies people\, rendering their behaviours into data – products that can be sold and traded to the highest bidder. In this way\, digital consent serves as an authorising and legalising instrument for the business model of spying on\, selling and trading people in the online ecosystem. This in turn fuels the AI industry\, which uses these data to train models that generate text\, image and video. Using four key feminist approaches – process\, embodiment\, network and context – this talk argues that digital consent is a mechanism that transfers responsibility to people and enables exploitative\, extractivist markets to exist. Consequently\, the broader educational effects of digital consent produce people as products with narrow agency and understanding. \n\n\n\n\nBio\nDr Elinor Carmi is a Senior Lecturer in Data Justice and Social Justice in the Sociology & Criminology Department at City\, University of London\, UK. Dr Carmi is a digital rights advocate\, feminist\, researcher and journalist who has been working\, writing and teaching on data politics\, data literacies\, feminist approaches to media and data\, data justice and internet governance. Currently\, Dr Carmi works on the Nuffield Foundation project “Developing a Minimum Digital Living Standard”. Dr Carmi’s work contributes to emerging debates in academia\, policy\, health organisations and digital activism. She held a Parliamentary Academic Fellowship with the UK’s Digital\, Culture\, Media & Sport Committee\, and gave evidence on Digital Literacy to the House of Lords Committee on Democracy and Digital Technologies. 
In 2020\, Dr Carmi was invited by the World Health Organization (WHO) as an expert on data literacy and disinformation to the first scientific discussion on infodemiology. \nX – @Elinor_Carmi \nMastodon – @drPinkeee@assemblag.es \nBluesky – @elinorcarmi.bsky.social \nWatch the recording now:
URL:https://braiduk.org/event/braid-x-idi-hybrid-seminar-dr-elinor-carmi
LOCATION:Inspace\, 1 Crichton Street\, Edinburgh\, EH8 9AB\, United Kingdom
CATEGORIES:DI Lecture Series
ATTACH;FMTTYPE=image/jpeg:https://braiduk.org/wp-content/uploads/2024/03/Dr-Elinor-Carmi-BRAID-x-IDI-Hybrid-Seminar.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20240314T160000
DTEND;TZID=UTC:20240314T170000
DTSTAMP:20260404T093235
CREATED:20240322T153424Z
LAST-MODIFIED:20240404T161237Z
UID:1781-1710432000-1710435600@braiduk.org
SUMMARY:‘Responsible AI Futures’ Hybrid Seminar Series – Dr Bronwyn Jones
DESCRIPTION:What’s new in news? How AI is impacting journalism\nThe use of AI systems in newsrooms and across society is shifting the terrain in which journalism operates and changing what it means to make and consume news. We’re recommended personalised articles based on datafied readings of our behaviour\, while our clicks drive analytics that shape editorial decisions about what to report. Deepfakes and mis- and disinformation creep into our social news feeds and pose increasingly intractable verification challenges for journalists. From monitoring online sources to transcribing interviews and processing data for investigations\, AI-driven automation of rote tasks has become commonplace. However\, as news organisations rapidly adopt generative AI\, they are increasingly delegating core processes of communication and meaning-making to machines. In this talk\, Bronwyn weaves together three strands of her current work\, which spans practice\, research and policy. She asks: what happens if we co-write the ‘first draft of history’ with AI? If we semi-automate this ‘cornerstone of democracy’? And how might we innovate responsibly with AI for journalism in the public interest? \n\n\nBio\nDr Bronwyn Jones is a social scientist and journalist. As a Translational Fellow for the Bridging Responsible AI Divides (BRAID) programme\, she researches artificial intelligence and data-driven technologies in news production and their implications for the public sphere in democracies. As a DCMS Policy Fellow\, she is exploring the risks generative AI poses for journalism as an industry and a form of knowledge production. At the BBC\, she covers regional news online and works with the research and development department to help newsrooms navigate technological change. 
Bronwyn is focused on fostering fruitful collaboration and translation between academia and industry; she works to ensure that theory and evidence inform the development of public interest-driven socio-technical systems in the media industry\, and that on-the-ground realities inform scholarly debate. \nX – @bronwynjo \n\n\nWatch the recording now:
URL:https://braiduk.org/event/responsible-ai-futures-hybrid-seminar-series-dr-bronwyn-jones
LOCATION:Inspace\, 1 Crichton Street\, Edinburgh\, EH8 9AB\, United Kingdom
CATEGORIES:DI Lecture Series
ATTACH;FMTTYPE=image/png:https://braiduk.org/wp-content/uploads/2024/03/LINKEDIN-4.png
END:VEVENT
END:VCALENDAR