BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//BRAID UK - ECPv6.14.2//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-ORIGINAL-URL:https://braiduk.org
X-WR-CALDESC:Events for BRAID UK
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:UTC
BEGIN:STANDARD
TZOFFSETFROM:+0000
TZOFFSETTO:+0000
TZNAME:UTC
DTSTART:20250101T000000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=UTC:20250213T160000
DTEND;TZID=UTC:20250213T170000
DTSTAMP:20260411T100728Z
CREATED:20250129T100735Z
LAST-MODIFIED:20250304T114010Z
UID:3232-1739462400-1739466000@braiduk.org
SUMMARY:‘Responsible AI Futures’ Hybrid Seminar – Dr Denis Newman-Griffis
DESCRIPTION:Responsible AI takes practice: Cross-sector insights into how we shape responsible use of AI methodologies\n\n\nEvery organisation seems to be staking a claim to responsible AI\, issuing new statements of ethical principles for AI to follow\, but what does it look like to actually do responsible AI in practice? This gap is one of the biggest reasons that responsible AI too often stays a talking point and too rarely becomes an action plan. This talk will present emerging findings from the past two years of research in the Framing Responsible AI Implementation and Management (FRAIM) and Getting responsible about AI and machine learning in research funding and evaluation (GRAIL) responsible AI projects\, funded by BRAID and the Research on Research Institute. These projects are building shared knowledge of what is involved in putting responsible AI into everyday practice and how to do it effectively\, working in coproduction with nearly 20 partner organisations around the world. I will also highlight the emerging role of AI skills and competencies in bringing responsible AI practice forward in research and education.\n\n\n\n\nBio\nDenis Newman-Griffis (they/them) is a Senior Lecturer in Computer Science and AI for Health Lead in the Centre for Machine Intelligence\, University of Sheffield. Their interdisciplinary work blends natural language processing with the investigation of responsible AI principles\, practices\, and technologies\, with a particular focus on healthcare and disability. They are also a British Academy Innovation Fellow\, a Research Fellow of the Research on Research Institute\, and Co-Chair of the UK Young Academy\, and their research has been recognised with the American Medical Informatics Association’s Doctoral Dissertation Award. Denis is a proudly queer and neurodivergent academic and is committed to fostering diversity of identity\, perspective\, and experience around the AI table.\n\n\n\n\nWatch the recording here:
URL:https://braiduk.org/event/responsible-ai-futures-hybrid-seminar-dr-denis-newman-griffis
LOCATION:Inspace\, 1 Crichton Street\, Edinburgh\, EH8 9AB\, United Kingdom
CATEGORIES:DI Lecture Series
ATTACH;FMTTYPE=image/png:https://braiduk.org/wp-content/uploads/2025/01/1.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20250227T160000
DTEND;TZID=UTC:20250227T170000
DTSTAMP:20260411T100728Z
CREATED:20250210T115309Z
LAST-MODIFIED:20250310T092852Z
UID:3249-1740672000-1740675600@braiduk.org
SUMMARY:‘Responsible AI Futures’ Hybrid Seminar – Prof Andrew McStay
DESCRIPTION:Empathic AI companions: moving fast and breaking people?\nThis talk examines the ethical challenges and societal implications of empathic AI companions\, drawing on UK public attitudes and civil lawsuits against Character.ai. The lawsuits highlight critical design flaws\, inadequate safeguards\, and ethical dilemmas\, especially the blurred boundaries between reality and fiction. Survey findings reveal demographic divides in familiarity and usage\, but also shared concerns about privacy\, emotional dependency\, and the appropriateness of AI companions for children and older adults. While respondents recognise benefits such as reducing loneliness and aiding education\, anthropomorphic design elements evoke mixed reactions\, raising ethical questions about simulated emotion and inappropriate user deception. The talk advocates for age-appropriate design and stronger regulatory frameworks\, emphasising the need for balanced policies to protect vulnerable populations while fostering creativity and responsible innovation. Actionable recommendations aim to guide policymakers\, industry leaders\, and scholars in addressing the ethical complexities of this emerging digital technology.\n\n\n\n\nBio\nAndrew McStay is Professor of Technology & Society at Bangor University and the author of Automating Empathy: Decoding Technologies that Gauge Intimate Life\, published open access in 2024 with Oxford University Press. His work explores the ethical implications of AI systems that are claimed to empathise and understand emotion. Director of the Emotional AI Lab\, his current projects include Responsible AI (RAI) funded work to diversify regional input into IEEE-based ethical technical standards for emulated empathy and human-AI partnering (IEEE P7014.1). Other recent work includes a project for the Office of the Privacy Commissioner of Canada on child-focused emotional AI systems. He is also a technology advisory panel member for the UK’s Information Commissioner’s Office.\n\n\n\n\nWatch the recording here:
URL:https://braiduk.org/event/responsible-ai-futures-hybrid-seminar-prof-andrew-mcstay
LOCATION:Inspace\, 1 Crichton Street\, Edinburgh\, EH8 9AB\, United Kingdom
CATEGORIES:DI Lecture Series
ATTACH;FMTTYPE=image/png:https://braiduk.org/wp-content/uploads/2025/02/2.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20250313T160000
DTEND;TZID=UTC:20250313T170000
DTSTAMP:20260411T100728Z
CREATED:20250210T124138Z
LAST-MODIFIED:20250226T103246Z
UID:3267-1741881600-1741885200@braiduk.org
SUMMARY:‘Responsible AI Futures’ Hybrid Seminar – Ananda Rutherford
DESCRIPTION:Do ‘Words Matter’ in Machine Learning?\n\n\nBook your online ticket now!\n\nThis presentation will reflect on the distinctions between what is needed to produce equitable\, anti-racist information on artworks and what is possible or desirable with the application of machine learning. What word choices in the cultural sector can or should ML be taught to make? Can machine learning be applied to address structural inequality and systemic bias within art historical and museological practice? What relationship should we be crafting between machine learning and the histories of art as presented through museum labels and interpretation?\nThe focus of this research was a dataset of texts from Tate’s Art and Artists online collection\, identified as biased in terms of language and interpretation. The research was conducted as part of the AHRC Towards a National Collection Programme\, on the Transforming Collections project. Reviewing the Tate texts against Hodan Warsame’s essay ‘Mechanisms and Tropes of Colonial Narratives’\, part of the pivotal publication Words Matter: An Unfinished Guide to Word Choices in the Cultural Sector (2018)\, alongside the development of an application to analyse object label texts\, revealed the need for deep contextual understanding\, both of art historical writing conventions and of the artwork itself.\n\n\n\n\nBio\nAnanda Rutherford is a Research Fellow with UAL’s Decolonising Arts Institute. Her research for the AHRC/TaNC-funded Transforming Collections project explored the language of museum catalogue texts and the potential application of machine learning to evidence and problematise issues of colonialism and racial bias. She is also interested in ethics in practice at the intersection of academic research\, data and technology\, and GLAM and heritage organisations. Ananda is a former museum collections and documentation manager\, with a career focus on the digitisation and dissemination of collections information online\, and continues to work and consult in this area.\n\n\n\n\nRunning Order\n16.00 – Talk by Ananda Rutherford\n16.40 – Q&A\n17.00 – End\nOnline: Zoom\nFor those joining online\, please visit the online event page for the Zoom joining link and password.\nFor inquiries about accessibility\, please contact the DI team at designinformatics@ed.ac.uk or visit the Access webpage for more information about the venue: https://inspace.ed.ac.uk/venue-access/
URL:https://braiduk.org/event/responsible-ai-futures-hybrid-seminar-ananda-rutherford
LOCATION:Online only
CATEGORIES:DI Lecture Series
ATTACH;FMTTYPE=image/png:https://braiduk.org/wp-content/uploads/2025/02/3.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20250327T160000
DTEND;TZID=UTC:20250327T170000
DTSTAMP:20260411T100728Z
CREATED:20250210T124840Z
LAST-MODIFIED:20250210T124840Z
UID:3271-1743091200-1743094800@braiduk.org
SUMMARY:‘Responsible AI Futures’ Hybrid Seminar – Dr Claire Paterson-Young
DESCRIPTION:Ethical review to support Responsible Artificial Intelligence (AI) in policing: A preliminary study of West Midlands Police’s specialist data ethics review committee\n\nBook your hybrid ticket now!\n\n\nThe deployment of AI by the police\, while promising more effective use of data for the prevention and detection of crime\, brings with it considerable threats of disproportionality and interference with fundamental rights. The West Midlands Office of the Police and Crime Commissioner (WMOPCC) and West Midlands Police (WMP) Ethics Committee aims to bridge the gap between ethical reflection\, scientific rigour\, and a focus on human rights\, thus contributing to responsible AI in policing. This seminar explores findings from an interdisciplinary research project that examined the impact and influence of the Committee\, including:\n\nDeveloping an understanding within the police of key ethical\, scientific\, legal and operational issues for planning and implementation.\nEmbedding genuine representation from the community that the police serve in ethical oversight committees to ensure opportunities for transparent engagement.\nThe importance of explaining clearly how AI will be used in policing\, so as to enable potential benefits\, risks/harms and proportionality to be assessed in the same conversation.\nThe need for police forces\, Police and Crime Commissioners and national bodies embarking on AI-driven policing to address the ethical\, legal and technical questions raised by policing AI\, such as reconciling privacy and security priorities relevant to the assessment of the proportionality of using suspect data.\n\n\n\n\n\nBio\nClaire Paterson-Young (BA MSc PhD) is an Associate Professor & Research Leader at the Institute for Social Innovation and Impact (ISII). Claire’s current major research projects include AI in Law Enforcement (an RAI-UK-funded four-year interdisciplinary project titled ‘PROBabLE Futures – Probabilistic AI Systems in Law Enforcement Futures’). Claire has over 15 years’ practice and management experience in safeguarding\, child sexual exploitation\, trafficking\, sexual violence\, and youth and restorative justice. Claire is Chair of the University of Northampton Research Ethics Committee and a serving member of the West Midlands Police and Crime Commissioner Ethics Committee. She formerly served as a member of the Health and Research Association Research Ethics Committee. She is a trustee of the National Association for Youth Justice (NAYJ)\, a Fellow of the Royal Society for the Encouragement of Arts\, Manufactures and Commerce (RSA) and a Fellow of the Higher Education Academy (HEA). Claire is a Research Affiliate at the Vulnerability & Policing Futures Research Centre. She has held a Visiting Fellowship at Binus University (Indonesia) and an Associate Fellowship at the Children and Young People’s Centre for Justice (Scotland).\nRunning Order\n16.00 – Talk by Claire Paterson-Young\n16.40 – Q&A\n17.00 – End\nIn-person: Inspace\, 1 Crichton St\, Newington\, Edinburgh EH8 9AB\nOnline: Zoom\nPlease note limited seats are available at Inspace for in-person audiences\, so please book tickets in advance. For those joining online\, please visit the online event page for the Zoom joining link and password.\nFor inquiries about accessibility\, please contact the DI team at designinformatics@ed.ac.uk or visit the Access webpage for more information about the venue: https://inspace.ed.ac.uk/venue-access/
URL:https://braiduk.org/event/responsible-ai-futures-hybrid-seminar-dr-claire-paterson-young
LOCATION:Inspace\, 1 Crichton Street\, Edinburgh\, EH8 9AB\, United Kingdom
CATEGORIES:DI Lecture Series
ATTACH;FMTTYPE=image/png:https://braiduk.org/wp-content/uploads/2025/02/4.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20250424T160000
DTEND;TZID=UTC:20250424T170000
DTSTAMP:20260411T100728Z
CREATED:20241003T113453Z
LAST-MODIFIED:20250429T132426Z
UID:2624-1745510400-1745514000@braiduk.org
SUMMARY:‘Responsible AI Futures’ Hybrid Seminar – Dr Dan McQuillan
DESCRIPTION:Responsible AI means Decomputing\nIn this talk Dan McQuillan will argue that having a responsible approach to AI means decomputing. To start with\, decomputing means less computing; in particular\, less of the hyperscale infrastructures which underpin generative AI and whose datacentres are sprouting like mushrooms across the globe.\n\n\nBut decomputing goes beyond concern for environmental impacts to challenge the commitment of the wider AI apparatus to extractivism and scale. AI as we know it exploits sources of data and labour as well as natural resources like energy\, water and minerals. Meanwhile its claims to superior intelligence rest on the continually expanding size of its models and datasets. Decomputing draws on both decolonialism and degrowth\, arguing for an approach to AI based on the need for social justice and a just transition.\nAll too often\, AI acts as a reductive diversion from complex social and environmental questions\, so decomputing seeks alternatives that are relational\, collective and truly response-able\, because they can respond to the complexities of lived experience.\n\n\n\n\nBio\nDr Dan McQuillan\, Lecturer in Creative and Social Computing at Goldsmiths\, University of London\nAfter a PhD in Experimental Particle Physics\, Dan worked with people with learning disabilities and mental health issues\, created websites with asylum seekers\, ran social tech camps in Kyrgyzstan and Sarajevo\, and worked for Amnesty International and the NHS. He recently authored ‘Resisting AI – An Anti-fascist Approach to Artificial Intelligence’.\nWatch the recording below:
URL:https://braiduk.org/event/responsible-ai-futures-hybrid-seminar-dr-dan-mcquillan
LOCATION:Online only
CATEGORIES:DI Lecture Series
ATTACH;FMTTYPE=image/png:https://braiduk.org/wp-content/uploads/2024/10/4.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20250508T160000
DTEND;TZID=UTC:20250508T170000
DTSTAMP:20260411T100728Z
CREATED:20241105T114930Z
LAST-MODIFIED:20250512T114432Z
UID:2782-1746720000-1746723600@braiduk.org
SUMMARY:‘Responsible AI Futures’ Hybrid Seminar – Dr Lydia Farina
DESCRIPTION:Determining responsibility considerations for AI ecosystems in the context of the creative industries\nThis talk provides key insights from our scoping BRAID project ‘Creating a dynamic archive of Responsible AI Ecosystems in the context of Creative AI’. The project lays the foundation work for mapping RAI ecosystems in this context by using bottom-up evidence already collected in specific research and artistic projects. We interpret AI ecosystems as interlinked ecosystems consisting of different individual actors and groups interacting in complex ways with one another and with AI applications. Evidence collected from the case studies is modelled into a dynamic archive to enable us to determine the boundaries of these ecosystems and the relevant responsibility considerations. The structure of the dynamic archive is based on present and future stakeholders and on responsibility priorities identified by the case study participants. The talk includes insights relating to the responsible use of AI applications both as actors within the ecosystem and as external curators of the dynamic archive.\n\n\n\n\nBio\nLydia Farina is an Assistant Professor in Philosophy at the University of Nottingham\, working on the philosophy of mind\, metaphysics and the philosophy of artificial intelligence. More specifically\, she researches the nature of emotion\, AI responsibility\, affective computing and social kinds. In the past year she researched the use of dynamic archives to determine responsible use of AI in the creative industries as the Principal Investigator of a BRAID scoping project. She holds a PhD and an MA in Philosophy from the University of Manchester\, an MA in Classics from University College London and a BA in Classics from Aristotle University of Thessaloniki. Before academia she worked in finance and is a member of the Chartered Institute of Taxation (CIOT).\n\n\n\n\nWatch the recording below:
URL:https://braiduk.org/event/responsible-ai-futures-hybrid-seminar-dr-lydia-farina
LOCATION:Online only
CATEGORIES:DI Lecture Series
ATTACH;FMTTYPE=image/png:https://braiduk.org/wp-content/uploads/2024/11/8-1.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20250515T160000
DTEND;TZID=UTC:20250515T170000
DTSTAMP:20260411T100728Z
CREATED:20250226T110248Z
LAST-MODIFIED:20250606T083255Z
UID:3298-1747324800-1747328400@braiduk.org
SUMMARY:‘Responsible AI Futures’ Hybrid Seminar – Srravya Chandhiramowuli
DESCRIPTION:Millions of workers\, particularly in global south regions\, are engaged in creating large-scale annotated datasets used for training and fine-tuning models\, as well as making AI work as intended by verifying and correcting its outcomes where required. Yet\, there is little recognition\, in AI development or governance\, of the role of data workers or the challenges they face. In this talk\, I bring attention to the contributions as well as concerns arising from data work through ethnographic insights into two data work projects\, one in which data work is structured as a repetitive\, unitised activity and another which aims to recover data work from such reductive frames using feminist-led\, participatory approaches. By tracing the work practices\, values and tensions across the two projects\, I highlight how data work\, including efforts to responsibilize it\, is caught within and shaped by the globalised supply chains that prioritise efficiency and expansion. Critically examining data work allows us to confront the scalar logics that underpin dataset (and indeed AI) production and to intervene in them as part of envisioning responsible AI futures. \n\n\n\n\nBio\nSrravya Chandhiramowuli is a PhD candidate in the University of Edinburgh’s Institute for Design Informatics and a PhD affiliate at the Centre for Technomoral Futures. Her research closely follows the on-ground practices of dataset production for AI\, bringing particular attention to systemic challenges and frictions in data and AI pipelines. Building on scholarship in Human Computer Interaction (HCI) and Science and Technology Studies (STS)\, Srravya’s research seeks to contribute towards just and equitable AI futures. \n\n\n\n\nWatch the recording here:
URL:https://braiduk.org/event/responsible-ai-futures-hybrid-seminar-srravya-chandhiramowuli
LOCATION:Inspace\, 1 Crichton Street\, Edinburgh\, EH8 9AB\, United Kingdom
CATEGORIES:DI Lecture Series
ATTACH;FMTTYPE=image/png:https://braiduk.org/wp-content/uploads/2025/02/2025-sem-2-eventbrite-images-1.png
END:VEVENT
END:VCALENDAR