BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//BRAID UK - ECPv6.14.2//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:BRAID UK
X-ORIGINAL-URL:https://braiduk.org
X-WR-CALDESC:Events for BRAID UK
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:UTC
BEGIN:STANDARD
TZOFFSETFROM:+0000
TZOFFSETTO:+0000
TZNAME:UTC
DTSTART:20240101T000000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=UTC:20240502T160000
DTEND;TZID=UTC:20240502T170000
DTSTAMP:20260410T193353Z
CREATED:20240412T154543Z
LAST-MODIFIED:20240503T142505Z
UID:1850-1714665600-1714669200@braiduk.org
SUMMARY:‘Responsible AI Futures’ Hybrid Seminar Series – Dr Emily Postan
DESCRIPTION:Uncanny Kinds\n\nIn the fields of healthcare and health research\, there is particular interest in using machine learning (ML) to generate novel or refined diagnostic\, prognostic\, risk\, and treatment categories. This talk interrogates the nature of these categories and their implications for the people thus (re)categorised. It approaches these questions through the lens of the philosophical idea of ‘human kinds’. It asks to what extent health-related categories generated by ML might function as human kinds and\, if so\, whether they might differ\, in ethically significant ways\, from socially-originating kinds. In doing so\, it suggests that our understanding of responsible ML categorisation practices needs to look beyond technical capabilities and clinical utility to consider wider personal and social impacts.\nBio\nEmily Postan is a Chancellor’s Fellow in Bioethics at the University of Edinburgh Law School and a Deputy Director of the J Kenyon Mason Institute for Medicine\, Life Sciences and the Law. Her research principally focuses on ethical questions about the relationship between our bodies\, our health\, and our identities\, and the ways that health technologies affect these relationships. Her current research project ‘Identity by Algorithm’ explores the ethical implications of novel social categories generated by health applications of AI. Her wider research interests include addressing the ethical challenges posed by data sharing\, neurotechnologies\, genomics\, and assisted reproduction. Emily has a background in philosophy and as a policy manager at the Scottish Government. She received her PhD from Edinburgh Law School in 2017. Her monograph ‘Embodied Narratives: Protecting Identity Interests through Ethical Governance of Bioinformation’ was published by CUP in 2022.\nX: @emily_postan\nWatch the recording now:
URL:https://braiduk.org/event/responsible-ai-futures-hybrid-seminar-series-dr-emily-postan
LOCATION:Inspace\, 1 Crichton Street\, Edinburgh\, EH8 9AB\, United Kingdom
CATEGORIES:DI Lecture Series
ATTACH;FMTTYPE=image/png:https://braiduk.org/wp-content/uploads/2024/04/Emily-Postan-banner.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20240516T160000
DTEND;TZID=UTC:20240516T170000
DTSTAMP:20260410T193353Z
CREATED:20240412T154432Z
LAST-MODIFIED:20240530T124804Z
UID:1853-1715875200-1715878800@braiduk.org
SUMMARY:‘Responsible AI Futures’ Hybrid Seminar Series – Bhargavi Ganesh
DESCRIPTION:Reframing Governance as Innovation: Steamboat accidents and their lessons for AI governance\nDespite the emergence of promising policy proposals worldwide\, AI governance is often discussed by policymakers and scholars alike as an intractable challenge. This is largely due to the technical/organisational complexity of sociotechnical AI systems\, and a fear that imperfect regulation will result in suppression of technological innovation. In this talk\, I will draw on the historical example of a previously “ungovernable” technology\, the steamboat in the 1800s\, to challenge latent scepticism and argue that the governance of AI should in and of itself be viewed as an exercise in innovation. Steamboat governance was iterative\, requiring many instances of trial and error before achieving its aims. Similarly\, global AI governance can be reframed as an exercise in policy innovation. In doing so\, we can both celebrate the progress that has already been made\, and remain optimistic about the emergence of new regulatory interventions in response to novel challenges generated by AI.\nBio\nBhargavi Ganesh is a PhD student at the University of Edinburgh\, working on mixed-method approaches for designing and evaluating the governance of AI. In the past year\, she has worked within Bridging Responsible AI Divides (BRAID) on a consultation for the Department for Science\, Innovation and Technology (DSIT)\, and interned within the former Centre for Data Ethics and Innovation. She is a member of the School of Informatics’ Artificial Intelligence Applications Institute and the Edinburgh Futures Institute’s Centre for Technomoral Futures. Bhargavi is currently affiliated with the Regulation and Design Lab at the University of Edinburgh and the Governance and Responsible AI Lab at Purdue University. Prior to her PhD\, Bhargavi’s research focused on the impacts of consumer finance policies on marginalized groups. Bhargavi holds a Bachelor’s degree (with honours) from New York University and a Master’s in Computational Analysis and Public Policy from the University of Chicago.\nX: @Bhargavi_Ganesh\nWatch the recording now:
URL:https://braiduk.org/event/responsible-ai-futures-hybrid-seminar-series-bhargavi-ganesh
LOCATION:Inspace\, 1 Crichton Street\, Edinburgh\, EH8 9AB\, United Kingdom
CATEGORIES:DI Lecture Series
ATTACH;FMTTYPE=image/png:https://braiduk.org/wp-content/uploads/2024/04/Bhargavi-Ganesh-banner.png
END:VEVENT
END:VCALENDAR