When does AI go wrong?

Date: Monday 22 July 2024, 1.00PM - 2.00PM
Location: Microsoft Teams
Meeting ID: 326 655 293 235
Passcode: yg2kCi
Local Group Meeting

For the Edinburgh RSS Local Group's fourth event of the year, join us to hear three speakers talk about 'When does AI go wrong?' We will be joined by Professor Shannon Vallor, Professor Desmond Higham and Dr Eve Poole OBE, who will each give a short talk on the topic.

 
Dr Eve Poole OBE, Robot Souls? (13:00-13:20)
Global concern about the Control Problem in AI could counter-intuitively best be addressed by looking at the so-called bugs in our own programming as humans. Eve will explain what this 'junk code' is, and the role it plays in keeping our own species on track.

Dr Eve Poole OBE has a BA from Durham, an MBA from Edinburgh, and a PhD in Theology and Capitalism from Cambridge. She has written several books, including Robot Souls and Leadersmithing, which was Highly Commended in the 2018 Business Book Awards.

Professor Desmond Higham, AI can go Awry (13:20-13:40)
Des will illustrate practical “adversarial” algorithms that reveal vulnerabilities in AI systems. He will also explain how it is possible to prove that, under appropriate assumptions, these vulnerabilities are unavoidable. Implications for regulation and policymaking will also be discussed.

Prof. Des Higham is a Professor of Numerical Analysis in the School of Mathematics at The University of Edinburgh. His research interests include stochastic computation, data science and applications in AI. He is a Fellow of the Royal Society of Edinburgh and a Fellow of the Society for Industrial and Applied Mathematics (SIAM).

Professor Shannon Vallor, Looking in the AI Mirror (13:40-14:00)
This talk will use the metaphor of the mirror to explain how generative AI tools work, and how and why they go wrong. The mirror metaphor also offers helpful lessons on how to confront generative AI’s failure modes, and how to form more reasonable expectations about what AI can do for us, and what it can’t.

Prof. Shannon Vallor is the Baillie Gifford Chair in the Ethics of Data and Artificial Intelligence at the Edinburgh Futures Institute (EFI) at the University of Edinburgh, where she is also appointed in Philosophy. She is Director of the Centre for Technomoral Futures in EFI, and co-Director of the BRAID (Bridging Responsible AI Divides) programme.
 
 
Contact Chris Oldnall - Edinburgh Local Group Secretary
 