Part of the AI Fringe
The launch of ChatGPT less than a year ago was a milestone moment in the story of artificial intelligence. Overnight, large language models were transformed from research projects into consumer products, now used by millions each month. The capabilities are impressive, the productivity gains undeniable. But what of the downsides?
In March 2023, the UK government published a white paper that spoke of the risks AI poses to "people’s privacy, their human rights [and] their safety" as well as "concerns about the fairness of using AI tools to make decisions which impact people’s lives". The Royal Statistical Society, in response, has called for investment in a centre for AI evaluation methodology, arguing that users of AI systems should be able to judge the trustworthiness of claims made by AI companies as well as the outputs of their systems.
In early November, the UK government hosts an AI Safety Summit. Ahead of that event, we ask: What should AI evaluation look like? How will it work in practice? What metrics are most important, and, crucially, who gets to decide? Join us for a special panel debate at the RSS, where these questions, and more, will be discussed.
- Andy Garrett, RSS president (chair)
- Mihaela van der Schaar, John Humphrey Plummer Professor of Machine Learning, Artificial Intelligence and Medicine at the University of Cambridge and a Fellow at The Alan Turing Institute
- Detlef Nauck, Head of AI and Data Science Research, BT
- Stephanie Hare, researcher, broadcaster, and author of Technology is Not Neutral
- Mark Levene, Principal Scientist, Department of Data Science, National Physical Laboratory
- Martin Goodson, Chief Executive, Evolution AI, and former chair of the RSS Data Science and AI Section
Royal Statistical Society, in partnership with the RSS Data Science & AI, Data Ethics & Governance, and Computational Statistics & Machine Learning Sections, and Real World Data Science (realworlddatascience.net)