Uncertainty Quantification Methods (In person)

Date: Tuesday 23 September 2025, 1:00pm
Location: Heriot-Watt University, Edinburgh
Section Group Meeting



The Applied Probability Section of the Royal Statistical Society will host a half-day meeting on Uncertainty Quantification Methods at Heriot-Watt University, Edinburgh.
The meeting will feature talks from Prof. Mike Giles and Prof. Lorenzo Pareschi on recent developments in UQ across stochastic modelling, numerical methods, and data-driven inference. Speakers will present theoretical and computational tools for quantifying uncertainty in complex probabilistic systems, with applications from finance to engineering and beyond.
The meeting aims to foster discussion between probabilists, statisticians, and practitioners, and welcomes early-career researchers.
 
Schedule (titles and abstracts below where available):
- 1:00-1:30pm: Veronika Chronholm (University of Bath)
- 1:30-2:15pm: Prof. Lorenzo Pareschi (Heriot-Watt University)
- 2:15-2:45pm: Coffee break
- 2:45-3:15pm: Jonathan Spence (University of Edinburgh)
- 3:15-4:00pm: Prof. Mike Giles (University of Oxford)
 
 
Veronika Chronholm (University of Bath)
Title and abstract to be confirmed.

Prof. Lorenzo Pareschi (Heriot-Watt University)
From Stochastic Particles to Neural Network Surrogates: New Directions in Uncertainty Quantification
 
Uncertainty quantification (UQ) has become an indispensable component of modern computational science, providing essential tools to assess the reliability of predictions in complex multiscale systems. In this talk I will present a perspective on recent developments at the interface of numerical analysis, multiscale modelling, and scientific computing, with examples ranging from particle-based methods to multi-fidelity strategies incorporating neural network surrogates. Applications will span areas from classical physics and engineering to the social and life sciences, with the aim of highlighting general principles and outlining future challenges for UQ methods.

Jonathan Spence (University of Edinburgh)
Efficient Data-Driven Sampling via Unfolded MCMC
 
Data-driven models can represent complex, high-dimensional data distributions by learning model parameters that embed feature information from a training dataset. Recent developments in Bayesian algorithms for imaging inverse problems focus on using data-driven methods to model prior information over a distribution of images. For example, plug-and-play methods represent the gradient of a prior distribution through a denoiser, approximated by a deep neural network trained to minimise a mean-squared error. The denoiser then provides gradient information for MCMC or gradient descent methods. In this work, we consider a novel approach to training data-driven models for inverse problems by 'unfolding' a small number of iterations of an MCMC kernel as a recurrent neural network with hyper-parametric weights. By training with a combination of adversarial and regularisation losses, the method promotes architectures that generate accurate samples from target posterior distributions.
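As background to the abstract above, the plug-and-play idea it mentions (a denoiser supplying prior-gradient information to an MCMC step) can be sketched as a single unadjusted Langevin update. This is a minimal illustrative sketch, not the unfolded-MCMC method of the talk; the function and parameter names are our own, and the prior gradient uses the standard Tweedie-style approximation `(denoiser(x) - x) / eps**2`.

```python
import numpy as np

def pnp_ula_step(x, y, A, sigma, eps, denoiser, step, rng):
    """One plug-and-play unadjusted Langevin step (illustrative sketch).

    Likelihood model y = A x + N(0, sigma^2 I), so
        grad_x log p(y|x) = A^T (y - A x) / sigma^2.
    The prior gradient is approximated from a denoiser via
        grad_x log p(x) ~= (denoiser(x) - x) / eps^2.
    """
    grad_lik = A.T @ (y - A @ x) / sigma**2     # data-fit gradient
    grad_prior = (denoiser(x) - x) / eps**2     # denoiser-based prior gradient
    noise = np.sqrt(2.0 * step) * rng.standard_normal(x.shape)
    return x + step * (grad_lik + grad_prior) + noise
```

Unfolding, as described in the abstract, would chain a small fixed number of such steps and learn the step-size-like weights end-to-end rather than fixing them by hand.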

Prof. Mike Giles (University of Oxford)  
Optimisation of mixed-precision MLMC for FPGAs
 
Multilevel Monte Carlo (MLMC) is a computational method that reduces the cost of Monte Carlo simulations by combining SDE approximations at multiple resolutions. A further avenue for significantly reducing the cost and improving the power efficiency of MLMC, notably for financial option pricing, is the use of low-precision calculations on configurable hardware devices such as Field-Programmable Gate Arrays (FPGAs). With this goal in mind, in this talk we propose a new MLMC framework that exploits approximate random variables and fixed-point operations with optimised precision to compute most SDE paths at a lower cost.
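The MLMC identity underlying the abstract above can be sketched in a few lines; the mixed-precision and FPGA aspects of the talk are hardware-specific and beyond a short sketch. Here `sampler` and `N` are hypothetical names for a coupled-path correction generator and a per-level sample-count list.

```python
import numpy as np

def mlmc_estimate(sampler, N):
    """Multilevel Monte Carlo estimator based on the telescoping sum
        E[P_L] = E[P_0] + sum_{l=1}^{L} E[P_l - P_{l-1}].

    sampler(l, n) (hypothetical interface) returns n samples of P_0 when
    l == 0, and n coupled samples of the correction P_l - P_{l-1}
    otherwise.  Coupling makes the correction variance shrink with l, so
    most samples can be taken on cheap coarse levels.
    """
    return sum(np.mean(sampler(l, n)) for l, n in enumerate(N))
```

In practice the per-level sample counts in `N` are chosen from the estimated cost and variance of each level to minimise total work for a given accuracy.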
 
Organised by Abdul-Lateef Haji-Ali and Emma Horton for the RSS Applied Probability Section.
 
Registration: RSS Fellows free; non-RSS members £10.