Time series generation and anomaly detection in high dimensions

Date: Wednesday 16 March 2022, 2.00PM
Online
Section Group Meeting


With the increasing amount of high-dimensional time series data, there is a need to develop probabilistic methods that can help analyse such data, for example with respect to detecting anomalies or analysing financial portfolios. Moreover, methods for generating synthetic data which reproduce the main features of the original data are much in demand. The Wall Street Journal recently published a piece stating that "By 2024, 60% of the data used for the development of AI and analytics projects will be synthetically generated". Approaches from Applied Probability may help develop and understand such generating methods.
 
This workshop will bring together researchers from the fields of applied probability and data science. The first two talks will be given live; the following two sessions will discuss pre-recorded talks. The workshop will conclude with an overarching discussion.
 
Live talks:
2:00-2:45 (45 min) Prof Terry Lyons FLSW FRSE FRS (Oxford and The Alan Turing Institute)
2:50-3:25 (30+5 min) Prof Lukasz Szpruch (Edinburgh and The Alan Turing Institute)
 
Discussion of pre-recorded talks:
3:30-3:50 Dr. Peter Foster (The Alan Turing Institute)
3:55-4:15 Dr. Renyuan Xu (University of Southern California)
4:20-4:40 Overarching discussion

Titles and Abstracts: 
Neural SDEs, Deep Learning and Stochastic Control - Prof Lukasz Szpruch (Edinburgh and The Alan Turing Institute)
Neural SDEs combine neural networks with risk models based on classical stochastic differential equations (SDEs). We use them to find robust bounds for prices of derivatives and the corresponding hedging strategies while incorporating relevant market data. Neural SDE is an instantiation of generative models, and it can be used to simulate market scenarios needed for assessing risk profiles and hedging strategies. We develop and analyse algorithms needed for the efficient use of neural SDEs and validate our approach with numerical experiments using both local and stochastic volatility models.
Interestingly, the training of neural SDEs, which provide an abstraction of recurrent neural networks, can be studied using the theory of relaxed stochastic control. I will show how the theory of gradient flows in Wasserstein space can be used to prove convergence of noisy stochastic gradient descent algorithms used to train neural network-based models.
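
For readers unfamiliar with neural SDEs, the sketch below illustrates the basic construction under stated assumptions (it is not the speaker's implementation): the drift and diffusion of an SDE are parameterised by small neural networks, and paths are simulated with an Euler-Maruyama scheme so that a generative training objective can compare simulated and observed scenarios.

```python
import torch
import torch.nn as nn

class NeuralSDE(nn.Module):
    """Minimal neural SDE sketch: drift and diffusion are small neural
    networks; paths are simulated with Euler-Maruyama.  Architecture and
    names are illustrative assumptions, not the speaker's code."""
    def __init__(self, dim=1, hidden=32):
        super().__init__()
        self.drift = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(),
                                   nn.Linear(hidden, dim))
        self.diffusion = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(),
                                       nn.Linear(hidden, dim), nn.Softplus())

    def simulate(self, x0, n_steps=100, dt=1e-2):
        # x0: (batch, dim) initial state; returns (batch, n_steps+1, dim) paths
        x, path = x0, [x0]
        for _ in range(n_steps):
            dw = torch.randn_like(x) * dt ** 0.5          # Brownian increments
            x = x + self.drift(x) * dt + self.diffusion(x) * dw
            path.append(x)
        return torch.stack(path, dim=1)

model = NeuralSDE(dim=1)
paths = model.simulate(torch.zeros(64, 1))                # 64 simulated scenarios
# In training, a distance between simulated and observed path distributions
# (e.g. a GAN-type or signature-based loss) would be minimised over the weights.
```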

Anomaly Detection on Streamed Data - Dr. Peter Foster (The Alan Turing Institute)
We introduce a powerful but simple methodology for identifying anomalous observations against a corpus of normal observations. All data are observed through a vector-valued feature map. Our approach depends on the choice of corpus and of feature map, but is invariant to affine transformations of the map and has no other external dependencies, such as a choice of metric; we call it conformance. Applying this method to signatures of time series and other types of streamed data, we provide an effective methodology of broad applicability for identifying anomalous, complex, multimodal sequential data. We demonstrate the applicability and effectiveness of our method by evaluating it against multiple data sets.
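
As a rough illustration of the conformance idea, and not the authors' exact estimator, the sketch below maps each stream to a truncated path signature (here via the iisignature library, one possible choice) and scores a new observation by its smallest Mahalanobis-type distance to the corpus features; such variance-normalised distances are invariant to affine transformations of the feature map.

```python
import numpy as np
import iisignature  # one possible library for computing path signatures

def sig_features(paths, depth=3):
    """Map each path (array of shape (length, dim)) to its truncated signature."""
    return np.stack([iisignature.sig(p, depth) for p in paths])

def conformance(corpus_feats, x_feat):
    """Simplified conformance-style score: smallest Mahalanobis-type distance
    from x_feat to any corpus feature, using the corpus covariance.
    The talk's estimator may differ in detail."""
    cov = np.cov(corpus_feats, rowvar=False)
    prec = np.linalg.pinv(cov)                     # pseudo-inverse for stability
    diffs = corpus_feats - x_feat                  # (n_corpus, n_features)
    d2 = np.einsum('ij,jk,ik->i', diffs, prec, diffs)
    return np.sqrt(d2.min())

rng = np.random.default_rng(0)
normal_paths = [np.cumsum(rng.normal(size=(50, 2)), axis=0) for _ in range(200)]
test_path = np.cumsum(rng.normal(loc=0.5, size=(50, 2)), axis=0)   # shifted drift

corpus = sig_features(normal_paths)
score = conformance(corpus, sig_features([test_path])[0])
print(score)   # large values flag the stream as anomalous relative to the corpus
```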

Learning to Simulate Tail Risk Scenarios - Dr Renyuan Xu (University of Southern California) 
The estimation of loss distributions for dynamic portfolios requires the simulation of scenarios representing realistic joint dynamics of their components. Scalability to large or heterogeneous portfolios involving multiple asset classes is particularly challenging, as is the accurate representation of tail risk.
 
We propose a novel data-driven approach for the simulation of realistic multi-asset scenarios with a particular focus on the accurate estimation of tail risk for a given class of static and dynamic portfolios selected by the user. By exploiting the joint elicitability property of Value-at-Risk (VaR) and Expected Shortfall (ES), we design a Generative Adversarial Network (GAN) architecture capable of learning to simulate price scenarios that preserve tail risk features for these benchmark trading strategies, leading to consistent estimators for their Value-at-Risk and Expected Shortfall. We demonstrate the accuracy and scalability of our method via extensive simulation experiments using synthetic and market data. Our results show that, in contrast to other data-driven scenario generators, our proposed scenario simulation method correctly captures tail risk for both static and dynamic portfolios.
 
This is based on joint work with Rama Cont, Mihai Cucuringu, and Chao Zhang from Oxford University.
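
To make the tail-risk objective concrete, the sketch below (illustrative only, not the authors' architecture) estimates the empirical VaR and ES of a static benchmark portfolio from scenario data and evaluates the FZ0 scoring function of Patton, Ziegel and Chen (2019), one commonly used strictly consistent score for the (VaR, ES) pair; a GAN objective of the kind described in the talk can exploit such joint elicitability to train the scenario generator.

```python
import numpy as np

def var_es(pnl, alpha=0.05):
    """Empirical alpha-VaR and alpha-ES (lower tail) of a PnL sample."""
    v = np.quantile(pnl, alpha)
    e = pnl[pnl <= v].mean()
    return v, e

def fz0_score(pnl, v, e, alpha=0.05):
    """FZ0 scoring function (Patton, Ziegel & Chen 2019): strictly consistent
    for the (VaR, ES) pair, valid when e < 0.  Lower average score means the
    reported (v, e) better matches the tail of the PnL sample."""
    hit = (pnl <= v).astype(float)
    return np.mean(-hit * (v - pnl) / (alpha * e) + v / e + np.log(-e) - 1.0)

# Toy illustration with hypothetical data: compare the tail risk of a
# benchmark portfolio under "real" and "simulated" scenarios.
rng = np.random.default_rng(0)
real_returns = rng.standard_t(df=3, size=(10_000, 5)) * 0.01   # heavy-tailed assets
sim_returns = rng.normal(scale=0.012, size=(10_000, 5))        # light-tailed generator
weights = np.full(5, 0.2)                                       # static benchmark portfolio

v_real, e_real = var_es(real_returns @ weights)
print(fz0_score(sim_returns @ weights, v_real, e_real))
# A tail-aware GAN objective would train the generator so that, for each
# benchmark strategy, scores like this are minimised by the true (VaR, ES).
```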
Speakers:
  • Prof Terry Lyons FLSW FRSE FRS (Oxford and The Alan Turing Institute)
  • Prof Lukasz Szpruch (Edinburgh and The Alan Turing Institute)
  • Dr. Peter Foster (The Alan Turing Institute)
  • Dr. Renyuan Xu (University of Southern California)
 
Organisers: Alex Cox, Kathrin Glau, Hao Ni, Gesine Reinert
Organising Group: RSS Applied Probability Section
 
Fellows: Free
Non-Fellows: £10.00