Journal webinars


Journal webinars are held every few months and last about an hour. Papers are carefully selected from recent issues of the Royal Statistical Society's journals by the editorial board for their importance, relevance and/or use of cutting-edge methodology; authors are invited to present their work and take questions from attendees.

Webinar format

Two papers are selected from our journals and their authors are invited to present them; each presentation (20 minutes) is followed by a discussion (25 minutes).

An open discussion led by our discussants will follow the author presentations, and everyone is encouraged to take part. You can ask the authors a question over the phone or, if you prefer, type a message in the chat box.

Questions on the paper or general queries can be emailed in advance of the session to journalwebinar@rss.org.uk.

How to join

Journal webinars are free, open to members and non-members, and simple to join via Microsoft Teams. Read our instructions on how to join.

We'll publish slides on this page when they become available. You may require Adobe Reader and/or PowerPoint viewer to access papers and slides. 

Those unable to listen in live can watch the recording and view the slides afterwards on our YouTube webinar playlist.

Next events  

Visualizing the Effects of Predictor Variables in Black Box Supervised Learning Models, Monday 23 September 2024

Model-Based Clustering for Social Networks, Monday 30 September 2024
 
  • Paper: 'Model-Based Clustering for Social Networks' by Mark S. Handcock and Adrian E. Raftery.
  • Chair: Christian Martin Hennig (University of Bologna)
  • Discussants: Claire Gormley & Nial Friel (University College Dublin)


Past events

The Conditional Permutation Test for Independence While Controlling for Confounders, Monday 13 May 2024

Theoretical Guarantees for Approximate Sampling and Log-Concave Densities, Tuesday 31 October 2023.

Watch the webinar 'Theoretical Guarantees for Approximate Sampling and Log-Concave Densities' on YouTube
  


  • 2021-2022
    02 December 2021
    Journal webinar on Multiple Testing
    Paper 1: ‘AdaPT: an interactive procedure for multiple testing with side information’ by Lei & Fithian
    Abstract

    We consider the problem of multiple-hypothesis testing with generic side information: for each hypothesis H_i we observe both a p-value p_i and some predictor x_i encoding contextual information about the hypothesis. For large-scale problems, adaptively focusing power on the more promising hypotheses (those more likely to yield discoveries) can lead to much more powerful multiple-testing procedures. We propose a general iterative framework for this problem, the adaptive p-value thresholding procedure, which we call AdaPT, which adaptively estimates a Bayes optimal p-value rejection threshold and controls the false discovery rate in finite samples. At each iteration of the procedure, the analyst proposes a rejection threshold and observes partially censored p-values, estimates the false discovery proportion below the threshold and proposes another threshold, until the estimated false discovery proportion is below α. Our procedure is adaptive in an unusually strong sense, permitting the analyst to use any statistical or machine learning method she chooses to estimate the optimal threshold, and to switch between different models at each iteration as information accrues. We demonstrate the favourable performance of AdaPT by comparing it with state-of-the-art methods in five real applications and two simulation studies.
    View slides
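
    For readers who want to see the mechanics, here is a minimal sketch of the AdaPT loop with a single scalar threshold in place of the covariate-driven threshold function the paper actually fits; the starting value, shrink factor and toy data below are illustrative only.
```python
import numpy as np

def adapt_scalar(p, alpha=0.1, shrink=0.9):
    """Toy AdaPT-style loop: a single scalar threshold s is shrunk until the
    estimated false discovery proportion drops below alpha. Mirrored p-values
    (p >= 1 - s) stand in for the partially censored information."""
    s = 0.45                                   # initial threshold (kept below 0.5)
    while True:
        n_rej = int(np.sum(p <= s))            # candidate rejections
        n_mirror = int(np.sum(p >= 1.0 - s))   # estimate of false discoveries
        fdp_hat = (1.0 + n_mirror) / max(n_rej, 1)
        if fdp_hat <= alpha or n_rej == 0:
            return np.flatnonzero(p <= s), s
        s *= shrink                            # analyst proposes a smaller threshold

# toy data: 100 null p-values plus 20 small 'signal' p-values
rng = np.random.default_rng(0)
p = np.concatenate([rng.uniform(size=100), rng.beta(0.2, 5.0, size=20)])
rejections, s_final = adapt_scalar(p)
print(len(rejections), "rejections at threshold", round(s_final, 4))
```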
    Paper 2: ‘Generalized α-investing: definitions, optimality results and application to public databases’ by Aharoni & Rosset
    The increasing prevalence and utility of large public databases necessitates the development of appropriate methods for controlling false discovery. Motivated by this challenge, we discuss the generic problem of testing a possibly infinite stream of null hypotheses. In this context, Foster and Stine suggested a novel method named α-investing for controlling a false discovery measure known as mFDR. We develop a more general procedure for controlling mFDR, of which α-investing is a special case. We show that, in common practical situations, the general procedure can be optimized to produce an expected reward optimal version, which is more powerful than α-investing. We then present the concept of quality preserving databases which was originally introduced by Aharoni and co-workers, which formalizes efficient public database management to save costs and to control false discovery simultaneously. We show how one variant of generalized α-investing can be used to control mFDR in a quality preserving database and to lead to significant reduction in costs compared with naive approaches for controlling the familywise error rate implemented by Aharoni and co-workers.
    View slides
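
    As a companion sketch, the original α-investing rule of Foster and Stine, the special case that the generalized procedure in this paper extends, can be written in a few lines; the 'bet half the wealth' spending rule below is an arbitrary illustrative choice, not one recommended by the authors.
```python
import numpy as np

def alpha_investing(p_stream, alpha=0.05):
    """Foster & Stine's alpha-investing rule for a stream of hypotheses.
    Alpha-wealth is spent on each test and partially earned back whenever a
    hypothesis is rejected, which keeps mFDR under control."""
    wealth = alpha                       # initial alpha-wealth
    rejections = []
    for j, p in enumerate(p_stream):
        if wealth <= 0:
            break
        bet = 0.5 * wealth               # bet half the wealth (illustrative rule)
        level = bet / (1.0 + bet)        # test level whose penalty equals the bet
        if p <= level:
            rejections.append(j)
            wealth += alpha              # pay-out for a discovery
        else:
            wealth -= level / (1.0 - level)   # pay the penalty (equals `bet`)
    return rejections

rng = np.random.default_rng(1)
stream = np.concatenate([rng.beta(0.1, 10.0, size=5), rng.uniform(size=50)])
print(alpha_investing(stream))
```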
    Chair: Yi Yu, University of Warwick
    Discussant: Aaditya Ramdas, Carnegie Mellon University
    Watch the 'Multiple testing' webinar on YouTube
  • 2019-2020

    9 December 2020
    Dynamic networks 
    Paper 1: 'Statistical clustering of temporal networks through a dynamic stochastic block model’ by Matias & Miele. It was published in Series B, Volume 79, Issue 4, September 2017 and was presented by Catherine Matias, CNRS, Sorbonne Université and Université de Paris.
    Abstract
    Statistical node clustering in discrete time dynamic networks is an emerging field that raises many challenges. Here, we explore statistical properties and frequentist inference in a model that combines a stochastic block model for its static part with independent Markov chains for the evolution of the nodes' groups through time. We model binary data as well as weighted dynamic random graphs (with discrete or continuous edge values). Our approach, motivated by the importance of controlling for label switching issues across the different time steps, focuses on detecting groups characterized by a stable within-group connectivity behaviour. We study identifiability of the model parameters and propose an inference procedure based on a variational expectation–maximization algorithm as well as a model selection criterion to select the number of groups. We carefully discuss our initialization strategy, which plays an important role in the method, and we compare our procedure with existing procedures on synthetic data sets. We also illustrate our approach on dynamic contact networks: one of encounters between high school students and two others on animal interactions. An implementation of the method is available as an R package called dynsbm.
    View slides
    Paper 2: 'A network analysis of the volatility of high dimensional financial series' by Barigozzi & Hallin. It was published in Series C, Volume 66, April 2017 and was presented by Matteo Barigozzi, University of Bologna, Italy.
    Abstract
    Interconnectedness between stocks and firms plays a crucial role in the volatility contagion phenomena that characterize financial crises, and graphs are a natural tool in their analysis. We propose graphical methods for an analysis of volatility interconnections in the Standard & Poor's 100 data set during the period 2000–2013, which contains the 2007–2008 Great Financial Crisis. The challenges are twofold: first, volatilities are not directly observed and must be extracted from time series of stock returns; second, the observed series, with about 100 stocks, is high dimensional, and curse‐of‐dimensionality problems are to be faced. To overcome this double challenge, we propose a dynamic factor model methodology, decomposing the panel into a factor‐driven and an idiosyncratic component modelled as a sparse vector auto‐regressive model. The inversion of this auto‐regression, along with suitable identification constraints, produces networks in which, for a given horizon h, the weight associated with edge (i,j) represents the h‐step‐ahead forecast error variance of variable i accounted for by variable j's innovations. Then, we show how those graphs yield an assessment of how systemic each firm is. They also demonstrate the prominent role of financial firms as sources of contagion during the 2007–2008 crisis.
    View slides
    Chair: Yi Yu, University of Warwick
    Discussant: Ernst Wit, University of Groningen, The Netherlands
    Watch the webinar on 'Dynamic networks' on YouTube.



    02 April 2019
    Paper: ‘Confidence intervals for low dimensional parameters in high dimensional linear models’ by Cun-Hui Zhang and Stephanie S Zhang. 
    The paper was published in JRSS Series B (Vol 76:1) in January 2014. It is an open access paper available from Wiley Online Library
    Abstract: The purpose of this paper is to propose methodologies for statistical inference of low dimensional parameters with high dimensional data. We focus on constructing confidence intervals for individual coefficients and linear combinations of several of them in a linear regression model, although our ideas are applicable in a much broader context. The theoretical results that are presented provide sufficient conditions for the asymptotic normality of the proposed estimators along with a consistent estimator for their finite dimensional covariance matrices. These sufficient conditions allow the number of variables to exceed the sample size and the presence of many small non‐zero coefficients. Our methods and theory apply to interval estimation of a preconceived regression coefficient or contrast as well as simultaneous interval estimation of many regression coefficients. Moreover, the method proposed turns the regression data into an approximate Gaussian sequence of point estimators of individual regression coefficients, which can be used to select variables after proper thresholding. The simulation results that are presented demonstrate the accuracy of the coverage probability of the confidence intervals proposed as well as other desirable properties, strongly supporting the theoretical results.
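
    A rough sketch of the de-biasing idea behind the paper's low dimensional projection estimator is given below, assuming scikit-learn and SciPy are available; the lasso tuning, noise estimate and toy data are illustrative simplifications rather than the authors' recommended choices.
```python
import numpy as np
from scipy.stats import norm
from sklearn.linear_model import LassoCV

def debiased_ci(X, y, j, level=0.95):
    """Sketch of a de-biased lasso confidence interval for coefficient j.
    Step 1: lasso fit of y on X; step 2: lasso 'nodewise' regression of column
    j on the remaining columns, whose residual z is the projection direction;
    step 3: one-step correction and a normal interval."""
    n, p = X.shape
    fit = LassoCV(cv=5).fit(X, y)
    beta = fit.coef_
    resid = y - fit.predict(X)
    sigma = np.sqrt(np.sum(resid ** 2) / max(n - np.count_nonzero(beta) - 1, 1))
    others = np.delete(np.arange(p), j)
    node = LassoCV(cv=5).fit(X[:, others], X[:, j])
    z = X[:, j] - node.predict(X[:, others])
    b_j = beta[j] + z @ resid / (z @ X[:, j])          # de-biased point estimate
    se = sigma * np.linalg.norm(z) / abs(z @ X[:, j])  # its standard error
    q = norm.ppf(0.5 + level / 2.0)
    return b_j - q * se, b_j + q * se

rng = np.random.default_rng(2)
n, p = 100, 200
X = rng.standard_normal((n, p))
y = 2.0 * X[:, 0] + rng.standard_normal(n)
print(debiased_ci(X, y, j=0))
```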
    Presenter: Cun-Hui Zhang will present and discuss his paper ‘Confidence intervals for low dimensional parameters in high dimensional linear models’. The paper is co-authored by Stephanie S Zhang.
    Chair: Yi Yu, University of Bristol
    Discussants: Andrea Montanari, Stanford University and Sara van de Geer, ETH Zurich

     

  • 2017-2018

    24 September 2018
    RSS & PSI Journal webinar on ‘Meta-analysis’
    Watch video (YouTube)
    Presenter: Ken Rice will present and discuss his paper ‘A re-evaluation of fixed effect(s) meta-analysis’ published January 2018 in JRSS Series A, Volume 181/1. The paper is co-authored by Julian P T Higgins and Thomas Lumley. Download slides (PDF).
    Presenter: Natalie Dimier will present and discuss her paper ‘An investigation into the two-stage meta-analytic copula modelling approach’ published in Pharmaceutical Statistics, Volume 16/5 in 2017. The paper is co-authored by Susan Todd. Download slides (.pptx).
    The journals' publisher, Wiley, will make the papers free to access from a few weeks before until a few weeks after the webinar.
    Chair: Ian White
    Discussants: James Carpenter and Daniel Jackson


    21 February 2018
    'A Bayesian spatiotemporal model to estimate long-term exposure to outdoor air pollution at coarser administrative geographies in England and Wales' by Sabyasachi Mukhopadhyay & Sujit K Sahu - Download slides (PDF), Watch video (YouTube)
    The paper was published online in Series A of the journal in June 2017. The publisher, Wiley, will make the paper free to access a couple of weeks before and after the event on 21 February.
    Abstract: Estimation of long-term exposure to air pollution levels over a large spatial domain, such as the mainland UK, entails a challenging modelling task since exposure data are often only observed by a network of sparse monitoring sites with variable amounts of missing data. The paper develops and compares several flexible non-stationary hierarchical Bayesian models for the four most harmful air pollutants, nitrogen dioxide and ozone, and PM10 and PM2.5 particulate matter, in England and Wales during the 5-year period 2007–2011. The models make use of observed data from the UK's automatic urban and rural network as well as output of an atmospheric air quality dispersion model developed recently especially for the UK. Land use information, incorporated as a predictor in the model, further enhances the accuracy of the model. Using daily data for all four pollutants over the 5-year period we obtain empirically verified maps which are the most accurate among the competition. Monte Carlo integration methods for spatial aggregation are developed and these enable us to obtain predictions, and their uncertainties, at the level of a given administrative geography. These estimates for local authority areas can readily be used for many purposes such as modelling of aggregated health outcome data and are made publicly available alongside this paper
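
    The Monte Carlo spatial aggregation step can be illustrated in a few lines: given posterior predictive draws at grid points, averaging within each administrative area draw by draw yields area-level estimates together with their uncertainty. The array names, shapes and unweighted averaging below are illustrative assumptions, not the paper's implementation.
```python
import numpy as np

def aggregate_to_areas(draws, area_of):
    """Monte Carlo spatial aggregation: average grid-level posterior predictive
    draws within each administrative area, draw by draw, so that area-level
    uncertainty is preserved. `draws` is (n_draws, n_grid); `area_of` labels
    each grid point with an area code."""
    summary = {}
    for a in np.unique(area_of):
        cols = np.flatnonzero(area_of == a)
        area_draws = draws[:, cols].mean(axis=1)       # one value per posterior draw
        summary[a] = (area_draws.mean(),               # area-level point estimate
                      np.percentile(area_draws, [2.5, 97.5]))  # 95% interval
    return summary

rng = np.random.default_rng(3)
fake_draws = rng.gamma(shape=20.0, scale=1.0, size=(1000, 50))  # stand-in for model output
area_codes = np.repeat(np.arange(5), 10)                        # 5 areas, 10 grid cells each
print(aggregate_to_areas(fake_draws, area_codes)[0])
```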
    Presenter: Sujit K Sahu, Professor of Statistics at the University of Southampton
    Chair: Richard Chandler, Professor of Statistics, University College London
    Discussant: Jonathan Rougier, Professor of Statistical Science, Bristol


    25 October 2017
    Paper: 'Variable selection with error control: Another look at stability selection' by Shah & Samworth, published in Series B, Volume 75, Issue 1, January 2013. Download slides (PDF), Watch video (YouTube)
    Presenters/authors: Rajen D Shah and Richard J Samworth
    Chair: Chenlei Leng
    Discussant: Nicolai Meinshausen
    Abstract: Stability selection was recently introduced by Meinshausen and Bühlmann as a very general technique designed to improve the performance of a variable selection algorithm. It is based on aggregating the results of applying a selection procedure to subsamples of the data. We introduce a variant, called complementary pairs stability selection, and derive bounds both on the expected number of variables included by complementary pairs stability selection that have low selection probability under the original procedure, and on the expected number of high selection probability variables that are excluded. These results require no (e.g. exchangeability) assumptions on the underlying model or on the quality of the original selection procedure. Under reasonable shape restrictions, the bounds can be further tightened, yielding improved error control, and therefore increasing the applicability of the methodology
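
    A minimal sketch of complementary pairs stability selection with a lasso as the base selection procedure is shown below; the regularisation parameter, number of complementary pairs and selection threshold are illustrative choices rather than values recommended in the paper.
```python
import numpy as np
from sklearn.linear_model import Lasso

def cpss(X, y, lam=0.1, B=50, tau=0.6, seed=0):
    """Complementary pairs stability selection: repeatedly split the sample
    into two disjoint halves, run the lasso on each half, and keep variables
    whose empirical selection frequency reaches tau."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    counts = np.zeros(p)
    half = n // 2
    for _ in range(B):
        perm = rng.permutation(n)
        for idx in (perm[:half], perm[half:2 * half]):   # complementary halves
            coef = Lasso(alpha=lam).fit(X[idx], y[idx]).coef_
            counts += coef != 0
    freq = counts / (2 * B)          # empirical selection probabilities
    return np.flatnonzero(freq >= tau), freq

rng = np.random.default_rng(4)
X = rng.standard_normal((200, 30))
y = X[:, :3] @ np.array([2.0, -1.5, 1.0]) + rng.standard_normal(200)
selected, _ = cpss(X, y)
print(selected)
```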


    12 July 2017
    Joint webinars focus on one key paper from a PSI journal and one from an RSS journal. One or more discussants are also invited to join.
    RSS author & paper:
    Zhiwei Zhang, 'Subgroup Selection in Adaptive Signature Designs of Confirmatory Clinical Trials', published February 2017 in JRSS Series C, Volume 66. Slide presentation.
    Co-authored by Meijuan Li, Min Lin, Guoxing Soon, Tom Greene and Changyu Shen.
    Abstract: The increasing awareness of treatment effect heterogeneity has motivated flexible designs of confirmatory clinical trials that prospectively allow investigators to test for treatment efficacy for a subpopulation of patients in addition to the entire population. If a target subpopulation is not well characterized in the design stage, it can be developed at the end of a broad eligibility trial under an adaptive signature design. The paper proposes new procedures for subgroup selection and treatment effect estimation (for the selected subgroup) under an adaptive signature design. We first provide a simple and general characterization of the optimal subgroup that maximizes the power for demonstrating treatment efficacy or the expected gain based on a specified utility function. This characterization motivates a procedure for subgroup selection that involves prediction modelling, augmented inverse probability weighting and low dimensional maximization. A cross-validation procedure can be used to remove or reduce any resubstitution bias that may result from subgroup selection, and a bootstrap procedure can be used to make inference about the treatment effect in the subgroup selected. The approach proposed is evaluated in simulation studies and illustrated with real examples.
    Dr Zhiwei Zhang is Associate Professor of Biostatistics at the University of California, Riverside. He received his PhD in Biostatistics from the University of Pittsburgh in 2004. Prior to his current position, he has worked at the US Food and Drug Administration (as Mathematical Statistician) and the US National Institutes of Health (as Investigator of Biostatistics). Dr. Zhang has done methodological research in many areas of biostatistics, including causal inference, precision medicine, and clinical trial design and analysis.
    PSI author & paper
    Gu Mi, 'Enhancement of the adaptive signature design for learning and confirming in a single pivotal trial' published May 2017 in Pharmaceutical Statistics. Slide presentation, Watch video
    Abstract: Because of the complexity of cancer biology, often the target pathway is not well understood at the time that phase III trials are initiated. A 2-stage trial design was previously proposed for identifying a subgroup of interest in a learn stage, on the basis of 1 or more baseline biomarkers, and then subsequently confirming it in a confirmation stage. In this article, we discuss some practical aspects of this type of design and describe an enhancement to this approach that can be built into the study randomization to increase the robustness of the evaluation. Furthermore, we show via simulation studies how the proportion of patients allocated to the learn stage versus the confirm stage impacts the power and provide recommendations.
    Dr Gu Mi is a Research Scientist at Eli Lilly and Company in Indianapolis, Indiana, USA. He received his PhD in statistics from Oregon State University in June 2014. He has been a clinical and biomarker statistician at Lilly Oncology with hands-on experience of oncology trials. He has been actively involved in trial designs, statistical methodology development in oncology, biomarker data analyses, and regulatory activities. He has been serving as an expert reviewer for eight peer-reviewed journals covering topics such as biopharmaceutical statistics, clinical trials, and bioinformatics.
    Discussant: Dr Richard Simon has just retired as associate director of the Division of Cancer Treatment & Diagnosis and Chief of the Computational and Systems Biology Branch at the National Cancer Institute after many years there. He has published extensively on the methodology of clinical trials and the analysis of high dimensional genomic data and is author of Using Genomics in Clinical Trials and Predictive Medicine and is the architect of the BRB-ArrayTools and BRB SeqTools software packages. Dr Simon is the recipient of the 2013 Karl Peace award of the American Statistical Association and the 2017 Marvin Zelen award from Harvard University.
    Chair: Steven Gilmour, Professor of Statistics at King’s College London. His research is mostly on the statistical theory, methodology and applications of the design and analysis of experiments. Much of his research is on experiments with complex treatment structures, such as those with many variables (factorial designs), those with continuous levels (especially leading to nonlinear models) and those with several continuous variables (response surface methodology).

  • 2015-2016

    26 April 2016
    A scalable bootstrap for massive data (RSS Series B, Volume 76, Issue 4, 2014). Download slides (PowerPoint), Watch video (YouTube)
    Author: Michael I. Jordan is the Pehong Chen Distinguished Professor in the Department of Electrical Engineering and Computer Science and the Department of Statistics at the University of California, Berkeley. He received his Masters in Mathematics from Arizona State University, and earned his PhD in Cognitive Science in 1985 from the University of California, San Diego. He was a professor at MIT from 1988 to 1998. His research interests bridge the computational, statistical, cognitive and biological sciences, and have focused in recent years on Bayesian nonparametric analysis, probabilistic graphical models, spectral methods, kernel machines and applications to problems in distributed computing systems, natural language processing, signal processing and statistical genetics. Prof. Jordan is a member of the National Academy of Sciences, a member of the National Academy of Engineering and a member of the American Academy of Arts and Sciences. He is a Fellow of the American Association for the Advancement of Science. He has been named a Neyman Lecturer and a Medallion Lecturer by the Institute of Mathematical Statistics. He received the IJCAI Research Excellence Award in 2016, the David E. Rumelhart Prize in 2015 and the ACM/AAAI Allen Newell Award in 2009. He is a Fellow of the AAAI, ACM, ASA, CSS, IEEE, IMS, ISBA and SIAM.
    Co-authors: Ariel Kleiner, Ameet Talwalkar, Purnamrita Sarkar
    Chair: Richard Samworth, Cambridge University
    The bootstrap provides a simple and powerful means of assessing the quality of estimators. However, in settings involving large data sets—which are increasingly prevalent—the calculation of bootstrap-based quantities can be prohibitively demanding computationally. Although variants such as subsampling and the m out of n bootstrap can be used in principle to reduce the cost of bootstrap computations, these methods are generally not robust to specification of tuning parameters (such as the number of subsampled data points), and they often require knowledge of the estimator's convergence rate, in contrast with the bootstrap. As an alternative, we introduce the ‘bag of little bootstraps’ (BLB), which is a new procedure which incorporates features of both the bootstrap and subsampling to yield a robust, computationally efficient means of assessing the quality of estimators. The BLB is well suited to modern parallel and distributed computing architectures and furthermore retains the generic applicability and statistical efficiency of the bootstrap. We demonstrate the BLB's favourable statistical performance via a theoretical analysis elucidating the procedure's properties, as well as a simulation study comparing the BLB with the bootstrap, the m out of n bootstrap and subsampling. In addition, we present results from a large-scale distributed implementation of the BLB demonstrating its computational superiority on massive data, a method for adaptively selecting the BLB's tuning parameters, an empirical study applying the BLB to several real data sets and an extension of the BLB to time series data.
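
    The bag of little bootstraps is easy to sketch for the sample mean: each small subset is re-weighted back to the full sample size with multinomial weights, and the per-subset standard errors are then averaged. The subset size exponent and replication counts below are illustrative, not the paper's tuning recommendations.
```python
import numpy as np

def blb_stderr_mean(x, s=10, r=50, gamma=0.6, seed=0):
    """Bag of little bootstraps estimate of the standard error of the sample
    mean. Each of s subsets of size n**gamma is inflated to size n with
    multinomial weights; the per-subset bootstrap standard errors are averaged.
    For a general estimator the weights would be passed to it directly."""
    rng = np.random.default_rng(seed)
    n = len(x)
    b = int(n ** gamma)
    per_subset_se = []
    for _ in range(s):
        sub = rng.choice(x, size=b, replace=False)
        stats = []
        for _ in range(r):
            w = rng.multinomial(n, np.full(b, 1.0 / b))   # weights summing to n
            stats.append(np.average(sub, weights=w))      # mean of a size-n resample
        per_subset_se.append(np.std(stats, ddof=1))
    return float(np.mean(per_subset_se))

x = np.random.default_rng(5).standard_normal(100_000)
print(blb_stderr_mean(x), 1.0 / np.sqrt(len(x)))   # BLB estimate vs the exact SE
```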


    10 November 2016
    ‘Is the Carli Index flawed? Assessing the case for the RPIJ’ published in JRSS-A in 2015 (Vol 178:2). Download slides (PDF)
    Author: Peter Levell is an economics researcher at the Institute for Fiscal Studies (IFS) and a part-time PhD student at University College London. His work at the IFS has so far covered a diverse set of subjects: from measurement issues in expenditure surveys, to issues around taxation, the distributional impact of inflation and behavioural economics. To date his academic work has focused on trying to answer questions concerning the appropriate measurement of consumer price inflation and on achieving a better understanding of household decisions over consumption and labour supply.
    Chair: Paul Smith, Associate Professor in Official Statistics, University of Southampton
    Discussant: Andrew Baldwin, a former employee of Statistics Canada
    This paper discusses the decision in March 2013 of the UK's Office for National Statistics to replace the controversial Carli index with the Jevons index in a new version of the Retail Prices Index - the RPIJ. In doing so we make three contributions to the way price indices should be selected for measures of consumer price inflation when quantity information is not available (i.e. at the 'elementary' level). Firstly, we introduce a new price bouncing test under the test approach for choosing index numbers. Secondly, we provide empirical evidence on the performance of the Carli and Jevons indices in different contexts under the statistical approach. Thirdly, applying something analogous to the principle of insufficient reason, we argue, contrary to received wisdom in the literature, that the economic approach can be used to choose indices at the elementary level, and moreover that it favours the use of the Jevons index. Overall, we conclude that there is a case against the Carli index and that the Jevons index is to be preferred.
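
    The price bouncing problem discussed in the paper can be illustrated with a toy calculation: when prices move and then return to their starting level, a chained Carli index (arithmetic mean of price relatives) drifts upward while the Jevons index (geometric mean) returns to one. The two-item example below is purely illustrative.
```python
import numpy as np

# Prices for two items: one halves and one doubles in period 1,
# then both bounce back to their starting level in period 2.
p0 = np.array([1.0, 1.0])
p1 = np.array([0.5, 2.0])
p2 = p0.copy()

def carli(prev, cur):  return np.mean(cur / prev)                 # arithmetic mean of relatives
def jevons(prev, cur): return np.exp(np.mean(np.log(cur / prev)))  # geometric mean of relatives

# chain the two period-on-period indices
print("Carli chained :", carli(p0, p1) * carli(p1, p2))    # 1.25 * 1.25 = 1.5625 (upward drift)
print("Jevons chained:", jevons(p0, p1) * jevons(p1, p2))  # 1.0  * 1.0  = 1.0
```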


    6 July 2016
    Estimands
    Presented by Alan Phillips, Vice President Biostatistics, ICON Clinical Research, and Peter Diggle, President of the RSS. Webcast (YouTube), slides 1, slides 2
    Chair: Andrew Garrett
    Estimands (what is to be estimated) is a new and hot topic in clinical development, in particular with the regulatory authorities who are responsible for approving new treatments. New regulatory guidance will be developed that will also address the subject of sensitivity analyses. This webinar will explore the current thinking on the topic and revisit some of the earlier work to consider how the topic has evolved over 10 years and how it might look in the future.


    1 March 2016
    'Optimal design: getting more out of experiments with hard-to-change factors'
    Presented by Professor Peter Goos - download slides (PDF), watch video (YouTube)
    Discussant: Maria Lanzerath
    Chair: Steven Gilmour
    We introduce a new method for generating optimal split-plot designs. These designs are optimal in the sense that they are efficient for estimating the fixed effects of the statistical model that is appropriate given the split-plot design structure. One advantage of the method is that it does not require the prior specification of a candidate set. This makes the production of split-plot designs computationally feasible in situations where the candidate set is too large to be tractable. The method allows for flexible choice of the sample size and supports inclusion of both continuous and categorical factors. The model can be any linear regression model and may include arbitrary polynomial terms in the continuous factors and interaction terms of any order. We demonstrate the usefulness of this flexibility with a 100-run polypropylene experiment involving 11 factors where we found a design that is substantially more efficient than designs that are produced by using other approaches.
    Peter Goos is a professor at the Faculty of Bio-Science Engineering of the University of Leuven and at the Faculty of Applied Economics of the University of Antwerp, where he teaches various introductory and advanced courses on statistics and probability. His main research area is the statistical design and analysis of experiments. He has published books on 'The Optimal Design of Blocked and Split-Plot Experiments', 'Optimal Experimental Design: A Case-Study Approach', 'Statistics with JMP: Graphs, Descriptive Statistics and Probability' and 'Statistics with JMP: Hypothesis Tests, ANOVA and Regression'.
    To date, Peter Goos has received the Shewell Award and the Lloyd S Nelson Award of the American Society for Quality, the Ziegel Award and the Statistics in Chemistry Award from the American Statistical Association, and the Young Statistician Award of the European Network for Business and Industrial Statistics.


    21 October 2015
    Frequentist accuracy of Bayesian estimates
    Presented by Bradley Efron, Max H Stein Professor of Humanities and Sciences, Professor of Statistics at Stanford University - download slides
    Discussant: Andrew Gelman of Columbia University.
    Chair: Peter Diggle.
    Bradley's paper 'Frequentist accuracy of Bayesian estimates' was recently published in the Royal Statistical Society's Series B Journal (Volume 77 (2015), part 3). The abstract is as follows:
    In the absence of relevant prior experience, popular Bayesian estimation techniques usually begin with some form of 'uninformative' prior distribution intended to have minimal inferential influence. Bayes' rule will still produce nice-looking estimates and credible intervals, but these lack the logical force attached to experience-based priors and require further justification. This paper concerns the frequentist assessment of Bayes estimates. A simple formula is shown to give the frequentist standard deviation of a Bayesian point estimate. The same simulations required for the point estimate also produce the standard deviation. Exponential family models make the calculations particularly simple, and bring in a connection to the parametric bootstrap.
    Bradley Efron is Max H Stein professor of humanities and sciences, professor of statistics at Stanford University, and professor of biostatistics with the Department of Health Research and Policy in the School of Medicine. He is a former president of both the American Statistical Association and the Institute of Mathematical Statistics, and a recipient of the Ford Prize of the Mathematical Association of America and of both the Wilks Medal and the Noether Prize from the American Statistical Association (ASA). In 2003 Bradley was given the inaugural Rao Prize for outstanding research in statistics by Pennsylvania State University, and in 2005 he received the National Medal of Science. In 2014, Bradley was awarded the Guy Medal in Gold by the Royal Statistical Society for his 'seminal contributions to many areas of statistics'.
    Webcast (YouTube)


    21 May 2015
    Adaptive Designs
    Speakers: Dr Guosheng Yin from the Department of Statistics and Actuarial Science at the University of Hong Kong - download slides (PDF)
    Professor Franz Koenig from the Center for Medical Statistics, Informatics and Intelligent Systems at the Medical University of Vienna - download slides (PDF)
    Chair: Robert Cuffe of ViiV Healthcare.
    Paper 1: Two-stage adaptive randomization for delayed response in clinical trials
    Summary

    Despite the widespread use of equal randomisation in clinical trials, response adaptive randomisation has attracted considerable attention. There is typically a prerun of equal randomisation before the implementation of response-adaptive randomisation, while it is often not clear how many subjects are needed in this prephase. Real-time response-adaptive randomisation often requires patients’ responses to be immediately available after the treatment, whereas clinical responses may take a relatively long period of time to exhibit. We propose a two-stage procedure to achieve a balance between power and response, which is equipped with a likelihood ratio test before skewing the allocation probability towards a better treatment. Furthermore, we develop a non-parametric fractional model and a parametric survival design with an optimal allocation scheme to tackle the common problem caused by delayed response. We evaluate the operating characteristics of the two-stage designs through simulation studies and show that the methods proposed satisfactorily resolve the arbitrary size of the equal randomisation phase and the delayed response problem in response-adaptive randomisation.
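
    To fix ideas, here is a generic two-phase response-adaptive randomisation sketch (not the authors' likelihood-ratio-gated design): an equal-randomisation run-in followed by allocation skewed towards the arm with the higher observed success rate, with responses assumed immediate, which is precisely the limitation that the delayed-response methods in the paper address. All numbers are illustrative.
```python
import numpy as np

def simulate_rar(p_true=(0.3, 0.5), n_burn=40, n_total=200, seed=7):
    """Equal randomisation for the first n_burn patients, then allocation
    skewed towards the arm with the higher smoothed observed success rate."""
    rng = np.random.default_rng(seed)
    successes = np.zeros(2)
    counts = np.zeros(2)
    for i in range(n_total):
        if i < n_burn:
            arm = i % 2                                  # equal-randomisation run-in
        else:
            rates = (successes + 0.5) / (counts + 1.0)   # smoothed success rates
            prob_arm1 = np.sqrt(rates[1]) / (np.sqrt(rates[0]) + np.sqrt(rates[1]))
            arm = int(rng.random() < prob_arm1)          # skewed allocation
        counts[arm] += 1
        successes[arm] += rng.random() < p_true[arm]     # immediate binary response
    return counts, successes / np.maximum(counts, 1)

print(simulate_rar())   # more patients end up on the better arm after the run-in
```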
    Dr Guosheng Yin, currently a professor at the University of Hong Kong, is also an adjunct professor at the University of Texas MD Anderson Cancer Center. He received a PhD in Biostatistics from the University of North Carolina and worked in the Department of Biostatistics at MD Anderson Cancer Center before becoming associate professor in the Department of Statistics and Actuarial Science at the University of Hong Kong in 2009. Dr Yin was elected a Fellow of the American Statistical Association in 2013 and a Member of the International Statistical Institute in 2012. He is Associate Editor for the Journal of the American Statistical Association, Bayesian Analysis and Contemporary Clinical Trials. His main research areas include Bayesian adaptive design in clinical trials and survival analysis. He has published over 100 peer-reviewed papers and a book, ‘Clinical Trial Design: Bayesian and Frequentist Adaptive Methods’, in the John Wiley series.
    Paper 2: Adaptive graph-based multiple testing procedures
    Summary

    Multiple testing procedures defined by directed, weighted graphs have recently been proposed as an intuitive visual tool for constructing multiple testing strategies that reflect the often complex contextual relations between hypotheses in clinical trials. Many well-known sequentially rejective tests, such as (parallel) gatekeeping tests or hierarchical testing procedures, are special cases of the graph-based tests. We generalise these graph-based multiple testing procedures to adaptive trial designs with an interim analysis. These designs permit mid-trial design modifications based on unblinded interim data as well as external information, while providing strong familywise error rate control. Because the adaptive test does not require knowledge of the multivariate distribution of test statistics, it is applicable in a wide range of scenarios including trials with multiple treatment comparisons, endpoints or subgroups, or combinations thereof.
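
    The non-adaptive graphical procedure that these adaptive tests build on can be sketched briefly: each hypothesis carries a weight, and when a hypothesis is rejected its share of α is propagated along the transition matrix. The weight-update rule below follows the standard sequentially rejective algorithm; the interim-analysis adaptations developed in the paper are not shown.
```python
import numpy as np

def graphical_test(p, w, G, alpha=0.025):
    """Sequentially rejective graph-based test: reject any active hypothesis j
    with p_j <= w_j * alpha, then pass its weight along the transition matrix G
    and update G for the remaining hypotheses; repeat until nothing is rejectable."""
    p = np.asarray(p, float); w = np.asarray(w, float); G = np.asarray(G, float)
    m = len(p)
    active = np.ones(m, bool)
    rejected = []
    while True:
        idx = [j for j in range(m) if active[j] and p[j] <= w[j] * alpha]
        if not idx:
            return rejected
        j = idx[0]
        active[j] = False
        rejected.append(j)
        w_new, G_new = w.copy(), G.copy()
        for l in range(m):
            if not active[l]:
                continue
            w_new[l] = w[l] + w[j] * G[j, l]                    # propagate the freed weight
            for k in range(m):
                if k == l or not active[k]:
                    G_new[l, k] = 0.0
                    continue
                denom = 1.0 - G[l, j] * G[j, l]
                G_new[l, k] = (G[l, k] + G[l, j] * G[j, k]) / denom if denom > 0 else 0.0
        w, G = w_new, G_new
        w[j] = 0.0

# Holm's procedure for two hypotheses as a special case: equal weights,
# full alpha transfer between the two nodes.
print(graphical_test([0.01, 0.04], w=[0.5, 0.5], G=[[0, 1], [1, 0]], alpha=0.05))
```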
    Franz Koenig is currently associate professor at the Section of Medical Statistics at the Medical University of Vienna, Austria. He is a member of the ethics committee of the Medical University of Vienna and of the ethics committee of the City of Vienna. From 2008 to 2010 he was seconded to the European Medicines Agency (EMA) in London as a statistical expert in the Unit Human Medicines Development and Evaluation, where he also held the Scientific Secretariat of the Biostatistics Working Party (BSWP). He was involved in the development of guidelines and the assessment of statistical methods and clinical trial protocols. His main research interests are multiple testing, adaptive/flexible designs, interim analyses and data safety monitoring boards (DSMBs). Professor Koenig has served as guest editor for special issues of Biometrical Journal and Statistics in Medicine. He is currently leading the work package on adaptive designs in the EU-funded research project IDEAL and is deputy coordinator of IDEAS, an EU Horizon 2020 funded Marie Curie ITN network on early drug development studies.
    Webcast (YouTube)


    24 February 2015
    Doubly robust estimation of the local average treatment effect curve
    Speaker: Elizabeth Ogburn, Assistant Professor of Biostatistics at Johns Hopkins University.
    Chair: Dr Dylan Small, The Wharton School, University of Pennsylvania.
    Co-authors: Andrea Rotnitzky and Jamie Robins
    Summary
    This paper is about estimation of the causal effect of a binary treatment on an outcome, conditional on covariates, from observational studies or natural experiments in which there may be unmeasured confounding of the treatment-outcome relationship but there is a binary instrument for treatment.
    The paper describes a doubly robust, locally efficient estimator of the parameters indexing a model for the local average treatment effect, conditional on covariates V, when randomisation of the instrument is only true conditional on a high dimensional vector of covariates X, possibly bigger than V. (The local average treatment effect is the treatment effect among compliers, or those subjects whose treatment value would agree with their instrument value, whether that value were 0 or 1). It also discusses the surprising result that inference is identical to inference for the parameters of a model for an additive treatment effect on the treated conditional on V that assumes no treatment-instrument interaction.
    Elizabeth Ogburn (Betsy) has been an Assistant Professor of Biostatistics at Johns Hopkins University since August 2013. She received her PhD in biostatistics from Harvard University, where she worked with Andrea Rotnitzky and Jamie Robins, followed by a postdoctoral fellowship with Tyler VanderWeele at the Harvard School of Public Health Program on Causal Inference. She works on developing statistical methodology for causal inference, with a focus on novel data sources and structures; for example, using electronic medical records to inform individual-level healthcare decisions and using social network and other data that evince complex dependence among observations. The paper is published in the Journal of the Royal Statistical Society: Series B (Statistical Methodology) and is available online to subscribers of the journal.
    Webcast (YouTube), Slides (PDF)

  • 2013-2014

    20 November 2014
    Modelling/predicting criminal behaviour
    Chair: Professor Chris Skinner, professor of statistics at the London School of Economics & Political Science.
    The item count method for sensitive survey questions: Modelling criminal behaviour
    Speakers: Jouni Kuha and Jonathan Jackson
    The item count method is a way of asking sensitive survey questions which protects the anonymity of the respondents by randomization before the interview. It can be used to estimate the probability of sensitive behaviour and to model how it depends on explanatory variables. The results of the authors' analysis of criminal behaviour highlight the fact that careful design of the questions is crucial for the success of the item count method.
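
    The basic item count (list experiment) estimator behind the paper is simply a difference in mean item counts between the two randomised lists; the simulated data and prevalence below are illustrative, and the regression modelling developed by the authors is not shown.
```python
import numpy as np

# Control respondents see J innocuous items; treatment respondents see the same
# list plus the sensitive item. The difference in mean reported counts
# estimates the prevalence of the sensitive behaviour.
rng = np.random.default_rng(6)
n = 2000
innocuous = rng.binomial(3, 0.5, size=n)            # counts over 3 innocuous items
sensitive = rng.binomial(1, 0.15, size=n)           # true prevalence 15% (simulated)
treat = rng.binomial(1, 0.5, size=n).astype(bool)   # random assignment to lists

reported = np.where(treat, innocuous + sensitive, innocuous)
est = reported[treat].mean() - reported[~treat].mean()
se = np.sqrt(reported[treat].var(ddof=1) / treat.sum()
             + reported[~treat].var(ddof=1) / (~treat).sum())
print(f"estimated prevalence: {est:.3f} (SE {se:.3f})")
```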

    Which method predicts recidivism best? A comparison of statistical, machine learning and data mining prediction models
    Speakers: Nikolaj Tollenaar and Peter van der Heijden
    Risk assessment instruments are widely used in criminal justice settings all over the world. However, in recent times, different approaches to prediction have been developed. This paper investigates whether modern techniques in data mining and machine learning provide an improvement in predictive performance over classical statistical methods such as logistic regression and linear discriminant analysis. Using data from criminal conviction histories of offenders, these models are compared. Results indicate that in these data, classical methods tend to do as well as or better than their modern counterparts.
    Webcast (YouTube), Slides (PPTX)
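
    The kind of comparison described above can be reproduced in outline with any off-the-shelf toolkit; the sketch below contrasts a logistic regression with a random forest by cross-validated AUC on synthetic data, which is an illustrative stand-in for the conviction-history data used in the paper.
```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# synthetic, imbalanced binary outcome as a stand-in for reconviction data
X, y = make_classification(n_samples=2000, n_features=20, n_informative=5,
                           weights=[0.7, 0.3], random_state=0)

for name, model in [("logistic regression", LogisticRegression(max_iter=1000)),
                    ("random forest", RandomForestClassifier(n_estimators=200,
                                                             random_state=0))]:
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(f"{name}: mean cross-validated AUC = {auc.mean():.3f}")
```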


    1 April 2014
    Combination Therapies
    Joint event with Statisticians in the Pharmaceutical Industry (PSI) sponsored by Quintiles and Wiley
    Chair: James Carpenter (London School of Hygiene & Tropical Medicine)
    A Bayesian dose finding design for oncology clinical trials of combinational biological agents
    Speaker: Ying Yuan, Department of Biostatistics, University of Texas
    Co-authors: Chunyan Cai, Yuan Ji; Journal of the Royal Statistical Society: Series C (Applied Statistics), Volume 63, Issue 1, Pages 159–173, January 2014
    Slides (PDF)

    Escalation strategies for combination therapy Phase I trials
    Speaker: Michael J Sweeting (Department of Public Health and Primary Care, University of Cambridge)
    Discussant: Tony Sabin (Amgen)
    Co-author: Adrian P Mander; Pharmaceutical Statistics, Volume 11, Issue 3, Pages 258–266, May/June 2012
    Slides (PDF)


    10 December 2013
    Point process modelling for directed interaction networks
    Speakers: Patrick O Perry (pperry@stern.nyu.edu) and Patrick J Wolfe (p.wolfe@ucl.ac.uk)
    Chair: John Aston (J.A.D.Aston@warwick.ac.uk)

    Summary
    Network data often take the form of repeated interactions between senders and receivers tabulated over time. A primary question to ask of such data is which traits and behaviours are predictive of interaction. To answer this question, a model is introduced for treating directed interactions as a multivariate point process: a Cox multiplicative intensity model using covariates that depend on the history of the process. Consistency and asymptotic normality are proved for the resulting partial-likelihood-based estimators under suitable regularity conditions, and an efficient fitting procedure is described. Multicast interactions – those involving a single sender but multiple receivers – are treated explicitly. The resulting inferential framework is then employed to model message sending behaviour in a corporate email network. The analysis gives a precise quantification of which static shared traits and dynamic network effects are predictive of message recipient selection.
    Webcast (Flash), Audio (MP3), slides (PDF)


    30 September 2013
    A likelihood-based sensitivity analysis for publication bias in meta-analysis
    Speaker: Professor John B Copas (Emeritus Professor of Statistics, University of Warwick)
    Chair: Professor James Carpenter (London School of Hygiene & Tropical Medicine)

    Summary
    Publication bias, a serious threat to the validity of meta-analysis, is essentially a problem of non-random sampling. If the research studies identified in a systematic review are thought of as a sample from the population of all studies which have been done in the area of interest, and if studies which report a statistically significant result are more likely to be published than studies whose results are inconclusive, then a meta-analysis of the studies selected in the review will be biased, giving over-estimated treatment effects and exaggerated assessments of significance. This recent paper in Applied Statistics discusses a sample selection model for meta-analysis and suggests a sensitivity analysis that can be useful for assessing how large the effect of publication bias is likely to be. Two examples are discussed in detail, including an example of a published meta-analysis whose conclusion was completely contradicted by evidence from a later large collaborative clinical trial.
    Webcast (Flash), audio (MP3) and slides (PDF) available.


    13 June 2013
    Information quality
    Speakers: Ron S Kenett (KPA, Raanana, Israel, University of Turin, Italy, and New York University–Poly, USA) and Galit Shmueli (Indian School of Business, Gachibowli, India)
    Chair: Dr Shirley Coleman

    Summary
    We define the concept of information quality ‘InfoQ’ as the potential of a data set to achieve a specific (scientific or practical) goal by using a given empirical analysis method. InfoQ is different from data quality and analysis quality, but is dependent on these components and on the relationship between them. We survey statistical methods for increasing InfoQ at the study design and post-data-collection stages, and we consider them relative to what we define as InfoQ.

    We propose eight dimensions that help to assess InfoQ: data resolution, data structure, data integration, temporal relevance, generalizability, chronology of data and goal, construct operationalization and communication. We demonstrate the concept of InfoQ, its components (what it is) and assessment (how it is achieved) through three case studies in on-line auctions research. We suggest that formalising the concept of InfoQ can help to increase the value of statistical analysis and data mining, both methodologically and practically, thus contributing to a general theory of applied statistics.
    Ron Kenett's slides (PDF) and PowerPoint presentation. Webcast (YouTube video) and Galit Shmueli's slides (PDF) also available.


    16 April 2013
    Survival analysis (joint session with PSI)
    Chair: James Carpenter (London School of Hygiene & Tropical Medicine)
    Evaluating joint effects of induction – salvage treatment regimes on overall survival in acute leukaemia
    Speaker: Abdus S Wahed (University of Pittsburgh and RSS)
    Co-author: Peter F Thall, Journal of the Royal Statistical Society: Series C (Applied Statistics), Volume 62, Issue 1, Pages 67–83, January 2013
    Slides (PDF). Abstract and the article are available on the Wiley Online Library website.

    Attenuation of treatment effect due to measurement variability in assessment of progression-free survival
    Speaker: Nicola Schmitt (AstraZeneca)
    Co-authors: S Hong, A Stone, J Denne, Pharmaceutical Statistics, Volume 11, Issue 5, pages 394-402, September/October 2012
    Slides (PDF). Abstract and the article are available on the Wiley Online Library website. Webcast with slides (WMV | MP4) and audio only (MP3) available.