Håvard Rue: Two applications of the variational form of Bayes' theorem

In this talk I will discuss the variational form of Bayes' theorem due to Zellner (1988). This result is the rationale behind the variational (approximate) inference scheme, although this is not always made clear in modern presentations. I will discuss two applications of this result. First, I will show how to do a low-rank mean correction within the INLA framework (with amazing results), which is essential for the next generation of the R-INLA software currently in development. Second, I will introduce the Bayesian learning rule, which unifies many machine-learning algorithms from fields such as optimization, deep learning, and graphical models. These include classical algorithms such as ridge regression, Newton's method, and the Kalman filter, as well as modern deep-learning algorithms such as stochastic gradient descent, RMSprop, and Dropout.
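As a pointer for readers, Zellner's result can be stated in one line (a sketch in standard notation, not necessarily the talk's own; data $y$, parameter $\theta$, prior $p(\theta)$, likelihood $p(y \mid \theta)$): the exact posterior is the solution of a variational problem,

\[
p(\theta \mid y) \;=\; \operatorname*{arg\,min}_{q}\;\Big\{\, \mathbb{E}_{q}\big[-\log p(y \mid \theta)\big] \;+\; \mathrm{KL}\big(q(\theta) \,\big\|\, p(\theta)\big) \,\Big\},
\]

with the minimum taken over all densities $q$. Variational (approximate) inference arises by restricting $q$ to a tractable family.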

The first part of the talk is based on our recent research at KAUST, while the second part is based on arxiv.org/abs/2107.04562, joint work with Dr. Mohammad Emtiyaz Khan, RIKEN Center for AI Project, Tokyo.
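As a sketch of the rule itself (following the paper above; the particular notation here is a convenient choice, with loss $\bar{\ell}$, an exponential-family candidate posterior $q_\lambda$ with natural parameter $\lambda$ and expectation parameter $\mu$, entropy $\mathcal{H}$, and learning rate $\rho$), the Bayesian learning rule is the natural-gradient update

\[
\lambda \;\leftarrow\; \lambda \;-\; \rho\, \nabla_{\mu} \Big( \mathbb{E}_{q_\lambda}\big[\bar{\ell}(\theta)\big] \;-\; \mathcal{H}(q_\lambda) \Big),
\]

and particular choices of the family $q_\lambda$, together with approximations to the expectation, recover the classical and deep-learning algorithms listed above.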

Håvard Rue holds a Ph.D. (1993) from the Norwegian Institute of Technology (NTNU). He was a professor of statistics at NTNU for many years before moving to King Abdullah University of Science and Technology (KAUST), where he is currently a professor. Professor Rue’s research interests lie in computational Bayesian statistics and Bayesian methodology, in particular priors, sensitivity, and robustness. His main body of research is built around the R-INLA project (www.rinla.org), which aims to provide a practical tool for approximate Bayesian analysis of latent Gaussian models, often at extreme data scales.
