Hans Kristian Kamfjord Eriksen: CMB analysis by Gibbs sampling

Fredagskollokvium (Friday colloquium)

Abstract

Maximum-likelihood estimators are widely recognized as the best estimators in statistical theory. Unfortunately, in real-world applications one is often forced to adopt sub-optimal estimators, usually because of computational constraints. This was long believed to be the case for the CMB analysis problem as well, because a naive approach to the problem requires the inversion of large matrices, implying a computational scaling of O(N^3) with N ~ 10^7. However, three years ago a new approach was outlined by some of my collaborators (Jewell et al. 2004; Wandelt et al. 2004), allowing for a much faster route to the full likelihood solution, with a scaling of O(N^(3/2)). This approach relies on a general MCMC technique called the Gibbs sampler. The result is a highly efficient, flexible and accurate tool that allows for optimal end-to-end CMB analysis.
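In outline (following the cited papers rather than details spelled out in this abstract), the sampler targets the joint posterior P(C_l, s | d) of the angular power spectrum C_l and the CMB sky signal s given the data d, by alternately drawing s from the conditional P(s | C_l, d) and C_l from the conditional P(C_l | s). Neither step requires forming or inverting the full N x N covariance matrix, which is what breaks the O(N^3) bottleneck.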

In this talk, I will present some of our most recent work, and show results from the latest 3-year WMAP analysis. Even though the particular application discussed here is CMB-oriented, I will also discuss the algorithm in general terms, and hopefully provide some intuition on how and why it works. Given the general nature of the algorithm, this might be of interest to anybody attempting to map out a probability distribution P(x_1, ..., x_n), where n is large.
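As a minimal toy sketch of the general idea (not the actual CMB pipeline; the target distribution and the values of rho and n_samples below are purely illustrative), a Gibbs sampler maps out a joint distribution by repeatedly drawing each variable from its conditional distribution given the current values of all the others. For a bivariate Gaussian this fits in a few lines of Python:

import numpy as np

# Toy Gibbs sampler for a zero-mean bivariate Gaussian with unit variances
# and correlation rho (illustrative values, not part of the CMB analysis).
# The exact conditionals are x1 | x2 ~ N(rho*x2, 1 - rho^2) and vice versa.
rho = 0.9
n_samples = 20000
rng = np.random.default_rng(0)

x1, x2 = 0.0, 0.0                      # arbitrary starting point
samples = np.empty((n_samples, 2))
for i in range(n_samples):
    x1 = rng.normal(rho * x2, np.sqrt(1.0 - rho**2))   # draw x1 | x2
    x2 = rng.normal(rho * x1, np.sqrt(1.0 - rho**2))   # draw x2 | x1
    samples[i] = (x1, x2)

# After burn-in the samples trace out the joint distribution; the empirical
# correlation should be close to rho.
print(np.corrcoef(samples[5000:].T)[0, 1])

The CMB application follows the same pattern, only with the two "variables" being the sky signal and the power spectrum, and with far more expensive conditional draws.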
