Andrea Montanari, Stanford University
Abstract: Computing expectations or drawing samples from high-dimensional
Bayesian posteriors is a notoriously difficult problem. Computational hardness
considerations have led the statistics community to develop ingenious approximate
techniques (often without guarantees), or to restrict attention to regimes in which
the posterior concentrates tightly around a point.
I will consider a simple yet rich example: rank-one matrix estimation. Focusing on the truly
difficult regime, I will review recent progress towards constructing efficient algorithms for
inference and sampling. In particular, I will describe an approach to sampling that leverages
a class of diffusion processes that has been successfully exploited in
generative machine learning.
[Based on joint work with Alex Wein and Yuchen Wu.]
3:30pm - Pre-talk meet-and-greet teatime - 219 Prospect Street, 13th floor; light snacks and beverages will be available in the kitchen area.