Markovian Linear Stochastic Approximation: Bias and Extrapolation

Thu Nov 24, 2022, 4:00–5:00 p.m.

Speaker

Qiaomin Xie, University of Wisconsin-Madison

We consider Linear Stochastic Approximation (LSA) with a constant stepsize and Markovian data. Viewing the LSA iterate as a Markov chain, we prove its convergence to a unique stationary distribution in Wasserstein distance and establish non-asymptotic, geometric convergence rates. Furthermore, we show that the bias vector of this limit admits an infinite series expansion with respect to the stepsize, and hence the bias is proportional to the stepsize up to higher-order terms. This result stands in contrast with LSA under i.i.d. data, for which the bias vanishes. In fact, we show that the bias scales with the mixing time of the Markovian data. With the above characterization, one can employ Richardson-Romberg extrapolation with m stepsizes to eliminate the m−1 leading terms in the bias expansion, resulting in an exponentially smaller bias.
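
To make the bias-and-extrapolation phenomenon concrete, here is a minimal Python sketch on a hypothetical two-state toy problem. The chain, matrices, stepsizes, and iteration counts below are illustrative assumptions of ours, not taken from the talk; the code only instantiates constant-stepsize LSA with Markovian data and a two-stepsize Richardson-Romberg combination.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy problem (illustrative, not from the talk): solve A x = b,
# where the data (A_t, b_t) are driven by a two-state Markov chain.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])                       # chain transition matrix
A_states = [np.array([[2.0]]), np.array([[0.5]])]
b_states = [np.array([1.0]), np.array([0.5])]

# Stationary distribution of P, used only to form the target solution
# x* solving (E_pi[A_t]) x = E_pi[b_t].
pi = np.array([2.0 / 3.0, 1.0 / 3.0])
A_bar = pi[0] * A_states[0] + pi[1] * A_states[1]
b_bar = pi[0] * b_states[0] + pi[1] * b_states[1]
x_star = np.linalg.solve(A_bar, b_bar)

def averaged_lsa(alpha, n_iters=200_000, burn_in=50_000):
    """Constant-stepsize LSA x_{t+1} = x_t + alpha (b_t - A_t x_t) along the
    Markov chain; returns the average of the post-burn-in iterates, which
    estimates the mean of the limiting stationary distribution."""
    s, x, total = 0, np.zeros(1), np.zeros(1)
    for t in range(n_iters):
        x = x + alpha * (b_states[s] - A_states[s] @ x)
        if t >= burn_in:
            total += x
        s = rng.choice(2, p=P[s])                # advance the Markov chain
    return total / (n_iters - burn_in)

# Under Markovian data the averaged iterate retains a bias ~ c * alpha.
x_a, x_2a = averaged_lsa(0.1), averaged_lsa(0.2)

# Richardson-Romberg with m = 2 stepsizes: 2 x(alpha) - x(2 alpha)
# cancels the leading O(alpha) term in the bias expansion.
x_rr = 2 * x_a - x_2a
print("bias at alpha:  ", x_a - x_star)
print("bias at 2alpha: ", x_2a - x_star)
print("bias after RR:  ", x_rr - x_star)
```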

The above results give a recipe for approaching the best of three worlds: (1) use a constant stepsize to achieve fast, geometric convergence of the optimization error, (2) average the iterates to eliminate the asymptotic variance, and (3) employ extrapolation to order-wise reduce the asymptotic bias. Our results immediately apply to the Temporal Difference learning algorithm with linear function approximation.
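
For readers who want the TD connection spelled out, the sketch below gives the standard identification of TD(0) with linear function approximation as an instance of LSA; the function name, features, and parameter values are our illustrative choices, not notation from the talk.

```python
import numpy as np

def td0_lsa_terms(phi_s, phi_s_next, r, gamma):
    """Standard identification of TD(0) with linear features as LSA:
    the update theta <- theta + alpha * (b_t - A_t @ theta) uses
      A_t = phi(s_t) (phi(s_t) - gamma * phi(s_{t+1}))^T,
      b_t = r_t * phi(s_t),
    so constant-stepsize bias and extrapolation results for LSA
    carry over to TD(0) directly."""
    A_t = np.outer(phi_s, phi_s - gamma * phi_s_next)
    b_t = r * phi_s
    return A_t, b_t

# Example: one TD(0) step with illustrative features, reward, and stepsize.
theta = np.zeros(2)
A_t, b_t = td0_lsa_terms(np.array([1.0, 0.0]), np.array([0.0, 1.0]),
                         r=1.0, gamma=0.9)
theta = theta + 0.1 * (b_t - A_t @ theta)        # constant stepsize alpha = 0.1
```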
Qiaomin Xie’s website

We invite you to attend our in-person seminar, with the option of joining via Zoom.

Zoom Link

Password: 24

Or Telephone: 203-432-9666 (2-ZOOM if on campus) or 646-568-7788

Meeting ID: 924 1107 7917