Hybrid

Jeremy Cohen, Research Fellow at the Flatiron Institute's Center for Computational Mathematics (CCM)

Mon Jan 26, 2026, 4:00 p.m. – 5:00 p.m.


Kline Tower, 13th Floor, Rm. 1327
219 Prospect Street, New Haven, CT 06511

Webcast Option: https://yale.hosted.panopto.com/Panopto/Pages/Viewer.aspx?id=1f9b1a17-6…

Title: How does gradient descent work?

Abstract: Optimization is the engine of deep learning, yet the theory of optimization has had little impact on the practice of deep learning. Why?  In this talk, we will first show that traditional theories of optimization cannot explain the convergence of the simplest optimization algorithm — deterministic gradient descent — in deep learning. Whereas traditional theories assert that gradient descent converges because the curvature of the loss landscape is “a priori” small, we will explain how in reality, gradient descent converges because it *dynamically avoids* high-curvature regions of the loss landscape. Understanding this behavior requires Taylor expanding to third order, which is one order higher than normally used in optimization theory. While the “fine-grained” dynamics of gradient descent involve chaotic oscillations that are difficult to analyze, we will demonstrate that the “time-averaged” dynamics are, fortunately, much more tractable. We will present an analysis of these time-averaged dynamics that yields highly accurate quantitative predictions in a variety of deep learning settings. Since gradient descent is the simplest optimization algorithm, we hope this analysis can help point the way towards a mathematical theory of optimization in deep learning.
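For readers unfamiliar with the curvature condition the abstract alludes to, the following is a minimal illustrative sketch in Python (not taken from the talk, and the function name and constants are purely illustrative): on a one-dimensional quadratic loss, gradient descent with step size eta is stable only when the curvature is below 2 / eta. This is the classical "a priori small curvature" requirement that the talk contrasts with the dynamical picture of gradient descent avoiding high-curvature regions.

# Illustrative sketch (not from the talk): on the quadratic loss L(x) = 0.5 * a * x**2,
# the gradient descent update is x_{t+1} = x_t - eta * a * x_t = (1 - eta * a) * x_t,
# which contracts toward the minimum only when the curvature a < 2 / eta.

def gradient_descent(a, eta, x0=1.0, steps=50):
    """Run gradient descent on L(x) = 0.5 * a * x**2 and return the final iterate."""
    x = x0
    for _ in range(steps):
        x = x - eta * a * x      # multiply by (1 - eta * a) each step
    return x

eta = 0.1
for a in [5.0, 19.0, 21.0]:      # the stability threshold here is 2 / eta = 20
    print(f"curvature a = {a:5.1f}  ->  |x_50| = {abs(gradient_descent(a, eta)):.3e}")
# Curvatures below 2/eta shrink the iterate toward the minimum; above it, the iterates blow up.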

3:30 p.m. – Pre-talk meet-and-greet teatime, 219 Prospect Street, 13th floor; light snacks and beverages will be available in the kitchen area. For more details and upcoming events, visit our website at https://statistics.yale.edu/calendar.