Speaker
Wei Hu, Princeton University
Deep learning builds upon the mysterious abilities of gradient-based optimization algorithms. Not only can these algorithms often achieve low loss on complicated non-convex training objectives, but the solutions found can also generalize remarkably well to unseen test data despite significant over-parameterization of the models. Classical approaches in optimization and learning theory that treat empirical risk minimization as a black box are insufficient to explain these mysteries. In this talk, I will illustrate how we can make progress towards understanding deep learning by a more refined approach that opens the black box of the optimizer. In particular, I will present some recent results that take into account the trajectories taken by the gradient descent algorithm, including two case studies: (i) solving low-rank matrix completion via deep linear neural networks, (ii) the connection between wide neural networks and neural tangent kernels, and their implications.
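To give a concrete sense of case study (i), below is a minimal sketch of gradient descent on an over-parameterized deep linear network fit to the observed entries of a low-rank matrix. This is not code from the talk; the depth, matrix sizes, initialization scale, and learning rate are all illustrative assumptions.

```python
import torch

# Illustrative sketch of case study (i): train a deep linear network
# W3 @ W2 @ W1 with gradient descent on the observed entries of a
# low-rank matrix, then check error on the unobserved entries.
# All hyperparameters here are assumptions for demonstration.
torch.manual_seed(0)
n, rank, depth = 30, 2, 3

# Ground-truth rank-2 matrix and a random 50% observation mask.
M_star = torch.randn(n, rank) @ torch.randn(rank, n)
mask = (torch.rand(n, n) < 0.5).float()

# Full-size (over-parameterized) factors with small initialization,
# as trajectory analyses of deep matrix factorization typically assume.
Ws = [torch.nn.Parameter(0.1 * torch.randn(n, n)) for _ in range(depth)]
opt = torch.optim.SGD(Ws, lr=0.2)

for step in range(5000):
    M = Ws[0]
    for W in Ws[1:]:
        M = W @ M
    # Squared loss on observed entries only (matrix completion objective).
    loss = ((mask * (M - M_star)) ** 2).sum() / mask.sum()
    opt.zero_grad()
    loss.backward()
    opt.step()

with torch.no_grad():
    M = Ws[0]
    for W in Ws[1:]:
        M = W @ M
    test_err = (((1 - mask) * (M - M_star)) ** 2).sum() / (1 - mask).sum()
    print(f"observed loss {loss.item():.5f}, unobserved error {test_err.item():.5f}")
```

For case study (ii), the empirical neural tangent kernel evaluated at two inputs is the inner product of the parameter gradients of the network outputs. The sketch below computes one such entry for an assumed 2-layer ReLU network; as the width grows, this kernel concentrates and stays nearly constant during training.

```python
import torch

# Sketch of an empirical NTK entry for a wide 2-layer ReLU network
# (an assumed architecture, chosen for illustration).
torch.manual_seed(0)
width = 4096
net = torch.nn.Sequential(
    torch.nn.Linear(5, width),
    torch.nn.ReLU(),
    torch.nn.Linear(width, 1),
)

def param_grad(x):
    # Gradient of the scalar network output with respect to all parameters.
    net.zero_grad()
    net(x).sum().backward()
    return torch.cat([p.grad.flatten() for p in net.parameters()])

x1, x2 = torch.randn(1, 5), torch.randn(1, 5)
# NTK(x1, x2) = <grad_theta f(x1), grad_theta f(x2)>.
print(f"empirical NTK entry: {(param_grad(x1) @ param_grad(x2)).item():.4f}")
```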
Wei Hu’s website
You are invited to a scheduled Zoom meeting. Zoom is Yale's audio and video conferencing platform.
- Join the Zoom
- Or telephone: 203-432-9666 (2-ZOOM if on campus) or 646-568-7788
- Password: 24
- Meeting ID: 958 6320 8758
- International numbers available
- H.323 and SIP information is available for video conferencing units