Wei Hu, Princeton University
Deep learning builds upon the mysterious abilities of gradient-based optimization algorithms. Not only can these algorithms often achieve low loss on complicated non-convex training objectives, but the solutions found can also generalize remarkably well to unseen test data despite significant over-parameterization of the models. Classical approaches in optimization and learning theory that treat empirical risk minimization as a black box are insufficient to explain these mysteries. In this talk, I will illustrate how we can make progress towards understanding deep learning by a more refined approach that opens the black box of the optimizer. In particular, I will present some recent results that take into account the trajectories taken by the gradient descent algorithm, including two case studies: (i) solving low-rank matrix completion via deep linear neural networks, (ii) the connection between wide neural networks and neural tangent kernels, and their implications.
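The first case study can be illustrated with a minimal sketch of gradient descent on an over-parameterized matrix factorization (all specifics below, such as the depth-two factorization, initialization scale, and step size, are illustrative assumptions rather than the speaker's exact setup). The point is trajectory-dependent: fitting only the observed entries through a product of factors started from small initialization tends to recover a low-rank completion, even though nothing in the loss explicitly penalizes rank.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground-truth low-rank matrix with a random half of its entries observed.
n, rank = 20, 1
M = rng.standard_normal((n, rank)) @ rng.standard_normal((rank, n))
mask = rng.random((n, n)) < 0.5

# Depth-two linear "network": M_hat = W2 @ W1, small random initialization
# (both the depth and the scale 0.1 are illustrative choices).
scale = 0.1
W1 = scale * rng.standard_normal((n, n))
W2 = scale * rng.standard_normal((n, n))

lr = 0.02
losses = []
for _ in range(5000):
    M_hat = W2 @ W1
    R = mask * (M_hat - M)          # residual on observed entries only
    losses.append(0.5 * np.sum(R ** 2))
    # Gradient descent on the factors, not on M_hat directly; this is
    # what gives the trajectory its implicit bias toward low rank.
    gW2 = R @ W1.T
    gW1 = W2.T @ R
    W2 -= lr * gW2
    W1 -= lr * gW1

print(f"training loss: {losses[0]:.2f} -> {losses[-1]:.2e}")
```

Running this, the loss on the observed entries drops by orders of magnitude, and the learned product W2 @ W1 is itself approximately rank-1, which is the kind of trajectory-dependent phenomenon the talk analyzes.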
You are invited to a scheduled Zoom meeting. Zoom is Yale’s audio and video conferencing platform.
- Join from PC, Mac, Linux, iOS or Android: https://yale.zoom.us/j/95863208758
- Or Telephone: 203-432-9666 (2-ZOOM if on-campus) or 646-568-7788
- Password: 24
- Meeting ID: 958 6320 8758
- International numbers available: https://yale.zoom.us/u/acqwvKmSRE
For H.323 and SIP information for video conferencing units, see: https://yale.service-now.com/it?id=support_article&sys_id=434b72d3db9e8fc83514b1c0ef961924