Mikhail Belkin, University of California San Diego
Remarkable recent advances in deep neural networks are rapidly changing science and society.
Never before has a technology been deployed so widely and so quickly with so little understanding of its fundamentals. I will argue that developing a fundamental mathematical theory of deep learning is necessary for a successful AI transition and, furthermore, that such a theory may well be within reach. I will discuss what such a theory might look like and some of the ingredients that are already available.
In particular, I will discuss how deep neural networks of various architectures learn features and how the lessons of deep learning can be incorporated into non-backpropagation-based algorithms that we call Recursive Feature Machines. I will provide a number of experimental results on different types of data, including text and images, as well as some connections to classical statistical methods, such as Iteratively Reweighted Least Squares.
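To give a rough sense of the alternating structure such an algorithm might have, here is a minimal Python sketch of a Recursive-Feature-Machine-style iteration, assuming a Laplace kernel machine whose Mahalanobis metric is updated from the average gradient outer product of the fitted predictor. The function names, the finite-difference gradient estimate, and all parameter choices are illustrative assumptions for this sketch, not the implementation presented in the talk.

    import numpy as np

    def mahalanobis_laplace_kernel(X, Z, M, bandwidth=1.0):
        """Laplace kernel exp(-||x - z||_M / bandwidth) under metric M (assumed kernel choice)."""
        diff = X[:, None, :] - Z[None, :, :]                 # shape (n, m, d)
        sq = np.einsum('nmd,de,nme->nm', diff, M, diff)      # (x - z)^T M (x - z)
        return np.exp(-np.sqrt(np.maximum(sq, 0.0)) / bandwidth)

    def rfm(X, y, n_iters=5, reg=1e-3, bandwidth=1.0, eps=1e-4):
        """Alternate kernel ridge regression with a metric update (illustrative sketch)."""
        n, d = X.shape
        M = np.eye(d)                                        # start with the Euclidean metric
        for _ in range(n_iters):
            # (1) Fit kernel ridge regression with the current metric M.
            K = mahalanobis_laplace_kernel(X, X, M, bandwidth)
            alpha = np.linalg.solve(K + reg * np.eye(n), y)

            def predict(Q):
                return mahalanobis_laplace_kernel(Q, X, M, bandwidth) @ alpha

            # (2) Update M as the average gradient outer product of the
            #     fitted predictor, estimated here by central finite differences.
            grads = np.zeros((n, d))
            for j in range(d):
                e = np.zeros(d)
                e[j] = eps
                grads[:, j] = (predict(X + e) - predict(X - e)) / (2 * eps)
            M = grads.T @ grads / n
        # Refit once so the returned predictor matches the final metric.
        K = mahalanobis_laplace_kernel(X, X, M, bandwidth)
        alpha = np.linalg.solve(K + reg * np.eye(n), y)
        return M, alpha

    # Toy usage: the target depends only on coordinate 0 of 10-dimensional inputs,
    # so the learned metric M should concentrate on that coordinate.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 10))
    y = np.tanh(2 * X[:, 0])
    M, alpha = rfm(X, y)

The design point the sketch tries to convey is that feature learning here happens without backpropagation: the metric update plays the role that learned first-layer weights play in a neural network, and the reweighting step is what invites comparison with Iteratively Reweighted Least Squares.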
3:30pm - Pre-talk meet-and-greet teatime - 219 Prospect Street, 13th floor. Light snacks and beverages will be available in the kitchen area.