On Symmetries and Feature Learning in Simple Neural Networks

Mon Dec 12, 2022, 4:00–5:00 p.m.

Speaker

Joan Bruna, New York University

For all their mathematical mysteries, two important properties of neural networks are their ability to encode symmetries into their architectures, and their ability to ‘discover’ hidden low-dimensional structures within high-dimensional data. In this talk, I will cover two snippets capturing each of these phenomena. In the first part, we will study approximation properties of symmetric and antisymmetric functions by neural networks, and establish an exponential advantage of pairwise models (underpinning transformers) over unary ones (underpinning ‘DeepSets’). In the second part, we study the learnability of ‘single-index models’, a class of semiparametric models with hidden low-dimensional structure, and show how shallow neural networks are able to learn them with near-optimal sample complexity, showcasing the benefits of feature learning in the high-dimensional regime.
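To make the unary-vs-pairwise distinction concrete, here is a minimal sketch (not taken from the talk; the function names `deepsets`, `pairwise`, `phi`, `psi`, and `rho` are illustrative). A unary ‘DeepSets’-style model applies a map to each element independently before pooling, while a pairwise model pools over all element pairs, the interaction structure underlying self-attention. Both are invariant to permuting the input set. (The single-index models of the second part take the form y = f(⟨w, x⟩) for an unknown direction w and link function f.)

```python
# Illustrative sketch: two permutation-invariant architectures on a set
# {x_1, ..., x_n} of scalars. All function names here are hypothetical.
import numpy as np

def deepsets(x, phi, rho):
    """Unary ('DeepSets'-style) model: rho(sum_i phi(x_i))."""
    return rho(sum(phi(xi) for xi in x))

def pairwise(x, psi, rho):
    """Pairwise model: rho(sum_{i,j} psi(x_i, x_j)) -- the interaction
    structure underpinning transformer-style attention."""
    return rho(sum(psi(xi, xj) for xi in x for xj in x))

rng = np.random.default_rng(0)
x = rng.normal(size=5)
perm = rng.permutation(5)

phi = lambda t: np.array([t, t**2])   # per-element feature map
psi = lambda s, t: s * t              # pairwise interaction
rho = lambda v: float(np.sum(v))      # readout after pooling

# Both outputs are invariant to permuting the input set.
assert np.isclose(deepsets(x, phi, rho), deepsets(x[perm], phi, rho))
assert np.isclose(pairwise(x, psi, rho), pairwise(x[perm], psi, rho))
```

With these particular choices, `pairwise` computes (Σᵢ xᵢ)², a simple function of pairwise products that a sum of unary features can only reproduce with extra feature dimensions; the talk's result quantifies an exponential gap of this flavor.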

Joint work with A. Zweig (first part) and A. Bietti, MJ Song and C. Sanford (second part). 

In-person seminars will be held at Dunham Lab, 10 Hillhouse Ave., Room 220, with an option of remote participation via Zoom.

Password: 24

Or telephone: 203-432-9666 (2-ZOOM if on campus) or 646-568-7788

Meeting ID: 924 1107 7917

Joan Bruna’s website