Speaker
Dan Yamins, Stanford University
The emerging field of NeuroAI has leveraged techniques from artificial intelligence to analyze large-scale brain data. In this talk, I will show that the connection between neuroscience and AI can be fruitful in both directions. Toward “AI driving neuroscience,” I will discuss recent advances in self-supervised learning with deep recurrent networks that yield a developmentally plausible model of the primate visual system. In the direction of “neuroscience guiding AI,” I will present a novel cognitively grounded computational theory of perception that generates powerful new learning algorithms for real-world scene understanding. Taken together, these ideas illustrate how neural networks optimized to solve cognitively informed tasks provide a unified framework for both understanding the brain and improving AI.
In-person seminars will be held in Mason Lab 211, with optional remote access.
3:30pm – Pre-talk meet-and-greet teatime – Dana House, 24 Hillhouse Avenue
Dan Yamins’ website