Andrew Saxe

2023 United Kingdom Award Finalist — Faculty

Current Position:
Joint Group Leader, Gatsby Computational Neuroscience Unit & Sainsbury Wellcome Centre

University College London

Machine Learning

Recognized for: Fundamental contributions to the study of deep neural networks, which provide insight into representation learning, the process by which artificial and natural systems discover and organise knowledge.

Areas of Research Interest and Expertise: Deep learning, Psychology

Previous Positions:

BS, Princeton University, USA
PhD, Stanford University, USA
Postdoc, Harvard University Center for Brain Science, USA
Postdoc, University of Oxford, UK
Sir Henry Dale Fellow, University of Oxford, UK
Associate Professor, University of Oxford, UK

Research Summary: Understanding how thoughts and behaviours arise from the interactions of billions of neurons is one of the great scientific challenges of our time. We are not born with knowledge of the objects, concepts, and plans that populate the inner world of our mind; instead, these representations are learned, both during critical periods of development and continuing into adulthood. Unravelling the influence of learning on neural representations is a fundamental goal in neuroscience because learning underpins a great diversity of behaviours, including the ability to update old knowledge with new information. Andrew Saxe, PhD, uses mathematics to understand a type of learning model—modern ‘deep’ artificial neural networks—and applies this knowledge to develop theories for how learning influences behaviours.

Deep learning is a class of artificial neural network modelling that takes inspiration from the brain; deep learning models can be refined with biological information and studied to generate theories of how our brains work. Deep learning is also prominent in engineering, where research can be applied to understand and improve artificial intelligence systems. Saxe has derived exact mathematical solutions describing how neural networks learn, yielding generalised theories that apply equally to artificial neural networks, rodent brains, or human brains. With these theories and solutions, Saxe has explained complex behaviours, such as how children acquire knowledge and how and when neural networks (both artificial and biological) can generalise their knowledge to new scenarios, and has proposed a new theory of mental replay that addresses a longstanding debate in cognitive neuroscience: whether long-term memory requires input from the hippocampus.
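The exact solutions mentioned above were derived for deep *linear* networks, a setting in which the gradient-descent learning dynamics can be solved in closed form. The following is a minimal illustrative sketch (my own example, not Saxe's code) of a hallmark prediction of that theory: target modes with larger singular values are learned earlier in training.

```python
import numpy as np

# Illustrative sketch only: plain gradient descent on a two-layer *linear*
# network W2 @ W1, the setting where exact learning-dynamics solutions exist.
rng = np.random.default_rng(0)

# Build a target linear map with well-separated singular values.
U, _ = np.linalg.qr(rng.standard_normal((8, 8)))
V, _ = np.linalg.qr(rng.standard_normal((8, 8)))
s_target = np.array([5.0, 3.0, 1.0, 0.5, 0.25, 0.1, 0.05, 0.01])
target = U @ np.diag(s_target) @ V.T

# Small random initial weights, as the theory assumes.
W1 = 0.01 * rng.standard_normal((8, 8))
W2 = 0.01 * rng.standard_normal((8, 8))

lr, losses, snapshots = 0.05, [], {}
for step in range(2000):
    err = W2 @ W1 - target                    # residual of the composed map
    losses.append(0.5 * np.sum(err ** 2))
    # Simultaneous gradient step on both weight matrices.
    W2, W1 = W2 - lr * err @ W1.T, W1 - lr * W2.T @ err
    if step in (150, 1999):                   # record effective singular values
        snapshots[step] = np.linalg.svd(W2 @ W1, compute_uv=False)

# Early in training, only the strongest target modes have been acquired;
# the weaker modes are picked up later, each on its own sigmoidal schedule.
print("singular values at step 150: ", np.round(snapshots[150], 3))
print("singular values at step 2000:", np.round(snapshots[1999], 3))
print(f"loss: {losses[0]:.2f} -> {losses[-1]:.4f}")
```

The staggered, mode-by-mode acquisition visible here is the dynamic that has been linked to stage-like transitions in children's semantic development.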

"My research seeks to unravel the computational principles governing learning in artificial and biological systems. Understanding the brain and mind is one of the grand scientific challenges of our time and I’m honored to see our efforts recognized."

Key Publications: 

  1. Cao, C. Summerfield, A. Saxe. Characterizing Emergent Representations in a Space of Candidate Learning Rules for Deep Networks. 34th Conference on Neural Information Processing Systems (NeurIPS), 2020.
  2. Lee, S. Goldt, A. Saxe. Continual Learning in the Teacher-Student Setup: Impact of Task Similarity. Proceedings of the 38th International Conference on Machine Learning, 2021.
  3. Flesch, K. Juechems, T. Dumbalska, A. Saxe, C. Summerfield. Orthogonal Representations for Robust Context-Dependent Task Performance in Brains and Neural Networks. Neuron, 2022.
  4. A.M. Saxe, S. Sodhani, S. Lewallen. The Pathway Race Reduction: Dynamics of Abstraction in Gated Networks. Proceedings of the 39th International Conference on Machine Learning, 2022.

Other Honors: 

2022 Schmidt Science Polymath Award, Schmidt Futures
2020 CIFAR Azrieli Global Scholar, CIFAR
2019 Wellcome-Beit Prize, Wellcome Trust
2016 Robert J. Glushko Outstanding Doctoral Dissertations Prize, Cognitive Science Society
2010–2013 National Defense Science and Engineering Graduate Fellowship

In the Media: 

Nature - Model architecture can transform catastrophic forgetting into positive transfer

Saxe Lab (UCL Theory of Learning Lab)