Current Projects / Funding

We are currently funded by the Wellcome Trust and the Cooperative AI Foundation. Students have funding from diverse sources, including the Medical Research Council UK, the Economic and Social Research Council UK, and the Clarendon Fund, as well as private sources.

Currently, major projects in the lab include the following:

1. Learning in humans and machines

We want to understand the principles by which humans learn, and to find ways to help people learn faster and more effectively. Using behavioural research, mathematical modelling, and brain imaging, we examine the way that people acquire knowledge and how patterns of brain activity change during this process. A major focus is how humans learn and generalise both the structure and the contents of experienced data. We compare these changes to those that occur during learning in deep neural networks. We also study how different curricula help or hinder learning, and why they do so, and we are using machine learning tools to design new training regimes that promote learning.

Recent papers:

Holton et al. Humans and neural networks show similar patterns of transfer and interference during continual learning. In press, Nature Human Behaviour.

Pesnot-Lerousseau and Summerfield. Do humans learn like transformer networks? In press, Nature Human Behaviour.

Mi and Summerfield. Curriculum learning of a cue combination task. PsyArXiv preprint.

Thompson et al. Zero-shot counting in a dual-streams neural network. Neuron 2025.

2. Human intrinsic motivation

What motivates us to achieve our goals? Whilst people respond to rewards and punishments, they are also intrinsically motivated: that is, they seek to achieve ends for their own sake. We want to understand the principles that underlie human intrinsic motivation, and especially how humans combine an impulse for curiosity with the need to control their environment. We are also interested in human preferences over technology, including AI, and in how AI systems can be trained to behave in ways that meet human needs and preferences.

Recent papers:

Sandbrink et al. Understanding human meta-control and its pathologies using deep neural networks. PsyArXiv preprint.

Christian et al. Reward Model Interpretability via Optimal and Pessimal Tokens. ACM Conference on Fairness, Accountability, and Transparency (Oral).

3. Using AI to promote cooperation

Many global challenges, like climate change and social justice, are problems of collective action. Humans need to reach agreement in order to decide how to divide assets or coordinate behaviour. We study how AI can be used to assist with this process fairly and beneficially.

Recent papers:

Tessler et al. AI can help humans find common ground in democratic deliberation. Science 2024.

Koster et al. Using deep reinforcement learning to promote sustainable human behaviour on a common pool resource problem. Nature Communications 2025.

Past Projects / Funding

In the past we have been funded by the European Research Council, the Human Brain Project, Schmidt Futures and others. We are grateful to all our funders for their generous support.

European Research Council

1/ Abstraction and Generalisation in Human Decision-Making (ERC Consolidator Award 725937 NEUROABSTRACTION). Collaborators: Tim Behrens, Mark Stokes, Matthew Rushworth.

The goal of this project is to understand how humans acquire conceptual knowledge, and use this knowledge to make decisions in novel settings. We want to address the following questions:

  1. How do neural representations in the human brain change during concept acquisition?
  2. How do humans learn to perform multiple tasks at once, and encode task representations in a way that avoids interference?
  3. How can we build computational models, such as neural networks, that learn and generalise new abstract concepts?

Human Brain Project

2/ Hierarchical Planning During Navigation (Human Brain Project award, SGA2 T2.2.7 and T2.2.8). PIs: Giovanni Pezzulo, Hugo Spiers, and Christopher Summerfield. Collaborators: Nico Schuck, Kate Jeffery.

The goal of this project is to understand how representations of the world are formed and used during navigation. The work combines neural recordings in rats and brain imaging in humans. We want to answer the following questions:

  1. How do rats and humans plan in complex environments? We predict that they will use multi-scale representations (i.e., both coarse and fine in space and time) and plan over both scales simultaneously.
  2. What is the relationship between the hippocampus, the medial orbitofrontal cortex, and the dorsomedial prefrontal cortex during planning? All three regions have been implicated in forming state representations that may be useful for planning. What is their relative contribution?