Teaching

How to build a brain from scratch

This advanced option course discusses the search for a general theory of learning and inference in biological brains. It draws upon diverse themes in the fields of psychology, neuroscience, machine learning and artificial intelligence research. We begin by posing broad questions. What are brains for, and what does it mean to ask how they “work”? Then, over a series of lectures, we discuss parallel computational approaches in machine learning/AI and psychology/neuroscience, including reinforcement learning, deep learning, and Bayesian methods. We contrast computational and representational approaches to understanding neuroscience data. We ask whether current approaches in machine learning are feasible and scalable, and which methods – if any – resemble the computations observed in biological brains. We review how high-level cognitive functions – attention, episodic memory, concept formation, reasoning and executive control – are being instantiated in artificial agents, and how their implementation draws upon what we know about the mammalian brain. Finally, we contemplate the outlook for the field, and ask whether AI will be “solved” in the near future.

Lecture 1: Building and understanding brains

Introduction; Recent advances in AI research; Biological and artificial brains; The computational approach; Definitions of intelligence; Good old-fashioned AI

Lecture 2: Model-free reinforcement learning

Why do we have a brain?; Classical and operant conditioning; Reinforcement learning and the Bellman equation; Temporal difference learning; Q-learning, eligibility traces and actor-critic methods
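The temporal-difference update at the heart of Q-learning can be sketched in a few lines. The five-state chain environment, learning rate and exploration schedule below are illustrative assumptions for this sketch, not material from the course:

```python
import random

# Toy environment (an assumption for illustration): states 0..4 on a
# chain; action 0 moves left, action 1 moves right; reaching state 4
# yields reward 1 and ends the episode.
def step(state, action):
    next_state = max(0, state - 1) if action == 0 else min(4, state + 1)
    reward = 1.0 if next_state == 4 else 0.0
    return next_state, reward, next_state == 4

def q_learning(episodes=500, alpha=0.1, gamma=0.9, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(5)]        # tabular Q(s, a)
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # epsilon-greedy action selection
            if rng.random() < epsilon:
                a = rng.randrange(2)
            else:
                a = max((0, 1), key=lambda act: q[s][act])
            s2, r, done = step(s, a)
            # temporal-difference update toward the Bellman target
            target = r + (0.0 if done else gamma * max(q[s2]))
            q[s][a] += alpha * (target - q[s][a])
            s = s2
    return q

q = q_learning()
```

After training, "move right" should have the higher value in every non-terminal state, with values discounted by distance from the goal.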

Lecture 3: Feedforward networks and object categorisation

Parametric models for object recognition; Critiques of pure representationalism; Perceptrons and sigmoid neurons; Depth: the multilayer perceptron; Challenges: optimisation, generalisation and overfitting
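The perceptron and its error-correction learning rule can be sketched minimally; the toy AND dataset, learning rate and epoch count are assumptions for illustration:

```python
# Minimal perceptron sketch: a thresholded weighted sum, trained with
# the classic error-correction rule (weights move only on mistakes).

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

def train(data, epochs=20, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in data:
            err = y - predict(w, b, x)   # 0 if correct, otherwise +/-1
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# AND is linearly separable, so a single unit suffices
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train(data)
```

A single unit of this kind cannot represent XOR, which is one motivation for the depth discussed in the lecture: stacking layers into a multilayer perceptron.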

Lecture 4: Structuring information in space and time

Convnets and translational invariance; Convnets and the primate ventral stream; Limitations of feedforward deep networks; Hierarchies of temporal integration in the brain; Temporal integration in perceptual decision-making; Recurrent neural networks and the parietal cortex
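The weight sharing behind convnets' treatment of translation can be shown with a one-dimensional convolution; the toy signal and edge-detecting kernel are assumptions for illustration:

```python
# Minimal 1-D convolution sketch: the same kernel is applied at every
# position (weight sharing), so shifting the input shifts the feature
# map by the same amount -- translation equivariance.

def conv1d(signal, kernel):
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

edge = [1.0, -1.0]                 # simple edge-detecting kernel
x = [0, 0, 1, 1, 0, 0, 0]
shifted = [0, 0, 0, 1, 1, 0, 0]    # same pattern, one step later

responses = conv1d(x, edge)
responses_shifted = conv1d(shifted, edge)
```

The shifted input produces the same responses one step later; combined with pooling, this equivariance is what gives convnets their approximate translational invariance.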

Lecture 5: Computation and modular memory systems

Modular memory systems; Working memory gating in the PFC; LSTMs; The differentiable neural computer; The problem of continual learning
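The gating idea that links PFC working memory to LSTMs can be sketched with a single-unit cell. The weights below are hand-set assumptions chosen to make the gates' behaviour visible, not learned values, and recurrent connections are omitted for brevity:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Minimal single-unit LSTM-style cell: multiplicative gates decide
# what enters, what persists, and what is read out of the cell state.
def lstm_step(x, c, w):
    i = sigmoid(w['wi'] * x + w['bi'])    # input gate
    f = sigmoid(w['wf'] * x + w['bf'])    # forget gate
    o = sigmoid(w['wo'] * x + w['bo'])    # output gate
    g = math.tanh(w['wg'] * x + w['bg'])  # candidate value
    c = f * c + i * g                     # gated cell-state update
    h = o * math.tanh(c)                  # gated read-out
    return h, c

# Forget gate held open, input gate shut: the cell state persists
# across steps, like an item maintained in working memory.
hold = dict(wi=0, bi=-10, wf=0, bf=10, wo=0, bo=10, wg=0, bg=0)
c = 1.0
for _ in range(10):
    h, c = lstm_step(0.0, c, hold)

# Forget gate shut: one step clears the memory.
clear = dict(hold, bf=-10)
h2, c2 = lstm_step(0.0, 1.0, clear)
```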

Lecture 6: Complementary learning systems theory

Dual process memory models; The hippocampus as a non-parametric storage device; Experience-dependent replay and consolidation; The deep Q-network; Knowledge partitioning and resource allocation
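The replay mechanism that links hippocampal consolidation to the deep Q-network can be sketched as a buffer of stored transitions replayed in random, interleaved order; the capacity and batch size are illustrative assumptions:

```python
import random
from collections import deque

# Minimal experience-replay buffer sketch: transitions are stored as
# they occur and later replayed in a random order, decorrelating
# updates -- the DQN analogue of interleaved hippocampal replay.
class ReplayBuffer:
    def __init__(self, capacity=1000, seed=0):
        self.buffer = deque(maxlen=capacity)   # oldest entries evicted
        self.rng = random.Random(seed)

    def add(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # uniform minibatch without replacement
        return self.rng.sample(list(self.buffer), batch_size)

buf = ReplayBuffer(capacity=100)
for t in range(50):
    buf.add(t, 0, 0.0, t + 1, False)
batch = buf.sample(8)
```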

Lecture 7: Unsupervised and generative models

Unsupervised learning: knowing that a thing is a thing; Encoding models: Hebbian learning and sparse coding; Variational autoencoders; The Bayesian approach; Predictive coding
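One standard stabilised form of the Hebbian learning mentioned above is Oja's rule, sketched here on toy two-dimensional data; the dataset, learning rate and step count are assumptions for illustration:

```python
import random

# Oja's rule sketch: Hebbian learning (weight change proportional to
# pre- times postsynaptic activity) plus a normalising decay term, so
# the weight vector converges to the input's principal direction with
# unit length rather than growing without bound.
def oja(data, epochs=200, lr=0.01, seed=0):
    rng = random.Random(seed)
    w = [rng.uniform(-0.1, 0.1) for _ in range(2)]
    for _ in range(epochs):
        for x in data:
            y = sum(wi * xi for wi, xi in zip(w, x))   # postsynaptic rate
            # Hebbian term y*x, minus Oja's decay term y^2 * w
            w = [wi + lr * (y * xi - y * y * wi) for wi, xi in zip(w, x)]
    return w

# inputs lie mostly along the (1, 1) direction
data = [(1.0, 1.0), (-1.0, -1.0), (0.9, 1.1), (-1.1, -0.9)]
w = oja(data)
```

The learned weight vector ends up (approximately) unit length and aligned with the data's first principal component, which is one way a purely local learning rule can discover structure without supervision.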

Lecture 8: Building a model of the world for planning and reasoning

Temporal abstraction in RL and the cingulate cortex; Multiple controllers for behaviour; Cognitive maps and the hippocampus; Hierarchical planning; Grid cells and conceptual knowledge