Interpretability

Our current methods of training capable neural networks give us little insight into how or why they function. This week we cover the field of interpretability, which aims to change this by developing methods for understanding how neural networks think.

This week’s curriculum starts with readings related to mechanistic interpretability, a subfield of interpretability which aims to understand networks at the level of individual neurons and the circuits they form. It then moves on to concept-based interpretability, which focuses on techniques for automatically probing (and potentially modifying) human-interpretable concepts stored within neural networks. Note that this week’s readings differ significantly depending on how much ML background readers have and how much technical detail they want to cover.

Core readings:

Mechanistic interpretability readings:
  1. For those with significant ML background:
    1. Zoom In: an introduction to circuits (Olah et al., 2020) (35 mins)
    2. Toy models of superposition (Elhage et al., 2022) (only sections 1 and 2) (30 mins)
  2. For those with less ML background:
    1. Feature visualization (Olah et al., 2017) (20 mins)
      1. Feature visualization is a set of techniques for developing a qualitative understanding of what different neurons within a network are doing (a minimal code sketch of the core idea appears after this reading list).
    2. Zoom In: an introduction to circuits (Olah et al., 2020) (35 mins)
      1. See above.
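
To give a concrete feel for the technique, here is a minimal sketch of feature visualization by activation maximization, written in PyTorch under some illustrative assumptions: the GoogLeNet model, the inception4c layer, and channel 7 are arbitrary choices rather than ones taken from the readings. Real implementations add image regularization and transformation robustness that this sketch omits.

```python
# Minimal sketch of feature visualization by activation maximization.
# Model, layer, and channel choices are illustrative, not canonical.
import torch
from torchvision import models

model = models.googlenet(weights="DEFAULT").eval()
for p in model.parameters():
    p.requires_grad_(False)

# Capture one layer's activations with a forward hook.
activations = {}
def save_activation(module, inputs, output):
    activations["out"] = output
model.inception4c.register_forward_hook(save_activation)

# Start from random noise and ascend the gradient of one channel's mean activation.
image = torch.rand(1, 3, 224, 224, requires_grad=True)
optimizer = torch.optim.Adam([image], lr=0.05)
channel = 7  # which channel ("neuron") to visualize

for step in range(256):
    optimizer.zero_grad()
    model(image)
    loss = -activations["out"][0, channel].mean()  # negative: we maximize
    loss.backward()
    optimizer.step()
    image.data.clamp_(0, 1)  # keep pixels in a valid range

# `image` now roughly shows what this channel responds to; real feature
# visualization adds regularizers and random transformations for cleaner results.
```
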
Concept-based interpretability readings:
  1. For those with significant ML background:
    1. Discovering latent knowledge in language models without supervision (Burns et al., 2022) (only sections 1-3) (30 mins)
      1. This paper explores a technique for automatically identifying whether a model believes that statements are true or false, without requiring any ground-truth data (the core objective is sketched in code after this reading list).
  2. For those with less ML background:
    1. Probing a deep neural network (Alain and Bengio, 2018) (only sections 1 and 3) (15 mins)
      1. This paper introduces the technique of linear probing, a crucial tool in concept-based interpretability (a minimal probe example also appears after this reading list).
    2. Acquisition of chess knowledge in AlphaZero (McGrath et al., 2021) (only up to the end of section 2.1) (20 mins)
      1. This paper provides a case study of using concept-based interpretability techniques to understand AlphaZero’s development of human chess concepts. The first two sections are a useful review of the field of interpretability.
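
To make the Burns et al. approach concrete, the sketch below implements the core of their Contrast-Consistent Search (CCS) objective: a small probe is trained on the hidden states of paired “true”/“false” phrasings of each statement so that the two probabilities are consistent (they sum to one) and confident, without using any truth labels. The hidden states here are random placeholders, and the paper’s normalization step is omitted.

```python
# Minimal sketch of the Contrast-Consistent Search (CCS) objective.
# The hidden states below are random placeholders; in practice they come
# from a language model run on contrast pairs like "X? Yes" / "X? No".
import torch
import torch.nn as nn

hidden_dim, n_pairs = 768, 1024
h_pos = torch.randn(n_pairs, hidden_dim)  # statement phrased as true
h_neg = torch.randn(n_pairs, hidden_dim)  # same statement phrased as false

probe = nn.Sequential(nn.Linear(hidden_dim, 1), nn.Sigmoid())
optimizer = torch.optim.Adam(probe.parameters(), lr=1e-3)

for step in range(1000):
    optimizer.zero_grad()
    p_pos = probe(h_pos).squeeze(-1)
    p_neg = probe(h_neg).squeeze(-1)
    consistency = (p_pos - (1 - p_neg)) ** 2       # the two should sum to 1
    confidence = torch.minimum(p_pos, p_neg) ** 2  # discourage p = 0.5 everywhere
    loss = (consistency + confidence).mean()
    loss.backward()
    optimizer.step()

# After training, p_pos > 0.5 is read as "the model represents this statement
# as true", up to a global sign ambiguity since no labels were used.
```
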
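
Linear probing itself fits in a few lines. The sketch below trains a logistic-regression probe on frozen intermediate activations to test whether a concept is linearly decodable from them; the activations and concept labels are random placeholders standing in for data collected from a real network and a labelled dataset.

```python
# Minimal sketch of linear probing: fit a linear classifier on frozen
# intermediate activations and use held-out accuracy as evidence that a
# concept is linearly represented at that layer. Placeholder data only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

n_examples, hidden_dim = 2000, 512
activations = np.random.randn(n_examples, hidden_dim)  # layer activations (placeholder)
labels = np.random.randint(0, 2, size=n_examples)      # concept present / absent (placeholder)

X_train, X_test, y_train, y_test = train_test_split(
    activations, labels, test_size=0.2, random_state=0
)

probe = LogisticRegression(max_iter=1000)
probe.fit(X_train, y_train)

# Accuracy well above chance suggests the concept is linearly decodable;
# with random placeholders it will hover around 50%.
print("probe accuracy:", probe.score(X_test, y_test))
```
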
For everyone:
  1. Locating and Editing Factual Associations in GPT: blog post (Meng et al., 2022) (10 mins)
    1. Meng et al. demonstrate how concept-based interpretability can be used to modify neural weights in semantically meaningful ways.
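
The full method in the paper involves causal tracing and a covariance-weighted update, but its core mathematical move can be illustrated with a simplified rank-one weight edit that forces a chosen matrix to map a “key” vector (representing the subject) to a new “value” vector (representing the edited fact). The sketch below shows only that simplified idea, with made-up key and value vectors; it is not the paper’s algorithm.

```python
# Simplified rank-one weight edit, the core idea behind methods like ROME.
# This is an illustration only: the actual method also uses causal tracing
# to choose the layer and a covariance term to preserve other associations.
import torch

d_in, d_out = 64, 64
W = torch.randn(d_out, d_in)   # stand-in for an MLP weight matrix
k = torch.randn(d_in)          # "key": representation of the subject (made up)
v_target = torch.randn(d_out)  # "value": representation of the new fact (made up)

# Rank-one update so that the edited matrix maps k exactly to v_target:
residual = v_target - W @ k
W_new = W + torch.outer(residual, k) / (k @ k)

# Sanity check: should print True.
print(torch.allclose(W_new @ k, v_target, atol=1e-4))
```
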

Optional readings:

Mechanistic interpretability:

Concept-based interpretability:

More speculative topics:

Intro to brain-like AGI safety (Byrnes, 2022) (part 3: two subsystems, part 6: big picture, part 7: worked example)
Eliciting latent knowledge (Christiano et al., 2021) (up to the end of the Ontology Identification section on page 38) (60 mins)
