AI, Machine Learning, and their Potentially Extreme Stakes

ℹ️
This is a link post for the 2023 AI Governance Curriculum

This week aims to provide background on AI and machine learning.

This week also provides some background on a focus of this course: extreme-stakes impacts that AI might have this century, including potential impacts on humanity’s trajectory. First, some context on that focus. Admittedly, the possibility of AI having such extreme impacts is in many ways wild and unintuitive; these possibilities can seem remote from our experiences and from the important problems that hurt people today. That is true in some ways. But does it mean we should dismiss extreme possibilities? Maybe not. As some of this week’s readings argue, deeply wild societal and technological changes have happened before, and they could happen again, especially through AI. If these changes are not steered well, real people will be hurt and potential value will be wasted, potentially at a massive scale. With that much at stake, we may have more than enough reason to, as a start, learn what we can about the extreme risks and opportunities of AI.

Core Readings:

🔗
Three Impacts of Machine Intelligence (Christiano, 2014) (10 minutes)
🔗
But what is a neural network? [video] (3Blue1Brown, 2017) (20 minutes)
🔗
This Can't Go On (Karnofsky, 2021) (13 minutes)
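
As a quick illustration of the arithmetic behind “This Can’t Go On”: a minimal sketch in Python, using round-number assumptions (roughly $100 trillion of annual world output, 2% growth, and ~10^70 atoms in our galaxy as a generous ceiling of one atom per dollar of output) rather than Karnofsky’s exact figures:

```python
import math

# Round-number assumptions for illustration (not Karnofsky's exact inputs):
current_output = 1e14  # world output: roughly $100 trillion per year
growth_rate = 0.02     # ~2% annual growth, the modern norm
atom_ceiling = 1e70    # ~number of atoms in our galaxy

# Solve current_output * (1 + growth_rate)**t = atom_ceiling for t, i.e. the
# number of years until output exceeds one dollar per atom in the galaxy:
years = math.log(atom_ceiling / current_output) / math.log(1 + growth_rate)
print(f"2% growth hits the ceiling in ~{years:,.0f} years")
```

Even under that absurdly generous ceiling, steady 2% growth runs out within several thousand years, a blink on historical timescales, which is the sense in which the current growth regime “can’t go on.”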

Additional Recommendations:

On AI and machine learning:

How it works:

🔗
Introduction to reinforcement learning (van Hasselt, 2021) (watch from 2:00 and stop at 36:30, at the section titled Inside the Agent) (37 minutes)
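
To make the lecture’s core loop concrete (the agent acts, the environment returns a reward and next state, and the agent updates its value estimates), here is a minimal tabular Q-learning sketch, one classic instantiation of that loop. The five-state “corridor” environment is a hypothetical toy invented for illustration, not an example from the lecture:

```python
import random

# Toy 1-D "corridor" (hypothetical): states 0..4; reaching state 4 ends the
# episode with reward 1, and every other transition gives reward 0.
N_STATES = 5
ACTIONS = [-1, +1]                     # move left, move right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # learning rate, discount, exploration

Q = [[0.0] * len(ACTIONS) for _ in range(N_STATES)]  # Q[state][action_index]

def step(state, move):
    """Environment: apply the move; reward 1 only on reaching the right end."""
    nxt = min(max(state + move, 0), N_STATES - 1)
    done = nxt == N_STATES - 1
    return nxt, (1.0 if done else 0.0), done

for _ in range(500):  # episodes
    state, done = 0, False
    while not done:
        if random.random() < EPSILON:  # explore: random action
            a = random.randrange(len(ACTIONS))
        else:                          # exploit: greedy with random tie-break
            best = max(Q[state])
            a = random.choice([i for i, q in enumerate(Q[state]) if q == best])
        nxt, reward, done = step(state, ACTIONS[a])
        # Q-learning update: nudge Q(s,a) toward reward + gamma * max_a' Q(s',a')
        target = reward + (0.0 if done else GAMMA * max(Q[nxt]))
        Q[state][a] += ALPHA * (target - Q[state][a])
        state = nxt

for s in range(N_STATES - 1):
    print(f"state {s}: left={Q[s][0]:.2f}, right={Q[s][1]:.2f}")
```

After training, the learned values favor “move right” in every non-terminal state, which is the optimal policy for this toy task.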

Some of what cutting-edge AI can do:

On the long term and existential risks:

🔗
Sections “Longtermism isn’t necessary…” (8:21-11:16) and “How X-Risk Reduction Compares…” (34:34-50:22) of this interview with Carl Shulman (Wiblin and Harris, 2021), on why catastrophic risk reduction may be a priority even if we are only considering present generations (20 minutes)
🔗
The Precipice (Ord, 2020), especially ch. 3-5 on why the largest source of existential risk may be AI. (Note the specific arguments about AI are arguably somewhat outdated.)
🔗
For perspectives skeptical of part or all of the above, see e.g. the above Shulman interview; MacAskill, 2019 (30 minutes); Bostrom, 2009 (see e.g. here for an AI-relevant response) (10 minutes); Tarsney, 2019 (90 minutes); Masrani, 2020 (40 minutes)
