Introduction and AI Forecasting

This is a link post for the 2023 AI Governance Curriculum

Some researchers argue that, with the coming impacts of AI, the world is sailing into largely uncharted waters. AI, they argue, will drastically transform the world, likely starting this century. That may bring many new opportunities as well as risks, and foreseeing these risks may be critical. After all, failure to anticipate emerging risks can leave us unable to steer away from hazards, and unprepared to respond well in crises.

To anticipate some of this change, and to gain a foundation that can help inform our steering, we will spend the first part of this course studying research that aims to forecast the impacts of AI. Some of the research we will read makes strong claims about important issues—arguably too strong to accept without scrutiny, yet too important not to consider seriously. Of course, there are limits to how much we can know about the future, but learning what we can still seems worthwhile.

This week, following an introductory overview of AI governance research, we’ll examine several ideas that, at least for some researchers, are core motivations for focusing on AI governance: a mix of historical, economic, and other empirical arguments which suggest that AI will likely have transformative impacts this century. Additionally, we’ll study some research that aims to forecast the nature of these changes.

Core Readings:

AI Strategy, Policy, and Governance (Dafoe, 2019) (22 mins) A brief introduction to the AI governance field and its research directions.

Several pieces aiming to better understand and forecast advanced AI:

Yudkowsky Contra Christiano On AI Takeoff Speeds (2022) (just read up to, but not including, the section “Fine, Let’s Nitpick The Hell Out Of The Chimps Vs. Humans Example”)

Additional Recommendations:

On AI takeoff speeds (roughly, how abrupt AI advances will be) and whether AI will drive explosive economic growth, including more skeptical perspectives:

General discussion:

Historical research:

Economics research:

The first talk in the event “Economic Growth in the Long Run” (starts at 4:06) (summarizes Aghion et al.’s 2017 paper “Artificial Intelligence and Economic Growth”)

More on (other aspects of) technical forecasting:

The Scaling Hypothesis (Shah, 2021) (just read the first section)

[10] To elaborate, takeoff speeds seem important for the question, “How hard is solving the AI alignment problem?” (The alignment problem will be introduced next week.) After all, if AI advances are more gradual, then researchers may be able to safely use each generation of AI systems as a testing ground and toolkit for aligning the next, slightly more advanced AI systems. But there will be little or no time for that if AI advances are sufficiently abrupt.
