Some researchers argue that, with the coming impacts of AI, the world is sailing into largely uncharted waters. AI, they argue, will drastically transform the world, likely starting this century. That may bring many new opportunities as well as risks, and foreseeing these risks may be critical. After all, failure to anticipate emerging risks can leave us unable to steer away from hazards, and unprepared to respond well in crises.
To anticipate some of this change, and to gain a foundation that can help inform our steering, we will spend the first part of this course studying research that aims to forecast the impacts of AI. Some of the research we will read makes strong claims about important issues: claims arguably too strong to accept without scrutiny, yet too important not to consider seriously. Of course, there are limits to how much we can know about the future, but learning what we can still seems worthwhile.
This week, following an introductory overview of AI governance research, we'll examine several ideas that, at least for some researchers, are core motivations for focusing on AI governance: a mix of historical, economic, and other empirical arguments that suggest AI will likely have transformative impacts this century. Additionally, we'll study some research that aims to forecast the nature of these changes.
Core Readings:
Several pieces aiming to better understand and forecast advanced AI:
Additional Recommendations:
On AI takeoff speeds (roughly, how abrupt AI advances will be) and whether AI will drive explosive economic growth, including more skeptical perspectives:
General discussion:
Historical research:
Economics research:
More on (other aspects of) technical forecasting:
[10] To elaborate, takeoff speeds seem important for the question of how hard the AI alignment problem will be to solve. (The alignment problem will be introduced next week.) After all, if AI advances are more gradual, researchers may be able to safely use each generation of AI systems as a testing ground and toolkit for aligning the next, slightly more advanced generation. But there will be little or no time for that if AI advances are sufficiently abrupt.