Moving on from non-governmental actors, this week we examine some basics of how governments and international cooperation work, as well as a couple of concrete tools they have for influencing AI. Several of this week’s readings focus on foundational topics for readers with limited relevant expertise, much as Week 0 covered technical basics. Foreign policy and IR wonks should feel free to move quickly through familiar topics, and all readers may find it worthwhile to consider what the political dynamics we cover might mean for AI.
Understanding governments is relevant here because they may have significant influence over the future of AI. Governments hold major relevant authorities: among other levers, they can (at least in theory) regulate AI companies and hardware companies, shape the global movement of key inputs into AI research (e.g., researchers and hardware), fund AI-related research and development, deploy AI systems, and take part in international negotiations on any of these policy areas. Several of these levers could do much to worsen, or to prevent, a risky race to the bottom on AI. Additionally, historical precedents (Leung, 2019) and the potential usefulness of AI to governments suggest that governments may take major roles in the development and use of AI.
(On the other hand, some skeptics argue that governments will lack the speed or interest to be heavily involved in important AI advances, especially if these advances come soon or with little warning.)
Core Readings
Additional Recommendations
Some materials from China-based sources, showing that there is also at least some China-based interest in AI safety and cooperation:
The full historical studies from which the excerpts in the above compilation are drawn:
Additional case studies / related analysis:
Some pieces on AI and international security: