International Competition and Cooperation

Moving on from non-governmental actors, this week we examine some basic aspects of how governments and international cooperation work, as well as a couple of concrete tools they have for influencing AI. Some of this week's readings focus on foundational topics for readers with limited relevant expertise, much as Week 0 covered technical basics. Foreign policy and IR wonks should feel free to move quickly through familiar topics, and all readers may find it interesting to focus on what the political dynamics we cover might mean for AI.

Understanding governments is relevant because they may have significant influence over the future of AI. Governments hold major relevant authorities; among other levers, they can (at least in theory) regulate AI companies and hardware companies, shape the global movement of key inputs into AI research (e.g., researchers and hardware), fund AI-related research and development, deploy AI systems, and take part in international negotiations on any of these policy areas. Some of these abilities give governments considerable room to worsen, or to prevent, a risky race to the bottom on AI. Additionally, historical precedents (Leung, 2019) and the potential usefulness of AI to governments suggest that governments may take major roles in the development and use of AI.

(On the other hand, some skeptics argue that governments will lack the speed or interest to be heavily involved in important AI advances, especially if these advances come soon or with little warning.)

Core Readings

🔗
Nuclear Proliferation (and Nonproliferation) Explained [video] (Council on Foreign Relations, 2016) (7 minutes)
🔗
Winning the Tech Talent Competition [video] (Zwetsloot, 2021) (4 minutes)
🔗
Choking Off China’s Access to the Future of AI (Allen, 2022) (read the introduction and the “Three Key Takeaways” section) (5 minutes)
🔗
Compilation: Historical case studies of technology governance and international agreements (various authors, compiled by this course’s organizers) (60 minutes)

Additional Recommendations

Some materials on political and policymaking dynamics relevant to AI governance:

🔗
The rise and importance of Secret Congress (Bazelon and Yglesias, 2021)
🔗
The role of philanthropic funding in politics (Karnofsky, 2013) (5 minutes)
🔗
U.S. Relations With China (Council on Foreign Relations, 2022)
🔗
“Obstacles to Policy Investment in Democracies” section of Policy Making for the Long Term in Advanced Democracies (Jacobs, 2016) (15 minutes)

Some materials from China-based sources, showing that there is also at least some China-based interest in AI safety and cooperation:

🔗
AI Governance in 2020: A Year in Review: Observations from 52 Global Experts (Shanghai Institute for Science of Science, 2021) (just read the table of contents and any sections that seem especially interesting)
🔗
Some Thoughts and Analyses on How AI will Impact International Relations (Fu, 2019) (just read the abstract and any sections that seem especially interesting)
🔗
Deciphering China’s AI Dream (Ding, 2018): In this report, Ding outlines key features of China's AI strategy and addresses relevant misconceptions. (See also his podcast on this topic and a more recent talk.)
🔗
For further readings on AI and high-skill immigration, see the Center for Security and Emerging Technology's publications on immigration.

The full historical studies that the excerpts in the above compilation come from:

Additional case studies / related analysis:

Some pieces on AI and international security:

🔗
“Governance theory” section of this reading list

Next in the AI Governance curriculum
