Some Strategy and Policy Ideas

This is a link post for the 2023 AI Governance Curriculum.

As we have seen, future developments in AI may pose major risks. For the remainder of the course, we will focus on what to do about all this—how we can help society mitigate the risks and realize the benefits of AI. To begin, we will attempt to spell out some context on potential paths to impact.

We can think of AI governance as the challenge of ensuring that leading AI developers are willing and able to develop and deploy AI in safe and beneficial ways. As discussed earlier, one way this might fail to happen—and a major concern of the long-term AI governance field—is that global vulnerability to unilateral action, combined with incentives to cut corners, could make it likely that advanced AI is deployed before its safety problems are solved. (More generally, these factors could make it more likely that AI is deployed in harmful ways before systems to mitigate those harms are put in place.)
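To make this corner-cutting dynamic concrete, here is a minimal game-theoretic sketch. It is only an illustration: the two-developer setup and all payoff numbers are assumptions invented for this example, not figures from the curriculum or the literature.

```python
# Toy normal-form game illustrating the corner-cutting incentive described
# above. The two-developer setup and all payoff numbers are made up for
# illustration; nothing here comes from the curriculum itself.

ACTIONS = ("cautious", "rush")

# PAYOFF[(my_action, their_action)] -> my payoff (higher is better).
PAYOFF = {
    ("cautious", "cautious"): 3,  # safe outcome with shared benefits
    ("cautious", "rush"): 0,      # I lose the race and still bear the risk
    ("rush", "cautious"): 4,      # I win the race (ignoring my own risk-taking)
    ("rush", "rush"): 1,          # everyone cuts corners; high accident risk
}

def best_response(their_action: str) -> str:
    """Return the action that maximizes my payoff, given the other's action."""
    return max(ACTIONS, key=lambda mine: PAYOFF[(mine, their_action)])

for theirs in ACTIONS:
    print(f"If the other developer plays {theirs!r}, my best response is "
          f"{best_response(theirs)!r}")
```

With these made-up payoffs, rushing is each developer's best response regardless of what the other does, even though mutual caution leaves both better off than mutual rushing: the structure of a prisoner's dilemma.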

Considering the potential importance of cooperation for several approaches to these problems—especially if they are to be adopted widely enough to make a big dent in risk—some AI governance work focuses on advancing various forms of cooperation on AI. From a different angle, some of the field focuses on shaping who leads in AI (e.g., by informing or boosting particular actors), sometimes with the aim of enabling the above approaches (e.g., by increasing leaders’ cautiousness, beneficence, ability to have competition constrained,[22] or lead size).[23]

Arguably, given deep strategic uncertainty, a lack of expert consensus, and how under-explored many of these issues are,[24] the field cannot yet offer anything close to a comprehensive AI policy wish list. So instead, we will study a short, very incomplete, and mostly preliminary list of ideas. We hope that many readers will help improve this state of affairs by working with others to identify, refine, and realize promising ideas.

Core Readings

“International Security” and “AI Ideal Governance”[25] sections of “AI Governance: A Research Agenda” (Dafoe, 2018) (pages 42-51 in the PDF)

Additional Recommendations

Some more potential framings of AI safety problems from a governance angle:

On prestige motivations in AI competition:

On AI research publication norms:

On corporate self-regulation:

On advancing certain kinds of AI:

Game theoretic models of AI competition:

Miscellaneous:

[22] As hypothetical examples, leading AI developers may be more able to have competition among them constrained if they are all in one jurisdiction (so that just one government can regulate them), or if they are all in a few countries that are open to international coordination.

[23] One way of thinking about this is that, if an AI developer has a bigger lead, they may be more comfortable taking the time for responsible AI development (rather than rushing and cutting corners).
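As a back-of-the-envelope illustration of this intuition, here is a minimal sketch in Python. The baseline risk, the risk reduction per month of safety work, and the specific lead sizes are all hypothetical numbers chosen for this example; the point is only the qualitative pattern that a larger lead buys more time for safety work.

```python
# A minimal arithmetic sketch of the intuition in footnote 23. All numbers
# here are hypothetical and chosen purely for illustration.

def affordable_safety_months(lead_months: float) -> float:
    """Months of pre-deployment safety work the leader can spend while
    still deploying no later than its closest competitor."""
    return max(0.0, lead_months)

def toy_accident_risk(safety_months: float, base_risk: float = 0.5,
                      reduction_per_month: float = 0.05) -> float:
    """Toy model: each month of safety work cuts risk linearly, floored at 0."""
    return max(0.0, base_risk - reduction_per_month * safety_months)

for lead in (0, 3, 6, 12):
    months = affordable_safety_months(lead)
    print(f"lead = {lead:>2} months -> {months:>4.1f} months of safety work "
          f"affordable; toy accident risk: {toy_accident_risk(months):.2f}")
```

The game-theoretic models of AI competition recommended above explore dynamics like this in much more depth.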

[24] See, e.g., Dafoe (2018), Muehlhauser (2021), and Karnofsky (2021) for discussion of the preliminary nature of some of these ideas.

[25] People sometimes read “ideal governance” as referring to perfect governance, but it may be more useful to interpret it as referring to good governance.
