Potential Risks from Advanced Artificial Intelligence: The Philanthropic Opportunity - Open Philanthropy Project - Why reducing risks from AI might be one of the most outstanding philanthropic opportunities. (40 mins.)
Research Agenda, Legal Priorities Project, Section 4, pp. 35–46 (20 mins.)
Research Agenda, Legal Priorities Project, Section 4, pp. 46–55
Misuse risks (security):
Accident risks (safety):
Concrete problems in AI safety (1.5 hours)
Regulatory markets for AI safety (1.5 hours)
Guide to working in AI policy and strategy, 80,000 Hours
The case for building expertise for US AI policy and how, 80,000 Hours
How should AI and machine learning be defined for legal purposes? What are the consequences of different definitions? Think about different dimensions, e.g.,
- technology-based regulation vs. risk-based regulation
- regulation vs. co-regulation vs. self-regulation
- hard law vs. soft law (e.g., international standards)
- international regulation vs. national regulation
Does the regulation of the internet provide a helpful analogy to the regulation of AI?
- The early internet was largely unregulated in the United States (e.g., no enforced sales tax on e-commerce; no speech liability for platform providers; no treatment of ISPs as utility providers). What are the most compelling rationales for treating AI differently?
Much of the concern around AI centers on a “hard takeoff” scenario for AGI. What current legal tools, if any, would help prevent a hard takeoff from happening and/or going badly?
How open to competition should the AI development process be? Are there risks in defining AI research and development broadly and regulating it? What would regulation of non-public advances even look like? How is this different from arms control?
Could the recognition of rights for AI create a moral hazard for designers or parties who would otherwise be held accountable?
Would a windfall clause be legal in your country or jurisdiction? Can you think of other ways to distribute the benefits from AI?