Existential Risk

In this week of the programme, you will grapple with how to think about risks that could permanently destroy humanity’s long-term potential. If we want to improve the long-term future, should addressing these risks be our top priority?

You will also discuss why emerging technologies, such as genetically engineered pathogens and advanced artificial intelligence, could be a major source of existential risk.

Curriculum

Core materials

🔗
‘Chapter 2: Existential Risk’ in ‘The Precipice’ by Toby Ord
🔗
‘Pandemics’ and ‘Unaligned Artificial Intelligence’ in ‘Chapter 5: Future Risks’ in ‘The Precipice’ by Toby Ord

Recommended reading

🔗
‘Chapter 4: Anthropogenic Risks’ in ‘The Precipice’ by Toby Ord

More to explore

Specific risks:

General:

🔗
'Carl Shulman on the common-sense case for existential risk work and its practical implications', 80,000 Hours. Existential risk reduction could be a top priority even if we are not longtermists.
🔗
‘Appendix B: Population Ethics and Existential Risk’ in ‘The Precipice’ by Toby Ord. Discusses how our perspective on existential risk might be affected by our views on the moral value of different types of populations, and on the ethics of affecting which populations come into existence - a branch of ethics called ‘population ethics’.
🔗
'Are we living at the hinge of history?' - William MacAskill. Toby Ord argues that we live at a uniquely important ‘time of perils’, when the way we respond to existential threats that are just now emerging could determine our species’ fate. This paper pushes back against the view that right now is a uniquely important time.

Exercise: Reducing extinction risk on current priorities

In this week's exercise, you will reflect on whether the case for putting more resources towards reducing extinction risk is compelling only if we value future generations.

If focussing more on risks of extinction seems valuable even when we consider only the interests of the present generation, then it seems extremely valuable once we also take into account the interests of future generations.

💡
Imagine that the US government is willing to spend $1 million to save the life of a citizen. (This is broadly in line with the figures used by real government agencies; for instance, the US EPA values a statistical American life at $7.4 million.)

Say an asteroid is about to hit the Americas and kill every US citizen. Up to how much should the US government be willing to spend to avert this disaster?

In this case, we’re not asking for your view of how much the US government should put towards avoiding this outcome! Imagine that they simply extended their evaluation of the badness of one US death to every US citizen dying.
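
To make the arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python. The US population figure is an assumption added for illustration; the exercise itself does not specify one.

```python
# Back-of-the-envelope sketch: scale the stipulated per-life figure to every citizen.
VALUE_PER_LIFE = 1_000_000      # $1 million per citizen, as stipulated above
US_POPULATION = 330_000_000     # assumed ~330 million citizens (illustrative figure)

value_of_averting_disaster = VALUE_PER_LIFE * US_POPULATION
print(f"Willingness to pay to avert the asteroid: ${value_of_averting_disaster:,}")
# -> roughly $330 trillion under these assumptions
```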

💡
Imagine that the cumulative probability of all the disasters which could kill every US citizen this century is ⅙. How much should the US government be willing to spend to reduce this probability to 0?
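
Continuing the sketch above (same assumed population figure), one simple answer multiplies the value of averting the disaster by the probability that it would otherwise occur:

```python
# Expected-value sketch: probability of catastrophe times the value of preventing it.
VALUE_PER_LIFE = 1_000_000
US_POPULATION = 330_000_000     # assumed figure, as in the previous sketch
value_of_averting_disaster = VALUE_PER_LIFE * US_POPULATION

risk_this_century = 1 / 6       # cumulative probability stipulated in the prompt
willingness_to_pay = risk_this_century * value_of_averting_disaster
print(f"Willingness to pay to eliminate the risk: ${willingness_to_pay:,.0f}")
# -> roughly $55 trillion under these assumptions
```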
💡
Imagine we know of an intervention that can remove 1% of the risk of all Americans being killed this century. What is the maximum amount the US government should be willing to spend on this intervention?
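
“Remove 1% of the risk” can be read in two ways, so the hedged sketch below shows both readings; the population and per-life figures are the same illustrative assumptions as above.

```python
# Sketch of the 1% question, showing two possible readings of "1% of the risk".
VALUE_PER_LIFE = 1_000_000
US_POPULATION = 330_000_000     # assumed figure, as above
value_of_averting_disaster = VALUE_PER_LIFE * US_POPULATION

# Reading 1: the intervention removes one percentage point of absolute risk.
print(f"One percentage point removed: ${0.01 * value_of_averting_disaster:,.0f}")
# -> roughly $3.3 trillion

# Reading 2: the intervention removes 1% of the 1/6 risk (i.e. a 1/600 reduction).
print(f"1% of the 1/6 risk removed: ${(1 / 6) * 0.01 * value_of_averting_disaster:,.0f}")
# -> roughly $550 billion
```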
💡
Do you think we already know of interventions that pass this cost-benefit test (i.e., that would reduce the chance of extinction by 1% for less than the amount the US government would be willing to spend to do this)?
💡
How does this cost-benefit analysis look different if we place a higher or lower value on saving citizens’ lives?
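
One way to explore this question, continuing the same sketch, is to recompute the figures for a range of per-life valuations. Only the $1 million and $7.4 million figures come from the exercise; the low value is illustrative.

```python
# Sensitivity sketch: how the headline figures change with the per-life valuation.
US_POPULATION = 330_000_000     # assumed figure, as above
RISK_THIS_CENTURY = 1 / 6

# $1m and $7.4m are the figures mentioned in the exercise; $100k is an illustrative low value.
for value_per_life in (100_000, 1_000_000, 7_400_000):
    total = value_per_life * US_POPULATION
    print(f"${value_per_life:,} per life -> avert disaster: ${total:,}; "
          f"eliminate the 1/6 risk: ${RISK_THIS_CENTURY * total:,.0f}")
```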
💡
So far in this exercise we have assumed that only currently living American citizens matter - what happens if we take into account the interests of future generations?

Next in the Introduction to Longtermism Series
