Topics
AI Alignment
Interpretability and MIRI's research agenda
AI Governance
Biosecurity
Wild Animal Welfare
Alternative Proteins - Technical
Introduction to Effective Altruism
A basic introduction to effective altruism and why it's important to prioritise when doing good
An introduction to some tools and approaches we can use to quantify and evaluate how much good an intervention achieves
Arguments for expanding our moral consideration beyond ordinary societal norms
An introduction to the concept of longtermism and its philosophical definition
What are existential risks, and what is the case for treating them as a moral priority?
Why try to improve the way we reason, and what are some promising approaches to doing so?
Introduction to Longtermism Series
An introduction to the concept of longtermism and its philosophical definition
Thinking about the broad picture of where humanity has been and where it is going
What are existential risks, and what is the case for treating them as a moral priority?
Can we predictably intervene to shape the balance of possible futures?
Where and how should we intervene?
Inspiration from people who are doing longtermist work
Why care about AI?
Resources that explain what AI and machine learning are
Why AI might be a big deal
Forecasting when transformative AI will arrive
How can we get AI to act in accordance with our values?
Other Topics
Overview of population ethics relevant to longtermism
A discussion of the ethics of a social discount rate
A list of sources behind the blog posts in The Most Important Century Series
How do we approach decision making when we lack certainty in any one moral theory?
A series of blog posts that argue that the 21st century could be the most important century ever for humanity
Is the expected value of the future positive or negative? A crucial consideration for the value of extinction risk reduction.
What should we do given that we can't evaluate the vast indirect effects of our actions?
Is maximising expected value the right approach for trying to do the most good? How should we approach tiny probabilities of doing vast amounts of good?
Finding the motivation to do good in a broken world