Introduction to Machine Learning

ℹ️

This week focuses on foundational concepts in machine learning, for those who are less familiar with them or who want to review the basics. If you’d like to learn about machine learning (ML) in more depth, see the Learn More section at the end of this curriculum. For those with little ML background, this week’s core readings will take roughly 45 mins longer than in other weeks, and the exercises are much more extensive. We recommend spending the time to work through them all, to provide a solid foundation for the rest of the course.

After the first reading, which gives a high-level outline of the fields of artificial intelligence and machine learning, the next six readings work through six core concepts. The first three are the crucial techniques behind deep learning (the leading approach to machine learning): neural networks, gradient descent, and backpropagation. The next three are the main types of task machine learning is used for: supervised learning, self-supervised learning, and reinforcement learning.
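To make the second of those techniques concrete, here is a minimal sketch of gradient descent on a simple one-dimensional function. The function, learning rate, and step count are illustrative choices, not taken from the readings:

```python
# Gradient descent on f(w) = (w - 3)^2, which has its minimum at w = 3.
# At each step we move w a small amount against the gradient.

def grad(w):
    # Analytic derivative of (w - 3)^2
    return 2 * (w - 3)

w = 0.0      # starting guess
lr = 0.1     # learning rate (illustrative)
for _ in range(100):
    w -= lr * grad(w)

print(round(w, 4))  # converges very close to 3
```

Training a neural network follows the same loop, except that the parameter is a large collection of weights and the gradient is computed automatically by backpropagation rather than by hand.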

Core readings:

🔗
What is self-supervised learning? (CodeBasics, 2021) (5 mins)
🔗
Introduction to reinforcement learning (von Hasselt, 2021) (from 2:00 to 1:02:10, ending at the beginning of the section titled Inside the Agent: Models) (60 mins)

Optional readings:

🔗
The spelled-out intro to neural networks and backpropagation: building micrograd (Karpathy, 2022) (150 mins) A lecture introducing the most foundational concepts in deep learning in a very comprehensive way, from a leading expert.
🔗
Machine learning for humans (Maini and Sabri, 2017) Maini and Sabri provide a long but accessible introduction to machine learning.
🔗
Machine learning glossary (Google, 2017) For future reference, see this glossary for explanations of unfamiliar terms.

On reinforcement learning:

🔗
Spinning up deep RL: part 1 and part 2 (OpenAI, 2018) (40 mins) This reading provides a more technical introduction to reinforcement learning (for more, see also the last half-hour of von Hasselt (2021)).
🔗
A (long) peek into reinforcement learning (Weng, 2018) (35 mins) Weng provides a concise yet detailed introduction to reinforcement learning.

Exercises:

  1. What are the main similarities and differences between the process of fitting a linear regression to some data, and the process of training a neural network on the same data?
  2. Explain why the “nonlinearity” in an artificial neuron (e.g. the sigmoid or ReLU function) is so important. What would happen if we removed all the nonlinearities in a deep neural network? (Hint: try writing out explicit equations for a neural network with only one hidden layer between the input and output layers, and see what happens if you remove the nonlinearity.)
  3. Practical exercises
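As a hint for exercise 2, the following sketch checks numerically that two stacked linear layers with no nonlinearity between them are equivalent to a single linear layer. The weight matrices are random illustrative examples, not from the readings:

```python
# Without a nonlinearity between layers, composing two linear maps
# collapses into one linear map: W2 @ (W1 @ x) == (W2 @ W1) @ x.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=3)           # input vector
W1 = rng.normal(size=(4, 3))     # "hidden layer" weights
W2 = rng.normal(size=(2, 4))     # "output layer" weights

deep = W2 @ (W1 @ x)             # two linear layers, no nonlinearity
shallow = (W2 @ W1) @ x          # one layer with the combined weights

print(np.allclose(deep, shallow))  # True: extra depth adds no expressivity
```

This is why removing all nonlinearities reduces any deep network to a single linear regression, no matter how many layers it has.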

Next in the AGI Safety Fundamentals curriculum
