This week focuses on foundational concepts in machine learning, for those who are less familiar with them or who want to review the basics. If you’d like to learn about machine learning (ML) in more depth, see the Learn More section at the end of this curriculum. For those with little background in ML, this week’s core readings will take roughly 45 minutes longer than in other weeks, and the exercises are much more extensive than in other weeks. We recommend spending the time to work through them all, to build a solid foundation for the rest of the course.
After the first reading, which gives a high-level outline of the fields of artificial intelligence and machine learning, the next six readings work through six core concepts. The first three are the crucial techniques behind deep learning (the leading approach to machine learning): neural networks, gradient descent, and backpropagation. The next three are the main types of task for which machine learning is used: supervised learning, self-supervised learning, and reinforcement learning.
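The three techniques above fit together in a single training loop: a neural network makes predictions, backpropagation computes gradients of the loss with respect to the weights, and gradient descent updates the weights. The sketch below (not taken from the readings; all names and hyperparameters are illustrative choices) shows this loop for a tiny one-hidden-layer network on a toy supervised-learning task:

```python
import numpy as np

# A minimal sketch of training a neural network: forward pass, manually
# derived backpropagation, and a gradient descent update. The network
# size, learning rate, and XOR task are illustrative choices.
rng = np.random.default_rng(0)

# Toy supervised-learning data: y is the XOR of two binary inputs.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

# Parameters of a 2 -> 4 -> 1 network with sigmoid nonlinearities.
W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0           # gradient descent step size
losses = []
for step in range(5000):
    # Forward pass through the network.
    h = sigmoid(X @ W1 + b1)          # hidden-layer activations
    p = sigmoid(h @ W2 + b2)          # predictions
    losses.append(np.mean((p - y) ** 2))

    # Backpropagation: apply the chain rule layer by layer.
    dp = (p - y) * p * (1 - p)        # error at the output pre-activation
    dW2 = h.T @ dp
    db2 = dp.sum(axis=0)
    dh = (dp @ W2.T) * h * (1 - h)    # error propagated back to layer 1
    dW1 = X.T @ dh
    db1 = dh.sum(axis=0)

    # Gradient descent: move each parameter downhill along its gradient.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(f"loss before training: {losses[0]:.4f}, after: {losses[-1]:.4f}")
```

Running the loop drives the loss down over time; modern frameworks automate the backward pass, but the underlying computation is the same chain-rule bookkeeping written out here.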
Exercises:
- What are the main similarities and differences between the process of fitting a linear regression to some data, and the process of training a neural network on the same data?
- Explain why the “nonlinearity” in an artificial neuron (e.g. the sigmoid or ReLU function) is so important. What would happen if we removed all the nonlinearities in a deep neural network? (Hint: try writing out explicit equations for a neural network with only one hidden layer between the input and output layers, and see what happens if you remove the nonlinearity.)
- Practical exercises
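For the exercise on nonlinearities, the hint can also be checked numerically. The sketch below (an illustration, not an answer key) shows that a one-hidden-layer network with its nonlinearity removed computes exactly the same function as a single linear layer, so stacking linear layers adds no expressive power:

```python
import numpy as np

# Sketch for the nonlinearity exercise: with the nonlinearity removed,
# a network with one hidden layer collapses into a single affine map.
# All shapes and values here are arbitrary illustrative choices.
rng = np.random.default_rng(1)

W1 = rng.normal(size=(3, 5)); b1 = rng.normal(size=5)   # layer 1
W2 = rng.normal(size=(5, 2)); b2 = rng.normal(size=2)   # layer 2
x = rng.normal(size=(4, 3))                             # batch of inputs

# "Deep" network, but with no nonlinearity between the layers:
deep = (x @ W1 + b1) @ W2 + b2

# Expanding the algebra gives one layer with W = W1 W2, b = b1 W2 + b2:
shallow = x @ (W1 @ W2) + (b1 @ W2 + b2)

print(np.allclose(deep, shallow))   # the two networks agree exactly
```

The same expansion works for any number of stacked linear layers, which is why the nonlinearity between layers is what makes depth useful.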