Machine learning is a scientific discipline concerned with the design and development of algorithms that allow computers to learn from data. A major focus of machine learning is to automatically learn complex patterns and to make intelligent decisions based on them. The possible data inputs that feed a learning task can be very large and diverse, which makes modelling choices and prior assumptions critical to the design of effective algorithms.
This course focuses on the methodology underlying supervised and unsupervised learning, with a particular emphasis on the mathematical formulation of algorithms, and the way they can be implemented and used in practice. We will therefore describe some necessary tools from optimization theory, and explain how to use them for machine learning.
Numerical illustrations will be given for most of the studied methods.
A glimpse at theoretical guarantees, such as upper bounds on the generalization error, is provided at the end of the course.
A prerequisite for this course is Machine Learning I.
1. Supervised learning (3 lectures)
- LDA / QDA for Gaussian models
- Logistic regression, generalized linear models
- Regularization (Ridge, Lasso, etc.)
- Support Vector Machine (SVM), the hinge loss and margin
- Kernel methods, the kernel trick and RKHS
- Multiclass classification
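As a taste of the regularization topic above, a minimal ridge regression sketch on synthetic data (the data, the true coefficients, and the penalty strength are all assumptions made for illustration; only NumPy is used):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic regression problem (assumed for illustration).
n, d = 50, 5
X = rng.normal(size=(n, d))
w_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = X @ w_true + 0.1 * rng.normal(size=n)

lam = 1.0  # regularization strength (assumed)
# Ridge closed form: w = (X^T X + lam I)^{-1} X^T y
w_ridge = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)
```

The Lasso has no such closed form; it is typically handled with the proximal methods covered in the optimization part of the course.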
2. Neural Networks (1 lecture)
- Introduction to neural networks
- The perceptron, multilayer neural networks, deep learning
- Stochastic gradient descent, backpropagation
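The perceptron and stochastic gradient descent themes above can be illustrated by training a single sigmoid unit (logistic regression) with per-sample updates; the toy data, learning rate, and epoch count below are assumptions for the sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
# Linearly separable toy data (assumed).
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

w = np.zeros(2)
b = 0.0
lr = 0.5  # learning rate (assumed)
for epoch in range(20):
    for i in rng.permutation(len(X)):
        p = 1.0 / (1.0 + np.exp(-(X[i] @ w + b)))  # sigmoid output
        g = p - y[i]                               # dLoss/dlogit for cross-entropy
        w -= lr * g * X[i]                         # per-sample (stochastic) update
        b -= lr * g

acc = np.mean(((X @ w + b) > 0) == (y == 1))
```

For a multilayer network the same per-sample gradient is obtained by backpropagation through the layers.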
3. Optimization for Machine Learning (2 lectures)
- Proximal gradient descent
- Quasi-Newton methods
- Stochastic gradient descent and beyond
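A proximal gradient iteration (ISTA) for the Lasso can be sketched as follows; the synthetic sparse problem and the penalty level are assumptions for illustration:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(2)
n, d = 100, 20
X = rng.normal(size=(n, d))
w_true = np.zeros(d)
w_true[:3] = [2.0, -1.5, 1.0]  # sparse ground truth (assumed)
y = X @ w_true + 0.05 * rng.normal(size=n)

lam = 5.0                                  # penalty level (assumed)
step = 1.0 / np.linalg.norm(X, 2) ** 2     # 1 / Lipschitz constant of the gradient
w = np.zeros(d)
for _ in range(500):
    grad = X.T @ (X @ w - y)               # gradient of the smooth part
    w = soft_threshold(w - step * grad, step * lam)  # proximal step
```

The iteration alternates a gradient step on the smooth least-squares term with the proximal operator of the non-smooth penalty, which here is simply soft-thresholding.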
4. Unsupervised learning (2 lectures)
- Gaussian mixtures and EM
- Matrix Factorization, Non-negative Matrix Factorization
- Kernel PCA
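A minimal EM loop for a two-component one-dimensional Gaussian mixture, in the spirit of the first topic above; the data, initialization, and iteration count are assumptions for the sketch:

```python
import numpy as np

rng = np.random.default_rng(3)
# Two well-separated 1-D Gaussian clusters (assumed toy data).
x = np.concatenate([rng.normal(-2, 0.5, 300), rng.normal(3, 0.5, 300)])

# Initial parameters (assumed): means, variances, mixture weights.
mu = np.array([-1.0, 1.0])
var = np.array([1.0, 1.0])
weights = np.array([0.5, 0.5])

for _ in range(50):
    # E-step: posterior responsibility of each component for each point.
    dens = weights * np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) \
        / np.sqrt(2 * np.pi * var)
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: weighted maximum-likelihood updates.
    nk = resp.sum(axis=0)
    mu = (resp * x[:, None]).sum(axis=0) / nk
    var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    weights = nk / len(x)
```

Each iteration increases the log-likelihood of the data, which is the key monotonicity property of EM.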
5. A glimpse at theoretical guarantees (1 lecture)
- Generalization error
- Oracle inequalities
- Concentration inequalities
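The theoretical topics above typically combine a concentration inequality with a union bound. For instance, Hoeffding's inequality for i.i.d. variables $Z_i \in [a, b]$ reads

```latex
\mathbb{P}\left( \left| \frac{1}{n}\sum_{i=1}^{n} Z_i - \mathbb{E}[Z_1] \right| \ge \varepsilon \right)
\le 2 \exp\left( -\frac{2 n \varepsilon^2}{(b-a)^2} \right),
```

and, for a finite class $\mathcal{F}$ and a loss taking values in $[0,1]$, a union bound turns it into a generalization bound: with probability at least $1-\delta$, for all $f \in \mathcal{F}$,

```latex
R(f) \le \widehat{R}_n(f) + \sqrt{\frac{\log\!\left( 2|\mathcal{F}|/\delta \right)}{2n}} .
```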
Course language: French or English; slides in English
ECTS credits: 4
Teaching coordinator: Erwan Le Pennec