Objectives:
Machine learning is a scientific discipline concerned with the design and development of algorithms that allow computers to learn from data. A major focus of machine learning is to automatically learn complex patterns and to make intelligent decisions based on them. The set of possible inputs to a learning task can be very large and diverse, which makes modelling choices and prior assumptions critical to the design of effective algorithms.
This course focuses on the methodology underlying supervised and unsupervised learning, with a particular emphasis on the mathematical formulation of algorithms and the way they can be implemented and used in practice. We will therefore describe the necessary tools from optimization theory and explain how to use them for machine learning. Numerical illustrations will be given for most of the methods studied. A glimpse of theoretical guarantees, such as upper bounds on the generalization error, is provided at the end of the course.


Syllabus:
1.    Supervised learning
•    Linear and quadratic discriminant analysis (LDA/QDA) for Gaussian models
•    Logistic regression, generalized linear models
•    Regularization (Ridge, Lasso, etc.; see the sketch after this list)
•    Support vector machines (SVM), the hinge loss and the margin
•    Kernel methods, the kernel trick and reproducing kernel Hilbert spaces (RKHS)
•    Multiclass classification
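
To make the regularization item above concrete, here is a minimal NumPy sketch of ridge regression via its closed-form solution. The synthetic data and the penalty strength lam are illustrative placeholders, not material from the course.

    import numpy as np

    # Synthetic data: n samples, d features (placeholder values).
    rng = np.random.default_rng(0)
    n, d = 50, 10
    X = rng.normal(size=(n, d))
    w_true = rng.normal(size=d)
    y = X @ w_true + 0.1 * rng.normal(size=n)

    # Ridge estimator: argmin_w ||Xw - y||^2 + lam * ||w||^2,
    # with closed form w = (X^T X + lam * I)^{-1} X^T y.
    lam = 1.0
    w_ridge = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

Solving the linear system directly is preferable to forming the matrix inverse explicitly, both for speed and for numerical stability.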

2.    Neural networks
•    Introduction to neural networks
•    The perceptron, multilayer neural networks, deep learning
•    Stochastic gradient descent, backpropagation
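
As a rough illustration of the last two items, here is a minimal NumPy sketch of a one-hidden-layer network trained by stochastic gradient descent with hand-written backpropagation. The architecture, learning rate, and synthetic data are assumptions made for the example.

    import numpy as np

    rng = np.random.default_rng(0)
    # Toy regression data (placeholders).
    X = rng.normal(size=(200, 2))
    y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)

    # One hidden layer of width h with tanh activation.
    h = 16
    W1 = 0.5 * rng.normal(size=(2, h))
    b1 = np.zeros(h)
    w2 = 0.5 * rng.normal(size=h)
    b2 = 0.0

    lr = 0.05
    for epoch in range(100):
        for i in rng.permutation(len(X)):        # SGD: one sample at a time
            a = np.tanh(X[i] @ W1 + b1)          # forward pass
            pred = a @ w2 + b2
            err = pred - y[i]                    # squared loss (pred - y)^2
            # Backpropagation: apply the chain rule layer by layer.
            g_w2 = 2 * err * a
            g_b2 = 2 * err
            g_pre = (2 * err * w2) * (1 - a**2)  # tanh'(z) = 1 - tanh(z)^2
            g_W1 = np.outer(X[i], g_pre)
            W1 -= lr * g_W1
            b1 -= lr * g_pre
            w2 -= lr * g_w2
            b2 -= lr * g_b2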

3.    Optimization for machine learning
•    Gradient descent, proximal gradient descent (see the sketch after this list)
•    Quasi-Newton methods
•    Stochastic gradient descent and beyond
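
The proximal gradient item above can be illustrated by ISTA applied to the Lasso, where the proximal operator of the l1 penalty is soft-thresholding. In this sketch the data, the regularization level lam, and the step-size rule are assumptions chosen for the example.

    import numpy as np

    def soft_threshold(v, t):
        # Proximal operator of t * ||.||_1.
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    rng = np.random.default_rng(0)
    n, d = 100, 20
    X = rng.normal(size=(n, d))
    w_true = np.zeros(d)
    w_true[:3] = [2.0, -1.0, 0.5]            # sparse ground truth
    y = X @ w_true + 0.1 * rng.normal(size=n)

    # ISTA for the Lasso: min_w (1/2n) ||Xw - y||^2 + lam * ||w||_1.
    lam = 0.1
    L = np.linalg.norm(X, 2) ** 2 / n        # Lipschitz constant of the smooth part
    step = 1.0 / L
    w = np.zeros(d)
    for _ in range(500):
        grad = X.T @ (X @ w - y) / n         # gradient of the smooth part
        w = soft_threshold(w - step * grad, step * lam)

The step size 1/L, with L the largest eigenvalue of X^T X / n, is the standard choice that guarantees convergence for this composite objective.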

4.    Unsupervised learning
•    Gaussian mixtures and EM (see the sketch after this list)
•    Matrix factorization, non-negative matrix factorization (NMF)
•    Spectral clustering
•    Kernel PCA
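
As an illustration of the first item in this part, here is a minimal EM loop for a two-component, one-dimensional Gaussian mixture. The synthetic data and the initialization are placeholders chosen for the example.

    import numpy as np

    def gauss(x, m, s):
        # Density of N(m, s^2) evaluated at x.
        return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))

    rng = np.random.default_rng(0)
    x = np.concatenate([rng.normal(-2, 1, 150), rng.normal(3, 1, 100)])

    # Initial guesses for the mixture weight, means, standard deviations.
    pi = 0.5
    mu = np.array([-1.0, 1.0])
    sigma = np.array([1.0, 1.0])
    for _ in range(50):
        # E-step: responsibility of component 1 for each point.
        p0 = (1 - pi) * gauss(x, mu[0], sigma[0])
        p1 = pi * gauss(x, mu[1], sigma[1])
        r = p1 / (p0 + p1)
        # M-step: weighted maximum-likelihood updates.
        pi = r.mean()
        mu = np.array([np.average(x, weights=1 - r),
                       np.average(x, weights=r)])
        sigma = np.sqrt(np.array([np.average((x - mu[0]) ** 2, weights=1 - r),
                                  np.average((x - mu[1]) ** 2, weights=r)]))

Each iteration increases the log-likelihood of the data, which is the defining property of EM.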

5.    A glimpse of theoretical guarantees
•    Generalization error
•    Concentration / oracle inequalities
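
As a taste of the statements covered in this last part, the classical finite-class bound combines Hoeffding's inequality with a union bound. For a loss bounded in [0, 1], n i.i.d. samples, empirical risk \hat{R}_n(h) and true risk R(h), Hoeffding's inequality gives, for each fixed h,

    \mathbb{P}\left( |\hat{R}_n(h) - R(h)| \ge \varepsilon \right) \le 2 e^{-2 n \varepsilon^2},

and a union bound over a finite class \mathcal{H} yields, with probability at least 1 - \delta,

    R(h) \le \hat{R}_n(h) + \sqrt{\frac{\log(2 |\mathcal{H}| / \delta)}{2 n}} \quad \text{for all } h \in \mathcal{H}.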