The course MAA304 begins with a detailed overview of convergence, both in probability and in distribution, and revisits two key theorems in statistics: the law of large numbers and the central limit theorem. We will then look in detail at asymptotic statistics, including fundamental topics such as the asymptotic properties of maximum likelihood estimators (MLEs), the formulation of asymptotic confidence intervals, and the principles underlying asymptotic test theory.
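As a minimal illustration of the two theorems revisited at the start of the course, the following sketch (not part of the syllabus; the distribution and sample sizes are arbitrary choices) simulates repeated sampling from a standard exponential distribution and checks both the concentration of sample means and the normalization predicted by the central limit theorem:

```python
import numpy as np

rng = np.random.default_rng(0)

# Draw many i.i.d. samples from a skewed distribution (exponential, mean 1).
n, trials = 1000, 5000
samples = rng.exponential(scale=1.0, size=(trials, n))

# Law of large numbers: each sample mean concentrates around the true mean 1.
sample_means = samples.mean(axis=1)

# Central limit theorem: sqrt(n) * (mean - 1) is approximately N(0, 1),
# since the standard exponential has variance 1.
normalized = np.sqrt(n) * (sample_means - 1.0)

print(abs(sample_means.mean() - 1.0) < 0.01)   # LLN: means concentrate at 1
print(abs(normalized.std() - 1.0) < 0.1)       # CLT: spread stabilizes at 1
```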

We will then highlight the crucial role of information theory in statistics, with particular emphasis on notions of efficiency, Cramér-Rao theory, and sufficiency. Moving to multivariate linear regression, the focus shifts to inference in Gaussian models and model validation, to give students a solid understanding of this important statistical paradigm.
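The efficiency notions above can be made concrete on a toy model. The sketch below (an illustration under assumed parameter values, not course material) checks numerically that the MLE of a Bernoulli parameter attains the Cramér-Rao lower bound:

```python
import numpy as np

rng = np.random.default_rng(1)

# Cramér-Rao bound for the Bernoulli(p) model: the Fisher information of a
# single observation is I(p) = 1 / (p (1 - p)), so any unbiased estimator of
# p from n samples has variance at least p (1 - p) / n.  The sample mean
# (which is the MLE) attains this bound: it is efficient.
p, n, trials = 0.3, 200, 20000
x = rng.binomial(1, p, size=(trials, n))
mle = x.mean(axis=1)

bound = p * (1 - p) / n            # Cramér-Rao lower bound
empirical_var = mle.var()          # variance of the MLE across trials

print(abs(empirical_var - bound) / bound < 0.05)
```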

Next, we turn to nonlinear regression and delve into a comprehensive study of logistic regression. The course concludes with a brief introduction to nonparametric statistics, emphasising the importance of distribution-free tests.
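To give a flavor of the logistic regression part, here is a minimal sketch (plain NumPy, synthetic one-dimensional data with an assumed true coefficient of 2) that maximizes the logistic log-likelihood by Newton-Raphson, the classical fitting method for this model:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic data from a logistic model with true coefficient beta = 2.
n = 2000
x = rng.normal(size=n)
p = 1.0 / (1.0 + np.exp(-2.0 * x))
y = rng.binomial(1, p)

# Maximize the log-likelihood in beta by Newton-Raphson.
beta = 0.0
for _ in range(20):
    mu = 1.0 / (1.0 + np.exp(-beta * x))   # model probabilities
    score = np.sum((y - mu) * x)           # gradient of the log-likelihood
    info = np.sum(mu * (1 - mu) * x**2)    # observed Fisher information
    beta += score / info                   # Newton-Raphson update

print(abs(beta - 2.0) < 0.3)   # estimate lands close to the true value
```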

MAA307 is composed of three connected parts. The first one lays the foundations of convex analysis in Hilbert spaces, and covers topics such as convex sets, projection, separation, convex cones, convex functions, the Legendre-Fenchel transform and the subdifferential. The second part deals with optimality conditions in convex or differentiable optimization with equality and inequality constraints, and opens the way to duality theory and related algorithms (Uzawa, augmented Lagrangian, decomposition and coordination). The last part is an introduction to the optimal control of ordinary differential equations and discusses, in particular, the concepts of adjoint state, Hamiltonian and feedback law.

Prerequisites: MAA203
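The duality-based algorithms of the second part can be previewed on a toy problem. The following sketch (an illustration with an arbitrarily chosen constraint vector, not course material) runs Uzawa's method, i.e. gradient ascent on the dual function, for a quadratic program with a single equality constraint:

```python
import numpy as np

# Uzawa (dual ascent) on a toy equality-constrained problem:
#   minimize (1/2) ||x||^2   subject to   c . x = 1.
# For a fixed multiplier lam, the Lagrangian minimizer is x = lam * c,
# and the multiplier is updated along the constraint residual.
c = np.array([1.0, 2.0, 2.0])
lam, rho = 0.0, 0.1

for _ in range(200):
    x = lam * c                  # primal minimization, in closed form here
    lam += rho * (1.0 - c @ x)   # gradient ascent on the dual function

x_star = c / (c @ c)             # analytic solution, for comparison
print(np.allclose(x, x_star, atol=1e-6))
```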

MAA305 presents the basic theory of discrete Markov chains. It starts by introducing the Markov property and then moves on to develop fundamental tools such as transition matrices, recurrence classes and stopping times. With these tools at hand, the strong Markov property is proved and applied to the study of hitting probabilities. In the second part of the course, we introduce the notion of stationary distribution and prove some basic existence and uniqueness results. Finally, the long-time behavior of Markov chains is investigated by proving the ergodic theorem and exponential convergence under Doeblin's condition. The course concludes by surveying some stochastic algorithms whose implementation relies on the construction of an appropriate Markov chain, such as the Metropolis-Hastings algorithm.
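The convergence to a stationary distribution can be observed directly on a small example. This sketch (the transition matrix is an arbitrary choice, not taken from the course) iterates the distribution of an irreducible, aperiodic three-state chain until it stabilizes:

```python
import numpy as np

# A small irreducible, aperiodic chain on 3 states: there is a unique
# stationary distribution pi with pi P = pi, and the distribution of the
# chain converges to pi from any starting point.
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])

pi = np.array([1.0, 0.0, 0.0])   # start deterministically in state 0
for _ in range(100):
    pi = pi @ P                  # one step of the chain's distribution

print(np.allclose(pi @ P, pi))   # pi is (numerically) stationary
print(np.isclose(pi.sum(), 1.0)) # and remains a probability vector
```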

Digital images are ubiquitous: from professional and smartphone cameras to remote sensing and medical imaging, technology steadily improves, making it possible to obtain ever more accurate images under ever more extreme acquisition conditions (shorter exposures, low-light imaging, finer resolution, and indirect computational imaging methods, to name a few).

This course introduces inverse problems in imaging (also known as image restoration), namely the mathematical models and algorithms that make it possible to obtain high-quality images from partial, indirect or noisy observations. After a short introduction to the physical modeling of image acquisition systems, we introduce the mathematical and computational tools required to achieve that goal. The course is structured in two parts.

The first part deals with well-posed inverse problems, where perfect reconstruction is possible under certain hypotheses. We first introduce the theory of continuous and discrete (fast) Fourier transforms and convolutions, several versions of the Shannon sampling theorem, aliasing and the Gibbs effect. Then we review how imaging technology ensures the necessary band-limitedness hypothesis, and a few applications, including antialiasing and multi-image super-resolution, exact interpolation and registration for stereo vision, and synthesis of stationary textures.
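The discrete Fourier tools of this first part rest on the convolution theorem, which the following one-dimensional sketch (random signals, not course material) verifies numerically: circular convolution in the spatial domain equals pointwise multiplication in the Fourier domain.

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(size=64)
k = rng.normal(size=64)

# Circular convolution computed directly from its definition ...
direct = np.array([sum(x[j] * k[(i - j) % 64] for j in range(64))
                   for i in range(64)])

# ... and via the FFT (the basis of fast convolution for image filtering).
fast = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(k)))

print(np.allclose(direct, fast))
```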

In the second part we deal with ill-posed inverse problems and their variational and Bayesian formulations, leading to regularized optimization problems (for posterior maximization) and to posterior sampling (not covered in this course). This part starts with a review of optimization algorithms, including gradient descent and the simplest splitting and proximal algorithms. Then we review increasingly powerful regularization techniques in historical order: from Wiener filters and Tikhonov regularization to total variation and non-local self-similarity. The course closes with an overture to recent approaches that use pretrained denoisers as implicit regularizers of inverse problems, via RED and plug-and-play algorithms for posterior maximization. The theory is illustrated by applications to image denoising, deblurring and inpainting.
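The variational approach can be previewed on a one-dimensional denoising problem. The sketch below (synthetic signal, arbitrarily chosen regularization weight and step size) minimizes a Tikhonov-regularized objective by plain gradient descent:

```python
import numpy as np

# Variational denoising with Tikhonov regularization, a minimal 1-D sketch:
#   minimize_x  (1/2) ||x - y||^2 + (lam/2) ||D x||^2,
# where D is the finite-difference operator.  The objective is smooth and
# strongly convex, so plain gradient descent converges.
rng = np.random.default_rng(4)
clean = np.sin(np.linspace(0, 2 * np.pi, 100))
y = clean + 0.3 * rng.normal(size=100)   # noisy observation

lam, step = 5.0, 0.05
x = y.copy()
for _ in range(500):
    d = np.diff(x)                       # D x
    grad_reg = np.zeros_like(x)          # D^T D x, via the adjoint of diff
    grad_reg[:-1] -= d
    grad_reg[1:] += d
    x -= step * ((x - y) + lam * grad_reg)

# The denoised signal is closer to the clean one than the observation is.
print(np.linalg.norm(x - clean) < np.linalg.norm(y - clean))
```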

In MAA312 “Numerical Methods for ODEs”, we will introduce numerical schemes to simulate ordinary differential equations. We will start with the Euler schemes (explicit and implicit) and see how the notions of stability and consistency can be used to study these methods. We will then consider Runge-Kutta schemes and apply the different methods to particular applications, e.g. the N-body problem.
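The explicit Euler scheme that opens the course can be sketched in a few lines. The example below (a linear test equation with known solution, chosen for illustration) also exhibits the scheme's first-order consistency: halving the step size roughly halves the error.

```python
import numpy as np

def euler(f, x0, t_end, n):
    """Integrate x' = f(t, x) on [0, t_end] with n explicit Euler steps."""
    h = t_end / n
    x, t = x0, 0.0
    for _ in range(n):
        x = x + h * f(t, x)   # explicit Euler update
        t += h
    return x

# Test equation x' = -x, x(0) = 1, with exact solution exp(-t).
f = lambda t, x: -x
err_coarse = abs(euler(f, 1.0, 1.0, 100) - np.exp(-1.0))
err_fine = abs(euler(f, 1.0, 1.0, 200) - np.exp(-1.0))

print(err_fine < err_coarse)              # refinement reduces the error
print(0.4 < err_fine / err_coarse < 0.6)  # ratio close to 1/2: order one
```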