The course MAA304 begins with a detailed overview of convergence, both in probability and in distribution, and revisits two key theorems in statistics: the law of large numbers and the central limit theorem. We will then look in detail at asymptotic statistics, including fundamental topics such as the asymptotic properties of maximum likelihood estimators (MLEs), the construction of asymptotic confidence intervals, and the principles underlying asymptotic test theory.
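
To give a first concrete feel for these two theorems, the following short Python sketch (an illustration, not course material) checks the central limit theorem empirically: centred and rescaled sample means of i.i.d. uniform variables behave like a standard normal.

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 1_000, 10_000

# i.i.d. Uniform(0, 1) samples: mean 1/2, variance 1/12.
samples = rng.random((reps, n))

# CLT: sqrt(n) * (X_bar - mu) / sigma is approximately N(0, 1).
z = np.sqrt(n) * (samples.mean(axis=1) - 0.5) / np.sqrt(1 / 12)

print("empirical mean (should be near 0):", z.mean())
print("empirical std  (should be near 1):", z.std())
```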

We will then highlight the crucial role of information theory in statistics, with particular emphasis on notions of efficiency, Cramér-Rao theory, and sufficiency. Moving on to multivariate linear regression, we shift the focus to inference in Gaussian models and model validation, giving students a solid understanding of this important statistical paradigm.
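
As a small illustration of efficiency and the Cramér-Rao bound (a hypothetical numpy simulation, with the Bernoulli model chosen purely for concreteness), one can check that the variance of the maximum likelihood estimator matches the bound 1/(n I(θ)):

```python
import numpy as np

rng = np.random.default_rng(1)
theta, n, reps = 0.3, 500, 20_000

# The MLE of a Bernoulli(theta) parameter is the sample mean.
mle = rng.binomial(n, theta, size=reps) / n

# Fisher information of one Bernoulli observation: I(theta) = 1 / (theta (1 - theta)).
fisher = 1 / (theta * (1 - theta))

print("empirical variance of the MLE:", mle.var())
print("Cramér-Rao bound 1/(n I):     ", 1 / (n * fisher))
```

Here the sample mean is unbiased with variance exactly θ(1-θ)/n, so the two printed numbers should agree: the estimator is efficient.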

Next, we turn to nonlinear regression and delve into a comprehensive study of logistic regression. The course concludes with a brief introduction to nonparametric statistics, emphasising the importance of distribution-free tests.
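
To make the logistic regression model concrete, here is a minimal numpy sketch (an assumed toy setup with synthetic data, not the course's notation) that fits the model by gradient ascent on the log-likelihood:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic data: one feature, true weights (w, b) = (2.0, -1.0).
x = rng.normal(size=(200, 1))
p = 1 / (1 + np.exp(-(2.0 * x[:, 0] - 1.0)))
y = rng.binomial(1, p)

# Fit by gradient ascent on the mean log-likelihood.
X = np.hstack([x, np.ones((len(x), 1))])   # add intercept column
w = np.zeros(2)
for _ in range(5_000):
    probs = 1 / (1 + np.exp(-X @ w))
    w += 0.1 * X.T @ (y - probs) / len(y)  # gradient of mean log-likelihood

print("estimated (w, b):", w)              # roughly (2.0, -1.0)
```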

MAA307 “Convex Optimization and Optimal Control” is composed of three connected parts. The first lays the foundations of convex analysis in Hilbert spaces and covers topics such as convex sets, projection, separation, convex cones, convex functions, the Legendre-Fenchel transform, and the subdifferential. The second part deals with optimality conditions in convex or differentiable optimization with equality and inequality constraints, and opens the way to duality theory and related algorithms (Uzawa, augmented Lagrangian, decomposition and coordination). The last part is an introduction to the optimal control of ordinary differential equations and discusses, in particular, the concepts of adjoint state, Hamiltonian, and feedback law.
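
To give a flavour of how projections onto convex sets enter algorithms, the following is a minimal projected-gradient sketch in Python (assuming a simple box constraint for illustration; the course itself works in general Hilbert spaces):

```python
import numpy as np

# Projection onto the box [0, 1]^n, a simple closed convex set.
def project_box(x, lo=0.0, hi=1.0):
    return np.clip(x, lo, hi)

# Projected gradient method for min 0.5 ||A x - b||^2 over the box.
rng = np.random.default_rng(3)
A, b = rng.normal(size=(30, 5)), rng.normal(size=30)
x = np.zeros(5)
step = 1 / np.linalg.norm(A.T @ A, 2)  # 1 / Lipschitz constant of the gradient
for _ in range(2_000):
    x = project_box(x - step * A.T @ (A @ x - b))

print("constrained minimiser:", x)
```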

Prerequisites: MAA203

MAA305 presents the basic theory of discrete Markov chains. It starts by introducing the Markov property and then develops fundamental tools such as transition matrices, recurrence classes, and stopping times. With these tools at hand, the strong Markov property is proven and applied to the study of hitting probabilities. In the second part of the course, we introduce the notion of a stationary distribution and prove some basic existence and uniqueness results. Finally, the long-time behavior of Markov chains is investigated by proving the ergodic theorem and exponential convergence under Doeblin's condition. The course concludes by surveying some stochastic algorithms whose implementation relies on the construction of an appropriate Markov chain, such as the Metropolis-Hastings algorithm.
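
As a taste of that final topic, here is a minimal random-walk Metropolis-Hastings sketch in Python (a standard textbook construction, with the standard normal as an assumed toy target):

```python
import numpy as np

rng = np.random.default_rng(4)

# Unnormalised target density: standard normal up to a constant.
def target(x):
    return np.exp(-0.5 * x**2)

# Random-walk Metropolis-Hastings: the chain's stationary distribution
# is the target, even though its normalising constant is never computed.
x, chain = 0.0, []
for _ in range(50_000):
    proposal = x + rng.normal(scale=1.0)
    if rng.random() < target(proposal) / target(x):  # acceptance ratio
        x = proposal
    chain.append(x)

chain = np.array(chain[5_000:])                      # discard burn-in
print("sample mean (should be near 0):", chain.mean())
print("sample std  (should be near 1):", chain.std())
```

Because the Gaussian proposal is symmetric, the Hastings correction cancels and only the ratio of target densities appears in the acceptance step.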

When several pictures (obtained from a camera, a CT scan, etc.) of an object are available, registration refers to mathematical methods for combining those images. Registration is thus an important first step in extracting information from those images. This course will introduce variational methods, which play a central role in many scientific problems and in particular in image analysis. Next, we will consider the problem of partitioning an image into different segments. These segments should be meaningful: an organ in a CT scan, an object in a picture. The lecture will cover a range of mathematical models and methods, such as regularization or level-set methods.
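
As a toy illustration of registration (a hypothetical numpy sketch, much simpler than the variational methods covered in the lecture), one can recover a translation between two images by minimising a sum-of-squared-differences criterion:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy "images": a reference and a copy shifted by (3, -2) pixels.
ref = rng.random((64, 64))
moving = np.roll(ref, shift=(3, -2), axis=(0, 1))

# Exhaustive registration over small integer translations,
# minimising the sum-of-squared-differences (SSD) criterion.
best, best_shift = np.inf, (0, 0)
for dy in range(-5, 6):
    for dx in range(-5, 6):
        ssd = np.sum((np.roll(moving, (dy, dx), axis=(0, 1)) - ref) ** 2)
        if ssd < best:
            best, best_shift = ssd, (dy, dx)

print("recovered shift:", best_shift)  # (-3, 2): undoes the original motion
```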

In MAA312 “Numerical Methods for ODEs”, we will introduce numerical schemes to simulate ordinary differential equations. We will start with the Euler schemes (explicit and implicit) and see how the notions of stability and consistency can be used to study these methods. We will then consider Runge-Kutta schemes and apply the different methods to particular applications, e.g. the N-body problem.
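
For concreteness, a minimal Python implementation of the explicit Euler scheme on the test equation y' = -y (an illustration under simple assumptions, not part of the course material) might look as follows:

```python
import numpy as np

# Explicit Euler scheme for y' = f(t, y): y_{k+1} = y_k + h f(t_k, y_k).
def euler_explicit(f, y0, t0, t1, n):
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y = y + h * f(t, y)
        t += h
    return y

# Test problem y' = -y, y(0) = 1, with exact solution exp(-t).
approx = euler_explicit(lambda t, y: -y, 1.0, 0.0, 1.0, 1_000)
print("Euler:", approx, " exact:", np.exp(-1.0))
```

Halving the step size h roughly halves the error, reflecting the first-order consistency of the scheme.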