M2 Optimization
Master's degree
Mathématiques et applications
Full-time academic programmes
English
Optimization is a field at the crossroads of mathematics, computer science, and economics, whose importance is steadily growing both academically and in its applications to socio-economic problems. The M2 program “Optimization, Games, and Control” covers all components of optimization in a broad sense, with particular focus on those specific to the research context of the Saclay area.
The program is organized into three trimesters of coursework, followed by a long internship in an academic or industrial research laboratory. The topics covered are diverse: optimal control (discrete and continuous time, deterministic and stochastic), game theory (theory, modeling in economics and networks), calculus of variations (and more generally optimization in analysis and PDEs), stochastic optimization and stochastic methods for optimization, operations research. A significant portion of the courses is shared with other tracks and specializations in mathematics, computer science, and economics.
Understand and mathematically model a problem in order to solve it
Objectives
The M2 program "Optimization, Games, Control" aims to provide students with a solid foundation in both the theoretical and numerical aspects of mathematical optimization, covering topics from fundamental principles to advanced research developments. The program trains students to understand, model, and solve complex problems at the interface of mathematics, computer science, and economics. By combining rigorous theory with practical applications in areas such as optimal control, game theory, calculus of variations, machine learning, and PDE methods, it prepares graduates for cutting-edge research and high-level careers in both academia and industry.
Career prospects
After a Master's or Master's + PhD: engineer (R&D, control, production, etc.)
After a Master's or Master's + PhD: researcher or lecturer-researcher
After a Master's or Master's + PhD: engineer (research and development, control, production, etc.) in the health, pharmaceutical, agri-food, biotechnology, instruments and reagents, cosmetics, pollution control and environmental sectors
After a Master's or Master's + PhD: engineer (research and development, control, production, etc.)
After a Master's: engineer (financial analyst, economist, statistician)
After a Master's: data scientist
After a Master's: artificial intelligence (AI) specialist
After a Master's: research officer (chargé(e) d'études)
Design engineer
Design engineer (industry / public research)
Research & development engineer
Secondary-school teacher
Fees and scholarships
The amounts may vary depending on the programme and your personal circumstances.
Transcripts of all years/semesters validated since the high-school diploma, up to the date of application.
Curriculum Vitae.
Additional supporting documents
Proof of foreign-language level (TOEFL, TOEIC, teacher's certificate, etc.).
VAP file (mandatory for anyone requesting validation of prior experience (VAP) in order to enter the programme).
Document indicating your list of local M2 choices, available here: https://urlz.fr/i3Lo.
Supporting documents:
- Residence permit stating the first country of asylum
- Or receipt of an application stating the country of first asylum
- Or document from the UNHCR granting refugee status
- Or receipt of a refugee-status application filed in France
- Or residence permit stating refugee status granted in France
- Or document stating subsidiary protection in France or abroad
- Or document stating temporary protection in France or abroad.
This course will guide you through the history of the complexity of linear programming, from Dantzig’s Simplex Method to the most recent breakthroughs. It focuses on methods with a geometric or combinatorial flavor, such as simplex-like, ellipsoid or interior point methods, as well as their theoretical foundations (combinatorics of polytopes, self-concordant barriers, diophantine approximation, etc). No special knowledge of computational complexity or programming is required.
Learning objectives
Linear Programming (LP), i.e., optimizing a linear objective under linear inequality constraints, plays a central role in Operations Research and combinatorial optimization. Its success comes from the variety of applications and the existence of efficient algorithms to solve it. Although LP is widely used in practice, its computational complexity is still the subject of major open problems, the most emblematic being: "Does the simplex method terminate in a polynomial number of steps?"
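As a minimal, purely illustrative sketch (not part of the syllabus), the small LP below is solved in Python with scipy.optimize.linprog; the default HiGHS backend runs a simplex or interior-point method, the two families of algorithms studied in this course. The problem data are made up.

from scipy.optimize import linprog

# Toy LP (made-up data): maximize 3x + 5y
# subject to x <= 4, 2y <= 12, 3x + 2y <= 18, x, y >= 0.
# linprog minimizes, so the objective is negated.
c = [-3, -5]
A_ub = [[1, 0],
        [0, 2],
        [3, 2]]
b_ub = [4, 12, 18]

res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None), (0, None)], method="highs")
print(res.x, -res.fun)   # optimal point (2, 6), optimal value 36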
Bibliography
Geometric Algorithms and Combinatorial Optimization, Martin Grötschel, László Lovász, Alexander Schrijver
Theory of Linear and Integer Programming, Alexander Schrijver
Technically, the following aspects will be developed:
Methodologies to study equilibria in games (existence theorems, uniqueness when relevant, efficiency, etc.).
Important special classes of games: potential games, S-modular games, and congestion games. Mathematical properties and applications.
Repeated games and stochastic games: Folk theorems for the case with perfect monitoring and imperfect monitoring. First connections with Shannon theory coding theorems.
Coalitional form games (solution concepts such as the core and nucleolus, existence theorems, etc.).
Multi-agent learning. Description and analysis of well-known learning rules: reinforcement learning in games, regret matching, etc. The role of stochastic approximation (a toy illustration of regret matching follows this list).
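As a toy illustration of one such learning rule (an unconditional variant of regret matching, chosen here for brevity; the payoff matrices are invented), each player plays in proportion to its positive cumulative regrets, which yields vanishing external regret for the time-averaged play:

import numpy as np

A = np.array([[3.0, 0.0], [5.0, 1.0]])   # row player's payoffs (hypothetical)
B = np.array([[3.0, 5.0], [0.0, 1.0]])   # column player's payoffs (hypothetical)
rng = np.random.default_rng(0)

def strategy(regret):
    # play proportionally to the positive part of the cumulative regrets
    pos = np.maximum(regret, 0.0)
    return pos / pos.sum() if pos.sum() > 0 else np.full(len(regret), 1.0 / len(regret))

R1, R2 = np.zeros(2), np.zeros(2)
counts = np.zeros((2, 2))
for _ in range(10000):
    i = rng.choice(2, p=strategy(R1))
    j = rng.choice(2, p=strategy(R2))
    # regret of not having played each fixed action against the opponent's move
    R1 += A[:, j] - A[i, j]
    R2 += B[i, :] - B[i, j]
    counts[i, j] += 1

print(counts / counts.sum())   # empirical joint distribution of play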
Learning objectives
The standpoint of this course is built on three objectives: (i) to give students a unified overview of game theory (direct game theory and mechanism design, strategic form and coalitional form, analysis and algorithms, etc.); (ii) to provide students with a sound and rigorous methodology for studying equilibria in games; (iii) to show students, through fundamental case studies, how game theory is applied in practice (wireless networks, smart energy networks, climate change, viral marketing).
The course lasts 30 hours (22 hours of lectures and tutorials), with 8 hours dedicated to a case study (practical session). Through programming and numerical simulations, students will study the problem of designing smart distributed charging algorithms for electric vehicles, the goal being to decarbonize their electricity consumption.
Bibliography
S. Lasaulce and H. Tembiné, "Game Theory and Learning for Wireless Networks: Fundamentals and Applications", Academic Press, Elsevier, pp. 1-336, Oct. 2011, ISBN 978-0123846983
M. Maschler, S. Zamir, and E. Solan. “Game theory”. Cambridge University Press, 2020.
Decision problems (one-player games): Preference relations, utility and payoff functions, decision under risk and VnM axioms, representation theorems.
Zero-sum games: MinMax Theorems, pure and mixed strategies.
n-player games: Dominance and Rationalizability. Nash equilibria and fixed points, correlated equilibria.
Extensive form games: perfect information and backward induction, Zermelo and Kuhn's Theorems. Imperfect information, mixed and behavioral strategies, perfect recall and equivalence Theorems.
Incomplete information and Bayesian games.
Repeated games and the Folk Theorem.
Learning objectives
This course will present the main models of strategic games, related mathematical tools, important results and applications.
We shall essentially consider stochastic dynamical systems with discrete time and finite state space, i.e., finite Markov chains. This framework already contains the essential difficulties (for instance for long-term problems), and allows one to give, at the same time, insight into algorithms, mathematical techniques and qualitative properties.
We shall present the different types of stochastic control problems: complete and incomplete observation problems, criteria with finite horizon, discounted infinite horizon, stopping time, ergodic criteria, risk-sensitive criteria, and constrained problems. For some of these criteria, we shall state the corresponding dynamic programming equations, study their qualitative properties and the algorithms for solving them (value iteration, policy iteration, linear programming), and deduce in some cases the structure of optimal strategies.
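As a minimal sketch of one of these algorithms, the value iteration below solves a randomly generated finite Markov decision process with a discounted infinite-horizon criterion (all data are synthetic):

import numpy as np

n_states, n_actions, gamma = 3, 2, 0.9
rng = np.random.default_rng(1)
# P[a, s, s'] = transition probabilities, r[s, a] = expected rewards (made up)
P = rng.dirichlet(np.ones(n_states), size=(n_actions, n_states))
r = rng.uniform(0, 1, size=(n_states, n_actions))

V = np.zeros(n_states)
for _ in range(1000):
    # Bellman operator: Q[s, a] = r[s, a] + gamma * sum_{s'} P[a, s, s'] V[s']
    Q = r + gamma * np.einsum("ast,t->sa", P, V)
    V_new = Q.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-8:   # the operator is a contraction, so convergence is geometric
        V = V_new
        break
    V = V_new

policy = Q.argmax(axis=1)                  # a greedy (optimal) stationary strategy
print(V, policy)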
Learning objectives
The aim of this course is to introduce different stochastic control models and to present dynamic programming as a tool for solving them. Illustrations selected among stock management, portfolio selection, yield management, transportation or Web PageRank optimisation will be presented.
Bibliography
D. P. Bertsekas. Dynamic programming. Prentice Hall Inc., Englewood Cliffs, NJ, 1987.
M. L. Puterman. Markov decision processes: discrete stochastic dynamic programming. John Wiley & Sons Inc., New York, 1994.
P. Whittle. Optimization over time. Vol. I and II. John Wiley & Sons Ltd., Chichester, 1982, 1983.
Linear Programming, Probability theory, Convex analysis
Programme / outline / contents
Two directions are explored:
We investigate the so-called "open-loop" situation, that is, the case where decisions do not depend on the information available for the problem, and we thoroughly study the stochastic gradient method and its variants (a minimal sketch is given after this list);
We also study "closed-loop" optimization problems, that is, the case where decisions are based on partial information. In particular, we detail the two-stage case, providing modeling tools as well as dedicated algorithms (Progressive Hedging, L-Shaped) that can be extended to the multistage setting. We also develop the dynamic programming approach when the uncertainty is stagewise independent. Both approaches are merged to give the Stochastic Dual Dynamic Programming algorithm, well known in the energy community.
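The sketch below illustrates the open-loop side under simple assumptions: the stochastic gradient method with Robbins-Monro step sizes applied to a synthetic least-squares expectation (the data and the step-size choice are only for illustration):

import numpy as np

rng = np.random.default_rng(0)
d = 5
x_true = rng.normal(size=d)                 # hypothetical ground truth

x = np.zeros(d)
for k in range(1, 20001):
    a = rng.normal(size=d)                  # one random sample xi_k = (a, b)
    b = a @ x_true + 0.1 * rng.normal()
    grad = (a @ x - b) * a                  # unbiased gradient of E[(a.x - b)^2 / 2]
    x -= grad / (10.0 + k)                  # decreasing steps (Robbins-Monro conditions)

print(np.linalg.norm(x - x_true))           # the iterate approaches the minimizer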
Learning objectives
The course presents both theoretical and numerical aspects of decision problems with uncertainty, where one sets a probabilistic framework in order to minimize the expectation of a cost.
Bibliography
Lectures on Stochastic Programming, Shapiro, Dentcheva, Ruszczynski
Course website: https://leclere.github.io/teaching/OS
We will study how many optimization problems can be modeled as linear, mixed-integer linear, or integer quadratic programs. We will then introduce some of the main algorithms (simplex method, Branch & Bound) used to solve these types of problems. Finally, we will devote some time to modeling and solving practical optimization problems using dedicated software (CPLEX).
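As an illustration of the modeling side (the course itself uses CPLEX; here the open HiGHS solver shipped with SciPy 1.9+ is used as a stand-in, and the data are made up), a tiny mixed-integer linear program can be stated and solved as follows:

import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Toy MILP: maximize 5x + 4y  s.t.  6x + 4y <= 22,  x + 2y <= 6,
# x integer, y >= 0 continuous. milp minimizes, so the objective is negated.
c = np.array([-5.0, -4.0])
constraints = LinearConstraint(np.array([[6.0, 4.0],
                                         [1.0, 2.0]]), ub=[22.0, 6.0])
res = milp(c,
           constraints=constraints,
           integrality=np.array([1, 0]),   # x integer, y continuous
           bounds=Bounds(0, np.inf))
print(res.x, -res.fun)                     # expected: x = 3, y = 1, value 19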
Learning objectives
Operations Research is an area of applied mathematics aiming at helping organizations to make better decisions. It relies on a variety of mathematical techniques to model and solve decision and optimization problems arising in many different contexts such as manufacturing, transportation, healthcare, telecommunications or financial planning. The objective of the course is to provide an introduction to Operations Research and its applications by focusing mainly on the use of mathematical programming.
Bibliography
F. Hillier and G. Lieberman. Introduction to Operations Research, 11th edition. McGraw-Hill, 2021.
Basic knowledge in integration, calculus and optimization (Lagrange multipliers).
Programme / outline / contents
The first part of the course will focus on Pontryagin's principle, a necessary optimality condition for optimal control problems. It will be established in various settings (time-optimal linear systems, linear-quadratic problems, nonlinear systems). It leads to a numerical method called the shooting method.
The second part of the course will focus on methods deriving from the dynamic programming principle. In particular, we will see how global solutions can be computed through the resolution of the Hamilton-Jacobi-Bellman equation, a nonlinear partial differential equation.
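As a minimal sketch of the shooting method, consider the linear-quadratic problem of minimizing (1/2)∫₀ᵀ (x² + u²) dt with dx/dt = u, x(0) = x₀ and x(T) free. Pontryagin's principle gives u = -p, with dp/dt = -x and the transversality condition p(T) = 0, so the unknown initial costate p(0) is found as a root of the shooting function. This specific example, whose closed-form answer is p(0) = x₀ tanh(T), is chosen only for illustration:

import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

T, x0 = 1.0, 1.0

def p_final(p0):
    # integrate the state-costate system y = (x, p): x' = -p, p' = -x
    sol = solve_ivp(lambda t, y: [-y[1], -y[0]],
                    (0.0, T), [x0, p0], rtol=1e-10, atol=1e-10)
    return sol.y[1, -1]                      # p(T), which must vanish at the optimum

p0_star = brentq(p_final, -10.0, 10.0)       # root of the shooting function
print(p0_star, x0 * np.tanh(T))              # matches the closed-form value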
Learning objectives
The course is dedicated to the optimization of dynamical systems described by Ordinary Differential Equations (ODEs). We will consider situations for which at any time, an agent can have an impact on the evolution of the system through a variable called control.
Optimal control theory has applications in various fields, such as aerospace, energy management, and biology. A classical example is the Goddard problem: the fuel consumption profile of a rocket must be determined so that the rocket reaches the highest possible altitude. For this problem, the ODE modeling the trajectory of the rocket derives from Newton's law.
Bibliography
W.H. Fleming and R.W. Rishel. Deterministic and Stochastic Optimal Control, Springer, 1975.
E. Trélat. Contrôle optimal : théorie et applications. Electronic version of 2013 (in French).
We will in particular deal with the following topics:
Hahn-Banach theorem and representation of closed convex sets as intersections of closed half-spaces
the subdifferential of a lower semicontinuous (lsc) convex function, and some subdifferential calculus rules
the proximal operator of a convex function on a Hilbert space
convex duality, and in particular Lagrangian duality and Fenchel-Rockafellar theorem
The second part of the course will explain how solutions to convex optimization problems (and more) can be constructed using fixed-point methods.
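As a small example combining these notions (proximal operator, smooth gradient step, fixed-point iteration), the proximal gradient method below solves a synthetic lasso problem, using the fact that the proximal operator of the ℓ1 norm is soft-thresholding; the data are made up:

import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 20))
b = rng.normal(size=50)
lam = 1.0

step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1/L, with L the Lipschitz constant of the gradient
x = np.zeros(20)
for _ in range(500):
    g = A.T @ (A @ x - b)                   # gradient of the smooth part (1/2)||Ax - b||^2
    z = x - step * g                        # forward (gradient) step
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)   # backward (proximal) step
print(np.count_nonzero(x))                  # the l1 term promotes sparsity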
Learning objectives
The aim of this course is to introduce the basic knowledge in convex analysis and convex optimization, which will be used in many of the courses of the Optimization program. In the first part, we will see how to obtain optimality conditions for convex optimization problems, that may involve non-smooth functions, possibly living on infinite dimensional spaces.
Bibliography
Convex Analysis and Variational Problems, Ekeland and Temam
Convex Analysis and Monotone Operator Theory on Hilbert spaces, Bauschke and Combettes
Among the deterministic algorithms, we will present the Nelder-Mead method, the trust-region NEWUOA algorithm, as well as quasi-Newton techniques where the gradient is evaluated numerically. For the stochastic (or randomized) part, we will present concepts arising from Evolution Strategies and Estimation of Distribution Algorithms, discussing step-size and covariance matrix adaptation techniques. We will in particular present the CMA-ES algorithm.
The course will cover theoretical aspects of the methods, related to invariance and (linear) convergence, as well as practical ones. The students will have the opportunity to implement certain algorithms or use existing implementations, to learn how to recognize and diagnose numerical problems and algorithm failures, and to identify the type of convergence. We will also explain how to conduct fair performance assessment.
In the last part we present concepts related to multiobjective optimization.
Learning objectives
Challenging numerical optimization problems occur in many domains in industry, medicine, biology, physics, … The underlying functions can be non-convex, non-differentiable, non-smooth, ill-conditioned and noisy. Often, the gradient of the function is not available (or does not exist). In this context, derivative-free optimization techniques, also referred to as zero order black-box optimization techniques, are needed.
The objective of this lecture is to present the main deterministic and stochastic derivative-free optimization algorithms used nowadays.
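As a toy stand-in for these methods (much simpler than CMA-ES, and shown only to illustrate stochastic search with step-size adaptation), the (1+1)-Evolution Strategy with the classical 1/5th success rule can be written as:

import numpy as np

def sphere(x):                      # simple smooth test objective
    return float(np.sum(x ** 2))

rng = np.random.default_rng(0)
x = rng.normal(size=10)
f_x = sphere(x)
sigma = 1.0                         # mutation step size

for _ in range(5000):
    y = x + sigma * rng.normal(size=x.size)   # sample one offspring
    f_y = sphere(y)
    success = f_y <= f_x
    if success:                     # elitist selection: keep the better point
        x, f_x = y, f_y
    # 1/5th rule: enlarge the step on success, shrink it on failure
    # (stationary when about 20% of offspring are successful)
    sigma *= np.exp(0.2) if success else np.exp(-0.05)

print(f_x, sigma)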
Bibliography
Introduction to Derivative-Free Optimization, A. Conn, K. Scheinberg and L. N. Vicente, SIAM, 2009.
Numerical Optimization: Theoretical and Practical Aspects, J.F. Bonnans, J.C. Gilbert, C. Lemaréchal, C. Sagastizábal, Springer Verlag, 2003.
Functions of bounded variation (approximation, embeddings, traces, co-area formula)
Sets of finite perimeter (reduced and essential boundary, generalized Gauss-Green formula)
Fine properties of functions of bounded variation (approximate continuity, approximate differentiability, decomposition of the gradient)
Special functions of bounded variation and application to the Mumford-Shah problem
Learning objectives
This course aims to introduce geometric measure theoretic tools to study singular variational problems. Typical applications arise in geometric problems such as the Plateau problem, isoperimetry and minimal surfaces, as well as in imaging (denoising, segmentation) and materials science (fractures, plasticity).
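For reference (standard definitions, not course material), the central objects can be written as follows: for u ∈ L¹(Ω) and a measurable set E,
\[
|Du|(\Omega) \;=\; \sup\Big\{ \int_\Omega u \,\operatorname{div}\varphi \, dx \;:\; \varphi \in C_c^1(\Omega;\mathbb{R}^n),\ \|\varphi\|_\infty \le 1 \Big\},
\qquad
P(E;\Omega) \;=\; |D\mathbf{1}_E|(\Omega),
\]
so that u belongs to BV(Ω) when |Du|(Ω) < ∞, and E has finite perimeter in Ω when P(E;Ω) < ∞.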
Bibliography
Ambrosio, Fusco, Pallara: Functions of bounded variation and free discontinuity problems
Evans, Gariepy: Measure theory and fine properties of functions
Federer: Geometric measure theory
Giusti: Minimal surfaces and functions of bounded variation
Maggi: Sets of finite perimeter and geometric variational problems
Application to optimization, variational inequalities, fixed-point iterations
Application to learning in games
Learning objectives
Online learning is an important topic at the intersection of machine learning, optimization and sequential decision. This course focuses on a general class of sequential decision problems in adversarial environments. We first introduce the mathematical tools to design and analyze a wide range of regret minimizing algorithms in online linear/convex optimization. These results are then extended to a wider range of problems through Blackwell's approachability. We then explore deep connections with various optimization problems: (stochastic) optimization, minimax problems, variational inequalities, learning in games, fixed-point iterations.
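As a minimal sketch of one such regret-minimizing algorithm (the exponentially weighted average forecaster, with synthetic losses and a standard learning-rate choice), whose regret against the best fixed action grows like √(T log K):

import numpy as np

rng = np.random.default_rng(0)
K, T = 5, 10000
eta = np.sqrt(2.0 * np.log(K) / T)          # standard tuning of the learning rate

weights = np.ones(K)
cum_loss, cum_action_loss = 0.0, np.zeros(K)
for t in range(T):
    p = weights / weights.sum()             # play the normalized weights
    loss = rng.uniform(0, 1, size=K)        # adversary's loss vector in [0, 1]^K
    cum_loss += p @ loss
    cum_action_loss += loss
    weights *= np.exp(-eta * loss)          # multiplicative update

regret = cum_loss - cum_action_loss.min()   # regret against the best fixed action
print(regret, np.sqrt(T * np.log(K)))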
Bibliography
"Prediction, learning, and games", Nicolò Cesa-Bianchi and Gábor Lugosi. Cambridge University Press, 2006
"Approachability, regret and calibration: Implications and equivalences" ,Vianney Perchet, Journal of Dynamics & Games (2014)
"A modern introduction to online learning", Francesco Orabona, lecture notes, 2023
Although most of the mathematical concepts will be reintroduced, it is strongly recommended that you be familiar with Bellman's dynamic programming (for example, having taken the optimization course in the first semester). No knowledge of game theory is required. The course will be devoted largely to modelling issues, including the structural assumptions behind stability in these games.
Preliminaries: game theory and optimization
A first example from crypto-currencies
Population evolution and equilibrium equations in a mean field game
Example of optimal liquidation
Master equation
Additional modelling (learning procedure, presence of a majority player, …)
Learning objectives
The aim of this course is to present the theory of mean-field games, in particular through some of its applications in finance. Mean-field games are games involving an infinite number of "small" players, i.e., players who have only a marginal influence on the game. Through two examples that we will follow during the course (one from a crypto-currency market and one from an optimal liquidation problem), we will see why such games are natural models in finance. From a mathematical point of view, this theory is essentially based on (stochastic) optimal control and game theory.
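For orientation (a standard formulation recalled here, not the course's own notation), a mean-field game equilibrium is typically characterized by a coupled forward-backward PDE system: a backward Hamilton-Jacobi-Bellman equation for the value function u of a representative player, and a forward Fokker-Planck equation for the population distribution m,
\[
\begin{cases}
-\partial_t u - \nu \Delta u + H(x, \nabla u) = f(x, m(t)), & u(T, x) = g(x, m(T)),\\
\partial_t m - \nu \Delta m - \operatorname{div}\!\big(m\, \partial_p H(x, \nabla u)\big) = 0, & m(0) = m_0.
\end{cases}
\]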
The first part is dedicated to presenting Clarke's theory of generalized subdifferentials, as well as other generalized subdifferential notions. Afterwards, we discuss several generalized convexity concepts such as quasiconvexity, almost convexity and weak convexity. In the third part of the course, nonsmooth optimization problems are studied by means of these notions; in particular, we derive necessary and sufficient optimality conditions characterizing their extremal points. The morning sessions are dedicated to lectures, while the afternoon sessions are devoted to solving problems involving the newly introduced notions.
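For reference (standard definitions, recalled here rather than taken from the course), for a locally Lipschitz function f, Clarke's generalized directional derivative and subdifferential are
\[
f^\circ(x; v) \;=\; \limsup_{y \to x,\ t \downarrow 0} \frac{f(y + t v) - f(y)}{t},
\qquad
\partial f(x) \;=\; \{\, \xi : \langle \xi, v\rangle \le f^\circ(x; v)\ \ \forall v \,\},
\]
and 0 ∈ ∂f(x) is a necessary optimality condition at a local minimizer x.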
Learning objectives
In this course we discuss various aspects of Nonsmooth Optimization beyond the convex framework.
Bibliography
Frank H. Clarke, “Optimization and Nonsmooth Analysis”, SIAM, 1990
Boris S. Mordukhovich, “Variational Analysis and Generalized Differentiation. I: Basic Theory”, Springer, 2006
In the first part of the course we will study equivalent definitions of Optimal Transport (OT): a static formulation based on probability theory, a dual formulation based on convex optimization, and a dynamical formulation based on PDEs (Hamilton-Jacobi equations). We will then go through the classical theory of existence, uniqueness and regularity. The theory lies at the interplay between probability, measure theory, PDEs and convex analysis.
The second part will concentrate on applications of OT, on the one hand to the study of PDEs, such as the incompressible Euler equations and PDEs that can be recast as gradient flows for the metric defined by OT, and on the other hand to statistics and data analysis.
In a third part we will present efficient numerical methods to compute OT and its applications, such as entropic regularization, semi-discrete OT and primal-dual methods. We will finish with a discussion of possible extensions of OT (unbalanced and partial OT).
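As a minimal sketch of the entropic approach (synthetic one-dimensional data, a standard Sinkhorn scaling loop that alternately rescales the rows and columns of the Gibbs kernel so that the marginals are matched):

import numpy as np

rng = np.random.default_rng(0)
n, m, eps = 40, 50, 0.05
x, y = rng.uniform(size=n), rng.uniform(size=m)
C = (x[:, None] - y[None, :]) ** 2                 # quadratic ground cost
a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)    # uniform marginals

K = np.exp(-C / eps)                               # Gibbs kernel
u, v = np.ones(n), np.ones(m)
for _ in range(500):
    u = a / (K @ v)
    v = b / (K.T @ u)

P = u[:, None] * K * v[None, :]                    # regularized transport plan
print(P.sum(axis=1)[:3], np.sum(P * C))            # check the first marginal and the transport cost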
Learning objectives
Optimal transport is a powerful mathematical theory at the interface between optimization, PDE and probability theory with far reaching applications. It defines a natural tool to study probability distributions in the many situations where they appear: data science, partial differential equations, statistics or shape processing. In this course we will present the classical theory of optimal transport, efficient algorithms to compute it and applications.
Bibliography
Optimal Transport for Applied Mathematicians, F. Santambrogio
Topics in Optimal Transport, C. Villani
Optimal Transport Old and New, C. Villani
Optimal transport: discretization and algorithms, Q. Mérigot and B. Thibert
Computational Optimal Transport: With Applications to Data Science, G. Peyré, M. Cuturi
Previous course on convex optimization, especially first-order algorithms (gradient descent), optimality conditions (KKT), and duality.
Programme / outline / contents
The course presents continuous optimization techniques that have been developed to deal with the increasing amount of data. In particular, we look at optimization problems that depend on large-scale datasets, spatially distributed data, as well as local private data.
We will focus on three different aspects: (1) the development of algorithms to decompose the problem into smaller problems that can be solved with some degree of coordination; (2) the trade-off between cooperation and local computation; (3) how to design algorithms that ensure the privacy of sensitive data.
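As a minimal sketch of the decomposition/cooperation aspect (a decentralized gradient method over a ring of agents, with invented quadratic local costs; this is only an illustration, not the algorithms developed in the course):

import numpy as np

rng = np.random.default_rng(0)
n_agents, d, alpha = 8, 3, 0.1
c = rng.normal(size=(n_agents, d))          # private local targets: f_i(x) = 0.5*||x - c_i||^2

# Doubly stochastic mixing matrix of a ring graph: each agent averages itself
# and its two neighbours. The minimizer of sum_i f_i is the mean of the c_i.
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    W[i, i] = 0.5
    W[i, (i - 1) % n_agents] = 0.25
    W[i, (i + 1) % n_agents] = 0.25

X = np.zeros((n_agents, d))                 # one local iterate per agent
for _ in range(500):
    grads = X - c                           # local gradients, computed privately
    X = W @ X - alpha * grads               # consensus (mixing) step + local descent step

# all agents approach the global optimum, up to a small bias due to the constant step size
print(np.max(np.abs(X - c.mean(axis=0))))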
Learning objectives
Understand the challenges in cooperative optimization for large-scale data applications.