Distributed and Parallel Computing (M2 DiPaC)
Master
Computer Science track
Initial training
English
The M2 DiPaC master's program focuses on advanced topics in high-performance computing (HPC) and distributed algorithms for large-scale distributed systems. Students learn to design fast, scalable, and robust solutions for the compute-intensive needs of applications in AI, big data analytics, scientific computing, and quantum computing. The program prepares students for careers in engineering and R&D, or for a PhD in HPC or distributed systems.
The curriculum comprises seven core courses in high-performance, parallel, and distributed computing. Two elective courses allow students to specialize either in high-performance AI and big data analytics (HPDA) or in hybrid HPC/quantum computing (HQI). A soft-skills course from the university catalog strengthens professional abilities useful throughout a career. A mandatory 6-month internship on M2 DiPaC topics completes the program.
The official language of the program is English; all courses are taught in English. Most of our instructors are also fluent in French, so interactions in French are possible in courses and assessments (homework, exams, etc.).
The program is closely integrated with the Paris-Saclay ecosystem of research laboratories and industrial partners.
M2 DiPaC students will acquire skills in:
Parallel programming models and performance engineering on CPUs/GPUs and distributed clusters/supercomputers.
Design and analysis of distributed algorithms with theoretical guarantees.
Data analytics at scale and AI: training, inference, and large-scale distributed data analysis.
Hybrid HPC+quantum simulation using exact and approximation techniques.
Scheduling and load balancing of parallel jobs on clusters and clouds.
Code profiling, tracing, bottleneck diagnosis, and performance optimization techniques.
Software engineering for HPC: modern C++, testing, CI, reproducibility.
Version control with Git and rigorous documentation.
Technical communication and project scoping in research and industry contexts.
Responsible, reliable, and sustainable computing practices.
Educational objectives of the program
Computer systems are evolving toward greater efficiency and richer functionality at the intersection of three major interconnected scientific fields:
Distributed systems provide connectivity and reliable operation at the scale of the Internet, clouds, clusters, and sensors, tackling hard problems of synchronization, security, concurrency, and robustness.
High-performance parallel computing (HPC) handles compute-intensive workloads in science and AI by exploiting supercomputers with rigorous performance engineering.
Quantum computing provides algorithms and hardware that exploit quantum parallelism to achieve speedups unattainable by classical paradigms.
Building on the foundations laid by the M1 DiPaQ in HPC, distributed systems, and quantum computing, the M2 DiPaC specializes in advanced topics in HPC and distributed algorithms for large-scale systems. Students learn to design fast, scalable, and robust solutions for applications in AI, big data analytics, scientific simulation, and quantum computing. Students may specialize either in high-performance data analytics (HPDA) or in hybrid HPC/quantum computing (HQI) through elective courses.
Knowledge objectives:
Parallel programming models and performance engineering on modern supercomputers and accelerators.
Large-scale distributed algorithms and systems: replication, consensus, consistency, mobile agents, and nature-inspired algorithms, with robustness and performance guarantees.
Algorithms and applications in big data, machine learning, and AI with massive computational challenges.
Quantum algorithms and simulation using classical/HPC and quantum approaches (HQI).
Skills objectives:
Design high-quality parallel and distributed algorithms and software that meet latency, throughput, and performance targets on the targeted HPC architectures.
Develop and analyze scalable and robust algorithms with guarantees of scalability, consensus, termination, and fault tolerance.
Optimize HPC code across the whole stack: complexity, memory locality, vectorization, accelerator usage, communications, I/O, and networks.
Resources and hands-on practice:
Access to university machines and partner supercomputers for labs, projects, code development, and optimization.
Use of open-source toolchains and libraries widely adopted by the HPC community.
Acquisition of modern HPC software engineering practices: advanced C++, IDEs, version control (git), documentation, and continuous integration.
Career prospects
Professional careers
Researcher or faculty member (assistant professor, professor), after a PhD
Data scientist
Research engineer
Development engineer
Maintenance engineer
R&D engineer
Technical support engineer
Subject to passing the civil service competitive examination, graduates may also obtain positions as research engineers or researchers in a national research organization.
Further studies
Data Scientist, Data Analyst, or Machine Learning Engineer in innovative sectors (tech, finance, healthcare, energy, etc.);
Design, research and development engineering;
PhD
Fees and scholarships
Amounts may vary depending on the program and your personal situation.
Target audience. The M2 DiPaC is the natural continuation of the M1 DiPaQ for students wishing to specialize in advanced high-performance computing (HPC), parallel computing, and distributed computing.
Prerequisites. A solid background in computer science with foundations in parallel programming and distributed systems. Strong skills in mathematics (especially linear algebra) and programming are expected.
Admissions from related programs. Outstanding students from related master's programs may be admitted if they already have the fundamentals of parallel and distributed computing. Where needed, they may take some M1 DiPaQ course units as electives to catch up on the basics.
Double-degree track. We regularly admit outstanding engineering students completing their fourth year (M1 equivalent) at Paris-Saclay schools (CentraleSupélec, Polytech Paris-Saclay, ENSTA Paris) in a double-degree track, allowing them to complete their fifth year at their school in parallel with the M2 DiPaC. A dedicated academic arrangement between the programs makes it possible to take courses in both, with possible exemptions subject to the approval of the master's program directors. If you are considering applying, contact the M2 DiPaC coordinator and your year supervisor at your school beforehand.
Positioning relative to the M2 QMI. Although the M2 DiPaC offers elective courses in quantum computing aligned with classical/HPC workflows, applicants who wish to focus exclusively on advanced quantum information are encouraged to apply to the partner M2 QMI program.
A limited number of scholarships (Eiffel, IDEX, Quantum Saclay) are available for exceptional applicants.
Application period(s)
Inception platform
From 15/04/2026 to 30/05/2026
Supporting documents
Required
Cover letter.
A letter detailing your motivation and your reasons for wanting to study parallel and distributed computing in the M2 DiPaC master's program, in relation to your previous studies and experience as well as your future career plans.
Transcripts.
All transcripts for the years/semesters completed since the baccalaureate, as of the application date.
Curriculum Vitae.
A CV detailing all previous studies, internships, training courses, professional experience (if relevant), honors/awards, and personal interests and activities.
Optional
Copy of diplomas.
Letter of recommendation or internship evaluation.
Any additional document.
VAPP file (mandatory for all applicants seeking recognition of prior learning to access the program): https://www.universite-paris-saclay.fr/formation/formation-continue/validation-des-acquis-de-lexperience.
Required only if you have officially had your prior professional experience validated as equivalent to a university degree.
Supporting document for exiled applicants with refugee status, subsidiary protection, or temporary protection in France or abroad (optional but recommended; provide only one document):
- Residence permit bearing the mention "refugee" from the country of first asylum
- OR a receipt bearing the mention "refugee" from the country of first asylum
- OR a document from the United Nations High Commissioner for Refugees recognizing refugee status
- OR a receipt bearing the mention "refugee" issued in France
- OR a residence permit bearing the mention "refugee" issued in France
- OR a document attesting to beneficiary status of subsidiary protection in France or abroad.
Einstein summation convention and index manipulation
Tensor decompositions and networks
Refresher on matrix decompositions (QR, SVD, Cholesky, LU, low-rank decompositions)
Introduction to tensor decompositions and networks
Canonical polyadic decomposition (CPD)
Tucker and Hierarchical Tucker decompositions
Tensor-train decomposition (TT), Matrix product states (MPS) and Projected entangled-pair states (PEPS) networks
Numerical methods for tensor computations
Algorithms for computing tensor decompositions (Tensor SVD)
Low-rank tensor arithmetic
Tensor cross-approximation
Optimization algorithms on tensor manifolds (ALS, AMEN)
Tensor completion and recovery
Applications in Quantum Computing, High Performance Computing, and AI
Computational challenges in tensor algorithms
Tensor networks in quantum chemistry and physics, quantum simulation, ...
Tensor decompositions in multivariate data analysis, neural networks, recommender systems, ...
Learning objectives:
At the end of this course, students will be able to:
Master the mathematical foundations of tensors and tensor operations.
Understand the philosophy of low-rank computations using tensor decompositions/networks.
Implement tensor network computations using Python libraries such as NumPy or TT-toolbox.
Explore the applications of tensor computations in quantum computing, high performance computing, and AI/data science.
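As a small, hedged illustration of the NumPy-based lab work mentioned above (the example itself is our assumption, not course material), Einstein summation can build and contract a rank-1 CP tensor:

```python
import numpy as np

# Rank-1 canonical polyadic (CP) tensor: T[i, j, k] = a[i] * b[j] * c[k]
a, b, c = np.arange(2.0), np.arange(3.0), np.arange(4.0)
T = np.einsum('i,j,k->ijk', a, b, c)  # outer product via Einstein summation

# Contracting T against the same vectors yields the product of squared norms
s = np.einsum('ijk,i,j,k->', T, a, b, c)
assert np.isclose(s, (a @ a) * (b @ b) * (c @ c))

# Any unfolding (matricization) of a rank-1 tensor has matrix rank at most 1
M = T.reshape(T.shape[0], -1)  # mode-1 unfolding
assert np.linalg.matrix_rank(M) <= 1
```

The same `einsum` index notation scales to tensor-train contractions, which is one reason NumPy is a convenient playground before moving to dedicated toolboxes.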
General course organization and teaching modalities:
The course combines lectures introducing theoretical concepts with hands-on programming labs that allow students to apply and test these concepts in practice.
Evaluation is continuous, based on multiple written and programming assignments distributed throughout the course.
Competencies gained in the course:
Upon successful completion, students will have acquired the following competencies:
Technical competence: Ability to design and implement numerical methods using tensor networks.
Analytical competence: Mastering the mathematical foundations of low-rank tensor decompositions and networks and numerical algorithms leveraging them.
Practical competence: Using tensor network toolboxes in Python for rapidly developing tensor network applications.
Problem-solving competence: Understand when and where low-rank tensor computations can be applied to accelerate computations through effective numerical compression.
Bibliography:
Grey Ballard, Tamara Kolda. Tensor Decompositions for Data Science. 2025.
Alain Franc. Tensor Ranks for the Pedestrian for Dimension Reduction and Disentangling Interactions, 2002.
Session 5 (Lab): Practical work on linear systems.
Session 6 (Lab): Practical work on PDEs.
Session 7 (Lab): Practical work on eigensolvers and SVD.
Learning objectives:
At the end of this course, students will be able to:
Understand the interest of numerical algorithms for solving scientific problems.
Identify the main linear algebra kernels required for scientific computing applications.
Apply linear algebra routines to general fields like HPC, AI or quantum computing.
Develop and execute numerical algorithms in Python.
Know the main issues in finite precision computations and problem conditioning.
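A minimal sketch of the kind of Python lab exercise implied above, solving a linear system and relating the error to the condition number (the random test matrix is an illustrative assumption, not course content):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))   # small, generically well-conditioned system
x_true = np.ones(5)
b = A @ x_true

x = np.linalg.solve(A, b)         # LU-based direct solver
residual = np.linalg.norm(A @ x - b) / np.linalg.norm(b)
kappa = np.linalg.cond(A)         # 2-norm condition number

# A backward-stable solve leaves a tiny relative residual; the forward
# error is then bounded (roughly) by cond(A) times that residual.
assert residual < 1e-12
assert np.linalg.norm(x - x_true) <= kappa * 1e-12
```

Repeating the experiment with an ill-conditioned matrix (e.g. a Hilbert matrix) makes the role of `kappa` in the error bound visible, which is the finite-precision point of the last objective.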
General course organization and teaching modalities:
The course combines lectures introducing theoretical concepts with hands-on programming labs that allow students to apply and test these concepts in practice.
Evaluation is continuous, based on written and programming exercises throughout the course. Students will use Python to complete lab work.
Competencies gained in the course:
Upon successful completion, students will have acquired the following competencies:
Technical competence: Ability to design and implement numerical algorithms in Python.
Analytical competence: Capacity to understand numerical algorithms in the context of finite precision computation.
Practical competence: Knowledge of the main linear algebra solvers.
Problem-solving competence: Ability to identify scientific problems and to choose the most suited algorithms for solving them.
Transferable skills: Experience working with numerical linear algebra problems.
Bibliography:
Carl D. Meyer. Matrix Analysis and Applied Linear Algebra. SIAM, 2023 (second edition).
Gene H. Golub and Charles F. Van Loan. Matrix Computations. Johns Hopkins University Press, 2013 (fourth edition).
N. J. Higham. Accuracy and Stability of Numerical Algorithms. SIAM 2002 (second edition).
Y. Saad, Iterative Methods for Sparse Linear Systems. SIAM 2003 (second edition).
Session 1 Introduction to natural algorithms and to the model of Population Protocols.
Session 2: Computational power of Population Protocols and its fault-tolerance.
Session 3: Chemical Reaction Network model and its relation to Population Protocols.
Session 4: Robust and efficient counting in Population Protocols.
Session 5: Self-stabilizing Population Protocols.
Session 6: Proof labeling schemes and communication complexity techniques.
Session 7: Students’ presentations on natural algorithms and oral examination on the course material.
Learning objectives:
Nature has developed distributed algorithms (without centralized control) that are efficient and require few resources and little energy. Networked systems have sometimes drawn inspiration from them. This course is devoted to the study of algorithms related, in one way or another, to natural phenomena. They are based on distributed models composed of components that are very limited in resources and in computing and communication capacities.
For example, in the population protocol model, agents, fixed or mobile, anonymous and indistinguishable, with limited memories, interact in pairs in an unpredictable and asynchronous manner.
Another example is microbiological distributed systems, such as bacterial and viral cultures. These are again very limited systems, in which algorithms are developed to perform computations (microbiological circuits) or regulate self-administered drugs.
One of the main objectives of the course will be to understand how purely computational distributed problems (aggregation, synchronization, coordination, communication, etc.) can be solved in such models with limited resources.
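As a concrete (and deliberately simple) illustration of the model, here is a sketch, under our own assumptions rather than course material, of the classic leader-election population protocol, whose only non-trivial transition is (L, L) -> (L, F):

```python
import random

def leader_election(n, seed=0):
    """Simulate anonymous agents under uniformly random pairwise interactions."""
    rng = random.Random(seed)
    leader = [True] * n            # every agent starts as a leader
    interactions = 0
    while sum(leader) > 1:
        i, j = rng.sample(range(n), 2)   # the scheduler picks a random pair
        if leader[i] and leader[j]:
            leader[j] = False            # transition (L, L) -> (L, F)
        interactions += 1
    return interactions

# Convergence to a unique leader is guaranteed; the expected number of
# interactions under the uniform scheduler is Theta(n^2).
print(leader_election(100))
```

Each agent needs only one bit of state and no identifier, which is exactly the resource-limited setting the course studies.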
General course organization and teaching modalities:
The course lectures combine theoretical material together with illustrating examples and exercises.
The teaching is interactive, stimulating thinking and memorization.
Evaluation is based on students’ presentations of scientific works on natural algorithms and on oral examination on the course material.
Competencies gained in the course:
Upon successful completion, students will have acquired the following competencies:
Understanding of how natural or nature inspired distributed systems can be modeled and being able to model such systems
Ability to analyze algorithms for such systems: prove their correctness and evaluate their complexity
Ability to design such algorithms using limited resources and tolerating failures
Performance analysis: profiling, benchmarking, vectorization, and cache optimization
Efficient work-group and work-item mapping for CPUs, GPUs, and accelerators
Advanced kernel design: nested parallelism, local memory, synchronization, and lambda-based kernels
Best practices in software engineering for high-performance data-parallel C++ applications
Learning objectives:
At the end of this course, students will be able to:
Master data-parallel C++ paradigms – apply SYCL and oneAPI programming models to implement efficient and portable data-parallel computations across CPUs, GPUs, and other accelerators.
Optimize for performance and scalability – analyze and improve memory access patterns, vectorization, and workload distribution to maximize performance on heterogeneous architectures.
Design complex parallel algorithms – develop sophisticated data-parallel algorithms for high-performance applications such as scientific computing, simulations, and machine learning.
Integrate and exploit heterogeneous architectures – effectively orchestrate computation across multiple devices using SYCL/oneAPI to achieve high throughput and resource utilization.
Implement robust and maintainable parallel systems – apply advanced debugging, verification, and software engineering practices to ensure correctness and maintainability of large-scale data-parallel programs.
General course organization and teaching modalities:
The course combines lectures introducing theoretical concepts with hands-on programming labs that allow students to apply and test these concepts in practice.
Evaluation is continuous, based on multiple written and programming assignments distributed throughout the course. Students will use CPU and GPU-equipped computing environment to complete lab work and assignments.
Competencies gained in the course:
Upon successful completion, students will have acquired the following competencies:
Technical competence: Ability to design and implement high-performance data-parallel programs using SYCL/oneAPI across heterogeneous architectures (CPU, GPU, and accelerators).
Analytical competence: Capacity to analyze performance bottlenecks, profile SYCL/oneAPI applications, and optimize memory access, vectorization, and workload distribution.
Practical competence: Proficiency in using SYCL/oneAPI development environments, compilers, and profiling tools for high-performance C++ programming.
Problem-solving competence: Ability to identify computational problems suitable for data-parallel acceleration and implement efficient, scalable solutions.
Transferable skills: Experience in performance tuning, efficient memory and execution management, and structured experimentation with large-scale heterogeneous computational problems.
Bibliography:
Reinders, J., Ashbaugh, B., Brodman, J., Kinsner, M., Pennycook, J., and Tian, X. Data Parallel C++: Mastering DPC++ for Programming of Heterogeneous Systems Using C++ and SYCL. Apress, 2021.
Familiarity with basic notions of algorithms and algorithm analysis (complexity measures, asymptotic notation).
Familiarity with basic notions of computational complexity theory (NP-hardness, reductions).
Familiarity with basic notions of graph theory and elementary graph algorithms.
Course program, plan, content:
Total duration: 21 hours (7 sessions × 3 hours)
The mobile agent paradigm was proposed in the 1990s as a concept that facilitates various fundamental networking operational requirements and tasks, such as fault tolerance, network management, and data acquisition. Mobile agents serve as a natural model for the fundamental computing entities of systems with inherent mobility (mobile code, malware propagation, web crawlers, etc.) and, as such, they have found application as a software design paradigm for various networked systems. A second perspective on the mobile agent paradigm is as a model for robots that operate and move in continuous spaces, with typical applications in the fields of artificial intelligence, robotics, and control.
The distributed algorithms community has taken a strong interest both in software agents and in robots, developing a rich literature and an active research field. After presenting the two main model categories of robots (Look-Compute-Move) and of software agents, we will treat the fundamental algorithmic problems of the field, such as pattern formation by groups of robots, gathering, rendezvous, exploration, and black hole search (the detection of dangerous nodes in a network). The unifying theme of the course is the design and analysis of algorithms which enable agent collaboration and problem solving, despite their limited communication capabilities, their limited knowledge of the domain in which they move, and, in some cases, the asynchronicity of the system or the presence of faults.
Learning objectives:
At the end of this course, students will be familiar with the main models of mobile agent computing, as well as with some of the most fundamental problems and solutions thereof that have been proposed in the literature. They will be able to develop algorithms, prove impossibility results, formulate and prove properties of such systems.
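To make the setting concrete, here is a small sketch (our own illustration, not an algorithm from the course) of a single agent exploring an anonymous port-labelled graph depth-first; for simplicity, the traversed (node, port) pairs are tracked centrally rather than on per-node whiteboards:

```python
def explore(adj, start=0):
    """adj[u] lists the neighbours of u, indexed by local port number."""
    used_ports = set()   # (node, outgoing port) pairs already traversed
    order = []           # nodes in first-visit order

    def dfs(u):
        order.append(u)
        for port, v in enumerate(adj[u]):
            if (u, port) not in used_ports:
                used_ports.add((u, port))
                if v not in order:
                    dfs(v)

    dfs(start)
    return order

# A 4-cycle: the agent visits every node knowing only local port numbers.
cycle = [[1, 3], [2, 0], [3, 1], [0, 2]]
assert sorted(explore(cycle)) == [0, 1, 2, 3]
```

Even this toy version shows the course's core tension: the agent has no global map or node identifiers, only local ports, yet must guarantee complete exploration.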
General course organization and teaching modalities:
The course is organized in seven 3-hour lectures, allowing time for problem solving and discussion of algorithmic approaches.
Students are evaluated based on multiple written take-home assignments distributed throughout the course.
Bibliography:
Paola Flocchini, Giuseppe Prencipe, and Nicola Santoro (eds.): Distributed computing by mobile entities: current research in moving and computing. Lecture Notes in Computer Science, vol. 11340. Springer, 2019.
Basic knowledge of programming in C or C++ is required.
Familiarity with parallel programming concepts and computer architecture.
[M1 DiPaQ] Introduction to parallel algorithms and programming
[M2 DiPaC] High-performance computing on multicore architectures
Course program, plan, content:
Total duration: 21 hours (6 sessions × 3.5 hours)
Session 1 (Lecture): Introduction to GPU computing; overview of parallel architectures; CUDA programming model and GPU execution model.
Session 2 (Lecture + Lab): CUDA programming basics — kernels, threads, blocks, and grids; CPU-GPU memory transfers and allocation, writing and launching simple CUDA kernels; introduction to thread indexing and memory access patterns.
Session 3 (Lecture + Lab): 2D and 3D indexing/kernels. CUDA matrix multiplication with 1D and 2D GPU kernels.
Session 4 (Lecture + Lab): GPU memory hierarchy — global, shared, constant, and texture memory; performance implications of memory access; memory coalescing. Matrix multiplication and 9-point stencil computation using shared memory.
Session 5 (Lecture + Lab): Reduction algorithms and fast matrix transposition using CUDA.
Session 6 (Lecture + Lab): Advanced CUDA concepts — streams, events, and asynchronous execution; use of CUDA libraries (cuBLAS, cuSOLVER); integrating GPU computations into larger applications; a small application using the cuBLAS and cuSOLVER libraries.
Learning objectives:
At the end of this course, students will be able to:
Understand the architecture and execution model of modern GPUs.
Explain the CUDA programming model and its main components (kernels, threads, blocks, grids).
Develop and execute GPU programs using CUDA C/C++.
Efficiently manage GPU memory and data transfers between host and device.
Apply optimization strategies to improve GPU program performance.
Design and implement small-scale applications leveraging GPU acceleration.
General course organization and teaching modalities:
The course combines lectures introducing theoretical concepts with hands-on programming labs that allow students to apply and test these concepts in practice.
Each 3.5-hour session blends approximately 50% lecture and 50% programming exercises.
Evaluation is continuous, based on multiple written and programming assignments distributed throughout the course. Students will use CUDA on a GPU-equipped computing environment to complete lab work and assignments.
Competencies gained in the course:
Upon successful completion, students will have acquired the following competencies:
Technical competence: Ability to design and implement high-performance programs using CUDA for parallel computation.
Analytical competence: Capacity to analyze performance bottlenecks, apply profiling tools, and optimize GPU-based solutions.
Practical competence: Proficiency in using GPU development environments, compilers, and debugging/profiling utilities.
Problem-solving competence: Ability to identify problems suitable for GPU acceleration and implement efficient solutions.
Transferable skills: Experience working with large-scale computational problems, performance tuning, and structured experimentation.
Bibliography:
Sanders, J., & Kandrot, E. CUDA by Example: An Introduction to General-Purpose GPU Programming. Addison-Wesley, 2010.
Kirk, D. B., & Hwu, W. W. Programming Massively Parallel Processors: A Hands-on Approach. Morgan Kaufmann, 3rd Edition, 2016.
NVIDIA Corporation. CUDA C Programming Guide. (latest version available online at developer.nvidia.com/cuda-zone)
Farber, R. Parallel Programming with OpenACC. Morgan Kaufmann, 2016 (for comparison with directive-based models).
[DKAI] Distributed Query Processing and Optimization
ECTS:
3
Calendar semester:
Semester 2
Hours breakdown:
Lectures: 12
Labs: 9
Language of instruction
English
Distance learning
No
Course program, plan, content:
By the end of this course, students will be able to:
- Understand the principles and architecture of modern systems for massive data processing.
- Explain the internal components of a relational DBMS, including buffer management, indexing, operator algorithms, and query evaluation plans.
- Apply techniques for query evaluation and optimization in SQL-based systems.
- Analyze how NoSQL systems manage, store, and process large-scale data.
- Describe the architecture and functionalities of distributed data systems such as Apache Hadoop and Apache Spark.
- Gain hands-on experience through practical lab sessions on data management and processing at scale.
Learning objectives:
This course provides the foundations to understand and efficiently use systems that process massive data. It covers both relational Database Management Systems (DBMS) and some distributed NoSQL systems, for which we study query evaluation and optimization techniques as well as techniques that allow these systems to scale data processing. The first part of the course covers the core of relational SQL query optimization: the functionalities of a DBMS, buffer management, indexes, algorithms for operators, and query evaluation plans. The second part analyzes how NoSQL systems scale data management, storage, and computation. We study the architecture, data structures, formats, and user interfaces of systems such as Apache Hadoop and Spark. Practical labs on machines will represent roughly one third of the lecture slots.
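The effect of indexing on query evaluation plans, one of the course's core topics, can be previewed in a few lines with SQLite (a stand-in we chose purely for illustration; the course itself targets full DBMSs and Hadoop/Spark):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (id INTEGER, name TEXT)")
con.executemany("INSERT INTO users VALUES (?, ?)",
                [(i, f"user{i}") for i in range(1000)])

def plan(sql):
    """Return SQLite's textual query evaluation plan for a statement."""
    rows = con.execute("EXPLAIN QUERY PLAN " + sql)
    return " ".join(r[3] for r in rows)

query = "SELECT name FROM users WHERE id = 42"
before = plan(query)                       # full table scan
con.execute("CREATE INDEX idx_id ON users(id)")
after = plan(query)                        # index lookup

assert "SCAN" in before and "idx_id" in after
print(before, "->", after)
```

The optimizer switches from a scan operator to an index search once the access path exists, which is exactly the plan-selection behavior studied in the first part of the course.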
Session 3 (Lecture + TD): Phase-free normal forms and CSS states — Different normal forms, reduction, CSS stabilisers to/from ZX state
Session 4 (Lecture + TD): Clifford ZX & Clifford normal form — new generators, new axioms, Clifford circuits to ZX translation, graph-like form, normal form, reduction, Gottesman-Knill theorem
Session 5 (Lecture + TD): Clifford ZX & stabilisers — stabiliser groups, simplification of stabiliser groups, generation of Clifford ZX state
Session 6 (Lecture + TD): Full ZX and measurements — complete set of generators, universality, circuits to ZX translation, measurements with variables, measurements as channels, verification
Learning objectives:
At the end of this course, students will be able to:
Compute the interpretation of a ZX-diagram
Turn circuits into ZX-diagrams / express quantum protocols as ZX-diagrams
Perform rewriting, simplify a diagram
Put a diagram in (phase-free/Clifford) normal form
Express a CSS/stabiliser state as a ZX-diagram (from its stabilisers)
Express channels as ZX-diagrams
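For reference when computing interpretations, the standard semantics of the Z-spider with n inputs, m outputs, and phase α (a well-known ZX-calculus fact, recalled here as an instance of the first objective) reads:

```latex
\[
  \bigl[\!\bigl[\, Z^{n \to m}_{\alpha} \,\bigr]\!\bigr]
  \;=\;
  \lvert 0 \rangle^{\otimes m} \langle 0 \rvert^{\otimes n}
  \;+\;
  e^{i\alpha}\, \lvert 1 \rangle^{\otimes m} \langle 1 \rvert^{\otimes n}
\]
```

The X-spider has the same expression written in the $\lvert \pm \rangle$ basis, and compositions of spiders are interpreted by the usual tensor product and matrix composition of these linear maps.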
General course organization and teaching modalities:
The course combines lectures introducing theoretical concepts with exercises that allow students to apply and test these concepts.
Each 3.5-hour session blends approximately 60% lecture and 40% exercises.
Evaluation is done through a final exam.
Competencies gained in the course:
Upon successful completion, students will have acquired the following competencies:
Technical competence: Ability to use ZX-calculus to tackle different quantum-related problems, such as state synthesis, verification, and circuit optimisation.
Analytical competence: Capacity to analyse quantum protocols through the diagram's interpretation and simplification, and to show or disprove equivalence of protocols.
Practical competence: Proficiency in using ZX-calculus simplifications to show a property of the quantum program
Problem-solving competence: Ability to identify how and where ZX-calculus can be used to address a quantum-related issue.
Transferable skills: Experience working with a formal graphical framework, as well as rewrite systems for optimisation purposes. Analysis of quantum protocols.
Bibliography:
John van de Wetering. 2020. ZX-calculus for the working quantum computer scientist. Retrieved from https://arxiv.org/abs/2012.13966
Aleks Kissinger and John van de Wetering. 2024. Picturing Quantum Software. Preprint. Retrieved from https://zxcalc.github.io/book/
This course aims at mastering the core concepts of algorithmic design in ML, from an optimization or a probabilistic point of view, using supervised and unsupervised algorithms.
1. Regression/classification in optimization and probabilistic frameworks; implications for batch and stochastic gradient descent
2. Learning theory and the Vapnik-Chervonenkis dimension
3. Evaluating the performance of ML algorithms in different contexts (imbalanced, small-sized, etc.)
4. Probabilistic framework for machine learning: Discriminative vs Generative learning, Empirical Risk Minimization, Risk Decomposition, Bias-Variance Tradeoff; Maximum Likelihood Estimation (MLE), MLE and OLS in regression, MLE and IRLS in softmax classification
5. Unsupervised learning and clustering: K-means, mixture models, EM algorithms, etc.
6. Unsupervised learning and dimensionality reduction: PCA, probabilistic PCA & EM, ICA, etc.
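To connect topics 1 and 4, here is a minimal sketch (our illustrative assumption, not course code) contrasting batch and stochastic gradient descent on least-squares regression, where the MLE under Gaussian noise coincides with ordinary least squares:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.01 * rng.standard_normal(200)

def batch_gd(steps=500, lr=0.1):
    w = np.zeros(3)
    for _ in range(steps):
        w -= lr * X.T @ (X @ w - y) / len(y)   # full-batch gradient of the MSE
    return w

def sgd(epochs=50, lr=0.01):
    w = np.zeros(3)
    for _ in range(epochs):
        for i in rng.permutation(len(y)):      # one sample per update
            w -= lr * (X[i] @ w - y[i]) * X[i]
    return w

# Both reach the (here, near-noiseless) least-squares solution; SGD trades
# cheap per-step cost for gradient noise, as in the probabilistic framing.
assert np.allclose(batch_gd(), w_true, atol=0.02)
assert np.allclose(sgd(), w_true, atol=0.05)
```

The constant-step-size SGD iterate hovers in a noise ball around the optimum, which is one concrete entry point to the bias-variance and risk-decomposition discussion of topic 4.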
Recommended reading: An Introduction to Statistical Learning (James, Witten, Hastie & Tibshirani, 2013); Pattern Classification (Duda, Hart & Stork, 2000); Apprentissage artificiel : concepts et algorithmes (Cornuéjols & Miclet, 2011).
5 hours of lectures on direct and indirect impacts, and several tutorials including an IT equipment inventory, the analysis of CSR reports from major digital groups, an assessment of the specific impacts of AI, and a poster session based on scientific articles on the subject.
Learning objectives:
During this course, students will explore the environmental damage linked to digital technology and various ways to assess and limit it. The aim is for attendees to be able, afterwards, to think critically as computer engineers, to know what steps to take to assess the impact of their hardware purchases and software developments, and to know how to reduce this impact.