Distributed and Parallel Computing (M2 DiPaC)
Master's degree
Computer Science
Full-time academic programmes
English
The M2 DiPaC master’s program focuses on advanced topics in high-performance computing (HPC) and distributed computing on large-scale systems. Students learn to design fast, scalable, and robust solutions that address the heavy computational demands of applications in artificial intelligence (AI), big data analytics, and scientific and quantum computing. The program prepares graduates for careers in engineering and R&D, or doctoral studies in parallel and distributed computing.
The curriculum includes seven core disciplinary courses in high-performance, parallel, and distributed computing. Students also choose two elective courses to further specialize either in high-performance data analysis and AI (HPDA) or in hybrid high-performance/quantum computing (HQI). One soft-skill course from the university catalog reinforces transferable professional skills that support long-term career development. A mandatory 6-month internship on M2 DiPaC-related themes completes the program.
The program’s official language is English: all courses are taught in English and all course materials are provided in English. Most of our faculty are also fluent in French, so interaction in French is possible in courses and assignments (homework, exams, etc.) if needed.
The program is closely integrated within the Paris-Saclay ecosystem of research laboratories and industrial partners.
Software engineering for HPC: modern C++, testing, CI, reproducibility.
Version control with Git and rigorous documentation.
Technical communication and project scoping in research and industry contexts.
Responsible, reliable, and sustainable computing practices.
Objectives
Computer systems are evolving toward higher efficiency and richer functionality across three major, interconnected scientific fields:
Distributed systems deliver connectivity and dependable operation across the Internet, clouds, clusters, and sensors, addressing hard problems in synchronization, security, concurrency, and robustness.
High-performance and parallel computing (HPC) tackles intensive workloads in science and AI by exploiting supercomputing architectures and rigorous performance engineering.
Quantum computing provides algorithms and hardware that exploit quantum parallelism to achieve gains unreachable by classical paradigms.
Building on the foundations that M1 DiPaQ establishes in HPC, distributed systems, and quantum computing, M2 DiPaC specializes in advanced topics in HPC and distributed algorithms for large-scale systems. Students learn to design fast, scalable, and robust solutions for applications in AI, big data analytics, scientific simulations, and quantum-enabled workflows, and have the opportunity to specialize either in high-performance data analysis (HPDA) or in hybrid HPC/quantum computing (HQI) through elective courses.
Knowledge objectives:
Parallel programming models and performance engineering on modern supercomputers and accelerators.
Large-scale distributed algorithms and systems: replication, consensus, consistency, mobile agents, and nature-inspired algorithms with robustness and performance guarantees.
Big data analysis, machine learning, and AI algorithms with massive computational challenges (HPDA).
Quantum algorithms and simulation using both classical/HPC and quantum workflows (HQI).
Skill objectives:
Building high-quality parallel and distributed algorithms and software that meet latency, throughput, and performance goals on the intended HPC architectures.
Developing and analyzing scalable, robust algorithms with theoretical guarantees on scalability, consensus, termination, and fault tolerance.
Optimizing HPC code across the stack: complexity, memory locality, vectorization, accelerator use, communication, I/O, and networks.
Resources and practice:
Access to university clusters and partner supercomputers for hands-on labs, course projects, code development, and tuning.
Use of open-source toolchains and libraries widely adopted by the HPC community.
Acquiring modern HPC software engineering practices with advanced C++, IDEs, version control (Git), documentation, and continuous integration.
Career Opportunities
After the master's and a PhD: researcher, assistant professor, or professor
Data scientist
Teacher-researcher (after a PhD)
Research engineer
Development engineer
Maintenance engineer
R&D engineer
Technical support engineer
Subject to success in the civil-service competitive examination, graduates may obtain positions as research engineers or researchers within a national research organization.
Further Study Opportunities
Data scientist, data analyst, or machine learning engineer in innovative sectors (tech, finance, healthcare, energy, etc.)
Engineering studies, research and development
PhD
Fees and scholarships
The amounts may vary depending on the programme and your personal circumstances.
Audience. M2 DiPaC is the natural continuation of M1 DiPaQ for students aiming to specialize in advanced high-performance, parallel, and distributed computing.
Prerequisites. Strong computer science background with basics in parallel programming and distributed systems. Solid mathematics (especially linear algebra) and programming skills are expected.
Admissions from other related tracks. Outstanding students from related master tracks can be admitted if they already have some fundamentals in parallel or distributed computing. If needed, they will be allowed to take selected M1 DiPaQ courses as electives to close gaps.
Double-degree pathway. We routinely admit exceptional engineering students finishing their fourth year (M1 equivalent) at nearby Paris-Saclay schools (CentraleSupélec, Polytech Paris-Saclay, ENSTA Paris, ...) as double-degree students who wish to complete their fifth year at the engineering school in parallel with M2 DiPaC. A dedicated arrangement between the two programs allows students to study in both, with possible course waivers subject to approval by the master’s coordinators on each side. If you plan to apply, please contact the M2 DiPaC coordinator and your academic coordinator at the engineering school in advance.
Positioning with respect to M2 QMI. While M2 DiPaC offers some optional quantum computing courses aligned with classical/HPC workflows, applicants seeking an exclusive focus on advanced quantum information science are encouraged to apply to our partner program M2 QMI.
A limited number of scholarships (Eiffel, IDEX, Quantum Saclay) are available for exceptional candidates.
Application Period(s)
Inception Platform
From 15/04/2026 to 30/05/2026
Supporting documents
Compulsory supporting documents
Motivation letter.
A letter detailing the motivation and reasons for wishing to study parallel and distributed computing in the M2 DiPaC master's program, in light of previous studies and experience as well as future career plans.
All transcripts for the years/semesters validated since the high-school diploma, as of the application date.
Grades of all courses since high school.
Curriculum Vitae.
A CV detailing all previous studies, internships, trainings, work experience (if any), distinctions and awards, as well as other personal interests and activities.
Additional supporting documents
Copies of diplomas.
Letter of recommendation or internship evaluation.
Any additional document of your choice.
VAP file (mandatory for applicants requesting validation of prior professional experience for admission to the degree).
Only needed if you have officially validated prior professional experience as equivalent to a university degree.
Supporting documents :
- Residence permit stating the first country of asylum
- Or receipt of request stating the country of first asylum
- Or document from the UNHCR granting refugee status
- Or receipt of refugee status request delivered in France
- Or residence permit stating the refugee status delivered in France
- Or document stating subsidiary protection in France or abroad
- Or document stating temporary protection in France or abroad.
Einstein summation convention and index manipulation
Tensor decompositions and networks
Refresher on matrix decompositions (QR, SVD, Cholesky, LU, low-rank decompositions)
Introduction to tensor decompositions and networks
Canonical polyadic decomposition (CPD)
Tucker and Hierarchical Tucker decompositions
Tensor-train decomposition (TT), Matrix product states (MPS) and Projected entangled-pair states (PEPS) networks
Numerical methods for tensor computations
Algorithms for computing tensor decompositions (Tensor SVD)
Low-rank tensor arithmetic
Tensor cross-approximation
Optimization algorithms on tensor manifolds (ALS, AMEN)
Tensor completion and recovery
Applications in Quantum Computing, High Performance Computing, and AI
Computational challenges in tensor algorithms
Tensor networks in quantum chemistry and physics, quantum simulation, ...
Tensor decompositions in multivariate data analysis, neural networks, recommender systems, ...
Learning objectives:
At the end of this course, students will be able to:
Explain the mathematical foundations of tensors and tensor operations.
Understand the philosophy of low-rank computations using tensor decompositions/networks.
Implement tensor network computations using Python libraries such as NumPy or TT-toolbox.
Explore the applications of tensor computations in quantum computing, high performance computing, and AI/data science.
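As a taste of the lab work, here is a minimal NumPy sketch (illustrative, not course material; matrix sizes are arbitrary) of the truncated SVD, the low-rank building block behind the Tucker and tensor-train decompositions listed above:

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a 100x80 matrix of exact rank 5 (product of two thin factors).
A = rng.standard_normal((100, 5)) @ rng.standard_normal((5, 80))

def truncated_svd(M, k):
    """Best rank-k approximation of M in the Frobenius norm (Eckart-Young)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

A5 = truncated_svd(A, 5)
rel_err = np.linalg.norm(A - A5) / np.linalg.norm(A)
print(f"relative error of rank-5 approximation: {rel_err:.2e}")
```

Because A has exact rank 5, the rank-5 truncation recovers it to machine precision; truncating below the true rank trades accuracy for compression, which is the central idea of low-rank tensor formats.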
General course organization and teaching modalities:
The course combines lectures introducing theoretical concepts with hands-on programming labs that allow students to apply and test these concepts in practice.
Evaluation is continuous, based on multiple written and programming assignments distributed throughout the course.
Competencies gained in the course:
Upon successful completion, students will have acquired the following competencies:
Technical competence: Ability to design and implement numerical methods using tensor networks.
Analytical competence: Mastering the mathematical foundations of low-rank tensor decompositions and networks and numerical algorithms leveraging them.
Practical competence: Using tensor network toolboxes in Python for rapidly developing tensor network applications.
Problem-solving competence: Understand when and where low-rank tensor computations can be applied to accelerate computations through effective numerical compression.
Bibliography:
Grey Ballard, Tamara Kolda. Tensor Decompositions for Data Science. 2025.
Alain Franc. Tensor Ranks for the Pedestrian for Dimension Reduction and Disentangling Interactions, 2002.
Session 5 (Lab): Practical work on linear systems.
Session 6 (Lab): Practical work on PDEs.
Session 7 (Lab): Practical work on eigensolvers and SVD.
Learning objectives:
At the end of this course, students will be able to:
Understand the relevance of numerical algorithms for solving scientific problems.
Identify the main linear algebra kernels required for scientific computing applications.
Apply linear algebra routines in fields such as HPC, AI, or quantum computing.
Develop and execute numerical algorithms in Python.
Know the main issues in finite precision computations and problem conditioning.
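The objectives above can be made concrete with a minimal NumPy sketch (illustrative only; the matrix and sizes are invented for this example) of solving a linear system with a direct solver and checking problem conditioning:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
# Shifting by n*I keeps the matrix well-conditioned by construction.
A = rng.standard_normal((n, n)) + n * np.eye(n)
x_true = rng.standard_normal(n)
b = A @ x_true

x = np.linalg.solve(A, b)              # LU-based direct solver
residual = np.linalg.norm(A @ x - b) / np.linalg.norm(b)
cond = np.linalg.cond(A)               # bounds the amplification of input errors

print(f"relative residual: {residual:.2e}, cond(A): {cond:.1f}")
```

In finite precision, the forward error in x can be roughly cond(A) times the backward error, which is why the condition number is checked alongside the residual.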
General course organization and teaching modalities:
The course combines lectures introducing theoretical concepts with hands-on programming labs that allow students to apply and test these concepts in practice.
Evaluation is continuous, based on written and programming exercises throughout the course. Students will use Python to complete lab work.
Competencies gained in the course:
Upon successful completion, students will have acquired the following competencies:
Technical competence: Ability to design and implement numerical algorithms in Python.
Analytical competence: Capacity to understand numerical algorithms in the context of finite precision computation.
Practical competence: Knowledge of the main linear algebra solvers.
Problem-solving competence: Ability to identify scientific problems and to choose the most suited algorithms for solving them.
Transferable skills: Experience working with numerical linear algebra problems.
Bibliography:
Carl D. Meyer. Matrix Analysis and Applied Linear Algebra. SIAM, 2023 (second edition).
Gene H. Golub and Charles F. Van Loan. Matrix Computations. Johns Hopkins University Press, 2013 (fourth edition).
N. J. Higham. Accuracy and Stability of Numerical Algorithms. SIAM, 2002 (second edition).
Y. Saad. Iterative Methods for Sparse Linear Systems. SIAM, 2003 (second edition).
Session 1 Introduction to natural algorithms and to the model of Population Protocols.
Session 2: Computational power of Population Protocols and its fault-tolerance.
Session 3: Chemical Reaction Network model and its relation to Population Protocols.
Session 4: Robust and efficient counting in Population Protocols.
Session 5: Self-stabilizing Population Protocols.
Session 6: Proof labeling schemes and communication complexity techniques.
Session 7: Students’ presentations on natural algorithms and oral examination on the course material.
Learning objectives:
Nature has developed distributed algorithms (without centralized control) that are efficient and require few resources and little energy; networked systems have sometimes drawn inspiration from them. This course is devoted to the study of algorithms related, in one way or another, to natural phenomena. They are based on distributed models composed of components with very limited resources and very limited computing and communication capacities.
For example, in the population protocol model, agents, fixed or mobile, anonymous and indistinguishable, with limited memory, interact in pairs in an unpredictable and asynchronous manner.
Another example is micro-biological distributed systems, such as bacterial and viral cultures. These are again very limited systems, in which algorithms are developed to perform computations (microbiological circuits) or regulate self-administered drugs.
One of the main objectives of the course will be to understand how purely computational distributed problems (aggregation, synchronization, coordination, communication, etc.) can be solved in such models with limited resources.
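To make the population protocol model concrete, here is a hedged Python sketch (illustrative, not course material) of one of the simplest protocols, leader election by pairwise elimination: every agent starts as a leader, and when two leaders interact, one yields.

```python
import random

random.seed(42)
n = 100
leader = [True] * n          # each agent carries a single bit of state

interactions = 0
while sum(leader) > 1:
    i, j = random.sample(range(n), 2)   # the scheduler picks a random pair
    if leader[i] and leader[j]:
        leader[j] = False               # the responder yields leadership
    interactions += 1

num_leaders = sum(leader)
print(f"{num_leaders} leader left after {interactions} interactions")
```

Exactly one leader always survives, but the expected number of interactions is quadratic in n under the uniform random scheduler, which illustrates why the course studies more efficient and fault-tolerant protocols.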
General course organization and teaching modalities:
The course lectures combine theoretical material together with illustrating examples and exercises.
The teaching is interactive, stimulating thinking and memorization.
Evaluation is based on students’ presentations of scientific works on natural algorithms and on oral examination on the course material.
Competencies gained in the course:
Upon successful completion, students will have acquired the following competencies:
Understanding of how natural or nature inspired distributed systems can be modeled and being able to model such systems
Ability to analyze algorithms for such systems: prove their correctness and evaluate their complexity
Ability to design such algorithms using limited resources and tolerating failures
Performance analysis: profiling, benchmarking, vectorization, and cache optimization
Efficient work-group and work-item mapping for CPUs, GPUs, and accelerators
Advanced kernel design: nested parallelism, local memory, synchronization, and lambda-based kernels
Best practices in software engineering for high-performance data-parallel C++ applications
Learning objectives:
At the end of this course, students will be able to:
Master data-parallel C++ paradigms – apply SYCL and oneAPI programming models to implement efficient and portable data-parallel computations across CPUs, GPUs, and other accelerators.
Optimize for performance and scalability – analyze and improve memory access patterns, vectorization, and workload distribution to maximize performance on heterogeneous architectures.
Design complex parallel algorithms – develop sophisticated data-parallel algorithms for high-performance applications such as scientific computing, simulations, and machine learning.
Integrate and exploit heterogeneous architectures – effectively orchestrate computation across multiple devices using SYCL/oneAPI to achieve high throughput and resource utilization.
Implement robust and maintainable parallel systems – apply advanced debugging, verification, and software engineering practices to ensure correctness and maintainability of large-scale data-parallel programs.
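The course material itself uses SYCL/C++; purely to illustrate the scalar-loop versus data-parallel contrast that underlies these objectives, here is a Python/NumPy sketch (the function names are invented for this example):

```python
import numpy as np

# Scalar formulation: one "work item" processed at a time in an explicit loop.
def saxpy_loop(a, x):
    y = np.empty_like(x)
    for i in range(len(x)):
        y[i] = a * x[i] + 1.0
    return y

# Data-parallel formulation: the same elementwise kernel mapped over all
# indices at once, letting the runtime/vector units exploit the parallelism.
def saxpy_vec(a, x):
    return a * x + 1.0

x = np.arange(1000, dtype=np.float64)
assert np.allclose(saxpy_loop(2.0, x), saxpy_vec(2.0, x))
```

In SYCL the second form corresponds to submitting a `parallel_for` over an index range; expressing the computation per-index, free of loop-carried dependencies, is what makes it portable across CPUs, GPUs, and accelerators.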
General course organization and teaching modalities:
The course combines lectures introducing theoretical concepts with hands-on programming labs that allow students to apply and test these concepts in practice.
Evaluation is continuous, based on multiple written and programming assignments distributed throughout the course. Students will use CPU and GPU-equipped computing environment to complete lab work and assignments.
Competencies gained in the course:
Upon successful completion, students will have acquired the following competencies:
Technical competence: Ability to design and implement high-performance data-parallel programs using SYCL/oneAPI across heterogeneous architectures (CPU, GPU, and accelerators).
Analytical competence: Capacity to analyze performance bottlenecks, profile SYCL/oneAPI applications, and optimize memory access, vectorization, and workload distribution.
Practical competence: Proficiency in using SYCL/oneAPI development environments, compilers, and profiling tools for high-performance C++ programming.
Problem-solving competence: Ability to identify computational problems suitable for data-parallel acceleration and implement efficient, scalable solutions.
Transferable skills: Experience in performance tuning, efficient memory and execution management, and structured experimentation with large-scale heterogeneous computational problems.
Bibliography:
Reinders, James, et al. Data Parallel C++: Mastering DPC++ for Programming of Heterogeneous Systems Using C++ and SYCL. Apress, 2021.
Familiarity with basic notions of algorithms and algorithm analysis (complexity measures, asymptotic notation).
Familiarity with basic notions of computational complexity theory (NP-hardness, reductions).
Familiarity with basic notions of graph theory and elementary graph algorithms.
Course program, plan, content:
Total duration: 21 hours (7 sessions × 3 hours)
The mobile agent paradigm was proposed in the 1990s as a concept that facilitates various fundamental networking operations and tasks, such as fault tolerance, network management, and data acquisition. Mobile agents serve as a natural model for the fundamental computing entities of systems with inherent mobility (mobile code, malware propagation, web crawlers, etc.) and, as such, they have found application as a software design paradigm for various networked systems. A second perspective on the mobile agent paradigm is as a model for robots that operate and move in continuous spaces, with typical applications in the fields of artificial intelligence, robotics, and control.
The distributed algorithms community has taken a strong interest both in software agents and in robots, developing a rich literature and an active research field. After presenting the two main model categories of robots (Look-Compute-Move) and of software agents, we will treat the fundamental algorithmic problems of the field, such as pattern formation by groups of robots, gathering, rendezvous, exploration, and black hole search (the detection of dangerous nodes in a network). The unifying theme of the course is the design and analysis of algorithms which enable agent collaboration and problem solving, despite their limited communication capabilities, their limited knowledge of the domain in which they move, and, in some cases, the asynchronicity of the system or the presence of faults.
Learning objectives:
At the end of this course, students will be familiar with the main models of mobile agent computing, as well as with some of the most fundamental problems and solutions thereof that have been proposed in the literature. They will be able to develop algorithms, prove impossibility results, formulate and prove properties of such systems.
General course organization and teaching modalities:
The course is organized in seven 3-hour lectures, allowing time for problem solving and discussion of algorithmic approaches.
Students are evaluated based on multiple written take-home assignments distributed throughout the course.
Bibliography:
Paola Flocchini, Giuseppe Prencipe, and Nicola Santoro (eds.): Distributed computing by mobile entities: current research in moving and computing. Lecture Notes in Computer Science, vol. 11340. Springer, 2019.
Basic knowledge of programming in C or C++ is required.
Familiarity with parallel programming concepts and computer architecture.
[M1 DiPaQ] Introduction to parallel algorithms and programming
[M2 DiPaC] High-performance computing on multicore architectures
Course program, plan, content:
Total duration: 21 hours (6 sessions × 3.5 hours)
Session 1 (Lecture): Introduction to GPU computing; overview of parallel architectures; CUDA programming model and GPU execution model.
Session 2 (Lecture + Lab): CUDA programming basics — kernels, threads, blocks, and grids; CPU-GPU memory transfers and allocation, writing and launching simple CUDA kernels; introduction to thread indexing and memory access patterns.
Session 3 (Lecture + Lab): 2D and 3D indexing/kernels. CUDA matrix multiplication with 1D and 2D GPU kernels.
Session 4 (Lecture + Lab): GPU memory hierarchy — global, shared, constant, and texture memory; performance implications of memory access; memory coalescing. Matrix multiplication and 9-point stencil computation using shared memory.
Session 5 (Lecture + Lab): Reduction algorithms and fast matrix transposition using CUDA.
Session 6 (Lecture + Lab): Advanced CUDA concepts — streams, events, and asynchronous execution; use of CUDA libraries (cuBLAS, cuSOLVER); integrating GPU computations into larger applications. Small application using CUDA libraries.
Objectifs d'apprentissage
Learning objectives:
At the end of this course, students will be able to:
Understand the architecture and execution model of modern GPUs.
Explain the CUDA programming model and its main components (kernels, threads, blocks, grids).
Develop and execute GPU programs using CUDA C/C++.
Efficiently manage GPU memory and data transfers between host and device.
Apply optimization strategies to improve GPU program performance.
Design and implement small-scale applications leveraging GPU acceleration.
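As a GPU-free illustration of the grid/block/thread indexing scheme from the early sessions, here is a pure-Python sketch (the helper names are invented; real CUDA kernels run in C/C++) that emulates a 1D kernel launch sequentially:

```python
# Emulate kernel<<<grid_dim, block_dim>>>(*args) by iterating over all threads.
def launch_1d(kernel, grid_dim, block_dim, *args):
    for block_idx in range(grid_dim):
        for thread_idx in range(block_dim):
            kernel(block_idx, block_dim, thread_idx, *args)

def vector_add(block_idx, block_dim, thread_idx, a, b, out):
    i = block_idx * block_dim + thread_idx   # the CUDA global thread index
    if i < len(out):                          # guard: the grid may overshoot n
        out[i] = a[i] + b[i]

n = 10
a = list(range(n))
b = [10 * v for v in a]
out = [0] * n
# 4 threads per block -> ceil(10/4) = 3 blocks cover all 10 elements.
launch_1d(vector_add, (n + 3) // 4, 4, a, b, out)
print(out)   # [0, 11, 22, 33, 44, 55, 66, 77, 88, 99]
```

The bounds guard mirrors the standard `if (i < n)` check in CUDA kernels, needed because the number of launched threads is rounded up to a whole number of blocks.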
General course organization and teaching modalities:
The course combines lectures introducing theoretical concepts with hands-on programming labs that allow students to apply and test these concepts in practice.
Each 3.5-hour session blends approximately 50% lecture and 50% programming exercises.
Evaluation is continuous, based on multiple written and programming assignments distributed throughout the course. Students will use CUDA on a GPU-equipped computing environment to complete lab work and assignments.
Competencies gained in the course:
Upon successful completion, students will have acquired the following competencies:
Technical competence: Ability to design and implement high-performance programs using CUDA for parallel computation.
Analytical competence: Capacity to analyze performance bottlenecks, apply profiling tools, and optimize GPU-based solutions.
Practical competence: Proficiency in using GPU development environments, compilers, and debugging/profiling utilities.
Problem-solving competence: Ability to identify problems suitable for GPU acceleration and implement efficient solutions.
Transferable skills: Experience working with large-scale computational problems, performance tuning, and structured experimentation.
Bibliography:
Sanders, J., & Kandrot, E. CUDA by Example: An Introduction to General-Purpose GPU Programming. Addison-Wesley, 2010.
Kirk, D. B., & Hwu, W. W. Programming Massively Parallel Processors: A Hands-on Approach. Morgan Kaufmann, 3rd Edition, 2016.
NVIDIA Corporation. CUDA C Programming Guide. (latest version available online at developer.nvidia.com/cuda-zone)
Farber, R. Parallel Programming with OpenACC. Morgan Kaufmann, 2016 (for comparison with directive-based models).
[DKAI] Distributed Query Processing and Optimization
Semester: 2
Hours breakdown:
Lecture: 12
Practical study: 9
Language of instruction: English
Distance learning: No
Course program, plan, content:
By the end of this course, students will be able to:
- Understand the principles and architecture of modern systems for massive data processing.
- Explain the internal components of a relational DBMS, including buffer management, indexing, operator algorithms, and query evaluation plans.
- Apply techniques for query evaluation and optimization in SQL-based systems.
- Analyze how NoSQL systems manage, store, and process large-scale data.
- Describe the architecture and functionalities of distributed data systems such as Apache Hadoop and Apache Spark.
- Gain hands-on experience through practical lab sessions on data management and processing at scale.
Learning objectives:
This course provides the foundations to understand and efficiently use systems that process massive data. It covers both relational Database Management Systems (DBMS) and distributed NoSQL systems, for which we study query evaluation and optimization techniques as well as techniques that allow these systems to scale data processing. The first part of the course covers the core of SQL relational query optimization: the functionalities of a DBMS, buffer management, indexes, algorithms for operators, and query evaluation plans. The second part analyzes how NoSQL systems scale data management, storage, and computation. We study the architecture, data structures, formats, and user interfaces of systems such as Apache Hadoop and Spark. Practical labs on machines will represent roughly one third of the course slots.
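To make the operator-algorithm part concrete, here is a toy Python sketch (illustrative only, not course code; table and column names are invented) of a hash join, one of the classic operator algorithms behind query evaluation plans:

```python
def hash_join(left, right, left_key, right_key):
    """Join two lists of row-dicts on the given keys (build on left, probe right)."""
    # Build phase: hash table on the (usually smaller) left input.
    table = {}
    for row in left:
        table.setdefault(row[left_key], []).append(row)
    # Probe phase: stream the right input once, emitting matching pairs.
    for row in right:
        for match in table.get(row[right_key], []):
            yield {**match, **row}

users = [{"uid": 1, "name": "Ada"}, {"uid": 2, "name": "Alan"}]
orders = [{"uid": 1, "item": "book"}, {"uid": 1, "item": "pen"}, {"uid": 3, "item": "mug"}]
result = list(hash_join(users, orders, "uid", "uid"))
print(result)   # two joined rows: Ada/book and Ada/pen
```

The build-then-probe structure is why a query optimizer prefers to hash the smaller input: the cost is one pass over each relation plus the memory to hold the build side.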
Session 3 (Lecture + TD): Phase-free normal forms and CSS states — Different normal forms, reduction, CSS stabilisers to/from ZX state
Session 4 (Lecture + TD): Clifford ZX & Clifford normal form — new generators, new axioms, Clifford circuits to ZX translation, graph-like form, normal form, reduction, Gottesman-Knill theorem
Session 5 (Lecture + TD): Clifford ZX & stabilisers — stabiliser groups, simplification of stabiliser groups, generation of Clifford ZX state
Session 6 (Lecture + TD): Full ZX and measurements — complete set of generators, universality, circuits to ZX translation, measurements with variables, measurements as channels, verification
Learning objectives:
At the end of this course, students will be able to:
Compute the interpretation of a ZX-diagram
Turn circuits into ZX-diagrams / express quantum protocols as ZX-diagrams
Perform rewriting, simplify a diagram
Put a diagram in (phase-free/Clifford) normal form
Express a CSS/stabiliser state as a ZX-diagram (from its stabilisers)
Express channels as ZX-diagrams
General course organization and teaching modalities:
The course combines lectures introducing theoretical concepts with exercises that allow students to apply and test these concepts.
Each 3.5-hour session blends approximately 60% lecture and 40% exercises.
Evaluation is done through a final exam.
Competencies gained in the course:
Upon successful completion, students will have acquired the following competencies:
Technical competence: Ability to use ZX-calculus to tackle different quantum-related problems, such as state synthesis, verification, and circuit optimisation.
Analytical competence: Capacity to analyse quantum protocols through the diagram’s interpretation and simplification, to show or disprove equivalence of protocols, etc.
Practical competence: Proficiency in using ZX-calculus simplifications to establish properties of a quantum program.
Problem-solving competence: Ability to identify how and where ZX-calculus can be used to address a quantum-related issue.
Transferable skills: Experience working with a formal graphical framework, as well as rewrite systems for optimisation purposes. Analysis of quantum protocols.
Bibliography:
John van de Wetering. 2020. ZX-calculus for the working quantum computer scientist. Retrieved from https://arxiv.org/abs/2012.13966
Aleks Kissinger and John van de Wetering. 2024. Picturing Quantum Software. Preprint. Retrieved from https://zxcalc.github.io/book/
This course aims at mastering the core concepts of algorithmic design in ML, from an optimization or a probabilistic point of view, using supervised and unsupervised algorithms.
1. Regression/classification seen in optimization and probabilistic frameworks; implications for batch and stochastic gradient descent
2. Learning theory and the Vapnik–Chervonenkis dimension
3. Evaluating the performance of ML algorithms in different contexts (imbalanced, small-sized, etc.)
4. Probabilistic framework for machine learning: Discriminative vs Generative learning, Empirical Risk Minimization, Risk Decomposition, Bias-Variance Tradeoff; Maximum Likelihood Estimation (MLE), MLE and OLS in regression, MLE and IRLS in softmax classification
5. Unsupervised Learning and Clustering: K-means, Mixture Models, EM algorithms, etc.
6. Unsupervised Learning and Dimensionality Reduction: PCA, Probabilistic PCA & EM, ICA, etc.
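The batch versus stochastic gradient descent contrast in item 1 can be sketched as follows (a hedged toy example on synthetic data; sizes, learning rates, and iteration counts are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.01 * rng.standard_normal(200)   # linear model + small noise

def batch_gd(X, y, lr=0.1, steps=500):
    """Full-batch gradient descent on the mean squared error."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)   # exact gradient over all samples
        w -= lr * grad
    return w

def sgd(X, y, lr=0.01, epochs=50):
    """Stochastic gradient descent: one randomly ordered sample per update."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for i in rng.permutation(len(y)):
            w -= lr * (X[i] @ w - y[i]) * X[i]
    return w

w_batch, w_sgd = batch_gd(X, y), sgd(X, y)
print(np.round(w_batch, 2), np.round(w_sgd, 2))
```

Both recover weights close to the true ones here; the practical trade-off is that each SGD update is cheap but noisy, while each batch update is exact but costs a full pass over the data.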
Recommended reading: An Introduction to Statistical Learning (James, Witten, Hastie & Tibshirani, 2013); Pattern Classification (Duda, Hart & Stork, 2000); Apprentissage artificiel : concepts et algorithmes (Cornuéjols & Miclet, 2011).
5 hours of lectures on direct and indirect impacts, and several tutorials including an IT equipment inventory, the analysis of CSR reports from major digital groups, assessment of the specific impacts of AI, and a poster session based on scientific articles on the subject.
Learning objectives:
During this course, students will explore the topic of environmental damage linked to digital technology and various ways to assess and limit this damage. The aim is for attendees to subsequently be able to think critically as computer engineers, to know how to assess the impact of their hardware purchases and software developments, and to know how to reduce this impact.