
24.09.20 DATAIA Seminar | « Solving inverse problems with invertible neural networks » - Ulrich Köthe


Ulrich Köthe (Heidelberg University) hosts the September DATAIA seminar, on the theme « Solving inverse problems with invertible neural networks ».

Abstract:
Interpretable models are a hot topic in neural network research. This talk will focus on inverse problems, where one wants to infer backwards from observations to the hidden characteristics of a system. I will focus on three aspects of interpretability: reliable uncertainty quantification, outlier detection, and disentanglement into meaningful features. It turns out that invertible neural networks -- networks that work equally well in the forward and inverse direction -- are great tools for that kind of analysis: They act as non-linear generalizations of classical methods like PCA and ICA. Examples from physics, medicine, and computer vision demonstrate the practical utility of the new method.
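To make the central property concrete, here is a minimal NumPy sketch of an affine coupling layer, the standard building block of RealNVP-style invertible networks of the kind the talk discusses. This is an illustrative toy, not code from the speaker: the class name, the fixed random linear conditioner, and all parameters are assumptions chosen only to show why the transform can be inverted exactly in closed form.

```python
import numpy as np

class AffineCoupling:
    """Toy affine coupling layer (illustrative sketch).

    The input is split in two halves: the first half passes through
    unchanged and parameterises an elementwise scale/shift of the
    second half, so the map is exactly invertible by construction.
    """

    def __init__(self, dim, seed=0):
        rng = np.random.default_rng(seed)
        half = dim // 2
        # Toy "conditioner": fixed random linear maps producing a
        # log-scale s and a shift t from the untouched half. A real
        # network would use learned nonlinear subnetworks here.
        self.Ws = 0.1 * rng.standard_normal((half, half))
        self.Wt = 0.1 * rng.standard_normal((half, half))

    def forward(self, x):
        x1, x2 = np.split(x, 2)
        s, t = self.Ws @ x1, self.Wt @ x1
        y2 = x2 * np.exp(s) + t          # elementwise affine transform
        return np.concatenate([x1, y2])

    def inverse(self, y):
        # Because y1 == x1, we can recompute s and t and undo the
        # affine map exactly -- no iterative solver needed.
        y1, y2 = np.split(y, 2)
        s, t = self.Ws @ y1, self.Wt @ y1
        x2 = (y2 - t) * np.exp(-s)
        return np.concatenate([y1, x2])

layer = AffineCoupling(dim=4)
x = np.array([0.5, -1.0, 2.0, 0.3])
x_rec = layer.inverse(layer.forward(x))
assert np.allclose(x, x_rec)  # forward followed by inverse recovers the input
```

Stacking such layers (with the roles of the two halves alternating) yields a network that, like the ones in the talk, runs equally well in the forward and inverse direction.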

The seminar will take place online. Click here to access the webinar.

Themes: Doctorate, Education, Research

As part of its scientific outreach, the DATAIA Institute organizes monthly seminars to foster exchanges around AI.

  • Public
    Restricted to certain audiences
  • Event type
    Conference / seminar
  • Conditions
    Registration is free but mandatory
  • Dates
    Thursday 24 September 2020, 3:00 pm - 4:00 pm
  • Location
    Online (from home)
