Published on 16 February 2018
Research
Artificial intelligence

Artificial intelligence arouses all kinds of reactions, from worries to fantasies, from industrial appetites to academic vocations. To get a better understanding of the situation, here is a glance at AI research activities at Université Paris-Saclay.

What is artificial intelligence?

Elon Musk's and Stephen Hawking's declarations fuel the international debate; Cédric Villani runs a parliamentary commission on the subject; almost all of the web and technology giants (Microsoft, IBM, Facebook, Alphabet, Apple, Fujitsu, Huawei...) are building AI laboratories and research centers; and institutes entirely dedicated to the subject are being created all over the world (LCFI in Cambridge, Swiss AI Lab in Lugano, Vector Institute for AI in Toronto, IAI in Bremen...). Artificial intelligence is definitely a rapidly changing field.

Artificial intelligence is a general term covering very different and widely multidisciplinary subjects. Its field of study ranges from philosophy to applied mathematics. Applications are emerging in many domains: medicine, robotics, automatic translation, autonomous driving, image and sound recognition, video games, decision support... Here is a closer look at four research activities in artificial intelligence.

Machine learning and optimization

Marc Schönauer and Michèle Sebag were pioneers when they founded the TAO project-team (Thème apprentissage et optimisation) in 2004, gathering researchers from the Laboratoire de recherche en informatique (LRI, Université Paris-Sud), from the CNRS and from Inria Saclay. The team worked on the links between machine learning and optimization.

In 2017, this team gave birth to two new teams: RandOpt (Randomized Optimization) and TAU (Tackling the Underspecified).

The RandOpt team at Inria Saclay, directed by Anne Auger, focuses on optimization problems in all their forms. These are very generic questions, with applications in path-finding, photovoltaic panel design, or train route management.

The team develops mathematical tools and algorithms for optimization in high dimensions (problems with many variables), optimization under constraints, and multi-objective optimization (cases where several objectives pull in opposite directions, such as minimizing the price of a product while maximizing its quality).
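As a purely illustrative sketch (not the RandOpt team's actual algorithms), the price/quality trade-off can be explored with a simple random search that keeps only the non-dominated candidates, the so-called Pareto front; the `objectives` function below is a made-up toy model:

```python
import random

def objectives(x):
    """Toy trade-off: a higher-grade design x raises quality but also price."""
    price = 10 + 5 * x          # to be minimized
    quality = 1 - (x - 1) ** 2  # to be maximized
    return price, quality

def dominates(a, b):
    """a dominates b if it is at least as good on both objectives
    and strictly better on one (lower price, higher quality)."""
    return (a[0] <= b[0] and a[1] >= b[1]) and (a[0] < b[0] or a[1] > b[1])

def pareto_front(points):
    """Keep only the points not dominated by any other candidate."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

random.seed(0)
candidates = [objectives(random.uniform(0, 2)) for _ in range(200)]
front = pareto_front(candidates)
```

Instead of a single "best" answer, the result is a whole set of trade-offs, and a human decision-maker picks the point on the front that suits their priorities.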

Furthermore, the members of the RandOpt team have developed the COCO (Comparing Continuous Optimizers) platform, which allows standardized and automated testing of optimization algorithms. This makes it much simpler to evaluate and compare different algorithms, and to choose the best one for a particular application.

Chess and Go are done, Bridge is next

Research in AI has been closely tied to games since its beginnings, because games are simple environments with clear rules and objectives that allow a simple, quantitative evaluation of an AI's abilities. Playing games also makes comparison with humans much easier, and this comparison is a traditional landmark in the field of AI.

Since DeepMind's algorithms outplayed the best human Go players, AI researchers have turned to other games in which humans are still far ahead. Bridge is one of them.

A team of researchers from the LRI and the Laboratoire de mathématique d'Orsay (LMO) has upgraded an existing bridge-playing program by optimizing its randomness. This program uses random numbers to evaluate the quality of a position or of a choice, and these random numbers are generated from another number, called a seed.

The researchers' idea is to test multiple seeds beforehand and keep the one that gives the best results overall. This methodology had already been applied to several other games, but this is the first time it has been applied to bridge.
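A minimal sketch of the idea, assuming a hypothetical `play_match` function that stands in for the bridge program's benchmark score:

```python
import random

def play_match(seed):
    """Hypothetical stand-in for the bridge engine: its score over a batch
    of test deals depends on the seed driving its internal random choices."""
    rng = random.Random(seed)
    return sum(rng.random() for _ in range(100))  # higher is better

# Offline phase: try many candidate seeds and keep the best performer.
candidate_seeds = range(50)
best_seed = max(candidate_seeds, key=play_match)

# The shipped program then always draws its random numbers from best_seed.
```

The key point is that the seed selection happens entirely before any real match: the program itself is unchanged, only its source of randomness is tuned.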

This bridge-playing program remains inferior to human players, but it managed to beat every other program and won the 2016 World Computer Bridge Championship.

The era of Big data

The emergence of Big data is a strong theme in AI, in particular for the TAU team (at the LRI), whose research program deals with all aspects of the Big data paradigm. This program is structured around three major questions.

The first question is the actual relevance for humans of the interpretation and processing of data by an AI. The second is the difficulty of administering, controlling and efficiently using a perpetually changing system (a dataset or an AI). Finally, the ethical problems raised by the Big data paradigm force us to better define the balance between security, liberty and efficiency.

These ethical considerations are at the core of the work of Antonio A. Casilli (Télécom-ParisTech) and Paola Tubaro (LRI). They consider that the quantity of stored data and the new analysis tools give governments and private companies the technical means to carry out general surveillance, stigmatization and mass censorship. In a nutshell, they fear the emergence of a state-run, or even privately owned, Big Brother.

They also warn against the exploitation of the precarious workers who build the huge datasets used to train artificial intelligences.

Giving sight to the machines

Created in 2011, the Centre de la vision numérique (CVN) at CentraleSupélec aims to develop an equivalent of sight for computers, that is, to give them the ability to process, structure, interpret and understand massive amounts of visual data.

The applications are of course numerous, from robotics to autonomous driving, or even to the detection of tumors in medical images.

(Gradual improvement of archive image quality through noise deconvolution: application of the method to enhance the quality of an archive video)

One of the difficulties of the automatic processing of videos comes from the poor quality of the video itself. It can be blurry for multiple reasons: movement of the camera or of the subject, bad focus, poor digital compression...

Researchers from the CVN have developed a new technique that deconvolves the noise from the actual image, thus increasing the sharpness of the video. This makes it easier for a machine to process and interpret the video.
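The paper cited below describes an alternating proximal approach to blind deconvolution, which is far more sophisticated than what fits here. As a toy illustration of the general principle only, here is a 1-D Wiener-style deconvolution where, unlike the blind setting, the blur kernel is assumed to be known:

```python
import numpy as np

# Toy 1-D "image": a sharp plateau.
signal = np.zeros(64)
signal[20:40] = 1.0

# Known blur kernel (moving average), applied by circular convolution
# in the frequency domain.
kernel = np.zeros(64)
kernel[:5] = 1 / 5
blurred = np.real(np.fft.ifft(np.fft.fft(signal) * np.fft.fft(kernel)))

# Wiener-style deconvolution: divide by the kernel in the frequency
# domain, with a small regularization term eps to avoid amplifying
# noise at frequencies where the kernel response is weak.
K = np.fft.fft(kernel)
eps = 1e-4
restored = np.real(
    np.fft.ifft(np.fft.fft(blurred) * np.conj(K) / (np.abs(K) ** 2 + eps))
)
```

With a known kernel and no noise this nearly recovers the original edges; real archive footage is much harder because both the blur and the noise are unknown, which is exactly the "blind" problem the CVN researchers tackle.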

In other words, when a machine's vision is blurry, you can now fit it with a pair of glasses!

Further reading:

« COCO: A Platform for Comparing Continuous Optimizers in a Black-Box Setting »

Nikolaus Hansen, Anne Auger, Olaf Mersmann, Tea Tusar, Dimo Brockhoff

 

« Boosting a Bridge Artificial Intelligence »

Veronique Ventos, Yves Costel, Olivier Teytaud, Solène Thépaut Ventos

 

« An Alternating Proximal Approach for Blind Video Deconvolution »

Feriel Abboud, Emilie Chouzenoux, Jean-Christophe Pesquet, Jean-Hugues Chenot, Louis Laborelli

 

« Enjeux sociaux des Big Data »

Paola Tubaro, Antonio Casilli