New technologies, science, transport, energy and public health: the use of data has become central to many sectors, and it raises important legal and ethical questions.
“Vast quantities of data are used everywhere today,” explains Balazs Kégl, who heads the Centre for Data Science (CDS) of the University Paris-Saclay. “And their use is not limited to any one field in particular.” The CDS itself is proof of this: its researchers refine machine learning algorithms so that computers can learn to process data from fields as varied as health, law and economics. There are no sectoral restrictions in artificial intelligence!
Tracking the propagation of viruses
The work of Nicolas Vayatis, a member of the Machine Learning and Massive Data Analysis (MLMDA) research group at the École normale supérieure Paris-Saclay, involves, for example, the quantification of human behaviour: sensors placed on a subject during a clinical consultation record posture and movements in order to assess well-being and mobility, which are indicators of the evolution of a disease. “This technology opens the way for preventive measures in the medical field. Monitoring a certain number of physiological markers will enable us to predict the potential risk incurred by the subject,” illustrates the researcher. Nicolas Vayatis is also interested in the surveillance of systems in, for example, the transport or energy sector, as well as in propagation on networks. “There are three types of application,” he adds. “The first concerns public health, where what spreads is a virus, and the aim is to devise the best strategy to fight it. The second pertains to rumours in information networks and aims to determine how information diffuses and which control strategies need to be set up. The third, in the transport sector, consists in anticipating a delay or congestion and preventing it from spreading.”
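The propagation scenarios Nicolas Vayatis describes can be explored with a toy model. The sketch below is a minimal, hypothetical SIR-style (susceptible/infected/recovered) simulation on a network; the graph representation, probabilities and function names are illustrative assumptions, not the MLMDA group's actual models:

```python
import random

def simulate_sir(adjacency, patient_zero, p_transmit=0.3, p_recover=0.1,
                 steps=50, seed=42):
    """Toy SIR spread on a graph given as an adjacency dict {node: [neighbours]}.

    Each step, every infected node transmits to each susceptible neighbour
    with probability p_transmit, and recovers with probability p_recover.
    Returns the final sets of infected and recovered nodes.
    """
    random.seed(seed)
    infected = {patient_zero}
    recovered = set()
    for _ in range(steps):
        newly_infected = set()
        newly_recovered = set()
        for node in infected:
            for neighbour in adjacency[node]:
                if neighbour not in infected and neighbour not in recovered:
                    if random.random() < p_transmit:
                        newly_infected.add(neighbour)
            if random.random() < p_recover:
                newly_recovered.add(node)
        infected = (infected | newly_infected) - newly_recovered
        recovered |= newly_recovered
    return infected, recovered
```

On such a simulation one can compare containment strategies, for instance by measuring how the final outbreak size changes when high-degree nodes are vaccinated (removed from the graph) before seeding the infection.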
Making drones autonomous
Jean-Loup Farges also works in the transport sector, but in its aerial form: he is Head of the “Artificial intelligence and decision” thematic unit at the National Office for Aerospace Studies and Research (Onera). “Currently, drones have enough navigation autonomy to get from a point A to a point B,” explains the researcher. “We are, however, trying to reach a higher level of autonomy, so that they can determine their own navigation according to general goals, such as the observation of a given area.” Researchers face the problem of system reliability: “The algorithms concerned are too large to be verified exhaustively. They are derived from a generic problem, and checking every instantiation is impossible.” The approach being considered today consists in letting the algorithm run while planning a fallback solution in case of failure.
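The fallback strategy described above can be sketched as a simple supervisor that accepts the planner's output only when it passes a verified safety check, and otherwise reverts to a pre-verified behaviour. Everything here (the safe area, the return-to-base fallback, the function names) is a hypothetical illustration, not Onera's actual architecture:

```python
# Hypothetical bounding box the drone is allowed to fly in: (x, y) limits.
SAFE_AREA = ((0.0, 100.0), (0.0, 100.0))

def in_safe_area(pos):
    """Safety check small enough to be verified exhaustively."""
    (xmin, xmax), (ymin, ymax) = SAFE_AREA
    x, y = pos
    return xmin <= x <= xmax and ymin <= y <= ymax

def return_to_base(pos):
    """Pre-verified fallback: head back to the base at the origin."""
    return (0.0, 0.0)

def execute_step(planner, pos):
    """Run one step of an unverifiable planner; fall back when the planner
    crashes or proposes a waypoint outside the safe area."""
    try:
        target = planner(pos)
    except Exception:
        return return_to_base(pos)
    if not in_safe_area(target):
        return return_to_base(pos)
    return target
```

The design point is that only the small supervisor and the fallback need full verification; the complex planning algorithm itself may remain unchecked.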
This kind of obstacle shows that artificial intelligence has not yet delivered the Holy Grail that some speeches promise. “Artificial intelligence does not mean autonomous intelligence,” summarizes Balazs Kégl. Nicolas Vayatis adds: “People sometimes believe that magical algorithms will be able to predict the future and replace humans. In reality, they will only automate part of a process that was previously done manually. They are just bricks incorporated into a decision chain.”
To counter these claims, Laurence Devillers, a researcher at the Computer Sciences Laboratory for Mechanics and Engineering Sciences (Limsi) at the CNRS, studies the ethics of artificial intelligence. She reminds us that robots do not have emotions; they only analyse human emotions and simulate a reaction. This capability builds on her own work on the recognition of human emotions by machines, ongoing since 1990. “In parallel, I have steered my research towards the ethical dimensions of what we are doing,” underlines the researcher.
Anticipating the digital revolution
To forestall such problems, Laurence Devillers has taken an interest in ‘nudging', an emerging technique that exploits cognitive biases in consumer behaviour to steer choices. Experiments show, for instance, that a consumer tends to favour the option that does not require ticking a box. “With economists and jurists at Paris-Saclay, we wish to create a communication system that subtly influences choices. It will ‘nudge', and we will then study the influence we can have on human behaviour,” reveals Laurence Devillers. “We wish to understand this tool so that, when it arrives in Europe from the United States or Asia, we are able to alert people and ask them to take responsibility.” She concludes: “People mustn't be afraid of artificial intelligence, but it needs to be understood, explained and taught.”
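The default-option bias mentioned above (favouring the choice that requires no action) can be illustrated with a toy simulation. The population size, override probability and function names below are illustrative assumptions, not the Paris-Saclay team's experimental setup:

```python
import random

def simulate_choices(n_users, default_opt_in, p_override=0.15, seed=0):
    """Toy model of a checkbox nudge: each simulated user keeps the default
    choice unless they actively override it, which happens only with small
    probability p_override. Returns the fraction of users who end up opted in.
    """
    random.seed(seed)
    opted_in = 0
    for _ in range(n_users):
        choice = default_opt_in
        if random.random() < p_override:
            choice = not choice          # the rare user who ticks/unticks the box
        opted_in += choice
    return opted_in / n_users
```

Flipping the default from opt-out to opt-in moves the acceptance rate from roughly `p_override` to roughly `1 - p_override`: the very lever a nudge pulls, which is why its designers bear the responsibility Devillers describes.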
∙ A. Boche et al., “Reconfiguration Control Method for Faulty Actuator on UAV”, Advances in Aerospace Guidance, Navigation and Control, Springer, Cham, 2018.
∙ M. Tahon, L. Devillers, “Towards a Small Set of Robust Acoustic Features for Emotion Recognition: Challenges”, IEEE Transactions on Audio, Speech and Language Processing, vol. 24, 2016.
∙ A. Kalogeratos et al., “Information Diffusion and Rumor Spreading” (Chapter 24), Cooperative and Graph Signal Processing: Principles and Applications, 1st edition, June 2018.
"Developing artificial intelligence commercially and behaving ethically are not incompatible, as the Villani report has put forth. We must continue down this path."
Laurence Devillers is a computer scientist, Professor at Sorbonne University and a member of the Computer Sciences Laboratory for Mechanics and Engineering Sciences of the CNRS at Paris-Saclay. She started her career working on continuous speech recognition systems, then on human-machine dialogue and its evaluation. She went on to study the recognition of human emotions by machines and the ethical questions specific to this research.