
Architects are listening

Research article published on 31 March 2021, updated on 31 March 2021

How can you hear how musicians sound, or listen to ambient noise, in a virtual space? How can you understand the acoustics of a room that does not yet exist? These are some of the questions at the forefront of new auralization technology, which aims to provide new tools for architects and acousticians.

 

An orchestra, ambient noise, people talking... these are the situations an architect wants to be able to hear before finalising a project. In recent years, tremendous advances have been made in virtual reality, from which architecture and research could benefit greatly. “In an architectural project, spaces are visualised in advance of the development stage with the help of 3D models. As regards acoustics, the space can undergo a process of auralization in the same way, i.e. its acoustics can be simulated to test various configurations and to help decide on design choices,” explains David Théry, a former PhD student at LIMSI whose thesis focused on the reliability of auralization technology. (LIMSI merged with the LRI on 1 January 2021 to become the Laboratoire interdisciplinaire des sciences du numérique, or Interdisciplinary Laboratory for Digital Sciences, LISN: Université Paris-Saclay, CNRS, CentraleSupélec, Inria.)

 

Tried and tested technology

While the option of hearing the acoustics of a room before it has been built seems particularly suited to architects, this technology is not yet widely used in architectural projects. It has been used more widely in research, for the study of auditory perception or the archaeological reconstruction of historical sites. For example, researcher Brian Katz of the Institut Jean Le Rond d'Alembert (CNRS, Sorbonne University) is currently leading a project to reconstruct the Notre-Dame cathedral in Paris in a way that best restores the building's acoustics.

This technology requires professional skills and expertise, but internal resources are sometimes lacking. “At present, when an acoustician takes part in an architectural project, he or she creates a model and provides objective parameters such as the reverberation time, clarity indices and other, more spatial, indices based on the correlations between the signals arriving at the listener's two ears and those which define the sound field,” explains David Théry. “The acoustician then presents an analysis of the results to the client, who must trust them without being able to experience the finished product.”

This is where auralization comes in. Thanks to auralization, the acoustician can place the client in a situation where they hear the acoustic performance of the architectural project in question in different configurations. In other words, auralization allows the client to visualise the space acoustically.

 

From real spaces to virtual spaces

Different 3D and omnidirectional microphones and dummy heads, Cité de la Musique, Paris

In order to check the reliability of this technique, David Théry worked with acoustic design offices on how sound behaves in existing spaces such as the Nouveau Siècle concert hall in Lille and the Cité de la Musique in Paris. This study of sound behaviour uses impulse responses - the sound recorded in a space after a hand clap, for example. The direct sound is the first to reach the receiver, followed by early reflections scattered by the walls, and then the so-called diffuse sound resulting from multiple, denser reflections. This response is the space's acoustic fingerprint and establishes the objective parameters, such as the reverberation time - the characteristic time over which sound decays - and the clarity indices.
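The reverberation time mentioned here can be extracted from an impulse response numerically. The sketch below (not from the article) applies the standard Schroeder backward-integration method to a synthetic impulse response; the sampling rate, the -5 dB to -25 dB fitting limits and the synthetic decaying-noise response are illustrative assumptions.

```python
import numpy as np

def rt60_from_impulse_response(ir, fs):
    """Estimate the reverberation time (RT60) via Schroeder backward integration.

    Integrates the squared impulse response from the end backwards to obtain
    the energy decay curve, fits a line to its -5 dB..-25 dB portion, and
    extrapolates to a 60 dB decay.
    """
    energy = np.cumsum(ir[::-1] ** 2)[::-1]        # Schroeder integral
    edc_db = 10 * np.log10(energy / energy[0])     # energy decay curve in dB
    t = np.arange(len(ir)) / fs
    mask = (edc_db <= -5) & (edc_db >= -25)        # fit region of the decay
    slope, _ = np.polyfit(t[mask], edc_db[mask], 1)
    return -60.0 / slope                           # time to decay by 60 dB

# Synthetic impulse response: exponentially decaying noise with RT60 ~ 1.5 s
fs = 16000
rt_true = 1.5
t = np.arange(int(rt_true * fs)) / fs
decay = np.exp(-6.91 * t / rt_true)                # 60 dB amplitude decay over rt_true
ir = np.random.default_rng(0).standard_normal(len(t)) * decay

print(round(rt60_from_impulse_response(ir, fs), 2))
```

On this synthetic response the estimate comes back close to the 1.5 s used to build it; on a real measured response the same procedure yields the room's reverberation time.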

Based on the geometry of the space and the absorption and scattering properties of the walls, this impulse response can also be simulated, giving a virtual room its acoustics. However, because of the simplifying hypotheses required for simulation - sound is treated as a ray rather than as a wave - some parameters must be calibrated against the measured responses. “The aim is to check whether the differences are still noticeable. When two auralizations are said to be identical, it is because their differences are below a certain threshold of perception,” points out David Théry.
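The "threshold of perception" invoked here is usually expressed as a just-noticeable difference (JND); for reverberation time, ISO 3382-1 quotes a JND of roughly 5 %. A minimal sketch of such a calibration check - the figures are hypothetical - could look like this:

```python
def within_jnd(measured, simulated, jnd_fraction=0.05):
    """Is a simulated parameter perceptually identical to the measured one?

    ISO 3382-1 gives a just-noticeable difference of about 5 % for
    reverberation time; below that threshold, two auralizations should be
    indistinguishable on this parameter.
    """
    return abs(simulated - measured) / measured <= jnd_fraction

print(within_jnd(1.80, 1.86))   # 3.3 % difference -> True (imperceptible)
print(within_jnd(1.80, 2.00))   # 11 % difference  -> False (audible)
```

In practice each objective parameter (reverberation time, clarity, spatial indices) has its own JND, and the simulation is tuned until all of them fall below threshold.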

Once the acoustic signature of the space has been determined, whether measured or simulated, speech or music can be played through it via a convolution between these signals and the response of the room. The recordings are made in what are known as anechoic chambers - spaces devoid of any reverberation thanks to the foam sound panels lining their walls. The acoustician can then let the architect who designed the space listen to this sound. Further layers of complexity can be added at this calculation phase to increase realism - for example, changes in listening direction driven by tracking the listener's head movements, which gives the impression in virtual reality of navigating the space. The architect can then learn more about a project by listening to musicians, people talking or any other sound situation.
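The convolution step itself is simple to express in code. In the sketch below, a synthetic click train stands in for an anechoic recording and a synthetic decaying-noise tail stands in for a measured impulse response; both are assumptions for illustration.

```python
import numpy as np

def auralize(anechoic, ir):
    """Convolve a dry (anechoic) recording with a room impulse response.

    The result is the recording "as heard" in the room the response was
    measured or simulated in. np.convolve is O(N*M); for long signals an
    FFT-based convolution (e.g. scipy.signal.fftconvolve) is usual.
    """
    wet = np.convolve(anechoic, ir)
    return wet / np.max(np.abs(wet))   # normalise to avoid clipping

fs = 8000
dry = np.zeros(fs)                     # 1 s of silence...
dry[::2000] = 1.0                      # ...with a click every 0.25 s
t = np.arange(fs // 2) / fs
ir = np.exp(-6.91 * t / 0.4) * np.random.default_rng(1).standard_normal(len(t))

wet = auralize(dry, ir)
# The output is longer than the dry signal: the reverberant tail of the
# last click extends len(ir) - 1 samples past the end of the recording.
```

Head tracking adds a further step: the impulse response is swapped or interpolated in real time as the listener's orientation changes, so the convolution always reflects the current listening direction.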

For example, David Théry worked with the Theatre Projects design office on the design of a multi-purpose atrium hosting orchestras as well as banquets and conference receptions.

 

From the room to the headphones

An anechoic chamber

Another point addressed by David Théry is that the same auralization must remain consistent across the systems used to render it. This is because auditory perception involves a large number of subjective attributes, which the young researcher's work reduced to six key ones, including reverberation, envelopment and perceived distance. “Subjective attributes allow the subjective evaluation of the simulation. Reverberation times, clarity indices and interaural correlation represent the objective part. The aim is to make the links between what is measurable and what people perceive. Typically, a long reverberation time creates a blurring - not visually but audibly.”
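The interaural correlation cited among the objective parameters is typically computed as the peak of the normalised cross-correlation between the two ear signals within about ±1 ms of lag. A minimal sketch on synthetic signals - the ±1 ms window and the test signals are assumptions, not the article's data:

```python
import numpy as np

def iacc(left, right, fs, max_lag_ms=1.0):
    """Interaural cross-correlation coefficient (IACC).

    Peak of the normalised cross-correlation between the left- and
    right-ear signals within +/- max_lag_ms of lag. Values near 1 mean the
    two ear signals are nearly identical; values near 0 mean they are
    decorrelated, which is associated with a strong sense of envelopment.
    """
    max_lag = int(max_lag_ms * 1e-3 * fs)
    norm = np.sqrt(np.sum(left ** 2) * np.sum(right ** 2))
    full = np.correlate(left, right, mode="full") / norm
    centre = len(left) - 1                         # zero-lag index
    return np.max(np.abs(full[centre - max_lag:centre + max_lag + 1]))

fs = 48000
rng = np.random.default_rng(2)
sig = rng.standard_normal(fs // 10)
print(round(iacc(sig, sig, fs), 2))                            # identical ears -> 1.0
print(iacc(sig, rng.standard_normal(fs // 10), fs))            # independent noise -> near 0
```

Linking such measurable quantities to the perceived attributes - envelopment, distance, reverberance - is precisely the mapping the thesis set out to establish.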

David Théry brought together a panel of listeners covering a wide range of expertise, from acousticians to potential clients, to test the perception of auralizations either on a 3D loudspeaker array, known as ambisonic, such as those in the projection spaces at LISN (formerly LIMSI), or binaurally on simple headphones. This work concluded that the perceptual attributes were relatively stable across these different situations and devices, noting however that the rendering system did affect the attributes related to sound spatialisation. Using auralization on more user-friendly and ergonomic devices opens it up in a more tangible way to industrial applications.

Today, David Théry, a founder of the Opera Sound & Space company, continues his work on virtual acoustics with the Institut de recherche et de coordination acoustique/musique, IRCAM (Centre Pompidou, Ministère de la Culture, Sorbonne Université, CNRS). He also continues to work with Theatre Projects, which augurs well for further advances in the use of virtual and augmented reality in the field of architecture.

 
