Christophe d'Alessandro is a musician and a researcher, with a background in mathematics, computer science and signal processing, as well as in improvisation and musical composition. The theme running through his research is a multidisciplinary approach to music, sound, movement and language.
The singing voice synthesizer that he developed with his "Audio and Acoustics" research group at LIMSI, an instrument controlled in real time by hand movements, has just won first prize in the prestigious Margaret Guthman Musical Instrument Competition.


What do we learn by analyzing speech?

Historically, speech analysis grew out of the development of telephony and telecommunications. For a remote conversation to work, the speakers need to recognize each other, to recognize each other's voice. With the current tidal wave of sound and images, data mining has also become a major issue for indexing and retrieving content. And the field keeps growing: needs are becoming more sophisticated, with multilingualism and simultaneous translation, for example.

What distinguishes the voice from other instruments?

The voice has two characteristics that a keyboard, that is, the pressing of a key, cannot imitate: continuous intonation, and a timbre of immense possibilities whose variety and contrasts shape language.

Since my thesis in the mid-80s, my research, combined with my experience as a musician, has led me to a new scientific approach based on the analogy between hand movements derived from writing and vocal intonation. Cantor Digitalis, the instrument we have developed, is very different from electroacoustic music: the idea is not to record and play back sound with a computer by converting an acoustic signal into an electrical one, but to modulate the signal directly with the hand, so as to produce a melody similar to that of the voice.
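To make the idea concrete, here is a minimal sketch of hand-controlled vowel synthesis in the spirit described above: a normalized stylus position is mapped to a continuously variable pitch, and a crude source-filter model (impulse-train source shaped by two formant resonators) produces a vowel at that pitch. This is an illustrative toy, not the actual Cantor Digitalis implementation; the mapping, the sample rate and the formant values are textbook-style assumptions.

```python
import math

SR = 16000  # sample rate in Hz (an assumption for this sketch)

# Approximate first two formant frequencies (Hz) for a few vowels;
# these are illustrative textbook figures, not Cantor Digitalis data.
VOWELS = {"a": (700, 1200), "i": (300, 2300), "u": (300, 800)}

def position_to_pitch(x, lo=110.0, hi=880.0):
    """Map a normalized horizontal hand position (0..1) to a fundamental
    frequency on a logarithmic (musically even) scale, so a continuous
    gesture yields continuous intonation rather than discrete keys."""
    return lo * (hi / lo) ** x

def resonator(signal, freq, bw=80.0):
    """Second-order all-pole resonator approximating one vocal formant."""
    r = math.exp(-math.pi * bw / SR)
    theta = 2 * math.pi * freq / SR
    a1, a2 = 2 * r * math.cos(theta), -r * r
    y1 = y2 = 0.0
    out = []
    for s in signal:
        y = s + a1 * y1 + a2 * y2
        out.append(y)
        y2, y1 = y1, y
    return out

def sing_vowel(x, vowel, dur=0.2):
    """Source-filter sketch: a glottal-like impulse train at the
    hand-controlled pitch, filtered by two formant resonators,
    then peak-normalized."""
    f0 = position_to_pitch(x)
    period = SR / f0  # samples per glottal cycle
    n = int(dur * SR)
    source = [1.0 if (i % period) < 1.0 else 0.0 for i in range(n)]
    for f in VOWELS[vowel]:
        source = resonator(source, f)
    peak = max(abs(v) for v in source) or 1.0
    return [v / peak for v in source]
```

Sliding `x` smoothly from 0 to 1 glides the pitch from 110 Hz to 880 Hz without steps, which is the key contrast with a keyboard drawn in the answer above.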

Today we are able to sing vowels. In other words, we are still at the stage of vocalizations, even though it is possible to sing in any style. Singing all the consonants will require new developments, perhaps a different approach, because consonants are where articulation happens and they unfold at a much faster pace. Try it!

Where will your future research take you?

The challenge clearly lies in understanding human expressiveness. For example, it is difficult to explain why hand movements come closest to vocal expression. In fact, at the moment, we can't explain it. We will probably learn a lot from motor control studies and from neurophysiology research in general. The scientific environment of Université Paris-Saclay also offers rich opportunities for this research: the Institute of Neuroscience, the Digital Society Institute (ISN) and the proximity of our colleagues in biology and the behavioral sciences open up new possibilities.

We are fully aware that our basic research holds significant therapeutic potential in addition to its musical one.

How will your musical instrument, the Cantor Digitalis, evolve?

There are several paths we can take to improve it. First of all, by playing it; to that end, we have created a choir, Chorus Digitalis. It first performed in 2011 at a scientific conference in Vancouver, then at arts and science festivals such as Curiositas at Université Paris-Saclay, and most recently at that intense experience that was the Margaret Guthman competition.

Looking to the future, we will also need to develop technology that fully matches the metaphor of singing by writing. I sense that the ideal tool is not a stylus but something more flexible, which would give the voice more scope. I have been thinking about this for a while, and the ideal tool would resemble a calligraphy brush: a more supple instrument, whose stroke thickens or thins on the surface, which holds an ink reservoir, and which might give a whole new amplitude to the voice of a singing synthesizer.


For more information: