New PhD – Andrey Anikin

Published 28 February 2020

On 28 February 2020, Andrey Anikin successfully defended his PhD thesis, Human Nonverbal Vocalizations.

Opponent: Prof. Tecumseh Fitch, Dept. of Cognitive Biology, University of Vienna, Vienna, Austria.

PhD thesis: Link

Abstract: Language is a very special ability, but human communication also includes a wealth of nonverbal signals: body language, facial expressions, and nonverbal vocalizations such as laughs, moans, and screams. Vocalizations are particularly interesting because they share the same modality as language but are closer in function and structure to the calls of non-human animals. Accordingly, this thesis studies human nonverbal vocalizations from a comparative and evolutionary perspective, aiming to explore the nonverbal repertoire and to understand how information is encoded in these signals.

While nonverbal vocalizations are typically obtained by asking participants to portray a particular emotion, a less structured observational approach is explored in Paper I. By collecting unscripted examples of nonverbal vocalizations from social media, it may be possible to obtain a more representative sample of vocal behaviors, which are also judged to be more authentic than actor portrayals (Paper II). Moreover, when each sound is not intended to convey a single emotion, it becomes more obvious that the repertoire of nonverbal vocalizations consists of several perceptually distinct acoustic classes as well as intermediate variants (Paper III). This means that, like other mammals, humans have a limited number of species-typical call types. These fundamental acoustic categories are the building blocks of nonverbal communication, but their acoustic properties also inform the intonation and other prosodic features of spoken language.
Nonverbal vocalizations are interpreted flexibly in real-life interactions, taking into account the accompanying facial expression and other contextual information. To learn what information is available in the sound itself, it is desirable to modify individual acoustic properties and observe how listeners' responses change as a result. A new method of voice synthesis is proposed in Paper IV and then used to test the perceptual effects of manipulating two aspects of voice quality: nonlinear vocal phenomena (Paper V) and breathiness (Paper VI). In addition to shedding new light on the acoustic code involved in nonverbal vocalizations, Papers V and VI confirm the importance of distinguishing between call types, because the meaning of the same acoustic property – for example, voice roughness – can vary depending on the type of vocalization in which it occurs.

A common thread running through this dissertation is that humans are mammals and vocalize like mammals, despite being linguistic creatures. The structure of the vocal repertoire and the general principles of voice modulation are broadly similar across many animal species, including humans. One reason for this convergence may be the existence of widespread crossmodal correspondences, such as the tendency to associate low frequencies with a large body size. In Paper VII, I propose another possible cognitive mechanism for some non-arbitrary acoustic properties associated with intense emotion in humans and other species. Human high-intensity calls possess all the acoustic properties associated with bottom-up auditory salience – that is, these sounds appear to be "designed" to attract the listeners' attention. This may be the result of vocal production and perception coevolving, or it may mean that the acoustic structure of high-intensity vocalizations exploits preexisting perceptual biases.
To summarize, knowing the evolutionary history and cognitive mechanisms behind vocal behaviors, such as the human nonverbal vocalizations studied in this dissertation, provides a deeper understanding of their role in communication.