We communicate not only by exchanging linguistic information via speech or writing. A large part of the communicative work is done simply by placing the exchange within a certain context, whether social, historical or visual, and by figuring out the communicative intent of the other person and what they are really trying to say. The human language processing system is highly adaptive and economical, switching modality depending on the relevance and cost of the information available. For example, if a loud sound interrupts our conversation, I look towards your lips to try to recover and reconstruct the speech signal; and if you are looking at a gesture you are making, I will look at that same gesture, because your gaze has signaled its relevance to me.
In the Language & Vision Group, we try to discover, test and quantify the mechanisms by which the visual modality serves as input to the human communication system. We work with eye-tracking, conversation analysis, text analysis, psychological ratings and latent semantic modeling.
We have a limited number of mini-projects available for students interested in developing a deep understanding of language processing and communication beyond strictly linguistic models.
- Andersson, R. & Diderichsen, P. (2008). Eye movements as an indicator of spoken language processes. In Gärdenfors, P. & Wallin, A. (Eds.), A smorgasbord of cognitive science (pp. 199-214). Bokförlaget Nya Doxa.
- Andersson, R., Ferreira, F. & Henderson, J. (2011). I See What You're Saying: The integration of complex speech and scenes during language comprehension. Acta Psychologica, 137, 208-216. Elsevier.
- Andersson, R., Holsanova, J. & Holmqvist, K. (2011). Optional visual information affects conversation content. In Artstein, R., Core, M., DeVault, D., Georgila, K., Kaiser, E. & Stent, A. (Eds.), SemDial 2011: Proceedings of the 15th Workshop on the Semantics and Pragmatics of Dialogue (pp. 194-195). ICT.