The Choice Blindness Lab
Current Research
Choice blindness and preference change
We are interested in how receiving false feedback about one’s choices affects future preferences. We have shown that individuals will change their preferences in light of their beliefs about their past choices (Johansson et al., 2014). Ongoing projects in the lab are investigating these effects with more than one participant (dyads choosing together), as well as the effects of choice blindness manipulations on individuals’ memories of their past preferences.
Choice blindness and political attitudes
We have shown that people’s political and moral attitudes are susceptible to manipulations using self-transforming magical surveys (Hall, Johansson & Strandberg, 2012; Hall et al., 2013). Ongoing projects include conducting large-scale surveys of political attitudes and meta-attitudes, allowing us to probe how political attitudes and different measures of attitude strength correlate, and how false feedback about one’s attitudes might affect one’s related meta-attitudes.
Choice blindness and implicit measures
To better understand what happens when participants accept false feedback about their choices, ongoing projects are using a number of measures beyond self-report to study choice blindness, including eye movements, pupil dilation, and mouse and arm movements.
Real time speech exchange
We have developed a novel method which allows us to study how auditory feedback is used by speakers to help specify the meaning of what they themselves are saying, and how feedback interacts with the sense of agency during language production (Lind et al., 2014). We are currently investigating the role of feedback for the self-attribution of emotions.
Gaze and moral choice
We have developed a novel method whereby we terminate participants’ deliberation on the basis of their viewing patterns, allowing us to influence people’s responses to difficult moral questions (Pärnamets et al., 2015). Ongoing projects investigate how moral decisions are formed in the moment, using eye movements and computational models.