 

Abstract

 
Abstract No.:C-G3195
Country:Canada
  
Title:CROSS-MODAL IDENTITY RECOGNITION IN A PATIENT WITH PROSOPAGNOSIA.
  
Authors/Affiliations:1 Adria E.N. Hoover*; 2 Jean-François Domonet; 1 Jennifer K.E. Steeves;
1 Centre for Vision Research, York University, Toronto, ON, Canada; 2 INSERM U455, Hôpital Purpan, Toulouse, France.
  
Content:Objectives: Successful social interactions require the ability to extract various types of information about an individual’s identity. We are heavily dependent on visual cues, but we can also identify individuals from personal attributes such as gait, touch, and smell. Typically, we use the face to differentiate the identity and emotional expressions of the people we encounter. The same keen ability has also been demonstrated with auditory cues to identity, namely the voice. This suggests that face and voice information together account for a large proportion of person identity recognition. How unimodal face and voice information is functionally integrated, and how the two modalities interact, has been the subject of much discussion in the literature. An ideal way of testing the parallel nature of face and voice processing is to study individuals who have a selective impairment to either the face or the voice processing stream. We tested the interaction of face and voice information in identity recognition in both healthy controls and a patient (SB) who is unable to recognize faces (prosopagnosia). We asked whether bimodal information would facilitate identity recognition in patient SB.

Materials and Methods: SB, a 38-year-old male with acquired prosopagnosia, and neurologically intact controls (n=10) learned the identities of three individuals, each consisting of a face image paired with a voice sample. Participants were then tested on two unimodal stimulus conditions, 1) faces alone and 2) voices alone, and on a bimodal stimulus condition in which new and learned faces and voices were paired in five different combinations. We used a forced-choice paradigm.

Results: SB’s poor identity recognition with face-only information contrasted with his excellent performance with voice-only information. SB’s performance was better in the bimodal conditions than with faces alone, but worse than with voices alone. Controls demonstrated the exact opposite pattern.

Conclusions: These findings indicate that the controls’ dominant stimulus modality was vision, while SB’s was audition. Identity recognition was facilitated when the pairing contained a new stimulus from the participant’s dominant modality, but inhibited when it contained a new stimulus from the non-preferred modality. Most surprisingly, these results suggest that SB was unable to ignore visual face information even though he is prosopagnosic. In summary, these results demonstrate perceptual interference from the non-dominant modality when vision and audition are combined for identity recognition, and suggest interconnectivity of the visual and auditory identity pathways.
  