Abstract

 
Abstract No.: 115
Country: Canada
  
Title: USING VISUAL INFORMATION TO GUIDE ACTIONS: INSIGHTS FROM FUNCTIONAL IMAGING
  
Authors/Affiliations: 1 Jody Culham*;
1 University of Western Ontario, London, ON, Canada
  
Content: Reach-to-grasp tasks, such as reaching out to grab a cup of coffee, are thought to involve two separate components: a transport component that brings the hand to the object's location, and a grip component that uses visual information to preshape the hand according to the shape, size and orientation of the object. My laboratory conducted two functional magnetic resonance imaging (fMRI) experiments in normal human participants to address the neural substrates of reaching and grasping actions.

In Experiment 1, participants either touched an object or grasped it, in a location immediately adjacent to the hand or at a distance. We isolated the grip component by comparing grasping, which requires preshaping, with touching, which does not, and found activation in the anterior intraparietal sulcus (aIPS). We isolated the transport component by comparing actions to distant objects, which required arm transport, with actions to adjacent objects, which did not, and found activation in the superior parieto-occipital cortex (SPOC).

In Experiment 2, we investigated whether grip-selective activation in aIPS reflected 1) the degree of precision required; 2) the number of digits that must be positioned; or 3) the computation of grip and load forces in the case of object lifting. Subjects performed five grasping tasks:
a) a precision grasp (PG) with the index finger and thumb, without lifting;
b) a precision grasp + lift (PGL);
c) a precise three-digit “tripod” grip + lift (TPL);
d) a precise five-digit whole-hand grip + lift (PWHL); and
e) a coarse five-digit whole-hand grip + lift (WHL).
Three reaching tasks were also employed:
f) pointing in the direction of the object with the index finger, without arm transport (FP);
g) reaching to touch the object with the knuckles (RT); and
h) reaching to point to the object with the index finger (RP).
Left aIPS was more activated by all five grasping tasks than by all three reaching tasks; moreover, aIPS activation was highest for grasps requiring precision, regardless of the number of digits employed and regardless of whether the subject lifted the object. Taken together, these experiments support the proposed distinction between the transport and grip components, and suggest that the grip component coded by aIPS is most influenced by the precision that grasping requires, perhaps because precise grips demand a more detailed visual analysis of the object.