The most fundamental question in sensorimotor physiology is: how do neurons transform sensory information into motor commands? Take, for example, visuospatial transformations for gaze control (where gaze is the product of both eye and head movement). The input is clearly target location relative to the retina, and the output is clearly muscular commands for eye rotation relative to the head and head rotation relative to the body, but the intermediate transformations are difficult to pinpoint. Distinguishing between candidate intermediate codes is hard because they are so similar to one another (e.g., target angle, gaze angle, eye and head rotation) compared with the variability of behavior and neural noise. Our solution has been to develop a method that maps visual and motor response fields (RFs) against different possible representations (i.e., target position, or gaze, eye, and head displacement or final position) in various frames of reference, i.e., relative to the eye, head, or space (see figures). This yields a set of 'model fits', where the best-fitting model is taken to be the one that leaves the least residuals between the raw neural data and the model (Keith et al. 2009). In our first application of this method (DeSouza et al. 2011), we found that bursts of neural activity in the superior colliculus (SC) during head-unrestrained gaze shifts made directly to briefly flashed visual targets predominantly code the location of the target relative to initial eye orientation, rather than the detailed parameters of the gaze, eye, or head movement. Here, we performed similar experiments on the SC and the frontal eye fields (FEF) in two rhesus monkeys; these are the key structures for relaying early visual signals and gaze commands to the brainstem.
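The model-comparison logic described above can be sketched as follows. This is a simplified illustration, not the published fitting procedure (Keith et al. 2009): firing rates are fit nonparametrically against each candidate spatial coordinate (here by leave-one-out Gaussian-kernel regression), and the model leaving the least residuals wins. All variable names, the synthetic data, and the kernel bandwidth are illustrative assumptions.

```python
import numpy as np

def rf_residuals(coords, rates, bandwidth=5.0):
    """Sum of squared residuals of a nonparametric response-field fit.

    Each trial's firing rate is predicted from the *other* trials by
    Gaussian-kernel regression over the candidate spatial coordinate
    (leave-one-out, so a model cannot trivially fit its own noise).
    """
    sse = 0.0
    for i in range(len(rates)):
        d2 = np.sum((coords - coords[i]) ** 2, axis=1)
        w = np.exp(-d2 / (2.0 * bandwidth ** 2))
        w[i] = 0.0                                  # leave trial i out
        pred = np.dot(w, rates) / w.sum()
        sse += (rates[i] - pred) ** 2
    return sse

# Candidate models: trial-by-trial 2-D positions in different codes.
# Synthetic data for illustration: the RF is truly tuned to target-in-eye
# coordinates (Te), while final gaze (Ge) scatters around the target.
rng = np.random.default_rng(0)
n_trials = 80
Te = rng.uniform(-30, 30, size=(n_trials, 2))      # target re: initial eye
Ge = Te + rng.normal(0, 4, size=(n_trials, 2))     # 'sloppy' final gaze re: eye
rates = np.exp(-np.sum(Te ** 2, axis=1) / 400.0)   # Gaussian RF over Te
rates += rng.normal(0, 0.02, size=n_trials)        # neural noise

fits = {name: rf_residuals(c, rates) for name, c in {"Te": Te, "Ge": Ge}.items()}
best = min(fits, key=fits.get)                     # model with least residuals
```

Because behavioral 'sloppiness' decouples Te from Ge across trials, the residuals discriminate between the two codes even though the coordinates are strongly correlated.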
To isolate the visual-motor transformation we used a memory-delay task in which animals were required to continue fixating for a variable period between seeing the target and shifting gaze. Target positions were varied throughout the RF of each neuron tested, and initial gaze fixations were varied (along with the animals' natural self-selected variations in initial eye and head position) so that we could separate out the spatial frame of the neural code. Three-dimensional (3-D) eye and head orientations were recorded so that we could account for small torsional 'twists' of the eye around the line of sight, and perform the analysis with mathematically correct rotational kinematics. For the SC (Sadeh et al. 2012), the visual response (to the appearance of the target) of every neuron we tested fitted best to target position relative to initial eye orientation (Te). Perhaps surprisingly, in 'visuomotor' neurons that showed both a visual response and a motor response during the gaze shift, the motor response also coded Te. Only in cells lacking any visual response did the motor response fit best to a motor parameter: final gaze position relative to initial eye orientation (Ge; or the mathematically similar gaze-displacement model, Gd). Note that these gaze models were spatially separable from the target models because we allowed the monkeys to be somewhat 'sloppy' in their final gaze positions around each target. For the FEF (Sajad et al. 2012), a similar picture emerged, especially in the visual responses, but in FEF cells all motor responses preferred Ge over Te (see figures). Since many FEF cells showed both visual responses (fitting Te) and motor responses (fitting Ge), we also examined the memory-delay activity between them, and found a gradual progression of best fits along the continuum between Te and Ge, leading up to the gaze shift.
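The idea of locating a response along the Te-Ge continuum can be sketched by fitting intermediate models built from trial-by-trial interpolation between the two coordinates. Again this is an illustrative simplification of the analysis, not the published procedure; the helper names, synthetic data, kernel bandwidth, and grid of steps are all assumptions.

```python
import numpy as np

def loo_sse(coords, rates, bandwidth=5.0):
    """Leave-one-out residuals of a Gaussian-kernel response-field fit."""
    sse = 0.0
    for i in range(len(rates)):
        w = np.exp(-np.sum((coords - coords[i]) ** 2, axis=1)
                   / (2.0 * bandwidth ** 2))
        w[i] = 0.0                                 # leave trial i out
        sse += (rates[i] - np.dot(w, rates) / w.sum()) ** 2
    return sse

def best_alpha(Te, Ge, rates, steps=21):
    """Best-fitting point on the Te-Ge continuum (0 = pure Te, 1 = pure Ge)."""
    alphas = np.linspace(0.0, 1.0, steps)
    sses = [loo_sse((1 - a) * Te + a * Ge, rates) for a in alphas]
    return alphas[int(np.argmin(sses))]

# Synthetic check: a response tuned to final gaze position should land
# near alpha = 1 (the coordinates and tuning here are illustrative).
rng = np.random.default_rng(1)
Te = rng.uniform(-30, 30, size=(100, 2))           # target re: initial eye
Ge = Te + rng.normal(0, 4, size=(100, 2))          # variable final gaze errors
motor = np.exp(-np.sum(Ge ** 2, axis=1) / 400.0)   # RF truly tuned to Ge
alpha_hat = best_alpha(Te, Ge, motor)
```

Applied to successive epochs of the same cell (visual burst, delay, motor burst), such a fit would trace the visual-to-motor progression as a shift of the best alpha from 0 toward 1.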
In none of our 71 SC neurons or 64 FEF neurons did we observe a response that significantly preferred an individuated motor effector (eye or head) or a non-eye-centered frame of reference. These results show that, although the SC and FEF show subtle differences in our paradigm, both employ similar eye-centered codes for vision and gaze. This suggests that effector-specific and non-eye-centered codes are only worked out at a later stage of the visuomotor transformation, most likely in the brainstem. More interestingly, we have shown that both of these structures participate in intrinsic transformations from target-based spatial codes to gaze motor codes, not only between different cell and response types, but within individual cortical neurons, across time for different aspects of the task. Since these are fundamental properties of neural coding, one expects similar principles to apply to the early stages of other sensorimotor transformations.
37th Congress of IUPS (Birmingham, UK) (2013) Proc 37th IUPS, SA140
Research Symposium: Progression of target-to-gaze command coding in superior colliculus and frontal eye fields during head unrestrained gaze shifts
D. Crawford (2,1)
1. Psychology, York University, Toronto, Ontario, Canada. 2. Centre for Vision Research, York University, Toronto, Ontario, Canada.
Where applicable, experiments conform with Society ethical requirements.