Natural acoustic scenes are composed of many sounds, and it is behaviorally important to identify conspecific (and allospecific) communication signals within those scenes. The auditory system divides such a scene into streams corresponding either to the physical source of a sound or to its temporal pattern, such as a melody or rhythm. This process of segmentation is a fundamental step in auditory scene analysis and, from a computational point of view, resembles the image segmentation used extensively in computer vision. Whereas the main task in image segmentation is to find the edges of visual objects, marked by sudden changes in, for example, luminance, in the auditory system the onsets and offsets of sounds form temporal edges that stand out against an existing background level of neuronal firing. Brain mechanisms for the perception of sound onsets have been investigated extensively over the past decades, but, surprisingly, mechanisms for the perception of sound offsets have not (Kopp-Scheinpflug et al., 2018). In our study we aim to understand how sound-offset responses can be generated in the mouse brain, and which neuromodulatory mechanisms are in place to adapt offset responses to changes in the acoustic environment.
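The analogy to edge detection can be made concrete with a minimal sketch (not part of the study): onsets and offsets are treated as temporal edges in an amplitude envelope and located by threshold crossings. The threshold, sampling rate and stimulus used here are illustrative assumptions only.

```python
# Minimal sketch: sound onsets/offsets as "temporal edges" in an amplitude
# envelope, by analogy with edge detection in image segmentation.
# Threshold, sampling rate and stimulus are illustrative assumptions.
import numpy as np

def temporal_edges(envelope, threshold=0.1):
    """Return sample indices of onsets and offsets in a sound envelope."""
    above = envelope > threshold            # True where sound is present
    d = np.diff(above.astype(int))          # +1 at onsets, -1 at offsets
    onsets = np.where(d == 1)[0] + 1
    offsets = np.where(d == -1)[0] + 1
    return onsets, offsets

# Example: a 100-ms tone burst embedded in silence (fs = 10 kHz, assumed)
fs = 10_000
env = np.zeros(int(0.3 * fs))
env[int(0.1 * fs):int(0.2 * fs)] = 1.0
on, off = temporal_edges(env)
print(on / fs, off / fs)   # ~0.1 s onset, ~0.2 s offset
```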
There is increasing evidence for the importance of sound-offset responses throughout the auditory pathway, but a circuit mechanism for generating sound-offset responses from scratch has so far only been described for neurons of the superior paraolivary nucleus (SPN) in the auditory brainstem. Although offset responses are reliably encoded by SPN neurons, we found a discrepancy in the percentage of SPN neurons with pure offset responses between current-clamp recordings in brain slices (87.5 ± 1.5%, n = 189 neurons) and single-unit recordings in vivo (52 ± 12%, n = 35 neurons). While pure offset responses can be generated from hyperpolarizing inhibition alone (Kopp-Scheinpflug et al., 2011), much of the difference between in vitro and in vivo recordings was due to the additional influence of excitation and the occurrence of more complex firing patterns, consisting of on-off rather than off-only responses (Rajaram et al., 2019). Our data suggest that the delicate balance between excitatory and inhibitory inputs is susceptible to noise exposure and the related increase in nitric oxide (Coomber et al., 2015). Increased nitric oxide depolarized the chloride reversal potential from −83.7 ± 5.4 mV to −67.3 ± 4.5 mV (n = 10, p = 0.002), thereby reducing the driving force for inhibition, which is a major prerequisite for generating strong offset responses. This decline in offset responses was associated with a decreased ability to detect short gaps in ongoing signals (Yassin et al., 2014). Activation of excitatory inputs, on the other hand, shortened offset-response latencies from 8.84 ms (6.38/16.75 ms, n = 9) to 5.55 ms (3.80/6.97 ms, n = 8; p = 0.018; Rajaram et al., 2019).
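As a rough illustration of the driving-force argument (not an analysis from the study), the hyperpolarizing drive on chloride can be taken as E_Cl minus the membrane potential; the membrane potential assumed below is a placeholder, whereas the two reversal potentials are those reported above.

```python
# Illustrative calculation of the inhibitory driving force on chloride.
# The membrane potential V_m (-55 mV) is an assumed placeholder, not a
# value reported in the abstract; only the E_Cl values are from the data.
V_m = -55.0            # assumed membrane potential during sound-driven activity (mV)
E_Cl_control = -83.7   # chloride reversal potential, control (mV)
E_Cl_NO = -67.3        # chloride reversal potential after increased nitric oxide (mV)

drive_control = E_Cl_control - V_m   # -28.7 mV: strong hyperpolarizing drive
drive_NO = E_Cl_NO - V_m             # -12.3 mV: weaker hyperpolarizing drive
print(drive_control, drive_NO)
```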
Taken together, we predict that changes in the acoustic environment, such as noise pollution, can alter the excitation-inhibition balance right at the start of the sound-offset pathway. This may then reduce the ability to encode temporal edges and mask the temporal patterns underlying vocal communication.
Sensory Signals (The Royal College of Physicians, London, UK) (2022) Proc Physiol Soc 50, SA05
Research Symposium: Detecting silence: Role of sound-offsets in auditory processing
Conny Kopp-Scheinpflug
LMU Munich, Munich, Germany
Where applicable, experiments conform with Society ethical requirements.