, 2004 and Boumans et al., 2008). Individual auditory cortical neurons appear well suited to encode vocalizations presented in a distracting background, in part because the acoustic features to which individual cortical neurons respond are more prevalent in vocalizations than in other sound classes (deCharms et al., 1998 and Woolley et al., 2005). Furthermore, in response to vocalizations, auditory cortical neurons often produce sparse and selective trains of action potentials (Gentner and Margoliash,
2003 and Hromádka et al., 2008) that are theoretically well suited to extract and encode individual vocalizations in complex auditory scenes (Asari et al., 2006 and Smith and Lewicki, 2006). However, electrophysiology studies have found that single neuron responses to individual vocalizations are strongly influenced by background sound (Bar-Yosef et al., 2002, Keller and Hahnloser, 2009 and Narayan et al., 2007). Discovering single cortical neurons that produce background-invariant spike trains and neural mechanisms for achieving these responses would bridge
critical gaps among human and animal psychophysics, population neural activity, and single-neuron coding. Here, we identify a population of auditory neurons that encode individual vocalizations in levels of background sound that permit their behavioral recognition, and we propose and test a simple cortical circuit that transforms a background-sensitive neural representation into a background-invariant representation using the zebra finch (Taeniopygia guttata) as a model system. Zebra finches are highly social songbirds that, like humans, communicate using complex, learned vocalizations,
often in the presence of conspecific chatter. We first measured the abilities of zebra finches to behaviorally recognize individual vocalizations (songs) presented in a complex background, a chorus of multiple zebra finch songs. We trained eight zebra finches to recognize a set of previously unfamiliar songs using a Go/NoGo task (Gess et al., 2011; Figure 1A), and we tested their recognition abilities when songs were presented in auditory scenes composed of one target song and the chorus (Figure 1B). We randomly varied the signal-to-noise ratio (SNR) of auditory scenes across trials by changing the volume of the song (48–78 dB SPL, in steps of 5 dB) while keeping the chorus volume constant (63 dB; Figure 1B). Birds performed well on high-SNR auditory scenes immediately after transfer from songs to auditory scenes (Figure S1 available online), indicating that they recognized the training songs embedded in the scene.
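The SNR manipulation described above follows directly from the stated levels: with the chorus fixed at 63 dB SPL and the target song varied from 48 to 78 dB SPL in 5 dB steps, SNR in decibels is simply the song level minus the chorus level. A minimal sketch of these conditions (variable names are illustrative, not from the study):

```python
# Sketch of the SNR conditions implied by the text: song level varied
# 48-78 dB SPL in 5 dB steps, chorus fixed at 63 dB SPL.
CHORUS_DB_SPL = 63
song_levels = list(range(48, 79, 5))               # 48, 53, ..., 78 dB SPL
snr_conditions = [s - CHORUS_DB_SPL for s in song_levels]

print(song_levels)     # [48, 53, 58, 63, 68, 73, 78]
print(snr_conditions)  # [-15, -10, -5, 0, 5, 10, 15]
```

This yields seven conditions spanning -15 to +15 dB SNR, symmetric about the 0 dB point where song and chorus are equally loud.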