Ultra-fast speech comprehension in blind subjects engages primary visual cortex, fusiform gyrus, and pulvinar – a functional magnetic resonance imaging (fMRI) study
© Dietrich et al.; licensee BioMed Central Ltd. 2013
Received: 11 April 2012
Accepted: 17 July 2013
Published: 23 July 2013
Individuals suffering from vision loss of a peripheral origin may learn to understand spoken language at a rate of up to about 22 syllables (syl) per second – exceeding by far the maximum performance level of normal-sighted listeners (ca. 8 syl/s). To further elucidate the brain mechanisms underlying this extraordinary skill, functional magnetic resonance imaging (fMRI) was performed in blind subjects of varying ultra-fast speech comprehension capabilities and sighted individuals while listening to sentence utterances of a moderately fast (8 syl/s) or ultra-fast (16 syl/s) syllabic rate.
Besides left inferior frontal gyrus (IFG), bilateral posterior superior temporal sulcus (pSTS) and left supplementary motor area (SMA), blind people highly proficient in ultra-fast speech perception showed significant hemodynamic activation of right-hemispheric primary visual cortex (V1), contralateral fusiform gyrus (FG), and bilateral pulvinar (Pv).
Presumably, FG supports the left-hemispheric perisylvian “language network”, i.e., IFG and superior temporal lobe, during the (segmental) sequencing of verbal utterances whereas the collaboration of bilateral pulvinar, right auditory cortex, and ipsilateral V1 implements a signal-driven timing mechanism related to syllabic (suprasegmental) modulation of the speech signal. These data structures, conveyed via left SMA to the perisylvian “language zones”, might facilitate – under time-critical conditions – the consolidation of linguistic information at the level of verbal working memory.
Keywords: Speech perception, Compressed speech, Late- and early-blind subjects, Cross-modal plasticity, Timing
So far, a variety of studies has demonstrated superior auditory-perceptual abilities in blind individuals as compared to sighted controls, e.g., enhanced speech discrimination in a noisy environment, faster processing of simple sounds like tones [2, 3], sharper tuning of spatial attention towards noise bursts, higher recognition accuracy of the direction of pitch changes and, finally, improved identification of voices as well as enlarged memory for vocal signatures. Furthermore, functional imaging studies indicate that the central-visual system contributes to the enhanced processing of non-visual stimuli in blind subjects. For example, striate cortex shows significant hemodynamic activation during Braille reading (e.g., [7–11]), auditory motion detection, syntactic and semantic speech processing as well as cognitive language tasks such as verb generation, production of mental images based upon animal names, and retrieval of verbal-episodic memory contents [14–16]. Given their superior acoustic-perceptual abilities, the enhanced speech/language skills of blind individuals might be based upon “stimulus-driven” central-auditory mechanisms, operating across linguistic and non-linguistic domains, rather than supramodal “top-down” processes. However, activation of visual cortex in blind individuals, concomitant with performance benefits, has also been observed during verbal tasks in the absence of any sensory input. Such findings cannot easily be explained by, e.g., enhanced temporal resolution of acoustic signals.
As a further “feat” within the realm of acoustic communication – analogous, conceivably, to the fast-reading capabilities of sighted individuals – repeated exposure to accelerated verbal utterances may enable blind people to understand spoken language at speaking rates of up to 22 syllables (syl) per second, an accomplishment exceeding by far the upper limits of untrained subjects (ca. 8 syl/s). Therefore, patients suffering from vision impairments may considerably benefit, e.g., during academic education, from screen-reading text-to-speech systems operating at ultra-fast syllable rates. Using functional magnetic resonance imaging (fMRI), a preceding single-case study of our group first documented significant hemodynamic activation of right-hemispheric primary visual cortex (V1) and contralateral fusiform gyrus (FG) in a blind university student with high ultra-fast speech perception capabilities during the application of compressed verbal utterances (16 syl/s), whereas similar responses did not emerge in a series of sighted control subjects.
As an extension of the previous single-case study, the present investigation sought to confirm the association of ultra-fast speech perception with hemodynamic responses of right V1/left FG at the group level and to provide first evidence for a specific causal engagement of those structures in enhanced spoken language comprehension. More specifically, we expected hemodynamic activation of right V1/left FG to covary with behavioral measures of ultra-fast speech comprehension capabilities. In order to test this hypothesis, blind subjects varying in their capabilities to understand ultra-fast speech – and sighted individuals never exposed to accelerated spoken language – underwent fMRI measurements while listening to sentence utterances of a moderately fast (8 syl/s) or ultra-fast (16 syl/s) speaking rate. In addition, the same test materials were presented to the participants as time-reversed speech signals (backward played sentences) – a procedure rendering verbal utterances unintelligible. Those spectrally matched, but “phonologically incorrect” acoustic stimuli served as control items to the two forward conditions of the experiment.
Functional reorganization of visual cortex – and its impact on an individual’s perceptual, attentional, and cognitive skills – appears to be critically constrained by the time of vision loss (see  for a recent review). Nevertheless, subjects with both early- and late-onset blindness have been found capable – though possibly to varying degrees – of acquiring the capacity of ultra-fast speech comprehension. Against this background, the present study recruited a group of late-blind subjects varying in the onset of vision loss as well as three early-blind individuals, allowing for a preliminary, i.e., descriptive, analysis of onset-age effects.
Table 1. Clinical and behavioral data of the vision-impaired and healthy subjects (columns: onset of blindness; etiology of blindness; characterization of visual deficits; comprehension of moderately fast utterances). Recoverable per-subject cell entries include: no residual visual perception, visus 0 bilaterally; light perception extant, visus 0.01 bilaterally; hereditary vision loss (gene mutation); visus 0.3 bilaterally; light perception, dim object recognition; eye cataract, glaucoma; close vision > 24.5 diopter, visus 0 left, 1.5 right.
The test materials of this investigation – a subset of the stimulus corpus of the preceding single-case study – encompassed 40 different text passages (sentences) obtained from newspapers and transformed into acoustic speech signals by means of a formant synthesizer (text-to-speech system) at the Institute of Phonetics of Saarland University (screen reader software JAWS 2008; male voice; http://www.freedomsci.de). All utterances were first recorded at a normal speaking rate, amounting to 4–6 syl/s. Using the speech processing software Praat (version 4.5; http://www.fon.hum.uva.nl/praat/), 20 of the 40 sentences were compressed to a moderately fast (8 syl/s) and the remaining 20 items to an ultra-fast syllabic rate (16 syl/s; see Additional files 1 and 2). In addition, both subsets of the test materials were stored as time-reversed speech signals (backward played sentences), serving as spectrally matched, but unintelligible and “phonologically incorrect” control items to the two forward conditions (see Additional files 3 and 4). Duration of the various stimuli extended from ca. 3.5 to 4 s (moderately fast spoken sentences: mean length = 3.84 s, SD = 0.15 s; ultra-fast spoken sentences: mean = 3.78 s, SD = 0.12 s). Thus, the moderately fast tokens comprised on average 30.75 syllables per sentence, whereas the ultra-fast stimuli consisted on average of 60.55 syllables per utterance.
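The reported mean syllable counts follow directly from rate × duration; a minimal Python sketch of this arithmetic (duration values taken from the text above):

```python
# Sanity check: at a fixed syllabic rate, the expected mean syllable count
# per sentence is simply rate * mean duration.

def expected_syllables(rate_syl_per_s: float, mean_duration_s: float) -> float:
    """Expected mean syllable count of a sentence at a given syllabic rate."""
    return rate_syl_per_s * mean_duration_s

# Moderately fast: 8 syl/s, mean sentence length 3.84 s
mf = expected_syllables(8, 3.84)   # 30.72 syllables, close to the reported 30.75
# Ultra-fast: 16 syl/s, mean sentence length 3.78 s
uf = expected_syllables(16, 3.78)  # 60.48 syllables, close to the reported 60.55
print(mf, uf)
```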
Behavioral data acquisition and analyses
To obtain a quantitative behavioral measure of an individual’s capability to understand moderately fast and ultra-fast speech utterances, each subject performed – outside the scanner and prior to the fMRI measurements – a sentence repetition task encompassing a subset of the forward stimulus materials (ten sentences at each speaking rate). These verbal utterances were truncated to the phrase-initial 9–10 words in order to limit memory load. In some instances, this approach yielded incomplete sentences, but those items nevertheless represented meaningful “stretches” of spoken language. The stimulus materials (see Additional file 5 for an example) were presented to the participants via loudspeakers (Fostex Personal Monitor 6301B) within a sound-attenuated room, and subjects were asked to repeat them “as faithfully as possible”, even in case not all words had been “grasped”. The subjects’ repetitions were recorded digitally (M-Audio MicroTrack 2496). Subsequent evaluation included the computation of the percentage of correctly reproduced words at each rate condition, focusing upon word form irrespective of minor grammatical errors such as deviant singular or plural endings. Analysis of the behavioral data included a comparison of the speech comprehension capabilities of blind and sighted subjects (two-tailed two-sample t-test) as well as correlation analyses within the blind group (two-tailed Pearson tests), addressing the relationship between repetition performance, on the one hand, and age at the onset of vision loss as well as disease duration at the time of the fMRI measurements, on the other.
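The behavioral analyses described above can be sketched with standard statistical routines; the per-subject scores and onset ages below are hypothetical placeholders, not the study’s data:

```python
import numpy as np
from scipy import stats

# Hypothetical per-subject repetition scores (% correctly reproduced words);
# the real data are reported in Table 1 of the article.
blind_uf = np.array([5.0, 35.0, 48.0, 55.0, 62.0, 70.0, 78.0, 85.0, 93.0])
sighted_uf = np.array([3.0, 5.0, 8.0, 9.0, 10.0, 12.0, 14.0, 15.0, 16.0])

# Two-tailed two-sample t-test: blind vs. sighted comprehension of
# ultra-fast speech
t_val, p_val = stats.ttest_ind(blind_uf, sighted_uf)

# Two-tailed Pearson correlation within the blind group: repetition
# performance vs. age at onset of vision loss (hypothetical ages)
onset_age = np.array([1.0, 3.0, 7.0, 12.0, 16.0, 21.0, 28.0, 35.0, 44.0])
r_val, p_r = stats.pearsonr(blind_uf, onset_age)
print(f"t = {t_val:.2f}, p = {p_val:.4f}; r = {r_val:.2f}, p = {p_r:.3f}")
```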
fMRI – data acquisition
All functional imaging sessions included two repetitions each of the 20 moderately fast (mf), 20 ultra-fast (uf), 20 time-reversed moderately fast (rev-mf), and 20 time-reversed ultra-fast (rev-uf) utterances (altogether 160 stimuli) as well as 40 silent baseline intervals (scanner noise). The test materials – distributed across five runs – were presented in randomized order (event-related design) at an inter-stimulus interval of 9.6 s (jitter = ± 1.4 s, steps of 0.2 s) via headphones adapted to MRI measurements by removal of their permanent magnets (Sennheiser HD 570, binaural stimulus application). Since these headphones provide sufficient attenuation of environmental noise, it was not necessary to provide the subjects with earplugs during the experiment. Prior to scanning, participants were instructed to listen carefully to the auditory stimuli and to try to understand the presented verbal utterances. Thus, the design did not allow for an explicit control of speech comprehension during the fMRI experiment. However, the brain structures sensitive to speech intelligibility have been found to “light up” even under listening-only conditions. Activation of language processing areas such as the inferior frontal gyrus (IFG) can thus be considered an indicator of actual speech comprehension. Subjects were asked to close their eyes during scanning and to report to the experimenters whether they could adequately hear the test materials in the presence of scanner noise; otherwise, the sound amplitude of the stimuli was adjusted.
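A schedule of this kind can be sketched as follows; only the trial counts, the 9.6 s inter-stimulus interval, and the ± 1.4 s jitter in 0.2 s steps are taken from the text, while the seed and the event bookkeeping are illustrative choices:

```python
import random

# Sketch of the randomized event-related schedule described above:
# 160 speech stimuli (4 conditions x 20 sentences x 2 repetitions) plus
# 40 silent baseline trials, with an inter-stimulus interval of 9.6 s
# jittered by +/- 1.4 s in steps of 0.2 s.
random.seed(0)

conditions = ["mf", "uf", "rev-mf", "rev-uf"]
events = [(c, s, rep) for c in conditions for s in range(20) for rep in range(2)]
events += [("baseline", None, None)] * 40
random.shuffle(events)

jitters = [round(-1.4 + 0.2 * k, 1) for k in range(15)]  # -1.4 s ... +1.4 s
onsets, t = [], 0.0
for ev in events:
    onsets.append((round(t, 1), ev))
    t += 9.6 + random.choice(jitters)

print(len(onsets))  # 200 trials
```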
The experiment was run on a 3 Tesla MRI system (Magnetom TRIO; Siemens, Erlangen, Germany), using an echo-planar imaging sequence (echo time = 30 ms, 64 × 64 matrix with an in-plane resolution of 3 × 3 mm², 27 axial slices across the whole brain volume, TR = 1.6 s, slice thickness = 4 mm, flip angle = 90°, 270 scans per run). The scanner generated a constant background noise throughout fMRI measurements, serving as the baseline condition of the experimental design (null event). Anatomical images required for the localization of the hemodynamic responses were obtained by means of a GRAPPA sequence (T1-weighted images, TR = 2.3 s, TE = 2.92 ms, flip angle = 8°, slice thickness = 1 mm, resolution = 1 × 1 mm²) of a bi-commissural (AC-PC) orientation.
fMRI – data analyses
Preprocessing of the data encompassed slice time and motion correction, normalization to the Montreal Neurological Institute (MNI) template space, and smoothing by means of an 8 mm full-width half-maximum Gaussian kernel (SPM5 software package; http://www.fil.ion.ucl.ac.uk/spm). For the sake of statistical analysis, the blood oxygen level-dependent (BOLD) responses were modeled by means of a prototypical hemodynamic function within the context of a general linear model (event durations = 4 s). Any low-frequency temporal drifts were removed using a 128 s high-pass filter.
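The construction of a single GLM regressor from 4 s events, as described above, can be sketched as follows; the double-gamma shape parameters and the event onsets are conventional illustrative values, not taken from SPM5’s exact implementation:

```python
import numpy as np
from scipy.stats import gamma

# Sketch of building one GLM regressor: a boxcar of 4 s speech events
# convolved with a canonical double-gamma HRF, sampled at TR = 1.6 s
# (270 scans per run, as in the acquisition described above).

def double_gamma_hrf(t):
    """Canonical double-gamma hemodynamic response function (illustrative)."""
    peak = gamma.pdf(t, 6)          # positive response, peaking around 5 s
    undershoot = gamma.pdf(t, 16)   # delayed undershoot
    return peak - undershoot / 6.0

TR, n_scans = 1.6, 270
frame_times = np.arange(n_scans) * TR

# Hypothetical event onsets (s); each event lasts 4 s, as in the design
onsets, duration = [10.0, 40.0, 75.0], 4.0
boxcar = np.zeros(n_scans)
for on in onsets:
    boxcar[(frame_times >= on) & (frame_times < on + duration)] = 1.0

kernel = double_gamma_hrf(np.arange(0.0, 32.0, TR))
regressor = np.convolve(boxcar, kernel)[:n_scans]
print(regressor.shape)  # (270,)
```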
In order to delineate the brain regions engaged in the processing of the various stimulus categories considered (mf, uf, rev-mf, rev-uf), the contrast of hemodynamic activation versus baseline was computed separately for blind and sighted individuals (whole-head one-sample T-test, threshold at voxel level = p < .001 uncorrected, threshold at cluster level = p < .05 corrected; Additional file 6 includes the respective SPM coordinates). Evaluation of the differences between the blind and sighted groups under the various conditions (mf, uf, rev-mf, rev-uf, each versus baseline) was based upon a whole-head two-sample T-test (threshold at voxel level = p < .001 uncorrected, at cluster level = p < .05 corrected; the respective results can be found in Additional files 7 and 8).
A whole-head covariance analysis was conducted, allowing for the identification of hemodynamic responses correlated with ultra-fast speech comprehension capabilities as measured outside the scanner (ultra-fast speech versus baseline condition, pooled across blind and sighted participants). The whole-head random-effects group analysis was thresholded at the voxel level at p < .0005 (uncorrected) and corrected for multiple comparisons across the entire brain at the cluster level (p < .05). In order to characterize lateralization effects of the BOLD responses within V1 bound to the SPM T-contrast “ultra-fast versus baseline”, the signal changes (in percent) confined to a respective anatomical mask were calculated separately for the left and right hemisphere and entered into a repeated measures ANOVA with the intra-subject factor Hemisphere (left/right) and the between-subject factor Group (late-/early-blind).
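The hemisphere comparison of percent signal changes can be illustrated with a simple paired contrast; the per-subject values below are made up, and a paired t-test stands in for the intra-subject Hemisphere factor of the repeated-measures ANOVA:

```python
import numpy as np
from scipy import stats

# Illustration of the hemisphere comparison: per-subject percent BOLD
# signal changes within left- and right-V1 masks (hypothetical values).
left_v1 = np.array([0.05, 0.10, -0.02, 0.08, 0.12, 0.04])   # % signal change
right_v1 = np.array([0.22, 0.31, 0.15, 0.27, 0.40, 0.18])

# Paired, two-tailed t-test across the two hemispheres
t_val, p_val = stats.ttest_rel(right_v1, left_v1)

# Simple lateralization index: positive values indicate right dominance
lat_index = (right_v1 - left_v1) / (np.abs(right_v1) + np.abs(left_v1))
print(f"t = {t_val:.2f}, p = {p_val:.4f}, mean LI = {lat_index.mean():.2f}")
```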
The data from late- and early-blind subjects were analyzed separately to detect activation differences between those subgroups within brain regions contributing to ultra-fast speech processing, based upon the SPM between-group T-contrast “blind versus sighted” (all conditions pooled versus baseline; see Additional file 9). In addition, the late-blind group – excluding the three early-onset subjects – underwent a covariance analysis (SPM T-contrast “ultra-fast versus baseline”; see Additional files 10 and 11).
In order to disentangle the confounded effects of blindness and ultra-fast speech perception within the initial covariance test, post hoc regions-of-interest (ROI) analyses were performed, testing the relation between behavioral performance and hemodynamic responses separately for the blind and sighted subgroups, considering both the moderately fast and the ultra-fast speech condition (versus baseline). Because of a strong ceiling effect, behavioral performance in terms of the understanding of moderately fast speech was not taken into account here: under ceiling conditions, individual variation might reflect differences in memory load or the impact of attentional factors rather than specific mechanisms engaged in ultra-fast speech comprehension.
Coordinates of the whole-head covariance analysis (columns include cluster size in voxels). Regions: cuneus, BA 17/18; inferior frontal gyrus, pars triangularis, BA 44/45; supplementary motor area; posterior middle temporal gyrus / superior temporal sulcus (two clusters); anterior middle temporal gyrus / superior temporal sulcus.
In order to further delineate the impact of the various experimental factors – Group (blind/sighted), Meaningfulness (forward/backward), and Speaking rate (ultra-fast/moderately fast) – upon the BOLD responses within three exemplary ROIs (right V1, left FG, left IFG), repeated measures ANOVAs were conducted (see Additional file 12). Determination of the relationship between the capability of ultra-fast speech comprehension, age at blindness onset, and duration of vision loss, on the one hand, and the strength of hemodynamic responses (percent BOLD signal changes), on the other, relied upon correlation analyses of the data obtained from the group of blind subjects (two-tailed Pearson tests, significance threshold set at p < .05; see Additional file 13).
Comprehension of ultra-fast speech utterances (16 syl/s) – in terms of the percentage of correctly reproduced words of 10-word sentences outside the scanner – extended in blind listeners from 0 to 93% (mean = 59.7%, SD = 24.0%) – a wide range of performance allowing for subsequent correlation analyses. In sighted individuals, performance level consistently fell below 16% (mean = 9.0%, SD = 4.7%) (Table 1). Moderately fast utterances (8 syl/s) yielded comparable results in both subject groups (range across all participants: 61 to 100%; sighted controls: mean = 80.8%, SD = 11.8%; blind listeners: mean = 84.1%, SD = 11.0%). In contrast to the latter condition (two-sample t-test; T = 0.74, p = .470), blind and sighted individuals significantly differed with respect to speech comprehension of the ultra-fast test materials (two-sample t-test; T = 7.74, p < .001). The enhanced perceptual skills of blind listeners showed a significant correlation neither with the onset of vision loss (r = -.347, p = .224) nor with the duration of blindness at the time of the experiment (r = .152, p = .604).
Whole-head fMRI analyses
Differential activation effects in late- and early-blind patients
Region-of-interest (ROI) analyses
As concerns the impact of the various experimental factors upon the BOLD responses within right V1 and left FG (see Additional file 12), the group of blind individuals showed stronger responses to forward as compared to reversed speech (interaction Group × Meaningfulness; FG: F (1, 24) = 22.10, p < .001; right V1: F (1, 24) = 12.63, p < .002). Furthermore, a significant main effect of Group emerged within V1 and FG (see Additional file 12) in terms of significant positive mean values in response to forward speech (mf: T = 2.73, p < .017; uf: T = 5.52, p < .001) and reversed moderately fast speech (rev-mf: T = 2.77, p < .016; rev-uf: T = 1.90, p = .079). By contrast, significantly negative values arose within the sighted group during both forward conditions (one-sample T-test; mf: T = -2.61, p < .024; uf: T = -2.65, p < .023). Finally, a significant three-way interaction Speaking rate × Group × Meaningfulness (see Additional file 12) could be observed at the level of left-hemispheric IFG (F (1, 24) = 18.51, p < .001), indicating this region to be sensitive to intelligibility, based on the skill of blind listeners to understand ultra-fast forward speech (post hoc analysis, blind > sighted (uf), IFG: T = 4.08, p < .001). By contrast, sighted controls showed significant hemodynamic activation of this region only during moderately fast forward speech.
Summary of results
Various left-hemispheric perisylvian structures known to support the perception of spoken language showed, as expected, significant hemodynamic responses to the test materials of this study in both sighted and blind subjects. Furthermore, the precentral gyrus of the same side and the cerebellum displayed significant BOLD signal changes under both forward speech conditions as well as reversed moderately fast speech. These observations are in line with clinical and functional imaging data pointing at a contribution of those structures – under specific circumstances – to auditory speech perception. For example, the cerebellum has been found to engage in the encoding of specific temporal-linguistic information during word identification tasks.
Similar to a preceding single-case study, individuals with vision loss exhibited significant hemodynamic activation of both right-hemispheric V1 and contralateral FG – covarying with capabilities of ultra-fast spoken language comprehension. More specifically, the BOLD signal changes within these two areas showed a positive correlation with individual ultra-fast speech perception skills and, evidently, depended upon semantic, syntactic, and/or phonological content since time-reversed (backward) speech stimuli were associated with reduced hemodynamic activation. Visual inspection of the data, furthermore, suggests a more extensive and more bilateral occipital response pattern in the three early-blind as compared to the late-blind individuals. In addition, a positive correlation between BOLD signal magnitude and the level of ultra-fast speech understanding emerged at the level of Pv on either side.
Interactions between left FG and left perisylvian cortex in blind listeners during ultra-fast speech perception
FG is embedded into the so-called ventral route of the central-visual system, which engages especially in object recognition, but may contribute to phonological operations as well. First, functional imaging studies have repeatedly found this region to support pre-lexical stages of reading tasks and, more specifically, to “house” visual word forms. Among other things, FG has been observed to respond to spoken lexical items even in sighted people. Second, impaired speech sound processing in children with reading difficulties seems to be associated with diminished connectivity between FG and frontal language areas. Conceivably, thus, left FG cooperates with the posterior and anterior perisylvian “language zones” – more specifically, ipsilateral IFG and aSTS as well as bilateral pSTS – during ultra-fast speech comprehension. Clinical and functional imaging data indicate a contribution of left IFG to spoken language perception – at least in case more demanding segmentation processes and/or working memory operations are involved. Left pSTS has been found to respond to acoustic signals conveying phonetic-phonological information, irrespective of intelligibility, whereas hemodynamic activation of the anterior part of the same sulcus is restricted to meaningful verbal stimuli [22, 31]. Whereas both phonemic and non-phonemic sound structures elicit BOLD signal changes within bilateral pSTG, responses of left-hemispheric anterior and middle STS are restricted to familiar consonant-vowel syllables. Furthermore, pSTS on either side has been found associated with phonological aspects of speech recognition.
Against this background, the observed temporal lobe activation pattern might be associated with the conveyance of information into higher-order supramodal cortical structures such as (i) the left temporo-parieto-occipital junction, supporting meaning-based representations (auditory-to-meaning interface), and (ii) left-hemispheric frontal areas, providing access to speech production units and phonological working memory (auditory-motor interface) (see  for a review). Presumably, left FG, representing a secondary phonological area, expands the phonological network to cope with the higher processing demands of ultra-fast speech perception.
Lateralized, i.e., predominantly right-hemispheric hemodynamic activation of V1 in blind listeners during ultra-fast speech perception
The present investigation supports the suggestion that, indeed, visual cortex contributes in a causal manner to enhanced auditory speech processing skills in blind subjects since, first, the capability of ultra-fast spoken language comprehension covaried with the strength of hemodynamic activation of right V1 and, second, the involvement of this structure was considerably reduced during listening to reversed, i.e., non-meaningful test materials.
A series of studies indicates that the age of blindness onset significantly constrains the capacity for structural/functional reorganization of central-visual areas in humans. More specifically, only individuals suffering from congenital blindness – lacking any stimulus-driven elaboration of the visual system – appear to be able to “mold” occipital cortex in a fundamentally different manner as compared to sighted subjects. It is, furthermore, still a controversial issue how far late-onset visual deficits may induce cortical reorganization in terms of functional cross-modal plasticity. For example, neuroimaging studies point at a decline of those capabilities after an age of 14–16 years [21, 36], and Wan and colleagues found early, but not late (≥ 14 years), vision loss to enhance auditory perception during non-speech tasks. The present investigation did not find any significant correlations between the time of onset or the duration of blindness, on the one hand, and the ability to understand ultra-fast speech or the hemodynamic activation of visual cortex, on the other. Thus, the extent of the recruitment of the central-visual system appears to correlate primarily with behavioral performance rather than the age at vision loss (see [38, 39] for similar data). Nevertheless, a significant impact of this clinical parameter upon cross-modal fMRI effects cannot securely be excluded since both high-performing (> 60% correctly repeated words) early-blind participants, but only a single skilled late-blind individual (1 out of 5 subjects), displayed bilateral occipital responses. By contrast, a right-lateralized distribution emerged in most late-blind individuals. Similarly, Braille reading was reported to induce responses of visual cortex on either side in early-blind subjects, whereas late-blind individuals display an activation pattern restricted to the hemisphere ipsilateral to the reading hand.
Although the rather small and heterogeneous sample of blind subjects of the present study precludes any firm conclusions, ultra-fast speech perception does not appear to depend upon major rewiring of visual cortex – comparable to the reorganizational processes bound to congenital blindness. Rather, this perceptual capability seems associated with task-dependent cross-modal functional plasticity based, conceivably, on the engagement of existing anatomical structures.
In principle, recruitment of – predominantly right-hemispheric – occipital cortex during ultra-fast speech comprehension could either reflect early, i.e., signal-related computational operations or could be bound to higher-order processing stages succeeding semantic speech encoding. Previous studies found speech- or language-related tasks such as verbal memory or verb generation tests to yield, as a rule, bilateral hemodynamic activation of occipital cortex – in the presence of more pronounced left-sided responses [14, 41]. Hemodynamic activation of primary visual areas on either side has also been documented in blind individuals listening to meaningful as well as meaningless sentences. Again, Braille reading yielded bilateral occipital responses, slightly enhanced within the hemisphere contralateral to the “reading” hand. By contrast, the observed hemodynamic activation of V1 in blind listeners during ultra-fast speech perception displayed strong lateralization effects toward the right side. Thus, the distinct informational cues of the acoustic signal facilitating speech perception under time-critical conditions might be predominantly processed within the non-language-dominant hemisphere. Short spectro-temporal “segments” of the acoustic signal, extending across time intervals of a few tens of milliseconds, encode most of the information related to single speech sound categories such as the various consonants of a language system. Important acoustic features within this domain are, e.g., the formant transitions and the voice onset time of stop consonants. It is well established that the extraction of those segmental aspects of spoken language mainly depends upon left-hemispheric perisylvian “language zones”, including anterior and posterior aspects of the superior temporal lobe and posterior ventro-lateral frontal cortex [30, 44, 45].
Besides those segmental aspects, the acoustic speech signal conveys suprasegmental (prosodic) information such as the intonation of an utterance (“sentence melody”), related to the fundamental frequency contour of the speech signal. In addition, prosodic information encompasses the specification of temporal structures such as rhythmic and metric patterns [46, 47]. In contrast to the left-lateralized encoding of the segmental level of verbal utterances, various sources of evidence indicate primarily right-hemispheric representation of suprasegmental/prosodic speech information. In case of formant-synthesized verbal utterances, such as the test materials used in the present study, the prosody of spoken language is more or less restricted to syllable timing (syllabic rhythm) as reflected in the speech envelope, i.e., the low-pass-filtered intensity contour of the acoustic signal. A recent whole-head magnetoencephalography (MEG) study, including stimulus materials (ultra-fast and moderately fast speech) similar to the present fMRI investigation, provides additional evidence for a direct translation of the acoustic correlates of syllable structure into electrophysiological brain activity. Most noteworthy, electrophysiological recordings found the speech envelope to be predominantly processed within the right hemisphere. Conceivably, the observed occipital lateralization effects during ultra-fast speech perception in blind subjects indicate V1 to engage in the analysis of the speech envelope or, more specifically, syllabic rhythm. Against this background, activation of the central-visual system might also be expected in case of unintelligible reversed speech. However, Ahissar and colleagues reported a significant correlation between signal-driven syllable-related brain activity of auditory cortex and speech comprehension.
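Envelope extraction of this kind can be sketched as follows; the 10 Hz cutoff and the toy amplitude-modulated signal are illustrative assumptions, not parameters of the study:

```python
import numpy as np
from scipy.signal import butter, filtfilt

# Sketch of extracting the speech envelope described above: rectify the
# waveform and low-pass filter the intensity contour below the syllabic
# frequency range.

def speech_envelope(signal, fs, cutoff_hz=10.0):
    """Low-pass-filtered intensity contour of an acoustic signal."""
    b, a = butter(4, cutoff_hz / (fs / 2.0), btype="low")
    return filtfilt(b, a, np.abs(signal))

# Toy stimulus: a 220 Hz carrier amplitude-modulated at 8 Hz,
# mimicking a syllabic rhythm of 8 syl/s
fs = 16000
t = np.arange(2 * fs) / fs  # 2 s of signal
carrier = np.sin(2 * np.pi * 220.0 * t)
modulation = 0.5 * (1.0 + np.sin(2 * np.pi * 8.0 * t))
env = speech_envelope(carrier * modulation, fs)
print(env.shape)
```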
This observation could be explained by top-down processes bound to expectations related to the sound structure of the incoming signal which interact with the initial processing of the auditory input. Assuming, thus, right-lateralized early prosodic processing, occipital pole responses correlating with ultra-fast speech comprehension might reflect signal-driven rather than higher-order comprehension processes.
Mechanisms of ultra-fast speech perception: facilitated verbal consolidation under time-critical conditions
Blind individuals have been found to outperform sighted subjects in tasks requiring temporal order judgments of backward-masked tone stimuli, particularly in case of brief intervals (40 ms) between the respective auditory events. This condition resembles, by and large, ultra-fast speech since each syllable can be expected to act as a potential masker of the preceding one. Stevens and Weaver assigned the increased temporal resolution of non-speech acoustic events in blind subjects to “perceptual consolidation”, i.e., higher-order processing stages such as auditory working memory, rather than the analysis of spectro-temporal signal characteristics. These suggestions might help explain the observed mesiofrontal engagement in the perception of accelerated verbal utterances. Besides visual cortex, hemodynamic activation of left SMA was found to covary with the ability to comprehend ultra-fast spoken language. Several studies indicate this mesiofrontal area to engage in the syllabic organization of verbal utterances during speech production [54–56]. On a broader scale, SMA appears to support timing processes across various sensorimotor and cognitive domains [57, 58]. Furthermore, clinical as well as experimental studies point at a contribution of SMA to speech perception and verbal working memory [59–63]. Since the verbal encoding of longer stretches of speech such as the test materials of the present study must be expected to engage short-term memory processes, and since SMA appears to act as a platform of timing operations related, among other things, to verbal working memory functions, right V1 might provide a “fast track” channel conveying temporal information on syllable structure directly from primary auditory areas via left SMA into verbal working memory.
More specifically, the cooperation of primary auditory areas, right V1, and left SMA could facilitate a signal-driven timing mechanism for the transformation of the acoustic signal into a stable (consolidated) verbal code under time-critical conditions.
The role of the pulvinar during ultra-fast speech perception: synchronization of central-visual and central-auditory areas
Besides several cortical regions, ultra-fast speech comprehension capabilities also covaried with the hemodynamic responses of Pv at either side. Animal data obtained in tree shrews indicate that these thalamic nuclei project to V1 as well, in addition to higher-order areas of the central-visual system. As concerns primates, at least some Pv subcomponents are embedded into reciprocal connections with both striate and extrastriate areas. In consideration of this network architecture, the respective parts of the Pv have been assumed to support attentional processes operating within the visual domain. Furthermore, tract-tracing studies in monkeys found both the ascending auditory pathways and the optic tracts to send convergent collateral fiber tracts to deep layers of the superior colliculus, and the respective target neurons, in turn, project via Pv to auditory as well as visual cortex. Among other things, the pulvinar contributes to the detection of temporo-spatial coincidences of audiovisual signal configurations. In blind subjects, Pv might help to synchronize – driven by acoustic input – striate cortex with the central-auditory system during ultra-fast speech perception, based upon cross-modal subcortical pathways that in sighted individuals subserve audiovisual coincidence detection and the control of visual attention. Given, furthermore, direct anatomical connections between auditory and visual areas [69–71], early multisensory convergence processes at the cortical level must be assumed, as demonstrated, e.g., by means of transcranial magnetic stimulation. These considerations suggest that the observed hemodynamic responses within bilateral Pv and primary visual areas reflect early (thalamo-cortical) rather than later (cortico-cortical) stages of ultra-fast speech processing.
Contribution of visual cortex to the perception of time-compressed speech in normal subjects
In principle, speech perception represents an audiovisual process, and under difficult acoustic conditions lip reading may considerably improve spoken language understanding. It must be expected, thus, that the visual system encompasses – to some extent – preconfigured connections with the auditory system, providing a basis for interactions between the two modalities. Indeed, a recent diffusion tensor imaging (DTI) study evaluating white matter parameters in children found inter-subject differences in fractional anisotropy to correlate with the comprehension of time-compressed speech: moderately manipulated signals (40% compression) yielded these effects in white matter areas adjacent to audiovisual association cortex and posterior cingulate gyrus, while a greater degree of compression resulted in changes of tracts adjoining prefrontal areas (dorsal and ventral).
A previous fMRI study reported compressed as compared to normal speech to elicit a “convex” distribution pattern of hemodynamic responses within IFG, and BOLD signal changes paralleled the extent of this manipulation as long as intelligibility of the verbal utterances was preserved. Similarly, sighted subjects in the present investigation showed reduced IFG activation while listening to ultra-fast speech. A further fMRI experiment revealed learning to understand time-compressed speech to be associated with increased activation of left and right auditory association cortices as well as left ventral premotor cortex, suggesting that speech perception involves the integration of multi-modal data sets, mapping acoustic patterns onto articulatory motor plans. At very high syllable rates, sighted subjects evidently do not recruit the visual system in order to enhance speech comprehension. Furthermore, invasive electrophysiological measurements during application of time-compressed speech revealed the speech envelope – up to frequencies of 15 Hz – to be well-represented at the level of auditory cortex, suggesting that the temporal resolution of primary auditory cortex is not the limiting factor for ultra-fast speech comprehension. Similarly, our group found significant MEG phase locking to envelope features of ultra-fast verbal utterances (16 syl/s) [49, 76]. In the latter study, blind individuals showed an additional phase-locked component bound to right visual cortex that was absent in sighted subjects. Although primary auditory cortex should, in principle, be able to track the speech envelope, this extracted information might not suffice to trigger phonological processes during lexical encoding at the level of working memory.
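The notion of envelope tracking at the syllabic rate can be made concrete with a minimal, self-contained sketch (pure Python, using a hypothetical test signal; this is an illustration, not the study's stimuli or analysis pipeline): a carrier is amplitude-modulated at a 16 Hz "syllabic" rate, mimicking ultra-fast speech at 16 syl/s, the envelope is recovered by rectification and smoothing, and a single-frequency DFT scan identifies the dominant modulation rate.

```python
import math

# Hypothetical test signal: a carrier amplitude-modulated at a "syllabic" rate.
FS = 1000          # sampling rate (Hz)
DUR = 2.0          # signal duration (s)
SYL_RATE = 16.0    # "ultra-fast" syllabic rate (16 syl/s -> 16 Hz modulation)
CARRIER = 200.0    # stand-in for the fast acoustic carrier (Hz)

n = int(FS * DUR)
t = [i / FS for i in range(n)]
signal = [(1.0 + math.cos(2 * math.pi * SYL_RATE * ti))
          * math.sin(2 * math.pi * CARRIER * ti) for ti in t]

# Envelope extraction: full-wave rectification + moving-average smoothing.
# A ~20 ms window removes the carrier ripple but keeps modulations below ~25 Hz.
rectified = [abs(x) for x in signal]
win = int(FS / 50)
envelope = [sum(rectified[max(0, i - win):i + 1]) / (i + 1 - max(0, i - win))
            for i in range(n)]

def dft_mag(x, freq, fs):
    """Magnitude of the DFT of x at one candidate frequency."""
    re = sum(v * math.cos(2 * math.pi * freq * i / fs) for i, v in enumerate(x))
    im = sum(v * math.sin(2 * math.pi * freq * i / fs) for i, v in enumerate(x))
    return math.hypot(re, im) / len(x)

# Remove the DC offset of the envelope, then scan candidate syllabic rates.
mean_env = sum(envelope) / n
env_ac = [v - mean_env for v in envelope]
rates = [4.0, 8.0, 16.0, 32.0]
mags = {r: dft_mag(env_ac, r, FS) for r in rates}
peak = max(mags, key=mags.get)  # the 16 Hz modulation should dominate
```

The point of the sketch is that the slow envelope survives even when the carrier is far faster: rectification plus smoothing, a crude stand-in for cochlear/cortical envelope extraction, cleanly recovers a 16 Hz modulation, consistent with the finding that auditory cortex can represent envelopes up to about 15 Hz while comprehension nevertheless breaks down downstream.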
A recent fMRI study delineating the “bottleneck” of time-compressed speech processing found higher stages of language processing, associated with “buffer regions” within left ventrolateral frontal cortex/anterior insula, precentral gyrus, and mesio-frontal areas, to represent the limiting factor of spoken language comprehension. Our data suggest that the visual cortex must also be considered an essential prerequisite for enhanced speech encoding at high syllable rates. Altogether, sighted subjects appear unable – or at least do not “attempt” – to recruit the central-visual system in order to speed up comprehension of spoken language. Occipital cortex, indeed, responds to auditory stimulation, given the negative values of percent signal change, but appears rather to be “actively suppressed” during attempts to understand ultra-fast speech. Against this background, the bottleneck within the frontal language network referred to above should represent the upper limit of spoken language understanding. Blind subjects might be able to circumvent these constraints, based upon the recruitment of an additional timing mechanism bound to interactions between pulvinar, auditory/visual cortex, and SMA.
Limitations of the study
The present study did not find onset of vision loss to pose major constraints upon ultra-fast speech perception capabilities. However, larger well-documented subject groups are required to further corroborate these findings and to identify other clinical factors, such as disease duration, with a potential impact upon the recruitment of central-visual structures during auditory language comprehension. Furthermore, intra-individual long-term studies are needed to track the time course of the cerebral reorganization processes associated with the acquisition of ultra-fast speech perception skills and to determine to what extent vision loss represents a necessary precondition for this capacity. In order to further delineate any differential task-dependent cross-modal reorganization patterns in subjects with early and late vision loss, a larger sample of early-blind individuals has to be recruited.
Besides the more or less expected responses of perisylvian “language zones”, hemodynamic activation of right-hemispheric V1, contralateral FG, and bilateral Pv was found to covary with ultra-fast speech comprehension capabilities. (i) FG, an area known to be engaged in phonological processing, appears to contribute to the extraction/representation of segmental information of the acoustic speech signal and, thus, to “extend” the left perisylvian network of spoken language processing. (ii) By contrast, right V1 might support, in concert with Pv and auditory cortex, the encoding of early suprasegmental aspects of the speech signal, feeding trigger signals derived from these data structures via left SMA into the perisylvian speech/language network and thereby facilitating the consolidation of linguistic information. This model assumes that extant structures and pathways of the central-visual system, disconnected from modality-specific afferent input, are able to enhance behavioral performance within other domains via cross-modal pathways.
This study was supported by the German Research Foundation (DFG; SFB 550/B1, AC 55 09/01) and the Hertie Institute for Clinical Brain Research, Tübingen. The authors would like to thank Anja Moos and Jürgen Trouvain (Saarland University, Saarbrücken, Germany) for access to their stimulus corpus, for assistance in recruiting blind subjects, and for helpful discussions. Furthermore, we would like to thank Maike Borutta for excellent technical assistance.
- Hollins M: Perceptual abilities of blind people. Understanding Blindness: An Interrogative Approach. Edited by: Hollins M. 1989, Hillsdale, NJ: Lawrence Erlbaum Associates
- Niemeyer W, Starlinger I: Do the blind hear better? Investigations on auditory processing in congenital or early acquired blindness. II. Central functions. Audiology. 1981, 20: 510-515. 10.3109/00206098109072719.
- Röder B, Rösler F, Hennighausen E, Näcker F: Event-related potentials during auditory and somatosensory discrimination in sighted and blind human subjects. Cogn Brain Res. 1996, 4: 77-93.
- Röder B, Teder-Salejarvi W, Sterr A, Rösler F, Hillyard SA, Neville HJ: Improved auditory spatial tuning in blind humans. Nature. 1999, 400: 162-166. 10.1038/22106.
- Gougoux F, Lepore F, Lassonde M, Voss P, Zatorre RJ, Belin P: Pitch discrimination in the early blind. Nature. 2004, 430: 309.
- Bull R, Rathborn H, Clifford BR: The voice-recognition accuracy of blind listeners. Perception. 1983, 12: 223-226. 10.1068/p120223.
- Büchel C, Price C, Frackowiak RSJ, Friston K: Different activation patterns in the visual cortex of late and congenitally blind subjects. Brain. 1998, 121: 409-419. 10.1093/brain/121.3.409.
- Gizewski ER, Gasser T, de Greiff A, Boehm A, Forsting M: Cross-modal plasticity for sensory and motor activation patterns in blind subjects. NeuroImage. 2003, 19: 968-975. 10.1016/S1053-8119(03)00114-9.
- Sadato N, Pascual-Leone A, Grafman J, Deiber MP, Ibanez V, Hallett M: Neural networks for Braille reading by the blind. Brain. 1998, 121: 1213-1229. 10.1093/brain/121.7.1213.
- Sadato N: How the blind “see” Braille: Lessons from functional magnetic resonance imaging. Neuroscientist. 2005, 11: 577-582. 10.1177/1073858405277314.
- Burton H, McLaren DG, Sinclair RJ: Reading embossed capital letters: An fMRI study in blind and sighted individuals. Hum Brain Mapp. 2006, 27: 325-339. 10.1002/hbm.20188.
- Poirier C, Collignon O, Scheiber C, Renier L, Vanlierde A, Tranduy D, Veraart C, De Volder AG: Auditory motion perception activates visual motion areas in early blind subjects. NeuroImage. 2006, 31: 279-285. 10.1016/j.neuroimage.2005.11.036.
- Röder B, Stock O, Bien S, Neville H, Rösler F: Speech processing activates visual cortex in congenitally blind humans. Eur J Neurosci. 2002, 16: 930-936. 10.1046/j.1460-9568.2002.02147.x.
- Amedi A, Raz N, Pianka P, Malach R, Zohary E: Early ‘visual’ cortex activation correlates with superior verbal memory performance in the blind. Nat Neurosci. 2003, 6: 758-766. 10.1038/nn1072.
- Lambert S, Sampaio E, Mauss Y, Schreiber C: Blindness and brain plasticity: Contribution of mental imagery? An fMRI study. Cogn Brain Res. 2004, 20: 1-11. 10.1016/j.cogbrainres.2003.12.012.
- Raz N, Amedi A, Zohary E: V1 activation in congenitally blind humans is associated with episodic retrieval. Cereb Cortex. 2005, 15: 1459-1468.
- Moos A, Trouvain J: Comprehension of ultra-fast speech – blind vs. “normally hearing” persons. Proceedings of the 16th International Congress of Phonetic Sciences Volume 1. Edited by: Trouvain J, Barry WJ. 2007, Saarbrücken: University of Saarbrücken, 677-680.
- Nishimoto T, Sako S, Sagayama S, Ohshima K, Oda K, Watanabe T: Effect of learning on listening to ultra-fast synthesized speech. Conf Proc IEEE Eng Med Biol Soc. 2006, 1: 5691-5694.
- Hertrich I, Dietrich S, Moos A, Trouvain J, Ackermann H: Enhanced speech perception capabilities in a blind listener are associated with activation of fusiform gyrus and primary visual cortex. Neurocase. 2009, 15: 163-170. 10.1080/13554790802709054.
- Cattaneo Z, Vecchi T: Blind Vision. The Neuroscience of Visual Impairment. 2011, Cambridge, MA: MIT Press
- Cohen LG, Weeks RA, Sadato N, Celnik P, Ishii K, Hallett M: Period of susceptibility for cross-modal plasticity in the blind. Ann Neurol. 1999, 45: 451-460. 10.1002/1531-8249(199904)45:4<451::AID-ANA6>3.0.CO;2-B.
- Scott SK, Blank CC, Rosen S, Wise RJS: Identification of a pathway for intelligible speech in the left temporal lobe. Brain. 2000, 123: 2400-2406. 10.1093/brain/123.12.2400.
- Poldrack RA, Temple E, Protopapas A, Nagarajan S, Tallal P, Merzenich M, Gabrieli JDE: Relations between the neural bases of dynamic auditory processing and phonological processing: Evidence from fMRI. J Cogn Neurosci. 2001, 13: 687-697. 10.1162/089892901750363235.
- Mathiak K, Hertrich I, Grodd W, Ackermann H: Cerebellum and speech perception: A functional magnetic resonance imaging study. J Cogn Neurosci. 2002, 14: 902-912. 10.1162/089892902760191126.
- Haxby JV, Grady CL, Horwitz B, Ungerleider LG, Mishkin M, Carson RE, Herscovitch P, Schapiro MB, Rapoport SI: Dissociation of object and spatial visual processing pathways in human extrastriate cortex. Proc Natl Acad Sci U S A. 1991, 88: 1621-1625. 10.1073/pnas.88.5.1621.
- Cone NE, Burman DD, Bitan T, Bolger DJ, Booth JR: Developmental changes in brain regions involved in phonological and orthographic processing during spoken language processing. NeuroImage. 2008, 41: 623-635. 10.1016/j.neuroimage.2008.02.055.
- McCandliss BD, Cohen L, Dehaene S: The visual word form area: Expertise for reading in the fusiform gyrus. Trends Cogn Sci. 2003, 7: 293-299. 10.1016/S1364-6613(03)00134-7.
- Vigneau M, Jobard G, Mazoyer B, Tzourio-Mazoyer N: Word and non-word reading: What role for the Visual Word Form Area?. NeuroImage. 2005, 27: 694-705. 10.1016/j.neuroimage.2005.04.038.
- Cao F, Bitan T, Booth JR: Effective brain connectivity in children with reading difficulties during phonological processing. Brain Lang. 2008, 107: 91-101. 10.1016/j.bandl.2007.12.009.
- Burton MW, Small SL, Blumstein SE: The role of segmentation in phonological processing: An fMRI investigation. J Cogn Neurosci. 2000, 12: 679-690. 10.1162/089892900562309.
- Rauschecker JP, Scott SK: Maps and streams in the auditory cortex: Nonhuman primates illuminate human speech processing. Nat Neurosci. 2009, 12: 718-724. 10.1038/nn.2331.
- Liebenthal E, Binder JR, Spitzer SM, Possing ET, Medler DA: Neural substrates of phonemic perception. Cereb Cortex. 2005, 15: 1621-1631. 10.1093/cercor/bhi040.
- Hickok G: The functional neuroanatomy of language. Phys Life Rev. 2009, 6: 121-143. 10.1016/j.plrev.2009.06.001.
- Hickok G, Poeppel D: Towards a functional neuroanatomy of speech perception. Trends Cogn Sci. 2000, 4: 131-138. 10.1016/S1364-6613(00)01463-7.
- Büchel C: Cortical hierarchy turned on its head. Nat Neurosci. 2003, 6: 657-658. 10.1038/nn0703-657.
- Sadato N, Okada T, Honda M, Yonekura Y: Critical period for cross-modal plasticity in blind humans: A functional MRI study. NeuroImage. 2002, 16: 389-400. 10.1006/nimg.2002.1111.
- Wan CY, Wood AG, Reutens DC, Wilson SJ: Early but not late-blindness leads to enhanced auditory perception. Neuropsychologia. 2010, 48: 344-348. 10.1016/j.neuropsychologia.2009.08.016.
- Gougoux F, Zatorre RJ, Lassonde M, Voss P, Lepore F: A functional neuroimaging study of sound localization: Visual cortex activity predicts performance in early-blind individuals. PLoS Biol. 2005, 3: e27. 10.1371/journal.pbio.0030027.
- Stilla R, Hanna R, Hu X, Mariola E, Deshpande G, Sathian K: Neural processing underlying tactile microspatial discrimination in the blind: A functional magnetic resonance imaging study. J Vis. 2008, 8: 1-19.
- Burton H, Snyder AZ, Conturo TE, Akbudak E, Ollinger JM, Raichle ME: Adaptive changes in early and late blind: An fMRI study of Braille reading. J Neurophysiol. 2002, 87: 589-607.
- Amedi A, Floel A, Knecht S, Zohary E, Cohen LG: Transcranial magnetic stimulation of the occipital pole interferes with verbal processing in blind subjects. Nat Neurosci. 2004, 7: 1266-1270. 10.1038/nn1328.
- Burton H, Snyder AZ, Diamond JB, Raichle ME: Adaptive changes in early and late blind: An fMRI study of verb generation to heard nouns. J Neurophysiol. 2002, 88: 3359-3371. 10.1152/jn.00129.2002.
- Liberman AM: Speech: A Special Code. 1996, Cambridge, MA: MIT Press
- Hickok G, Poeppel D: The cortical organization of speech processing. Nat Rev Neurosci. 2007, 8: 393-402. 10.1038/nrn2113.
- Zaehle T, Geiser E, Alter K, Jäncke L, Meyer M: Segmental processing in the human auditory dorsal stream. Brain Res. 2008, 1220: 179-190.
- Fletcher J: The prosody of speech: Timing and rhythm. The Handbook of Phonetic Sciences, 2nd edition. Edited by: Hardcastle WJ, Laver J, Gibbon FE. 2010, Oxford: Wiley-Blackwell, 523-602.
- Beckman ME, Venditti JJ: Tone and intonation. The Handbook of Phonetic Sciences, 2nd edition. Edited by: Hardcastle WJ, Laver J, Gibbon FE. 2010, Oxford: Wiley-Blackwell, 603-652.
- Poeppel D: The analysis of speech in different temporal integration windows: Cerebral lateralization as ‘asymmetric sampling in time’. Speech Commun. 2003, 41: 245-255. 10.1016/S0167-6393(02)00107-3.
- Hertrich I, Dietrich S, Trouvain J, Moos A, Ackermann H: Magnetic brain activity phase-locked to the envelope, the syllable onsets, and the fundamental frequency of a perceived speech signal. Psychophysiology. 2012, 49: 322-334. 10.1111/j.1469-8986.2011.01314.x.
- Abrams DA, Nicol T, Zecker S, Kraus N: Right-hemisphere auditory cortex is dominant for coding syllable patterns in speech. J Neurosci. 2008, 28: 3958-3965. 10.1523/JNEUROSCI.0187-08.2008.
- Ahissar E, Nagarajan S, Ahissar M, Protopapas A, Mahncke H, Merzenich MM: Speech comprehension is correlated with temporal response patterns recorded from auditory cortex. Proc Natl Acad Sci U S A. 2001, 98: 13367-13372. 10.1073/pnas.201400998.
- Stevens AA, Snodgrass M, Schwartz D, Weaver K: Preparatory activity in occipital cortex in early blind humans predicts auditory perceptual performance. J Neurosci. 2007, 27: 10734-10741. 10.1523/JNEUROSCI.1669-07.2007.
- Stevens AA, Weaver K: Auditory perceptual consolidation in early-onset blindness. Neuropsychologia. 2005, 43: 1901-1910. 10.1016/j.neuropsychologia.2005.03.007.
- Riecker A, Mathiak K, Wildgruber D, Erb M, Hertrich I, Grodd W, Ackermann H: fMRI reveals two distinct cerebral networks subserving speech motor control. Neurology. 2005, 64: 700-706. 10.1212/01.WNL.0000152156.90779.89.
- Brendel B, Hertrich I, Erb M, Lindner A, Riecker A, Grodd W, Ackermann H: The contribution of mesiofrontal cortex to the preparation and execution of repetitive syllable productions: An fMRI study. NeuroImage. 2010, 50: 1219-1230. 10.1016/j.neuroimage.2010.01.039.
- Ziegler W, Kilian B, Deger K: The role of the left mesial frontal cortex in fluent speech: Evidence from a case of left supplementary motor area hemorrhage. Neuropsychologia. 1997, 35: 1197-1208. 10.1016/S0028-3932(97)00040-7.
- Paz R, Natan C, Boraud T, Bergman H, Vaadia E: Emerging patterns of neuronal responses in supplementary and primary motor areas during sensorimotor adaptation. J Neurosci. 2005, 25: 10941-10951. 10.1523/JNEUROSCI.0164-05.2005.
- Rubia K, Smith A: The neural correlates of cognitive time management: A review. Acta Neurobiol Exp. 2004, 64: 329-340.
- Chung GH, Han YM, Jeong SH, Jack CR: Functional heterogeneity of the supplementary motor area. Am J Neuroradiol. 2005, 26: 1819-1823.
- Geiser E, Zaehle T, Jäncke L, Meyer M: The neural correlate of speech rhythm as evidenced by metrical speech processing. J Cogn Neurosci. 2008, 20: 541-552. 10.1162/jocn.2008.20029.
- Smith A, Taylor E, Lidzba K, Rubia K: A right hemispheric frontocerebellar network for time discrimination of several hundreds of milliseconds. NeuroImage. 2003, 20: 344-350. 10.1016/S1053-8119(03)00337-9.
- Schirmer A, Alter K, Kotz SA, Friederici AD: Lateralization of prosody during language production: A lesion study. Brain Lang. 2001, 76: 1-17. 10.1006/brln.2000.2381.
- Schirmer A: Timing speech: A review of lesion and neuroimaging findings. Cogn Brain Res. 2004, 21: 269-287. 10.1016/j.cogbrainres.2004.04.003.
- Baddeley A: Working memory and language: An overview. J Commun Disord. 2003, 36: 189-208. 10.1016/S0021-9924(03)00019-4.
- Lyon DC, Jain N, Kaas JH: The visual pulvinar in tree shrews II. Projections of four nuclei to areas of visual cortex. J Comp Neurol. 2003, 467: 607-627. 10.1002/cne.10940.
- Casanova C: The visual functions of the pulvinar. The Visual Neurosciences. Volume 1. Edited by: Chalupa LM, Werner JS. 2004, Cambridge, MA: MIT Press, 592-608.
- Bernstein LE, Auer ET Jr, Moore JK: Audiovisual speech binding: Convergence or association?. The Handbook of Multisensory Processes. Edited by: Calvert G, Spence C, Stein BE. 2004, Cambridge, MA: MIT Press, 203-223.
- Burr D, Alais D: Combining visual and auditory information. Prog Brain Res. 2006, 155: 243-258.
- Foxe JJ, Schroeder CE: The case for feedforward multisensory convergence during early cortical processing. Neuroreport. 2005, 16: 419-423. 10.1097/00001756-200504040-00001.
- Schroeder CE, Foxe JJ: Multisensory contributions to low-level, unisensory processing. Curr Opin Neurobiol. 2005, 15: 454-458. 10.1016/j.conb.2005.06.008.
- Schroeder CE, Smiley J, Fu KG, McGinnis T, O’Connell MN, Hackett TA: Anatomical mechanisms and functional implications of multisensory convergence in early cortical processing. Int J Psychophysiol. 2003, 50: 5-17. 10.1016/S0167-8760(03)00120-X.
- Bolognini N, Senna I, Maravita A, Pascual-Leone A, Merabet LB: Auditory enhancement of visual phosphene perception: The effect of temporal and spatial factors and of stimulus intensity. Neurosci Lett. 2010, 477: 109-114. 10.1016/j.neulet.2010.04.044.
- Schmithorst VJ, Holland SK, Plante E: Diffusion tensor imaging reveals white matter microstructure correlations with auditory processing ability. Ear Hear. 2011, 32: 156-167. 10.1097/AUD.0b013e3181f7a481.
- Adank P, Devlin JT: On-line plasticity in spoken sentence comprehension: Adapting to time-compressed speech. NeuroImage. 2010, 49: 1124-1132. 10.1016/j.neuroimage.2009.07.032.
- Nourski KV, Reale RA, Oya H, Kawasaki H, Kovach CK, Chen H, Howard MA, Brugge JF: Temporal envelope of time-compressed speech represented in the human auditory cortex. J Neurosci. 2009, 29: 15564-15574. 10.1523/JNEUROSCI.3065-09.2009.
- Hertrich I, Dietrich S, Ackermann H: Tracking the speech signal: Time-locked brain activity during perception of ultra-fast and moderately fast speech in blind and in sighted listeners. Brain Lang. 2013, 124: 9-12. 10.1016/j.bandl.2012.10.006.
- Vagharchakian L, Dehaene-Lambertz G, Pallier C, Dehaene S: A temporal bottleneck in the language comprehension network. J Neurosci. 2012, 32: 9089-9102. 10.1523/JNEUROSCI.5685-11.2012.
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.