Participants
Fifteen Deaf native signers (≥ 85 dB hearing level (HL) in each ear, except for one participant who had a hearing loss of ≥ 70 dB HL in the left ear) participated in the experiment. Three participants had to be excluded from further analyses because they did not reach the criterion of at least 60% correct responses in all experimental conditions. An additional participant was excluded because the EEG data set was contaminated by excessive artifacts. Of the analyzed sample (6 female, 5 male; mean age: 28 years, range: 20–40 years), four participants had a “mittlere Reife” (corresponding approximately to an O-level), seven had an “Abitur” (A-level), and one had a university degree. The first three excluded participants had a “mittlere Reife”; the fourth excluded participant did not report his highest degree of education.
None of the participants had any known neurological impairments, and all of them had normal or corrected-to-normal vision. They gave written informed consent before their participation and received monetary compensation. All of the native signers were right-handed according to self-report and the Edinburgh Handedness Inventory. The participants had acquired DGS from birth from their Deaf parents.
The sign language proficiency of the participants was assessed with a DGS comprehension test, the Gebärdensprach-Sinnverständnis Test (GSV) of the ATBG (“Aachener Testverfahren zur Berufseignung von Gehörlosen”; English: “Aachen’s vocational testing for the deaf”). On average, the selected participants scored 87% correct in the DGS comprehension test (SE 3.7%, range: 60%–100%). The study was approved by the ethics committee of the German Society of Psychology (no.: BRBHF 07022008).
Material
A set of 300 experimental sentences was constructed by two Deaf native signers, one Deaf near-native signer of DGS, and one sign language linguist. The sentences were signed by a Deaf native signer of DGS, videotaped, digitized, and presented at the rate of natural signing. Written informed consent for the publication of images was obtained from the signer.
The stimulus set was evaluated by 12 congenitally and profoundly deaf individuals (mean age: 36 years, range: 27–64 years; ≥ 85 dB HL in each ear) who were all native signers of DGS. Upon presentation, they had to judge whether or not each sentence was an appropriate DGS sentence. Sentences with less than 80% agreement among the native signers were discarded. The final stimulus set was based on 46 sentences from which 138 sentences were derived: (a) 46 sentences were correct, (b) 46 sentences were morphosyntactically incorrect, comprising a verb that was incorrectly inflected (incorrect direction of movement), and (c) 46 sentences were semantically incorrect, comprising a selectional restriction violation. For example, sentence (1b) violates the person agreement rule between the subject and the object of the verb via an incorrect movement from neutral space to the first person:
(1a) BOY POINTa GIRL POINTb aNEEDLEb REASON POINTb SLOW SWIM
“the boy needles the girl because she is slowly swimming”
(1b) * BOY POINTa GIRL POINTb cNEEDLE1 REASON POINTb SLOW SWIM
“*the boy needle the girl because she is slowly swimming”
In contrast, sentence (1c) is an example of a selectional restriction violation (semantic violation), since the object-verb relation is not semantically plausible:
(1c) BOY POINTa COAT POINTb aNEEDLEb REASON POINTb SLOW SWIM
“*the boy needles the coat because it is slowly swimming”
All sentences were constructed in a comparable SOV structure up to the critical sign (Figure 4).
Since DGS is a subject-object-verb (SOV) language, the semantically violated sentences became implausible at the verb (e.g., NEEDLE in the example shown in 1c). Thus, the verb is the critical sign to which ERPs were averaged.
Sentences had a mean length of 10 signs (median: 9, range: 7–13 signs) and a mean duration of 10457 ms (median: 10440 ms, range: 5680–14480 ms, SD 1596). Additionally, 74 different filler sentences were presented. Sixty filler sentences were correct; 14 sentences contained different morphosyntactic and semantic violations at varying sentence positions.
The stimulus onset of each sign was defined by a Deaf native signer, a Deaf delayed signer, and a DGS interpreter. Sign languages have rather long transition phases between one sign and the next [52], and it is a matter of debate when exactly a sign starts. According to Liddell and Johnson, a sign begins when the handshape is completed and the hand is held in its correct first location (‘Movement-Hold Model’; [53]). Note, however, that the timing of comprehending a sign varies depending on (a) which signing parameters have to be changed and (b) which signing parameter is linguistically crucial. Therefore, in our paradigm we distinguished two time points which were used as trigger positions (event codes):
1. In sentences with semantic violations, we time-locked to the sign onset – according to the Movement-Hold Model – when handshape and hold were completed (sign onset code I): to judge the semantic appropriateness of the object, the target sign has to be perceived in its entirety.
2. In sentences with morphosyntactic violations, the location change of the sign (note that in sign language, syntax is expressed in space) is more crucial than the target sign itself: while the hand moves to the location at which the next sign begins, the handshape changes and the morphosyntactic violation (incorrect location) is most likely recognized. Therefore, for morphosyntactically violated sentences the trigger position was set to the first detectable handshape change towards the target sign or – if earlier – the change of the lip movement (sign onset code II).
Procedure
The experiment comprised two sessions that were mostly run within one day. In the first session, the participants completed a language history questionnaire and a subtest of the ATBG (“Aachener Testverfahren zur Berufseignung von Gehörlosen”; English: “Aachen’s vocational testing for the deaf”). The ATBG comprises a number of modules testing aspects of memory, attention, spatial imagery, problem solving, general knowledge, arithmetic, and language. We employed only the subtest GSV (“Gebärdensprach-Verständnis-Test”; English: “sign language comprehension test”).
The experimental session consisted of 212 trials and was divided into five blocks separated by short, self-paced breaks. The experiment lasted about 90 minutes. Prior to the experimental blocks, 13 practice sentences were presented (these were not used in the analysis). Instructions were given in DGS. Since signers are familiar with a wide range of variation in DGS within the German signing community, they are extremely tolerant of language variation. For this reason, participants were told to accept only “very well-formed” sentences as “correct”.
Participants were seated in a comfortable chair in front of an LCD monitor. Stimuli were presented on this monitor with a vertical visual angle of 13.12° and a horizontal visual angle of 16.48°. The size of the presented video footage was chosen so that the signing was readily identifiable.
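The reported visual angles follow from the physical size of the footage and the viewing distance via the standard conversion. Since neither the footage dimensions in centimetres nor the viewing distance is restated here, the numbers below are purely illustrative; only the formula itself is standard:

```python
import math

def visual_angle_deg(size_cm: float, distance_cm: float) -> float:
    """Visual angle (degrees) subtended by a stimulus of a given
    size at a given viewing distance."""
    return math.degrees(2.0 * math.atan(size_cm / (2.0 * distance_cm)))

# Illustrative values only: the actual footage width and viewing
# distance are assumptions, not taken from the experiment.
angle = visual_angle_deg(16.0, 70.0)
```

As a rule of thumb, at about 57 cm viewing distance 1 cm on the screen corresponds to roughly 1° of visual angle.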
Please note that the visual angles refer to the complete size of the shown footage; the visual angles within which the relevant signing was presented were thus smaller. In addition, during sign language comprehension, signers fixate primarily on the signer’s face (see results from eye-tracking studies, e.g., [54, 55]).
The different trial types were presented in random order. The first picture of the video, showing the signer in the initial position, was held for 1000 ms so that participants could fixate their eyes on the screen. Six hundred ms after the end of the sentence, a happy and a sad smiley appeared on the screen, and participants were prompted to decide whether or not the sentence had been correct by pressing one of two buttons with their left and right index fingers (which hand indicated correct and incorrect sentences, respectively, was randomized across participants). To start the next trial, the participants had to press one of the response buttons. In the second session, participants’ processing of written German sentences was examined (see [51], [56]).
ERP recording and data analysis
The electroencephalogram (EEG) and the electro-oculogram (EOG) were recorded using Ag/AgCl electrodes. Seventy-four electrodes were mounted according to the international 10/10 system into an elastic cap (Easy Cap; FMS, Herrsching-Breitbrunn, Germany) (see Figure 2). The vertical EOG (VEOG) was recorded from electrodes below each eye against the right earlobe reference. Horizontal eye movements were monitored using electrodes F9 and F10 (bipolar recording defined offline). An averaged right/left earlobe reference was calculated offline. Electrode impedance was kept below 5 kΩ. The electrode signals were amplified using three BrainAmp DC amplifiers (Brain Products GmbH, Gilching, Germany) and digitally stored using the BrainVision Recorder software (Brain Products GmbH, Gilching, Germany). The analog EEG signal was sampled at 5000 Hz, filtered online with a bandpass of 0.1 to 250 Hz, and then downsampled online to 500 Hz for storage on disk. The signal was low-pass filtered offline with a high cutoff at 40 Hz, 12 dB/oct.
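A 12 dB/oct roll-off corresponds to a second-order filter. The offline low-pass step can be sketched with a zero-phase Butterworth filter, as below; this is a generic reconstruction with synthetic data, not the actual filter implementation used in the recording software:

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 500.0  # sampling rate after online downsampling (Hz)

# Second-order Butterworth low-pass: 12 dB/octave roll-off, 40 Hz cutoff.
b, a = butter(N=2, Wn=40.0, btype="low", fs=fs)

# Synthetic single-channel signal: a 10 Hz "EEG" component that should
# pass, plus a 100 Hz "noise" component that should be attenuated.
t = np.arange(0, 2.0, 1.0 / fs)
eeg = np.sin(2 * np.pi * 10 * t)
noise = np.sin(2 * np.pi * 100 * t)
filtered = filtfilt(b, a, eeg + noise)  # zero-phase (forward-backward) filtering
```

Zero-phase filtering avoids shifting ERP component latencies, which is why forward-backward application is common for offline EEG filtering.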
Since language-related ERPs have a rather broad topography, four adjacent electrodes were pooled, resulting in seven electrode clusters for each hemisphere (see Figure 5).
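Pooling four adjacent electrodes into a cluster amounts to averaging their voltage time courses. The cluster membership below is hypothetical (the actual assignment is shown in Figure 5); the sketch only illustrates the operation:

```python
import numpy as np

# Hypothetical cluster membership; the real assignment is in Figure 5.
clusters = {
    "left_frontal": ["F3", "F5", "FC3", "FC5"],
    "right_frontal": ["F4", "F6", "FC4", "FC6"],
}

def pool_clusters(data, channel_names, clusters):
    """Average channels within each cluster.

    data: array of shape (n_channels, n_samples).
    Returns a dict mapping cluster name -> pooled time course.
    """
    idx = {name: i for i, name in enumerate(channel_names)}
    return {
        cname: data[[idx[ch] for ch in chans]].mean(axis=0)
        for cname, chans in clusters.items()
    }

channel_names = ["F3", "F5", "FC3", "FC5", "F4", "F6", "FC4", "FC6"]
data = np.arange(8 * 4, dtype=float).reshape(8, 4)  # toy data
pooled = pool_clusters(data, channel_names, clusters)
```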
The behavioural data (percentage of correct judgements) were analyzed with a repeated-measures ANOVA with the within-participant factor Condition (correct, semantically incorrect, and morphosyntactically incorrect).
Trials with ocular artifacts (with an individually adjusted criterion of a maximum peak-to-peak amplitude between 80 and 120 μV within the time epoch of −100 to 1500 ms), or with artifacts from muscle movements, alpha waves, or drifts (with an individually adjusted criterion of up to 150 μV within the same epoch), were identified and rejected offline.
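A peak-to-peak rejection of this kind can be sketched as follows; the 100 μV threshold is just one value from the individually adjusted 80–120 μV range, and the data are synthetic:

```python
import numpy as np

def reject_epochs(epochs, threshold_uv=100.0):
    """Mark epochs whose peak-to-peak amplitude exceeds the threshold
    on any channel.

    epochs: array of shape (n_epochs, n_channels, n_samples), in microvolts.
    Returns a boolean mask of epochs to KEEP.
    """
    peak_to_peak = epochs.max(axis=2) - epochs.min(axis=2)  # (n_epochs, n_channels)
    return (peak_to_peak <= threshold_uv).all(axis=1)

# Toy example: the second epoch contains a large ocular artifact on channel 0.
rng = np.random.default_rng(0)
epochs = rng.normal(0.0, 5.0, size=(3, 2, 800))  # background-EEG-like noise
epochs[1, 0, 100:200] += 200.0                   # simulated blink
keep = reject_epochs(epochs)
```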
The remaining segments were baseline-corrected with respect to a 100 ms period preceding the onset of the critical sign. Separate averages were calculated for the four conditions: (1a) correct (sign onset code I), (2) semantically incorrect (sign onset code I), (1b) correct (sign onset code II), and (3) morphosyntactically incorrect (sign onset code II), each for the time segment starting 100 ms before and ending 1500 ms after the critical sign.
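Baseline correction and per-condition averaging can be sketched as below; the epoch layout (−100 to 1500 ms at 500 Hz, i.e. 800 samples) follows the recording parameters, while the toy data are invented:

```python
import numpy as np

FS = 500       # sampling rate (Hz)
T_MIN = -0.1   # epoch start relative to critical-sign onset (s)

def baseline_correct(epochs, fs=FS, t_min=T_MIN, baseline=(-0.1, 0.0)):
    """Subtract the mean of the 100 ms pre-onset window from each
    epoch and channel.

    epochs: array of shape (n_trials, n_channels, n_samples).
    """
    i0 = int(round((baseline[0] - t_min) * fs))
    i1 = int(round((baseline[1] - t_min) * fs))
    return epochs - epochs[:, :, i0:i1].mean(axis=2, keepdims=True)

def condition_average(epochs):
    """Average baseline-corrected epochs across trials -> ERP of
    shape (n_channels, n_samples)."""
    return baseline_correct(epochs).mean(axis=0)

# Toy epochs: 10 trials, 4 channels, 800 samples, constant DC offset.
epochs = np.full((10, 4, 800), 7.5)
erp = condition_average(epochs)  # DC offset is removed by the baseline
```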
Based on results from running t-tests and a visual inspection of the data, we ran analyses on the mean voltage of the following time epochs: 550–750 ms (N400) for semantic violations and 400–600 ms (LAN) and 1000–1300 ms (P600) for morphosyntactically violated sentences.
Time epochs were analyzed separately with an ANOVA comprising the repeated-measures factors Condition (correct vs. incorrect), Hemisphere (left vs. right), and Cluster (1–7). Type II sums of squares were calculated. To compensate for violations of the sphericity assumption in multi-channel electroencephalographic data, the Huynh-Feldt correction was applied. Corrected degrees of freedom and corrected p-values, as well as the Huynh-Feldt epsilons (eps), are reported for the F-tests in the results section. Statistically significant effects not involving the factor Condition are not reported. The difference between the incorrect and the correct condition was tested with one-tailed t-tests at each cluster. To correct for unequal variances, the degrees of freedom of the t-tests were adjusted using the Welch algorithm [57]. The open-source statistical programming language R was used for the statistical analyses.
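The Welch correction adjusts the degrees of freedom downward when the two conditions have unequal variances. Although the analyses were run in R, the per-cluster test can be sketched in Python; the mean amplitudes below are invented, and the direction of the one-tailed hypothesis is chosen arbitrarily for illustration:

```python
import numpy as np
from scipy import stats

def welch_t_one_tailed(incorrect, correct):
    """One-tailed Welch t-test (H1: mean(incorrect) > mean(correct)).

    Returns (t, df, p) with Welch-Satterthwaite degrees of freedom.
    """
    x, y = np.asarray(incorrect, float), np.asarray(correct, float)
    nx, ny = len(x), len(y)
    vx, vy = x.var(ddof=1), y.var(ddof=1)
    se2 = vx / nx + vy / ny
    t = (x.mean() - y.mean()) / np.sqrt(se2)
    # Welch-Satterthwaite correction: df <= nx + ny - 2.
    df = se2**2 / ((vx / nx) ** 2 / (nx - 1) + (vy / ny) ** 2 / (ny - 1))
    p = stats.t.sf(t, df)  # upper-tail probability
    return t, df, p

# Invented mean amplitudes (one value per participant) for one cluster:
incorrect = [2.1, 3.4, 2.8, 4.0, 3.1, 2.5, 3.7, 2.9, 3.3, 2.6, 3.0]
correct = [1.0, 1.5, 0.8, 1.9, 1.2, 1.1, 1.6, 0.9, 1.4, 1.3, 1.0]
t, df, p = welch_t_one_tailed(incorrect, correct)
```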
Only trials followed by a correct response were included in the ERP analysis. Participants who made more than 40% errors in at least one condition were excluded from the analysis. As described in the Participants section, three Deaf native signers were excluded due to low performance.