It has long been claimed that the effect of inverting faces from their canonical upright orientation constitutes a diagnostic marker of the processing differences between faces and other seemingly complex, mono-oriented objects. In his seminal paper, Yin showed that while most objects (houses, airplanes, dogs, etc.) are somewhat harder to recognize upside down than right-side up, face recognition is far more drastically impaired by stimulus inversion. This disproportionate inversion effect for faces, termed the face-inversion effect (FIE), has not only been replicated in numerous behavioral studies [3, 4] but has also been linked to spatiotemporal brain mechanisms revealed by electrophysiological and brain-imaging studies (e.g., [5, 6]). Nevertheless, the putative mechanisms underlying the FIE remain a matter of considerable ongoing controversy.
More specifically, two prevailing but diverging hypotheses (qualitative vs. quantitative) have been proposed to account for the performance decrement caused by face inversion. The qualitative or dual-process view posits that qualitatively distinct processing modes are used for upright and inverted faces: a configural and holistic processing mode serves as the default for upright faces, while a part-based processing mode operates when faces are inverted [3, 4, 7–14]. Under this view, the perceptual encoding and memory representation of upright faces rely in some special way on configural information (i.e., spatial relations among facial features) and/or holistic information (i.e., faces perceived as an integrated, indecomposable whole), and to a lesser extent on face parts (e.g., isolated features such as the eyes, nose, and mouth). Numerous behavioral studies have consistently demonstrated that turning faces upside down dramatically disrupts the processing of configural information while leaving local feature processing intact [9, 11]. In contrast, the quantitative hypothesis holds that inversion does not cause a shift from one type of processing to another but rather reflects a quantitative difference in processing facial information, whether configural, featural [15–18], or both. For example, Sekuler et al. found that the same discriminative regions, namely the eyes and eyebrows, are used to process upright and inverted faces. Under the quantitative view, upright and inverted faces are processed in a similar fashion, albeit less effectively in the upside-down orientation.
One way to address whether face inversion causes a qualitative or a quantitative change in processing mode has been to examine the decay of performance as faces are gradually rotated from upright to upside down. Some findings favor the qualitative view by showing a steeper decline of configural processing at rotations of approximately 90° to 120° [20–25]. Studies isolating configural face processing from part-based contributions have shown that while rotation had a linear effect, or no effect at all, on featural processing, configural processing fell off in a curvilinear fashion. For example, Stürzel and Spillmann used the method of limits to determine at which angle of rotation Thatcherized faces lose their grotesque appearance. The shift in perception occurred somewhere between 97° (normal to grotesque) and 118° (grotesque to normal). Similarly, in a series of experiments, Murray et al. found a steeper reduction in the perceived bizarreness of Thatcherized faces beyond 90°. This held only for Thatcherized faces (i.e., spatial-relational distortions), whereas bizarreness ratings of component-distorted faces (i.e., whitened eyes and blackened teeth) increased almost linearly with orientation. In addition, findings from a sequential matching task indicated that while featural changes were detected accurately at all rotations, errors in detecting configural changes varied with the angle of rotation, peaking at intermediate angles (90°–120°). More recent studies have reported a similar range of orientation tuning for configural processing using pairs of overlapping transparent faces in upright and misoriented views, Mooney faces, or aligned and misaligned composite faces.
However, other studies support a quantitative effect of inversion by demonstrating a linear relationship between participants' performance and rotation [27–30], consistent with the idea that rotation taps a single, common process. Valentine and Bruce proposed that mental rotation could be responsible for the systematic detrimental effect of orientation on face processing, as is the case for several other objects. According to these authors, misoriented faces must first be realigned to upright (e.g., via mental rotation) before entering the face identification system.
Brain-imaging studies using functional magnetic resonance imaging (fMRI) have mainly investigated how face inversion modulates activity in the cortical face network. This network includes a circumscribed region in the lateral fusiform gyrus known as the fusiform face area (FFA), the superior temporal sulcus (STS), and the occipital face area (OFA) [34, 35]. Reduced activity in the FFA [6, 36–38], STS [6, 37, 39], and OFA has been reported for inverted as compared to upright faces [but see 40–43]. More specifically, among the three face-responsive regions, only the modulation of FFA activity by face inversion correlated positively with the behavioral FIE. Decreased activity in face-selective regions has been interpreted as a failure to engage the dedicated mechanisms for processing inverted faces, namely holistic and configural processing. Additional activations in regions known to be involved in processing non-face objects (e.g., the lateral occipital complex, LOC) have also been reported in response to upside-down faces [40, 41, 44], a finding consistent with the dual-processing/qualitative hypothesis. It has been proposed that the recruitment of additional resources from the object-processing system when faces are inverted may reflect a switch in processing strategy, such as a change from a holistic to a part-based processing mode [41, 45].
Electrophysiological studies in humans have shown that face inversion affects the latency and/or amplitude of scalp-recorded event-related potentials (ERPs) sensitive to face perception [5, 46–55]. However, two debates about these ERP components persist. The first concerns which ERP component is the electrophysiological correlate of face processing and is thus specifically affected by face inversion. Early studies identified a positive component peaking around 160 to 180 ms over central scalp sites (the vertex positive potential, VPP) that was larger in response to faces than to other visual objects and peaked about 10 ms later for upside-down than for upright faces [53, 56]. More recent studies revealed an occipito-temporal negative potential around 170 ms (the N170) that has been linked to the early stages of face encoding. As with the VPP, several scalp ERP studies [49, 55] have reported a delayed N170 peak latency for inverted faces, often accompanied by an amplitude enhancement [5, 47, 51, 54]. However, some authors have shown that face inversion has an earlier effect (around 100 ms), modulating a posterior positive ERP component known as the P1 [50, 51, 57] and its magnetic counterpart, the M1. These latter results suggest that the P1 may be the earliest ERP component reflecting configural encoding of faces. However, a recent review of the electrophysiological literature provides strong arguments in favor of the specificity of the N170 FIE. More importantly, a recent study clearly demonstrated that the N170 FIE is functionally tied to the behavioral FIE, by showing that the effect of face rotation on the N170 correlated significantly with behavioral rotation effects, whereas no such relationship was found between rotation effects on the P1 and behavioral measures.
The second debate concerns the functional significance of the face inversion effect on the N170, which has been interpreted in different ways. For some authors, the amplitude and/or latency enhancement of the N170 reflects the difficulty of processing configural and holistic information when faces are inverted, and also when facial features are scrambled, when a face feature is removed or masked, and when the visibility of faces is reduced by adding visual noise [62, 63]. Another interpretation is that the N170 amplitude increase for inverted faces results from the recruitment of additional processing resources in object perceptual systems, a hypothesis supported by some fMRI evidence [40, 41]. Finally, given that isolated features, the eyes in particular, evoke a larger N170 than a whole face, Bentin et al. and Itier et al. proposed that the N170 amplitude increase for inverted faces reflects processing of the eye region, which would support the qualitative account.
In the present study, we recorded ERPs while participants viewed face and house images parametrically rotated away from the upright orientation, in order to determine whether the N170 FIE reflects a quantitative and/or a qualitative change in face processing mode. This design extends the parametric approach used in previous behavioral studies [21, 22, 28] and goes a step further in documenting the electrical brain responses that reflect the putative underlying processing mechanisms. By including intermediate levels of rotation, it also overcomes the limitations of previous ERP investigations, which were often restricted to upright and inverted orientations [59, 62]. Jacques and Rossion used a stimulus manipulation similar to ours; however, their goal was to relate P1 and N170 measures to participants' behavioral performance in a face-matching task, whereas the main purpose of the present study was to characterize the pattern of ERP responses across orientations and to compare these results with those for house images. Our guiding hypotheses were as follows: if the FIE reflects a qualitative shift in processing mode (qualitative hypothesis), then changes in the amplitude and latency of face-sensitive ERP components should show a discontinuity as faces are rotated away from upright. By contrast, if the FIE results from a general difficulty in processing configural and/or featural facial information (quantitative hypothesis), one would expect a roughly linear increase in the amplitude and latency of these components with face rotation.
However, one cannot unequivocally disentangle the quantitative and qualitative accounts based solely on the linear or nonlinear pattern of ERP changes with face rotation. A nonlinear effect of rotation may simply indicate that the process(es) involved operate nonlinearly, rather than reflecting a difference in processing mode. Therefore, to place tighter constraints on the hypotheses outlined above, we performed topographical and dipole source analyses, which provide insight into the neuroanatomical loci of face rotation effects on scalp ERPs. Accordingly, if a discontinuity in the face rotation functions reflects a qualitative difference in processing mode, one would expect topographical changes in the ERP components sensitive to face rotation, reflecting the involvement of different neural sources for the two face orientations. Alternatively, the quantitative hypothesis predicts a complete spatial overlap of the neural sources involved in processing upright and inverted faces, whose activity should increase incrementally as face orientation departs from upright.
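To make the contrast between the two predicted rotation functions concrete, the linear versus nonlinear alternatives can be framed as a simple model comparison. The sketch below is purely illustrative and is not the analysis reported in this study: the rotation angles and N170 latencies are invented, and polynomial fits stand in for whatever curve-fitting procedure an actual analysis would use.

```python
# Hypothetical sketch: compare a linear vs. quadratic fit of N170 peak
# latency against rotation angle. A markedly better quadratic fit would
# indicate a nonlinear rotation function; comparable fits would favor
# the linear (quantitative) prediction. All numbers are made up.
import numpy as np

angles = np.array([0, 30, 60, 90, 120, 150, 180], dtype=float)   # degrees
# Simulated latencies (ms): roughly flat up to ~90°, then a steeper rise,
# mimicking the curvilinear pattern reported for configural processing.
latency = np.array([160.0, 161.0, 163.0, 168.0, 178.0, 186.0, 192.0])

def sse(degree):
    """Sum of squared residuals for a polynomial fit of the given degree."""
    coeffs = np.polyfit(angles, latency, degree)
    predicted = np.polyval(coeffs, angles)
    return float(np.sum((latency - predicted) ** 2))

linear_sse = sse(1)
quad_sse = sse(2)
# If adding the quadratic term substantially reduces the residual error,
# the rotation function is better described as nonlinear.
print(f"linear SSE = {linear_sse:.1f}, quadratic SSE = {quad_sse:.1f}")
```

In practice one would penalize the extra parameter (e.g., with an F-test or AIC) rather than compare raw residuals, but the logic, testing whether a departure from linearity is warranted by the data, is the same.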