Coding of shape from shading in area V4 of the macaque monkey
© Arcizet et al; licensee BioMed Central Ltd. 2009
Received: 27 February 2009
Accepted: 30 November 2009
Published: 30 November 2009
The shading of an object provides an important cue for recognition, especially for determining its 3D shape. However, the neuronal mechanisms that allow the recovery of 3D shape from shading are poorly understood. The aim of our study was to determine the neuronal basis of the coding of 3D shape from shading in area V4 of the awake macaque monkey.
We recorded the responses of V4 cells to stimuli presented parafoveally while the monkeys fixated a central spot. We used a set of stimuli made of 8 different 3D shapes illuminated from 4 directions (from above, the left, the right and below) and different 2D controls for each stimulus. The results show that V4 neurons present a broad selectivity to 3D shape and illumination direction, but without a preference for a unique illumination direction. However, 3D shape and illumination direction selectivities are correlated, suggesting that V4 neurons can use the direction of illumination contained in the complex patterns of shading on the surface of objects. In addition, a vast majority of V4 neurons (78%) have statistically different responses to the 3D and 2D versions of the stimuli, although responses to 3D stimuli are not systematically stronger than those to 2D controls. However, a hierarchical cluster analysis showed that the different classes of stimuli (3D, 2D controls) are clustered in the response space of the V4 cells, suggesting a coding of 3D stimuli based on the population response. The different illumination directions also tend to be clustered in this space.
Together, these results show that area V4 participates, at the population level, in the coding of complex shape from the shading patterns coming from the illumination of the surface of corrugated objects. Hence V4 provides important information for one of the steps of cortical processing of the 3D aspect of objects in natural light environment.
A fundamental issue of visual perception is to understand how the brain represents the 3D shape of an object from the 2D patterns that project onto the retina. While it is clear that stereopsis and motion parallax are potent sources of 3D information, human observers routinely extract 3D shape from static monocular cues and flawlessly recognize 2D images or drawings in which these features are the only ones available. To achieve this, humans rely upon many factors such as texture gradients, the presence of particular junctions and edges, or the pattern of shadows. In natural situations, variations of illumination direction produce large variations of shading patterns that complicate the recognition of a 3D object. Several studies have revealed deficits in recognizing faces or objects under various shadow conditions [2, 3], deficits in estimating surface curvature based on shading [4–6], or perceptual ambiguities. If a display change is introduced in matching experiments, the recognition of objects, with the exception of human faces, does not appear to depend on the direction of illumination [9, 10]. Thus, humans are able to recognize shapes within a highly variable environment and are able to use 2D pictorial cues, like shading, to form vivid 3D percepts [1, 11, 12]. The question then arises: what neuronal mechanisms underlie such a process of shape recognition?
The precise mechanisms by which the brain extracts the different sources of monocular 3D information and combines them to identify an object remain unknown. In particular, few studies have investigated the question of 3D shape from shading. fMRI studies on humans indicate a participation of both dorsal and ventral pathways [14–16]. More recently, the question was thoroughly addressed by Georgieva et al. (2008), who used additional controls and excluded 3D cues other than shading (edges, vertices). The results of this study underlined the importance of the caudal inferotemporal gyrus and ruled out the intraparietal sulcus as a site for the extraction of 3D shape from shading.
Single-unit studies have suggested that V4 neurons play an important role in shape from shading. For instance, Hanazawa and colleagues [17, 18] showed that V4 neurons are selective to shading orientation with a vertical bias. Furthermore, curvature is critically represented in V4 [19–21] in the form of 'volumetric primitives'. Hence, the representation of curvature in V4 might reflect a necessary processing step of shape from shading before invariance to shading variations is achieved in higher-level regions. The main psychophysical counterpart is that shading is particularly important for the analysis of curved surfaces and, according to Todd, perceptual constancy of objects can be achieved through a curvature-based representation of shapes. We thought it was important to examine further the selectivity of V4 neurons to shapes defined by shading. We expect single-unit studies, which stand at a different level of analysis, to potentially reveal shape-from-shading-related mechanisms in V4. Finally and importantly, it should be stressed that macaque monkeys are a valuable model for the study of 3D shape from shading at the single-cell level, as it has been demonstrated that they can perceive depth from shading cues in behavioral tasks.
The aim of our study was to explore the encoding of 3D shape from shading in area V4 of the awake macaque monkey. The particularity of shape from shading is that shape and illumination are intimately intertwined to create a 3D percept. A light source illuminating the surface of an object that contains irregularities such as hollows and bumps inescapably creates a pattern of dark and light regions that is specific to the shape of the object. If other cues are unavailable, the brain needs to use this pattern of shading to infer the 3D aspect of the surface. We first aimed to test whether V4 cells are selective to 3D shapes defined by illumination that creates different patterns of shading. To assess this, we used a set of 8 different naturalistic 3D shapes illuminated from 4 directions (from below, the left, the right and above). Because the pattern of shading varies markedly when the direction of illumination varies, we computed several indices to check whether the V4 cells responded invariantly to the same 3D shape illuminated from different directions or whether their selectivity was biased towards vertical illumination directions. Finally, we tested the selectivity to 3D shape from shading per se by using 3 different types of 2D controls. These controls share low-level parameters with the 3D stimuli but, because the spatial organization of their shading patterns is changed, they lose their 3D aspect.
Our results show that most individual V4 neurons do not show a strong selectivity to individual 3D shapes defined by shading. We also noticed a weak selectivity to illumination directions, with no preference for vertical axes. Furthermore, V4 neurons do not systematically prefer the 3D version of the stimuli over the 2D controls. However, 3D stimuli and 2D controls could be clearly separated by a cluster analysis of V4 single-cell responses, suggesting that shape from shading is a cue encoded at the population level.
Animals and setup
Two adult rhesus monkeys, one female (monkey T) and one male (monkey Z), weighing 3 and 6 kg respectively, were implanted with head fixation devices (Crist Instruments, Hagerstown, MD). Surgical operations were performed under general anesthesia and sterile conditions. Anesthesia was induced by ketamine (16 mg/kg IM). Maintenance of anesthesia was achieved with a mixture of alphadolone/alphaxolone (Saffan, 15 mg/kg/h IV or slightly more if required). A pain reliever, ketoprofen (Ketofen, 1 mg/kg IM), and systemic antibiotics (extencilline 600000 UI IM) were administered at the beginning of the surgery.
Once monkeys were trained to perform a simple visual fixation task, we performed a second surgery to implant a recording chamber over a 2 cm diameter craniotomy. The surgery was performed under the same conditions, except for an additional injection of methylprednisolone (Solumedrol, 1 mg/kg IM) to prevent brain edema. Although we cleaned within the chamber daily, guide tubes were required because we did not scrape the thickening dura. Animals were sacrificed by an overdose of pentobarbital, and fluorescent dyes were injected to localize the recording sites and confirm the location of the recordings in V4. Histological analyses on both monkeys confirmed that we recorded cells in the anterior part of dorsal V4. An anatomical description of the region of recordings can be found in Arcizet et al. 2008. All animal procedures complied with the guidelines of the European Ethics Committee on Use and Care of Animals.
To perform the task, the animals were seated in a primate chair, with their head restrained. An ISCAN infrared eye-tracking system (120 Hz) monitored eye positions by tracking the corneal reflection of a focused infrared LED through a CCTV camera with a 250-mm lens. The experiments were run using CORTEX software (courtesy of NIMH), which controlled stimulus presentation and data acquisition. Tungsten-in-glass microelectrodes (Thomas Recording, Germany) were used to record extracellular neuronal activity. Action potentials from single units were sorted online (MSD, AlphaOmega, Israel).
Stimuli and protocol
Stimuli consisted of pictures of randomly deformed spheres similar to those used in previous studies. The illumination falling on the concavities and convexities of the spheres produced patterns of shading that made the stimuli look like vivid pictures of realistic 3D objects. We used 8 different distorted spheres (termed 3D shapes). These stimuli were illuminated with a Lambertian light source (with no specular component) coming from 4 different directions (below, right side, left side or above).
We had a total of 96 different stimuli (8 outlines * 4 directions of illumination * 3 contents [3D shapes, Blob, (Random or Posterized)]). The stimuli were gamma corrected on a 21" CRT monitor (Iiyama vision master pro512) placed at 57 cm from the eyes of the monkeys. We adapted stimulus size to eccentricity rather than precisely matching stimuli to measured receptive field (RF). Practically, during the recording sessions, stimuli were chosen among 4 identical but scaled sets of 2, 3, 4 or 5 degrees of visual angle and presented at the center of the receptive field.
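As an illustration of this factorial design, the 96 stimulus conditions can be enumerated with a short sketch (Python; the labels are ours and chosen only for illustration, and the third content slot is counted once because each cell was tested with either Random or Posterized controls):

```python
from itertools import product

# 8 outlines x 4 illumination directions x 3 content classes = 96 stimuli.
# Labels are illustrative, not from the original protocol.
shapes = range(1, 9)
illuminations = ["below", "right", "left", "above"]
contents = ["3D", "Blob", "Random_or_Posterized"]

stimulus_set = list(product(shapes, illuminations, contents))
print(len(stimulus_set))  # 96
```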
The monkeys were first trained to maintain fixation within a 2-degree square window. The monkey had to keep fixating a 0.1-degree gray central spot for a variable delay (400 to 600 ms) before the stimulus was flashed for 250 ms. After the stimulus was turned off, the fixation spot remained on for a variable delay (350 to 400 ms). Only trials completed without breaking fixation were rewarded with a drop of water and kept for off-line analysis.
For each isolated neuron, we first roughly mapped the receptive field with dark, light or colored hand-moved bars. In order to quickly find the RF center, we recorded the neuronal responses to small squares (dark or light) flashed for 25 ms at 36 positions selected pseudo-randomly in a square grid. RF sizes and eccentricities were in agreement with previous studies. Once the RF mapping was achieved, we recorded 5 to 10 trials for each stimulus. Stimuli were presented in pseudo-random order.
We defined two 250 ms epochs, one corresponding to the baseline and the other to response activities of the neurons. The baseline epoch began during the initial fixation period, 400 ms before stimulus onset. The response epoch began 50 ms after stimulus onset. Mean response rates (spikes/s) were computed for both epochs. Baseline rates were generally low (average +/- SD: 6.2 +/- 0.6 spikes/s). Data analysis on response rates with or without subtraction of the baseline activity yielded similar results. Thus, results reported in the paper are from the recorded response rates, without subtraction of the baseline activity.
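For concreteness, the epoch-based rate computation can be sketched as follows (a minimal Python illustration; the function and variable names are ours, and spike times are assumed to be expressed in ms relative to stimulus onset):

```python
def mean_rate(spike_times_ms, epoch_start_ms, epoch_dur_ms=250.0):
    """Mean firing rate (spikes/s) within a 250 ms epoch.

    Spike times are in ms relative to stimulus onset; the baseline epoch
    starts at -400 ms and the response epoch at +50 ms.
    """
    n_spikes = sum(epoch_start_ms <= t < epoch_start_ms + epoch_dur_ms
                   for t in spike_times_ms)
    return n_spikes / (epoch_dur_ms / 1000.0)

spikes = [-380.0, -120.0, 60.0, 90.0, 150.0, 210.0, 400.0]  # toy spike train
baseline = mean_rate(spikes, epoch_start_ms=-400.0)  # 1 spike -> 4 spikes/s
response = mean_rate(spikes, epoch_start_ms=50.0)    # 4 spikes -> 16 spikes/s
```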
All 8 3D shapes have a markedly different aspect because of the presence of surface concavities or convexities. V4 neurons are expected to be selective to the smooth curves enhanced by shading, since previous studies have demonstrated that they are sensitive to contour elements [19, 21]. An interesting question is the influence of illumination direction on 3D shape selectivity. To assess neuronal selectivity to 3D shape and illumination direction, we computed a two-way non-repeated factorial ANOVA with 3D shape and illumination direction (ID) as independent factors. The threshold of significance was fixed at 5%.
For each factor of the ANOVA, we quantified selectivity with the ω2 index:

ω2 = (SSfactor − dffactor × MSerror) / (SStotal + MSerror)

where SS is the sum of squares, MS the mean squares and df the degrees of freedom. This index ranges between 0 and 1; a value of 1 indicates a strong selectivity whereas a value of 0 indicates no selectivity. Neurons were considered to be highly selective to 3D stimuli or to illumination direction when ω2 was above a threshold of 0.10.
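The ω2 computation for the two factors can be sketched with standard tools (a minimal Python/NumPy illustration of the usual ω2 formula for a two-way factorial ANOVA; the array layout and function name are our own assumptions, not the original analysis code):

```python
import numpy as np

def omega_squared(rates):
    """omega-squared selectivity indices from a two-way factorial ANOVA.

    rates: array (n_shapes, n_illuminations, n_trials) of firing rates.
    Each index is (SS_factor - df_factor * MS_error) / (SS_total + MS_error).
    """
    a, b, n = rates.shape
    grand = rates.mean()
    ss_total = ((rates - grand) ** 2).sum()
    ss_shape = b * n * ((rates.mean(axis=(1, 2)) - grand) ** 2).sum()
    ss_illum = a * n * ((rates.mean(axis=(0, 2)) - grand) ** 2).sum()
    ss_cells = n * ((rates.mean(axis=2) - grand) ** 2).sum()
    ms_error = (ss_total - ss_cells) / (a * b * (n - 1))  # within-cell error

    def w2(ss, df):
        return (ss - df * ms_error) / (ss_total + ms_error)

    return w2(ss_shape, a - 1), w2(ss_illum, b - 1)

# A noiseless toy cell modulated only by shape gives w2_shape = 1, w2_illum = 0.
rates = np.zeros((8, 4, 5)) + np.arange(8.0)[:, None, None]
w2_shape, w2_illum = omega_squared(rates)
```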
In addition, we performed a ranking analysis to test how the controls affect the tuning to 3D shapes, and a cluster analysis to evaluate to what extent 3D stimuli could be segregated from controls by the V4 population. We performed the ranking analysis to assess the preservation of selectivity to the 3D shape stimuli across modifications of the content. For each neuron, responses to each 3D stimulus were normalized and ranked in descending order (the best 3D stimulus had the rank of 1). Then, for the same neuron, the obtained rank was used as a reference to rank the responses to the corresponding types of control stimuli (Blobs, Random and Posterized). The procedure was repeated for each neuron and then, for each class of stimuli, we averaged the responses for each rank across all neurons. Since the reference ranking comes from the 3D stimuli, a flat ranking curve for a given control class would mean that the cell population's preference for that control and for the 3D shapes is markedly different. Conversely, a superimposed or parallel curve means that the shape preference is preserved across stimulus classes.
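The ranking procedure can be sketched as follows (Python/NumPy; the array names and shapes are our own illustrative assumptions):

```python
import numpy as np

def rank_curves(resp_3d, resp_ctrl):
    """Average rank curves for 3D stimuli and one control class.

    resp_3d, resp_ctrl: arrays (n_cells, n_stimuli) of normalized responses.
    Each cell's 3D responses are sorted in descending order (rank 1 = best
    3D stimulus) and the SAME per-cell ordering is applied to the control
    responses before averaging across cells.
    """
    order = np.argsort(-resp_3d, axis=1)              # per-cell 3D ranking
    rows = np.arange(resp_3d.shape[0])[:, None]
    curve_3d = resp_3d[rows, order].mean(axis=0)      # decreasing by design
    curve_ctrl = resp_ctrl[rows, order].mean(axis=0)  # flat if preference lost
    return curve_3d, curve_ctrl
```

A control curve that parallels the 3D curve indicates a preserved shape preference; a flat control curve indicates that the 3D ranking does not transfer to that control class.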
Finally, we used a hierarchical cluster analysis to obtain a visual representation of the neuronal responses at the population level. The purpose of cluster analysis is to gather the stimuli into successively larger clusters, using a measure of distance between neuronal responses. Results are illustrated with a hierarchical tree or dendrogram. We used Ward's linkage method on Euclidean distances obtained from standardized responses (Statistica software) to perform the analysis. This method uses an analysis-of-variance approach to evaluate the distances between clusters: it chooses the successive clustering steps so as to minimize the increase in the error sum of squares at each level.
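With standard tools, the clustering step can be reproduced along these lines (Python with SciPy rather than Statistica; the synthetic response matrix and all names are our own illustrative assumptions):

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.stats import zscore

# Synthetic population matrix: rows = stimuli, columns = cells.
rng = np.random.default_rng(0)
pop = np.vstack([rng.normal(0.0, 1.0, (16, 50)),   # e.g. 16 "3D" stimuli
                 rng.normal(3.0, 1.0, (16, 50))])  # e.g. 16 "control" stimuli

pop_z = zscore(pop, axis=0)                        # standardize each cell
tree = linkage(pop_z, method="ward", metric="euclidean")
labels = fcluster(tree, t=2, criterion="maxclust")
# With well-separated response patterns, the first split of the dendrogram
# segregates the two stimulus classes.
```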
We recorded 124 V4 neurons in the right hemispheres of two monkeys (Monkey T, 93; Monkey Z, 31). A vast majority of neurons (119/124 or 96%) showed a mean firing rate that increased significantly during stimulus presentation (T-test, p < 0.05). All the cells were tested with 3D shapes and Blobs, but among the 119 responsive cells (Monkey T, 90; Monkey Z, 29), only 46 cells were tested with the Random controls, and 73 cells with the Posterized controls (all cells in this last sample were tested with only four of the eight shapes). We used the population of 119 responsive cells for subsequent analysis.
A better quantitative measure of the tuning is provided by the ω2 index from the ANOVA (see Methods). We computed this index for each neuron for both the shape of the stimuli (ω2S) and the illumination of the stimuli (ω2ID). Figure 5B shows the scatter plot of both indices. The median values of ω2S and ω2ID were 0.028 and 0.036 respectively, which, along with the SSI and ISI indices, confirms that the tuning for both features was weak. Only 23 cells (19%) had a ω2S above 0.10, the threshold value above which a neuron is considered selective. Similarly, 23 cells had a ω2ID above 0.10, but only 6 had both indices above threshold. This is reinforced in the scatter plot of Figure 5B, which shows an absence of correlation between the two indices (Pearson correlation; r = 0.005, p = 0.960). However, when we focused on the pool of neurons selective to 3D shape (according to the threshold of 0.10, right side of the dashed line perpendicular to the x-axis), we observed a weak negative correlation between tuning to 3D shape and illumination (Pearson correlation; r = -0.421, p = 0.0007). This could mean that shape selectivity is associated with a tendency towards invariance to the direction of illumination in a few cells, like the one in Figure 4A. Furthermore, we observed no correlation between tuning to shape and direction of illumination for the pool of illumination-direction-selective cells (Pearson correlation; r = 0.039, p = 0.826, upper side of the dashed line parallel to the x-axis).
In the first part of the analysis, we characterized 2 subpopulations of neurons that were selective to 3D shapes or to illumination direction according to their ω2 indices (ω2 > 0.10). In these subpopulations, a vast majority of cells are also 'content' selective (20/23 for shape-selective cells and 20/23 for illumination-selective cells). Following our definition of 'content' cells, the selectivity to both factors should be affected by the presentation of the 2D controls. To assess this point, we computed the selectivity indices ω2 for the responses to 2D controls (Blob and Random or Posterized). A majority of cells selective to 3D shape remained selective to the shape of Blob controls (18/23, ω2S > 0.10), and fewer cells (10/23) were also selective to Random or Posterized controls (1 and 9 cells, respectively). On the other hand, the selectivity to illumination direction was more disrupted by the control stimuli: fewer cells remained selective for illumination direction with Blob stimuli (ω2ID, n = 7/23) and 9 with Random or Posterized stimuli (1 and 8 cells, respectively).
This analysis gives strong indications that 3D and 2D stimuli are well separated by the responses within the V4 population. This separation cannot be explained in terms of mean luminance and power spectrum differences between 3D and Blob stimuli. Nevertheless, one could claim that the spatial distribution of grey levels is a determinant factor in the differential clustering of 3D vs. 2D stimuli because of the known sensitivity of V4 neurons to the phase of visual stimuli. Indeed, although 3D stimuli are easily distinguishable from 2D Blob stimuli by their vivid 3D aspect due to the shading, the spatial distribution of dark and light patches is very different in the two types of stimuli. This is the reason why we designed the Posterized control stimuli, which better preserve the polarity of the 3D stimuli. Figure 9B shows the result of the hierarchical cluster analysis performed on the 53 'content' cells tested with Posterized stimuli. As for the subpopulation of cells displayed in Figure 9A, most stimuli have a strong tendency to be clustered by V4 cells according to their type. The tree splits at the first level (d = 43) into two distinct clusters (A and B), where A contains all 3D and Posterized stimuli and B contains exclusively Blob stimuli (12 of the 16). At a lower level (d = 29), cluster A splits into 2 subgroups, A1 and A2, each containing 18 stimuli. A1 is composed of all but two 3D stimuli (14/16) in addition to the Blob controls of shape #6, whereas all Posterized stimuli are found in cluster A2. Interestingly, directions of illumination have a marked tendency to be grouped within this cluster of Posterized stimuli. Considering the distances between clusters, we found that 3D shapes are closer to Posterized stimuli than to 2D Blob stimuli, whereas in both trees the distance between 3D shapes and Blob stimuli is similar (38 and 43 for Figure 9A and 9B, respectively).
Responses to Random stimuli are markedly different from responses to other types of controls with a cluster separated from the 3D cluster by a long distance of 170.
The main result of our study is that 3D stimuli defined by shape from shading are distinguished from 2D controls by population coding in V4. This reflects the importance of this mid-level area of the "object information processing pathway" in the elaboration of this complex visual attribute.
First, our results show that single-cell selectivity to the 3D shapes used in this study is broad, as determined by the SSI and ω2 indices. Although a vast majority of the cells are efficiently driven by the pool of stimuli, only 45% are statistically modulated by 3D shapes and even fewer can individualize a given 3D shape (23 neurons according to the ω2 criterion). One possible explanation for the rather low occurrence of tuned units is that some parts (similarly oriented curved ridges or prominent bumps) are common to different stimuli, albeit placed in different positions.
Next, whereas the direction of illumination modulates the responses of 55% of the cells, the selectivity indices (ISI and ω2) also show that few individual cells are selective to the direction of illumination. One possible reason for the relatively sparse occurrence of such selectivity is that the complex pattern of shading varies considerably from shape to shape for a given direction of illumination. Another interesting result is that the distribution of Direction indices does not reveal any preference for a given direction of illumination. Hanazawa and Komatsu (2001) demonstrated that a majority of V4 neurons exhibited a sensitivity to the direction of luminance gradients in 3D texture patterns that was biased towards the vertical gradient. We suggest that, because our stimuli contained several complex curves, the source of illumination may not be as obvious as it would be with Hanazawa's textures.
Since most individual cells are broadly tuned to illumination direction, one could expect them to achieve invariance to illumination. Our results show that the few cells that are strongly shape selective according to the ω2 criterion (> 0.10) have a tendency to be invariant to illumination direction (i.e., there was a negative correlation between the ω2 indices). The invariance of neuronal discharge across different illumination directions is a crucial step in the shape-from-shading process. Indeed, humans have remarkable abilities to achieve object recognition under different illumination directions, and one can assume that macaque monkeys have a similar visual skill. For example, lesion work in the macaque monkey indicated that the inferior temporal cortex is critical for object recognition under varying conditions of illumination. However, the question of invariance in terms of illumination direction is controversial in the literature. The structural theory of recognition suggests that the visual system extracts illumination-invariant features from the scene [33, 34]. Psychophysical results are consistent with this theory, as humans can recognize objects and, in some cases, faces effortlessly when the direction of illumination varies. On the other hand, image-based theory proposes that the direction of illumination is encoded in internal face and object representations [35, 36]. This theory is supported by psychophysical data showing that recognition of faces and objects varies with illumination [10, 37, 38]. The results from the individual cells could support either theory, as we reported the presence of a few individual cells that were invariant to the direction of illumination but selective to 3D shape.
However, the population analysis did not reflect a counterpart of the 'structural theory': we observed no clustering of individual 3D shapes in the dendrograms obtained from the population responses, suggesting that more computational steps beyond V4 are required to individualize 3D objects lit from different directions. Nevertheless, the results of the cluster analysis showed a tendency for the same illuminations to be grouped together at the last branches of the dendrogram for the 3D stimuli in Figure 9A, suggesting a tendency to code the direction of the luminance gradient in complex shapes. However, we note that, in Figure 9B, the effect is not present for 3D stimuli but for Posterized stimuli only. Although one cannot rule out a sampling bias, this may also reflect the fact that the polarity of dark and light regions is more obvious in the Posterized than in the 3D shape stimuli. Hence, the cluster analysis may be revealing the mechanism that underlies the extraction of illumination direction from complex shading patterns. Such mechanisms would fit the predictions of image-based theory. However, our results are limited in the sense that the monkeys performed a passive fixation task. An interesting development of this study would be to demonstrate that invariance to a broader range of illumination angles can be obtained in an active recognition task. To accomplish this, an experiment would have to be designed in which monkeys are trained to recognize individual objects (of the kind we used) under various illuminations. Such generalization of object recognition to 'difficult' illuminations is plausible in V4, since neurons of this area have been shown to be prone to perceptual learning.
At this stage, it is difficult to argue in favor of real 3D coding in V4. The 3D rendering of our stimuli is very vivid because of the strong shading gradients. Thus, illumination direction and 3D shape are strongly linked by construction of the stimuli and, as such, they are unavoidably intermixed. The controls we used for the 3D shape-from-shading stimuli were created by disorganizing the structure of the image while trying to keep the same low-level parameters. Whenever a neuron (or a population of neurons) is selective to the 3D stimuli and not to (or is separated from) Posterized and Blob stimuli, it means that neither the gradient of tones alone nor the pattern of dark and light patches alone is sufficient to drive the cell. This would suggest that such a cell could represent an important step in processing shape from shading. Our results show that a vast majority (78%) of V4 neurons responded differently to 3D stimuli and their 2D control versions. However, the ANOVA and the Tukey test show that a comparable number of cells prefer the 2D controls (Blobs and Posterized) as prefer the 3D stimuli. This point needs to be emphasized with regard to the fMRI results of Georgieva and colleagues. In humans, many regions sensitive to 3D shapes were also responsive to 2D shapes, and this was likely the case in the area equivalent to V4 of the macaque monkey. If the respective global responses of two separate but intermixed neuronal populations (in the present case, our 3D- and 2D-biased neurons) have the same strength, the resulting fMRI pattern will not identify a 3D-selective region. We recorded a subset of only 24 neurons that displayed a clear individual preference for 3D stimuli.
The presence of this subpopulation is consistent with the results of Georgieva and colleagues, who report that activation related to shape from shading can be found in ventral areas, although, besides the quite complex problem of homologies between species, the main focus of activity is likely to correspond to a more anterior region in the macaque.
When the responses of V4 cells are analyzed at the population level, we obtain better evidence that neurons differentially encode 2D stimuli and 3D stimuli defined by their shading. This is shown firstly in the rank analysis, where responses to 2D stimuli were ranked according to the 3D stimulus preference. We first observed that the ranking of 3D stimuli is clearly different from that of 2D Blob and Random stimuli. This rules out the possibility that the selectivity to 3D shapes is based only on low-level parameters and suggests that the disposition of the dark and light regions, very different in each type of stimulus, is important for V4 cells. We then observed that the ranking of 3D stimuli does not match that of Posterized stimuli either. This suggests that the gradient of grey levels, absent in the two-tone Posterized stimuli, is also important. A better visualization of the respective coding of each stimulus type is provided by the cluster analysis, which suggests that the V4 population is able to accurately discriminate between the different types of stimuli. The 3D stimuli and the different classes of 2D controls mostly belonged to different clusters, suggesting that the population response gives a separate status to the 3D stimuli. The different clusters cannot be explained by low-level parameters alone: although 2D Blob stimuli had the same first-order parameters as the 3D stimuli (which may explain the correlation between the rank plots), we did observe two clusters corresponding to each class of stimulus. Similarly, Random stimuli, which have the same mean luminance and contrast but differ markedly from 3D and Blob stimuli in their spatial frequencies, belong to a cluster that was separated from the other two by a distance four times longer than that between the 3D shapes and Blob stimuli. The question remains: why are the 3D stimuli separated?
Although each 3D stimulus is easily recognizable from the others, as they are defined by a different inner content, all stimuli could also be considered as being covered by the same texture or material (here, a kind of glossy metal). Our recent work has stressed the fact that V4 cells can classify natural textures, and others have shown that the human equivalent of V4 is engaged by attention to surface properties. If the 3D stimuli were treated by V4 as a texture, we should expect them all to have the 'special status' revealed by the cluster analysis. But we think that our results show more than a mere coding of a particular texture. The cluster analysis shows that the Posterized stimuli are closer to the 3D stimuli than the 3D stimuli are to the Blob stimuli. This suggests that the polarity of the dark and bright patterns on the stimuli (similar in the 3D and Posterized stimuli only) matters more than low-level parameters in the classification.
Hence, both the rank and cluster analyses point to the significance of the disposition of dark and light patches together with a gradient of grey levels. This double selectivity is an important stage in perceiving shape from shading, as a given direction of illumination on an irregular surface results in a unique shading pattern. However, the question remains open as to whether our results reflect a genuine coding of 3D shape (from shading) per se. Many groups have shown the importance of depth encoding in V4. For instance, V4 cells are selective to disparity or to the 3D orientation of bars [43, 44]. However, individual V4 cells may not explicitly represent the orientation of curvature in depth when depth comes from disparity. In our study, the monocular depth cue was brought about by illumination and, similarly, the V4 cells could not achieve complete shape invariance. Using stimuli similar to ours while recording in TEO, Vangeneugden and colleagues did not observe striking differences from our results, except that a majority of cells preferred 3D shapes over the controls. However, a previous study reports depth-invariant shape selectivity in the infero-temporal cortex. It may be that the complex percept of 3D shape from shading needs to build up through the V4 and TEO stages before reaching invariance in IT. In this case, area V4 could encode 3D cues like shading, texture gradient or disparity and send this information to the infero-temporal cortex [48, 49]. However, it is not yet completely understood how shape and surface selectivities build up through the early levels, V4 (as a putative intermediate stage) and the different IT subregions [50–52]. One very important point that remains unclear is which areas contribute to the vivid naturalness of the phenomenological percept of 3D. This remains to be tested with behavioural tasks while focusing on the regions corresponding to the human posterior LOC, a region of high convergence of 3D cues.
This study shows that area V4 of the monkey plays a significant role in the cortical processing steps leading to the perception of 3D objects defined by shape from shading. Shape-from-shading selectivity, while not obvious at the level of the single cell, is suggested at the population level.
We thank F. Lefevre and S. Aragones for husbandry and care, and C. Marlot for her invaluable work on the bibliography database. We thank Rufin Vogels, James Todd, Karoly Koteles, James Bisley and Koorosh Mirpour for technical help and scientific discussions.
This work was supported by grants from the Information Society Technologies (INSIGHT2+, #2000 29688, Neuronal basis of coding of 3D shape and material properties for recognition) and the Fondation de France.
- Todd JT: The visual perception of 3D shape. Trends Cogn Sci. 2004, 8 (3): 115-121.
- Braje WL, Kersten D, Tarr MJ, Troje NF: Illumination effects in face recognition. Psychobiology. 1998, 26 (4): 371-380.
- Tarr MJ, Kersten D, Bulthoff HH: Why the visual recognition system might encode the effects of illumination. Vision Res. 1998, 38 (15-16): 2259-2275.
- Todd JT, Mingolla E: Perception of surface curvature and direction of illumination from patterns of shading. J Exp Psychol Hum Percept Perform. 1983, 9 (4): 583-595.
- Mamassian P, Kersten D: Illumination, shading and the perception of local orientation. Vision Res. 1996, 36 (15): 2351-2367.
- Nefs HT, Koenderink JJ, Kappers AM: The influence of illumination direction on the pictorial reliefs of Lambertian surfaces. Perception. 2005, 34 (3): 275-287.
- Ramachandran VS: Perception of shape from shading. Nature. 1988, 331 (6152): 163-166.
- Biederman I, Bar M: One-shot viewpoint invariance in matching novel objects. Vision Res. 1999, 39 (17): 2885-2899.
- Nederhouser M, Mangini MC, Biederman I, Subramaniam S, Vogels R: Is object recognition invariant to direction of illumination and direction of contrast? 2001, Society PoVS. Sarasota, Florida.
- Braje WL: Illumination encoding in face recognition: effect of position shift. J Vis. 2003, 3 (2): 161-170.
- Cavanagh P, Leclerc YG: Shape from shadows [published erratum appears in J Exp Psychol Hum Percept Perform 1990 Nov;16(4):910]. J Exp Psychol Hum Percept Perform. 1989, 15 (1): 3-27.
- Braje WL, Legge GE, Kersten D: Invariant recognition of natural objects in the presence of shadows. Perception. 2000, 29 (4): 383-398.
- Georgieva SS, Todd JT, Peeters R, Orban GA: The extraction of 3D shape from texture and shading in the human brain. Cereb Cortex. 2008, 18 (10): 2416-2438.
- Taira M, Nose I, Inoue K, Tsutsui K: Cortical areas related to attention to 3D surface structures based on shading: an fMRI study. Neuroimage. 2001, 14 (5): 959-966.
- Moore C, Engel SA: Neural response to perception of volume in the lateral occipital complex. Neuron. 2001, 29 (1): 277-286.
- Kourtzi Z, Erb M, Grodd W, Bulthoff HH: Representation of the perceived 3-D object shape in the human lateral occipital complex. Cereb Cortex. 2003, 13 (9): 911-920.
- Hanazawa A, Komatsu H: Influence of the direction of elemental luminance gradients on the responses of V4 cells to textured surfaces. J Neurosci. 2001, 21 (12): 4490-4497.
- Hanazawa A: Coding of texture and shading in monkey area V4. Int Congr Ser. 2004, 1269: 89-92.
- Pasupathy A, Connor CE: Responses to contour features in macaque area V4. J Neurophysiol. 1999, 82 (5): 2490-2502.
- Pasupathy A, Connor CE: Shape representation in area V4: position-specific tuning for boundary conformation. J Neurophysiol. 2001, 86 (5): 2505-2519.
- David SV, Hayden BY, Gallant JL: Spectral receptive field properties explain shape selectivity in area V4. J Neurophysiol. 2006, 96 (6): 3492-3505.
- Vogels R, Biederman I: Effects of illumination intensity and direction on object coding in macaque inferior temporal cortex. Cereb Cortex. 2002, 12 (7): 756-766.
- Zhang Y, Weiner VS, Slocum WM, Schiller PH: Depth from shading and disparity in humans and monkeys. Vis Neurosci. 2007, 24 (2): 207-215.
- Arcizet F, Jouffrais C, Girard P: Natural textures classification in area V4 of the macaque monkey. Exp Brain Res. 2008, 189 (1): 109-120.
- Norman JF, Todd JT: The perception of 3-D structure from contradictory optical patterns. Percept Psychophys. 1995, 57 (6): 826-834.
- Gattass R, Sousa AP, Gross CG: Visuotopic organization and extent of V3 and V4 of the macaque. J Neurosci. 1988, 8 (6): 1831-1845.
- Komatsu H, Ideura Y: Relationships between color, shape, and pattern selectivities of neurons in the inferior temporal cortex of the monkey. J Neurophysiol. 1993, 70 (2): 677-694.
- Koteles K, De Maziere PA, Van Hulle M, Orban GA, Vogels R: Coding of images of materials by macaque inferior temporal cortical neurons. Eur J Neurosci. 2008, 27 (2): 466-482.
- Mysore SG, et al.: Shape selectivity for camouflage-breaking dynamic stimuli in dorsal V4 neurons. Cereb Cortex. 2008, 18 (6): 1429-1443.
- Ward JH: Hierarchical grouping to optimize an objective function. J Am Statist Assoc. 1963, 58: 236-244.
- Desimone R, Schein SJ: Visual properties of neurons in area V4 of the macaque: sensitivity to stimulus form. J Neurophysiol. 1987, 57 (3): 835-868.
- Weiskrantz L, Saunders RC: Impairments of visual object transforms in monkeys. Brain. 1984, 107 (4): 1033-1072.
- Biederman I: Recognition-by-components: a theory of human image understanding. Psychol Rev. 1987, 94 (2): 115-147.
- Marr D, Nishihara HK: Representation and recognition of the spatial organization of three-dimensional shapes. Philos Trans R Soc Lond B. 1978, 200: 269-294.
- Poggio T, Edelman S: A network that learns to recognize three-dimensional objects. Nature. 1990, 343 (6255): 263-266.
- Ullman S: Aligning pictorial description: an approach to object recognition. Cognition. 1989, 32: 193-254.
- Gauthier I, Tarr MJ: Orientation priming of novel shapes in the context of viewpoint-dependent recognition. Perception. 1997, 26 (1): 51-73.
- Troje NF, Bulthoff HH: How is bilateral symmetry of human faces used for recognition of novel views? Vision Res. 1998, 38 (1): 79-89.
- Rainer G, Lee HK, Logothetis NK: The effect of learning on the function of monkey extrastriate visual cortex. PLoS Biol. 2004, 2 (2): 275-283.
- Logothetis NK: What we can do and what we cannot do with fMRI. Nature. 2008, 453 (7197): 869-878.
- Denys K, Vanduffel W, Fize D, Nelissen K, Peuskens H, Van Essen D, Orban GA: The processing of visual shape in the cerebral cortex of human and nonhuman primates: a functional magnetic resonance imaging study. J Neurosci. 2004, 24 (10): 2551-2565.
- Cant JS, Goodale MA: Attention to form or surface properties modulates different regions of human occipitotemporal cortex. Cereb Cortex. 2007, 17 (3): 713-731.
- Hinkle DA, Connor CE: Three-dimensional orientation tuning in macaque area V4. Nat Neurosci. 2002, 5 (7): 665-670.
- Watanabe M, Tanaka H, Uka T, Fujita I: Disparity-selective neurons in area V4 of macaque monkeys. J Neurophysiol. 2002, 87 (4): 1960-1973.
- Hegde J, Van Essen DC: Role of primate visual area V4 in the processing of 3-D shape characteristics defined by disparity. J Neurophysiol. 2005, 94 (4): 2856-2866.
- Vangeneugden J, Koteles K, Orban GA, Vogels R: The coding of 3-D shape from shading in macaque areas TE and TEO. Perception. 2006, 35 (ECVP abstract supplement).
- Janssen P, Vogels R, Orban GA: Three-dimensional shape coding in inferior temporal cortex. Neuron. 2000, 27 (2): 385-397.
- Boussaoud D, Desimone R, Ungerleider LG: Visual topography of area TEO in the macaque. J Comp Neurol. 1991, 306 (4): 554-575.
- Ungerleider LG, Galkin TW, Desimone R, Gattass R: Cortical connections of area V4 in the macaque. Cereb Cortex. 2008, 18 (3): 477-499.
- Pasupathy A: Neural basis of shape representation in the primate brain. Prog Brain Res. 2006, 154: 293-313.
- Orban GA: Higher order visual processing in macaque extrastriate cortex. Physiol Rev. 2008, 88 (1): 59-89.
- Hegde J, Van Essen DC: A comparative study of shape representation in macaque visual areas V2 and V4. Cereb Cortex. 2007, 17 (5): 1100-1116.
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.