Volume 10 Supplement 1

Eighteenth Annual Computational Neuroscience Meeting: CNS*2009

Open Access

A correspondence-based neural mechanism for position invariant feature processing

  • Yasuomi D Sato1,
  • Jenia Jitsev1,
  • Philipp Wolfrum1 and
  • Christoph von der Malsburg1
BMC Neuroscience 2009, 10(Suppl 1):P366

DOI: 10.1186/1471-2202-10-S1-P366

Published: 13 July 2009

Introduction

We focus on constructing a hierarchical neural system for position-invariant recognition, one of the most fundamental forms of invariance achieved in visual processing [1, 2]. Invariant recognition has been hypothesized to proceed by matching the sensory image of an object projected onto the retina to the most suitable representation stored in memory in higher visual cortical areas. This raises a general problem: the position of the object image on the retina is initially uncertain. Furthermore, the retinal activity carrying the sensory information differs substantially from the activity in higher areas, where positional information about the object has been lost. Nevertheless, despite this ambiguity, a particular object is recognized effortlessly. Our aim in this work is to resolve this general recognition problem.

Mechanisms

A first step toward resolving the problem is to show how information about the object image is preserved as it flows from the input layer through intermediate layers to the model layer, even though some object information (here, the position of the object in the input) is lost along the way. For this we employ marginalization of feature components over the corresponding positional region in each layer. The advantage of this marginalization is that the features extracted from an input image are preserved as they are projected through the higher intermediate layers to the model layer, while only the necessary positional information about the input is retained. The second problem, positional uncertainty, is resolved by establishing the most appropriate projections from each layer to the next higher layer. To find these projections, a similarity is measured between the model reference features and the marginalized features of each layer. A maximum operation over these similarity measures then selects the most appropriate projections, establishing a complete connection between the input and model layers and thereby specifying the position of the object in the input image; a sketch of these two steps follows Figure 1 below. Finally, employing a dynamic model of cortical columns [3], we propose a position-invariant object recognition system built as a dynamic routing circuit, without giving up the concept of position-specific marginalized features described above. We test and discuss the recognition performance of the proposed system, including its ability to identify the correct position of a particular object (Figure 1).
Figure 1. The main concept of the hierarchical feature network system.
https://static-content.springer.com/image/art%3A10.1186%2F1471-2202-10-S1-P366/MediaObjects/12868_2009_Article_1551_Fig1_HTML.jpg
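To make the marginalization and maximum-selection steps concrete, the following is a minimal Python sketch. It assumes a one-dimensional retinotopic input, a single pooling stage, and a normalized dot product as the similarity measure; all names, sizes, and parameters (N_POSITIONS, REGION, route, and so on) are illustrative assumptions, not the implementation used in the system described above or in [3].

    import numpy as np

    rng = np.random.default_rng(0)

    N_POSITIONS = 16   # retinotopic positions in the input layer (assumed)
    N_FEATURES = 8     # feature channels, e.g. Gabor-like responses (assumed)
    REGION = 4         # size of the positional region pooled per layer (assumed)

    def marginalize(features):
        """Sum feature activity over each positional region, discarding
        within-region position while preserving feature identity."""
        n_regions = features.shape[0] // REGION
        return features.reshape(n_regions, REGION, N_FEATURES).sum(axis=1)

    def route(input_features, model_features):
        """For each region, measure the similarity (normalized dot product)
        between marginalized input features and the stored model features,
        then take the maximum to select the most appropriate projection."""
        pooled = marginalize(input_features)           # (n_regions, N_FEATURES)
        sims = pooled @ model_features / (
            np.linalg.norm(pooled, axis=1) * np.linalg.norm(model_features) + 1e-12)
        best = int(np.argmax(sims))                    # maximum operation
        return best, sims

    # Toy usage: embed an object's feature vector at a hidden position
    # in a noisy input, then recover its region via the routing step.
    model = rng.random(N_FEATURES)
    image = 0.05 * rng.random((N_POSITIONS, N_FEATURES))
    true_region = 2
    image[true_region * REGION] += model

    found, sims = route(image, model)
    print(f"object localized in region {found} (true region {true_region})")

In a full hierarchy this region selection would be repeated at each layer, so that the chain of winning projections specifies the object's position on the input while the marginalized features themselves remain position-independent.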

Declarations

Acknowledgements

This work was supported by the Hertie Foundation, by the EU project "Daisy", FP6-2005-015803 and by the German Federal Ministry of Education and Research (BMBF) within the "Bernstein Focus: Neurotechnology" through research grant 01GQ0840.

Authors’ Affiliations

(1)
Frankfurt Institute for Advanced Studies (FIAS), Johann Wolfgang Goethe-University

References

  1. Wiskott L: How does our visual system achieve shift and size invariance? In 23 Problems in Systems Neuroscience. Edited by van Hemmen JL, Sejnowski TJ. Oxford University Press; 2004.
  2. Olshausen B, Anderson C, Van Essen D: A multiscale dynamic routing circuit for forming size- and position-invariant object representations. J Computational Neuroscience 1995, 2:45-62. doi:10.1007/BF00962707.
  3. Wolfrum P, Wolff C, Lücke J, von der Malsburg C: A recurrent dynamic model for correspondence-based face recognition. J Vision 2008, 8:1-18. doi:10.1167/8.7.34.

Copyright

© Sato et al; licensee BioMed Central Ltd. 2009

This article is published under license to BioMed Central Ltd.
