Efficient coding correlates with spatial frequency tuning in a model of V1 receptive field organization
© Wiltschut and Hamker; licensee BioMed Central Ltd. 2009
Published: 13 July 2009
Efficient coding has been proposed to play an essential role in early visual processing. While several approaches optimize an objective function that captures one particular aspect of efficient coding (e.g. minimization of mutual information or maximization of sparseness), we explore here how different estimates of efficient coding in a model with non-linear dynamics and Hebbian learning relate to the similarity of model receptive fields to V1 data with respect to spatial tuning. Our simulation results indicate that most measures of efficient coding correlate with the similarity of model receptive field data to V1 data; optimizing an estimate of efficient coding increases the similarity of the model data to the experimental data.
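The abstract does not specify which estimates of efficient coding were computed. As an illustration of what such measures look like, the sketch below implements two standard sparseness estimates often used in this literature (excess kurtosis and Treves-Rolls lifetime sparseness); the test distributions are hypothetical stand-ins for dense versus sparse response profiles, not data from the model:

```python
import numpy as np

def kurtosis_sparseness(r):
    # Excess kurtosis of a response distribution; higher values = sparser code.
    r = np.asarray(r, dtype=float)
    z = (r - r.mean()) / r.std()
    return np.mean(z**4) - 3.0

def treves_rolls_sparseness(r):
    # Treves-Rolls lifetime sparseness, rescaled to [0, 1]; 1 = maximally sparse.
    r = np.asarray(r, dtype=float)
    n = r.size
    a = r.mean()**2 / np.mean(r**2)
    return (1.0 - a) / (1.0 - 1.0 / n)

rng = np.random.default_rng(0)
dense = rng.normal(1.0, 0.1, 1000)        # most responses of similar size
sparse = rng.exponential(1.0, 1000)**3    # few large responses dominate

print(f"kurtosis:     dense={kurtosis_sparseness(dense):.2f}, "
      f"sparse={kurtosis_sparseness(sparse):.2f}")
print(f"Treves-Rolls: dense={treves_rolls_sparseness(dense):.3f}, "
      f"sparse={treves_rolls_sparseness(sparse):.3f}")
```

Both measures rank the heavy-tailed response profile as sparser than the dense one, which is the kind of comparison needed to correlate coding efficiency with receptive-field similarity.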
We developed a two-layer model of rate-coded neurons. The first layer (LGN) receives input from low-pass filtered image patches and is gain-modulated via an attentional feedback signal. Each cell in the second layer (V1) receives excitatory input from every cell in the first layer. The second-layer cells compete with one another via anti-Hebbian connections, and their activation follows non-linear dynamics. All connection weights are learned in an unsupervised fashion according to a Hebbian rule. We compared the learned receptive fields (RFs) to electrophysiological data from macaque V1 and determined the learning success as a function of the level of coding efficiency. Additionally, we compared our model results to those of ICA, a standard linear method for learning RFs from natural scenes.
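The abstract names the ingredients (rectified rate dynamics, Hebbian feedforward learning, anti-Hebbian lateral competition) without giving equations. The following NumPy sketch shows one common way such a circuit is written, using an Oja-style feedforward rule and Földiák-style lateral decorrelation; the layer sizes, learning rates, input statistics, and exact update forms are illustrative assumptions, not the authors' model:

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_out = 64, 32                       # "LGN" inputs, "V1" cells (illustrative)
W = rng.uniform(0.0, 0.1, (n_out, n_in))   # feedforward excitatory weights
A = np.zeros((n_out, n_out))               # lateral anti-Hebbian (inhibitory) weights
lr_w, lr_a = 0.01, 0.02

def respond(x, steps=30, dt=0.1):
    # Settle rectified rate dynamics: feedforward drive minus lateral inhibition.
    y = np.zeros(n_out)
    for _ in range(steps):
        y += dt * (-y + np.maximum(W @ x - A @ y, 0.0))
        y = np.maximum(y, 0.0)
    return y

for _ in range(500):
    x = np.maximum(rng.normal(0.0, 1.0, n_in), 0.0)  # stand-in for a filtered patch
    y = respond(x)
    W += lr_w * (np.outer(y, x) - (y**2)[:, None] * W)  # Oja-like Hebbian rule
    A += lr_a * (np.outer(y, y) - A)                    # anti-Hebbian decorrelation
    np.fill_diagonal(A, 0.0)                            # no self-inhibition
    np.clip(A, 0.0, None, out=A)                        # inhibition stays non-negative
```

The decay terms in both updates keep the weights bounded, and the lateral matrix A grows fastest between cells that fire together, which is what drives the competition toward decorrelated responses.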
We conclude that Hebbian/anti-Hebbian learning is consistent with the framework of efficient coding. In particular, non-linear lateral interactions lead to more independence, which also increases the similarity between the model and the experimental data. Overly strong lateral inhibitory connections, however, impair the coding quality. Linear ICA does not ensure that the resulting codes are largely independent and decorrelated. If independence is to be a guiding principle of efficient coding for vision, linear ICA is probably not the ideal solution. Additional non-linear inhibition eliminates a number of dependencies, and we have shown that this concept generalizes to anti-Hebbian learning of the lateral weights.
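The gap between decorrelation and full independence can be seen in a toy construction: two signals sharing a common contrast-like fluctuation, a standard stand-in for the higher-order dependencies in natural images that linear methods leave behind. The construction is illustrative and not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
scale = rng.exponential(1.0, n)            # shared "contrast" fluctuation
a = scale * rng.normal(0.0, 1.0, n)        # two responses modulated by the
b = scale * rng.normal(0.0, 1.0, n)        # same scale variable

corr = np.corrcoef(a, b)[0, 1]             # linear correlation is near zero...
energy_corr = np.corrcoef(a**2, b**2)[0, 1]  # ...but the energies are correlated

print(f"corr={corr:.3f}, energy_corr={energy_corr:.3f}")
```

A linear decorrelating transform cannot remove the energy dependence, whereas a non-linearity applied before the lateral interaction (as in the inhibition described above) can act on exactly such higher-order statistics.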