Volume 9 Supplement 1

Seventeenth Annual Computational Neuroscience Meeting: CNS*2008

Open Access

A simple spiking retina model for exact video stimulus representation

BMC Neuroscience 2008, 9(Suppl 1):P130

DOI: 10.1186/1471-2202-9-S1-P130

Published: 11 July 2008

A computational model for the representation of visual stimuli with a population of spiking neurons is presented. We show that under mild conditions it is possible to faithfully encode an analog video stream into a sequence of spike trains, and we provide an algorithm that recovers the video input using only the spike times of the population.

In our model, an analog video stream, bandlimited in time, arrives at the dendritic trees of a neural population. At each neuron the multi-dimensional video input is filtered by the neuron's spatiotemporal receptive field, and the resulting one-dimensional dendritic current enters the soma (see Figure 1). The set of spatial receptive fields is modeled as a Gabor filterbank. The spike generation mechanism is threshold-based: each time the dendritic current exceeds a threshold, a spike is fired and the membrane potential is reset through a negative feedback loop triggered by the spike. This simple spiking mechanism has been shown to accurately model the responses of various neurons in the early visual system [1].
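The threshold-and-reset mechanism can be sketched in a minimal one-dimensional form as follows. This is an illustration only: the function name, the time step `dt`, the `threshold`, and the discretized dendritic current are hypothetical choices, not values from the paper.

```python
import numpy as np

def encode(dendritic_current, dt=1e-3, threshold=1.0):
    """Sketch of a threshold-based spike generator (hypothetical parameters).

    The membrane potential integrates the dendritic current; each time it
    crosses the threshold, a spike is emitted and a negative feedback pulse
    resets the potential by subtracting the threshold.
    """
    v = 0.0
    spike_times = []
    for k, current in enumerate(dendritic_current):
        v += current * dt          # membrane integrates the dendritic current
        if v >= threshold:
            spike_times.append(k * dt)
            v -= threshold         # negative feedback reset after the spike
    return np.array(spike_times)
```

Driving this sketch with a constant current produces a regular spike train whose rate grows with the input amplitude, which is the behavior the decoding step exploits.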
Figure 1

Encoding and decoding mechanisms for video stimuli: The stimulus is filtered by the receptive fields of the neurons and enters the soma. Spike generation is threshold-based, and a negative feedback mechanism resets the membrane potential after each spike. In the decoding part, each spike, represented by a delta pulse, is weighted by an appropriate coefficient and then filtered by the same receptive field for stimulus reconstruction. The weighted sum is passed through a low-pass filter to recover the original input stimulus.
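The decoding path in the caption can be sketched as below. The receptive-field kernel `rf`, the reconstruction `weights`, and the low-pass `cutoff` are illustrative assumptions; in the actual algorithm the reconstruction coefficients are computed from the spike times themselves.

```python
import numpy as np

def decode(spike_times, weights, rf, n_samples, dt, cutoff):
    """Sketch of the decoder: weighted delta pulses at the spike times are
    filtered by the receptive-field kernel, then low-passed to the stimulus
    bandwidth (hypothetical parameters throughout)."""
    deltas = np.zeros(n_samples)
    for t_k, c_k in zip(spike_times, weights):
        idx = int(round(t_k / dt))
        if 0 <= idx < n_samples:
            deltas[idx] += c_k / dt              # delta pulse on the time grid
    filtered = np.convolve(deltas, rf, mode="same") * dt
    spectrum = np.fft.rfft(filtered)             # ideal low-pass via the FFT
    spectrum[np.fft.rfftfreq(n_samples, dt) > cutoff] = 0.0
    return np.fft.irfft(spectrum, n=n_samples)
```

A single weighted spike then reconstructs a smoothed copy of the receptive-field kernel centered at the spike time; summing over all spikes and neurons yields the stimulus estimate.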

We prove and demonstrate that the whole video stream can be recovered from knowledge of the spike times alone, provided that the neural population is sufficiently large. Increasing the number of neurons to achieve a better representation is consistent with basic neurobiological thought [2].

Although very precise, the responses of visual neurons show some variability between subsequent stimulus repeats, which can be attributed to various noise sources [1]. We examine the effect of noise on our algorithm and show that the reconstruction quality degrades gracefully when white noise is present at the input or in the feedback loop.
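The graceful degradation can be illustrated with a self-contained toy experiment: white noise of increasing standard deviation is added to a constant input of a simple threshold-and-reset encoder, and the spike count drifts only slightly. All names and parameters here are invented for the sketch and are not the paper's actual noise model.

```python
import numpy as np

def spike_count(current, sigma, dt=1e-3, threshold=0.097, seed=0):
    """Count spikes of a toy threshold-and-reset encoder driven by a
    constant current plus white noise of standard deviation `sigma`
    (hypothetical parameters, for illustration only)."""
    rng = np.random.default_rng(seed)
    v, count = 0.0, 0
    for noisy in current + sigma * rng.standard_normal(current.size):
        v += noisy * dt            # membrane integrates the noisy current
        if v >= threshold:
            count += 1
            v -= threshold         # negative feedback reset
    return count

stimulus = np.full(1000, 2.0)
counts = [spike_count(stimulus, s) for s in (0.0, 0.5, 1.0)]
```

Because the membrane integrates its input, the zero-mean noise largely averages out over each inter-spike interval, so the spike count, and hence the reconstruction, changes smoothly rather than abruptly with the noise level.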



This work is supported by NIH grant R01 DC008701-01 and NSF grant CCF-06-35252. EA Pnevmatikakis is also supported by Onassis Public Benefit Foundation.

Authors’ Affiliations

Department of Electrical Engineering, Columbia University, New York


  1. Keat J, Reinagel P, Reid RC, Meister M: Predicting every spike: a model for the responses of visual neurons. Neuron 2001, 30:803-817. doi:10.1016/S0896-6273(01)00322-1
  2. Lazar AA, Pnevmatikakis EA: Faithful representation of stimuli with a population of integrate-and-fire neurons. Neural Computation 2008, to appear.


© Lazar and Pnevmatikakis; licensee BioMed Central Ltd. 2008

This article is published under license to BioMed Central Ltd.