  • Poster presentation
  • Open access

A simple spiking retina model for exact video stimulus representation

A computational model for the representation of visual stimuli with a population of spiking neurons is presented. We show that, under mild conditions, an analog video stream can be faithfully encoded as a sequence of spike trains, and we provide an algorithm that recovers the video input using only the spike times of the population.

In our model, an analog video stream, bandlimited in time, arrives at the dendritic trees of a neural population. At each neuron, the multi-dimensional video input is filtered by the neuron's spatiotemporal receptive field, and the resulting one-dimensional dendritic current enters the soma (see Figure 1). The set of spatial receptive fields is modeled as a Gabor filterbank. The spike generation mechanism is threshold-based: each time the integrated dendritic current exceeds a threshold, a spike is fired and the membrane potential is reset by a negative potential through a negative feedback loop triggered by the spike. This simple spike mechanism has been shown to accurately model the responses of various neurons in the early visual system [1].
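As a concrete illustration, the threshold-and-reset mechanism described above can be sketched as a simple integrate-and-fire-style encoder. This is a minimal sketch, not the paper's implementation: the function name, the leak-free integration, and all parameter values are assumptions for demonstration.

```python
import numpy as np

def encode(dendritic_current, dt=1e-3, threshold=1.0, reset=1.0):
    """Integrate the dendritic current and fire a spike each time the
    membrane potential crosses the threshold; a negative feedback of
    size `reset` then pulls the potential back down."""
    v = 0.0
    spike_times = []
    for k, i_k in enumerate(dendritic_current):
        v += i_k * dt                    # leak-free integration
        if v >= threshold:               # threshold crossing -> spike
            spike_times.append((k + 1) * dt)
            v -= reset                   # negative-feedback reset
    return np.array(spike_times)

# A constant positive current produces regularly spaced spikes:
spikes = encode(np.full(5000, 2.0))      # 5 s of current sampled at 1 kHz
```

With a constant drive, the inter-spike interval is simply threshold divided by the current, which is the hallmark of this kind of threshold encoder.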

Figure 1

Encoding and decoding mechanisms for video stimuli: The stimulus is filtered by the receptive fields of the neurons and enters the soma. Spike generation is threshold-based, and a negative feedback mechanism resets the membrane potential after each spike. In the decoding stage, each spike, represented by a delta pulse, is weighted by an appropriate coefficient and then filtered by the same receptive field for stimulus reconstruction. The total sum is passed through a low-pass filter to recover the original input stimulus.
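The decoding path in the caption above (weighted delta pulses, receptive-field filtering, low-pass recovery) can be sketched in one temporal dimension as follows. This is an illustrative toy, not the paper's algorithm: the weights, which in the actual decoding come from solving a linear system derived from the spike times, are assumed given here, and all names and values are assumptions.

```python
import numpy as np

def decode(spike_times, weights, rf, fs=1000.0, duration=5.0, cutoff=40.0):
    """Place a weighted delta pulse at each spike time, filter the pulse
    train by the receptive field `rf`, and apply an ideal low-pass filter
    (FFT truncation) to recover a bandlimited signal."""
    n = int(fs * duration)
    pulses = np.zeros(n)
    for t, w in zip(spike_times, weights):
        pulses[int(t * fs)] += w                 # weighted delta train
    filtered = np.convolve(pulses, rf)[:n]       # receptive-field filtering
    spec = np.fft.rfft(filtered)                 # ideal low-pass via FFT
    spec[np.fft.rfftfreq(n, 1.0 / fs) > cutoff] = 0.0
    return np.fft.irfft(spec, n)

# Two unit-weight spikes passed through a boxcar receptive field:
rec = decode([0.1, 0.2], [1.0, 1.0], np.ones(10))
```

In the full model this sum runs over all neurons of the population, each with its own Gabor receptive field, before the common low-pass stage.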

We prove and demonstrate that the entire video stream can be recovered from knowledge of the spike times alone, provided that the neural population is sufficiently large. Increasing the number of neurons to achieve a better representation is consistent with basic neurobiological thought [2].

Although highly precise, the responses of visual neurons show some variability across repeated presentations of the same stimulus, which can be attributed to various noise sources [1]. We examine the effect of noise on our algorithm and show that the reconstruction quality degrades gracefully when white noise is present at the input or in the feedback loop.
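A toy experiment, ours rather than the paper's analysis, illustrates the kind of degradation meant here: white noise added to the input current of the threshold encoder jitters the spike times, while spiking continues at roughly the same rate. All values below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def spike_times_with_noise(sigma, dt=1e-3, n=5000, threshold=1.0):
    """Threshold-and-reset encoding of a constant current with additive
    white Gaussian noise of standard deviation `sigma` at the input."""
    current = 2.0 + sigma * rng.standard_normal(n)   # noisy input current
    v, times = 0.0, []
    for k in range(n):
        v += current[k] * dt
        if v >= threshold:
            times.append((k + 1) * dt)
            v -= threshold                           # negative-feedback reset
    return np.array(times)

clean = spike_times_with_noise(0.0)   # regular spikes every ~0.5 s
noisy = spike_times_with_noise(5.0)   # similar rate, jittered spike times
```

Because the noise perturbs spike times rather than eliminating spikes, the reconstruction error grows smoothly with the noise level instead of failing abruptly.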


  1. Keat J, Reinagel P, Reid RC, Meister M: Predicting every spike: A model for the responses of visual neurons. Neuron. 2001, 30: 803-817. 10.1016/S0896-6273(01)00322-1.


  2. Lazar AA, Pnevmatikakis E: Faithful representation of stimuli with a population of integrate-and-fire neurons. Neural Computation. 2008, to appear.




Acknowledgements

This work is supported by NIH grant R01 DC008701-01 and NSF grant CCF-06-35252. EA Pnevmatikakis is also supported by the Onassis Public Benefit Foundation.

Author information



Corresponding author

Correspondence to Eftychios A Pnevmatikakis.

Rights and permissions

Open Access This article is published under license to BioMed Central Ltd. It is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Lazar, A.A., Pnevmatikakis, E.A. A simple spiking retina model for exact video stimulus representation. BMC Neurosci 9 (Suppl 1), P130 (2008).
