- Poster presentation
- Open Access
Intrinsic neural response properties are sufficient to achieve time coding
- Thomas Voegtlin and
- Sam McKennoch
https://doi.org/10.1186/1471-2202-10-S1-P133
© Voegtlin and McKennoch; licensee BioMed Central Ltd. 2009
- Published: 13 July 2009
Keywords
- Learning Rule
- Response Property
- Generalization Capability
- Spike Time
- Neural Computation
A fundamental question in time coding is how action potentials, which arrive from different synapses at different times, are integrated at the soma. To achieve useful generalization, it has long been proposed that neural networks should use distributed representations, in which the activities of different neurons can be combined in a meaningful way. In the context of time coding, this means that spikes arriving from different synapses at different times must be combined meaningfully.
Classically, computational models of spiking neural networks have achieved this combinatorial capability by exploiting the shape of post-synaptic currents [1, 2]. In this approach, post-synaptic potentials (PSPs) arriving from different synapses at different times are combined linearly at the soma; the neuron's spike time depends on when this linear combination of PSPs crosses the firing threshold. This approach, however, has serious limitations. For example, the effective coding interval is limited by the lengths of the rising segments of the PSPs, which are very short in practice [2].
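As a concrete illustration, the following is a minimal sketch of this PSP-based picture; the double-exponential kernel, time constants and threshold are illustrative assumptions, not taken from the cited models. The membrane potential is a weighted sum of stereotyped PSP kernels triggered by the input spike times, and the output spike is the first threshold crossing.

```python
import numpy as np

# Hypothetical sketch of the PSP-based view (kernel shape, time constants and
# threshold are illustrative assumptions, not taken from the cited models).

def psp_kernel(s, tau_m=10.0, tau_s=2.5):
    """Double-exponential PSP kernel; zero for s <= 0 (times in ms)."""
    return np.where(s > 0, np.exp(-s / tau_m) - np.exp(-s / tau_s), 0.0)

def output_spike_time(input_times, weights, threshold=0.5, t_max=50.0, dt=0.01):
    """First time the weighted sum of PSPs crosses the threshold, or None."""
    t = np.arange(0.0, t_max, dt)
    v = np.zeros_like(t)
    for t_i, w_i in zip(input_times, weights):
        v += w_i * psp_kernel(t - t_i)          # linear summation at the soma
    crossings = np.where(v >= threshold)[0]
    return t[crossings[0]] if crossings.size else None

# The output spike time shifts with the relative timing of the inputs, but only
# while the potential is still on the short rising segments of the PSPs.
print(output_spike_time([2.0, 4.0, 7.0], [0.6, 0.5, 0.4]))
```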
We have developed a different method for combining spike times, based on the fact that a neuron's response to synaptic currents depends on its internal state [1]. In general, this dependency is expressed by the neuron's Phase Response Curve (PRC). In PSP-based models, this dependency has either been neglected or considered only as a possible refinement of the PSP-based approach. However, we have shown that rich computations can be performed by considering only the response properties of the neurons (their PRC) and completely neglecting the shape of the PSPs (synaptic currents are modeled as Dirac pulses).
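The sketch below illustrates this idea for a theta neuron; the baseline current, weights and integration step are illustrative assumptions, not the authors' implementation. Each input is a Dirac pulse that causes an instantaneous, phase-dependent jump of the neuron's phase, which is exactly what the PRC describes; between inputs the phase follows the intrinsic theta-neuron dynamics.

```python
import numpy as np

# Minimal sketch of a theta neuron driven by Dirac-pulse synapses (baseline
# current, weights and integration step are illustrative assumptions).

def phase_jump(theta, w):
    """Instantaneous effect of a Dirac current pulse of area w on the phase:
    integrating d(theta) / (1 + cos(theta)) = w gives
    theta_new = 2 * arctan(tan(theta / 2) + w)."""
    return 2.0 * np.arctan(np.tan(theta / 2.0) + w)

def run_theta_neuron(input_times, weights, I0=0.005, t_max=50.0, dt=0.01):
    """Integrate d(theta)/dt = (1 - cos theta) + I0 * (1 + cos theta) with
    impulsive inputs; return the output spike time (theta reaching pi)."""
    theta, t = 0.0, 0.0
    pending = sorted(zip(input_times, weights))
    while t < t_max:
        while pending and pending[0][0] <= t:
            _, w = pending.pop(0)
            theta = phase_jump(theta, w)        # phase-dependent jump = PRC
        theta += dt * ((1.0 - np.cos(theta)) + I0 * (1.0 + np.cos(theta)))
        t += dt
        if theta >= np.pi:                      # the neuron fires at theta = pi
            return t
    return None

# The output spike time is a smooth, non-linear function of the input spike
# times, even though the synaptic currents have no temporal extent.
print(run_theta_neuron([2.0, 6.0], [0.1, 0.1]))
```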
We have derived a learning rule for theta neurons that is adapted to their PRC. The result is a network with learning and generalization capabilities similar to those of a non-linear perceptron. In addition, our approach does not suffer from the limitations of PSP-based models: the coding interval is much longer than a PSP, extending over the monotonic segments of the PRC. Our results suggest that the response properties of neurons are more relevant to neural computation than the exact shape of synaptic currents.
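As a hedged illustration of the kind of spike-time learning this makes possible, and not the analytical rule derived in [2], the sketch below reuses run_theta_neuron from the previous example and adjusts the synaptic weights by gradient descent on the squared spike-time error; a finite-difference estimate stands in for the gradient that the actual rule computes through the PRC, and the learning rate, step size and target time are arbitrary assumptions.

```python
# Illustration only: finite-difference gradient descent on the output spike time
# of the theta neuron sketched above (learning rate, step sizes and target time
# are arbitrary assumptions; the paper derives the exact gradient via the PRC).

def train_spike_time(input_times, weights, target, lr=1e-4, eps=0.01, steps=100):
    weights = list(weights)
    for _ in range(steps):
        t_out = run_theta_neuron(input_times, weights)
        if t_out is None or abs(t_out - target) < 0.05:
            break
        grads = []
        for i in range(len(weights)):
            bumped = list(weights)
            bumped[i] += eps                    # perturb one weight at a time
            t_bumped = run_theta_neuron(input_times, bumped)
            grads.append(0.0 if t_bumped is None else (t_bumped - t_out) / eps)
        # descend the squared spike-time error 0.5 * (t_out - target)^2
        weights = [w - lr * (t_out - target) * g for w, g in zip(weights, grads)]
    return weights

# Move the output spike toward a (hypothetical) target time of 13.0.
trained = train_spike_time([2.0, 6.0], [0.1, 0.1], target=13.0)
print(run_theta_neuron([2.0, 6.0], trained))
```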
References
- Gütig R, Sompolinsky H: The tempotron: a neuron that learns spike timing-based decisions. Nature Neuroscience. 2006, 9: 420-428. 10.1038/nn1643.
- McKennoch S, Voegtlin T, Bushnell L: Spike-timing error backpropagation in theta neuron networks. Neural Computation. 2009, 21: 9-45. 10.1162/neco.2009.09-07-610.
Copyright
This article is published under license to BioMed Central Ltd.