- Meeting abstracts
26th Annual Computational Neuroscience Meeting (CNS*2017): Part 2
BMC Neuroscience volume 18, Article number: 59 (2017)
P1 Potential functions of different temporal patterns of intermittent neural synchronization
Leonid L. Rubchinsky1,2, Sungwoo Ahn3
1Indiana University Purdue University Indianapolis, Indianapolis, IN 46032, USA; 2Stark Neurosciences Research Institute, Indiana University School of Medicine, Indianapolis, IN 46032, USA; 3Department of Mathematics, East Carolina University, Greenville, NC 27858, USA
Correspondence: Leonid L. Rubchinsky (lrubchin@iupui.edu)
BMC Neuroscience 2017, 18(Suppl 1):P1
Synchronization of neural activity has been associated with several neural functions, and abnormalities of neural synchrony are implicated in different neurological and neuropsychiatric diseases. Neural synchrony in the brain is usually intermittent rather than perfect, even on very short time-scales. Temporal patterning of synchrony may impact neural function even if the average synchrony strength is fixed (a few long intervals of desynchronized dynamics may be functionally different from many short desynchronized intervals, even if the average synchrony is the same). Thus, it is of interest to explore the network effects of different temporal patterns of neural synchrony.
Detection and quantification of the temporal patterning of synchronization is possible on very short time-scales (down to one cycle of oscillation, provided that the data episode under analysis possesses a statistically significant synchrony level on average [1,2]). These techniques allow for exploration of the fine temporal structure of synchronization of neural oscillations. Experimental studies of neural synchrony in different neural systems report a feature that appears to be universal: the intervals of desynchronized activity are predominantly very short (although they may be more or less numerous, which affects the average synchrony). This feature has been observed in different brain areas (cortical and subcortical), different species (humans and rodents), different brain rhythms (alpha, beta, theta), and different disease and behavioral states [3–5].
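As a rough illustration of this kind of analysis, the sketch below extracts desynchronization episode durations from a pair of noisy oscillatory signals. It is a simplified, sample-based paraphrase, not the cycle-by-cycle first-return-map method of [1,2]; the threshold `tol` and all signal parameters are arbitrary choices for the example.

```python
import numpy as np
from scipy.signal import hilbert

def desync_episode_durations(x, y, tol=np.pi / 2):
    """Durations (in samples) of episodes where the wrapped phase
    difference between x and y strays more than `tol` from its
    preferred (circular mean) value."""
    phase_diff = np.angle(hilbert(x)) - np.angle(hilbert(y))
    phase_diff = np.angle(np.exp(1j * phase_diff))          # wrap to (-pi, pi]
    preferred = np.angle(np.mean(np.exp(1j * phase_diff)))  # circular mean lag
    deviation = np.abs(np.angle(np.exp(1j * (phase_diff - preferred))))
    # run-length encode the desynchronized stretches
    desync = np.concatenate(([0], (deviation > tol).astype(int), [0]))
    edges = np.diff(desync)
    starts, ends = np.where(edges == 1)[0], np.where(edges == -1)[0]
    return ends - starts

# Toy data: two noisy 10 Hz oscillations sampled at 1 kHz.
rng = np.random.default_rng(0)
t = np.arange(0, 10, 0.001)
x = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
y = np.sin(2 * np.pi * 10 * t + 0.3) + 0.5 * rng.standard_normal(t.size)
print(desync_episode_durations(x, y))
```

The distribution of these durations (many short episodes versus a few long ones) is the quantity whose functional role the abstract investigates.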
These observations suggest that numerous short desynchronization events may facilitate the creation and break-up of functional synchronized neural assemblies, because both synchronized and desynchronized states are already present in the neural activity. This in turn may promote adaptability and quick reactions of neural systems. Other highly adaptable physiological systems may express short desynchronization dynamics too [6].
We use a minimal network of simple conductance-based model neurons to study how different patterning of intermittent neural synchrony affects the formation of synchronized states in response to a common synaptic input to the network. We find that networks with short desynchronization dynamics are easier to synchronize with the input signal, and we consider this phenomenon in the context of the experimental observations of neural synchrony patterning.
References
1. Ahn S, Park C, Rubchinsky LL: Detecting the temporal structure of intermittent phase locking. Physical Review E 2011, 84: 016201.
2. Rubchinsky LL, Ahn S, Park C: Dynamics of synchronization-desynchronization transitions in intermittent synchronization. Frontiers in Physics 2014, 2:38.
3. Ahn S, Rubchinsky LL: Short desynchronization episodes prevail in the synchronous dynamics of human brain rhythms. Chaos 2013, 23: 013138.
4. Ahn S, Rubchinsky LL, Lapish CC: Dynamical reorganization of synchronous activity patterns in prefrontal cortex - hippocampus networks during behavioral sensitization. Cerebral Cortex 2014, 24: 2553–2561.
5. Ratnadurai-Giridharan S, Zauber SE, Worth RM, Witt T, Ahn S, Rubchinsky LL: Temporal patterning of neural synchrony in the basal ganglia in Parkinson’s disease. Clinical Neurophysiology 2016, 127:1743–1745.
6. Ahn S, Solfest J, Rubchinsky LL: Fine temporal structure of cardiorespiratory synchronization. American Journal of Physiology - Heart and Circulatory Physiology 2014, 306: H755–H763.
P2 NestMC: A morphologically detailed neural network simulator for modern high performance computer architectures
Wouter Klijn1, Ben Cumming2, Stuart Yates2, Vasileios Karakasis3, Alexander Peyser1
1Jülich Supercomputing Centre, Forschungszentrum Jülich, Jülich, 52425, Germany; 2Future Systems, Swiss National Supercomputing Centre, Zürich, 8092, Switzerland; 3User Engagement & Support, Swiss National Supercomputing Centre, Lugano, 6900, Switzerland
Correspondence: Wouter Klijn (w.klijn@fz-juelich.de)
BMC Neuroscience 2017, 18(Suppl 1):P2
NestMC is a new multicompartment neural network simulator currently under development as a collaboration between the Simulation Lab Neuroscience at the Forschungszentrum Jülich, the Barcelona Supercomputing Center and the Swiss National Supercomputing Center. NestMC will enable new scales and classes of morphologically detailed neuronal network simulations on current and future supercomputing architectures.
A number of “many-core” architectures such as GPU and Intel Xeon Phi based systems are currently available. To optimally use these emerging architectures, new approaches in software development and algorithm design are needed. NestMC is being written specifically with performance on this hardware in mind (Figure 1); it aims to be a flexible platform for neural network simulation while keeping interoperability with models and workflows developed for NEST and NEURON.
The improvements in performance and flexibility in themselves will enable a variety of novel experiments, but the design is not yet finalized, and is driven by the requirements of the neuroscientific community. The prototype is open source (https://github.com/eth-cscs/nestmc-proto, https://eth-cscs.github.io/nestmc/) and we invite you to have a look. We are interested in your ideas for features which will make new science possible: we ask you to think outside of the box and build this next generation neurosimulator together with us.
Which directions do you want us to go in?
- Simulate large morphologically detailed networks for longer time scales: study slow-developing phenomena.
- Reduce the time to solution: perform more repeat experiments for increased statistical power.
- Create high performance interfaces with other software: perform online statistical analysis and visualization of your running models, study the brain at multiple scales with specialized tools, or embed detailed networks in physically modeled animals.
- Optimize dynamic structures for models with time-varying numbers of neurons, synapses and compartments: simulate neuronal development, healing after injury and age-related neuronal degeneration.
Do you have other great ideas? Let us know!
Figure 1. Strong Scaling for NestMC. Time to solution for two models of fixed size as a function of the number of compute nodes. Increasing the number of processors reduces the time to solution with little increase in the compute resources required, measured in node hours (nh)
P3 Automatically generating HPC-optimized code for simulations using neural mass models
Marmaduke Woodman1, Sandra Diaz-Pier2, Alexander Peyser2
1Institut de Neurosciences des Systèmes, Aix Marseille Université, Marseille, France; 2Simulation Lab Neuroscience, Forschungszentrum Jülich, Jülich, Germany
Correspondence: Sandra Diaz-Pier (s.diaz@fz-juelich.de)
BMC Neuroscience 2017, 18(Suppl 1):P3
High performance computing is becoming more accessible and more desirable to researchers in neuroscience every day. Simulations of brain networks and analyses of medical data can now be performed at larger scales and with higher resolution. However, many software tools currently available to neuroscientists are not yet capable of utilizing the full power of supercomputers, GPGPUs and other computational accelerators.
The Virtual Brain (TVB) [1] software is a validated and popular choice for the simulation of whole brain activity. With TVB, the user can create simulations using neural mass models whose outputs can be mapped to different experimental modalities (EEG, MEG, fMRI, etc.). TVB allows scientists to explore and analyze simulated and experimental signals, and contains tools to evaluate relevant scientific parameters over both types of data [2]. Internally, the TVB simulator contains several models for the generation of neural activity at the region scale. Most of these neural mass models can be efficiently described by groups of coupled differential equations which are numerically solved over large spans of simulation time. Currently, the models simulated in TVB are written in Python and have not been optimized for parallel execution or deployment on high performance computing architectures. Moreover, several elements of these models could be abstracted, generalized and re-used, but the right abstract description for these models is still missing.
In this work, we present the first results of porting several workflows from The Virtual Brain onto high performance computing accelerators. In order to reduce the effort required of neuroscientists to utilize different HPC platforms, we have developed an automatic code generation tool which can be used to define abstract models at all stages of a simulation. These models are then translated into hardware-specific code. Our simulation workflows involve different neural mass models (Kuramoto [3], reduced Wong-Wang [4], etc.) and pre-processing and post-processing kernels (balloon model [5], correlation metrics, etc.). We discuss the strategies used to keep the code portable across several architectures yet optimized for each platform. We also point out the benefits and limitations of this approach. Finally, we show initial performance comparisons and give the user an idea of what can be achieved with the new code in terms of scalability and simulation times.
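To make the computational pattern concrete, the following is a minimal sketch of the kind of kernel such a generator targets: a Kuramoto network [3] stepped with explicit Euler. This illustrates only the underlying mathematics; it is not TVB code or the generator's output, and all parameter values are placeholders.

```python
import numpy as np

def kuramoto_step(theta, omega, weights, k, dt):
    """One Euler step of dtheta_i/dt = omega_i + k * sum_j w_ij * sin(theta_j - theta_i)."""
    coupling = (weights * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
    return theta + dt * (omega + k * coupling)

n = 76                                     # e.g. one node per connectome region
rng = np.random.default_rng(1)
theta = rng.uniform(0, 2 * np.pi, n)       # oscillator phases
omega = rng.normal(2 * np.pi * 10, 1, n)   # natural frequencies (rad/s)
weights = rng.random((n, n))               # structural connectivity (placeholder)
for _ in range(10000):
    theta = kuramoto_step(theta, omega, weights, k=0.1, dt=1e-4)
```

The inner update is exactly the kind of regular, data-parallel loop that maps well onto GPUs and other accelerators once generated as hardware-specific code.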
Acknowledgements
We would like to thank our collaborators Lia Domide, Mihai Andrei, Vlad Prunar for their work on the integration of the new software with the already existing TVB platform as well as Petra Ritter and Michael Schirner for providing an initial use case for our tests. The authors would also like to acknowledge the support by the Excellence Initiative of the German federal and state governments, the Jülich Aachen Research Alliance CRCNS grant and the Helmholtz Association through the portfolio theme SMHB and the Initiative and Networking Fund. In addition, this project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 720270 (HBP SGA1).
References
1. Sanz Leon P, Knock SA, Woodman MM, Domide L, Mersmann J, McIntosh AR, Jirsa V: The Virtual Brain: a simulator of primate brain network dynamics. Front. Neuroinform. 2013, 7: 10.
2. Sanz Leon P, et al.: Mathematical framework for large-scale brain network modeling in The Virtual Brain. Neuroimage 2015, 111:385–430.
3. Kuramoto Y: Phase-and center-manifold reductions for large populations of coupled oscillators with application to non-locally coupled systems. Int. J. Bifurcat. Chaos 1997, 7: 789–806.
4. Wong KF, Wang XJ: A recurrent network mechanism of time integration in perceptual decisions. Journal of Neuroscience 2006, 26(4):1314–1328.
5. Buxton RB, Wong EC, Frank LR: Dynamics of blood flow and oxygenation changes during brain activation: the balloon model. Magnetic Resonance in Medicine 1998, 39(6):855–864.
P4 Conjunction or co-activation? A multi-level MVPA approach to task set representations
James Deraeve1, Eliana Vassena2, William Alexander1
1Department of Experimental Psychology, Ghent University, Ghent, 9000, Belgium; 2Donders Center for Cognitive Neuroimaging, Radboud University, Nijmegen, 6525HR, Netherlands
Correspondence: James Deraeve (james.deraeve@ugent.be)
BMC Neuroscience 2017, 18(Suppl 1):P4
In cognitive experiments, participants are often required to perform tasks where they must apply simple rules, such as “if target is a square, press left”. In everyday life, however, behavior is more complex and may be governed by collections of rules - task sets - that need to be selectively applied in order to achieve a goal. While previous research has demonstrated the involvement of dorsolateral prefrontal cortex (dlPFC) in the representation and maintenance of relevant task sets, the nature of this representation remains an open question. One possibility is that task sets are represented as the co-activation of multiple neurons, each of which codes for a single rule. An alternative possibility is that the activity of individual neurons encodes the conjunction of simple rules. In order to answer this question, subjects performed a delayed match-to-sample task while undergoing fMRI. On each trial, subjects were shown a cue indicating one of three possible task sets. Each task set was composed of two out of three possible rules: color/orientation, orientation/shape or shape/color. Following a maintenance period, subjects were presented with a sample stimulus and were asked to memorize the cued task set dimensions. Subsequently, a target stimulus was shown and the subjects had to report how many cued dimensions of the sample stimulus matched the target stimulus. A control condition was also included in which subjects indicated whether the direction of an arrow (left/right) matched a cued direction. Critically, each task set had one rule in common with another task set and the other rule in common with the remaining task set, allowing us to ascertain the nature of the underlying neural representations through feature selection and multivariate decoding. Under the co-activation hypothesis, voxels important in classifying between task set A and task set B should be those coding for the rules these task sets do not have in common. Since these are the rules that constitute the remaining task set C, classifying task sets A and B against C and control using these important voxels as input features should yield classification accuracies at chance. Under the conjunction hypothesis, important voxels code for a specific conjunction of rules, and classification of task sets A and B against C and control is possible. A whole-brain searchlight analysis reveals a distributed network of regions, including dlPFC, ventrolateral PFC, and parietal cortex, with maintenance-period activity consistent with the co-activation hypothesis. Conversely, activity in visual cortex during maintenance appears to be consistent with the conjunction hypothesis.
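The decoding logic can be summarized in a short sketch. Everything below is hypothetical (synthetic data, an arbitrary classifier and voxel count, and a feature-selection step that in a real analysis would be nested inside cross-validation); it only illustrates how the two hypotheses make different predictions for the cross-classification accuracy.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.standard_normal((240, 500))      # trials x voxels (synthetic)
y = rng.integers(0, 4, 240)              # 0 = A, 1 = B, 2 = C, 3 = control

# Step 1: find voxels that matter for discriminating task set A from B.
ab = np.isin(y, [0, 1])
clf = LinearSVC(dual=False).fit(X[ab], y[ab])
important = np.argsort(np.abs(clf.coef_[0]))[-50:]   # top-weighted voxels

# Step 2: re-use only those voxels to classify A/B against C/control.
labels = np.isin(y, [0, 1]).astype(int)
acc = cross_val_score(LinearSVC(dual=False), X[:, important], labels, cv=5)
print(acc.mean())   # ~chance under co-activation, above chance under conjunction
```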
Acknowledgements
This research was supported by FWO-Flanders Odysseus II award #G.OC44.13N to WHA.
P5 Understanding Adaptation in Human Auditory Cortex with Modeling
David Beeman1, Pawel Kudela2, Dana Boatman-Reich3,4, William S. Anderson2
1Department of Electrical, Computer, and Energy Engineering, University of Colorado, Boulder, CO 80309, USA; 2Department of Neurosurgery, Johns Hopkins School of Medicine, Baltimore, MD 21287, USA; 3Department of Neurology, Johns Hopkins School of Medicine, Baltimore, MD 21287, USA; 4Department of Otolaryngology, Johns Hopkins School of Medicine, Baltimore, MD 21287, USA
Correspondence: David Beeman (dbeeman@colorado.edu)
BMC Neuroscience 2017, 18(Suppl 1):P5
Neural responses in sensory cortex decrease with stimulus repetition, a phenomenon known as adaptation or repetition suppression. In auditory cortex, adaptation is thought to facilitate detection of novel sounds and improve perception in noisy environments. However, the neural mechanisms of adaptation in the human brain remain poorly understood. Here, we combine computational modeling with intracranial electrocorticographic (ECoG) recordings acquired directly from human auditory cortex of epilepsy patients undergoing pre-surgical evaluation.
The model was based on a large layer IV model of primary auditory cortex [1] with multi-compartmental pyramidal cells and fast-spiking inhibitory basket cells, implemented in GENESIS 2.4. Thalamic inputs target rows along a tonotopic axis. We extended the model to include short-term depression (STD) on excitatory synapses within and between multi-compartmental pyramidal cells and inhibitory basket cells. Model simulations were then compared with human ECoG recordings. Figure 1A shows population auditory evoked potentials (AEP) derived from ECoG recordings using an established 300-trial passive oddball paradigm to measure adaptation [2]. The repetitive stimulus was a 1000 Hz tone (200 ms duration; 82% of trials); the non-adapting stimulus was an infrequently presented 1200 Hz tone. Stimuli were presented binaurally at 1.4 s intervals. All patients had electrodes covering posterior temporal cortex auditory areas. AEP results show adaptation to the frequent (repetitive) stimulus for the N1-P2 peaks at about 100–200 ms post-stimulus. Results from model simulations performed using stimulus parameters from the ECoG recordings are shown in Figure 1B and are consistent with the patterns of adaptation observed in the ECoG recordings.
In the model, as well as in cortex, the spatial separation between the locations of the thalamic afferents for the two tones along the tonotopic axis is large enough that they excite distinct groups of neurons. On average, frequent tone stimuli follow each other at 1.4 s intervals. The probability of two sequential infrequent tones is low, and the average interval between these stimuli is much larger. Fits of the STD model parameters to measurements in mouse auditory and somatosensory cortex reveal multiple time scales of adaptation. A significant time constant, on the order of one second, governs the time it takes for a depressed synaptic weight to recover to its original value. Consequently, repeated presentation of the same tone maintains some synaptic depression between pulses, whereas responses to infrequent stimuli will have recovered from depression. This is shown in Figure 1B for both the N1 peak, which arises from excitatory synaptic currents in pyramidal cells, and the P2 peak, which arises from the subsequent inhibitory currents. These results are consistent with primary auditory depth recordings from one patient. We have also observed similar results across three variations of our model: the single layer IV model with pyramidal and basket cells, a single-layer version augmented with facilitating Martinotti cells, and most recently a detailed multilayer model that includes layers II/III and receives depressing synaptic input from layer IV.
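A minimal sketch of the depressing-synapse bookkeeping described above, in the style of standard short-term depression models: available synaptic resources recover exponentially between spikes and are partially consumed by each spike. The parameter values are illustrative, not those fitted in the GENESIS model.

```python
import numpy as np

def std_weights(spike_times, tau_rec=1.0, use=0.4):
    """Fraction of synaptic resources available at each presynaptic spike.
    Between spikes the resource x recovers toward 1 with time constant
    tau_rec (s); each spike consumes a fraction `use` of what is left."""
    x, last_t, out = 1.0, None, []
    for t in spike_times:
        if last_t is not None:
            x = 1.0 - (1.0 - x) * np.exp(-(t - last_t) / tau_rec)
        out.append(x)        # effective weight scales with x at spike time
        x *= (1.0 - use)
        last_t = t
    return np.array(out)

# Frequent tone every 1.4 s: synapses never fully recover between tones,
# so the response to the repeated tone stays depressed.
print(std_weights(np.arange(0, 10, 1.4)))
```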
Figure 1. A. Average AEPs from [2]. B. Results from the model
Acknowledgements
Supported by U.S. Army Research Grants W911NF-14-1-0491 and W911NF-10-2-0022
References
1. Beeman D: A modeling study of cortical waves in primary auditory cortex. BMC Neurosci 2013, 14(Suppl 1):23.
2. Eliades SJ, Crone NE, Anderson WS, Ramadoss D, Lenz FA, Boatman-Reich D: Adaptation of high-gamma responses in human auditory association cortex. J Neurophysiol 2014, 112:2147–2163.
P6 Silent and bursting states of Purkinje cell activity modulate VOR adaptation
Niceto R. Luque1,2,3, Francisco Naveros4, Richard R. Carrillo4, Eduardo Ros4, Angelo Arleo1,2,3
1INSERM, U968, Paris, France; 2Sorbonne Universités, UPMC University Paris 06, UMR_S 968, Institut de la Vision, Paris, France; 3CNRS, UMR_7210, Paris, France; 4Department of Computer Architecture and Technology, University of Granada (CITIC), Granada, Spain
Correspondence: Niceto R. Luque (niceto.luque@inserm.fr), Angelo Arleo (angelo.arleo@inserm.fr)
BMC Neuroscience 2017, 18(Suppl 1):P6
Within the cerebellar cortex, the inhibitory projections of Purkinje cells to the deep cerebellar nuclei mediate fine motor coordination. Understanding the dynamics of Purkinje cell discharges can thus provide insights into sensorimotor adaptation. It is known that Purkinje cells exhibit three firing modes, namely tonic, silent, and bursting. However, the relation between these firing patterns and cerebellar-dependent behavioural adaptation remains poorly understood. Here, we present a spiking cerebellar model that explores the putative role of the multiple Purkinje operating modes in vestibulo-ocular reflex (VOR) adaptation. The VOR stabilizes images on the retina during ipsilateral head rotations by driving compensatory contralateral eye movements. The model captures the main cerebellar microcircuit properties and incorporates multiple synaptic plasticity mechanisms at different cerebellar sites (parallel fibres - Purkinje cells, mossy fibres - deep cerebellar nuclei, and Purkinje cells - deep cerebellar nuclei). An analytically reduced version of a detailed Purkinje cell model is at the core of the overall cerebellar function, which allows us to examine the impact of different Purkinje firing modes on VOR adaptation performance. We show that the Purkinje silent mode, through transient disinhibition of targeted cells, gates the neural signals conveyed by mossy fibres to the deep cerebellar nuclei. This gating mechanism accounts for early and coarse VOR, prior to the late consolidation of the reflex. In turn, properly timed and sized Purkinje bursts can finely shape the balance between long-term depression and potentiation (LTD/LTP) at mossy fibre - deep cerebellar nuclei synapses. This fine tuning of the LTD/LTP balance increases the rate of VOR consolidation. Finally, the silent mode can facilitate the VOR reversal phase by reshaping previously learned synaptic weight distributions. Altogether, these results predict that the interburst dynamics of Purkinje cell activity are instrumental to VOR learning and reversal phase adaptation.
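The gating idea can be caricatured in a few lines. This is our paraphrase of the described mechanism, not the model's actual learning equations; the rates, the discrete firing-mode labels, and all constants are assumptions made for illustration.

```python
def mf_dcn_update(w, mf_rate, pc_mode, ltp=1e-3, ltd=2e-3):
    """Toy weight update at a mossy fibre - DCN synapse, gated by the firing
    mode of the overlying Purkinje cells: potentiate while they are silent
    (DCN disinhibited), depress during bursts, no change in tonic mode."""
    if pc_mode == "silent":
        w += ltp * mf_rate
    elif pc_mode == "burst":
        w -= ltd * mf_rate
    return max(w, 0.0)

w = 0.5
for mode in ["tonic", "silent", "silent", "burst", "tonic"]:
    w = mf_dcn_update(w, mf_rate=50.0, pc_mode=mode)
    print(mode, round(w, 3))
```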
Acknowledgements
This study was supported by the European Union NR (658479-Spike Control), the Spanish National Grant NEUROPACT (TIN2013-47069-P) and by the Spanish National Grant PhD scholarship (AP2012-0906).
P7 The “convis” framework: Population Simulation of the Visual System with Automatic Differentiation using theano
Jacob Huth1, Timothée Masquelier2, Angelo Arleo1
1Sorbonne Universités, UPMC Univ Paris 06, INSERM, CNRS, Institut de la Vision, Paris, France; 2CERCO UMR5549, CNRS, University Toulouse 3, Toulouse, France
Correspondence: Jacob Huth (jahuth@uos.de)
BMC Neuroscience 2017, 18(Suppl 1):P7
In our effort to extend a mechanistic retina model [1] to use arbitrary non-separable spatio-temporal filters, we created a Python toolbox that is able to formulate a variety of visual models which can then be computed efficiently on a graphics card and even fitted efficiently to electrophysiological recordings using automatic differentiation.
The model for which we were developing this framework is a linear-nonlinear cascade model with contrast gain control and LIF spiking neurons. Previous implementations run very efficiently even on single-core systems due to recursive filtering, which restricts temporal filters to a difference of exponentials and spatial filters to 2D Gaussians that are either circular or aligned with the x and y axes of the simulation. For our research, we required the shape of receptive fields to be arbitrary in space and time, and we explored the possibility of using 3D convolution to model finite impulse response linear filters while still being able to simulate large numbers of cells in reasonable time.
We chose theano [2], a Python package developed for deep learning, to construct a computational graph of mathematical operations, including 3D convolutions and, alternatively, recursive filtering, which can then be compiled via C/C++, CUDA or OpenCL language bindings depending on their availability on the specific machine. The graph is optimized before compile time, removing redundancy, choosing appropriate implementation details and replacing some numerically unstable expressions with stable algorithms.
By using this package, we get automatic differentiation, and thus much more efficient optimization, for free. The retina model we are using, while being biologically plausible through its local contrast gain control mechanism, has a notoriously large number of parameters, which moreover are not all independent. But since we implemented the model as an abstract computational graph, rather than as specific simulation code, the model can be analyzed with computer algebra system techniques, and gradients of an output with respect to a specific input can be derived automatically. This allows a range of efficient optimization algorithms to be used, such as the nonlinear conjugate gradient method, Newton's method, or Hessian-free optimization.
We examined the error function of the retina model during fitting with respect to different parameters and found that “almost-linear” input parameters, which have an essentially linear effect on the output but occur before a non-linearity, keep their convex shape and can be fitted satisfactorily assuming a quadratic error function. Independent parameters, which each have quadratic error functions, can be optimized with minimal exploration of the parameter space, leading to very fast convergence. In the case of interdependent parameters, the model can be reformulated to eliminate identical solutions. We distinguished areas of convex and concave error functions and found that, for non-linear parameters, descent to the global minimum is much faster if gradient information can be used.
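A minimal theano example of the principle: build a symbolic graph for a toy linear-nonlinear stage, then obtain gradients of a loss with respect to its parameters automatically. The filter shape, loss, and parameter values are placeholders; convis wraps far larger graphs of the same kind.

```python
import numpy as np
import theano
import theano.tensor as T

stim = T.dvector('stim')                   # input luminance trace
gain = theano.shared(1.0, name='gain')     # an 'almost-linear' parameter
tau = theano.shared(5.0, name='tau')       # a non-linear filter parameter
kernel = T.exp(-T.arange(50) / tau)        # exponential temporal filter
rate = T.nnet.relu(gain * T.dot(stim[:50], kernel))   # linear-nonlinear stage
loss = (rate - 2.0) ** 2                   # toy fitting objective
grads = T.grad(loss, [gain, tau])          # automatic differentiation
f = theano.function([stim], [loss] + grads)
print(f(np.random.rand(50)))
```

The graph is compiled once (to C/CUDA where available), and the same symbolic machinery that evaluates the model yields the gradients used for fitting.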
The package is available via PyPI or GitHub [3].
Acknowledgements
This research was supported by ANR – Essilor SilverSight Chair ANR-14-CHIN-0001.
References
1. Wohrer A, Kornprobst P: Virtual Retina: a biological retina model and simulator, with contrast gain control. Journal of Computational Neuroscience 2009, 26(2):219–249. http://doi.org/10.1007/s10827-008-0108-4
2. Bastien F, Lamblin P, Pascanu R, Bergstra J, Goodfellow I, Bergeron A, Bouchard N, Warde-Farley D, Bengio Y: Theano: new features and speed improvements. 2012. Retrieved from http://arxiv.org/abs/1211.5590
3. Convis github repository [https://github.com/jahuth/convis/]
P8 Why does neural activity in ASD have low complexity: from a perspective of a small-world network model
Koki Ichinose1, Jihoon Park1, Yuji Kawai1, Junichi Suzuki1, Hiroki Mori2, Minoru Asada1
1Department of Adaptive Machine Systems, Osaka University, Osaka, Japan; 2Department of Computer Science, University of Cergy-Pontoise, Cergy-Pontoise, France
Correspondence: Koki Ichinose (koki.ichinose@ams.eng.osaka-u.ac.jp)
BMC Neuroscience 2017, 18(Suppl 1):P8
Autistic spectrum disorder (ASD) is a neurobiological developmental disorder, and many studies have shown abnormalities of connectivity structure or neural activity in the ASD brain. The most typical example of such abnormality is local over-connectivity, characterized by increased short-range connectivity [1]. Furthermore, it has been reported that neural activity in ASD measured by electroencephalography (EEG) has low complexity (multiscale entropy: MSE) [2] and enhanced high frequency oscillation [3]. However, the mechanism behind the abnormal connectivity and neural activity is not well understood. We aim to understand the relation between connectivity and neural activity in the ASD brain from the perspective of a small-world network model. Our network model consisted of 100 neuron groups, each containing 1000 spiking neurons. The connectivity of the neuron groups was determined according to the Watts and Strogatz method, and the degree of local over-connectivity was controlled by the rewiring probability. In our model, the regular and small-world networks represent the ASD and typical brains, respectively. We analyzed the complexity and frequency spectrum of the neural activities. Figure 1 shows the relation between graph-theoretical properties (clustering coefficient and degree centrality) and complexity. The regular network has local over-connectivity (high clustering coefficient and high degree centrality) corresponding to the connectivity of ASD. These locally over-connected neuron groups of the regular network exhibited higher frequency oscillation and lower complexity than those of other networks.
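The group-level topology and the two graph measures of Figure 1A can be sketched with networkx as below. The sketch uses an unweighted, undirected Watts-Strogatz graph (the actual model uses weighted connections between spiking neuron groups, which are omitted here), and the parameter values are illustrative.

```python
import networkx as nx

n_groups, k = 100, 10
for p, label in [(0.0, "regular (ASD-like)"), (0.1, "small-world (typical)")]:
    # rewiring probability p = 0 leaves the ring lattice fully regular
    g = nx.watts_strogatz_graph(n_groups, k, p, seed=0)
    cc = nx.average_clustering(g)
    dc = sum(nx.degree_centrality(g).values()) / n_groups
    print(f"{label}: clustering = {cc:.3f}, mean degree centrality = {dc:.3f}")
```

Lowering the rewiring probability raises the clustering coefficient, which is the local over-connectivity that the abstract links to high frequency oscillation and low MSE.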
Figure 1. Each dot corresponds to each neuron group in the network and its color indicates the complexity (MSE) of the activity in the neuron group. Panel A shows the graph-theoretical properties of neuron groups. Clustering coefficient indicates how many closed triangle connections each neuron group has. Degree centrality is the sum of the connection strengths into each neuron group. Panel B shows the frequency components of the activities in the neuron groups. The x-axis indicates the peak frequency of neural activity and the y-axis indicates the amplitude of the peak frequency
Conclusion: Our results show that the ASD brain model, which has local over-connectivity (high clustering coefficient and high degree centrality), enhances high frequency oscillation and decreases complexity in neural activity. This implies that local over-connectivity induces the abnormality of neural activity in ASD.
References
1. Courchesne E, Pierce K: Why the frontal cortex in autism might be talking only to itself: local over-connectivity but long-distance disconnection. Current Opinion in Neurobiology 2005, 15(2):225–230.
2. Bosl W, et al.: EEG complexity as a biomarker for autism spectrum disorder risk. BMC Medicine 2011, 9(1):18.
3. Cornew L, et al.: Resting-state oscillatory activity in autism spectrum disorders. Journal of Autism and Developmental Disorders 2012, 42(9):1884–1894.
P9 Phase-locked mode prediction with generalized phase response curve
Sorinel A. Oprisan1 and Austin I. Dave1
1Department of Physics and Astronomy, College of Charleston, Charleston, SC 29424, USA
Correspondence: Sorinel A. Oprisan (oprisans@cofc.edu)
BMC Neuroscience 2017, 18(Suppl 1):P9
Introduction: The simplest possible synchronization mechanism among neurons is based on unidirectional coupling, in which a driving neuron drives the activity of postsynaptic neurons. The phase response curve (PRC) method assumes that the only effect of a presynaptic input is a transient change in the firing phase of the postsynaptic neuron(s). It has been used successfully to predict phase-locked modes and synchrony in neural networks, in particular one-to-one entrainment in networks where the receiving population always follows the driving population [1]. It has been analytically proven and numerically verified that time-delayed feedback can force coupled dynamical systems onto a synchronization manifold that involves the future state of the drive system, i.e. 'anticipating synchronization' [2]. Such a result is counterintuitive, since the future evolution of the drive system is anticipated by the response system despite the unidirectional coupling.
Method: The phase response curve (PRC) tabulates the transient change in the firing period Pi (Fig. 1A) of a neural oscillator in response to one external stimulus at time tsa per cycle of oscillation [3]. The term PRC has been used almost exclusively in regard to a single stimulus per cycle of neural oscillators. Recently, we suggested a generalization of the PRC that allows us to account for the overall resetting when two (tsa, tsb) or more inputs are delivered during the same cycle [4]. We previously investigated the existence and stability of phase-locked modes in a neural network with a fixed-delay feedback [1]. The novelty of this work is the application of the generalized PRC to a realistic neural network with a dynamic feedback loop (Fig. 1B).
Results: Based on the generalized PRC, we predicted a stable phase-locked pattern with t2sa* = 18.0 ms and t2sb* = 44.1 ms. The actual phase-locked values for the fully coupled network were t2sa* = 15.2 ms and t2sb* = 40.2 ms (Fig. 1C).
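The resetting bookkeeping from Figure 1A is straightforward to transcribe; the sketch below uses a hypothetical class-I-like PRC F in place of the measured one and omits the existence/stability analysis of the full network map ([1,4]).

```python
import numpy as np

P_i = 60.0                                       # intrinsic firing period (ms)
F = lambda phi: 0.3 * np.sin(np.pi * phi) ** 2   # hypothetical class-I-like PRC

def reset_period(t_sa, t_sb):
    """Firing period after two stimuli at times t_sa and t_sb in one cycle,
    following Pa = Pi(1 + F(phi_a)) and Pb = Pa(1 + F(phi_b))."""
    P_a = P_i * (1 + F(t_sa / P_i))              # reset by the first stimulus
    P_b = P_a * (1 + F(t_sb / P_a))              # further reset by the second
    return P_b

print(reset_period(18.0, 44.1))                  # period at the predicted locked mode
```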
Figure 1. A. The first stimulus at phase φa = tsa/Pi modifies the intrinsic firing period Pi to Pa = Pi(1 + F(φa)), while the second stimulus at phase φb = tsb/Pa further resets the firing period to Pb = Pa(1 + F(φb)); a typical two-stimulus phase response surface for a class I excitable cell is shown. B. Neural network with driver (1), driven (2), and feedback loop (3). The coupling between the neurons is excitatory (empty triangle) or inhibitory (solid circle). C. A typical stable phase-locked mode of a fully coupled network. The network’s firing period was P1i = 60 ms
Acknowledgements
SAO acknowledges support for this research from NSF-CAREER award IOS 1054914.
References
1. Oprisan SA, Canavier CC: Stability analysis of entrainment by two periodic inputs with a fixed delay. Neurocomputing 2003, 52–54:59–63.
2. Voss HU: Anticipating chaotic synchronization. Physical Review E 2000, 61(5):5115–5119.
3. Perkel DH, Schulman JH, Bullock TH, Moore GP, Segundo JP: Pacemaker neurons: Effects of regularly spaced synaptic input. Science 1964, 145(3627):61–63.
4. Vollmer MK, Vanderweyen CD, Tuck DR, Oprisan SA: Predicting phase resetting due to multiple stimuli. Journal of the South Carolina Academy of Science 2015, 13(2):5–10.
P10 Neural Field Theory of Corticothalamic Prediction and Attention
Tahereh Babaie1,2, Peter Robinson1,2
1School of Physics, Faculty of Science, University of Sydney, Sydney, 2006, NSW, Australia; 2Center of Excellence for Integrative Brain Function, Australian Research Council, Sydney, Australia
Correspondence: Tahereh Babaie (tahereh.babaie@sydney.edu.au)
BMC Neuroscience 2017, 18(Suppl 1):P10
In order to react to the world and achieve survival-relevant outcomes, the brain must attend to those stimuli that are salient, predict their future course, and make use of the results in its responses. In part, this involves combining multiple sensory streams, each of which has a different variance. Experimental evidence shows that the fusion of sensory information is approximately Bayesian. Many theoretical proposals have been made as to how this fusion is achieved, some highly abstract, and some partly based on brain architecture – notable examples include Kalman filters and various predictive coding schemes. However, a common feature is that all proposals to date invoke mathematical operations that the brain must perform at some point, without demonstrating explicitly how neural tissue can accomplish all these tasks, some of which are as complex as matrix inversion and integration over multidimensional probability distributions.
Instead of deciding on a favored mathematical formulation and assuming that it works in the brain, the present work takes the reverse approach of first analyzing realistic corticothalamic responses to simple visual stimuli using neural field theory. This yields system transfer functions that are found to embody key features in common with those of engineering control systems. The finding of analogous quantities in the corticothalamic system enables interpretation of its dynamics in data fusion terms, and assists in localizing the structures in which gain control is possible (see Fig. 1). In particular, these features assist in finding signals within the system that represent input stimuli and their changes; these are exactly the types of quantities used in control systems to enable prediction of future states and adjustment of gains to implement attention. The response properties can then be used to drive attention, prediction, decision, and control.
Figure 1. The physiologically based corticothalamic model (top) and its analogous control system interpretation (bottom). φn represents the external stimuli, while φs and φe are the feedforward and feedback projections, respectively. The arrows in the model represent excitatory effects, while the circles depict inhibitory effects
P11 Top-down dynamics of cortical pitch processing explain the emergence of consonance and dissonance in dyads
Alejandro Tabas1,2, Martin Andermann3, André Rupp3†, Emili Balaguer-Ballester2,4†
1Max Planck Institute for Human Cognitive and Brain Sciences, Saxony, Leipzig, Germany; 2Department of Computing and Informatics. Faculty of Science and Technology, Bournemouth University, Bournemouth, England, UK; 3Biomagnetism Section, Heidelberg University, Baden-Württemberg, Heidelberg, Germany; 4Bernstein Centre for Computational Neuroscience Heidelberg-Mannheim, Heidelberg University, Heidelberg, Germany
Correspondence: Emili Balaguer-Ballester (eb-ballester@bournemouth.ac.uk)
†Joint last authorship
BMC Neuroscience 2017, 18(Suppl 1):P11
Pitch is the perceptual correlate of a sound’s periodicity and a fundamental attribute of auditory sensation. A dyad is a combination of two simultaneous harmonic complex tones that elicit different pitch percepts. Periodicity interactions in dyads give rise to an emergent sensation, described as some degree of consonance or dissonance, strongly correlated with the ratio between the fundamental periods of the involved tones. Simple ratios result in consonant sensations that become increasingly dissonant as the ratio complexity increases.
Consonance and dissonance play a fundamental role in music processing; however, the neural mechanisms underlying the emergence of these percepts are still poorly understood. In this work, we describe a general mechanism for pitch processing that explains, for the first time to our knowledge, the mechanistic relationship between cortical pitch processing and the sensations of consonance and dissonance. The N100 m is a transient neuromagnetic response of the auditory evoked fields observed in antero-lateral Heschl’s gyrus during MEG recordings, and its latency is strongly correlated with perceived pitch [1]. In our study, we first examined the connection between pitch processing and consonance by measuring the dynamics of the N100 m elicited by six different dyads built from two iterated rippled noises (IRN) [2]. We found a strong and significant correlation between the dissonance percept reported by human listeners and the latency of the N100 m.
Next, we used a hierarchical ensemble model of cortical pitch processing in order to understand the observed correlation. The model receives inputs from a realistic model of the auditory periphery [3], followed by an idealised model of subcortical processing based on the autocorrelation models of pitch [4]. Subcortical input is further processed by a cascade of two networks comprising balanced excitatory and inhibitory ensembles endowed with realistic neural and synaptic parameters [5], which effectively transforms the input patterns into a receptive-field-like pitch representation in cortex. Pitch is extracted in the first network through harmonic connectivity structures inspired by recent findings on the organisation of mammalian auditory cortex [6]. The second network further processes the extracted pitch, modulating the dynamics of the first network through top-down efferents. The aggregated activity of the pyramidal excitatory cells in the first network was predictive of the morphology of the N100 m, and accurately explained the dependence of the N100 m latency on the pitch of iterated rippled noises [1]. Moreover, the model is able to resolve the individual pitch values of the tones comprising a dyad, quantitatively explains the observed dependence of the N100 m on consonance, and extends this correlation to further dyads not included in the experiments. Our results introduce a mechanistic explanation of consonance perception based on harmonic facilitation. Subcortical harmonic patterns associated with the tones of consonant dyads share a large number of lower harmonics that, by means of the connectivity structure of the cortical model, facilitate pitch extraction during processing, reducing the processing time for consonant tone combinations. We suggest that these differences in processing time, which are reflected in the MEG responses, are responsible for the differential percepts elicited by dissonant and consonant dyads.
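As a toy illustration of the subcortical stage assumed by the model, the sketch below applies an autocorrelation-based periodicity analysis [4] to a synthetic consonant dyad (fundamentals in a 2:3 ratio); the common fundamental emerges as the dominant periodicity. The sampling rate, duration, harmonic counts, and search range are arbitrary.

```python
import numpy as np

fs = 16000                                   # sampling rate (Hz)
t = np.arange(0, 0.1, 1 / fs)
f0a, f0b = 200.0, 300.0                      # 2:3 ratio, a consonant fifth
dyad = sum(np.sin(2 * np.pi * k * f0a * t) for k in range(1, 6)) \
     + sum(np.sin(2 * np.pi * k * f0b * t) for k in range(1, 6))

ac = np.correlate(dyad, dyad, mode='full')[dyad.size - 1:]   # autocorrelation
lo, hi = int(fs / 500), int(fs / 50)         # search lags between 500 and 50 Hz
peak = np.argmax(ac[lo:hi]) + lo
print(f"dominant periodicity ~ {fs / peak:.1f} Hz")  # ~100 Hz, the shared fundamental
```

The many shared lower harmonics of a simple-ratio dyad are what, in the full model, let the harmonic cortical connectivity extract pitch faster for consonant combinations.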
References
1. Krumbholz K, Patterson RD, Seither-Preisler A, Lammertmann C, Lütkenhöner B: Neuromagnetic evidence for a pitch processing center in Heschl’s gyrus. Cereb Cortex 2003, 13(7):765–772.
2. Bidelman GM, Grall J: Functional organization for musical consonance and tonal pitch hierarchy in human auditory cortex. NeuroImage 2014, 101:204–214.
3. Zilany MSA, Bruce IC, Carney LH: Updated parameters and expanded simulation options for a model of the auditory periphery. JASA 2014, 135(1):283–286.
4. Meddis R, O’Mard LP: Virtual pitch in a computational physiological model. JASA 2006, 6:3861–3869.
5. Wong, K-F, Wang, X-J: A recurrent network mechanism of time integration in perceptual decisions. J Neurosci 2006, 26(4):1314–1328.
6. Wang, X: The harmonic organization of auditory cortex. Front Syst Neurosci 2013, 7:114.
P12 Modeling sensory cortical population responses in the presence of background noise
Henrik Lindén1, Rasmus K. Christensen1, Mari Nakamura2, Tania R. Barkat2
1Center for Neuroscience, University of Copenhagen, Copenhagen, 2200, Denmark; 2Brain and Sound Lab, Department of Biomedicine, Basel University, Basel, 4056, Switzerland
Correspondence: Henrik Lindén (hlinden@sund.ku.dk)
BMC Neuroscience 2017, 18(Suppl 1):P12
The brain faces the difficult task of maintaining a stable representation of key features of the outside world in highly variable sensory surroundings. How does sensory representation in the cortex change in the presence of background ‘noise’ and how does the brain make sense of it?
Here we address this question in the context of the auditory cortex, where cells are known to respond in a tuned fashion to the frequency of auditory pure-tone stimuli. We first measured population spike responses using multi-channel extracellular electrodes in awake mice in response to tones of varying frequency while adding background noise. Interestingly, we found that the tuning properties of cells changed in the presence of background noise such that they responded more narrowly around their preferred frequency. How does that influence the ability to discriminate between sound stimuli that are close in frequency?
We consider a simple model of the cortical population response profile and assume that the brain compares the weighted read-outs of the spike responses of populations corresponding to different pure-tone stimuli in order to discriminate between their frequencies. Assuming a fixed width of the read-out profile, we vary the width of the cortical activity in the presence of noise. Somewhat counter-intuitively, our model predicts that making the cortical activations narrower (as we found experimentally) actually improves discriminability in background noise for tones with small frequency differences, at the expense of somewhat degraded performance for tones with larger intervals. Preliminary analysis of behaving mice trained in a go/no-go tone discrimination task largely confirms our theoretical predictions.
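A minimal sketch of this read-out model follows. The Gaussian tuning shapes, the fixed read-out width, and all numeric values are our assumptions for illustration; the point is only that narrowing the cortical activation improves template discrimination of nearby tones.

```python
import numpy as np

def pop_response(stim_f, pref, width):
    """Population response profile: Gaussian tuning around each preferred frequency."""
    return np.exp(-0.5 * ((stim_f - pref) / width) ** 2)

def discriminability(f1, f2, width, readout_width=0.2, noise=0.05, n=10000):
    """Fraction of trials on which a fixed-width template read-out assigns
    the noisy response to the correct one of two candidate stimuli."""
    pref = np.linspace(-1, 1, 50)           # preferred frequencies (octaves)
    templates = np.stack([pop_response(f, pref, readout_width) for f in (f1, f2)])
    rng = np.random.default_rng(0)
    correct = 0
    for _ in range(n):
        r = pop_response(f1, pref, width) + noise * rng.standard_normal(pref.size)
        correct += int(np.argmax(templates @ r) == 0)   # f1 was presented
    return correct / n

# Narrower cortical activation (as found with background noise) helps nearby tones:
print(discriminability(0.0, 0.1, width=0.15))   # narrow tuning
print(discriminability(0.0, 0.1, width=0.4))    # broad tuning
```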
Taken together, our results indicate that the sensory representation in auditory cortex varies with background noise in such a way that discriminability between sound stimuli is maintained, and the circuitry may even be optimized for somewhat noisy conditions.
P13 Cortical circuits from scratch: A metaplastic rule for inducing lognormal firing rates in a cortical model
Zach Tosi1, John Beggs2
1Cognitive Science, Indiana University, Bloomington, IN 47405, USA; 2Physics, Indiana University, Bloomington, IN 47405, USA
Correspondence: Zach Tosi (ztosi@indiana.edu)
BMC Neuroscience 2017, 18(Suppl 1):P13
In science, one key way of demonstrating the validity of our theories is to simulate a model using the relevant mathematical constructs and to see if that model produces results consistent with the real-world phenomena in question. If the model matches the data (usually at a phenomenological level in the case of complex systems), then it can be said that our theories are complete with respect to what is being modeled or, more importantly, that we understand or have gained insight into the system in question. However, if in such models we must hand-tune certain variables relevant to the subject of the simulation, it is generally taken as a sign that our theories are incomplete.
Previous work on models like the SORN (self-organizing recurrent network) has made significant headway in demonstrating that the key to a model capable of replicating the distinct nonrandom features of cortical behavior and topology through self-organization is the combination of multiple mechanisms of plasticity [1]. However, the SORN and other models like it have had to hand-set one or more relevant properties of the model to do so [1]. We introduce a novel model which, like the SORN, combines spike-timing dependent plasticity and firing rate homeostasis mechanisms, but which, unlike the SORN, includes a metaplastic mechanism that self-organizes the target firing rates of neurons into a lognormal distribution. We further add to this work by using inhibitory STDP and incorporating inhibitory plasticity into the network's homeostatic mechanisms. Collectively, this allows us to begin a simulation with a network of excitatory and inhibitory neurons with no synaptic connections and uniform target firing rates, and to end the simulation with a network with highly complex, biologically plausible synaptic structure and lognormally distributed target firing rates. This metaplastic artificial neural architecture (MANA) not only reproduces key known features of synaptic topology, but also replicates the known relationships between high and low firing rate neurons in cortex (Fig. 1). The resulting self-organization appears to refine the network's internal representations of its inputs, and when the resulting graph is subjected to community detection algorithms it produces modules with distinct dynamical regimes. In sum, we introduce what could be called the first complete generic cortical network model, in that we provide a means of broadly replicating known network-level features through mechanism alone.
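As a heavily simplified caricature of the metaplastic idea (our paraphrase, not MANA's actual update rules): a fast homeostatic process pulls each neuron's rate toward its target, while a slow multiplicative drift of the targets themselves, weakly renormalized, pushes the population of targets toward a lognormal distribution. All constants below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
target = np.full(n, 5.0)        # uniform target firing rates (Hz) at the start
rate = target.copy()

for step in range(20000):
    # fast homeostasis: the actual rate relaxes toward its target
    rate += 0.1 * (target - rate) + 0.5 * rng.standard_normal(n)
    # slow metaplasticity: multiplicative random drift of the targets,
    # weakly renormalized so the population mean stays bounded
    target *= np.exp(0.01 * rng.standard_normal(n))
    target *= 5.0 / target.mean()

# multiplicative drift yields a skewed, approximately lognormal distribution
print(np.percentile(target, [5, 50, 95]))
```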
Figure 1. Relationships between high and low firing rate neurons in MANA, where excitatory (inhibitory) neurons are shown in orange (blue). Consistent with [2], after self-organization high firing rate neurons did not receive stronger connections on average (A); however, they did receive more excitatory connections (B) and less inhibition (C & D)
References
1. Miner D, Triesch J: Plasticity-Driven Self-Organization under Topological Constraints Accounts for Non-random Features of Cortical Synaptic Wiring. PLOS Computational Biology 2016, 12(2): e1004759. doi: 10.1371/journal.pcbi.1004759
2. Benedetti BL, Takashima Y, Wen JA, Urban-Ciecko J, Barth AL: Differential Wiring of Layer 2/3 Neurons Drives Sparse and Reliable Firing During Neocortical Development. Cerebral Cortex (New York, NY) 2013, 23(11), 2690–2699. http://doi.org/10.1093/cercor/bhs257
P14 Investigating the effects of horizontal interactions on RGCs responses in the mice retina with high resolution pan-retinal recordings
Davide Lonardoni1, Fabio Boi1, Stefano Di Marco2, Alessandro Maccione1†, Luca Berdondini1†
1Neuroscience and Brain Technology Department, Fondazione Istituto Italiano di Tecnologia, Genova, Italy, 16163; 2Scienze cliniche applicate e biotecnologiche, Università dell’Aquila, L’Aquila, Italy, 67100
Correspondence: Davide Lonardoni (davide.lonardoni@iit.it)
†Co-senior authors
BMC Neuroscience 2017, 18(Suppl 1):P14
Processing of visual information in the cortex relies on a cascade of complex neuronal circuits that receive spike-trains conveyed by the axons of retinal ganglion cells (RGCs). The RGCs are the output neurons of the retinal circuit and are organized into different sub-types that signal distinct features of the visual sensory input. Visual information processing in the retina occurs through a mosaic of vertical microcircuits (photoreceptor-bipolar-ganglion cell chains) that are additionally modulated by local and long-range lateral connections (e.g. horizontal cells and amacrine cells in the outer and inner plexiform layers, respectively) carrying the contribution of spatially distinct areas of the visual scene [1,2]. However, the study of these horizontal interactions at pan-retinal scale has so far been hampered by the lack of large-scale recording neurotechnologies.
To investigate how RGCs encode visual information in such a distributed and parallel manner, we took advantage of high-density CMOS multielectrode array sensors [3] and of a visual stimulator developed to provide sub-millisecond and micrometric spatiotemporal precision. This offers the possibility of simultaneously recording spontaneous and light-evoked spiking activity from thousands of single RGCs (4096 electrodes, 7.12 mm2 active area, 42 µm electrode pitch, 7 kHz sampling rate/electrode). Further, in the explanted mouse retina, it allows RGC activity to be sampled at pan-retinal scale.
Here, we used this platform to investigate whether, and to what extent, horizontal interactions may contribute to shaping RGC responses in local regions. To do so, we compared the population responses of ON and OFF RGCs in regions of the retina (about a quarter of the active area of the MEA) when the retina was subjected to whole-retina stimuli (full-field condition, FF) and when the stimuli were confined to the region in which the recorded RGCs were located (masked condition, M). The visual stimuli consisted of white and black flashes and of moving bars with different spatial gratings (spatial frequency range: 0.026–0.75 cycle/deg). Additionally, we presented stimuli at four different levels of contrast. The recorded signals were spike-sorted, and single units (about 400 in each considered region, n = 4 retinas) were classified into the main ON and OFF functional categories. Data were analyzed with the aim of identifying differences in the responses of RGCs located in regions that were always subjected to the same local visual stimulus under the two conditions.
Our results reveal that ON-/OFF-cell responses were significantly delayed (~30 ms) when the stimulus was masked in comparison with the full-field evoked response. This holds for all single units within the considered regions, even for units located far from the border of the stimulation mask. We also found that the masked condition mostly affects ON-cell responses, while OFF-cell responses are significantly affected only at low contrast conditions. Under pharmacological manipulation (bicuculline), we observed for both ON- and OFF-cells a recovery of the response delay. Overall, our results reveal that horizontal long-range interactions can contribute to shaping the response dynamics of ON- and OFF-cells in the retina, thus highlighting the importance of studying the contribution of these interactions.
Acknowledgements
This study received financial support from the 7th Framework Program for Research of the European Commission (Grant agreement no 600847: RENVISION, project of the Future and Emerging Technologies (FET) program Neuro-bio-inspired systems (NBIS) FET-Proactive Initiative)).
References
1. Masland RH: The neuronal organization of the retina. Neuron 2012, Oct.
2. Marre O, Botella-Soler V, Simmons KD, Mora T, Tkačik G, Berry MJ 2nd: High accuracy decoding of dynamical motion from a large retinal population. PLoS Comput Biol 2015, Jul.
3. Maccione A, Hennig MH, Gandolfo M, Muthmann O, van Coppenhagen J, Eglen SJ, Berdondini L, Sernagor E: Following the ontogeny of retinal waves: pan-retinal recordings of population dynamics in the neonatal mouse. J Physiol 2014, Apr.
P15 Calcium-based plasticity rule can predict plasticity direction for a variety of stimulation paradigms
Joanna Jędrzejewska-Szmek1, Daniel B. Dorman1,2, Kim T. Blackwell1,2
1Krasnow Institute, George Mason University, Fairfax, VA 22030, USA; 2Bioengineering Department, George Mason University, Fairfax, VA 22030, USA
Correspondence: Joanna Jędrzejewska-Szmek (jjedrzej@gmu.edu)
BMC Neuroscience 2017, 18(Suppl 1):P15
The striatum is a major site of learning and memory formation for both sequence learning and habit formation. Synaptic plasticity – the long-lasting, activity-dependent change in synaptic strength – is one of the mechanisms utilized by the brain for memory storage, and elevation in intracellular calcium is required in all forms of synaptic plasticity. It is widely believed that the amplitude and duration of calcium transients can determine the direction of plasticity. It is not certain, however, whether this hypothesis holds in the striatum, partly because dopamine is required for potentiation of synaptic responses, and partly because the diversity of stimulation paradigms is likely to produce a wide variety of calcium concentrations. To evaluate whether the direction of synaptic plasticity in the striatum can be predicted from calcium dynamics, we used a model spiny projection neuron (SPN) and a calcium-based plasticity rule. The SPN model implements sophisticated calcium dynamics, including calcium diffusion, buffering and pump extrusion in both the dendritic tree and spines, as well as synaptic AMPAR desensitization to more accurately model frequency-dependent plasticity paradigms. The calcium-based plasticity rule has been used successfully before [1] to predict the plasticity direction of three spike-timing dependent plasticity (STDP) induction paradigms. To further test the rule, we applied it to two frequency-dependent plasticity paradigms, one that elicits long-term depression (LTD) and one that elicits long-term potentiation (LTP). Our simulations show that, despite the variation in calcium across protocols, a single calcium-based weight change rule (plasticity rule) can explain the change in synaptic weights for both frequency-dependent plasticity paradigms. Furthermore, using the calcium-based weight change rule, we tested whether the excitability of direct (dSPN) and indirect pathway (iSPN) spiny projection neurons, and possible changes in excitability caused by dopamine depletion, can account for the different outcomes of the STDP paradigms presented in [2] and [3]. Elucidating the mechanisms underlying synaptic plasticity, especially the role and interplay of calcium and dopamine, will allow a better understanding of the mechanisms of memory storage in health and disease.
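A calcium-based weight change rule of the general kind used here can be sketched as follows; the thresholds, dwell-time requirement, and gains are placeholders rather than the fitted values of [1].

```python
import numpy as np

def weight_change(ca_trace, dt, theta_d=0.3, theta_p=0.6,
                  min_dur=0.01, k_p=0.5, k_d=0.25):
    """LTP if calcium dwells above theta_p long enough; LTD if it dwells
    in the intermediate band [theta_d, theta_p); otherwise no change."""
    dur_p = np.sum(ca_trace >= theta_p) * dt
    dur_d = np.sum((ca_trace >= theta_d) & (ca_trace < theta_p)) * dt
    dw = 0.0
    if dur_p > min_dur:
        dw += k_p * dur_p     # potentiation scales with supra-threshold dwell time
    if dur_d > min_dur:
        dw -= k_d * dur_d     # depression scales with intermediate dwell time
    return dw

t = np.arange(0, 1, 1e-3)
ca = 0.8 * np.exp(-t / 0.05)   # toy calcium transient (arbitrary units)
print(weight_change(ca, 1e-3))
```

The same rule, driven by the very different calcium transients that different protocols produce, is what lets one set of parameters predict both LTD and LTP outcomes.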
Acknowledgements
This work was supported by the joint NIH-NSF CRCNS program through NIDA grant R01DA033390.
References
1. Jędrzejewska-Szmek J, Damodaran S, Dorman DB, Blackwell KT: Calcium dynamics predict direction of synaptic plasticity in striatal spiny projection neurons. Eur J Neurosci 2016, doi:10.1111/ejn.13287
2. Shen W, Flajolet M, Greengard P, Surmeier D J: Dichotomous dopaminergic control of striatal synaptic plasticity. Science 2008, 321:848–51. doi: 10.1126/science.1160575.
3. Wu Y-W, Kim J-I, Tawfik VL, Lalchandani RR, Scherrer G, Ding JB: Input- and cell-type-specific endocannabinoid-dependent LTD in the striatum. Cell Reports 2015, 10: 75–87. http://dx.doi.org/10.1016/j.celrep.2014.12.005.
P16 Unstructured network topology begets privileged neurons and rank-order representation
Christoph Bauermeister1,2, Hanna Keren3,4, Jochen Braun1,2
1Institute of Biology, Otto-von-Guericke University, Magdeburg, 39120, Germany; 2Center for Behavioral Brain Sciences, Magdeburg, 39120, Germany; 3Network Biology Research Laboratory, Technion - Israel Institute of Technology, Haifa, 3200003, Israel; 4Department of Physiology, Technion - Israel Institute of Technology, Haifa, 32000, Israel
Correspondence: Christoph Bauermeister (chrbauermeister@googlemail.com)
BMC Neuroscience 2017, 18(Suppl 1):P16
A perennial question in computational neuroscience concerns the ‘neural code’ employed by spiking assemblies. A convenient model system is provided by assemblies with self-organized instability expressed as all-or-none synchronization events (‘network spikes’). We simulated and analyzed assemblies with random (unstructured) connectivity and synapses with short-term plasticity, with and without external stimulation. Here we show that unstructured connectivity begets a class of privileged ‘pioneer’ neurons that herald network spikes (i.e., discharge reliably during the incipient phase) and that, by means of the rank-order of their firing, encode the site of any external stimulation. We also demonstrate that the existence of pioneers is strongly enhanced by topological heterogeneity.
Firstly, we show how pioneers arise from an interaction between sensitivity and influentialness, in a manner reminiscent of an amplifier. This clarifies the mechanisms that produce pioneers and their distinctive behavior. Secondly, the rank-order of pioneer discharge reliably encodes the site of any external stimulation, in stark contrast to rate-based encoding schemes. We demonstrate this by stimulating the network at one of five alternative locations and by seeking to decode the stimulated location from different measures of activity (both rate- and time-based). Thirdly, by mapping the number of pioneers as a function of recurrent excitation, inhibition, and type of topology, we show that an unstructured and broadly heterogeneous connectivity begets more pioneers than scale-free or homogeneously random connectivity (Figure 1). (The analysis is based on the interval from neuron discharge to the peak of population activity; pioneer neurons exhibit a mean interval larger than its standard deviation.) Thus, a robust fraction of pioneers requires more than the mere presence of ‘hubs’ (e.g., scale-free topology).
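The pioneer criterion stated parenthetically above translates directly into code; the synthetic lead intervals below are arbitrary and serve only to exercise the criterion.

```python
import numpy as np

def pioneer_mask(lead_intervals):
    """lead_intervals: array of shape (n_neurons, n_network_spikes) holding,
    per network spike, the interval from the neuron's discharge to the peak
    of population activity. A neuron counts as a pioneer if the mean of its
    intervals exceeds their standard deviation (reliable early firing)."""
    mean = lead_intervals.mean(axis=1)
    std = lead_intervals.std(axis=1)
    return mean > std

rng = np.random.default_rng(0)
reliable = rng.normal(20.0, 5.0, (10, 100))   # fires early and reliably (ms)
erratic = rng.normal(5.0, 15.0, (10, 100))    # fires close to / after the peak
print(pioneer_mask(np.vstack([reliable, erratic])))
```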
We conclude that random assemblies with self-organized instability offer valuable insights bearing on the issue of ‘neural coding’. Finally, we propose such assemblies as a minimal model for the privileged ‘pioneer neurons’ that reliably predict network spikes in mature cortical neuron assemblies in vitro [1,2].
Figure 1. Fraction of pioneer neurons and E/I balance, for various unstructured connection topologies: (A) homogeneous random, (B) scale-free random, (C) heterogeneous random. In A and B, pioneers are restricted to a comparatively narrow regime. In C, the domain of pioneers is greatly enlarged
References
1. Eytan D, Marom S: Dynamics and effective topology underlying synchronization in networks of cortical neurons. J Neurosci 2006, 26(33):8465–8476.
2. Shahaf G, Eytan D, Gal A, Kermany E, Lyakhov V, Zrenner C, Marom S. Order-based representation in random networks of cortical neurons. PLoS Comput Biol 2008, 4(11):e1000228. doi:10.1371/journal.pcbi.1000228
P17 Finer parcellation reveals intricate correlational structure of resting-state fMRI signals
João V. Dornas1, Jochen Braun1
1Institute of Biology, Otto von Guericke University, Magdeburg, Saxony-Anhalt 39120, Germany
Correspondence: João V. Dornas (joaodornas@gmail.com)
BMC Neuroscience 2017, 18(Suppl 1):P17
The correlation structure of resting-state BOLD signals in the human brain is highly complex. Suitable parcellations of the brain may render this structure simpler and more interpretable. Commonly used parcellations rely on anatomical and/or functional criteria, such as similarity of correlation profiles [1]. Seeking to integrate dependent (and segregate independent) sources of temporal variance, we formed ‘functional clusters’ distinguished by similar short-range correlation profiles inside their anatomically defined regions (‘AAL90’, [2]). Targeting an average size of 200 voxels per cluster, we obtained a parcellation into 758 functional clusters (termed ‘M758’), which proved to be largely contiguous in space. Whereas the correlational structure was dense and simple for the 90 AAL regions (62% of pairwise correlations consistently significant, multivariate mutual information 82 bytes), it proved sparse and complex for ‘M758’ (26% of pairs, mutual information 883 bytes). To validate this approach, we examined and compared long-range functional correlations and long-range anatomical connectivity (established by fibre tracking) between cluster pairs in different anatomical regions. The correlational structure of ‘M758’ mirrored anatomical connectivity both overall and in detail. For purposes of comparison, we also established the correlational structure for the published parcellations ‘C400’ ([3]; 30% of pairs, mutual information 490 bytes) and ‘HCP360’ ([4]; 20% of pairs, mutual information 450 bytes), as well as for a parcellation of the same resolution (‘S758’) based on purely spatial criteria (45% of pairs, mutual information 837 bytes). This comparison showed ‘M758’ to be more successful than the other parcellations at ‘lumping together’ redundant short-range correlations and ‘separating out’ independent long-range correlations, thereby facilitating the analysis and interpretation of correlational structure. We conclude that a finer parcellation of the human brain, based on a combination of functional and anatomical criteria, reveals a more intricate correlational structure in resting-state BOLD signals.
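As an illustration of one of the summary statistics above, the sketch below computes the fraction of significantly correlated parcel pairs from a time-series matrix; the Bonferroni correction here is an assumed stand-in for the study's 'consistently significant' criterion, and the data are synthetic:

import numpy as np
from scipy import stats

def significant_pair_fraction(ts, alpha=0.05):
    # ts: (n_parcels, n_timepoints) array of BOLD time series.
    # Returns the fraction of parcel pairs whose Pearson correlation
    # is significant after Bonferroni correction.
    n = ts.shape[0]
    n_pairs = n * (n - 1) // 2
    sig = 0
    for i in range(n):
        for j in range(i + 1, n):
            _, p = stats.pearsonr(ts[i], ts[j])
            sig += p < alpha / n_pairs
    return sig / n_pairs

rng = np.random.default_rng(1)
print(significant_pair_fraction(rng.standard_normal((20, 300))))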
Acknowledgements
This work was supported by a Marie Curie Initial Training Network grant (n° 606901) under the European Union’s Seventh Framework Programme.
References
1. Cohen AL, Fair DA, Dosenbach NU, Miezin FM, Dierker D, Van Essen DC, Schlaggar BL, Petersen SE: Defining functional areas in individual human brains using resting functional connectivity MRI. Neuroimage 2008, 41(1): 45–57.
2. Tzourio-Mazoyer N, Landeau B, Papathanassiou D, Crivello F, Etard O, Delcroix N, Mazoyer B, Joliot M: Automated anatomical labeling of activations in SPM using a macroscopic anatomical parcellation of the MNI MRI single-subject brain. Neuroimage 2002, 15(1): 273–289.
3. Craddock RC, James GA, Holtzheimer PE, Hu XP, Mayberg HS: A whole brain fMRI atlas generated via spatially constrained spectral clustering. Hum Brain Mapp 2012, 33: 1914–1928.
4. Glasser MF, Coalson TS, Robinson EC, Hacker CD, Harwell J, Yacoub E, Ugurbil K, Andersson J, Beckmann CF, Jenkinson M, Smith SM, Van Essen DC: A multi-modal parcellation of human cerebral cortex. Nature 2016, 536: 171–178.
P18 Modelling human choices: MADeM and decision-making
Eirini Mavritsaki1,2, Silvio Aldrovandi1, Emma Bridger1
1Department of Psychology, Birmingham City University, Birmingham, UK; 2School of Psychology, University of Birmingham, Birmingham, UK
Correspondence: Eirini Mavritsaki (eirini.mavritsaki@bcu.ac.uk)
BMC Neuroscience 2017, 18(Suppl 1):P18
In this work, we present an approach, novel to our knowledge, to investigating the underlying brain processes involved in decision making, using a computational model based on the Multiple Attribute Decision Making (MADeM) model. In decision making, humans first need to evaluate the options available in the decision-making context, and the way in which such evaluations are made is subject to debate. A growing body of recent literature [1] has suggested that people evaluate options in relative terms – that is, people are highly sensitive to the context in which an evaluation (and/or a choice) is made. For example, an individual product (e.g., a ready meal or a holiday) is evaluated with reference to other products (e.g., other ready meals, other holidays) available in the decision-making context [2].
The presented work comprises behavioral experiments and computational modelling. The experiments we developed allow us, in combination with MADeM, to further investigate the decision-making mechanisms at the cognitive and neuronal level. In the behavioral work, participants chose between pairs of items, sampled across different choice domains (e.g., flats to rent and monetary gambles), which differed in terms of two features (e.g., rent cost and distance from a station for flats). Difficulty of choice was manipulated across trials by varying the distance in quality between the two features; for example, in a dominated trial one item was higher in quality on both attributes, whilst in a difficult trial participants were required to perform trade-offs between the two features.
MADeM is based on previous modelling work on visual attention using the spiking Search over Time and Space (sSoTS) model [3]. MADeM is separated into three layers: two layers for the two attributes simulated in the above experiment and one layer that gives the outcome of the decision-making process. All layers are composed of pools of excitatory and inhibitory neurons with properties as described in Mavritsaki et al. [3]. Based on the levels of excitation and competition, one of the choices becomes more strongly activated in the outcome pool and is therefore the selected choice. The preliminary results of this study show that the model successfully simulates the results of the behavioral studies using the organization presented above.
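A rate-based caricature of this three-layer competition is sketched below; the actual MADeM layers are spiking excitatory/inhibitory pools as in [3], and all parameters and inputs here are assumptions:

import numpy as np

def madem_choice(attr_a, attr_b, w=0.5, inh=0.6, tau=20.0, T=500, dt=1.0):
    # attr_a, attr_b: evidence for the two options on each attribute.
    # Two attribute layers feed an outcome layer whose two pools compete
    # via mutual inhibition; the more active pool is the selected choice.
    out = np.zeros(2)
    for _ in range(int(T / dt)):
        drive = w * (attr_a + attr_b)      # summed attribute input
        lateral = inh * out[::-1]          # mutual inhibition between pools
        out += dt / tau * (-out + np.maximum(drive - lateral, 0.0))
    return int(np.argmax(out)), out

choice, rates = madem_choice(np.array([1.0, 0.4]), np.array([0.9, 0.5]))
print("selected option:", choice, "outcome-pool activity:", rates.round(3))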
References
1. Aldrovandi S, Wood AM, Brown GDA: Sentencing, severity, and social norms: A rank-based model of contextual influence on judgments of crimes and punishments. Acta Psychologica 2013, 144: 538–547.
2. Aldrovandi S, Brown GDA, Wood AM: Social norms and rank-based nudging: Changing willingness to pay for healthy food. Journal of Experimental Psychology: Applied 2015, 21: 242–254.
3. Mavritsaki E, Heinke D, Allen HA, Deco G, Humphreys GW: Bridging the gap between physiology and behavior: Evidence from the sSoTS model of human visual attention. Psychological Review 2011, 118: 3–41.
P19 The interplay between synaptic plasticity and firing rate adaptation sharpens response dynamics with visual learning
Sukbin Lim1, Nicolas Brunel2,3
1Neural and Cognitive Sciences, NYU Shanghai, Shanghai, China, 200122; 2Department of Neurobiology, University of Chicago, Chicago, Illinois, 60637, USA; 3Department of Statistics, University of Chicago, Chicago, Illinois, 60637, USA
Correspondence: Sukbin Lim (sukbin.lim@nyu.edu)
BMC Neuroscience 2017, 18(Suppl 1):P19
Experience-dependent modifications of synaptic connections are thought to be one of the basic mechanisms for learning and memory. Changes of synaptic strengths lead to changes in inputs to neurons, which should in turn lead to changes in patterns of network activity with learning. In monkey inferotemporal cortex (ITC), changes in activity associated with familiarization with visual images include a reduction of average responses, as well as a broadening of the distribution of time-averaged visual responses [1–3]. Recently, it has been shown that not only the time-averaged responses, but also the dynamics of these visual responses, change with learning. Under conditions of rapid successive presentation of either learned or unlearned stimuli, it was found that familiar images, but not novel images, elicit strong periodic responses, which may underlie an enhancement of dynamic tracking ability with learning [3].
In this work, we investigated the mechanisms of such changes of response dynamics with learning using the time course data obtained in ITC neurons of monkeys during visual learning tasks [1–3]. Previously, we investigated how synaptic plasticity in recurrently connected circuits affects network activity, and derived a synaptic plasticity rule that reproduces changes of the distribution of time-averaged visual responses observed experimentally [4]. Here, we extended this framework to understand how the interaction between synaptic plasticity and various negative feedback mechanisms shapes response dynamics with learning.
We found that a fatigue mechanism analogous to firing rate adaptation, together with depression-dominant synaptic plasticity in recurrent circuits, can explain the changes of response dynamics observed experimentally. When novel stimuli are shown repeatedly, the peak response to the second stimulus is smaller than the response to the first, due to slow recovery from the adaptation current. In contrast, for serial presentation of familiar stimuli, depression-dominant changes of synaptic strengths lead to a sharp truncation of the response to the first familiar stimulus; consequently, the response dynamics to the second stimulus is less affected by the adaptation current, and the peak response can be as strong as the first one.
We further demonstrated that such a strong periodic response to rapid alternation of learned stimuli is a consequence of enhanced resonance properties with learning. Using firing rate models of recurrent circuits and mathematical analysis of mean-field dynamics, we showed that long exposure to a single familiar image leads to a damped oscillatory response, in contrast to an overdamped response to a novel image, consistent with experimental data [1, 2]. Thus, this work provides a mechanistic understanding of how interactions between depression-dominant synaptic plasticity and a negative feedback mechanism implementing firing rate adaptation shape network response dynamics, and accounts for experimental observations about the effects of visual experience on the visual response dynamics of ITC neurons.
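The core interaction can be caricatured with a single firing-rate unit with slow adaptation, in which depression-dominant plasticity is mimicked by a weakened recurrent weight; this is a qualitative sketch under assumed parameters, not the fitted ITC model:

import numpy as np

def simulate(w_rec, stim, tau_r=10.0, tau_a=200.0, beta=0.5, dt=1.0):
    # r is the firing rate, a is a slow adaptation variable. Lowering
    # w_rec mimics depression-dominant plasticity for familiar stimuli.
    r, a, rates = 0.0, 0.0, []
    for s in stim:
        r += dt / tau_r * (-r + max(w_rec * r - a + s, 0.0))
        a += dt / tau_a * (-a + beta * r)
        rates.append(r)
    return np.array(rates)

t = np.arange(2000)
stim = ((t % 600) < 300).astype(float)   # two successive 300-ms pulses
# 'novel' (strong recurrence) shows a clearly reduced second peak;
# 'familiar' (weakened recurrence) shows a second peak close to the first
for label, w in [("novel", 0.9), ("familiar", 0.3)]:
    r = simulate(w, stim)
    print(label, "pulse peaks:", r[:600].max().round(2), r[600:1200].max().round(2))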
References
1. Freedman DJ, Riesenhuber M, Poggio T, Miller EK: Experience-dependent sharpening of visual shape selectivity in inferior temporal cortex. Cerebral cortex 2006, 16(11):1631–1644.
2. Woloszyn L, Sheinberg DL: Effects of long-term visual experience on responses of distinct classes of single units in inferior temporal cortex. Neuron 2012, 74(1):193–205.
3. Meyer T, Walker C, Cho RY, Olson CR: Image familiarization sharpens response dynamics of neurons in inferotemporal cortex. Nature neuroscience 2014, 17(10):1388–1394.
4. Lim S, McKee JL, Woloszyn L, Amit Y, Freedman DJ, Sheinberg DL, Brunel N: Inferring learning rules from distributions of firing rates in cortical neurons. Nature neuroscience 2015, 18(12):1804–1810.
P20 Adaptation and inhibition control the pathologic synchronization in the model of a focal epileptic seizure
Anatoly Buchin1,2, Clifford Charles Kerr3, Anton Chizhov4,5, Gilles Huberfeld6,7, Richard Miles8, Boris Gutkin9,10
1Department of Physiology and Biophysics, University of Washington, Seattle, WA 98195, USA; 2Allen Institute for Brain Science, Seattle, WA 98109, USA; 3SUNY Downstate Medical Center, New York City, NY 11228, USA; 4Computational Physics Laboratory, Ioffe Institute, St Petersburg, 194021, Russian Federation; 5Sechenov Institute of Evolutionary Physiology and Biochemistry, St Petersburg, 194223, Russian Federation; 6Pitié-Salpêtrière Hospital, University Pierre and Marie Curie, Paris, 75013, France; 7Inserm U1129 Infantile Epilepsies and Brain Plasticity, Paris Descartes University, Paris, 75013, France; 8Cortex and Epilepsy Group, Brain and Spine Institute, Paris, 75013, France; 9Department of Cognitive Neuroscience Group for Neural Theory, École Normale Supérieure, Paris, 75005, France; 10Center for Cognition and Decision Making, NRU Higher School of Economics, Moscow, 109316, Russian Federation
Correspondence: Anatoly Buchin (anat.buchin@gmail.com)
BMC Neuroscience 2017, 18(Suppl 1):P20
Pharmacoresistant epilepsy is a common neurological disorder in which the basic mechanisms of neuronal excitability and connection processes lead to pathologically synchronous behavior in the brain [1]. In the majority of experimental and theoretical epilepsy models, epilepsy is associated with reduced inhibition in the pathological neural circuits, while intrinsic excitability is usually neglected. Here we developed a novel neural mass model that includes both synaptic excitability and intrinsic excitability, the latter in the form of spike-frequency adaptation in the excitatory population [2]. We validated our model using local field potential data [3] recorded from human subiculum slices obtained during surgery for temporal lobe epilepsy with hippocampal sclerosis (Figure 1). We found that synaptic excitability, slow adaptation in the excitatory population, and synaptic noise all play essential roles in generating seizures and disinhibition-induced oscillations. Using bifurcation analysis, we found that transitions towards seizure and back to the resting state take place via Hopf bifurcations. These simulations therefore suggest that single-neuron adaptation as well as inhibition are responsible for orchestrating seizure dynamics and the transition towards the epileptic state.
Figure 1. Population model in various excitatory regimes. A. Activity of a neural population in the resting state. B. Seizure state. C. Disinhibited state. Each plot contains the model scheme, power spectrum and time traces provided by the excitatory population as well as experimental LFP
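A minimal sketch of a neural mass of this general kind, with an excitatory and an inhibitory population, spike-frequency adaptation on the excitatory population, and a crude noise term, is given below; the gain function, connectivity, and all parameters are illustrative assumptions, not the values fitted to the subiculum data:

import numpy as np

def f(x):
    return 1.0 / (1.0 + np.exp(-x + 4.0))   # population gain (assumed threshold)

def simulate(T=60.0, dt=1e-3, wee=16.0, wei=12.0, wie=15.0, wii=3.0,
             g_a=8.0, tau_a=5.0, noise=1.0, seed=0):
    # e, i: population activities; a: slow adaptation acting on the
    # excitatory population; xi: stochastic synaptic input term.
    rng = np.random.default_rng(seed)
    tau_e, tau_i = 0.02, 0.01
    e = i = a = 0.0
    trace = []
    for _ in range(int(T / dt)):
        xi = noise * rng.standard_normal()
        e += dt / tau_e * (-e + f(wee * e - wei * i - g_a * a + xi))
        i += dt / tau_i * (-i + f(wie * e - wii * i))
        a += dt / tau_a * (-a + e)
        trace.append(e)
    return np.array(trace)

lfp_proxy = simulate()
print("mean excitatory activity:", lfp_proxy.mean().round(3))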
Acknowledgements
Swartz Foundation, FRM FDT20140930942, ANR-10- LABX-0087 IEC, ANR-10-IDEX-0001-02 PSL, Contract No. 14.608.21.0001 unique ID project RFMEFI60815X0001
References
1. Chin JH, Vora N: The global burden of neurologic diseases. Neurology 2014, 83: 349–351.
2. Buchin AY, Chizhov A: Firing-rate model of a population of adaptive neurons. Biophysics 2010, 55: 592–599.
3. Huberfeld G, de La Prida LM, Pallud J, Cohen I, Le Van Quyen M, Clemenceau S, Baulac M, Miles R: Glutamatergic pre-ictal discharges emerge at the transition to seizure in human epilepsy. Nature Neuroscience 2011, 14: 627–634.
P21 Efficient and Effective Neural Activity Shaping for a Retinal Implant
Martin J. Spencer1, Hamish Meffin1,2, Tatiana Kameneva1, David B. Grayden1, Anthony N. Burkitt1
1Department of Biomedical Engineering, University of Melbourne, Melbourne, Australia; 2 NVRI, Department of Optometry & Vision Sciences, University of Melbourne, Melbourne, Australia
Correspondence: Martin J. Spencer (mspencer2@unimelb.edu.au)
BMC Neuroscience 2017, 18(Suppl 1):P21
Electrodes in a retinal implant can be activated in either positive or negative electrical polarity (cathodic or anodic). Either choice leads to activity in retinal ganglion cells (RGCs), and so can be used interchangeably. If every electrode is set to be positive (or negative) then this sets an upper limit on the perceived spatial contrast that can result from stimulation. In this case, the highest spatial gradient is limited by the spread of RGC activation associated with a single electrode. If positive and negative electrode activations are used simultaneously then this leads to higher perceived spatial contrast; electrodes of negative polarity can limit the spread of activity from a positive electrode, or vice versa [1]. The aim of the current investigation is to develop an algorithm that can be used to calculate an electrode activation pattern that takes advantage of this neural activity shaping effect to create a desired pattern of RGC activation.
It was assumed that the neural activity associated with each electrode activation could be summed to predict the total neural activity in the retina. A simple linear model of RGC activation was used: R = |W·E|, where R is a vector of N_R RGC activations, E is a vector of N_E electrode levels, and W is an N_R × N_E matrix that maps electrode levels to neural activity. The values of the elements of W were calculated by assuming that the retinal activation created by each electrode is R_i = exp(−d_ij²/d_0²), where d_ij is the distance between electrode E_j and RGC R_i, and d_0 is a scaling factor.
To calculate the electrode pattern required to induce a particular set of retinal activations, it was assumed that the model could be simplified to R = W·E. This allows the manipulation E_SVD = W⁻¹·R_desired, with W⁻¹ calculated as a pseudoinverse of W using singular value decomposition (SVD). This assumption may lead to errors in cases where the resulting retinal activation R_SVD is negative. However, in simulations it was not found to produce substantial errors, because R_desired is always positive, so R_SVD was only ever marginally negative. Figure 1 shows an example desired image, the calculated and simple electrical stimulation patterns, and the resulting modeled neural activations. R_SVD produces high-contrast neural activity, but with some artifacts. R_naive, derived by simply mapping the desired neural activity pattern directly to the electrodes, has lower contrast.
Figure 1. A. 2D pattern of desired high (white) and low (black) RGC activity. B. Electrode activations of 100 electrodes calculated using W⁻¹·R_desired. Red: positive current; blue: negative current. C. Simulated neural activity resulting from the calculated electrode activation pattern. D. Simple electrode pattern. E. Neural activity resulting from use of the simple electrode pattern. F. Cross-section comparisons between R_desired (yellow), R_SVD before (blue) and after (red) rectification, and R_naive (purple)
It might be anticipated that the complex mapping resulting from the system R = |W·E| would require sophisticated nonlinear or machine learning techniques to calculate the desired electrode activations. However, we found that an efficient linear algebra approach was sufficient to achieve improved, high-contrast patterns of retinal activation. This approach is feasible for implementation in a retinal implant system.
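A minimal numerical sketch of this pseudoinverse approach is given below; the electrode grid, retina size, and d_0 are assumed for illustration:

import numpy as np

# Assumed geometry: a 10x10 electrode grid over a 50x50 sheet of RGCs
n_side, e_side, d0 = 50, 10, 3.0
rgc_xy = np.stack(np.meshgrid(np.arange(n_side), np.arange(n_side)), -1).reshape(-1, 2)
ele_xy = np.stack(np.meshgrid(np.linspace(0, n_side - 1, e_side),
                              np.linspace(0, n_side - 1, e_side)), -1).reshape(-1, 2)

# W[i, j] = exp(-d_ij^2 / d0^2): spread of activation from electrode j to RGC i
d = np.linalg.norm(rgc_xy[:, None, :] - ele_xy[None, :, :], axis=2)
W = np.exp(-(d / d0) ** 2)

# Desired pattern: a bright square on a dark background
R_desired = np.zeros((n_side, n_side))
R_desired[15:35, 15:35] = 1.0
R_desired = R_desired.ravel()

E_svd = np.linalg.pinv(W) @ R_desired   # pseudoinverse computed via SVD
R_svd = np.abs(W @ E_svd)               # modeled response, rectified
print("worst-case error:", np.max(np.abs(R_svd - R_desired)).round(3))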
Acknowledgements
Australian Research Council Discovery Project DP140104533
Reference
1. van den Honert C, Kelsall DC: Focused intracochlear electric stimulation with phased array channels. J Acoust Soc Am 2007, 121: 3703–3716.
P22 Application of control theory to neural learning in the brain
Catherine E Davey1, David B. Grayden1,2, Anthony N. Burkitt1
1Department of Biomedical Engineering, University of Melbourne, Melbourne, Victoria, 3010, Australia; 2Centre for Neural Engineering, University of Melbourne, Melbourne, Victoria, 3053, Australia
Correspondence: Catherine E Davey (cedavey@unimelb.edu.au)
BMC Neuroscience 2017, 18(Suppl 1):P22
Neural plasticity describes the process by which the brain learns, primarily in response to environmental inputs. Supervised learning is a subset of plasticity that describes how one sensory system trains a second sensory modality to achieve a specific goal. This sensory integration requires multimodal neurons and is often performed in higher cortical layers. Consequently, while there are several small-scale examples of supervised learning, more general cases require a complex system of interconnected neurons from multiple brain regions [1]. Supervised learning has historically been modelled using iterative gradient evaluation techniques [2]. Gradient methods typically back-propagate the error through the network to enable local updating of synaptic connection strengths. In a neural context, this assumes that the network can propagate the error backwards, which is a significant assumption that is not biologically plausible at the level of individual synapses [3].
In this work, we pose supervised learning in a control framework, with the primary objective of capitalising on the success of control theory in managing large-scale, complex systems [4], by building a biologically plausible system that is scalable. Control theory has played a fundamental role in modern technological systems, with feedback control having many desirable properties, such as the ability to converge to a desired output, stable performance in a noisy environment, and a framework for modelling complex systems [5]. We develop a proof-of-concept and demonstrate performance equivalent to existing techniques. Our prototype system models the supervised learning of target direction from auditory information. More specifically, we model synaptic learning of how to transform the interaural time difference (ITD), which measures the delay between the arrival of a sound at the left and right ears [6], into an estimate of the angle to the source. The auditory feature map generated from the ITD is transformed into a source-angle feature map in the superior colliculus, though exactly how this is achieved is the subject of ongoing research. The visual system provides the supervisor signal for learning this transformation.
We demonstrate the application of control theory analysis tools by describing the conditions under which the system is robust, stabilisable and controllable. Control parameters are optimised to regulate neural learning and balance the system’s ability to respond to new inputs, while exhibiting robustness to noise. Furthermore, the model is implemented without requiring backwards propagation of signals through synapses. Application of control theory will augment synaptic plasticity research with the advanced methodology and tools of the mature control theory discipline, and has the potential to resolve complexity limitations inherent in current approaches, in addition to addressing the biological plausibility issues associated with current techniques.
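The following sketch illustrates the flavor of such a scheme: a broadcast error signal from the (visual) supervisor drives local weight updates on ITD-tuned inputs via a proportional feedback gain, with no backwards propagation of signals through synapses. The tuning curves, gains, and angle mapping are hypothetical, not the study's model:

import numpy as np

rng = np.random.default_rng(0)
n_in = 40
itd_pref = np.linspace(-0.6, 0.6, n_in)   # preferred ITDs (ms), assumed

def itd_layer(itd, sigma=0.1):
    # Gaussian tuning-curve activity of the ITD feature map
    return np.exp(-((itd - itd_pref) / sigma) ** 2)

w = np.zeros(n_in)
k_p = 0.05                                 # proportional feedback gain
for _ in range(5000):
    itd = rng.uniform(-0.6, 0.6)
    target_angle = 90.0 * itd / 0.6        # supervisor (visual) signal
    x = itd_layer(itd)
    error = target_angle - w @ x           # globally broadcast error
    w += k_p * error * x                   # local update, no backprop

print("estimate for ITD=0.3 ms:", round(w @ itd_layer(0.3), 2),
      "target:", 90.0 * 0.3 / 0.6)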
Acknowledgements
This research was funded by a University of Melbourne fellowship.
References
1. Knudsen EI: Supervised learning in the brain. J Neurosci 1994, 14(7): 3985–3997.
2. Hassoun M, Fundamentals of Artificial Neural Networks, The MIT Press, 1995.
3. Kasinski A, Ponulak F: Comparison of supervised learning methods for spike time coding in spiking neural networks. Int J Appl Math Comput Sci 2006, 16(1): 101–113.
4. Drouin M, Abou-Kandil H, Mariton M, Control of Complex Systems: Methods and Technology, Springer Science + Business, New York, 1991.
5. Goodwin GC, Graebe SF, Salgado ME, Control System Design, Prentice Hall, 2001.
6. Jeffress LA. A place theory of sound localization. J Comp Physiol Psychol, 1948, 41:35–39.
P23 Modeling dynamic oscillations: a method of inferring neural behavior through mean field network models
Liangyu Tao1, Vineet Tiruvadi1,2, Rehman Ali4, Helen Mayberg3, Robert Butera1
1Department of Biomedical Engineering, Georgia Institute of Technology, Atlanta, GA, 30332, USA; 2Department of Biomedical Engineering, Emory University, Atlanta, GA, 30322, USA; 3Department of Psychiatry and Behavioral Sciences, Emory University, Atlanta, GA, 30322, USA; 4Department of Electrical Engineering, Stanford University, Stanford, CA, 94305, USA
Correspondence: Liangyu Tao (ltao31@gatech.edu)
BMC Neuroscience 2017, 18(Suppl 1):P23
Deep brain stimulation (DBS) is a promising investigational treatment for patients with treatment-resistant depression (TRD). Previous studies using diffusion tensor imaging (DTI) have identified key white matter tracts, passing through the subcallosal cingulate (SCC), associated with TRD recovery in patients receiving DBS [1]. However, the mechanism by which stimulation modulates network-level pathological activity in the SCC network has not been clearly established. Local field potential recordings in the SCC have shown the emergence of transient, nonlinear decreases in higher-frequency power over 30–60 s following specific stimulation conditions in a subset of implanted patients. We provisionally define these electrophysiological signatures as transient down-chirps. These transient down-chirps, when present, are a reproducible SCC phenomenon and a potential biomarker of the neural circuit interactions seen on initial exposure of the SCC to high-frequency stimulation. Understanding why and how stimulation causes this electrophysiological behavior in the SCC network is an important step towards increasing the efficiency and success rate of treatment for patients with TRD.
One hypothesized mechanism of transient down-chirp generation is the excitatory/inhibitory balance of neural regions following stimulation. Mean-field network models can be used to understand the dynamics of groups of neurons that would produce the observed signals in LFP recordings. Each neural region was modeled as a Wilson–Cowan population with GABA- and glutamate-dominated signaling [2]. White matter tracts connecting neural regions were modeled assuming glutamate-dominant signaling.
Figure 1. A. Spectrogram of LFP recordings showing a transient down-chirp. B. Spectrogram of a model-generated transient down-chirp
We show how a simple network of mean-field neural population models, based on the topological layout of the SCC network, can qualitatively generate the down-chirps seen in the local field potentials (see Figure 1). We then characterize these modeled down-chirps by the excitatory/inhibitory balance associated with each neural region. Using these metrics, we classified the maximum likelihood of excitatory and inhibitory responses of neural regions following stimulation in generating these down-chirps.
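As an illustration of how a down-chirp appears in a spectrogram like Figure 1, the sketch below builds a synthetic LFP whose oscillation frequency falls over 30 s and tracks its dominant frequency; the frequencies and noise level are invented for illustration:

import numpy as np
from scipy.signal import spectrogram

# Synthetic LFP stand-in: oscillation falling from ~35 Hz to ~15 Hz
fs = 500.0
t = np.arange(0, 60.0, 1 / fs)
f_inst = np.where(t < 30, 35.0 - (20.0 / 30.0) * t, 15.0)
phase = 2 * np.pi * np.cumsum(f_inst) / fs
lfp = np.sin(phase) + 0.5 * np.random.default_rng(0).standard_normal(t.size)

freqs, times, Sxx = spectrogram(lfp, fs=fs, nperseg=1024, noverlap=768)
peak_freq = freqs[Sxx.argmax(axis=0)]    # dominant frequency per time bin
print(peak_freq[:3], peak_freq[-3:])     # high early, low late: a down-chirp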
Our results highlight the utility of a network of biologically informed population models as a method of predicting activity in neural regions that are expensive and difficult to measure. Importantly, these models serve as a possible method of informing future treatment strategies in DBS.
Acknowledgements
Liangyu Tao is supported by NIH training grant 5R90DA033462 and the President’s Undergraduate Research Award (Fall 2016). DBS support: Hope for Depression Research Foundation, FDA IDE G130107 (HM).
References
1. Riva-Posse P, Choi KS, Holtzheimer PE, McIntyre CC, Gross RE, Chaturvedi A, Crowell AL, Garlow SJ, Rajendra JK, Mayberg HS: Defining Critical White Matter Pathways Mediating Successful Subcallosal Cingulate Deep Brain Stimulation for Treatment-Resistant Depression. Biological Psychiatry 2014, 76(12):963–969.
2. Wilson HR, Cowan JD: Excitatory and Inhibitory Interactions in Localized Populations of Model Neurons. Biophysical Journal 1972, 12(1):1–24.
P24 Synaptic strengths dominate phasing of motor neurons by a central pattern generator
Cengiz Gunay1,2, Anca Doloc-Mihu1, Damon Lamb1,3, Ronald L Calabrese1
1Department of Biology, Emory University, Atlanta, GA 30322, USA; 2School of Science and Technology, Georgia Gwinnett College, Lawrenceville, GA 30043, USA; 3Department of Neurology, University of Florida, Gainesville, FL, USA
Correspondence: Cengiz Gunay (cgunay@ggc.edu)
BMC Neuroscience 2017, 18(Suppl 1):P24
Rhythmic motor output is driven by upstream central pattern generator (CPG) phasing, whose synapses therefore play an important role in shaping motor patterns. Individual animals show large variability in motor circuits, not only in circuit synaptic parameters but also in intrinsic neuronal parameters [1]. It is not known how this observed intrinsic variability influences motor circuit function across animals. Previous computer-model parameter searches revealed a large landscape of intrinsic and circuit parameter combinations that can produce functional output in a general model population [2]. However, asking specific questions about individual animals requires experimental data. A leech heartbeat motor neuron model was previously tuned to individual animal circuit data using a multi-objective evolutionary algorithm (MOEA) approach [3]. Relative synaptic weights were measured experimentally [4], and model parameters were optimized to estimate intrinsic conductance parameters that produce a functional motor pattern output within the physiological ranges of the individual preparation. The method did not scale to find intrinsic conductance combinations that could match the outputs measured from five other preparations, even though the method's convergence was improved with a fuzzy fitness criterion [5]. The MOEA parameter search succeeded only when synaptic weights were allowed to vary from the measured averages. Unique solutions found for each of the preparations showed the criticality of the relative weights of different synaptic inputs, as opposed to intrinsic parameters, in determining motor output patterns. To investigate whether the newly found synaptic weights are within experimental variability, we calculated standard deviations of synaptic conductances from spike-triggered averages (STA) of voltage-clamped synaptic current traces. A variable baseline contributed to increased noise and variation of the STA current traces. We found that a bandpass filtering method reduced baseline variability and therefore the variability of estimated weights. Despite the reduced variability, the new synaptic weights found by the MOEA search were still within one standard deviation of the experimentally measured values in each of the six preparations. Furthermore, we showed that a neuron model with the same intrinsic conductances is able to produce functional outputs in all six of our preparations, as long as the new synaptic weights are used. In summary, we used MOEA parameter search as a tool, and improved spike-triggered average estimation of synaptic weights and their variability, to find that CPG networks in individual animals require precise relative synaptic weights that cannot be compensated for by adjusting intrinsic properties. Furthermore, we conclude that measured synaptic weights should be used with caution in computer models, because any experimental noise may break functional output.
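A sketch of the bandpass-filtered spike-triggered averaging step is shown below; the filter order, cutoffs, analysis window, and synthetic data are assumptions, not the values used in the study:

import numpy as np
from scipy.signal import butter, sosfiltfilt

def bandpass_sta(current, spike_idx, fs, lo=1.0, hi=100.0, win=(-0.05, 0.15)):
    # Band-pass filter the voltage-clamp current to remove the variable
    # baseline, then average snippets around each presynaptic spike.
    sos = butter(2, [lo, hi], btype="band", fs=fs, output="sos")
    filt = sosfiltfilt(sos, current)
    i0, i1 = int(win[0] * fs), int(win[1] * fs)
    snips = np.array([filt[k + i0:k + i1] for k in spike_idx
                      if k + i0 >= 0 and k + i1 <= filt.size])
    return snips.mean(axis=0), snips.std(axis=0)   # STA and its variability

fs = 10000.0
rng = np.random.default_rng(0)
n = int(60 * fs)
current = rng.standard_normal(n) + np.sin(np.arange(n) / fs)   # drifting baseline
spikes = rng.integers(int(fs), int(59 * fs), size=500)
sta, sta_sd = bandpass_sta(current, spikes, fs)
print(sta.shape, sta_sd.mean().round(3))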
Acknowledgements
Angela Wenning and Brian Norris provided experimental data for synaptic weights. Supported by NIH NINDS 1 R01 NS085006.
References
1. Bucher D, Prinz AA, Marder E: Animal-to-animal variability in motor production in adults and during growth. J Neurosci 2005, 25(7): 1611–19.
2. Prinz AA, Bucher D, Marder E: Similar network activity from disparate circuit parameters. Nat Neurosci 2004, 7(12): 1345–52.
3. Lamb D, Calabrese RL: Correlated conductance parameters in leech heart motor neurons contribute to motor pattern formation. PLoS ONE 2013, 8(11): e79267.
4. Norris BJ, Weaver AL, Wenning A, García PS, Calabrese RL: A central pattern generator producing alternative outputs: pattern, strength, and dynamics of premotor synaptic input to leech heart motor neurons. J Neurophysiol 2007, 98: 2992–3005.
5. Smolinski TG, Prinz AA, Zurada JM: Hybridization of rough sets and multi-objective evolutionary algorithms for classificatory signal decomposition. In Ślęzak and Lingras (eds.): Rough Computing: Theories, Technologies, and Applications 2007, 204–27.
P25 PumpHCO-db: A database of half-center oscillator computational models for analyzing the influence of Na+/K+ pump on the bursting activity
Anca Doloc-Mihu, Ronald L. Calabrese
Department of Biology, Emory University, Atlanta, GA, 30322, USA
Correspondence: Anca Doloc-Mihu (adolocm@emory.edu)
BMC Neuroscience 2017, 18(Suppl 1):P25
Rhythmic behaviors such as walking or breathing are controlled by networks of neurons that produce rhythmic bursting activity, called central pattern generators (CPGs). These CPG neurons depend upon a Na+/K+ pump to maintain the ionic gradients that establish the resting potential and thus support other ionic currents. However, how the Na+/K+ pump, which produces an outward net current proportional to its activity, directly influences bursting activity is not yet fully understood. Here, we use a mathematical model of a half-center oscillator (HCO; two mutually inhibitory neurons) [1] that includes a Na+/K+ pump to replicate the electrical activity (rhythmic alternating bursting of mutually inhibitory interneurons) of the leech heartbeat CPG under a variety of experimental conditions. Our study here is preliminary to a full investigation of the role of the Na+/K+ pump in the robust maintenance of functional bursting activity.
For this study, we used the mathematical model of Kueh et al. [1] of an HCO, which consists of a pair of reciprocally inhibitory model neurons, with each individual leech heart interneuron represented as a single isopotential electrical compartment with Hodgkin–Huxley type intrinsic membrane and synaptic conductances. In this study, the HCO model has eight currents with voltage-dependent conductances, including two types of inhibitory synaptic current, spike-mediated and graded. This HCO model also includes a Na+/K+ pump current that tracks the changes in intracellular Na+ concentration that occur as a result of the Na+ fluxes carried by ionic currents. The Na+/K+ pump exchanges two K+ ions for three Na+ ions; its activity, and hence its current, has a sigmoidal dependence on the intracellular Na+ concentration. Na+ currents include the fast spiking current (INa) and a persistent Na+ current (IP). All model equations are given in Kueh et al. [1]. The 8th/9th-order Prince–Dormand method from the GNU Scientific Library (www.gnu.org/software/gsl) was used to integrate the model's differential equations. To explore systematically the parameter space of this HCO and the corresponding isolated neuron models, we used a brute-force approach. We varied selected parameters in both neurons in all possible combinations: the maximal conductances of the persistent Na+ (IP), slow Ca2+, leak, hyperpolarization-activated (Ih), and persistent K+ currents across 50, 75, 100, 125, and 150 percent of their canonical values (see [1]); the leak reversal potential across −66.25, −62.5, −58.75, −55, and −51.25 mV; the half-activation of the Na+/K+ pump across −2, −1, 0, 1, and 2 mV; the maximum Na+/K+ pump current across 0.38, 0.41, 0.44, 0.47, and 0.5 nA; and the slope coefficient across 90, 95, 100, 105, and 110 percent of its canonical value. The resulting parameter space includes 100 million simulated models. After changing a parameter, a model was run for 200 s to allow the system to establish stable activity, and then run for another 40 s, from which the data were recorded and analyzed. We then classified these HCO and isolated (synaptic currents set to zero) neuron model simulations by their activity characteristics, so that models showing the same electrical activity were segregated into the same group. Of particular interest to us is the group of bursting simulated models, which was further split into realistic and non-realistic HCOs [3]. We built a relational database, PumpHCO-db, with the resulting model characteristics, similar to our previous work [2].
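The structure of such a brute-force sweep might look as follows; run_hco() is a placeholder for the actual Prince–Dormand integration and feature extraction, the parameter names are invented labels, and only a few combinations are touched in this demo:

import itertools
import numpy as np

scales = [0.50, 0.75, 1.00, 1.25, 1.50]
grid = {
    "gP_scale": scales, "gCaS_scale": scales, "gLeak_scale": scales,
    "gh_scale": scales, "gK2_scale": scales,
    "ELeak_mV": [-66.25, -62.5, -58.75, -55.0, -51.25],
    "pump_half_act_mV": [-2, -1, 0, 1, 2],
    "pump_Imax_nA": [0.38, 0.41, 0.44, 0.47, 0.50],
    "pump_slope_scale": [0.90, 0.95, 1.00, 1.05, 1.10],
}

def run_hco(params):
    # placeholder for the real pipeline: integrate 200 s to reach stable
    # activity, record a further 40 s, then extract activity features
    rng = np.random.default_rng(abs(hash(params)) % 2**32)
    return {"bursting": bool(rng.integers(2)), "period_s": float(rng.uniform(5, 12))}

names = list(grid)
for combo in itertools.islice(itertools.product(*grid.values()), 3):
    params = tuple(zip(names, combo))
    features = run_hco(params)
    # ...insert (params, features) as a row in the PumpHCO-db database
    print(dict(params)["ELeak_mV"], features)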
Our ongoing studies use this database to ask fundamental questions about how realistic HCO [3] activity is influenced by the Na+/K+ pump. We will be particularly interested in parameter changes that correspond to known neuromodulations such as the modulation of Ih and maximal Na+/K+ pump current by myomodulin [4].
Acknowledgements
Work supported by the National Institute Health Grant R01 NS085006 to R.L.Calabrese.
References
1. Kueh D, Barnett WH, Cymbalyuk GS, Calabrese RL: Na+/K+ pump interacts with the h-current to control bursting activity in central pattern generator neurons of leeches. eLife 2016, 5: e19322.
2. Doloc-Mihu A, Calabrese RL: A database of computational models of a half-center oscillator for analyzing how neuronal parameters influence network activity. J Biol Physics 2011, 37:263–283.
3. Doloc-Mihu A, Calabrese RL: Analysis of family structures reveals robustness or sensitivity of bursting activity to parameter variations in a half-center oscillator (HCO) model. eNeuro 2016, 3(4): ENEURO.0015-16.2016.
4. Tobin AE, Calabrese RL: Myomodulin increases Ih and inhibits the Na/K pump to modulate bursting in leech heart interneurons. J Neurophysiol. 2005, 94:3938–3950.
P26 Encoding of memories: effective connectivity on the hippocampus and the role of inhibition in the information flow
Víctor J. López-Madrona1, Fernanda S. Matias2, Ernesto Pereda3, Claudio R. Mirasso4, and Santiago Canals1
1Instituto de Neurociencias, Consejo Superior de Investigaciones Científicas, Universidad Miguel Hernández, Sant Joan d’Alacant 03550, Spain; 2Instituto de Física, Universidade Federal de Alagoas, Maceió, Alagoas 57072-970, Brazil; 3Departamento de Ingeniería Industrial, Escuela Superior de Ingeniería y Tecnología, Universidad de La Laguna Avda. Astrofísico Fco. Sanchez, s/n, La Laguna, Tenerife 38205, Spain; 4Instituto de Física Interdisciplinar y Sistemas Complejos, CSIC-UIB, Campus Universitat de les Illes Balears E, 07122 Palma de Mallorca, Spain
Correspondence: Víctor J. López-Madrona (v.lopez@umh.es)
BMC Neuroscience 2017, 18(Suppl 1):P26
Networks containing a huge number of neurons and synapses confer on the brain an immense computational capability. Learning how activity propagates in these intricate networks would help us understand how information is globally integrated. Only then could we try to understand how perception, and its unitary nature, emerges from a multisensory experience, or how complex memories are formed. Activity propagation in the system is determined by the structural connections (wiring diagram) linking the different nodes in the network and, importantly, by the functional interactions between the different nodes. These interactions are highly dynamic processes that mostly rely on changes in synaptic efficacy and the differential recruitment of excitatory and inhibitory elements (the excitation/inhibition balance). The combination of both factors, the wiring diagram and the dynamic functional properties of the connections, determines the effective connectivity of the system at a particular moment or in a particular state. Here we have used a computational model and causality measurements to study activity propagation in the hippocampal formation, a brain region critical for the formation of episodic memories. It is composed of the hippocampus proper (areas CA1 and CA3), the dentate gyrus (DG) and the entorhinal cortex (EC). While extensive literature exists on the connectivity of the first regions, the connectivity of the EC remains poorly investigated.
To better understand how the internal structure of EC affects the causality of information flow in the hippocampal formation, we implemented a model containing all the above areas. We assumed the EC was formed by 3 layers (II, III and V). We fixed all connections in the model between DG, CA3 and CA1, while the EC connectivity was systematically varied. The effective connectivity was estimated using Granger Causality (GC) and Partial Transfer Entropy (PTE). For these measurements, we assumed that only information from DG, CA3 and CA1 was available, as commonly happens in experiments. We also introduced interneurons in our circuit, considering inhibitory projections from CA1 to CA3. With this new ingredient, we addressed different “causality” measures, such as information flow and synchronization between populations for excitatory and inhibitory effective connections, respectively.
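A minimal sketch of a Granger causality test of the kind used here, run on synthetic surrogate signals (the actual analysis uses the model's population activities, and partial transfer entropy is computed analogously):

import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

# Synthetic surrogate: ca3 drives ca1 with one step of lag
rng = np.random.default_rng(0)
n = 2000
ca3 = np.zeros(n)
ca1 = np.zeros(n)
for t in range(1, n):
    ca3[t] = 0.5 * ca3[t - 1] + rng.standard_normal()
    ca1[t] = 0.4 * ca1[t - 1] + 0.3 * ca3[t - 1] + rng.standard_normal()

# Column order is (effect, cause); the call prints a summary per lag
res = grangercausalitytests(np.column_stack([ca1, ca3]), maxlag=2)
print("p-value at lag 1:", res[1][0]["ssr_ftest"][1])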
Conclusions: Our procedure revealed that different EC internal connectivity patterns give rise to very distinct causality results in the hippocampus, despite its fixed connectivity. Moreover, different results were obtained with the two methods (GC, PTE), highlighting the importance of the choice of analysis and revealing potential misinterpretations when only partial information is available. Our method allowed us to analyze the differences in causality when excitatory and inhibitory projections are considered, and identified the most probable EC configuration to explain the known connectivity between the DG, CA3 and CA1.
P27 Extended generalized leaky integrate and fire neuron for cerebellum modeling
Alice Geminiani1, Alessandra Pedrocchi1, Egidio D’Angelo2, Claudia Casellato1
1NEARLab, Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan 20133, Italy; 2Department of Brain and Behavioral Sciences, University of Pavia, Pavia 27100, Italy
Correspondence: Alice Geminiani (alice.geminiani@polimi.it)
BMC Neuroscience 2017, 18(Suppl 1):P27
Simplified but realistic neuron models are useful for investigating the emergent properties of neural circuits in large-scale simulations and the role of specific neuron dynamics in efficient signal transmission and behavior generation. Here we extend a generalized leaky integrate-and-fire (GLIF) model [1] so as to produce an enriched variety of spiking responses. We developed the GLIF point-neuron model in PyNEST using NESTML [www.nest-initiative.org], adding to the state and update equations: i) spike generation stochasticity, ii) threshold dynamics depending on membrane voltage dynamics, and iii) a spike-triggered hyperpolarizing current with an update constant depending on the input current (I_in). The model couples time-dependent and event-driven algorithmic components; it can be tuned to generate autorhythm, a specific slope between response frequency and input current (f–I_in), spike-frequency adaptation increasing with I_in, AfterHyperPolarization (AHP) duration increasing with I_in, firing irregularity (CV of inter-spike intervals), phase reset, and post-inhibitory rebound bursting. In particular, we focus on the specific electrophysiological properties of cerebellar cells (Golgi – GoC, Granular, Purkinje, Inferior Olive and Deep Nuclei neurons). After fixing some parameters at direct biological values, we tune the others to reproduce cell-specific behaviors. We implement a protocol injecting a sequence of I_in steps (excitatory and inhibitory) of different amplitudes and durations (Figure 1). For the GoC, we obtain an f–I_in slope of 0.24 Hz/pA and CV = 0.034 (on the long excitation step, exc3). The tuned model reproduces the typical electroresponsiveness of a GoC in vitro [2] (Figure 1).
Figure 1. Membrane voltage (Vm, blue) of the GoC model and threshold voltage (Vth, black dashed line) along the 18.5 s protocol (green: steps of I_in, scaled by the membrane capacitance Cm). Insets show the produced spikes (red vertical lines) and associated properties
After automatic parameter optimization, we will create an in vivo cerebellum microcircuit by connecting the differentiated cell populations through plastic synapses (this GLIF model also handles presynaptic spikes with voltage-dependent conductances). Motor learning skills will be tested in closed-loop sensorimotor tasks. The tool therefore allows one to reliably reproduce specific alterations of neuron mechanisms and the consequent misbehaviors.
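A self-contained sketch of the three extensions named above (stochastic spiking, voltage-dependent threshold, input-dependent spike-triggered current) grafted onto a plain leaky integrator is given below; all parameter values are illustrative, not the tuned GoC values:

import numpy as np

def glif(I_in, dt=0.1, C=3.0, g_L=0.1, E_L=-65.0, V_reset=-70.0,
         th0=-50.0, th_v=0.2, tau_th=30.0, k_sp=0.02, tau_sp=120.0,
         lam0=0.05, dV=2.0, seed=0):
    # (i) stochastic spiking: instantaneous rate grows with V - threshold;
    # (ii) threshold relaxes toward th0 plus a voltage-dependent term;
    # (iii) spike-triggered hyperpolarizing current I_sp whose increment
    #       scales with the input current.
    rng = np.random.default_rng(seed)
    V, th, I_sp = E_L, th0, 0.0
    spikes = []
    for step, I in enumerate(I_in):
        V += dt / C * (-g_L * (V - E_L) - I_sp + I)
        th += dt / tau_th * (th0 - th + th_v * (V - E_L))
        I_sp *= np.exp(-dt / tau_sp)
        rate = lam0 * np.exp((V - th) / dV)
        if rng.random() < 1.0 - np.exp(-rate * dt):
            spikes.append(step * dt)
            V = V_reset
            I_sp += k_sp * abs(I)
    return np.array(spikes)

I_in = np.concatenate([np.zeros(5000), 16.0 * np.ones(10000), np.zeros(5000)])
print("spike count during the protocol:", glif(I_in).size)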
Acknowledgements
This work was supported by EU grant Human Brain Project (HBP 604102).
References
1. Mihalaş S, Niebur E: A generalized linear integrate-and-fire neural model produces diverse spiking behaviors. Neural Comput 2009, 21(3): 704–718.
2. D’Angelo E, et al.: Modeling the cerebellar microcircuit: new strategies for a long-standing issue. Front Cell Neurosci 2016, 10: 176.
P28 Saccade Velocity Driven Oscillatory Network Model of Grid Cells
Ankur Chauhan1, Karthik Soman1, V. Srinivasa Chakravarthy1
1Department of Biotechnology, Indian Institute of Technology Madras, Chennai, Tamilnadu, India
Correspondence: V. Srinivasa Chakravarthy (schakra@iitm.ac.in)
BMC Neuroscience 2017, 18(Suppl 1):P28
Grid cells in the Entorhinal Cortex (EC), one of the key neural correlates of spatial navigation in rodents and primates [1], have recently been reported to play a role in saccadic movement encoding as well [2]. Experimental studies corroborated this by analyzing the characteristic hexagonal firing fields of neurons in the EC of head-fixed monkeys as the animals scanned natural images displayed in front of them. Here, we present the Saccade Velocity Driven Oscillatory Network (SVDON) model, which captures the responses of grid cells to saccadic trajectories. SVDON is an extension of the VDON model, which was previously used for modeling grid cells in actual spatial navigation [3].
SVDON has four stages: Saccade Generation (SG), Saccade Direction (SD) encoding, Path Integration (PI), and Spatial Cell (SC) (Figure 1A). SG was implemented using a saliency-map-based bottom-up attention model that selectively attends to the salient locations of an image depicting a natural scene [4]. Once the saccade trajectory was built on the image, velocity vectors at each point were computed. These velocity vectors were passed forward to the SD layer, an array of neurons in which each neuron has its own preferred direction. The EC has been experimentally reported to contain neurons with tuned responses to saccade direction, called SD cells [5]. The SD layer response was passed on to the PI layer, which had one-to-one connections with the SD layer. Each neuron in the PI layer was a phase oscillator whose frequency was modulated by the SD response, so that the saccade position information along that direction component was encoded in the phase of the respective oscillator. The final SC layer was an unsupervised neural layer that extracted the principal components of the oscillatory responses. Remapping the SC neuron activity onto the saccade trajectory exhibited grid-cell-like periodicities, including hexagonal firing fields (Figure 1B). Further computation of the hexagonal gridness score (HGS) confirmed this result.
Figure 1. A. SVDON model architecture. B. Firing field (top), firing rate map (middle) and autocorrelation map (bottom) of a grid neuron in the SC layer. The HGS is 0.6971 (HGS > 0 qualifies as a hexagonal grid cell)
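A minimal sketch of the PI stage described above: each oscillator integrates the projection of saccade velocity onto its preferred direction into its phase. The baseline frequency, gain, preferred directions, and velocity trace are assumed values:

import numpy as np

def path_integrate(velocity, preferred_dirs, f0=6.0, beta=0.05, dt=0.01):
    # velocity: (n_steps, 2) saccade velocity vectors.
    # Each oscillator's frequency is f0 plus a gain times the velocity
    # projection onto its preferred direction, so its phase encodes
    # displacement along that direction.
    dirs = np.stack([np.cos(preferred_dirs), np.sin(preferred_dirs)], axis=1)
    phase = np.zeros(len(preferred_dirs))
    phases = np.zeros((velocity.shape[0], len(preferred_dirs)))
    for t in range(velocity.shape[0]):
        sd_response = velocity[t] @ dirs.T            # SD-layer drive
        phase += 2 * np.pi * (f0 + beta * sd_response) * dt
        phases[t] = phase
    return np.cos(phases)   # oscillatory responses fed to the SC layer

rng = np.random.default_rng(0)
vel = rng.standard_normal((1000, 2))                  # synthetic velocity trace
osc = path_integrate(vel, np.deg2rad([0, 60, 120]))
print(osc.shape)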
References
1. Hafting T, Fyhn M, Molden S, Moser M-B, Moser EI: Microstructure of a spatial map in the entorhinal cortex. Nature 2005, 436(7052):801–806.
2. Killian NJ, Jutras MJ, Buffalo EA: A map of visual space in the primate entorhinal cortex. Nature 2012, 491(7426):761–764.
3. Soman K, Muralidharan V, Chakravarthy S: An oscillatory network model of head direction, spatially periodic cells and place cells using locomotor inputs. bioRxiv 2016:080267.
4. Walther D, Koch C: Modeling attention to salient proto-objects. Neural networks 2006, 19(9):1395–1407.
5. Killian NJ, Potter SM, Buffalo EA: Saccade direction encoding in the primate entorhinal cortex during visual exploration. Proceedings of the National Academy of Sciences 2015, 112(51):15743–15748.
P29 Programmed cell death in substantia nigra due to subthalamic nucleus-mediated excitotoxicity: a computational model of Parkinsonian neurodegeneration
Vignayanandam R Muddapu1, Srinivasa V. Chakravarthy1
1Bhupat and Jyoti Mehta School of Biosciences, Department of Biotechnology, IIT-Madras, Chennai, TN, India
Correspondence: Srinivasa V. Chakravarthy (schakra@iitm.ac.in)
BMC Neuroscience 2017, 18(Suppl 1):P29
Parkinson’s disease (PD) is a neurodegenerative disease affecting an estimated 6 million people worldwide. It is caused by the loss of dopaminergic neurons in the substantia nigra pars compacta (SNc), though the exact cause of the cell death is not clear. One hypothesis about the cause of SNc cell death, known as the “subthalamic nucleus-mediated excitotoxicity theory” [1], states that dopamine deficiency in the SNc leads to disinhibition and overactivity of the subthalamic nucleus (STN), which in turn causes excitotoxic damage to its target structures, including the SNc itself. To investigate this hypothesis, we built a computational spiking network model of the SNc-STN loop along with the STN-GPe loop. The model aims to capture the underlying dynamics during STN overactivity and to study the excitotoxicity it causes in the SNc. All nuclei are modeled as Izhikevich 2D neurons (Figure 1A). The model was tuned and simulated for normal and PD conditions, the latter characterized by loss of SNc cells. We incorporate a mechanism of programmed cell death, whereby an SNc cell under high stress (relative to an apoptotic threshold) kills itself. Stress on a given SNc cell was calculated from the mean firing history of the cell: higher firing activity leads to higher stress. Under normal conditions, the loop interactions between SNc and STN are such that the stress levels in the SNc do not exceed the apoptotic threshold, and therefore the SNc cells survive. But if a critical number of SNc cells die for some reason, the reduced SNc size leads to disinhibition of the STN, which becomes overactive; as a result, some of the SNc cells become overactive and die by programmed cell death. Thus, the initial loss of SNc cells leads to a runaway effect, producing an uncontrolled loss of cells in the SNc and characterizing the underlying neurodegeneration of PD.
The simulation results obtained from the proposed model in normal and PD conditions provided important insights regarding excitotoxicity in the SNc. Firstly, when the connections from SNc to STN were introduced (at t = 0 s), synchrony in the STN network decreased (Figure 1Bb, 0 to 10 s), as observed under normal physiological conditions [2]. A cell in the SNc is “killed” whenever its stress variable crosses a “stress threshold”. When the stress threshold is 11.5, all the SNc neurons survive. To emulate the PD condition in our model, the stress threshold was lowered from 11.5 to 11.3 at t = 10 s, which triggers a steady and uncontrolled loss of SNc cells (Figure 1Bb, roughly 17 s onwards). Synchrony in the STN network begins to increase only when more than 50% of the SNc cells are lost (Figure 1Bb, roughly 40 s onwards). The proposed model was thus able to exhibit STN-mediated excitotoxicity in the SNc. The connection strength from GPe to STN can be used as a parameter to delay or hasten the rate of cell loss. In future work, we will investigate whether deep brain stimulation of the STN can slow down the progression of cell loss in the PD condition.
Figure 1. A. The model architecture. B. Simulation plot (50 s) showing (a, c) the mean firing rate (mfr) and (b, d) the synchrony (syn) measure for the STN and SNc populations. (e) Number of SNc cells lost during the simulation
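The runaway loop can be caricatured in a few lines: each cell's stress low-pass filters its firing rate, cells die above threshold, and cell loss disinhibits the STN drive. Everything except the 11.5/11.3 thresholds quoted above is an illustrative assumption, not the spiking model itself:

import numpy as np

def snc_survival(threshold, n_cells=100, T=50.0, dt=1e-3, tau_s=5.0, seed=0):
    # stress low-pass filters each cell's firing rate (mean firing history);
    # a cell dies once stress crosses the apoptotic threshold; dead cells
    # disinhibit STN, modeled here as a multiplicative gain on SNc drive.
    rng = np.random.default_rng(seed)
    hetero = np.clip(rng.normal(0.0, 0.2, n_cells), -0.45, 0.45)
    alive = np.ones(n_cells, dtype=bool)
    stress = np.full(n_cells, 11.0)
    for _ in range(int(T / dt)):
        disinhibition = 1.0 + 2.0 * (1.0 - alive.mean())
        rate = disinhibition * (11.0 + hetero)
        stress += dt / tau_s * (-stress + rate)
        alive &= stress <= threshold
    return alive.sum()

print("survivors, threshold 11.5:", snc_survival(11.5))   # all survive
print("survivors, threshold 11.3:", snc_survival(11.3))   # runaway loss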
References
1. Rodriguez MC, Obeso JA, Olanow CW: Subthalamic nucleus-mediated excitotoxicity in Parkinson’s disease: a target for neuroprotection. Ann Neurol 1998, 44(Suppl 1): S175–S188.
2. Cragg SJ, Baufreton J, Xue Y, Bolam JP, Bevan MD: Synaptic release of dopamine in the subthalamic nucleus. Eur J Neurosci 2004, 20: 1788–1802.
P30 A novel approach for determining how many distinct types of neurons are in the Drosophila brain by sequencing neural structure
Chao-Chun Chuang, Nan-yow Chen
National center for high-performance computing, Hsinchu, Taiwan
Correspondence: Chao-Chun Chuang (summerhill001@gmail.com)
BMC Neuroscience 2017, 18(Suppl 1):P30
The brain can be divided into two parts: “hardware” and “software”. Hardware refers to the networks constructed between nerve cells, and software refers to the neuronal connectomes shaped by gene expression in nerve cells. For the hardware part of the brain, we have constructed a three-dimensional single-cell database of the Drosophila brain (FlyCircuit), currently containing about 30,000 neurons. For the software part, we focus on mapping the neural connections and pathways in the Drosophila brain. Defining how many distinct types of neurons exist in the Drosophila brain would provide a useful way to address this problem. There are many ways to define different types of neurons; obvious categories include structural differences in the shape and positioning of dendrites and axons, but such classifications are difficult and complicated to apply at scale. In the current study, we analyze about 30,000 neurons in FlyCircuit. For high-speed connectomic analysis of neuron morphology, algorithms adapted from those used in protein structure studies were developed to represent 3D neuron morphology as a 1D sequence. We applied this method to Drosophila neuron 3D structures to obtain a sequential neuropilar pathway (global neurite structure sequence) and a voxel distribution within neuropilar subdomains for neurites (local neurite structure sequence). In a constructed relative framework, each neuron is then assigned a specialized digital code according to its class (Figure 1A), its family from the global neurite structure sequence (Figure 1B), and its type from the local neurite structure sequence (Figure 1C). These codes can then be used to classify how many specialized types of neurons exist in the Drosophila brain. Finally, the standardization of neurite structure sequences can handle the massive 3D neuronal image data collected in experiments by different research groups, as well as manage bio-images with deeper neurological insight.
Figure 1. We applied this method to Drosophila neuron 3D structures to obtain a sequential neuropilar pathway (global neurite structure sequence) and a voxel distribution within neuropilar subdomains for neurites (local neurite structure sequence)
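A toy sketch of encoding a traced neurite as a 1D “global neurite structure sequence”, with hypothetical neuropil labels (the actual algorithm operates on registered 3D skeletons):

def global_sequence(visited_neuropils):
    # Collapse the ordered list of neuropils visited along the primary
    # neurite into a compact 1D code, dropping consecutive repeats.
    seq = []
    for region in visited_neuropils:
        if not seq or seq[-1] != region:
            seq.append(region)
    return "-".join(seq)

path = ["AL", "AL", "mALT", "mALT", "CA", "CA", "LH"]   # hypothetical trace
print(global_sequence(path))   # "AL-mALT-CA-LH"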
References
1. Lin CY, et al.: A comprehensive wiring diagram of the protocerebral bridge for visual information processing in the Drosophila brain. Cell Reports 2013, 3(5): 1739–1753.
2. Chiang AS, et al.: Three-dimensional reconstruction of brain-wide wiring networks in Drosophila at single-cell resolution. Current Biology 2011, 21(1): 1–11.
3. Jefferis GS, et al.: Comprehensive maps of Drosophila higher olfactory centers: spatially segregated fruit and pheromone representation. Cell 2007, 128: 1187–1203.
P31 Generating sequences in recurrent neural networks for storing and retrieving episodic memories
Mehdi Bayati1,2, Jan Melchior1, Laurenz Wiskott1, Sen Cheng1,2
1Institut für Neuroinformatik, Ruhr-Universität Bochum, D-44801 Bochum, Germany; 2Mercator Research Group ‘Structure of Memory’, Ruhr-University Bochum, Bochum, Germany
Correspondence: Mehdi Bayati (bmehdi5@gmail.com)
BMC Neuroscience 2017, 18(Suppl 1):P31
It has been suggested that the reliable propagation and transformation of neural activity within and between different brain regions is crucial for neural information processing. Furthermore, temporal sequences of neural activation have recently been proposed to play an important role in explaining the function of hippocampal neural circuits in episodic memory, our memory of the events experienced in our lives [1]. One central feature of the CRISP theory [1] is that hippocampal area CA3, because of its abundant recurrent connections, intrinsically produces temporal activity sequences. In this project, we first review the possible mechanisms by which a relatively fixed recurrent network structure (as a model of CA3) can generate neural activity sequences intrinsically. Next, we implement the CA3 models in a complete framework of cortico-hippocampal circuits (we use an EC-CA3-CA1-EC network), in which each subregion has a certain function based on the CRISP theory. During memory encoding, intrinsic CA3 sequences are hetero-associated with sequences that are driven by sensory inputs. Subsequently, sequences in CA3 are hetero-associated with sequences in CA1, and finally the CA1 activities are hetero-associated with the sensory inputs in the EC. During memory retrieval, intrinsic CA3 sequences have to be reactivated based on partial, noisy cues provided to the EC. Finally, the retrieved sequences in CA3 reactivate the initial input sequences in the EC via the CA1 layer. Memory performance is determined by the network’s ability to perform sequence completion: the more similar the network’s output is to the original sequence, the more sequence recall the network has achieved. As a measure of similarity, we use the Pearson correlation coefficient between the corresponding patterns of the originally stored and retrieved sequences in the different layers. Overall, we find that the neural network mechanism generating the sequences in CA3 has to be robust to noise in the triggering cue. Moreover, less temporally correlated patterns in CA3 give rise to more reliable retrieval of the sequence in the complete framework. To conclude, we find that, using the right model of CA3, the CRISP model retrieves the stored sequences almost correctly up to moderate noise levels.
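A minimal sketch of the similarity measure named above, averaging the Pearson correlation over corresponding patterns of a stored and a retrieved sequence (the data here are synthetic):

import numpy as np

def sequence_recall_quality(stored, retrieved):
    # stored, retrieved: arrays of shape (sequence_length, n_neurons);
    # returns the mean Pearson correlation across corresponding patterns.
    return np.mean([np.corrcoef(s, r)[0, 1]
                    for s, r in zip(stored, retrieved)])

rng = np.random.default_rng(0)
stored = rng.standard_normal((10, 200))
noisy_retrieval = stored + 0.5 * rng.standard_normal((10, 200))
print(f"recall quality: {sequence_recall_quality(stored, noisy_retrieval):.2f}")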
Acknowledgements
This work was supported by grants from the German Research Foundation (Deutsche Forschungsgemeinschaft, DFG; SFB 874, projects B2 and B3) and by a grant from the Stiftung Mercator.
Reference
1. Cheng S: The CRISP theory of hippocampal function in episodic memory. Frontiers in Neural Circuits 2013, 7:88.
P32 Modeling replay and theta sequences in a 2-d recurrent neural network with plastic synapses
Amir Hossein Azizi1, Kamran Diba2, Sen Cheng1
1Institut für Neuroinformatik, Ruhr University Bochum (RUB), Bochum, 44801, Germany; 2Department of Psychology, University of Wisconsin-Milwaukee, Milwaukee, WI 53201, USA
Correspondence: Amir Hossein Azizi (amir.azizi@rub.de)
BMC Neuroscience 2017, 18(Suppl 1):P32
During awake immobility or sleep, place cells are reactivated in a sequential order [1]. This reactivation co-occurs with sharp wave/ripples (SWRs) in the hippocampal local field potential (LFP). These replay sequences reflect the sequence of the animal's prior spatial behaviour or the upcoming trajectory to a goal location. During running, the LFP shows characteristic theta oscillations, and the activity of place cells is modulated by theta phase in addition to the animal's location. This joint modulation, called phase precession, results in place cells firing in sequential order within a theta cycle. The causal relationship between phase precession and theta sequences remains unclear. One possibility is that phase precession leads to sequential ordering within theta cycles. Alternatively, phase precession might result from the directional activation of a group of cells with overlapping place fields. For instance, Romani and Tsodyks recently modelled phase precession using an unstable moving bump of activity in a 1-d continuous attractor neural network [2], in which the driving force of the sequential activity is short-term plasticity of the synaptic connections. This model also generates offline replay activity in a different operating mode. Since the model included no long-term plasticity, the resulting replay and theta sequences reflected only the recent behavior of the animal within the last few seconds, and the associated span of phase precession was limited. Recent studies, however, point to a separation of phase precession and theta sequences. Although phase precession can be found immediately in a novel environment, the development of theta sequences requires experience [3], and the goal location, rather than the extent of phase precession, appears to determine the length of theta look-ahead sequences [4]. Furthermore, a recent study suggests a dissociation between replay and theta sequences [5]: only SWR-associated replay activity included portions of an environment that the animal had learned to avoid, while theta sequences did not penetrate into the avoided region. Here we study phase precession, theta sequences, replay activity, and the relationships between these phenomena in a 2-d continuous attractor network model. The units in the network exhibit spike-frequency adaptation, which destabilizes the bump attractor, and the synapses undergo long-term plasticity. This model can generate enhanced replay after exposure, theta sequences, and phase precession. The spatial extent of theta sequences is controlled by the running speed of the virtual animal (Figure 1), as hypothesized by Wu et al. [5]. Our preliminary findings suggest that replay and theta sequences can be accounted for within a single model.
Figure 1. Theta sequences reflect the goal location. The decoded location of the animal in each theta cycle is indicated by arrows starting from the current location of the animal. The arrowheads show the direction of the sequential activity. A. When the animal runs slowly, decoded sequences do not reach the avoided region (80–100 cm). B. When the animal runs faster, decoded theta sequences reflect trajectories into the avoided region
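As a toy illustration of the destabilized-bump mechanism described above, the following sketch simulates a 1-d ring analogue (the actual model is 2-d and includes long-term synaptic plasticity): spike-frequency adaptation pushes the activity bump away from its current location so that it travels, producing sequential activation. All parameters are illustrative, not fitted.

```python
# Hedged 1-d rate-model sketch: local-excitation ring + adaptation variable.
import numpy as np

N, dt, steps = 128, 0.5, 4000                 # ms-scale units, illustrative
tau_r, tau_a, g, I0 = 10.0, 200.0, 0.5, 1.0
theta = np.linspace(0, 2*np.pi, N, endpoint=False)
W = (-1.0 + 4.0*np.cos(theta[:, None] - theta[None, :])) / N  # ring connectivity

r = np.exp((np.cos(theta - np.pi) - 1.0) / 0.2)  # initial activity bump
a = np.zeros(N)                                   # adaptation variable

for step in range(steps):
    drive = W @ r + I0 - a
    r += dt/tau_r * (-r + np.maximum(drive, 0.0)) # rate dynamics
    a += dt/tau_a * (-a + g*r)                    # spike-frequency adaptation
    if step % 1000 == 0:
        print("bump centre (rad): %.2f" % theta[np.argmax(r)])
```

With suitable parameters the bump drifts steadily, which is the rate-model analogue of the sequential reactivation the abstract attributes to adaptation.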
References
1. Diba K, Buzsáki G. Forward and reverse hippocampal place-cell sequences during ripples. Nat. Neurosci. 2007; 10:1241–2.
2. Romani S, Tsodyks M. Short-term plasticity based network model of place cells dynamics. Hippocampus 2015; 25:94–105.
3. Feng T, Silva D, Foster DJ. Dissociation between the Experience-Dependent Development of Hippocampal Theta Sequences and Single-Trial Phase Precession. J. Neurosci. 2015; 35:4890–902.
4. Wikenheiser AM, Redish AD. Hippocampal theta sequences reflect current goals. Nat. Neurosci. 2015; 18:289–94.
5. Wu C-T, Haggerty D, Kemere C, Ji D. Hippocampal awake replay in fear memory retrieval. Nat. Neurosci. 2017, in press
P33 Biophysically detailed model of cortical activity in response to moving gratings
Elena Y. Smirnova1,2, Elena G Yakimova3, Anton V. Chizhov1,2
1Ioffe Institute, St.-Petersburg, 194021, Russian Federation; 2Sechenov Institute of Evolutionary Physiology and Biochemistry of RAS, St.-Petersburg, 194223, Russian Federation; 3Pavlov Institute of Physiology, St.-Petersburg, 199034, Russian Federation
Correspondence: Elena Y. Smirnova (elena.smirnova@mail.ioffe.ru)
BMC Neuroscience 2017, 18(Suppl 1):P33
The description of the mechanisms underlying visual feature selectivity of cortical neurons is still under development. We propose a model that implements a mechanism of direction selectivity (DS) of primary visual cortex (V1) neurons within our previous model of 2-d distributed neuronal populations in V1, selective to the orientation of stationary gratings [1]. The model is based on the conductance-based refractory density approach, which provides both a biophysically detailed description of neuronal populations in terms of ionic channel conductances for one- or two-compartment neurons and good precision in statistically equilibrium and non-equilibrium regimes of ensemble activity. Coupled excitatory and inhibitory neurons interact via glutamatergic and GABAergic synapses. Here, we extend this model with a filter-based description of retinothalamic visual signal processing [2]. The mechanism of DS is based on asymmetric projections from lagged and non-lagged thalamic neurons to the cortex [3], such that a V1 neuron preferring a certain direction receives non-lagged input from one side of its thalamic footprint and lagged input from the other side. The model realistically reproduces membrane potential, firing rate, synaptic conductances, etc., in response to moving gratings. Simulations show that the implemented mechanism of DS provides only moderate direction tuning of the time-varying characteristics averaged over the population; however, DS is clearly observed in maps of time-averaged activity, similar to experimental evidence obtained by optical imaging. Neither the time-averaged activity of inhibitory neurons nor the time-averaged input to the cortex is selective to stimulus direction. The results demonstrate how DS maps can originate from thalamic input that is transiently selective to direction but non-selective on average over time.
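The asymmetric lagged/non-lagged scheme described above can be illustrated with a toy coincidence computation (not the population model itself): a V1 unit sums a non-lagged input from one side of its thalamic footprint and a delayed, lagged input from the other, so the two transients coincide only for motion in the preferred direction. All timings below are invented.

```python
# Hedged sketch of DS from lagged vs. non-lagged thalamic inputs.
import numpy as np

dt, lag = 1.0, 40.0                       # ms; delay of the "lagged" LGN cell
t = np.arange(0.0, 200.0, dt)

def lgn_transient(arrival):
    """Transient LGN response peaking when the grating reaches the subfield."""
    return np.exp(-0.5 * ((t - arrival) / 10.0) ** 2)

def v1_drive(direction):
    # the grating reaches the left subfield first (+1) or last (-1)
    left, right = (50.0, 90.0) if direction > 0 else (90.0, 50.0)
    nonlagged = lgn_transient(right)      # non-lagged input, right side
    lagged = lgn_transient(left + lag)    # lagged input, left side
    return float(np.max(nonlagged + lagged))

print("preferred: %.2f, null: %.2f" % (v1_drive(+1), v1_drive(-1)))
```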
Acknowledgements
This work was supported by the Russian Foundation for Basic Research (project 15-04-06234).
References
1. Chizhov AV: Conductance-based refractory density model of primary visual cortex. J Comput Neurosci 2014, 36: 297–319.
2. Dayan P, Abbott LF: Theoretical neuroscience: computational and mathematical modeling of neural systems. The MIT Press 2001.
3. Vigeland LE, Contreras D, Palmer LA: Synaptic mechanisms of temporal diversity in the lateral geniculate nucleus of the thalamus. J Neurosci 2013, 33(5): 1887–1896.
P34 NeuriteSLIM – Shrinking Neural Fibers for Visualizing the Connectome
Nan-Yow Chen1, Chi-Tin Shih2, Chao-Chun Chuang1
1High Performance Computing Division, National Center for High‐Performance Computing, Hsinchu, Taiwan; 2Department of Applied Physics, Tunghai University, Taichung, Taiwan
Correspondence: Nan-Yow Chen (nanyow@nchc.narl.org.tw)
BMC Neuroscience 2017, 18(Suppl 1):P34
A connectome assembled through fluorescence imaging is regarded as an important step toward understanding how brains work [1]. However, because fluorescent signals spread beyond their sources, imaged neural fibers appear larger than their actual sizes. Consequently, the number of neurons that can be simultaneously visualized in the standard brain is limited, because signals from different neurons mix together and the neurons become indistinguishable. To construct visualizations of the connectome, we developed an algorithm called NeuriteSLIM, which shrinks the neurite thickness while preserving length, shape, and radial intensity distribution. With this tool, we can reconstruct and visualize the Drosophila connectome at single-cell resolution, providing a useful tool for future connectome studies.
The goal of NeuriteSLIM is to shrink the fiber thickness while preserving the cross-sectional shape and intensity distribution of the fibers. First, each voxel is divided into \( n_{x} \times n_{y} \times n_{z} \) smaller, nearly cubic voxels with the same intensity as the original voxel. The next step is to identify the nearest central point for each voxel (as shown in Figure 1A and B). The intensity of each voxel is then moved from its original position to the shrunken position, i.e., the voxel in the cross section closest to the point interpolated, according to the desired shrink ratio, between the original position and the central point. If more than one original voxel is moved to the same new voxel, the intensity of the new voxel is the average intensity of those original voxels. Figure 1C shows the shrunken result for Figure 1A. Figure 1D shows 120 neurons warped into the standard model brain [2], providing a global connectomic view of the Drosophila brain. With such a large number of neurons packed into the same brain, the neurites inevitably mix together; NeuriteSLIM shrinks the neurite radii and provides a clearer connectomic visualization (Figure 1E).
Figure 1. A–C. The shrinking algorithm of NeuriteSLIM. D–E. Results for 120 neurons before and after applying NeuriteSLIM
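The shrinking step can be illustrated with a 2-d toy (the real algorithm subdivides 3-d voxels and moves intensity only within each cross-section, preserving fiber length; here a single centre point stands in for the local centreline, purely for brevity):

```python
# Hedged 2-d sketch: move each voxel's intensity a fraction `ratio` of the way
# toward a centre point; intensities colliding in one target voxel are averaged.
import numpy as np

img = np.zeros((11, 11))
img[3:8, 5] = 1.0
img[5, 3:8] = 1.0                      # a thick toy "fibre" cross
centre = np.array([5, 5])              # stand-in for the nearest central point
ratio = 0.5                            # desired shrink ratio

out = np.zeros_like(img)
counts = np.zeros_like(img)
for (i, j), v in np.ndenumerate(img):
    if v == 0.0:
        continue
    pos = np.array([i, j])
    new = np.round(pos + ratio * (centre - pos)).astype(int)
    out[tuple(new)] += v               # accumulate moved intensity
    counts[tuple(new)] += 1
out = np.divide(out, counts, out=np.zeros_like(out), where=counts > 0)
print(out)                             # the cross is pulled toward the centre
```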
References
1. Alivisatos AP, et al.: The Brain Activity Map Project and the Challenge of Functional Connectomics. Neuron 2012, 74: 970–974.
2. Chiang AS, et al.: Three-Dimensional Reconstruction of Brain-wide Wiring Networks in Drosophila at Single-Cell Resolution. Current Biology 2011, 21: 1–11.
P35 Identification of models of sensory neural circuits consisting of a nonlinear filter in series with a leaky integrate-and-fire neuron
Dorian Florescu, Daniel Coca
Department of Automatic Control and Systems Engineering, University of Sheffield, Sheffield, South Yorkshire, S1 3JD, UK
Correspondence: Daniel Coca (d.coca@sheffield.ac.uk)
BMC Neuroscience 2017, 18(Suppl 1):P35
The early sensory processing circuits in the brain, incorporating sensory neurons and downstream spiking neurons, have often been represented as cascade models consisting of a linear or nonlinear filter [1,2] followed by a model of spike generation such as a Poisson or integrate-and-fire model. The cascade model is usually inferred directly from experimental measurements using system identification methods.
Although the leaky integrate-and-fire (LIF) neuron model is much simpler than a biophysically realistic Hodgkin-Huxley model, the LIF model has been used successfully to predict experimentally recorded spike trains [3]. The identification of a linear filter in cascade with a Hodgkin-Huxley neuron was considered in [1] by assuming prior knowledge of the spiking neuron parameters or by assuming that measurements of the input to the spiking neuron are available. A number of methods are available to estimate a cascade model consisting of a linear filter in series with a LIF neuron (LF-LIF). In [3] this involves maximizing the likelihood of observed spike responses to a stochastic visual stimulus, assuming that the threshold parameter is known, whilst in [4] the parameters of an input-output equivalent model are estimated by assuming that the LIF parameters are known a priori. Here we propose, for the first time, a method to identify a cascade model consisting of an arbitrary nonlinear filter in series with a leaky integrate-and-fire neuron (NF-LIF) in which both the parameters of the LIF neuron and the structure and parameters of the nonlinear filter are unknown. Furthermore, the input to the spiking neuron is assumed to be corrupted by Gaussian white noise and unavailable for measurement.
A new input-output equivalent representation of the circuit is proposed, in which one of the parameters represents the minimum step amplitude required to trigger a response of the NF-LIF circuit; by analogy with the rheobase of a biological neuron, we call this parameter the rheobase of the NF-LIF circuit. The identification of the NF-LIF circuit has two stages: the LIF model parameters are estimated first, followed by the identification of the nonlinear filter. To estimate the LIF parameters, we derive the theoretical steady-state firing rate of the NF-LIF in response to a step input of a given amplitude; for this estimation stage only, the filter is approximated by a linear filter. Subsequently, we fit this theoretical output to noisy measurements of the circuit's responses to repeated step inputs of different amplitudes, using the Levenberg-Marquardt algorithm. Using the experimentally observed rheobase as the initial parameter guess significantly increases the performance of the algorithm. Once the LIF parameters are estimated, an orthogonal forward selection algorithm is used to identify the NARMAX model of the scaled nonlinear filter, based on the nonlinear filter input and the reconstructed filter output, i.e., the reconstructed input to the LIF neuron.
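The first estimation stage can be sketched as a curve fit of a theoretical LIF steady-state rate to step-response data. The sketch below uses scipy's curve_fit with the Levenberg-Marquardt method on synthetic data; the textbook rate formula f(I) = 1/(t_ref + tau ln(I/(I − v_th))), with unit membrane resistance, is a standard LIF result and stands in for the abstract's derived expression.

```python
# Hedged sketch: fit LIF parameters to steady-state firing rates for step inputs.
import numpy as np
from scipy.optimize import curve_fit

def lif_rate(I, tau, v_th, t_ref):
    """Steady-state LIF rate for constant input I (R = 1 for simplicity)."""
    I = np.asarray(I, dtype=float)
    rate = np.zeros_like(I)
    supra = I > v_th                      # below rheobase the neuron is silent
    rate[supra] = 1.0 / (t_ref + tau * np.log(I[supra] / (I[supra] - v_th)))
    return rate

true = (20.0, 1.0, 2.0)                   # tau (ms), v_th, t_ref (ms) - toy values
I_steps = np.linspace(1.2, 5.0, 12)       # step amplitudes above rheobase
rates = lif_rate(I_steps, *true) + 0.001 * np.random.randn(I_steps.size)

# as in the text, the observed rheobase (~1.0) seeds the initial guess for v_th
popt, _ = curve_fit(lif_rate, I_steps, rates, p0=(10.0, 1.0, 1.0), method='lm')
print(popt)                               # should approach `true`
```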
References
1. Lazar AA: Population encoding with Hodgkin-Huxley neurons. IEEE Transactions on Information Theory 2010, 56(2): 821–837.
2. Lazar AA, Slutskiy YB: Spiking neural circuits with dendritic stimulus processors. Journal of Computational Neuroscience 2015, 38(1): 1–24.
3. Pillow JW, Paninski L, Uzzell VJ, Simoncelli EP, Chichilnisky EJ: Prediction and decoding of retinal ganglion cell responses with a probabilistic spiking model. Journal of Neuroscience 2005, 25(47): 11003–11013.
4. Lazar AA, Slutskiy Y: Identifying dendritic processing. Advances in Neural Information Processing Systems 2010: 1261–1269.
P36 Modelling fluctuations in resting-state functional connectivity in epilepsy
Julie Courtiol1, Spase Petkoski1, Viktor K Jirsa1
1Aix Marseille Univ, Inserm, INS, Institut de Neurosciences des Systèmes, Marseille, France
Correspondence: Julie Courtiol (julie.courtiol@univ-amu.fr)
BMC Neuroscience 2017, 18(Suppl 1):P36
Understanding the mechanisms behind epilepsy is one of the most challenging problems in neuroscience. Recent efforts have provided valuable evidence that epileptic activity involves widespread brain networks rather than single sources [1] and that these networks contribute to epilepsy-related interictal brain alterations [2]. To better understand the underlying alterations of functional connectivity (FC), we propose a whole-brain computational modelling approach to resting state, using patient-specific structural connectivity derived from diffusion tensor imaging (DTI) in a patient with clinically diagnosed bitemporal epilepsy, and a generic 2D oscillator for the intrinsic activity of each node. In the model, we systematically alter the neural excitability of nodes in healthy control subjects to progressively incorporate a propagation zone and an epileptogenic zone (EZ) according to the clinical criteria of the patient, and examine the effects of this manipulation on the simulated FC. This is then compared with the empirical FC of the patient and of the healthy control group.
Our results reveal a significant increase in several FC-derived measures across the entire brain as the epileptogenic strength of a node increases, in line with its divergence from the bifurcation point. In addition, this effect is enhanced for more strongly connected nodes, according to the individual connectome, and for a larger epileptogenic zone. These results support the view that perturbations of whole-brain dynamics, due to the epileptogenic activity of certain nodes, cause predictable, individualized alterations of the FC [3].
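A minimal sketch of the modelling setup described above (not the authors' implementation): Hopf-like 2D oscillators at each node, coupled diffusively through a toy connectome, with the node bifurcation parameter a_i playing the role of excitability, so raising a_i above zero mimics an epileptogenic node. All values are illustrative.

```python
# Hedged sketch: network of Stuart-Landau (Hopf normal form) oscillators.
import numpy as np

n = 8
rng = np.random.default_rng(1)
C = rng.random((n, n)); np.fill_diagonal(C, 0.0)   # toy structural connectome
a = -0.5 * np.ones(n); a[0] = 0.3                  # node 0 made "epileptogenic"
omega, G, dt = 2*np.pi*0.05, 0.2, 0.1

z = 0.1 * (rng.standard_normal(n) + 1j*rng.standard_normal(n))
ts = []
for _ in range(20000):
    coupling = G * (C @ z - C.sum(1) * z)          # diffusive coupling
    dz = (a + 1j*omega - np.abs(z)**2) * z + coupling
    noise = rng.standard_normal(n) + 1j*rng.standard_normal(n)
    z = z + dt*dz + 0.01*np.sqrt(dt)*noise
    ts.append(z.real.copy())

fc = np.corrcoef(np.array(ts).T)                   # simulated functional connectivity
print(fc[0, 1:])                                   # FC of the epileptogenic node
```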
References
1. Jirsa VK, Proix T, Perdikis D, Woodman MM, Wang H, Gonzalez-Martinez J, Bernard C, Bénar C, Guye M, Chauvel P, Bartolomei F: The Virtual Epileptic Patient: Individualized whole-brain models of epilepsy spread. Neuroimage 2017, 145(Pt B): 377–388.
2. Wirsich J, Perry A, Ridley B, Proix T, Golos M, Bénar C, Ranjeva JP, Bartolomei F, Breakspear M, Jirsa V, Guye M: Whole-brain analytic measures of network communication reveal increased structure-function correlation in right temporal lobe epilepsy. Neuroimage: Clin. 2016, 11: 707–718.
3. Courtiol J, Petkoski S, Jirsa VK: in preparation.
P37 Exact solutions to a Wilson-Cowan network of excitatory and inhibitory neurons whose dynamics is triggered by one single spike
Roberto J. M. Covolan
Department of Neurology, State University of Campinas, Campinas, SP, 13083-887, Brazil
Correspondence: Roberto J. M. Covolan (covolan@ifi.unicamp.br)
BMC Neuroscience 2017, 18(Suppl 1):P37
The Wilson-Cowan equations [1,2] are a widely used theoretical model representing a network of coupled populations of excitatory and inhibitory neurons. The model consists of a set of integro-differential equations that describe the time evolution of the activity levels of the excitatory and inhibitory populations, using a nonlinear sigmoidal function to represent the interactions between these populations. Typical applications require approximation methods and numerical solutions.
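For reference, a commonly used space-clamped form of the Wilson-Cowan equations reads as follows (the particular setup solved in this abstract may differ in details):

```latex
\begin{aligned}
\tau_E \frac{dE}{dt} &= -E + (k_E - r_E E)\, S_E\!\big(c_{EE} E - c_{EI} I + P(t)\big),\\
\tau_I \frac{dI}{dt} &= -I + (k_I - r_I I)\, S_I\!\big(c_{IE} E - c_{II} I + Q(t)\big),
\end{aligned}
```

where E and I are the excitatory and inhibitory activity levels, S_E and S_I are sigmoidal response functions, the c coefficients are coupling strengths, r_E and r_I account for refractoriness, and P(t) and Q(t) are external inputs.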
In this paper, an analytical method is presented that allows one to obtain exact solutions to a particular setup of the Wilson-Cowan equations. The method is based on a spinorial representation and a Feynman-like procedure of ordered exponential operators [3], further developed by Fujiwara [4].
The obtained solutions depend on specific initial conditions in the form of delta functions, interpreted as action potential-like inputs; more general results can therefore be readily generated by applying an impulse-train input.
Conclusion: Feynman's procedure of ordered exponential operators, later expressed by Fujiwara in terms of expansional operators, has been successfully applied to obtain closed-form solutions to a particular setup of the Wilson-Cowan equations.
Acknowledgements
This work has been supported by FAPESP, Grant Number 2013/07559-3.
References
1. Wilson HR, Cowan JD: Excitatory and inhibitory interactions in localized populations of model neurons. Biophys J 1972, 12: 1–24.
2. Wilson HR, Cowan JD: A mathematical theory of the functional dynamics of nervous tissue. Kybernetik 1973, 13: 55–80.
3. Feynman RP: An Operator Calculus Having Applications in Quantum Electrodynamics. Phys. Rev. 1951, 84: 108.
4. Fujiwara I: Operator Calculus of Quantized Operator. Prog. Theor. Phys. 1952, 7: 433.
P38 Encoding variable cortical states with short-term spike patterns
Bartosz Teleńczuk1, Richard Kempter2, Gabriel Curio3, Alain Destexhe1
1Unité de Neurosciences, Information et Complexité, CNRS, 91198 Gif-sur-Yvette, France; European Institute for Theoretical Neuroscience, CNRS, Paris, France; 2Institute for Theoretical Biology, Humboldt-Universität zu Berlin, Berlin, Germany; 3Department of Neurology, Universitätsmedizin Charité, Berlin, Germany
Correspondence: Bartosz Teleńczuk (telenczuk@unic.cnrs-gif.fr)
BMC Neuroscience 2017, 18(Suppl 1):P38
Neurons in the primary somatosensory cortex (S1) respond to peripheral stimulation with synchronised bursts of spikes, which lock to macroscopic 600 Hz EEG wavelets [1,2]. The mechanism of burst generation and synchronisation in S1 is not yet understood. We fitted unit recordings from macaque monkeys with a Poisson-like model including the refractory period (spike-train probability model, STPM). The model combines high-amplitude synaptic inputs with absolute and relative refractoriness. We show that these two properties can reproduce the synchronised bursts observed in S1 neurons. The probabilistic nature of the model introduces trial-to-trial response variability. As in the experimental data, this variability can be decomposed into stereotypical spike patterns consisting of short bursts with variable numbers of spikes and variable within-burst intervals. Next, we extend the model to a population of uncoupled neurons receiving common inputs that fluctuate in amplitude across trials. We demonstrate that these fluctuations introduce correlations between neurons and between the single-neuron spike patterns and the population activity (high-frequency EEG wavelets), as observed experimentally [2].
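A minimal sketch of a spike-train probability model of this kind (constants and intensity profile invented, not the fitted STPM): a time-varying firing intensity is gated by absolute and relative refractoriness, and the probabilistic spike generation yields variable short bursts across trials.

```python
# Hedged sketch: Poisson-like spiking with absolute/relative refractoriness.
import numpy as np

dt = 0.1                                                 # ms
t = np.arange(0.0, 50.0, dt)
intensity = 0.02 + 0.5*np.exp(-0.5*((t - 15.0)/1.0)**2)  # transient input, toy

abs_ref, tau_rel = 1.0, 2.0                              # refractory constants, toy

def simulate(rng):
    last_spike, spikes = -np.inf, []
    for i, ti in enumerate(t):
        elapsed = ti - last_spike
        if elapsed < abs_ref:
            w = 0.0                                       # absolute refractory period
        else:
            w = 1.0 - np.exp(-(elapsed - abs_ref)/tau_rel)  # relative recovery
        if rng.random() < intensity[i]*w*dt:
            spikes.append(ti); last_spike = ti
    return spikes

rng = np.random.default_rng(0)
trials = [simulate(rng) for _ in range(100)]              # variable burst patterns
print(trials[0])
```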
To further study the biophysical mechanism behind S1 burst responses, we developed a single-compartment model (leaky integrate-and-fire, LIF) receiving intracortical and feedforward thalamic inputs. The intracortical inputs are assumed to be in a balanced state, where excitatory and inhibitory currents nearly cancel each other out, putting the neuron in the high-conductance state [3]. This enables the model neuron to respond quickly to a transient barrage of thalamocortical inputs and to generate bursts of spikes tightly locked to the stimulus onset. The transient response decays quickly to baseline, and the burst is terminated by the activity-dependent depression of thalamocortical synapses. This model reproduces many features of the experimental data, in particular the burst statistics and the presence of spike patterns.
Our findings show that simple feedforward processing of peripheral inputs could give rise to neuronal responses with non-trivial temporal and population statistics. We conclude that neural systems could use refractoriness to encode variable cortical states into stereotypical short-term spike patterns amenable to processing at neuronal time scales (tens of milliseconds). See [4] for more details.
Acknowledgements
This study was partially funded by the CNRS, European Commission (Human Brain Project, H2020-720270) and BMBF (grants BCCN-B1, 01GQ1001A and 01GQ0972).
References
1. Baker S, Curio G, Lemon R. EEG oscillations at 600 Hz are macroscopic markers for cortical spike bursts. J. Physiol. 2003; 550:529–534.
2. Telenczuk B, Baker SN, Herz AVM, Curio G. High-frequency EEG covaries with spike burst patterns detected in cortical neurons. J. Neurophysiol. 2011; 105:2951–9.
3. Destexhe A, Rudolph M, Paré D. The high-conductance state of neocortical neurons in vivo. Nat. Rev. Neurosci. 2003; 4:739–51.
4. Telenczuk B, Kempter R, Curio G, Destexhe A: Encoding variable cortical states with short-term spike patterns. bioRxiv 2017, preprint, doi:10.1101/098210, http://biorxiv.org/content/early/2017/01/04/098210.
P39 Cat Paw-shaking as a Transient Response to Sensory Input to Locomotion CPG
Jessica Parker1, Alexander N. Klishko2, Boris I. Prilutsky2, Gennady Cymbalyuk1
1Neuroscience Institute, Georgia State University, Atlanta, GA 30303, USA; 2School of Biological Sciences, Georgia Institute of Technology, Atlanta, GA 30332, USA
Correspondence: Jessica Parker (jgreen59@student.gsu.edu)
BMC Neuroscience 2017, 18(Suppl 1):P39
It has not yet been determined whether the same CPG can generate rhythmic activity for distinct behaviors with significantly different frequencies (1 vs 10 Hz), such as locomotion and paw-shake responses. We previously published a model of a multistable CPG constructed as a half-center oscillator (HCO) consisting of two reciprocally inhibitory interneurons [1]. This HCO produces the stable rhythms associated with both locomotion and paw-shake responses. We also used the HCO model to demonstrate that a multifunctional CPG controlling a neuromechanical model of a cat hind limb could reproduce the essential features of the rhythm, kinematics, and muscle synergies of cat locomotion and paw-shake responses [1]. Here, we show that a pulse of current can elicit a transient paw-shake-like rhythm in either the multistable HCO model or a monostable version of the model. Our model predicts that the flexor burst duration and the extensor interburst interval increase throughout a single paw-shake-like response. We tested these predictions by eliciting paw-shake responses in cats: a piece of adhesive tape was attached to the hind paw while the cat walked on a level walkway, and hind limb kinematics and EMG activity of various hind limb muscles were recorded [3]. The cats performed paw-shake responses intermittently while walking, each response consisting of 4 to 10 cycles. In accordance with previous studies [2], we found a progressive increase in EMG burst period across consecutive cycles of paw-shake responses. Furthermore, we found a progressive increase in EMG burst duration across consecutive paw-shake cycles for flexors and a progressive increase in EMG interburst interval for extensors. We conclude that a paw-shake response might be a transient response to sensory input to the locomotion CPG.
Acknowledgements
We acknowledge support from NSF grant PHY-0750456 to Gennady Cymbalyuk, and from NIH grants P01 HD32571, R01 EB012855, and R01 NS048844 and the Center for Human Movement Studies at Georgia Tech to Boris I. Prilutsky.
References
1. Bondy B, Klishko AN, Edwards DH, Prilutsky BI, Cymbalyuk G: Control of cat walking and paw-shake by a multifunctional central pattern generator. In: Neuromechanical Modeling of Posture and Locomotion. edn. New York: Springer; 2016: 333–359.
2. Koshland GF, Smith JL: Mutable and immutable features of paw-shake responses after hindlimb deafferentation in the cat. J Neurophysiol 1989, 62(1):162–173.
3. Hodson-Tole EF, Pantall AL, Maas H, Farrell BJ, Gregor RJ, Prilutsky BI: Task dependent activity of motor unit populations in feline ankle extensor muscles. J Exp Biol 2012, 215:3711–3722.
P40 Population Coding with Two-Dimensional Feature Maps in the Retina
Felix Franke1, Andreas Hierlemann1, Rava Azeredo da Silveira2,3
1Department of Biosystems Science and Engineering, ETH Zürich, Basel, Switzerland; 2Ecole Normale Supérieure, Paris, France; 3Centre National de la Recherche Scientifique, Paris, France
Correspondence: Felix Franke (felix.franke@bsse.ethz.ch)
BMC Neuroscience 2017, 18(Suppl 1):P40
A robust internal representation of relevant variables in the external world is necessary to guide animal behavior. The brain constructs this internal representation from sensory inputs. For visual perception, the nervous system relies on a two-dimensional sensor array: the retina. Sensor arrays encode external variables with two-dimensional feature maps: the concerted activity of photoreceptors encodes information about brightness and color across the entire retina. These brightness levels also encode all information about more complex features of the visual stimulus, such as the location, velocity, and contrast of moving objects. But to access these more complex features, the nervous system first needs to extract them from the photoreceptor activity. Retinal circuitry processes the photoreceptor activity and sends the processed information, via the spiking activity of retinal ganglion cells, to the brain. There are over 30 different types of retinal ganglion cells, each cell type tiling the entire retina with its receptive fields and each cell type sending information about different features to the brain. All the information the brain receives about the visual world is thus encoded in the concerted activity of these >30 cell types, each representing a two-dimensional map with its particular sensitivity, e.g., to the direction of local movement or to the presence of an edge.
Here, we analyze the encoding properties of direction-selective retinal ganglion cells for position, velocity, and direction of moving objects. To this end, we use tuning functions estimated from real recordings of mouse retinae and Fisher Information to calculate the precision of the neural code. We estimate the coding precision for different visual features (position, velocity, direction), and for a variety of stimuli. We then proceed as follows: First, we vary single-cell properties, i.e., the tuning functions, and quantify the impact of these changes on the coding precision. Second, we change population properties, e.g., the geometric arrangement of the receptive fields within the mosaic, and, again, quantify the consequences.
When varying single-cell properties, we concentrate on the feature selectivity of the cells, i.e., direction selectivity. In particular, we compare the precision of encoding the direction and position of a moving object in a mosaic of direction-selective cells vs. a mosaic of non-direction-selective cells (with matched mean firing rates).
We find that non-direction-selective cells encode direction (and position) more faithfully than direction-selective cells if the moving object is larger than the receptive-field size and the distance between receptive-field centers within the mosaic. At first sight this is counter-intuitive, as selectivity for a given feature should increase the encoding precision for that feature. However, a feature can be encoded by both feature-selective and non-selective cells; the question is how easily it can be decoded. This yields a hypothesis for the function of direction-selective cells: either they are specialized to encode direction locally within their receptive fields, or they satisfy a requirement to decode direction early in the visual pathway, presumably to allow the decoding of more sophisticated visual features immediately downstream.
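The precision calculation used above can be sketched for the simplest case of independent Poisson neurons, where the population Fisher information is J(θ) = Σ_i f_i'(θ)²/f_i(θ) and the Cramér-Rao bound 1/√J(θ) limits decoding precision. The von Mises tuning shapes below are invented placeholders for the tuning functions estimated from the recordings.

```python
# Hedged sketch: Fisher information of a toy direction-tuned population.
import numpy as np

theta = np.linspace(-np.pi, np.pi, 1000)
prefs = np.linspace(-np.pi, np.pi, 16, endpoint=False)  # toy preferred directions
kappa, rmax, r0 = 2.0, 30.0, 1.0

def tuning(th, pref):
    return r0 + rmax * np.exp(kappa * (np.cos(th - pref) - 1.0))

J = np.zeros_like(theta)
for p in prefs:
    f = tuning(theta, p)
    df = np.gradient(f, theta)        # numerical derivative f'(theta)
    J += df**2 / f                    # independent-Poisson Fisher information

print("worst-case decoding s.d. (rad): %.4f" % (1.0/np.sqrt(J)).max())
```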
P41 A detailed computational reconstruction of the cerebellum granular layer network predicts large scale spatiotemporal dynamics of neuronal activity
Stefano Casali1, Stefano Masoli1, Martina Rizza1,3, Egidio D’Angelo1,2
1Department of Brain and Behavioral Sciences, University of Pavia, Pavia, Italy 27100; 2Brain Connectivity Center, C. Mondino National Neurological Institute, Pavia, Italy 27100; 3Dipartimento di Informatica, Sistemistica e Comunicazione, Università degli Studi di Milano-Bicocca, Viale Sarca, Italy
Correspondence: Stefano Masoli (stefano.masoli@unipv.it)
BMC Neuroscience 2017, 18(Suppl 1):P41
The main aim of the present work was to demonstrate that the high-level spatiotemporal dynamics taking place in the cerebellar granular layer can be understood as emergent phenomena, naturally determined by the complex interactions among microscopic variables, such as the specific topology of intercellular connectivity and the neurophysiological properties of single neurons. To this aim, we developed an updated large-scale computational model of the cerebellar granular layer in Python-NEURON [1]. Our results show that (1) the center-surround profile describing the ratio between excitation and inhibition observed in slices of cerebellar tissue depends on the spatial arrangement of granule cell (GrC) – Golgi cell (GoC) connectivity; (2) spatial interaction between different spots of activation spontaneously leads to combinatorial responses, such as combined excitation and inhibition; (3) the entire granular layer generates coherent oscillations in response to random background input when two conditions are met: the mossy fiber (mf) input conveyed to GoCs and the mutual inhibition among GoCs are weak or absent; (4) the spatial distribution of long-term potentiation (LTP) and long-term depression (LTD) at the mf-GrC synapses can be faithfully reproduced in our network model. These results are in close agreement with observations from network-level in vitro experiments, such as voltage-sensitive dye (VSD) imaging and multi-electrode array (MEA) recordings [2, 3], and from in vivo LFP studies [4]. The model has also been validated against highly precise experiments conducted in cerebellar slices using two-photon imaging microscopy [5], which allowed us to test its precision down to the level of single-spike activity.
References
1. Hines ML, Davison AP, Muller E: NEURON and Python. Front Neuroinform 2009, 3.
2. Mapelli J, D’Angelo E: The spatial organization of long-term synaptic plasticity at the input stage of Cerebellum. J Neurosci 2007, 27: 1285–1296
3. Mapelli J, Gandolfi D, D’Angelo E: Combinatorial responses controlled by synaptic inhibition in the Cerebellum granular layer. J Neurophysiol 2010, 103: 250–261
4. Diwakar S, Lombardo P, Solinas S, Naldi G, D’Angelo E: Local field potential modeling predicts dense activation in cerebellar granule cells clusters under LTP and LTD control. PLoS One 2011, 6(7)
5. Gandolfi D, Pozzi P, Tognolina M, Chirico G, Mapelli J, D’Angelo E: The spatiotemporal organization of cerebellar network activity resolved by two-photon imaging of multiple single neurons. Front Cell Neurosci 2014, 8:92 doi: 10.3389/fncel.2014.00092
P42 A Biophysically Detailed Cerebellar Stellate Neuron Model Predicts Local Synaptic Interactions
Martina Francesca Rizza1,2, Stefano Masoli1, Egidio D’Angelo1,3
1Department of Brain and Behavioral Sciences, University of Pavia, Via Forlanini 6, I-27100, Pavia, Italy; 2Dipartimento di Informatica, Sistemistica e Comunicazione, Università degli Studi di Milano-Bicocca, Viale Sarca 336, I-20100, Milan, Italy; 3Brain Connectivity Center, Istituto Neurologico IRCCS C. Mondino, Via Mondino 2, Pavia, I-27100, Italy
Correspondence: Martina Francesca Rizza (martina.rizza@disco.unimib.it)
BMC Neuroscience 2017, 18(Suppl 1):P42
The cerebellar stellate cells (SCs) are located in the molecular layer and play a critical role in modulating the activity of Purkinje cells (PCs). Starting from a broad range of published experimental observations, we constructed a biophysically realistic SC model in Python-NEURON [1]. A human SC morphology (NeuroMorpho.org) comprised a highly branched dendritic tree, a soma, an axon initial segment (AIS), and an axon with collaterals [2]. The membrane mechanisms [3] were distributed according to the literature. Two distinct types of Na+ channels were used: Nav1.1 (without resurgent current) in the soma and Nav1.6 (with resurgent current) in the AIS/axon. The K+ channels were Kv3.4 and Kv4.3, mainly in the soma. The Ca2+ and Ca2+-dependent K+ channels (KCa1.1 and KCa2.2) were placed mainly in the dendrites. The model was endowed with an intracellular Ca2+ buffer that contributed to spike repolarization and firing-pattern regulation. In the SC model, the set of maximum ionic conductances (Gi-max) had to be tuned to match the firing pattern revealed by electrophysiological recordings. Gi-max tuning was performed by automatic parameter estimation, using both a swarm-intelligence algorithm (particle swarm optimization, PSO [4]) and a multi-objective evolutionary algorithm (MOEA [5], in BluePyOpt [6]). The optimized models showed spontaneous firing at an average frequency of 14 Hz with appropriate spike shape and amplitude. The SC model was validated by running simulations demonstrating the impact of gap junctions [7] in conjunction with glutamatergic synaptic inputs from parallel fibers (pfs) and the GABAergic synapses between SCs. In addition, we evaluated the impact of SC activity on PCs. The model thus provides a valuable tool for further investigating SC function in cerebellar network models.
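A generic particle swarm optimization loop, as named above, can be sketched as follows; here the cost function is a stand-in (distance to a hidden "true" conductance vector) rather than a NEURON simulation, and the coefficients are textbook defaults, not the study's settings.

```python
# Hedged sketch: PSO over a toy 4-dimensional conductance vector.
import numpy as np

rng = np.random.default_rng(2)
target = np.array([0.12, 0.03, 0.5, 0.08])        # hidden "true" G_max (toy)

def cost(g):                                      # stand-in for simulation error
    return float(np.sum((g - target)**2))

n_part, n_dim, iters = 20, 4, 200
x = rng.random((n_part, n_dim))
v = np.zeros_like(x)
pbest, pbest_c = x.copy(), np.array([cost(p) for p in x])
gbest = pbest[np.argmin(pbest_c)]

w, c1, c2 = 0.7, 1.5, 1.5                         # standard PSO coefficients
for _ in range(iters):
    r1, r2 = rng.random((2, n_part, n_dim))
    v = w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x)
    x = np.clip(x + v, 0.0, 1.0)                  # conductances stay non-negative
    c = np.array([cost(p) for p in x])
    better = c < pbest_c
    pbest[better], pbest_c[better] = x[better], c[better]
    gbest = pbest[np.argmin(pbest_c)]

print(gbest)                                      # should approach `target`
```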
References
1. Hines ML, Davison AP, Muller E: NEURON and Python. Front Neuroinform 2009, 3.
2. Jacobs B, Johnson NL, Wahl D, Schall M, Maseko BC, Lewandowski A, Manger PR: Comparative neuronal morphology of the cerebellar cortex in afrotherians, carnivores, cetartiodactyls, and primates. Frontiers in Neuroanatomy 2014, 8.
3. Masoli S, Solinas S, D’Angelo E: Action potential processing in a detailed Purkinje cell model reveals a critical role for axonal compartmentalization. Front. Cell. Neurosci. 2015; 9:1–22.
4. Kennedy J, Eberhart R: Particle Swarm Optimization. In Proc IEEE Int Conf Neural Networks 1995, volume 4, pages 1942–1948.
5. Druckmann S: A novel multiple objective optimization framework for constraining conductance-based neuron models by experimental data. Frontiers in Neuroscience 2007, 1(1): 7–18.
6. Van Geit W, Gevaert M, Chindemi G, Rössert C, Courcol J-D, Muller EB, Schürmann F, Segev I, Markram H: BluePyOpt: Leveraging Open Source Software and Cloud Infrastructure to Optimise Model Parameters in Neuroscience. Front Neuroinform 2016, 10:1–30.
7. Alcami P, Marty A: Estimating functional connectivity in an electrically coupled interneuron network. Proceedings of the National Academy of Sciences 2013, 110(49): E4798–E4807.
P43 Neuromodulation of Subgenual Cingulate Activity Localizable from EEG
Yinming Sun1,2, Willy Wong1,3, Faranak Farzan2, Daniel M. Blumberger2,4, Zafiris J. Daskalakis2,4
1Institute of Biomaterials and Biomedical Engineering, University of Toronto, Toronto, ON, M5S3G9, Canada; 2Centre for Addiction and Mental Health, Toronto, ON, M5T1R8, Canada; 3Department of Electrical and Computer Engineering, University of Toronto, Toronto, ON, M5S3G4, Canada; 4Department of Psychiatry, University of Toronto, Toronto, ON, M5S3G4, Canada
Correspondence: Yinming Sun (yinming.sun@gmail.com)
BMC Neuroscience 2017, 18(Suppl 1):P43
Subgenual cingulate (SGC) activity is implicated in the pathophysiology of major depressive disorder (MDD) [1]. Neuromodulation treatments for MDD may work by modifying connections between the SGC and other brain regions, one prominent example being the connection between the SGC and the dorsolateral prefrontal cortex (DLPFC). The present work explores SGC source activity in two studies: 1) verifying source localization with EEG recorded from patients receiving deep brain stimulation (DBS) in the SGC; 2) using source analysis to determine whether magnetic seizure therapy (MST) for MDD works by affecting the connection between the DLPFC and the SGC. In the first study, the accuracy of source localization was quantified by the error in locating the source of the DBS stimulus, which was extracted in sensor space from EEG recorded during active stimulation using matched filters, based on previously published methods [2]. Since the magnitude of the DBS stimulus is much larger than that of typical brain activity, a detection threshold was determined by examining source localization results for simulated data created by adding suppressed versions of the extracted DBS stimulus to data recorded with the stimulator turned off. This study is one of the first to empirically demonstrate the efficacy of EEG in detecting activity from a deep brain source. In the second study, source analysis was applied to EEG recorded during transcranial magnetic stimulation (TMS-EEG) before and after a course of MST treatment. MST uses a train of magnetic pulses delivered over the scalp to induce a seizure and has shown efficacy in reducing suicidal ideation [3]. For TMS-EEG, magnetic pulses were delivered to the DLPFC and the EEG was collected with a 64-channel system. A TMS-EEG measure called significant current scatter (SCS) [4] was calculated from the computed source image. SCS quantifies the spread of activation from a stimulus location to other brain regions and has been effective in capturing changes in brain network connections during task performance and loss of consciousness. A standard atlas was used to identify dipoles belonging to the SGC region. Results show that SCS values were significantly decreased after MST (Wilcoxon signed-rank test, Z = −2.16, p = 0.03). For patients with baseline suicidal ideation, higher baseline SCS values were correlated with greater SSI reductions (Spearman Rho = 0.625, p = 0.004). Using baseline SCS values, remission of suicidal ideation could also be predicted with 100% sensitivity and 70% specificity (AUC = 0.86, p = 0.01). Overall, this work provides both a methodological confirmation of the utility of EEG for studying SGC activity and a mechanistic explanation for MST's therapeutic benefit in MDD patients. MST may exert its therapeutic effects on suicidal ideation via transsynaptic modulation of the SGC from the prefrontal cortex, reconfiguring pathological connections in the process of treatment. With carefully designed experiments, future EEG studies with source analysis should yield additional mechanistic insights for neuroscience.
Acknowledgements
This work was supported by the Canadian Institutes of Health Research (CIHR), the Brain and Behaviour Research Foundation (formerly NARSAD) and the Temerty Family and Grant Family and through the Centre for Addiction and Mental Health (CAMH) Foundation and the Campbell Institute.
References
1. Mayberg HS: Limbic-cortical dysregulation: a proposed model of depression. J Neuropsychiatry Clin Neurosci 1997, 9(3):471–481.
2. Sun Y, Farzan F, Garcia Dominguez L, Barr MS, Giacobbe P, Lozano AM, Wong W, Daskalakis ZJ: A novel method for removal of deep brain stimulation artifact from electroencephalography. J Neurosci Methods 2014, 237C:33–40.
3. Sun Y, Farzan F, Mulsant BH, Rajji TK, Fitzgerald PB, Barr MS, Downar J, Wong W, Blumberger DM, Daskalakis ZJ: Indicators for Remission of Suicidal Ideation Following Magnetic Seizure Therapy in Patients With Treatment-Resistant Depression. JAMA Psychiatry 2016.
4. Casali AG, Casarotto S, Rosanova M, Mariotti M, Massimini M: General indices to characterize the electrical response of the cerebral cortex to TMS. Neuroimage 2010, 49(2):1459–1468.
P44 Phase dynamics in a GO/NOGO finger tapping task
Svitlana Popovych1,2, Shivakumar Viswanathan2,3, Nils Rosjat1,2, Christian Grefkes2,3, Gereon R. Fink2,3, Silvia Daun1,2
1Heisenberg Research Group of Computational Neuroscience - Modeling Neural Network Function, Department of Animal Physiology, Institute of Zoology, University of Cologne, Cologne, 50674, Germany; 2Cognitive Neuroscience, Institute of Neuroscience and Medicine (INM-3), Research Center Juelich, Juelich, 52425, Germany; 3Department of Neurology, University Clinic Cologne, 50937, Cologne, Germany
Correspondence: Silvia Daun (Silvia.Daun@uni-koeln.de)
BMC Neuroscience 2017, 18(Suppl 1):P44
Motor actions arise from a complex interplay between various brain regions. Since the same brain regions can form different functional networks depending on the action, identifying the neural signals that are the constituent components of a motor action is, in general, a demanding task.
In a previous study, we found that voluntary and visually triggered movements exhibit significant phase locking in the delta-theta frequency band (2–7 Hz), starting already before movement onset, in the motor regions contralateral to the moving hand, both in younger [1] and older subjects [2]. This phase locking therefore seems to be an electrophysiological marker of movement execution, regardless of how the movement was initiated. We suggested that this synchrony helps the simultaneously active pathways of the distinct cortical networks that initiate voluntary and stimulus-triggered movements to converge onto a common motor output and activate the appropriate muscles to perform the movement.
In these previous studies, since a prepared movement was always executed, it is unclear whether the reported pre-movement phase locking in the low frequency bands is a necessary prerequisite for movement execution, or rather a correlate of movement preparation. To distinguish between these alternatives, we recorded EEG from young (18–35 years) right-handed healthy subjects as they performed a simple GO/NOGO finger tapping task where a prepared action was either executed (GO trials, 75%) or cancelled (NOGO, 25%).
Analysis of these data revealed an increase in phase synchronization in the low frequency bands in both the GO and the NOGO condition, hinting at a potential role of delta-theta phase synchronization in movement preparation.
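The phase-locking analysis described above can be sketched as follows: band-pass the epochs in the 2–7 Hz band, extract instantaneous phase with the Hilbert transform, and compute inter-trial phase locking per time point. The filter order, sampling rate, and synthetic data are illustrative, not the study's pipeline.

```python
# Hedged sketch: inter-trial phase-locking value (PLV) in the delta-theta band.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 250.0                                            # Hz (toy sampling rate)
b, a = butter(3, [2/(fs/2), 7/(fs/2)], btype='band')  # 2-7 Hz band-pass

def phase_locking(trials):
    """trials: (n_trials, n_samples) single-channel EEG epochs."""
    filtered = filtfilt(b, a, trials, axis=1)
    phases = np.angle(hilbert(filtered, axis=1))
    return np.abs(np.mean(np.exp(1j*phases), axis=0))  # PLV per sample

rng = np.random.default_rng(3)
t = np.arange(-1.0, 1.0, 1/fs)
trials = rng.standard_normal((40, t.size))
trials[:, t > 0] += np.sin(2*np.pi*4.0*t[t > 0])      # phase-locked 4 Hz component
plv = phase_locking(trials)
print("pre: %.2f, post: %.2f" % (plv[t <= 0].mean(), plv[t > 0].mean()))
```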
Acknowledgements
This work was supported by the DFG grants to S. Daun (GR3690/2-1, GR3690/4-1 and UoC EG CONNECT).
References
1. Popovych S, et al.: Phase-locking in the delta-theta band is an EEG marker of movement execution. Neuroimage 2016, 139:439–449.
2. Liu L et al.: Movement-related intra-regional phase locking in the delta-theta frequency band in young and elderly subjects. Society for Neuroscience 2016, Annual meeting.
P45 Mechanisms of focal seizure generation in a realistic small-network model with ionic dynamics
Damiano Gentiletti1, Piotr Suffczynski1, Vadym Gnatkovski2, Marco De Curtis2
1Department of Experimental Physics, University of Warsaw, Warsaw, 02-093, Poland; 2Istituto Neurologico Carlo Besta, Milan, 20133, Italy
Correspondence: Damiano Gentiletti (Damiano.Gentiletti@fuw.edu.pl)
BMC Neuroscience 2017, 18(Suppl 1):P45
Epilepsy and seizures are traditionally associated with an imbalance between excitatory and inhibitory forces in the brain. This classic view is challenged by the in vitro isolated guinea pig brain model of focal seizures [1]. Experimental data recorded from the entorhinal cortex (EC) indicate that inhibitory neurons are active at the very beginning of a focal seizure, whereas excitatory cells are quiescent. This is accompanied by an increase in the extracellular potassium concentration. Within a few seconds of seizure onset, the principal cells display excessive firing associated with the seizure discharge. Firing of principal neurons subsequently decreases and evolves into rhythmic bursting activity that terminates the seizure.
In order to gain more understanding of the link between ionic dynamics and neuronal activity during seizures we developed a computational model of the entorhinal cortex circuit. The model consists of a small neuronal network made up of five hippocampal cells – an inhibitory interneuron and four pyramidal cells – each one surrounded by an extracellular space. Each extracellular environment incorporates realistic dynamics of Na+, K+, Cl− and Ca2+ ions, the glial buffering system and diffusion mechanisms. Different extracellular spaces communicate with each other by diffusive exchange of K+ ions.
Simulations performed with our in silico model show that ion concentration changes have a significant impact on network behaviour and determine the different phases of a focal seizure. In particular, the model reproduces the membrane potential and potassium concentration traces recorded experimentally, as well as the pathological sequence taking place in the pyramidal cells: quiescent period – seizure onset – excessive pyramidal firing – late bursting phase. Our simulations confirm the experimentally driven hypothesis that strong discharge of inhibitory interneurons may result in long-lasting accumulation of extracellular K+, which in turn is responsible for seizure progression in principal cells.
Our study also shows that a reduced model with fixed ionic concentrations is not able to reproduce the seizure patterns observed experimentally, pointing to the importance of the role played by non-synaptic mechanisms in modeling focal epileptic activity.
Additionally, we exploited the model to suggest and test novel antiepileptic therapies. A potentially viable strategy relies on a nanoparticle system designed to buffer the excess extracellular potassium ions. Simulations incorporating such an additional mechanism show the feasibility of seizure control by artificial pharmacological agents, suggesting future avenues for controlling ictogenesis.
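The non-synaptic mechanism highlighted above can be caricatured with a single ODE for extracellular potassium: K+ is released by interneuron firing, cleared by glial buffering and diffusion, and shifts the K+ reversal potential of nearby principal cells via the Nernst equation. Rates and concentrations below are illustrative, not the model's.

```python
# Hedged sketch: extracellular K+ accumulation and its effect on E_K.
import numpy as np

dt, T = 1.0, 60000                       # ms
k_out, k_in, k_base = 3.0, 130.0, 3.0    # mM, toy values
release, buffering, diffusion = 0.0008, 0.0002, 0.0001   # per-ms rate constants

trace = []
for step in range(T):
    firing = 1.0 if step < 20000 else 0.0          # 20 s of interneuron discharge
    dk = release*firing - (buffering + diffusion)*(k_out - k_base)
    k_out += dt * dk
    trace.append(k_out)

e_k = 26.64 * np.log(np.array(trace) / k_in)       # Nernst potential (mV, ~37 C)
print("peak [K+]o: %.2f mM, peak E_K: %.1f mV" % (max(trace), e_k.max()))
```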
References
1. De Curtis M, Gnatkovsky V: Reevaluating the mechanisms of focal ictogenesis: the role of low-voltage fast activity. Epilepsia 2009, 50(12):2514–2525.
P46 Pre-allocation of working memory modulates memory performance
Hyeonsu Lee1, Woochul Choi1,2, Se-Bum Paik1,2
1Department of Bio and Brain Engineering, Korea Advanced Institute of Science and Technology, Daejeon 34141, Republic of Korea; 2Program of Brain and Cognitive Engineering, Korea Advanced Institute of Science and Technology, Daejeon 34141, Republic of Korea
Correspondence: Hyeonsu Lee (hslee9305@kaist.ac.kr)
BMC Neuroscience 2017, 18(Suppl 1):P46
Working memory capacity is known to be limited to a small number of items. Two competing memory models – the slot model and the resource model – have been proposed to describe this limited capacity. While the slot model hypothesizes that a fixed number of discrete slots store the information for each item, the resource model proposes that a continuous working memory resource can be allocated to each item, with memory precision increasing as more resources are allocated to an item [1]. It has recently been shown that the resource model successfully describes the observation that memory precision for each item decreases smoothly as the total number of items increases [2]. However, it remains unclear how the resources are distributed and whether allocating resources before encoding (pre-allocation) actually affects memory performance. In this study, we suggest that memory pre-allocation takes place and modulates memory performance. To examine the effect of working memory pre-allocation on performance, we carried out a human psychophysics experiment in which subjects memorized patterns of visual stimuli presented sequentially. The subjects were either informed of the total number of items before presentation or, in the control condition, not informed. In the pre-allocated condition, there were two schemes: the number given as a cue was either the same as the actual number of items presented (matched cue) or less than the number of items (non-matched cue). The results showed that performance in the pre-allocated condition was higher than in the control condition, in which the number of items was not announced. This suggests that working memory resources may be pre-allocated based on the cue and that the allocated resources improve memory performance. In addition, memory precision was lower in the non-matched-cue condition than in the matched-cue condition. Whereas working memory resources were pre-allocated by the cue in both conditions, an unexpected additional item was presented in the non-matched-cue condition, so the pre-allocated resources were insufficient to store its information. Our results imply that pre-allocation may distribute working memory resources efficiently. We propose that working memory resources can be pre-allocated prior to encoding the items, and that allocating resources may modulate memory performance.
References
1. Ma WJ, Husain M, Bays PM: Changing concepts of working memory. Nat Neurosci 2014, 17: 347–356.
2. Gorgoraptis N, Catalao RFG, Bays PM, Husain M: Dynamic updating of working memory resources for visual objects. J. Neurosci 2011, 31:8502–8511.
P47 Temporal dynamics of bistable perception reveals individual time window for perceptual decision making
Woochul Choi1,2, Se-Bum Paik1,2
1Program of Brain and Cognitive Engineering, Korea Advanced Institute of Science and Technology, Daejeon 34141, Republic of Korea; 2Department of Bio and Brain Engineering, Korea Advanced Institute of Science and Technology, Daejeon 34141, Republic of Korea
Correspondence: Woochul Choi (choiwc1128@kaist.ac.kr)
BMC Neuroscience 2017, 18(Suppl 1):P47
When a sensory stimulus can be interpreted in two alternative ways, the perception of the stimulus often changes spontaneously and quasi-periodically between the two. This phenomenon, called bistable perception, may provide rich information about how the brain dynamically interprets the sensory stimulus. One of the most interesting characteristics of bistable perception is that the switching frequencies are fairly consistent within an individual but vary across individuals. However, it is still unclear what drives the periodic alternation and which parameter determines the individual switching frequency. To explain the origin of diverse perceptual alternation, we assumed that bistable perception results from the integration of sensory information, and that each individual has a specific time window for sensory information integration that might determine their own switching frequency.
To examine the hypothesis, we used randomly moving dots in an annulus (“racetrack” stimuli) [1]. During the stimulation, we controlled the portion of the rotating dots with a coherence parameter, c. When c = 0, all the dots would move in random directions, inducing illusory motion, and the participant would experience bistable perception. In contrast, when c > 0, the racetrack would generate noisy rotational motion, and the participant’s response would follow the actual motion (Figure 1B). To find the relationship between the switching frequency of bistable perception and the response dynamics during motion detection tasks, we examined individual phase duration, τ, of bistable perception and response time, and the accuracy in detection of ambiguous motion. Our result showed that the response time of motion detection was positively correlated with the τ of bistable perception (N = 49, R = 0.52, p < 0.001), while the accuracy was independent of τ (Figure 1C). Next, to investigate whether the time window for information integration is a crucial factor in determining the τ of bistable perception, we modified the racetrack to have time-varying motion with different frequencies. Our result shows that individuals with short τ has smaller integration time than individuals with long τ (Figure 1D). In addition, the simulation study shows that diverse time windows of stimulus integration can regenerate various τ of bistable perception. This result supports the idea that each individual has an intrinsic time window for information accumulation, and that the duration of the window may determine the τ of bistable perception.
Figure 1. Temporal dynamics of bistable perception and behavioural characteristics of perceptual decision making. A. Racetrack stimulus; the coherence level c determines the portion of rotating dots. B. When c = 0, perception is illusory bistable motion (top); when c > 0, perception follows the actual motion (bottom). C. Correlation between subjects' phase duration and response time (top) and motion detection accuracy (bottom). D. Time-varying coherence stimulus (top) and reverse correlation analysis of perceptual switching (bottom). Individuals with long τ have a longer information integration time than individuals with short τ
Conclusions: A series of psychophysics experiments shows that the phase duration of bistable perception is positively correlated with the sensory information accumulation time. This suggests that bistable perception may result from continuous decision making related to the accumulation of sensory information.
Reference
1. Jain S: Performance characterization of Watson Ahumada motion detector using random dot rotary motion stimuli. PLoS One 2009, 4: e4536.
P48 Regularly structured retinal mosaics can induce structural correlation between orientation and spatial frequency maps in V1
Jaeson Jang1, Se-Bum Paik1,2
1Department of Bio and Brain Engineering, Korea Advanced Institute of Science and Technology, Daejeon 34141, Republic of Korea; 2Program of Brain and Cognitive Engineering, Korea Advanced Institute of Science and Technology, Daejeon 34141, Republic of Korea
Correspondence: Jaeson Jang (jaesonjang@kaist.ac.kr)
BMC Neuroscience 2017, 18(Suppl 1):P48
In higher mammals, the primary visual cortex (V1) is organized into functional maps that capture specific features of the visual stimulus, such as orientation or spatial frequency. In each functional map, the preferred feature changes continuously in a quasi-periodic manner. Moreover, it has been reported that the topographies of functional maps on the same cortical surface are correlated; for example, the contours of iso-orientation domains orthogonally intersect the contours of iso-frequency domains [1]. This implies a systematic organization of the functional maps but leaves unclear how such correlated topography develops in V1. In this study, using computer simulations, we show that orientation and spatial frequency maps are both seeded from regularly structured retinal mosaics and that this common source can induce the observed correlated organization of the functional maps. A previous theoretical model proposed that the superposition of the hexagonal mosaics of ON and OFF retinal ganglion cells (RGCs) generates a moiré interference pattern (Figure 1A) [2]. The key assumption of the model was that the orientation preference of a V1 neuron can be predicted from the relative locations of ON and OFF RGCs, which is supported by recent observations that the structure of cortical functional maps is strongly correlated with the local organization of ON and OFF afferents [3, 4]. The model thus suggested that the repetition of similar orientation preferences across the interference pattern can seed a quasi-periodic orientation map (Figure 1B). Here, we propose that the frequency preference of a cortical neuron depends on the distance between local ON and OFF RGCs, which is also repeated across the interference pattern. Our simulation reproduced a quasi-periodic orientation map and a frequency map, both seeded from a common set of hexagonal ON and OFF RGC mosaics (Figure 1B,C). We found that the preferred orientation and frequency change relative to each other in orthogonal directions [1], because the distance between ON and OFF RGCs changes in the direction orthogonal to the change of orientation preference across the moiré interference. Our simulation also reproduced the observed relationship in which a pinwheel in the orientation map overlaps high- or low-frequency domains in the frequency map [5]. Additionally, we found a hexagonal structure in the observed frequency map, as our model predicted. Our results explain how the topographic correlation between cortical functional maps can develop from the identical source of retinal mosaics. This may provide a blueprint for how the visual system develops correlated functional maps from a simple organizational principle in the retina.
Figure 1. Regularly structured RGC mosaics can seed topographic correlation between cortical functional maps. A. Moiré interference pattern of ON- and OFF-center RGCs. B. Simulated orientation map. C. Simulated spatial frequency map; the two features change in orthogonal directions
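To make the seeding mechanism concrete, the following minimal Python sketch (our own illustration; the lattice spacing, the 5-degree relative rotation of the OFF mosaic, and the mapping from ON-OFF distance to preferred spatial frequency are illustrative assumptions, not the parameters of the study) generates two hexagonal RGC mosaics and reads out a locally seeded orientation and frequency preference from each ON-OFF pair:

# Minimal sketch (not the authors' code): two hexagonal RGC mosaics with a
# small relative rotation produce a moire interference pattern; the local
# ON-OFF displacement angle seeds an orientation preference and the ON-OFF
# distance seeds a spatial-frequency preference.
import numpy as np

def hex_lattice(n, spacing, angle):
    """Hexagonal lattice of n x n points, rotated by `angle` radians."""
    i, j = np.meshgrid(np.arange(n), np.arange(n))
    x = spacing * (i + 0.5 * (j % 2))          # offset every other row
    y = spacing * (np.sqrt(3) / 2) * j
    pts = np.column_stack([x.ravel(), y.ravel()])
    c, s = np.cos(angle), np.sin(angle)
    return pts @ np.array([[c, -s], [s, c]]).T

on_rgc = hex_lattice(30, spacing=1.0, angle=0.0)             # ON mosaic
off_rgc = hex_lattice(30, spacing=1.0, angle=np.deg2rad(5))  # OFF mosaic, slightly rotated

# For each ON cell, find the nearest OFF cell; the displacement vector
# defines the locally seeded orientation and spatial-frequency preference.
d = on_rgc[:, None, :] - off_rgc[None, :, :]
dist = np.linalg.norm(d, axis=2)
nearest = dist.argmin(axis=1)
disp = on_rgc - off_rgc[nearest]

orientation = np.mod(np.arctan2(disp[:, 1], disp[:, 0]), np.pi)   # preferred angle in [0, pi)
frequency = 1.0 / (2.0 * dist[np.arange(len(on_rgc)), nearest] + 1e-9)  # assumed distance-to-frequency map

Because the displacement angle and the displacement length vary along orthogonal directions of the moiré pattern, the two seeded maps inherit the orthogonal relationship described above.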
References
1. Nauhaus I, Nielsen KJ, Disney AA, Callaway EM: Orthogonal micro-organization of orientation and spatial frequency in primate primary visual cortex. Nat Neurosci 2012, 15:1683–1690.
2. Paik S-B, Ringach DL: Retinal origin of orientation maps in visual cortex. Nat Neurosci 2011, 14:919–925.
3. Kremkow J, Jin J, Wang Y, Alonso JM: Principles underlying sensory map topography in primary visual cortex. Nature 2016, 533:52–57.
4. Lee K-S, Huang X, Fitzpatrick D: Topology of ON and OFF inputs in visual cortex enables an invariant columnar architecture. Nature 2016, 533:90–94.
5. Hübener M, Shoham D, Grinvald A, Bonhoeffer T: Spatial relationships among three columnar systems in cat area 17. J Neurosci 1997, 17:9270–9284.
P49 Distinct role of synaptic and nonsynaptic plasticity in memory ensemble formation, allocation, and linkage
Youngjin Park1, Se-Bum Paik1,2
1Department of Bio and Brain Engineering, Korea Advanced Institute of Science and Technology, Daejeon 34141, Republic of Korea; 2Program of Brain and Cognitive Engineering, Korea Advanced Institute of Science and Technology, Daejeon 34141, Republic of Korea
Correspondence: Youngjin Park (yodamaster@kaist.ac.kr)
BMC Neuroscience 2017, 18(Suppl 1):P49
Synaptic plasticity—the change of synaptic strength between pre- and postsynaptic neurons—is widely believed to be the basis of learning and memory. Yet, there is another type of plasticity observed in the brain: nonsynaptic plasticity [1]. Nonsynaptic plasticity, often referred to as intrinsic plasticity, changes the excitability of a neuron by modulating its intrinsic properties. A growing number of studies report that changes in nonsynaptic properties, such as the action potential threshold or the afterhyperpolarization level, are triggered by learning [1]. This indicates that nonsynaptic plasticity may play crucial roles in memory formation, but its functional mechanisms remain elusive. Here we hypothesize distinct roles for synaptic and nonsynaptic plasticity in learning and memory: activity-dependent synaptic plasticity is involved in memory ensemble formation, whereas nonsynaptic plasticity is involved in the pre-allocation and linkage of memory. To validate these ideas, we constructed a spiking neural network model consisting of 100 excitatory and 30 inhibitory leaky integrate-and-fire neurons (Figure 1A). Output layer neurons received temporal patterns from input layer neurons and lateral inhibition from nearby inhibitory neurons. As the synaptic learning rule, spike-timing-dependent plasticity was applied to the sparse feedforward connections between the input and output layers. Our simulation results show that the network model learned temporal patterns through repeated exposure and formed a neuronal ensemble, a set of output neurons that selectively responded to a trained pattern (Figure 1B). Remarkably, by applying nonsynaptic plasticity to the network, we could control the pre-allocation of memory. Neurons with higher excitability had a greater chance of being recruited into a memory ensemble than those in a control group, as reported experimentally (Figure 1C) [2]. Moreover, the total size of the memory ensemble remained consistent throughout the simulation, due to the inhibitory feedback. We performed further simulations to investigate the advantages of nonsynaptic modulation. First, it regulates the learning rate of neurons; we observed that neurons with greater excitability learned faster than normal neurons. Second, temporal changes of excitability via nonsynaptic plasticity modulate the linkage of multiple memories. Last, pre-allocation of neurons boosts the memory lifetime and capacity of the network.
Overall, our model shows that synaptic plasticity is required for information storage through ensemble formation, whereas nonsynaptic plasticity modulates neuronal allocation of memory.
Figure 1. A. (Left) Temporal input pattern. (Middle) Spiking neural network model with lateral inhibition. (Right) Spike-timing-dependent plasticity. B. The network learned temporal patterns and formed a neuronal ensemble. C. Role of nonsynaptic plasticity: neurons excited just before learning have a higher probability of being recruited into an engram
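The following toy sketch illustrates the model class described above (our simplification, not the authors' code: the lateral inhibition loop is omitted and replaced by a hard weight bound, and the time constants, rates, and the 10% high-excitability fraction are placeholder values). It combines leaky integrate-and-fire output neurons, pair-based STDP on feedforward weights, and a nonsynaptic excitability bias implemented as a lowered spike threshold:

# Toy sketch (assumptions, not the authors' implementation): LIF output
# neurons with pair-based STDP on feedforward weights; a per-neuron
# excitability bias (nonsynaptic plasticity) lowers the spike threshold
# and so biases which neurons join the ensemble.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out, dt, T = 50, 100, 1e-3, 2.0
w = rng.uniform(0, 0.4, (n_out, n_in))           # feedforward weights
v = np.zeros(n_out)
thresh = 1.0 - 0.2 * (rng.random(n_out) < 0.1)   # 10% of neurons made more excitable
tau_m, tau_pre, tau_post = 20e-3, 20e-3, 20e-3
a_plus, a_minus = 0.01, 0.012                    # slightly depression-dominated STDP
x_pre, x_post = np.zeros(n_in), np.zeros(n_out)

pattern = rng.random(n_in) < 0.1                 # input cells active in the trained pattern
for step in range(int(T / dt)):
    pre_spk = pattern & (rng.random(n_in) < 0.05)    # noisy repeats of the pattern
    v += dt / tau_m * (-v) + w @ pre_spk
    post_spk = v > thresh
    v[post_spk] = 0.0
    # exponential traces for pair-based STDP
    x_pre += -dt / tau_pre * x_pre + pre_spk
    x_post += -dt / tau_post * x_post + post_spk
    w += a_plus * np.outer(post_spk, x_pre)      # pre-before-post: potentiate
    w -= a_minus * np.outer(x_post, pre_spk)     # post-before-pre: depress
    np.clip(w, 0.0, 1.0, out=w)                  # hard bound standing in for inhibition

# ensemble = output neurons whose weights selectively strengthened onto the pattern
ensemble = w[:, pattern].mean(axis=1) > w[:, ~pattern].mean(axis=1) + 0.1

In a sketch like this, the more excitable neurons (lower threshold) spike earlier during training and are therefore preferentially recruited, mirroring the pre-allocation effect described above.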
References
1. Mozzachiodi R, Byrne JH: More than synaptic plasticity: role of nonsynaptic plasticity in learning and memory. Trends Neurosci 2010, 33:17–26.
2. Yiu AP, Mercaldo V, Yan C, Richards B, Rashid AJ, Hsiang HLL, et al.: Neurons are recruited to a memory trace based on relative neuronal excitability immediately before training. Neuron 2014, 83:722–735.
P50 Frequency- and location-dependence of auditory influence on human visual perception
Jun Ho Song1, Se-Bum Paik2,3
1Information and Electronics Research Institute, Korea Advanced Institute of Science and Technology, Daejeon 34141, Republic of Korea; 2Department of Bio and Brain Engineering, Korea Advanced Institute of Science and Technology, Daejeon 34141, Republic of Korea; 3Program of Brain and Cognitive Engineering, Korea Advanced Institute of Science and Technology, Daejeon 34141, Republic of Korea
Correspondence: Jun Ho Song (jhs11@kaist.ac.kr)
BMC Neuroscience 2017, 18(Suppl 1):P50
The topography of primary sensory cortices fades in higher cortical regions, but a recent fMRI study reported that the activity induced by different stimuli of one sensory modality is discernible in non-pertinent sensory systems [1]. However, whether the influence of one sensory system on another is systematically organized in humans remains an open question. In the present study, we hypothesized that the frequency map of human auditory perception projects systematically onto the spatial map of visual perception. To test this hypothesis, we conducted a set of psychophysical experiments: subjects were asked to perform orientation discrimination tasks with one eye closed. Visual stimuli were presented for 67 ms at various locations on the half of the monitor screen ipsilateral to the open eye. Half a second before a visual stimulus appeared, either no acoustic stimulus or a sound with a frequency ranging from 200 Hz to 8 kHz was presented to the ear ipsilateral to the open eye. We found that sounds within a certain frequency bandwidth significantly changed subjects’ performance compared to the no-sound condition, whereas those outside the bandwidth did not. These effects were not homogeneous across visual space: visual perception at different locations was affected by location-specific sound frequencies, and these frequencies changed gradually across the visual space. Our results show that auditory influence on visual perception can be modelled as a function of sound frequency and location in visual space. Because both the auditory and visual stimuli that we introduced begin to be processed at very early stages of the sensory systems—i.e. frequency discrimination in the primary auditory cortex and orientation discrimination in the primary visual cortex—our observations imply that the tonotopic organization of the auditory cortex may be matched to the retinotopic organization of the visual cortex.
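A toy functional form for the reported effect (entirely our assumption; the abstract does not specify a fitted model) is a Gaussian in log-frequency whose preferred frequency drifts with visual eccentricity, so that each visual location has its own effective sound-frequency band:

# Hypothetical illustration, not the authors' fit: auditory influence on
# visual performance modelled as a Gaussian in log-frequency whose centre
# drifts with visual eccentricity.
import numpy as np

def sound_effect(freq_hz, ecc_deg, gain=0.1, bw_oct=1.0):
    """Change in discrimination performance for a tone `freq_hz` presented
    before a visual target at eccentricity `ecc_deg` (hypothetical mapping;
    gain, bandwidth, and the drift slope are placeholders)."""
    pref_log_f = np.log2(200.0) + 0.3 * ecc_deg   # preferred frequency drifts with location
    return gain * np.exp(-((np.log2(freq_hz) - pref_log_f) ** 2) / (2 * bw_oct ** 2))

# e.g. under these made-up constants a 1 kHz tone helps most at
# (log2(1000) - log2(200)) / 0.3, roughly 7.7 degrees of eccentricity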
Reference
1. Liang M, Mouraux A, Hu L, Iannetti GD: Primary sensory cortices contain distinguishable spatial patterns of activity for each sense. Nat Commun 2013, 4:1979.
P51 Developmental model for ocular dominance column seeded from retinal mosaics
Min Song1,2, Se-Bum Paik1,2
1Department of Bio and Brain Engineering, KAIST, Daejeon 34141, Republic of Korea; 2Program of Brain and Cognitive Engineering, KAIST, Daejeon 34141, Republic of Korea
Correspondence: Min Song (night@kaist.ac.kr)
BMC Neuroscience 2017, 18(Suppl 1):P51
It has been reported that an ocular dominance column and an orientation map in the primary visual cortex (V1) have a close relationship in which the ocular dominance peaks are located at the pinwheel centers in the orientation map [1]. One theoretical study suggested that the quasi-periodic hexagonal structure of the orientation map can be seeded by the moiré interference pattern between ON and OFF retinal ganglion cell (RGC) mosaics [2], but the origin of ocular dominance column structure has not been explained. Because similar hexagonal patterns were also observed in an ocular dominance column in our preliminary analysis of the experimental data, and these spatial structures are also thought to be formed before eye-opening [3], we hypothesized that the ocular dominance column, along with the orientation map, is seeded from the retinal mosaics. In this study, using computer simulation, we show that the hexagonal structure of the ocular dominance column can be developed by the moiré interference pattern of the RGC density. We designed a model in which a V1 layer is statistically wired with two different RGC layers. In the development period, the initial orientation map is first developed by contralateral wiring in the visual pathway; then, the ipsilateral wiring matches the initial orientation map during the critical period of development [4]. Because of this, we assumed that V1 cells were initially only connected with contralateral RGCs within a local convergence range and with ipsilateral RGCs within a wide convergence range (Figure 1A). By presenting drifting gratings to either the contralateral or ipsilateral RGC layers, we simulated and plotted the response of V1 (Figure 1B, C). Then, the ocular dominance map was calculated as a relative ratio between the contralateral and ipsilateral response maps of V1 (Figure 1D). We observed that the hexagonal pattern in the ocular dominance map matches the moiré interference pattern of RGC mosaics (Figure 1E). We compared this ocular dominance map with an orientation map seeded by a contralateral RGC mosaic. The results show that the ocular dominance peaks are located at pinwheel centers of the orientation map, as reported by previous experimental studies (Figure 1F). Our model shows that the initial ocular dominance map can be seeded from the periodicity of the contralateral RGC mosaic. Furthermore, we expect that the initial ocular dominance column can be sharpened during development.
Figure 1. Simulation of ocular dominance column development. A. Schematics of RGC-cortex model. B. Response map of contralateral input C. Response map of ipsilateral input. D. Moiré interference periodicity of contralateral RGC. Black box indicates the net computed area of response maps to exclude boundary effects. E. Ocular dominance map. F. Relationship between ocular dominance map and orientation map seeded from contralateral RGC mosaic
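For reference, the “relative ratio” used for Figure 1D-E can be written as a standard ocular dominance index; the normalized-difference form below is our assumption of the exact expression used:

# Minimal sketch (assumed form, not necessarily the authors' exact formula)
# of an ocular dominance index from the two simulated response maps.
import numpy as np

def ocular_dominance(r_contra, r_ipsi, eps=1e-9):
    """OD in [-1, 1]: +1 purely contralateral, -1 purely ipsilateral.
    r_contra, r_ipsi: V1 response maps to contra/ipsi drifting gratings."""
    return (r_contra - r_ipsi) / (r_contra + r_ipsi + eps)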
References
1. Crair MC, Ruthazer ES, Gillespie DC, Stryker MP: Ocular dominance peaks at pinwheel center singularities of the orientation map in cat visual cortex. Journal of Neurophysiology 1997, 77(6):3381–3385.
2. Paik SB, Ringach DL: Retinal origin of orientation maps in visual cortex. Nature Neuroscience 2011, 14(7):919–925.
3. Crowley JC, Katz LC: Early development of ocular dominance columns. Science 2000, 290(5495):1321–1324.
4. Crair MC, Gillespie DC, Stryker MP: The role of visual experience in the development of columns in cat visual cortex. Science 1998, 279(5350):566–570.
P52 Reliability of effective connectivity from fMRI resting-state data: discrimination between individuals
Vicente Pallarés1, Matthieu Gilson1, Simone Kühn2, Andrea Insabato1, Gustavo Deco1
1Department of Information and Communication Technologies, Universitat Pompeu Fabra, Barcelona, Spain; 2Max Planck Institute for Human Development, Berlin, Germany
Correspondence: Vicente Pallarés (vicente.pallares@upf.edu)
BMC Neuroscience 2017, 18(Suppl 1):P52
Neuroimaging studies have traditionally analyzed data at the group level, without considering individual characteristics. However, recent studies have stressed the relevance of subject-specific analysis. In particular, efforts have been made to assess the variability and reliability of brain connectivity based on fMRI data to characterize individuals [1]. Brain connectivity is typically calculated as the statistical dependence between the activity of brain regions, for example using the Pearson correlation, giving matrices of functional connectivity (FC). To understand the causal interactions between regions that generate the observed FC patterns, the concept of effective connectivity (EC) has been developed [2,3]. EC reflects many biophysical mechanisms, such as neurotransmitter release and excitability, and captures the spatiotemporal information of fMRI signals.
In this work, we use fMRI resting-state data acquired from 6 subjects who underwent scanning for 50 sessions over 6 months, as well as data from 50 subjects who were scanned once. We calculate the whole-brain FC using a parcellation of 116 anatomical regions and estimate the EC of a dynamic model that reproduces the measured FC [3]. This unique dataset allows us to evaluate the variability and reliability of the EC, taken as a fingerprint of fMRI activity, and to compare it with the FC [4]. In practice, we train a linear classifier on the EC or FC from 1–6 sessions per subject and predict the subject identity of the remaining sessions. We achieve a very high identification accuracy (>90%) after training with 3 or 4 sessions of 5 min each. The better performance of EC than FC in discriminating between individuals demonstrates the importance of the temporal information in fMRI signals that is captured by our model-based approach (Figure 1).
Beyond the theoretical understanding of brain dynamics, our results are a first step toward the clinical applicability of the EC model. Our long-term goal is to provide mechanistic explanations for neuropsychiatric disorders, allowing for the follow-up of subject-specific drug treatments or therapies based on EC measures obtained from non-invasive fMRI.
Figure 1. Accuracy of the classification for test sessions after training the classifier with 4 resting-state sessions. Results are shown for EC and two versions of FC: correlation (corrFC) and no-shift covariances (FC0)
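A sketch of the identification analysis (assuming scikit-learn; the exact classifier, regularization, and features used in the study may differ) in which vectorized EC or FC matrices from a few training sessions predict subject identity on held-out sessions:

# Sketch of session-wise subject identification (our stand-in pipeline,
# not the authors' exact one): flatten connectivity matrices, train a
# linear classifier, and score held-out sessions.
import numpy as np
from sklearn.linear_model import LogisticRegression

def identify(conn, labels, train_mask):
    """conn: (n_sessions, n_roi, n_roi) EC or FC matrices;
    labels: numpy array of subject ids, one per session;
    train_mask: boolean array marking sessions used for training."""
    X = conn.reshape(len(conn), -1)              # vectorize each connectivity matrix
    clf = LogisticRegression(max_iter=1000)      # linear multi-class classifier
    clf.fit(X[train_mask], labels[train_mask])
    pred = clf.predict(X[~train_mask])
    return (pred == labels[~train_mask]).mean()  # identification accuracy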
References
1. Shehzad Z, Kelly AM, Reiss PT, Gee DG, Gotimer K, Uddin LQ, Lee SH, Margulies DS, Roy AK et al.: The resting brain: unconstrained yet reliable. Cereb Cortex. 2009, 19(10):2209–2229.
2. Friston KJ: Functional and effective connectivity: a review. Brain Connect. 2011, 1(1):13–36.
3. Gilson M, Moreno-Bote R, Ponce-Alvarez A, Ritter P, Deco G: Estimation of Directed Effective Connectivity from fMRI Functional Connectivity Hints at Asymmetries of Cortical Connectome. PLoS Comput Biol. 2016, 12(3):e1004762.
4. Finn ES, Shen X, Scheinost D, Rosenberg MD, Huang J, Chun MM, Papademetris X, Constable RT: Functional connectome fingerprinting: identifying individuals using patterns of brain connectivity. Nat Neurosci. 2015, 18(11):1664–1671.
P53 Temporal dynamics of resting state networks on a whole-brain level
Katharina Glomb1, Adrián Ponce-Alvarez1, Matthieu Gilson1, Petra Ritter2, Gustavo Deco1,3
1Center for Brain and Cognition, Department of Technology and Information, Universitat Pompeu Fabra, Carrer Ramon Trias Fargas, 25-27, 08005 Barcelona, Spain; 2Department of Neurology, Charité - University Medicine, Charitéplatz 1, 10117 Berlin, Germany; 3Institució Catalana de la Recerca i Estudis Avançats, Universitat Barcelona, Passeig Lluís Companys 23, 08010 Barcelona, Spain
Correspondence: Katharina Glomb (katharina.glomb@upf.edu)
BMC Neuroscience 2017, 18(Suppl 1):P53
fMRI BOLD signals recorded during resting state (RS) can be used to study the large-scale functional organization of the human brain [1]. In this way, robust patterns of functional connectivity (FC), termed resting state networks (RSNs), have been shown to exist [2]. However, FC is not constant over time, and the properties and significance of its modulations are not yet understood and characterized, despite substantial interest in the topic in recent years [3]. While it seems clear that these modulations are relevant to behavior and are at least to some extent related to underlying neural activity, there is ongoing debate as to whether they reflect nonstationarities (e.g., state switching) or not [4]. We analyzed fMRI RS data (22 min, TR = 2 s) recorded from 24 healthy controls, studying the FC of 66 ROIs covering the entire cortex. With this whole-brain approach, we characterized dynamic FC (dFC) on a global level via a simple sliding-window technique. We extracted RSNs and their time courses with a dimensionality reduction technique known as tensor decomposition, which, unlike ICA, does not assume independence [5]. We examined global dynamic modulations in the underlying BOLD signal to shed light on the mechanisms behind the RSN dynamics apparent in the time courses extracted from dFC-based tensors. We show that the substantial modulations in the activity of RSNs are to a large extent explained by modulations in the underlying BOLD variance and the average correlation strength, establishing a tight relationship between the three measures (see Figure 1 for an example from one subject). We ask whether the modulations can be explained by stationary dynamics, using both surrogate data and a mean-field model. In this way, we show that the presence and size of the modulations are explained by stationary dynamics. However, the dwell times at the peaks and troughs of the modulations are longer in the real data than expected. We conclude that in order to understand dFC, we should consider deviations from expected modulations rather than focusing primarily on their size, which stresses the importance of appropriate null models.
Figure 1. Traces of two measures of BOLD dynamics (blue: instantaneous average correlation, i.e. overall level of FC; orange: instantaneous BOLD variance, i.e. average over all brain regions’ variance) together with an RSN time course (grey)
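A minimal sliding-window sketch of the three quantities related in Figure 1 (window length and step size are our choices, not necessarily those used in the study):

# Sliding-window dFC sketch: windowed FC matrices, the instantaneous average
# correlation (overall level of FC), and the average BOLD variance.
import numpy as np

def sliding_dfc(bold, win=30, step=2):
    """bold: (n_time, n_roi) BOLD time series, in TR units."""
    fc, mean_corr, mean_var = [], [], []
    for t0 in range(0, bold.shape[0] - win + 1, step):
        seg = bold[t0:t0 + win]
        c = np.corrcoef(seg.T)                     # windowed FC matrix
        fc.append(c)
        iu = np.triu_indices_from(c, k=1)
        mean_corr.append(c[iu].mean())             # overall level of FC
        mean_var.append(seg.var(axis=0).mean())    # average BOLD variance
    return np.array(fc), np.array(mean_corr), np.array(mean_var)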
References
1. Biswal B, Zerrin Yetkin F, Haughton VM, Hyde JS: Functional connectivity in the motor cortex of resting human brain using echo-planar MRI. Magnetic Resonance in Medicine 1995, 34(4):537–541
2. Beckmann CF, DeLuca M, Devlin JT, Smith SM: Investigations into resting-state connectivity using independent component analysis. Philosophical Transactions of the Royal Society of London B: Biological Sciences 2005, 360(1457):1001–1013
3. Preti MG, Bolton TAW, Van De Ville D: The dynamic functional connectome: State-of-the-art and perspectives. NeuroImage (in press)
4. Hindriks R, Adhikari MH, Murayama Y, Ganzetti M, Mantini D, Logothetis NK, Deco G: Can sliding-window correlations reveal dynamic functional connectivity in resting-state fMRI? Neuroimage 2016, 127:242–256
5. Glomb K, Ponce-Alvarez A, Gilson M, Ritter P, Deco G: Robust extraction of spatio-temporal patterns from resting state fMRI. bioRxiv 2016:08951
P54 Non-parametric estimation of network connectivity using MVAR processes in multiunit activity
Matthieu Gilson1, Adria Tauste Campo1,2, Alexander Thiele3, Gustavo Deco1,4
1Computational Neuroscience Group, Department de Tecnologies de la Informació i les Comunicacions, Universitat Pompeu Fabra, Barcelona, Spain; 2Epilepsy Monitoring Unit, Department of Neurology, Hospital del Mar Medical Research Institute, Barcelona, Spain; 3Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, UK; 4Institució Catalana de Recerca i Estudis Avançats, Barcelona, Spain
Correspondence: Matthieu Gilson (matthieu.gilson@upf.edu)
BMC Neuroscience 2017, 18(Suppl 1):P54
Connectivity inference has become a cornerstone of neuroscience following recent progress in recording techniques for characterizing functional networks. New recording techniques using electrode arrays allow for the study of the simultaneous activity of distant neuronal populations. Based on our recently proposed non-parametric detection method for multivariate autoregressive (MVAR) processes [1], we examine interactions between 26 electrode channels of a Utah array implanted in a monkey performing a passive visual task. The multiunit activity envelope (MUAe) putatively reflects the spiking activity of the neuronal populations neighboring the electrodes, with a resolution of a few milliseconds. However, MUAe activity appears very noisy across trials; for example, averaging over many trials is required to exhibit differences in magnitude. It is therefore questionable whether MUAe conveys temporal information related to pairs of channels that could be decoded by an MVAR model. Our method estimates the (correlated) noisy inputs received by the channels in addition to the directed connectivity between them. We find many significant interactions after the stimulus presentation, in contrast to the pre-stimulus period.
Meanwhile, we compare several types of surrogate techniques applied to the MUAe time series to build the null hypothesis of no connection in the channel network. In doing so, we also evaluate the importance of building a null distribution for each possible interaction, as compared to a single null distribution for the whole network (i.e., a homogeneous test for all channel pairs). Last, we examine the stimulus-related directed interactions in relation to the increase of MUAe activity of the source and target channels: we observe that outgoing weights are positively correlated with the channel’s activity, suggesting a gating of an underlying non-trivial connectivity by the local channel activity. The application of our method to MUAe (corresponding to high frequencies between 600 Hz and 4 kHz) complements existing techniques such as Granger causality applied to the local field potential (1–300 Hz) for these electrode recordings.
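The following sketch illustrates the overall logic with a first-order MVAR least-squares fit and a circular-shift surrogate null (one simple surrogate scheme among those one could compare; the non-parametric test of Ref. [1] is more elaborate):

# Sketch: first-order MVAR fit plus a per-connection surrogate null.
import numpy as np

def fit_mvar1(x):
    """x: (n_time, n_ch). Least-squares A in x[t+1] = A x[t] + noise."""
    X0, X1 = x[:-1], x[1:]
    return np.linalg.lstsq(X0, X1, rcond=None)[0].T   # (n_ch, n_ch)

def surrogate_null(x, n_surr=200, rng=np.random.default_rng(0)):
    """Null distribution of |A_ij| from circularly shifted channels, which
    destroys cross-channel timing but keeps each channel's autocorrelation."""
    n_ch = x.shape[1]
    null = np.empty((n_surr, n_ch, n_ch))
    for k in range(n_surr):
        shifts = rng.integers(1, x.shape[0], size=n_ch)
        xs = np.column_stack([np.roll(x[:, i], s) for i, s in enumerate(shifts)])
        null[k] = np.abs(fit_mvar1(xs))
    return null

# per-connection significance: compare |A_ij| from the data to null[:, i, j],
# rather than pooling all pairs into a single homogeneous null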
Acknowledgements
MG acknowledges funding from the Marie Sklodowska-Curie Action (grant H2020-MSCA-656547). MG and GD were supported by the Human Brain Project (grants FP7-FET-ICT-604102 and H2020-720270 HBP SGA1). GD and ATC were supported by the European Research Council Advanced Grant DYSTRUCTURE (Grant 295129). The authors are grateful to Robert Castelo and Inma Tur for constructive discussions.
Reference
P55 Dependence of absence seizure dynamics on physiological parameters
Farah Deeba1,2, Paula Sanz-Leon1,2, P. A. Robinson1,2
1School of Physics, University of Sydney, Sydney, Australia; 2Center for Integrative Brain Function, University of Sydney, Sydney, Australia
Correspondence: Farah Deeba (farah.deeba@sydney.edu.au)
BMC Neuroscience 2017, 18(Suppl 1):P55
A neural field model of the corticothalamic system is applied to investigate the temporal and spectral characteristics of absence seizures in the presence of a temporally varying connection strength between the cerebral cortex and thalamus. It has previously been found that increasing connection strength drives the system into seizure once a threshold is passed and a supercritical Hopf bifurcation occurs [1,2]. In this study, the dynamics and spectral characteristics of the resulting seizures are explored as functions of maximum connection strength, time above threshold, and ramp rate [3]. Figure 1 shows the outcomes of the variation of maximum connection strength. The results enable spectral and temporal characteristics of seizures to be related to underlying physiological variations via nonlinear dynamics and neural field theory. Spectral analysis reveals that the power of harmonics and duration of the oscillations increase as maximum connection strength and time above threshold increase. It is also found that the time to reach the stable limit-cycle seizure oscillation from the instability threshold decreases with the square root of the ramp rate.
Figure 1. Effects of the variation of maximum connection strength. A. Maximum firing rate. B. Number of harmonics above dB during ictal state. C. Duration of oscillations. D. Power in harmonics
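As a toy illustration of the ramp protocol (a supercritical Hopf normal form, not the corticothalamic neural field model itself; all constants are placeholders):

# Toy normal-form sketch: a Hopf oscillator whose bifurcation parameter mu
# is ramped through threshold at rate r, mimicking a slowly increasing
# corticothalamic connection strength.
import numpy as np

def hopf_ramp(r, T=200.0, dt=1e-3, omega=2 * np.pi * 3.0):
    """dz/dt = (mu(t) + i*omega) z - |z|^2 z, with mu ramped at rate r.
    Returns the oscillation amplitude |z(t)| (seizure-like once mu > 0)."""
    n = int(T / dt)
    z = 1e-3 + 0j                      # small perturbation above the fixed point
    mu0 = -0.5                         # start below the instability threshold
    amp = np.empty(n)
    for k in range(n):
        mu = mu0 + r * k * dt          # ramped bifurcation parameter
        z += dt * ((mu + 1j * omega) * z - abs(z) ** 2 * z)
        amp[k] = abs(z)
    return amp

Sweeping r in a toy model like this shows the delay between the threshold crossing and the fully developed limit-cycle oscillation shrinking as the ramp rate grows, in the spirit of the square-root scaling reported above.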
Acknowledgements
This work was supported by the Australian Research Council Center of Excellence for Integrative Brain Function Grant CE140100007, and by Australian Research Council Laureate Fellowship Grant FL140100025.
References
1. Breakspear M, Roberts JA, Terry JR, Rodrigues S, Mahant N, Robinson PA: A unifying explanation of primary generalized seizures through nonlinear brain modeling and bifurcation analysis. Cereb Cortex 2005, 16:1296–1313.
2. Robinson PA, Rennie CJ, Rowe DL: Dynamics of large-scale brain activity in normal arousal states and epileptic seizures. Phys Rev E 2002, 65:041924.
3. Deeba F, Sanz-Leon P, Robinson PA: Dependence of absence seizure dynamics on physiological parameters. Phys Rev E, submitted.
P56 NEST-SpiNNaker comparison of large-scale network simulations
Sacha J. van Albada1, Andrew Rowley2, Johanna Senk1, Michael Hopkins2, Maximilian Schmidt1,3, Alan B Stokes2, David R Lester2, Steve Furber2, Markus Diesmann1,4,5
1Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6), Jülich Research Centre and JARA BRAIN Institute I, Jülich, 52425, Germany; 2School of Computer Science, University of Manchester, Manchester, M13 9PL, UK; 3Laboratory for Neural Circuit Theory, RIKEN Brain Science Institute, Wako, 351-0106, Japan; 4Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen University, Aachen, 52062, Germany; 5Department of Physics, Faculty 1, RWTH Aachen University, Aachen, 52062, Germany
Correspondence: Sacha J. van Albada (s.van.albada@fz-juelich.de)
BMC Neuroscience 2017, 18(Suppl 1):P56
We previously reported [1] the porting of a full-scale cortical microcircuit model [2] from the neural network simulation software NEST [3] to the digital neuromorphic hardware SpiNNaker [4] via the PyNN [5] meta-simulation language. The network contains around 80,000 leaky integrate-and-fire neurons and 0.3 billion synapses, and is thereby the network with the most connections simulated on SpiNNaker to date. The Poisson drive of the original model was replaced by a DC input. The NEST simulations were performed on a cluster using multithreading and MPI parallelism, at 0.1 ms resolution. The single-neuron and network dynamics were compared between the two simulators and with NEST simulations with precise spike timing [6] as a reference.
In this work, we further compare the performance of the two simulators in terms of speed, power, and energy consumption, controlling for accuracy. For the network simulations, achieving an accuracy comparable to that of NEST requires a slowdown factor of around 20 with respect to real time on the present SpiNNaker version, to account for the 0.1 ms resolution and to avoid spike loss. NEST simulation speed saturates at one-third real time, but this speed comes at an energy cost. The energy-to-solution of the NEST simulations is minimized at around 96 virtual processes, where the simulation runs at about one-seventh real time and achieves an energy consumption per synaptic event similar to that of SpiNNaker at similar solution accuracy. The asynchronous update scheme of SpiNNaker may yet confer an advantage in power efficiency for even larger network simulations.
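The portability underlying such a comparison comes from PyNN: the same script targets NEST or SpiNNaker by swapping the backend import. A miniature, hedged sketch (a toy two-population network, not the 80,000-neuron microcircuit; all parameter values are placeholders):

# Hedged PyNN sketch of simulator portability (PyNN 0.8+ style API).
import pyNN.nest as sim            # or: import pyNN.spiNNaker as sim

sim.setup(timestep=0.1)            # 0.1 ms resolution, as in the comparison
exc = sim.Population(400, sim.IF_curr_exp(tau_m=10.0, v_thresh=-50.0))
inh = sim.Population(100, sim.IF_curr_exp(tau_m=10.0, v_thresh=-50.0))
exc.set(i_offset=0.4)              # DC drive standing in for Poisson input
sim.Projection(exc, inh, sim.FixedProbabilityConnector(0.1),
               sim.StaticSynapse(weight=0.05, delay=1.5),
               receptor_type='excitatory')
sim.Projection(inh, exc, sim.FixedProbabilityConnector(0.1),
               sim.StaticSynapse(weight=0.2, delay=0.8),
               receptor_type='inhibitory')
exc.record('spikes')
sim.run(1000.0)                    # 1 s of biological time
spikes = exc.get_data('spikes')    # Neo data structure with spike trains
sim.end()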
Acknowledgements
This project received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No. 720270, and was previously supported by the European Union under grant agreement No. 269921 (BrainScaleS) and FP7-604102 (Human Brain Project). The design and construction of the SpiNNaker machine was supported by EPSRC (the UK Engineering and Physical Sciences Research Council) under grants EP/D07908X/1 and EP/G015740/1. Ongoing support comes from the European Research Council under the European Union’s Seventh Framework Programme (FP7/2007-2013)/ERC grant agreement 320689.
References
1. Van Albada SJ, Rowley AG, Hopkins M, Schmidt M, Senk J, Stokes AB, Galluppi F, Lester DR, Diesmann M, Furber SB: Full-scale simulation of a cortical microcircuit on SpiNNaker. Front Neuroinform Conference Abstract: Neuroinformatics 2016. doi: 10.3389/conf.fninf.2016.20.00029
2. Potjans TC, Diesmann M: The cell-type specific cortical microcircuit: relating structure and activity in a full-scale spiking network model. Cereb Cortex 2014, 24:785–806.
3. Eppler JM, et al.: NEST 2.8.0. Zenodo 2015. doi:10.5281/zenodo.32969
4. Furber SB, Lester DR, Plana LA, Garside JD, Painkras E, Temple S, Brown AD: Overview of the SpiNNaker system architecture. IEEE Transactions on Computers 2013, 62:2454–2467.
5. Davison A, Brüderle D, Kremkow J, Muller E, Pecevski D, Perrinet L, Yger P: PyNN: a common interface for neuronal network simulators. Front Neuroinform 2009, 2:11.
6. Hanuschkin A, Kunkel S, Helias M, Morrison A, Diesmann M: A general and efficient method for incorporating precise spike times in globally time-driven simulations. Front Neuroinform 2010, 4:113.
P57 Temporal processing in the cerebellar cortex enabled by dynamical synapses
Alessandro Barri1, Martin T. Wiechert2, David A. DiGregorio1
1Unite d’Imagerie Dynamique du Neurone, Institut Pasteur, Paris, France; 2Department of Physiology, Universität Bern, Bern, Switzerland
Correspondence: Alessandro Barri (abarri@pasteur.fr)
BMC Neuroscience 2017, 18(Suppl 1):P57
The cerebellar cortex (CC) is considered to be essential for the learning of precisely timed tasks on the order of several tens of ms to a few seconds. Experimentally, this property of the cerebellum can be probed with the classical eye-blink paradigm [1] in which an animal learns to associate two stimuli that are separated by a temporal delay.
Since the classical work of Marr and Albus [2,3], the great majority of cerebellar models consider the CC as a three-layered network in which mossy fibres (MFs) and Purkinje cells (PCs) form the input and output layers, respectively, and granule cells (GCs) constitute a hidden layer. In this framework, temporal learning in the CC is thought to work as follows: an external input to the CC elicits temporally varying responses in GCs. PCs then weight these GC signals (by adjusting the GC-PC synapses) so as to produce the desired output [4]. This learning paradigm requires sufficiently diverse temporal signals across the GCs.
Various mechanisms that generate diverse time-varying signals in GCs have been proposed [e.g. 5–7]. Recent findings have established that synaptic transmission between MFs and GCs exhibits various forms of synaptic short-term plasticity (STP) [8]. Here we show that these synaptic dynamics can provide a sufficiently rich temporal modulation of GC activity to enable temporal learning by PCs on behaviourally relevant timescales.
Our study consists of two parts. First, we re-analysed data from MF-GC dual-cell recordings from Ref. [8] with a model-based inference method [9] and extracted parameters associated with presynaptic depression, facilitation, and postsynaptic receptor desensitisation. This revealed a rich diversity of synaptic time constants. We find that the longest of these time constants are associated with desensitisation.
In a second step, we used the experimentally obtained synaptic parameters to constrain a firing-rate-based model of the CC. In this model, GCs exhibit transient modulations of their firing rates in response to changes in MF activity. We show that these GC transients enable PCs to learn precisely timed modulations of their firing rates. The time-scales of the PC signals that can be learned are similar to those observed in behavioural responses during the eye-blink paradigm. Furthermore, when MF-GC synapses are dynamic, abrupt changes in MF activation cause model PCs to respond with sharp transient changes in their firing rates. We show that these PC responses can be interpreted as a signal of how much the sensory context provided by MFs has changed.
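A minimal depression-plus-facilitation synapse in the Tsodyks-Markram form (used here as a generic stand-in for the presynaptic processes inferred above; receptor desensitisation and the fitted parameter values are not included):

# Sketch of short-term plasticity at a single synapse: x tracks available
# resources (depression), u tracks release probability (facilitation).
import numpy as np

def stp_response(spike_times, U=0.2, tau_rec=0.2, tau_fac=0.5):
    """Relative synaptic efficacy at each presynaptic spike (placeholder
    parameters; one common variant of the Tsodyks-Markram model)."""
    x, u, t_last = 1.0, U, None
    eff = []
    for t in spike_times:
        if t_last is not None:
            dt = t - t_last
            x = 1.0 - (1.0 - x) * np.exp(-dt / tau_rec)   # resources recover
            u = U + (u - U) * np.exp(-dt / tau_fac)       # facilitation decays
        u = u + U * (1.0 - u)      # facilitation increment at the spike
        eff.append(u * x)          # released fraction ~ EPSC amplitude
        x = x * (1.0 - u)          # resources consumed by release
        t_last = t
    return np.array(eff)

# e.g. the efficacy profile of a 50 Hz train: stp_response(np.arange(0, 0.2, 0.02))

A population of such synapses with heterogeneous time constants produces the diverse GC transients on which the rate-based PC learning described above can operate.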
References
1. McCormick DA, Thompson RF: Cerebellum: essential involvement in the classically conditioned eyelid response. Science, 1984, 223:296–299.
2. Marr D: A theory of cerebellar cortex. The Journal of Physiology, 1969, 202:437–470.
3. Albus JS: A Theory of Cerebellar Function. Mathematical Biosciences, 1971, 10:25–61.
4. Dean P, Porrill J, Ekerot C-F, Jörntell H: The cerebellar microcircuit as an adaptive filter: experimental and computational evidence. Nature Reviews Neuroscience. 2010, 11:30–43.
5. Moore JW, Desmond JE, Berthier NE: Adaptively timed conditioned responses and the cerebellum: a neural network approach. Biological cybernetics, 1989, 62:17–28.
6. Medina JF, Mauk MD: Computer simulation of cerebellar information processing. Nature Neuroscience, 2000, 3:1205–1211.
7. Yamazaki T, Tanaka S. The cerebellum as a liquid state machine. Neural Networks, 2007, 20:290–297.
8. Chabrol FP, Arenz A, Wiechert MT, Margrie TW, DiGregorio DA: Synaptic diversity enables temporal coding of coincident multisensory inputs in single neurons. Nature Neuroscience, 2015, 18:718–727.
9. Barri A, Wang Y, Hansel D, Mongillo G: Quantifying Repetitive Transmission at Chemical Synapses: A Generative-Model Approach. Eneuro, 2016, 3.
P58 Emergence of perceptual invariances in biological sensory processing
Alexander G. Dimitrov
Department of Mathematics and Statistics, Washington State University Vancouver, Vancouver, WA 98686, USA
Correspondence: Alexander G. Dimitrov (alex.dimitrov@wsu.edu)
BMC Neuroscience 2017, 18(Suppl 1):P58
A problem faced by all perceptual systems is natural variability in sensory stimuli. Some variability is irrelevant for perception, whereas other types of variability form the critical basis for distinguishing different objects. Interpreting varied optical signals as originating from the same object requires a large degree of tolerance [1]. Understanding speech requires identifying phonemes, such as the consonant /g/, that constitute spoken words. A /g/ is perceived as a /g/ despite tremendous variability in acoustic structure that depends on the surrounding vowels and consonants [2]. The main goal of object recognition is to identify individual objects while remaining invariant to changes stemming from multiple transformations.
To model invariant representation in sensory systems, we model the represented probability of sensory stimuli as a distribution \( p(w) \) over stimulus features \( w \), jointly with a distribution \( p(\tau) = \prod_{i} p(\tau_{i}) \) of transformations \( g(\tau) \) acting independently on the features. The ensemble of features \( \{s_{j}\} \) is considered to have been drawn from the feature distribution \( p(w) \), with transformations \( g_{k} \equiv g_{k}(\tau_{i}) \) applied to the sound features, so that \( s_{j} = g_{1} g_{2} \ldots g_{n} w_{j} \).
This probabilistic stimulus representation allows a straightforward expansion of the degree of invariance. Consider a population of locally invariant (transformation-tuned) feature detectors, each representing the probability \( p(w|w_{0})\,p(\tau|\mu) \) for a signal having a specific feature \( w_{0} \), but a separate preferred transformation distribution \( p(\tau|\mu) \). For example, \( \mu \) can be the preferred scale or position of the feature. Invariance extension is natural in this formalism: we define a broad region of transformation parameters, \( \varOmega \), and a distribution over preferred means, \( p(\mu) \), which is essentially uniform over \( \varOmega \). With this addition, a set of locally invariant units with \( \mu \in \varOmega \) can be combined into a unit invariant for all transformations with \( \tau \in \varOmega \) by the simple act of marginalization over preferences,
\( p(\tau|\varOmega) = \int_{\varOmega} p(\tau|\mu)\,p(\mu)\,d\mu. \)  (1)
When only one or a few transformations are marginalized, the system will be more invariant to those transformations, while retaining its degree of covariance to other transformations. This process realizes a mixture model of feature or template detectors with different preferred transformations.
The theoretical aspects of marginalization are deceptively simple: according to Eq. (1), a linear operation (a weighted sum when discretized) leads to an invariant stimulus representation. Instantiating the theory in the neural context is more involved. While we posit that neural activity represents probabilities of stimuli and transformations, what is available to other parts of the nervous system is a specific sample from that probability, the neural population vector response \( r = (r_{1}, r_{2}, r_{3}, \ldots, r_{n}) \). The question to be solved then is what operation should be performed on \( r \) such that the result \( R = f(r) \) has the desired distribution from Eq. (1)? In other words, how do neuronal populations actually achieve invariance through marginalization?
To address this question, we use probabilistic population coding (PPC, [3]), a model of neural population coding that has the capacity to perform the necessary marginalization. With PPC, it has been shown conceptually how marginalization over two distinct transformations can be realized through divisive normalization. We innovate in two aspects. First, we use this modeling approach as an explanatory tool for specific brain areas, rather than for the conceptual example provided in [3]. Second, we generalize the results to multiple transformations, as required by Eq. (1).
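A numerical sketch of Eq. (1) (our own discretization, with Gaussian transformation tuning; widths and ranges are arbitrary):

# A bank of units tuned to the same feature w0 but different preferred
# transformation values mu is averaged over mu in Omega, giving a response
# invariant to tau within Omega while still selective for w0.
import numpy as np

tau = np.linspace(-3, 3, 601)        # transformation parameter axis
mus = np.linspace(-1, 1, 21)         # preferred values tiling Omega = [-1, 1]
sigma = 0.3                          # local tuning width (placeholder)

# locally invariant units: p(tau | mu), here Gaussian tuning around mu
units = np.exp(-(tau[None, :] - mus[:, None]) ** 2 / (2 * sigma ** 2))

# marginalization over preferences with uniform p(mu) on Omega:
# the resulting profile is approximately flat for tau inside [-1, 1]
invariant = units.mean(axis=0)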
References
1. Rust NC, DiCarlo JJ: Selectivity and tolerance (“invariance”) both increase as visual information propagates from cortical area V4 to IT. Journal of Neuroscience 2010, 30(39):12978–12995.
2. Diehl RL, Lotto AJ, Holt LL: Speech perception. Annu. Rev. Psychol. 2004, 55:149–179.
3. Beck JM, Latham PE, Pouget A: Marginalization in neural circuits with divisive normalization. J. Neurosci. 2011, 31:15310–15319.
P59 A non-linear stochastic strategy to estimate synaptic conductances in the presence of subthreshold ionic currents
Catalina Vich1, Rune W. Berg2, Antoni Guillamon3, Susanne Ditlevsen4
1Department of Mathematics and Computer Science, Universitat de les Illes Balears, Palma, 07122, Spain; 2Department of Neuroscience and Pharmacology, University of Copenhagen, Copenhagen, 2100, Denmark; 3Department of Applied Mathematics I, EPSEB, Universitat Politècnica de Catalunya, 08028, Barcelona, Spain; 4Department of Mathematical Science, University of Copenhagen, Copenhagen, 2100, Denmark
Correspondence: Catalina Vich (catalina.vich@uib.es)
BMC Neuroscience 2017, 18(Suppl 1):P59
Unveiling the information that a neuron receives from other neurons, and distinguishing between excitatory and inhibitory inputs, is an important task in neuroscience, as it provides valuable information on local connectivity and brain operating conditions. Experimentally, synaptic conductances are difficult to estimate due to the diversity of synaptic inputs and the impossibility of measuring their conductances directly. Different linear inverse methods have been proposed to solve this problem, such as [1–3].
It has been reported that linear models provide poor estimates in spiking regimes (see [4]), but they can also be poor if ionic currents are active in the subthreshold regime (see [5]). Thus, taking a linear model as a generic one to estimate conductances does not seem a valid strategy in all situations; even with some data treatment, such as filtering the observed trace, the transformed dynamics cannot be assumed to follow a linear model.
A deterministic strategy taking into account quadratic terms has been developed (see [5]), which seems to improve the estimates in the presence of subthreshold fluctuations. However, the method does not incorporate noise and, moreover, it requires the use of two voltage traces from different trials, which can lead to misestimations.
In this work, we propose a new strategy to estimate synaptic conductances, which has been tested on in silico data and applied to in vivo recordings. The model is constructed to capture the non-linearities caused by subthreshold-activated currents, and the estimation procedure can discern between excitatory and inhibitory conductances using only one membrane potential trace. More precisely, we perform second-order approximations of biophysical models to capture the subthreshold non-linearities, resulting in quadratic integrate-and-fire models, and apply approximate maximum likelihood estimation in which we only suppose that the conductances are stationary within a 50 ms time window. The results show good estimates when the method is applied to different computational models endowed with different subthreshold ionic currents. Moreover, we also obtain an improvement when we compare the proposed estimation procedure with a linear method with similar features and with an oversampling method.
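A schematic version of the estimation idea (ordinary least squares stands in here for the approximate maximum likelihood of the actual method, and the quadratic model parameters are placeholders):

# Window-wise estimation of stationary g_E, g_I from a single subthreshold
# voltage trace, assuming the quadratic model is known up to the synaptic terms:
#   C dv/dt = k (v - v_r)(v - v_t) - g_E (v - E_e) - g_I (v - E_i)
import numpy as np

def estimate_conductances(v, dt, C=1.0, k=0.04, v_r=-65.0, v_t=-50.0,
                          E_e=0.0, E_i=-80.0, win=0.05):
    """v: voltage trace (mV), dt: time step (s), win: 50 ms stationarity window.
    Returns per-window (g_E, g_I) estimates."""
    n_win = int(win / dt)
    est = []
    for t0 in range(0, len(v) - n_win - 1, n_win):
        seg = v[t0:t0 + n_win + 1]
        dvdt = np.diff(seg) / dt
        vm = seg[:-1]
        # residual current after removing the known quadratic (intrinsic) term
        y = C * dvdt - k * (vm - v_r) * (vm - v_t)
        A = np.column_stack([-(vm - E_e), -(vm - E_i)])
        g, *_ = np.linalg.lstsq(A, y, rcond=None)
        est.append(g)              # (g_E, g_I), assumed constant in the window
    return np.array(est)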
References
1. Bédard C, Béhuret S, Deleuze C, Bal T, Destexhe A: Oversampling method to extract excitatory and inhibitory conductances from single-trial membrane potential recordings. Journal of Neuroscience Methods 2011, 210(1).
2. Berg RW, Ditlevsen S: Synaptic inhibition and excitation estimated via the time constant of membrane potential fluctuations. Journal of Neurophysiology 2013, 110(4).
3. Lankarany M, Heiss JE, Lampl I, Toyoizumi T: Simultaneous Bayesian estimation of excitatory and inhibitory synaptic conductances by exploiting multiple trials. Frontiers in Computational Neuroscience 2016, 10:110.
4. Guillamon A, McLaughlin DW, Rinzel J: Estimation of synaptic conductances. Journal of Physiology-Paris 2006, 100(1–3).
5. Vich C, Guillamon A: Dissecting estimation of conductances in subthreshold regimes. Journal of Computational Neuroscience 2015, 39(3).
P60 Involvement of randomness in reinforcement learning
Romain D. Cazé1, Benoît Girard1, Stéphane Doncieux1
1ISIR, Université Pierre et Marie Curie, Paris, 75005, France
Correspondence: Romain D. Cazé (romain.caze@gmail.com)
BMC Neuroscience 2017, 18(Suppl 1):P60
Animals may respond differently when confronted twice with the exact same situation. Classic reinforcement learning agents implement this randomness using a constant parameter: most of the time, the agent picks the action with the highest value to maximize its reward, and for a fraction of the time, regulated by this parameter, it picks an action at random. This randomness enables an agent to probe unexplored options and helps to address the exploration-exploitation trade-off. While this approach successfully explains how animals can behave randomly, it fails to replicate the non-uniform variation of performance observed from one day to another. For instance, performance at the end of one day can be higher than at the beginning of the next day. To reproduce this daily variation, we use here a parameter that varies periodically from session to session. First, we compare, on a single session, three types of agents performing the standard armed-bandit problem. For these three agents, the parameter setting the randomness takes either (1) a low value, (2) a high value, or (3) a value that decreases over the session, and we show that this latter type of agent collects more reward than the others. We reset the parameter regulating the third agent's randomness between sessions to mimic a resting period. Second, we consider a session in which agents must learn an armed-bandit problem different from the one learned in the previous session; the third type of agent still performs best and, unlike the first type in particular, remains unaffected by the change. Our work paves the way for a new type of agent with periodic variations in the randomness of its choices.
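A minimal sketch of the three agents (parameter values, the exponential decay schedule, and the learning rate are illustrative choices, not necessarily those of the study):

# Epsilon-greedy agents on a stationary armed bandit; the decaying epsilon
# is reset at the start of each session, mimicking a resting period.
import numpy as np

def run_session(p_reward, agent, n_trials=500, rng=np.random.default_rng(1)):
    q = np.zeros(len(p_reward))             # action values
    total, eps0 = 0.0, agent['eps']         # epsilon is reset between sessions
    for t in range(n_trials):
        eps = eps0 * np.exp(-t / agent['tau']) if agent['decay'] else eps0
        if rng.random() < eps:
            a = rng.integers(len(q))        # explore
        else:
            a = int(q.argmax())             # exploit
        r = float(rng.random() < p_reward[a])
        q[a] += 0.1 * (r - q[a])            # standard delta-rule update
        total += r
    return total

agents = [{'eps': 0.05, 'decay': False, 'tau': 1},    # (1) low, constant randomness
          {'eps': 0.4,  'decay': False, 'tau': 1},    # (2) high, constant randomness
          {'eps': 0.4,  'decay': True,  'tau': 100}]  # (3) decaying, reset each session
rewards = [run_session([0.2, 0.5, 0.8], ag) for ag in agents]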
P61 Modelling the impact of dendritic spine geometry on electrical and calcium signalling with the Finite Element Method
Nicolas Doyon, Frank Boahen
Department of Mathematics and Statistics, Laval University, Quebec, Canada, G1V 0A6
Correspondence: Nicolas Doyon (nicolas.doyon@mat.ulaval.ca)
BMC Neuroscience 2017, 18(Suppl 1):P61
The complex geometry of neural subcompartments such as dendritic spines and nodes of Ranvier plays an important role in calcium and electrical signalling. The usual multi-compartment approach fails to accurately describe electrodiffusion in such domains. A way to obtain more accurate results, and to describe the spatial distribution of ionic concentrations and electrical potential down to a nanometric resolution, is to solve the Poisson-Nernst-Planck (PNP) equations with the Finite Element Method (FEM). Given that applying this technique to complex three-dimensional geometries can rapidly lead to prohibitive computational costs, mathematical tools from the field of numerical analysis are required. We present how one such tool, automatic mesh adaptation, can improve solution accuracy in an electrodiffusion model of a node of Ranvier [1]. We then describe electrical and calcium signalling in a dendritic spine with the FEM. Spine geometry varies greatly from one spine to another, as well as during synaptic potentiation, but the functional roles of this geometry are still not fully understood [2]. We show how FEM-based models provide an ideal tool to investigate this question [3]. Using models with different geometries, we finally obtain relationships between the geometry of the spine and the properties of calcium and electrical signalling.
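As a much-simplified illustration of the FEM workflow (a linear-element solver for a 1D Poisson problem on a non-uniform mesh; the actual study solves the coupled PNP system on 3D geometries with automatic mesh adaptation):

# 1D FEM sketch: solve -(eps u')' = rho with Dirichlet boundary conditions,
# with mesh refinement concentrated near a region of interest.
import numpy as np

def fem_poisson_1d(nodes, rho, eps=1.0, u0=0.0, u1=0.0):
    """nodes: sorted node positions; rho: source value per element."""
    n = len(nodes)
    K = np.zeros((n, n)); f = np.zeros(n)
    for e in range(n - 1):                           # assemble element by element
        h = nodes[e + 1] - nodes[e]
        ke = eps / h * np.array([[1, -1], [-1, 1]])  # local stiffness matrix
        fe = rho[e] * h / 2 * np.ones(2)             # local load (midpoint rule)
        K[e:e + 2, e:e + 2] += ke
        f[e:e + 2] += fe
    K[0, :], K[-1, :] = 0, 0                         # impose Dirichlet boundaries
    K[0, 0] = K[-1, -1] = 1.0
    f[0], f[-1] = u0, u1
    return np.linalg.solve(K, f)

# refinement near x = 0.5 mimics adapting the mesh around a thin spine neck
x = np.unique(np.concatenate([np.linspace(0, 1, 20),
                              np.linspace(0.45, 0.55, 30)]))
u = fem_poisson_1d(x, rho=np.ones(len(x) - 1))

Concentrating degrees of freedom where gradients are steep is the essence of mesh adaptation: accuracy improves locally without the cost of refining the whole domain.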
Acknowledgements
The work presented here was supported by the Natural Sciences and Engineering Research Council of Canada (NSERC)
References
1. Dione I, Deteix J, Briffard T, Chamberland E, Doyon N: Improved simulation of electrodiffusion in the node of Ranvier by mesh adaptation. PLoS One 2016, 11(8).
2. Yuste R: Electrical compartmentalization in dendritic spines. Annu Rev Neurosci 2013, 36:429–449.
3. Holcman D, Yuste R: The new nanophysiology: regulation of ionic flow in neuronal subcompartments. Nat Rev Neurosci 2015, 16(11):685–692.
P62 Resilience in dynamical neural networks with synaptic adaptation
Patrick Desrosiers1,2, Edward Laurence2, Nicolas Doyon1,3, Louis J. Dubé2
1Centre de recherche de l’Institut universitaire en santé mentale de Québec, Québec, Québec, Canada, G1J 2G3; 2Département de physique, de génie physique et d’optique, Université Laval, Québec, Québec, Canada G1V 0A6; 3Département de mathématiques et de statistique, Université Laval, Québec, Québec, Canada G1V 0A6
Correspondence: Patrick Desrosiers (Patrick.desrosiers.1@ulaval.ca)
BMC Neuroscience 2017, 18(Suppl 1):P62
The brain is a notoriously resilient system. After minor strokes, for example, parts of the brain reorganize their structural connectivity and essentially recover their original functions. Although some dynamical effects of brain network failures on brain activity have been found [1], most studies of resilient neural systems have so far focused on purely topological properties of connectomes. This is due in part to the inherent high dimensionality of dynamical neural systems. Recent progress suggests, however, that the resilience analysis of many complex dynamical systems can be dramatically simplified by dimension reductions resulting from mean-field approximations [2,3]. We extend these previous works to study models of neural networks in which both the neurons and the synaptic weights are dynamical variables. In our framework, the dynamics of a network with N neurons is described by N(N + 1) nonlinear coupled ODEs that govern the fast evolution of the neural activity (e.g., firing rates) as well as the slow adaptation of the synaptic weights (e.g., Hebbian potentiation with saturation). Two global variables, the effective activity \( x_{eff} \) and the effective synaptic weight \( \beta_{eff} \), are used for predicting the global evolution of the whole system. We prove, both numerically and theoretically, that \( x_{eff} \) captures the behavior of the network more accurately than the usual mean network activity. When synaptic adaptation is neglected, the resilience analysis can easily be done with bifurcation diagrams, as in Figure 1A. Structural perturbations, such as weak or strong attacks that respectively change weights or break synaptic connections, result in a modification of \( \beta_{eff} \). If the latter reaches some critical value \( \beta_{c} \), the system undergoes a sudden transition and loses its resilience. This is numerically confirmed in Figure 1B. As illustrated in Figure 1C, the addition of synaptic adaptation leads to the emergence of new resilience patterns and often facilitates the recovery of the original network activity.
Figure 1. A. Typical bifurcation diagram for the effective model without synaptic adaptation, where \( \alpha, \lambda, \mu \) are dynamical parameters regulating the neural dynamics and \( \beta_{eff} \) is the effective synaptic weight. B. Global effective activity at equilibrium after weak (red line) or strong (blue line) attacks on static synaptic connections, compared to the theoretical hysteresis curve (dashed line) obtained from mean-field theory. C. Same as B but with synaptic adaptation. The squares, stars, and triangles respectively denote the equilibria before an attack, just after an attack but before adaptation, and after adaptation. Green line: resilience enabled by adaptation. The numerical solutions in B and C were produced from small random networks with 200 neurons and connectivity density \( p = 0.2 \)
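A sketch of the effective variables (following the spirit of the reduction in Ref. [3]; the exact reduction used here, especially once synaptic adaptation is included, is more involved):

# Dimension reduction of an N-node weighted network to two scalars:
# x_eff is a weight-based average of node activities, beta_eff summarizes
# the overall coupling strength.
import numpy as np

def effective_variables(W, x):
    """W: (N, N) synaptic weight matrix, x: (N,) neural activities.
    Uses the Gao-Barzel-Barabasi style averages: x_eff = 1^T W x / 1^T W 1
    and beta_eff = 1^T W (W 1) / 1^T W 1."""
    total = W.sum()
    x_eff = (np.ones(len(x)) @ W @ x) / total       # effective activity
    beta_eff = (np.ones(len(x)) @ W @ W.sum(axis=1)) / total  # effective coupling
    return x_eff, beta_eff

Tracking \( (\beta_{eff}, x_{eff}) \) before and after an attack then reduces the resilience question to whether the perturbed state falls on the upper or lower branch of a one-dimensional bifurcation diagram, as in Figure 1.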
Acknowledgements
FRQNT, NSERC, and the Sentinel North program supported by the Canada First Research Excellence Fund.
References
1. Joyce KE, Hayasaka S, Laurienti PJ: The human functional brain network demonstrates structural and dynamical resilience to targeted attack. PLoS Comput Biol 2013, 9 (1): e1002885 1–11.
2. Majdandzic A, Podobnik B, Buldyrev SV, Kenett DY, Havlin S, Stanley HE: Spontaneous recovery in dynamical networks. Nature Physics 2014, 10(1):34–38.
3. Gao J, Barzel B, Barabási AL: Universal resilience patterns in complex networks. Nature 2016, 530(7590):307–312.
P63 Cell assemblies: a computational challenge
Eleonora Russo, Daniel Durstewitz
Department of Theoretical Neuroscience, ZI - Central Institute for Mental Health, Mannheim, 68159, Germany
Correspondence: Eleonora Russo (eleonora.russo@zi-mannheim.de)
BMC Neuroscience 2017, 18(Suppl 1):P63
More than half a century ago, Hebb proposed that neurons may organize into coherent spatio-temporal activity patterns (‘cell assemblies’) to represent mental entities. Only recently, with the advance of multiple single-unit recording techniques, this core concept of computational and cognitive neuroscience has become experimentally accessible. From a statistical perspective, however, detecting these patterns in data still remains a major challenge: the presence of non-stationarity, the combinatorial explosion of multi-unit pattern configurations and the resulting necessity of a fast statistical test are only some of the difficulties to be faced when detecting cell assemblies. Here we present a novel mathematical framework that captures assembly structure at different temporal scales, levels of precision, and with arbitrary internal organization. Applying this methodology to multi-cell recordings from various brain areas we found that there is no universal cortical coding scheme, but that assembly structure strongly differs with brain area recorded and current task demands.
P64 Reconstructing neural dynamics from experimental data using radial basis function recurrent neural networks
Dominik Schmidt, Daniel Durstewitz
Department of Theoretical Neuroscience, Bernstein Center for Computational Neuroscience, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, Heidelberg, Germany
Correspondence: Dominik Schmidt (dominik.schmidt@zi-mannheim.de)
BMC Neuroscience 2017, 18(Suppl 1):P64
Neural recordings are often very complex, noisy and high-dimensional signals. Modern data acquisition techniques allow for simultaneous recordings from up to hundreds of units over many trials. To assess underlying network mechanics and dynamics, one has to analyze the population as a whole, for example, by reducing the dimensionality of the data [1]. In addition, neural responses are highly noisy and often fluctuate significantly between trials, even when experimental conditions are unchanged. These fluctuations may encode relevant behavioral information, such that simple averaging over trials could potentially smooth out and obscure behaviorally important aspects of neural dynamics [2]. A popular class of methods to reduce dimensionality while analyzing data on a trial by trial basis is the statistical framework of State Space Models (SSMs) [3]. The idea behind SSMs is that there is an underlying latent dynamical system generating the observations, with latent dynamics and observations having separate noise terms. While linear SSMs are widely used to recover hidden neural trajectories [4], they are only able to reproduce the linear aspects of the underlying neural dynamics. They are thus not powerful enough to capture the underlying dynamical system itself [5].
For that reason, we use a nonlinear SSM that includes radial basis functions (RBF) for the latent state dynamics, originally developed in [6]. With such a RBF expansion, arbitrary dynamical systems can be approximated [6], which potentially not only allows for dimensionality reduction and retrieving hidden neural trajectories, but also for reproducing the underlying dynamical system itself. To estimate parameters and hidden states of the model, an Expectation Maximization (EM) algorithm together with an Extended Kalman Filter-Smoother is used [6]. One advantage of this method is that all steps of the algorithm have a closed form analytical expression, resulting in computationally efficient parameter estimation that does not depend on computationally expensive numerical methods.
To assess the validity of the method and explore its capabilities, it is first applied to synthetically generated data from a number of different dynamical systems, including multistable, oscillatory and chaotic systems. In addition to this synthetic data, the method is probed on experimental data. This enables a detailed analysis of attractor dynamics within the observed regions and potentially yields not only a descriptive model, but also a predictive one.
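A generative sketch of the latent dynamics assumed by such a model (simulation only; the EM and Extended Kalman filter-smoother used for fitting are not shown, and all dimensions and constants are placeholders):

# RBF state space model, generative side: nonlinear latent dynamics built
# from radial basis functions, with a linear-Gaussian observation model.
import numpy as np

rng = np.random.default_rng(0)
dim, n_basis, T = 2, 10, 1000
centres = rng.uniform(-2, 2, (n_basis, dim))     # RBF centres in latent space
W = 0.1 * rng.standard_normal((dim, n_basis))    # basis-to-state weights
A = 0.9 * np.eye(dim)                            # linear leak term
s2, q = 0.5, 0.01                                # RBF width, process noise variance

z = np.zeros((T, dim))                           # latent state trajectory
for t in range(T - 1):
    phi = np.exp(-((z[t] - centres) ** 2).sum(axis=1) / (2 * s2))
    z[t + 1] = A @ z[t] + W @ phi + np.sqrt(q) * rng.standard_normal(dim)

# observations: linear readout of the latent state plus observation noise,
# the standard SSM observation equation
C = rng.standard_normal((20, dim))
x = z @ C.T + 0.1 * rng.standard_normal((T, 20))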
Acknowledgements
The work was funded by the German Research Foundation within the CRC 1134 (D01) and by the Federal Ministry of Education and Research (BMBF; 01ZX1311A)
References
1. Cunningham JP, Yu BM: Dimensionality reduction for large-scale neural recordings. Nature Neuroscience 2014, 17(11):1500–1509.
2. Latimer KW, Yates JL, Meister MLR, Huk AC, Pillow JW: Single-trial spike trains in parietal cortex reveal discrete steps during decision-making. Science 2015, 349(6244):184–187.
3. Durstewitz D, Koppe G, Toutounji H: Computational models as statistical tools. Current Opinion in Behavioral Sciences 2016, 11:93–99.
4. Yu BM, Cunningham JP, Santhanam G, Ryu SI, Shenoy KV, Sahani M: Gaussian-process factor analysis for low-dimensional single-trial analysis of neural population activity. Journal of Neurophysiology 2009, 102(1):614–635.
5. Durstewitz D: A State Space Approach for Piecewise-Linear Recurrent Neural Networks for Reconstructing Nonlinear Dynamics from Neural Measurements 2016, arXiv:1612.07846 [q-bio.NC]
6. Roweis S, Ghahramani Z: Learning Nonlinear Dynamical Systems Using the Expectation–Maximization Algorithm, in Kalman Filtering and Neural Networks 2001 (ed S. Haykin), John Wiley & Sons, Inc., New York, USA.
P65 Layer V pyramidal cells as mediators of delta oscillations: Insights from biophysically detailed modeling and connections with schizophrenia genetics
Tuomo Mäki-Marttunen1, Florian Krull1, Francesco Bettella1, Christoph Metzner2, Anna Devor3,4, Srdjan Djurovic5, Anders M. Dale3,4, Ole A. Andreassen1, Gaute T. Einevoll6,7
1NORMENT, Institute of Clinical Medicine, University of Oslo, Oslo, Norway; 2Centre for Computer Science and Informatics Research, University of Hertfordshire, Hatfield, UK; 3Department of Neurosciences, University of California San Diego, La Jolla, CA, USA; 4Department of Radiology, University of California San Diego, La Jolla, CA, USA; 5Department of Medical Genetics, Oslo University Hospital, Oslo, Norway; 6Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway; 7Department of Physics, University of Oslo, Oslo, Norway
Correspondence: Tuomo Mäki-Marttunen (tuomomm@uio.no)
BMC Neuroscience 2017, 18(Suppl 1):P65
Delta oscillations (0.5–4 Hz) are widely distributed brain oscillations that are observable with electroencephalogram (EEG) measurements during sleep and mental tasks. They seem to have two components, one thalamically generated and one originating from the neocortex [1]. The thalamically generated delta oscillation stems solely from the intrinsic properties of the thalamocortical neurons, while the cortically generated delta oscillations likely rely on the intrinsic properties of layer V pyramidal cells (L5PCs) [1]. Moreover, L5PCs integrate large numbers of inputs from thalamic nuclei [2] and could therefore play a crucial role in maintaining the thalamically generated delta as well.
Due to the pivotal role of L5PCs as hubs integrating information from nearby and distant brain areas, altered L5PC activity has been suggested as the reason behind faulty perceptions, such as hallucinations, in mental disease [2]. Importantly, schizophrenia (SCZ) patients show elevated power in delta oscillations, which may also be a sign of altered L5PC firing. Recent genome-wide association studies confirm the contribution of a large set of ion-channel-encoding (both synaptic and non-synaptic) and calcium-transporter-encoding genes to the risk of SCZ [3].
In this work, we study the contributions of the intrinsic processes of L5PCs to the generation and maintenance of delta oscillations using biophysically detailed modeling. We employ models of single L5PCs and networks of coupled L5PCs [4]. The single-cell models are multi-compartmental models that include a description of Ca2+ dynamics and Hodgkin-Huxley-type kinetics for many types of ion channels. The network model [4] includes a description of L5PC-to-L5PC glutamatergic synapses. We employ a reduced version of this model [5] to speed up the simulations. We modify the parameters of these models in a way that mimics the small effects expected of common variants associated with SCZ [6]. We show that the L5PC network gain and the responses of the network to delta oscillations are altered by variants of many SCZ-associated ion-channel- and Ca2+-transporter-encoding genes. In a similar fashion, we study the effects of differential gene expression by varying the conductances of the ion-channel species corresponding to genes whose expression in blood-sample data of SCZ patients deviated from that of healthy controls. Our results deepen the understanding of altered delta power in SCZ patients and could ultimately aid the development of novel treatments for the disease.
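To illustrate how such variant-like parameter changes can be imposed on a conductance-based model, the sketch below scales the conductances of NEURON's built-in Hodgkin-Huxley mechanism by small factors; the mechanism and the 5% scaling are illustrative placeholders, not the actual channel models or effect sizes used in [4–6].

```python
from neuron import h

h.load_file('stdrun.hoc')
soma = h.Section(name='soma')
soma.insert('hh')  # NEURON's built-in Hodgkin-Huxley mechanism

def apply_variant(sec, scaling):
    """Scale channel conductances by small factors, mimicking the weak
    effect of a common variant (scaling values are hypothetical)."""
    for seg in sec:
        seg.hh.gnabar *= scaling.get('gnabar', 1.0)
        seg.hh.gkbar *= scaling.get('gkbar', 1.0)

apply_variant(soma, {'gkbar': 0.95})  # e.g. a 5% downscaled K+ conductance

stim = h.IClamp(soma(0.5))            # current step to probe excitability
stim.delay, stim.dur, stim.amp = 10, 100, 0.1
h.finitialize(-65)
h.continuerun(150)
```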
References
1. Neske GT: The slow oscillation in cortical and thalamic networks: mechanisms and functions. Front Neural Circuits 2015, 9:88.
2. Larkum M: A cellular mechanism for cortical associations: an organizing principle for the cerebral cortex. Trends Neurosci 2013, 36(3):141–151.
3. Ripke S, Sanders AR, Kendler KS, Levinson DF, Sklar P, Holmans PA, Lin DY, Duan J, Ophoff RA, Andreassen OA et al.: Genome-wide association study identifies five new schizophrenia loci. Nat Genet 2011, 43:969–976.
4. Hay E, Segev I: Dendritic excitability and gain control in recurrent cortical microcircuits. Cereb Cortex 2015, 25(10):3561–3571.
5. Mäki-Marttunen T, Halnes G, Devor A, Metzner C, Dale AM, Andreassen OA, Einevoll GT: Step-wise model fitting accounting for high-resolution spatial measurements: construction of a layer V pyramidal cell model with reduced morphology. BMC Neuroscience 2016, 17(Suppl 1):P165.
6. Mäki-Marttunen T, Halnes G, Devor A, Witoelar A, Bettella F, Djurovic S, Wang Y, Einevoll GT, Andreassen OA, Dale AM: Functional effects of schizophrenia-linked genetic variants on intrinsic single-neuron excitability: a modeling study. Biological Psychiatry: Cognitive Neuroscience and Neuroimaging 2016, 1:49–59.
P66 Biophysical modeling of single-neuron contributions to ECoG and EEG signals
Solveig Næss1,2, Torbjørn V Ness3, Geir Halnes3, Eric Halgren4, Anders M Dale4 and Gaute T Einevoll3,5
1Department of Informatics, University of Oslo, Oslo, Norway; 2Simula-UiO-UCSD Research and PhD (SUURPh) training program, Oslo, Norway; 3Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway; 4Department of Neuroscience and Radiology, School of Medicine, UC San Diego, CA, USA; 5Department of Physics, University of Oslo, Oslo, Norway
Correspondence: Solveig Næss (solvenae@ifi.uio.no)
BMC Neuroscience 2017, 18(Suppl 1):P66
Electroencephalography (EEG), i.e., recordings of electrical potentials at the scalp, and electrocorticography (ECoG), i.e., potentials recorded on the cortical surface, are two prominent techniques probing brain activity at the systems level. Despite their long history and widespread use, the proper interpretation of these brain signals in terms of the biophysical activity in underlying neurons (nerve cells) and neuronal networks is still lacking. Present-day analysis is predominantly statistical and limited to identification of phenomenological signal generators without a clear biophysical interpretation. New biophysics-based analysis methods are thus needed to take full advantage of these brain-imaging techniques [1].
Here we used biophysical modeling based on morphologically detailed multicompartmental neuron models to explore single-neuron contributions to ECoG and EEG signals, and in particular the feasibility of using the so-called current-dipole approximation to predict these signals [2]. Specifically, we used the open-source Python package LFPy [3], which builds on NEURON [4] and is based on well-established volume-conductor theory for numerical calculations of extracellular potentials. The LFPy package was supplemented with new Python tools for calculating the current-dipole moment of a neuron, so that the current-dipole approximation can be applied to predict ECoG and EEG signals. The current-dipole approximation was explored in the inhomogeneous four-concentric-spheres head model [5] and compared with results obtained with the finite element method [6].
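At its core, the current-dipole approximation rests on the position-weighted sum of the transmembrane currents; a minimal sketch (function names are ours, and the far-field formula assumes an infinite homogeneous medium, without the boundary corrections of the four-spheres model [5]) is:

```python
import numpy as np

def current_dipole_moment(imem, positions):
    """Current-dipole moment p(t) = sum_k I_k(t) r_k from the
    transmembrane currents of a multicompartment neuron model.
    imem: (n_compartments, n_timesteps); positions: (n_compartments, 3)."""
    return positions.T @ imem  # (3, n_timesteps)

def dipole_potential(p, r, sigma=0.3):
    """Far-field potential of a current dipole at displacement r (m) in an
    infinite homogeneous medium of conductivity sigma (S/m):
    phi = p . r / (4 pi sigma |r|^3)."""
    r = np.asarray(r, dtype=float)
    return (r @ p) / (4 * np.pi * sigma * np.linalg.norm(r) ** 3)
```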
When comparing cortical-cell contributions to EEG and ECoG signals computed with the current-dipole approximation to results from the full model, which explicitly includes all transmembrane currents, we find that the current-dipole approximation is applicable for modeling EEG signals. This allows for a drastic simplification of future biophysics-based computation of EEG signals from cortical cell populations. However, we find that the current-dipole approximation is not generally applicable for computing ECoG signals.
References
1. Einevoll GT, Kayser C, Logothetis NK, Panzeri S: Modelling and analysis of local field potentials for studying the function of cortical circuits. Nat Rev Neurosci 2013, 14:770-785.
2. Hämäläinen M, Hari R, Ilmoniemi RJ, Knuutila J, Lounasmaa OV: Magnetoencephalography – theory, instrumentation, and applications to noninvasive studies of the working human brain. Rev Mod Phys 1993, 65:413–505.
3. LFPy [lfpy.github.io]
4. NEURON [www.neuron.yale.edu]
5. Srinivasan R, Nunez PL, Silberstein RB: Spatial filtering and neocortical dynamics: estimates of EEG coherence. IEEE Trans Biomed Eng 1998, 45:814–826.
6. Larson MG, Bengzon F: The Finite Element Method: theory, implementations and applications. Heidelberg: Springer; 2013.
P67 Extracellular diffusion can introduce errors in current source density estimates
Geir Halnes1, Tuomo Mäki-Marttunen2, Klas H Pettersen3,4,Ole A Andreassen2, Gaute T Einevoll1,5
1Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway; 2NORMENT, Institute of Clinical Medicine, University of Oslo, Oslo, Norway; 3Letten Centre and Glialab, Department of Molecular Medicine, Inst. of Basic Medical Sciences, University of Oslo, Oslo, Norway; 4Centre for Molecular Medicine Norway, University of Oslo, Oslo, Norway; 5Department of Physics, University of Oslo, Oslo, Norway
Correspondence: Geir Halnes (geir.halnes@nmbu.no)
BMC Neuroscience 2017, 18(Suppl 1):P67
A standard way to study neuronal activity is to record the local field potential (LFP) in the extracellular space (ECS) surrounding active neurons. Theoretical methods, such as current source density (CSD) theory, can then be used to infer the distribution of neuronal current sources from the recorded potentials. When estimating the CSD, several assumptions are made. Typically, one assumes a spatially homogeneous neuronal activity level and a constant extracellular conductivity. Another important assumption is that ionic diffusion in the ECS has a negligible impact on the LFP, so that the recorded potentials exclusively reflect the underlying cellular current sources [1].
As the charge carriers in brain tissue are ions, diffusion and electrical migration are in reality interdependent processes. The assumption that diffusion has a negligible impact on the LFP can therefore be challenged, especially under conditions where concentration gradients in the ECS become large. Large extracellular concentration gradients are symptomatic of many pathological conditions, but periods of intense neural signalling can evoke concentration shifts of several millimolar even in non-pathological cases [2].
By means of biophysical modelling, we here explore the error introduced in CSD estimates by neglecting diffusion currents. We use the previously developed electrodiffusive Kirchhoff-Nernst-Planck formalism [3], which allows us to simulate the dynamics of the electrical potential and of the ion concentrations in the ECS surrounding a neural population [4]. In this in silico scenario, the true CSD (i.e., the spatiotemporal distribution of neuronal transmembrane currents in the model) is known and can be compared to the conventional CSD estimate (based only on the LFP) and to an alternative CSD estimate that also accounts for diffusion-dependent effects. We find that the electrodiffusive CSD estimate accurately predicts the true CSD, while the conventional CSD estimate deviates dramatically from the true CSD when extracellular concentration gradients become large, and can lead to the prediction of spurious current sources (Figure 1).
Figure 1. Temporally averaged CSD estimates at different locations (n = 1 is the bottom and n = 15 is the top of a cortical column). Black line: True CSD. Blue line: Conventional CSD estimate (double spatial derivative of the LFP). Red line: Improved, electrodiffusive CSD estimate. Green line: CSD correction imposed by diffusion
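For illustration, the conventional estimate shown in Figure 1 (blue line) amounts to a double spatial derivative of the LFP; a minimal sketch, assuming equidistant recording depths and a homogeneous conductivity, is:

```python
import numpy as np

def conventional_csd(lfp, h, sigma=0.3):
    """Conventional CSD estimate from LFPs recorded at equidistant depths:
    CSD ~ -sigma * d2(phi)/dz2. lfp: (n_channels, n_timesteps) in V,
    h: electrode spacing (m), sigma: extracellular conductivity (S/m).
    This estimator neglects diffusion currents, which is exactly the
    source of error examined in this study."""
    d2phi = (lfp[:-2] + lfp[2:] - 2 * lfp[1:-1]) / h ** 2
    return -sigma * d2phi  # defined on the interior channels
```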
References
1. Gratiy SL, Halnes G, Koch C, Hawrylycz MJ, Einevoll GT, Anastassiou CA: The theory of current-source density analysis in brain tissue. European Journal of Neuroscience 2017. doi: 10.1111/ejn.13534.
2. Kofuji P, Newman EA: Potassium buffering in the central nervous system. Neuroscience 2004, 129(4):1043–1054.
3. Halnes G, Østby I, Pettersen KH, Omholt S, Einevoll GT: Electrodiffusive model for astrocytic and neuronal ion concentration dynamics. PLoS Comput Biol 2013, 9(12):e1003386.
4. Halnes G, Mäki-Marttunen T, Keller D, Pettersen KH, Andreassen OA, Einevoll GT: Effect of Ionic Diffusion on Extracellular Potentials in Neural Tissue. PLoS Comput Biol. 2016, 12(11): e1005193.
P68 Estimation of metabolic oxygen consumption from optical measurements in cortex
Marte J. Sætra1, Anders M Dale2, Anna Devor2, Gaute T Einevoll1,3
1Department of Physics, University of Oslo, Oslo, 0316, Norway; 2Department of Neurosciences, UC San Diego, La Jolla, California, 92093-0021, USA; 3Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, 1433, Norway
Correspondence: Marte J. Sætra (m.j.satra@fys.uio.no)
BMC Neuroscience 2017, 18(Suppl 1):P68
The cerebral metabolic rate of oxygen (CMRO2) is an important parameter for understanding how the brain responds to changes in metabolism and oxygen delivery. Such changes are associated with clinical conditions like stroke and Alzheimer's disease. An estimate of the oxygen consumption rate is also important for the interpretation of functional magnetic resonance imaging. Despite the obvious need for a measure of O2 consumption, there is no standardized way of measuring it. This is true both for the steady-state situation and for measuring dynamic CMRO2 changes.
Common practice varies and relies on measurements of both blood flow and oxygenation. The CMRO2 parameter is estimated by analysing the measurements within the context of an appropriate mathematical model. All methods of estimating CMRO2 are essentially “solving a mass balance equation where CMRO2 is equated to the difference of oxygen flowing into a region of interest and the oxygen flowing out” [1]. Estimating CMRO2 is therefore a complex task where inaccuracies of both experimental methods and mathematical models need to be evaluated.
Here, we present a more direct method for estimating CMRO2. It enables us to extract the CMRO2 parameter from a single quantity only, by fitting Poisson's equation to measurements of O2 partial pressure (pO2) around vessels. Earlier attempts to do the same have been limited by the inability to measure tissue pO2 with adequate resolution. The recent development of two-photon phosphorescence lifetime microscopy has allowed us to overcome this limitation [1].
Using pO2 measurements of this kind, we have studied the Krogh method for steady-state CMRO2 [2]. For the Krogh method we assume an axisymmetric, cylindrical geometry of the vessel-tissue region [3]. This assumption leads to a model describing pO2 as a function of the distance to the vessel. The Krogh method, previously used mostly to study muscle, gives disconcerting results when applied to data from brain tissue [2]. The results indicate that the method is not robust.
We introduce the Laplace method as an alternative way of estimating CMRO2. In this method, CMRO2 is estimated from the second spatial derivative of the pO2 measurements [2]. In order to validate the method, we construct datasets with known ground truth. Estimates of CMRO2 from ground-truth model data suggest that the Laplace method is a more useful tool for measuring O2 consumption than the Krogh method [2].
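A minimal sketch of the Laplace estimator, assuming pO2 sampled on a regular one-dimensional grid and placeholder values for the diffusion coefficient and solubility, could read:

```python
import numpy as np

def cmro2_laplace(po2, dx, D=4e-5, alpha=1.39e-15):
    """Laplace-method CMRO2 estimate: at steady state, Poisson's equation
    gives CMRO2 = D * alpha * laplacian(pO2). A 1-D second derivative of
    pO2 on a regular grid (spacing dx) stands in for the full Laplacian;
    D and alpha are placeholder constants that must be taken from the
    literature for real data."""
    lap = (po2[:-2] + po2[2:] - 2 * po2[1:-1]) / dx ** 2
    return D * alpha * lap
```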
References
1. Sakadzic S, Yaseen MA, Jaswal R, Roussakis E, Dale AM, Buxton RB, Vinogradov SA, Boas DA, Devor A: Two-photon microscopy measurement of cerebral metabolic rate of oxygen using periarteriolar oxygen concentration gradients. Neurophotonics 2016, 3(4): 045005.
2. Sætra MJ: Estimation of metabolic oxygen consumption from optical measurements in cortex. Master’s thesis, University of Oslo 2016. [http://urn.nb.no/URN:NBN:no-54857]
3. Goldman D: Theoretical models of microvascular oxygen transport to tissue. Microcirculation 2008, 15(8): 795–811.
P69 Computing Brain Signals: Concurrent simulation of network activity, extracellular electric potentials and magnetic fields
Espen Hagen1, Solveig Næss2, Torbjørn V. Ness3, Gaute T. Einevoll1,3
1Department of Physics, University of Oslo, Oslo, 0316, Norway; 2Department of Informatics, University of Oslo, Oslo, 0316, Norway; 3Faculty of Science and Technology, Norwegian University of Life Sciences, Aas, 1433, Norway
Correspondence: Espen Hagen (espen.hagen@fys.uio.no)
BMC Neuroscience 2017, 18(Suppl 1):P69
Recordings of extracellular electrical, and later also magnetic, brain signals have been the dominant technique for measuring brain activity for almost a century. The interpretation of such signals is nontrivial [1–3], however, as the measured signals result from both local and remote neuronal activity. The recorded extracellular potentials in general stem from a complicated sum of contributions from transmembrane currents of neurons near the measurement site, while the corresponding intra- and extracellular electric currents generate the brain's magnetic field [4]. This calls for forward models grounded in the biophysics of the different measurement modalities [3], in which the underlying sources are faithfully represented. The initial release of the Python package LFPy ([5], LFPy.github.io) incorporated a now commonplace and well-established scheme for predicting extracellular potentials of individual neurons with arbitrary levels of biological detail. LFPy relies on the NEURON simulation environment ([6], neuron.yale.edu) to compute transmembrane currents of multicompartment neurons in conjunction with an electrostatic forward model [7]. We have now extended its functionality to populations and networks of multicompartment neurons with concurrent calculations of extracellular potentials and current-dipole moments [8]. The current-dipole moments are used to compute non-invasive measures of neuronal activity, e.g., electroencephalogram (EEG) scalp potentials, when combined with an appropriate volume-conductor model. One such model is the 4-sphere model, which includes the different electric conductivities of brain, cerebrospinal fluid, skull and scalp [9]. In addition, the current-dipole moments can be used for magnetoencephalography (MEG) signal prediction [4,9]. The version of LFPy presented here is thus a true multi-scale simulator, capable of simulating electric neuronal activity at the levels of cell-membrane dynamics, individual synapses, neurons, networks, extracellular potentials within neuronal populations, and macroscopic EEG and MEG signals. The present implementation is suitable for parallel execution on HPC facilities.
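The point-source forward scheme underlying LFPy [7] can itself be stated in a few lines; the sketch below assumes an infinite homogeneous volume conductor and illustrates the principle rather than the LFPy API:

```python
import numpy as np

def extracellular_potential(imem, positions, electrode, sigma=0.3):
    """Point-source forward model for the extracellular potential:
    phi(r_e, t) = (1 / (4 pi sigma)) * sum_k I_k(t) / |r_e - r_k|.
    imem: (n_compartments, n_timesteps) transmembrane currents,
    positions: (n_compartments, 3) compartment midpoints,
    electrode: (3,) recording position, sigma: conductivity (S/m)."""
    dist = np.linalg.norm(positions - electrode, axis=1)
    return (imem / dist[:, None]).sum(axis=0) / (4 * np.pi * sigma)
```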
Acknowledgements
This work is supported by the Norwegian Ministry of Education and Research through the Research Council of Norway (NFR; COBRA, CINPLA, and NOTUR NN4661K) and the SUURPh Programme, and by EU Grant 604102 (HBP).
References
1. Pettersen, KH, Lindén, H, Dale, AM, Einevoll, GT: Extracellular spikes and CSD, in Brette, R. and Destexhe, A. (eds.) Handbook of Neural Activity Measurement. Cambridge: Cambridge University Press; 2012
2. Buzsaki G, Anastassiou CA, Koch C: The origin of extracellular fields and currents – EEG, ECoG, LFP and spikes. Nat Rev Neurosci 2012, 13:407–419
3. Einevoll GT, Kayser C, Logothetis NK, Panzeri S: Modelling and analysis of local field potentials for studying the function of cortical circuits. Nat Rev Neurosci 2013, 14:770–785
4. Hämäläinen M, Hari R, Ilmoniemi RJ, Knuutila J, Lounasmaa OV: Magnetoencephalography—theory, instrumentation, and applications to noninvasive studies of the working human brain. Rev Mod Phys 1993, 65:413–487
5. Lindén H, Hagen E, Leski S, Norheim E, Pettersen KH, Einevoll GT: LFPy: a tool for biophysical simulation of extracellular potentials generated by detailed model neurons. Front Neuroinf 2014, 7(41):1–15
6. Hines ML, Davison AP, Muller E: NEURON and Python. Front Neuroinf 2009, 3(1):1–12
7. Holt G, Koch C: Electrical Interactions via the Extracellular Potential Near Cell Bodies. J Comp Neurosci 1999, 6:169–184
8. Lindén H, Pettersen KH, Einevoll GT: Intrinsic dendritic filtering gives low-pass power spectra of local field potentials. J Comp Neurosci 2010, 29:423–444
9. Nunez PL, Srinivasan R: Electric Fields of the Brain: The neurophysics of EEG, 2nd edition. Oxford: Oxford University Press; 2006
P70 Integration of orientation and spatial frequency in a model of visual cortex
Alina Schiffer1, Axel Grzymisch1, Malte Persike2, Udo Ernst1
1Computational Neuroscience Lab, Institute for Theoretical Physics, Univ. of Bremen, Bremen, 28359, Germany; 2Department of Psychology, Methods Section, Johannes Gutenberg University Mainz, Mainz, 55122, Germany
Correspondence: Alina Schiffer (alina@neuro.uni-bremen.de)
BMC Neuroscience 2017, 18(Suppl 1):P70
In the visual system, complex scenes have to be integrated from simple local features into global and meaningful percepts. One basic feature-integration process, needed e.g. to form the shapes of objects, is contour integration. Models studying this process usually focus on orientation alignment as the defining feature of a contour; however, experimental work has shown that other features, such as spatial frequency (SF), also strongly shape contour integration. In our framework, we include SF as a second cue to gain deeper insight into the mechanisms of contour integration, hypothesizing that similar SFs will be integrated more strongly than dissimilar ones.
We constructed a structurally simple cortical model with population dynamics described by simplified Wilson-Cowan equations. The model was presented with stimuli consisting of an ensemble of oriented Gabor patches with different orientations and spatial frequencies, into which contours of aligned and/or SF-homogeneous patches were embedded. Feature integration is performed by recurrent interactions between populations with receptive fields (RFs) tuned to the orientation and SF of localized stimulus patches. Interactions comprise excitatory and inhibitory couplings, with inhibition providing normalization and being independent of orientation preference. Excitatory connections realize an association field [1] specifying the linking strength for elements with different properties: in particular, we implement strong links between collinear and co-circularly aligned RFs, and we assume that interaction strength increases exponentially with decreasing SF difference (i.e., “what fires together wires together”).
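A minimal sketch of such dynamics is given below; the sigmoid nonlinearity, the functional form of the association-field weight, and all constants are illustrative assumptions, not the model's actual parameterization.

```python
import numpy as np

def coupling(dtheta, dsf, k_theta=4.0, tau_sf=0.5):
    """Hypothetical association-field weight: strongest for aligned
    orientations (dtheta, radians) and similar spatial frequencies
    (dsf, octaves), with exponential decay in the SF difference."""
    return np.exp(k_theta * (np.cos(2 * dtheta) - 1)) * np.exp(-abs(dsf) / tau_sf)

def wilson_cowan_step(r, W, inp, dt=1e-3, tau=10e-3):
    """One Euler step of simplified Wilson-Cowan rate dynamics:
    tau dr/dt = -r + f(W r + input), with a sigmoid gain function f."""
    f = lambda x: 1.0 / (1.0 + np.exp(-x))
    return r + dt / tau * (-r + f(W @ r + inp))
```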
By quantitatively reproducing the results of multiple psychophysical studies [2], we are able to provide a unifying account of contour integration in a variety of different stimulation paradigms. Our model suggests a novel mechanism involved in feature integration, namely spatial-frequency-dependent interactions, which accounts for previously unexplained findings (see Figure 1). It thus goes beyond contour integration based on orientation information alone and helps to create a more comprehensive understanding of computation in the visual system.
Figure 1. Comparison of model (solid lines) and experimental psychometric curves (dashed lines) for contour detection in a 2-AFC design. Since the model is not subject to noise, we expect its performance to be equal to or higher than that of human observers. A: Contour defined by orientation alignment only (same SF for all Gabors): performance decreases with increasing tilt angle deviating from perfect alignment. B: Contour defined by an SF shift between contour and background elements (random orientations for all Gabors): performance increases with increasing SF shift (green crosses: experiment). C: Contour defined by orientation alignment, with SFs of contour and background subject to different levels of random jitter (2 and 3 octaves width, light and dark blue, respectively): detection threshold decreases with increasing jitter. For jitter on the contour elements only (red), the target remains visible even for large tilt angles (prediction of the model confirmed by new experiments, unpublished data)
Acknowledgements
This work was supported by the BMBF (Bernstein Award Udo Ernst, grant no. 01GQ1106). Alina Schiffer was supported by the SMART START 2 Program.
References
1. Field DJ, Hayes S, Hess RF: Contour integration by the human visual system: Evidence for a local association field. Vision Res. 1993, 33:173–193.
2. Persike M, Meinhardt G: Cue combination anisotropies in contour integration: The role of lower spatial frequencies. Journal of Vision 2015a, 15(5):17; Persike M, Meinhardt G: Effects of spatial frequency similarity and dissimilarity on contour integration. PLoS One 2015b, 10(6):1–19.
P71 Performance-optimization guided distribution of attentional resources
Daniel Harnack, Udo A Ernst
Computational Neuroscience Lab, Institute for Theoretical Physics, University Bremen, Bremen, Germany
Correspondence: Daniel Harnack (daniel@neuro.uni-bremen.de)
BMC Neuroscience 2017, 18(Suppl 1):P71
In the visual system, attention improves information processing and is required to solve complex tasks such as shape detection and object recognition. On the neuronal level, it has been found that different task demands, given e.g. by the nature and specific combination of cues and cue validities, modulate response properties in many cortical areas in parallel [1]. Selective attention is assumed to be instrumental in orchestrating the flexible and efficient distribution of resources among brain areas to set up task-specific functional networks [2]. However, it is unclear how this process is organized on a functional level, and according to which principles computation is coordinated among different neuronal populations and visual areas. Here, we investigate task-specific attentional distribution in a simplified framework where a stimulus is processed by two or more neuronal populations (or visual areas) specialized in representing different features such as orientation or color. We assume the task is to detect a change in one of the stimulus features, while a cue is given that matches the changing feature with a certain probability (cue validity). Adhering to physiological constraints, attention is modeled as a bounded gain change on the populations' outputs to a higher-area decision population, while the total input to the decision population is normalized [3] (Fig. 1A). Treating distributed attention as an optimization problem, we compute, by analytical gradient descent, the gain factors that minimize error rates for the different populations engaged in the task. We find that the optimal gain factors depend on cue validity and change saliency, with attention also boosting the populations representing non-cued features if cue validity is below 100% and change saliency is high. Furthermore, when attention spreads to non-cued features, we find that a multitude of attentional distributions exists that yield the same optimal performance (Fig. 1B). Our results have important implications for empirical studies: first, we provide a first-principles explanation, in a minimal framework, of attentional modulation spreading to non-cued feature dimensions or attributes. Second, the dependence of optimal modulation strength on task parameters and the degenerate nature of solutions in part of the parameter space imply that attention-related gain changes observed in animal studies might not be constant, but will change over time if, e.g., cue validity is manipulated and perceptual learning takes place.
Figure 1. A: Schematic of the model setup with a two-feature stimulus. B: In the white region, optimal performance is achieved by directing maximal attention towards the cued feature and none to the non-cued one. In the gray region, it is optimal to also attend to the non-cued feature. Here, solutions are degenerate such that a multitude of attentional configurations leads to the same optimal performance. The probability distribution of modulation differences illustrates this for one exemplary parameter set
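The sketch below illustrates the optimization principle on a toy objective: a hypothetical error-rate proxy, with a Gaussian psychometric link and normalized, gain-modulated inputs, is minimized over bounded gains by projected finite-difference gradient descent. All functional forms and parameter values are stand-ins for the analytical treatment in the study.

```python
import numpy as np
from scipy.stats import norm

def error_rate(g, validity, dprime):
    """Toy error-rate proxy: feature i changes with probability
    validity[i]; the decision stage receives the normalized,
    gain-modulated sensitivity g_i * dprime_i / sum(g), and detection
    probability follows a Gaussian psychometric link."""
    eff = g * dprime / g.sum()
    return np.sum(validity * (1.0 - norm.cdf(eff)))

def optimal_gains(validity, dprime, lo=1.0, hi=2.0, lr=0.05, steps=2000):
    """Projected gradient descent on the error rate with bounded gains."""
    g = np.full(len(validity), lo)
    for _ in range(steps):
        grad = np.zeros_like(g)
        for i in range(len(g)):  # finite-difference gradient
            e = np.zeros_like(g); e[i] = 1e-4
            grad[i] = (error_rate(g + e, validity, dprime)
                       - error_rate(g - e, validity, dprime)) / 2e-4
        g = np.clip(g - lr * grad, lo, hi)
    return g

# e.g. a cued feature with 80% validity vs. a non-cued one, equal saliency
print(optimal_gains(np.array([0.8, 0.2]), np.array([1.5, 1.5])))
```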
Acknowledgements
This research was funded by the BMBF (Bernstein Award Udo Ernst, Grant 01GQ1106).
References
1. Siegel M, Buschman TJ, Miller EK: Cortical information flow during flexible sensorimotor decisions. Science 2015, 348:1352–1355.
2. Harnack D, Ernst UA, Pawelzik KR: A model for attentional information routing through coherence predicts biased competition and multistable perception. J Neurophysiol 2015, 114:1593–1605.
3. Reynolds JH, Heeger DJ: The normalization model of attention. Neuron 2009, 61:168–185.
P72 Feature integration with critical dynamics in cortical subnetworks
Nergis Tomen, Udo Ernst
Computational Neuroscience Lab, Institute for Theoretical Physics, University of Bremen, 28359, Bremen, Germany
Correspondence: Nergis Tomen (nergis@neuro.uni-bremen.de)
BMC Neuroscience 2017, 18(Suppl 1):P72
Recent experimental and theoretical work increasingly suggests that cortical neurons operate close to a critical state, which describes a phase transition from chaotic to ordered dynamics and optimizes multiple aspects of information processing (e.g. [1,2]). However, although critical dynamics have been demonstrated in recordings of spontaneously active cortical neurons [3], the link between criticality and active cortical computation remains largely unexplored. Establishing this link requires addressing major conceptual challenges, namely making abstract complexity measures work in realistic computational settings, and considering strongly driven systems with high firing rates and networks with structured connectivity instead of homogeneous, spontaneously active networks.
In our work, we focus on visual feature integration as a prototypical and prominent example of cortical computation. Visual feature integration refers to neural processes which link localized image information into more global representations such as contours, shapes, and objects. We study feature integration in a figure-ground segregation task, in which cortical subnetworks operate close to the critical state when part of a visual stimulus matches a ‘figure’ that is to be detected by the visual system. Within the simple but analytically well-described framework of the Ernst-Herrmann-Eurich (EHE) model, we embed a large number of figures into a recurrently coupled network. Out of the N units representing each figure, we allow n units to represent multiple figures at the same time, and we characterize the network dynamics for different stimuli.
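For readers unfamiliar with the framework, the sketch below simulates a homogeneous EHE-type network and collects avalanche sizes; the drive statistics and coupling value are illustrative, and the figure-structured connectivity of our model is not included.

```python
import numpy as np

def ehe_avalanche_sizes(N=100, alpha=0.0095, n_inputs=50000, seed=0):
    """Homogeneous EHE-type avalanche model (all values illustrative):
    N non-leaky units with states u in [0, 1); each external input
    drives one random unit, and every spike adds alpha to all other
    units, possibly triggering further spikes in the same avalanche.
    alpha * (N - 1) < 1 keeps the dynamics subcritical so avalanches
    terminate; near alpha = 1/(N - 1) the size distribution approaches
    a power law."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(0.0, 1.0, N)
    sizes = []
    for _ in range(n_inputs):
        u[rng.integers(N)] += rng.uniform(0.0, 0.1)  # external drive
        size = 0
        while True:
            spiking = u >= 1.0
            n = int(spiking.sum())
            if n == 0:
                break
            size += n
            u[spiking] -= 1.0          # reset spiking units
            u[~spiking] += alpha * n   # global coupling to the rest
        if size:
            sizes.append(size)
    return np.array(sizes)
```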
We find that presenting a visual stimulus with a target figure dynamically organizes the network into two parts: one with critical dynamics, encoding the ensemble of features making up the figure, and one with subcritical dynamics, encoding the background elements. We show that both the figure representation in the oscillatory dynamics of the system and the task performance in a 2AFC scenario are maximized near the critical point. Adding inhibitory interactions between neurons encoding different figures ensures that the coupling strength for which the network is critical is robust against changes in n (Figure 1), in the network size, and in the number of figures in the network.
Our model extends the idea of criticality being optimal for computation to inhomogeneous systems, establishes links to spatial computation performed in the visual system and predicts that local subnetworks can display supercritical activity, contained by inhibition, while the cortex at large is poised at subcritical regimes.
Figure 1. The Kolmogorov–Smirnov (KS) statistic, quantifying the distance of spike statistics from a power-law, for an excitatory network (A) and for a network with both excitation and inhibition (B), as a function of the coupling strength and the overlap between figures, n. In the white regions, the network starts to exhibit infinite avalanches. White circles mark where the KS statistic is lowest and the red line shows our theoretical approximation for the critical point. We find that for the network with inhibition, the critical coupling strength does not change as n increases until n = N
Acknowledgements
This research project was funded by the BMBF (Bernstein Award Udo Ernst, grant no. 01GQ1106).
References
1. Shew WL, Yang H, Petermann T, Roy R, Plenz D: Neuronal avalanches imply maximum dynamic range in cortical networks at criticality. J Neurosci 2009, 29:15595–15600.
2. Tomen N, Rotermund D, Ernst U: Marginally subcritical dynamics explain enhanced stimulus discriminability under attention. Front Syst Neurosci 2014, 8:151.
3. Beggs JM, Plenz D: Neuronal avalanches in neocortical circuits. J Neurosci 2003, 23:11167–11177.