
26th Annual Computational Neuroscience Meeting (CNS*2017): Part 3

BMC Neuroscience 2017, 18(Suppl 1):60

Published: 18 August 2017

P156 Multiscale modeling of ischemic stroke with the NEURON reaction-diffusion module

Adam J. H. Newton1,2, Alexandra H. Seidenstein2,3, Robert A. McDougal1, William W. Lytton2,4

1Department of Neuroscience, Yale University, New Haven, CT 06520, USA; 2Department of Physiology & Pharmacology, SUNY Downstate, Brooklyn, NY 11203, USA; 3NYU School of Engineering, 6 MetroTech Center, Brooklyn, NY 11201, USA; 4Kings County Hospital Center, Brooklyn, NY 11203, USA

Correspondence: Adam J. H. Newton

BMC Neuroscience 2017, 18 (Suppl 1):P156

Ischemic stroke is fundamentally a multiscale phenomenon [1]. Occlusion of blood vessels in the brain triggers a cascade of changes including: 1. synaptic glutamate release, related to excitotoxicity; 2. elevated extracellular potassium, leading to spreading depression; 3. cell swelling, reducing the extracellular volume and diffusion; 4. production of reactive oxygen species, which give rise to inflammation. These cascades occur over multiple time-scales, with the initial rapid changes in cell metabolism and ionic concentrations triggering several damaging agents that may ultimately lead to cell death. Tissue affected by ischemic stroke is divided into three regions: 1. a core where cells suffer irreparable damage and death; 2. a penumbra where cells may recover with reperfusion; 3. a further region of edema where spontaneous recovery is expected. Multiscale and multiphysics modeling is essential to capture this cascade. Such modeling requires coupling complex intracellular molecular alterations with electrophysiology, and consideration of network properties in the context of bulk tissue alterations mediated by extracellular diffusion.

Spreading depression is a wave of depolarization that propagates through tissue and causes cells in the penumbra to expend energy on repolarization, increasing their vulnerability to cell death. We modeled the spreading depression seen in ischemic stroke by coupling a detailed biophysical model of cortical pyramidal neurons equipped with Na+/K+-ATPase pumps with reaction-diffusion of ions in the extracellular space (ECS). A macroscopic view of the ECS is characterised by its tortuosity (a reduction in the diffusion coefficient due to obstructions) and its free volume fraction (typically ~20%). The addition of reactions allows the ECS to be modeled as an active medium, e.g. with glial buffering of K+. Ischemia impedes ATP production, which results in failure of the Na+/K+-ATPase pump and a rise in extracellular K+. Once extracellular K+ exceeds a threshold, it causes neurons to depolarize, further increasing extracellular K+.
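The feedback loop described above can be illustrated with a toy one-dimensional reaction-diffusion model of extracellular K+. This is only a sketch: the threshold, release and pump rates, and grid parameters below are illustrative assumptions, not values from the actual model.

```python
import numpy as np

# Toy 1D extracellular K+ model with tortuosity-scaled diffusion,
# threshold-triggered release (depolarized neurons) and pump clearance.
D = 2.5e3            # free K+ diffusion coefficient, um^2/s (~2.5e-5 cm^2/s)
lam = 1.6            # tortuosity: effective D* = D / lam**2
alpha = 0.2          # extracellular volume fraction (~20%)
Dstar = D / lam**2

nx, dx, dt = 200, 5.0, 1e-3          # grid spacing in um, time step in s
k = np.full(nx, 3.0)                 # resting extracellular [K+], mM
k[:10] = 40.0                        # ischemic core: elevated K+

def step(k):
    # explicit finite-difference diffusion (interior points only;
    # the two end points evolve by reactions alone in this sketch)
    lap = np.zeros_like(k)
    lap[1:-1] = (k[2:] - 2 * k[1:-1] + k[:-2]) / dx**2
    release = 20.0 * (k > 15.0)      # K+ efflux (mM/s) once depolarized
    pump = 0.5 * (k - 3.0)           # first-order clearance toward rest
    return k + dt * (Dstar * lap + (release - pump) / alpha)

for _ in range(10000):               # 10 s of simulated time
    k = step(k)
```

Cells pushed above the threshold by diffusing K+ start releasing K+ themselves, so the depolarized region expands, which is the qualitative signature of spreading depression.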

NEURON’s reaction-diffusion module NRxD [2] provides a platform where detailed neuron models can be embedded in a macroscopic model of tissue. We demonstrate this with a multiscale biophysical model of ischemic stroke in which the rapid intracellular changes are coupled with the slower diffusive signaling.


Research supported by NIH grant 5R01MH086638


1. Newton, AJH, and Lytton, WW: Computer modeling of ischemic stroke. Drug Discovery Today: Disease Models. 2017.

2. McDougal RA, Hines ML, Lytton WW: Reaction-diffusion in the NEURON simulator. Front. Neuroinform 2013, 7:28.

P157 Accelerating NEURON reaction-diffusion simulations

Robert A. McDougal1, William W. Lytton2,3

1Neuroscience, Yale University, New Haven, CT 06520, USA; 2Physiology & Pharmacology, SUNY Downstate Medical Center, Brooklyn, NY 11203, USA; 3Kings County Hospital, Brooklyn, NY 11203, USA

Correspondence: Robert A. McDougal

BMC Neuroscience 2017, 18 (Suppl 1):P157

A neuron’s electrical activity is governed not just by presynaptic activity, but also by its internal state. This state is a function of history, including prior synaptic input (e.g. cytosolic calcium concentration, protein expression in SCN neurons), cellular health, and routine biological processes. The NEURON simulator [1], like much of computational neuroscience, has traditionally focused on electrophysiology. For the past five years, NEURON has included NRxD, which provides standardized support for reaction-diffusion (i.e. intracellular) modeling [2], facilitating studies of the role of electrical-chemical interactions. The original reaction-diffusion support was written in vectorized Python, which offered limited performance; ongoing improvements have now significantly reduced run-times, making larger-scale studies more practical.

New accelerated reaction-diffusion methods are being developed as part of a separate NEURON module, crxd. This new module will ultimately be a fully compatible replacement for the existing NRxD module (rxd). Developing it as a separate module allows us to make it available to the community before it supports the full functionality of NRxD. The interface code for crxd remains in Python, but it now transfers model structure via ctypes to C code, which performs all run-time calculations; Python is no longer invoked during simulation. Dynamic code generation allows arbitrary reaction schemes to run at full compiled speed, and thread-based parallelization accelerates extracellular reaction-diffusion simulations.

Preliminary tests suggest an approximately 10x reduction in 1D run-time using crxd instead of the Python-based rxd. Like rxd, crxd uses the Hines method [3] for O(n) 1D reaction-diffusion simulations. Using 4 cores for extracellular diffusion currently reduces the runtime by a factor of 2.3. Additionally, using the crxd module simplifies setup relative to rxd-based simulations since it does not require installing scipy.
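On an unbranched cable the Hines method reduces to the classic Thomas tridiagonal solve, which is the source of the O(n) cost per time step. The sketch below is a generic illustration of that idea (not NEURON's implementation), using the solver for one implicit backward-Euler diffusion step per iteration:

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system in O(n): a = sub-, b = main,
    c = super-diagonal, d = right-hand side."""
    n = len(b)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                    # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):           # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# backward-Euler step for 1D diffusion: (I - dt*D*L) u_new = u_old
n, dt, D, dx = 100, 0.1, 1.0, 1.0
r = dt * D / dx**2
a = np.full(n, -r); a[0] = 0.0
c = np.full(n, -r); c[-1] = 0.0
b = np.full(n, 1 + 2 * r); b[0] = b[-1] = 1 + r   # no-flux boundaries
u = np.zeros(n); u[n // 2] = 1.0                  # initial point source
for _ in range(100):
    u = thomas(a, b, c, u)
```

Because the matrix never changes shape, the two O(n) sweeps replace a generic O(n^3) dense solve, which is the same structural trick Hines exploits on branched trees.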

Once crxd supports the entire documented NRxD interface and has been thoroughly tested, it will replace the rxd module and thus become NEURON’s default module for specifying reaction-diffusion kinetics.


Research supported by NIH R01 MH086638.


1. NEURON | for empirically based simulations of neurons and networks of neurons.

2. McDougal RA, Hines ML, Lytton WW: Reaction-diffusion in the NEURON simulator. Front. Neuroinform 2013, 7:28.

3. Hines M: Efficient computation of branched nerve equations. Int. J. Bio-Medical Computing 1984, 15:69–76.

P158 Computation of invariant objects in the analysis of periodically forced neural oscillators

Alberto Pérez-Cervera, Gemma Huguet, Tere M-Seara

Departament de Matemàtica Aplicada, Universitat Politècnica de Catalunya, Barcelona, E-08028, Spain

Correspondence: Alberto Pérez-Cervera

BMC Neuroscience 2017, 18 (Suppl 1):P158

Background oscillations, reflecting the excitability of neurons, are ubiquitous in the brain. Some studies have conjectured that information transmission between two oscillating neuronal groups is more effective when spikes sent by one population reach the other at its peaks of excitability [1]. In this context, the phase relationship between oscillating neuronal populations may have implications for neuronal communication between brain areas [2, 3]. The Phase Response Curve (PRC) of a neural oscillator measures the phase-shift resulting from perturbing the oscillator at different phases of the cycle. It provides useful information for understanding how phase-locking relationships between neural oscillators emerge, but only for weak perturbations, and it does not take amplitude into account.

In this work, we consider a population rate model [4] and perturb it with a time-dependent input. In order to study the phase-locking relationships that emerge, we use the stroboscopic map to perform a bifurcation analysis as a function of the amplitude and frequency of the perturbation. We observe the existence of bistable solutions for some regions of the parameters space, suggesting that, for a given input, populations may operate in different regimes. Furthermore, we apply powerful computational methods [5] to compute the invariant objects for the stroboscopic map, providing a framework that enlarges the PRC comprehension of the perturbative effects in the phase dynamics.
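The stroboscopic-map construction can be sketched on a toy periodically forced oscillator (a damped linear oscillator, not the rate model of [4]; all parameters are arbitrary): sampling the flow once per forcing period T turns phase-locked solutions into fixed points of a map, whose iterates can then be followed as amplitude and frequency vary.

```python
import numpy as np
from scipy.integrate import solve_ivp

gamma, A, Omega = 0.3, 1.0, 1.2      # damping, forcing amplitude, frequency
T = 2 * np.pi / Omega                # forcing period

def rhs(t, y):
    x, v = y
    return [v, -gamma * v - x + A * np.cos(Omega * t)]

def strobe(y):
    """One application of the stroboscopic map: flow for one period T."""
    sol = solve_ivp(rhs, (0, T), y, rtol=1e-9, atol=1e-12)
    return sol.y[:, -1]

y = np.array([0.5, 0.0])
orbit = [y]
for _ in range(60):                  # iterate the map
    y = strobe(y)
    orbit.append(y)
orbit = np.array(orbit)
```

For this dissipative system the map has a unique attracting fixed point (1:1 locking); in a bistable regime like the one reported in the abstract, different initial conditions would instead converge to different fixed points of the same map.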


1. Fries P: A mechanism for cognitive dynamics: neuronal communication through neuronal coherence. Trends in cognitive sciences 2005, 9(10):474–480.

2. Tiesinga PH, Sejnowski TJ: Mechanisms for phase shifting in cortical networks and their role in communication through coherence. Frontiers in human neuroscience 2010, 4:196.

3. Canavier CC: Phase-resetting as a tool of information transmission. Current opinion in neurobiology 2015, 31: 206–213.

4. Wilson HR, Cowan JD: Excitatory and inhibitory interactions in localized populations of model neurons. Biophysical journal 1972, 12.1:1–24.

5. Haro À, Canadell M, Figueras JL, Luque A, Mondelo JM: The Parameterization Method for Invariant Manifolds 2016. Springer.

P159 Computational model of spatio-temporal coding in CA3 with speed-dependent theta oscillation

Caroline Haimerl1,2, David Angulo-Garcia1,3, Alessandro Torcini1,3,4, Rosa Cossart1, Arnaud Malvache1

1Institut de Neurobiologie de la Méditerrannée (INMED), INSERM, UMR901, Aix-Marseille Univ, Marseille, France; 2Center of Neural Science, New York University, New York, NY, USA; 3Aix-Marseille Univ, INSERM, INS, Inst Neurosci Syst, Marseille, France; 4Laboratoire de Physique Théorique et Modélisation, CNRS UMR 8089, Université de Cergy-Pontoise, F-95300 Cergy-Pontoise Cedex, France

Correspondence: Caroline Haimerl

BMC Neuroscience 2017, 18 (Suppl 1):P159

Recent studies have demonstrated the capacity of hippocampal sequences associated with theta oscillation to encode spatio-temporal information. In particular, cells in CA1 become active sequentially in a stable unidirectional order during spontaneous run periods and under minimal external cues [1]. This sequential activity seems to integrate either the distance that the animal has run or the time that has elapsed, two related coding states that can be separated through the change in cellular dynamics with the animal's speed. Other studies indicate that these cell sequences depend on theta oscillation from the medial septum and may reflect input from CA3 [2–4].

Running speed of the animal has also been shown to influence theta oscillation frequency and amplitude. This oscillation could thereby carry the spatio-temporal input required to determine distance/time coding. Inspired by [2], we modeled a circular recurrent network of excitatory cells with short-term synaptic plasticity [5] and global inhibition. By applying speed-dependent theta oscillation, we reproduced the dynamics of spatio-temporal coding observed in experimental data and propose a mechanism for switching between the two coding states through a change in the integration of theta input. In particular, our firing rate model reproduces the sequence properties (recurrence, unidirectionality, sparse activity, memory) based on the network characteristics of CA3 and allows exploring the dynamics of the sequential activity. Simulations with this model show a non-trivial relationship between sequence slope and the frequency/amplitude of the oscillatory input: depending on the amplitude range of the theta oscillation, sequence dynamics can either be independent of speed (time coding) or linearly dependent on speed (distance coding). Therefore, the model proposes a network structure that could give rise to two basic, and possibly default, self-referenced coding states observed in the hippocampus.
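The ingredients of such a model can be sketched numerically as follows. This is a deliberately minimal ring network with local excitation, Tsodyks-Markram-style synaptic depression [5], global inhibition and an 8 Hz sinusoidal drive; all parameter values and the rectified-linear rate function are illustrative choices, not the fitted model.

```python
import numpy as np

N = 100
idx = np.arange(N)
# ring distance and Gaussian local excitation
d = np.minimum(np.abs(idx[:, None] - idx[None, :]),
               N - np.abs(idx[:, None] - idx[None, :]))
W = 2.0 * np.exp(-d**2 / (2 * 6.0**2))

tau_r, tau_d, U = 10.0, 300.0, 0.3   # ms; rate and depression time constants
g_inh, I_theta = 1.5, 0.5            # global inhibition gain, drive amplitude

r = np.zeros(N); r[:5] = 1.0         # seed a localized bump of activity
x = np.ones(N)                       # available synaptic resources
dt = 0.5                             # ms
for step in range(4000):             # 2 s of simulated time
    theta = I_theta * (1 + np.sin(2 * np.pi * 8e-3 * step * dt))  # 8 Hz
    inp = W @ (x * r) / N - g_inh * r.mean() + theta
    r += dt / tau_r * (-r + np.maximum(inp, 0))        # rate dynamics
    x += dt * ((1 - x) / tau_d - U * x * r)            # depression dynamics
```

Depression weakens recently active synapses, which is the generic mechanism by which a localized bump in such networks is pushed along the ring rather than staying put.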

This model provides insights into how a recurrent network operates in the absence of spatially specific input, but still allows for such input to modulate sequential activity towards place field representation [2]. We will next explore further the mechanisms of sequence generation and coding correlates in both theoretical and experimental work.


1. Villette V, Malvache A, Tressard T, Dupuy N, Cossart R: Internally Recurring Hippocampal Sequences as a Population Template of Spatiotemporal Information. Neuron 2015, 88(2):357–366.

2. Wang Y, Romani S, Lustig B, Leonardo A, Pastalkova E: Theta sequences are essential for internally generated hippocampal firing fields. Nature Neuroscience 2015 18(2):282–290.

3. Salz DM, Tiganj Z, Khasnabish S, Kohley A, Sheehan D, Howard MW, Eichenbaum H: Time Cells in Hippocampal Area CA3. J. Neurosci. 2016, 36:7476–7484.

4. Guzman SJ, Schlögl A, Frotscher M, Jonas P: Synaptic mechanisms of pattern completion in the hippocampal CA3 network. Science 2016, 353:1117–1123.

5. Mongillo G, Barak, O, Tsodyks M: Synaptic theory of working memory. Science 2008, 319:1543–1546.

P160 The effect of progressive degradation of connectivity between brain areas on the brain network structure

Kaoutar Skiker, Mounir Maouene

Department of mathematics and computer science, ENSAT, Abdelmalek Essaadi’s University, Tangier, Morocco

Correspondence: Kaoutar Skiker

BMC Neuroscience 2017, 18 (Suppl 1):P160

Neurodegenerative diseases such as Alzheimer's disease and schizophrenia are characterized by the progressive decline of cognitive functions such as memory, language and consciousness, which takes the form of memory loss, deficits in verbal and non-verbal communication, and so on. Cognitive deficits are interpreted in terms of damage to the network of brain areas, rather than damage to specific brain areas [1]. Many studies combining network theory and neuroimaging data have shown that brain networks, known to have a small-world structure [2], are disorganized in people with neurodegenerative diseases, indicating that the connectivity between brain areas is altered by the disease [1]. The disorganization of brain networks can be a consequence of the vulnerability of hub areas to disease or of abnormal connectivity between brain areas.

In this paper, we assess how the progressive degradation of connectivity between brain areas affects the brain network structure. We propose an algorithm building on the idea that the connections between brain areas are weakened as the disease progresses in time. We apply the algorithm to a functional connectivity matrix, freely available for download from the Brain Connectivity Toolbox, consisting of nodes representing brain areas and edges representing the functional links between brain areas [3]. The network is weighted, with weights wij reflecting the correlations between brain areas Ai and Aj. At a given threshold t, the new weights are given by wij - t, where t indicates the progression of the disease in time. The structure of the new network is analyzed using graph-theoretical measures including the clustering coefficient and path length. After damage, the functional brain network shows the properties of high clustering and low path length, indicating that the network retains the small-world structure necessary for proper cognitive functioning. The progressive degradation of links does not change the network's properties dramatically: the clustering coefficient is only slightly modified until t = 0.25 (see Figure 1 for the clustering coefficient). At this stage, the functional network shifts from high organization to randomness.
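The degradation scheme can be sketched as follows; here a random symmetric matrix stands in for the BCT connectivity matrix, and the clustering measure is the Onnela weighted clustering coefficient described in [3].

```python
import numpy as np

rng = np.random.default_rng(0)
n = 60
A = rng.random((n, n)); A = (A + A.T) / 2; np.fill_diagonal(A, 0.0)

def avg_clustering(W):
    """Onnela weighted clustering coefficient, averaged over nodes."""
    Wh = np.cbrt(W / W.max())                # weights normalized, cube-rooted
    num = np.diagonal(Wh @ Wh @ Wh)          # weighted triangle intensity
    k = (W > 0).sum(axis=1)                  # node degree
    C = np.where(k > 1, num / np.maximum(k * (k - 1), 1), 0.0)
    return C.mean()

def degrade(A, t):
    """Subtract threshold t from all weights; weights <= 0 become absent."""
    return np.where(A - t > 0.0, A - t, 0.0)

cc = {t: avg_clustering(degrade(A, t)) for t in (0.0, 0.25, 0.4)}
```

Sweeping t and recording the metrics at each step reproduces the kind of degradation curve shown in Figure 1.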

In sum, cognitive deficits in neurodegenerative diseases can be understood in the scope of the progressive degradation of the connectivity between brain areas within the network.

Figure 1. The average clustering coefficient of the network decreases following the progressive degradation of the connectivity between brain areas


1. DS Bassett, ET Bullmore: Human Brain Networks in Health and Disease. Current Opinion in Neurology 2009, 22: 340–47.

2. O Sporns: Network Attributes for Segregation and Integration in the Human Brain. Current Opinion in Neurobiology 2013, 23: 162–71.

3. M Rubinov, O Sporns: Complex network measures of brain connectivity: Uses and interpretations. Neuroimage 2010, 52:1059–1069.

P161 A network architecture for comparing the behavior of a neurocomputational model of reward-based learning with human

Gianmarco Ragognetti1, Letizia Lorusso2, Andrea Viggiano2 and Angelo Marcelli1

1Laboratory of Natural Computation, Department of Information and Electrical Engineering and Applied Mathematics, University of Salerno, 84084 Fisciano (SA), Italy; 2Department of Medicine, University of Salerno, 84083 Lancusi (SA), Italy

Correspondence: Gianmarco Ragognetti

BMC Neuroscience 2017, 18 (Suppl 1):P161

Neurocomputational models represent a powerful tool for bridging the gap between the functions of neural circuits and observable behaviors [1]. Once the model has been built, its output is compared with observations, either to validate the model itself or to propose new hypotheses. This approach has led to a multi-scale model of the sensorimotor system spanning muscles, proprioceptors, skeletal joints, spinal regulating centers, and central control circuits [2–6].

In this framework, we propose a neural network architecture to simulate the selection of actions performed by the motor cortex in response to a sensory input during reward-based movement learning. The network has as many input nodes as the number of different stimuli, each node being a combination of the sensory inputs, and as many output nodes as the number of different actions that can be performed, each node being a combination of the motor commands. The network is fully connected, so that each stimulus contributes to the selection of each action and each action is selected concurrently by all the stimuli. The weights are updated by taking into account both the expected reward and the actual reward, as suggested in [7]. In this architecture, the percept is represented by a combination of sensory inputs, while the action is represented by a combination of motor commands. Thus, it faithfully reproduces the conditions of motor learning experiments in which a set of sensory inputs, such as semantically neutral visual stimuli, is presented to a subject whose response is merely a motor action, such as pushing a button. Under such conditions, it becomes possible to fit the model to the experimental data, both to estimate the validity of the model and to infer the role of its parameters in behavioral traits.
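A network of this kind can be sketched with a delta-rule update of the expected reward and softmax action selection. The rule follows the general prescription of [7], but the parameter values, the softmax choice, and the hidden stimulus-response mapping are illustrative assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(1)
n_stim, n_act = 4, 2
W = np.zeros((n_stim, n_act))                 # W[s, a]: expected reward
correct = rng.integers(0, n_act, size=n_stim) # hidden S-R mapping (hypothetical)
alpha, beta = 0.3, 3.0                        # learning rate, softmax gain

acc = []
for trial in range(2000):
    s = rng.integers(n_stim)                  # present a random stimulus
    p = np.exp(beta * W[s]); p /= p.sum()     # softmax over actions
    a = rng.choice(n_act, p=p)
    r = 1.0 if a == correct[s] else 0.0       # binary reward
    W[s, a] += alpha * (r - W[s, a])          # delta rule: actual - expected
    acc.append(r)

late = np.mean(acc[-200:])                    # accuracy late in learning
```

Increasing n_stim in this sketch slows convergence per stimulus, which is the task-complexity effect the abstract discusses.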

The simulations were compared to the behavior of human subjects while they learned which of two buttons to press in response to a collection of visual stimuli containing edges and geometric shapes in a reward-based setting. The results showed that the behavior of the complete system is the one expected under the hypothesis that reward acts by modulating the action selection triggered by the input stimuli during motor learning. Moreover, differently from most literature models, the learning rate varies with the complexity of the task, i.e. the number of input stimuli. It can be argued that the decrease in learning rate seen in humans learning large sets of stimuli could be due to an attenuation of memory traces in real synapses over time. In future investigations, we will improve the model by adding such an effect to our network.


1. Lan, N., Cheung, V. and Gandevia, S.C.: EDITORIAL - Neural and Computational Modeling of Movement Control. Front. in Comp. Neurosc. 2016, 10: 1–5.

2. Cheng, E. J., Brown, I.E., and Loeb, G. E.: Virtual muscle: a computational approach to understanding the effects of muscle properties on motor control. J. Neurosci. Methods 2000, 101: 117–130.

3. Mileusnic, M. P., Brown, I.E., Lan, N., and Loeb, G. E.: Mathematical models of proprioceptors. I. Control and transduction in the muscle spindle. J. Neurophysiol. 2006, 96: 1772–1788.

4. Song, D., Raphael, G., Lan, N., and Loeb, G. E.: Computationally efficient models of neuromuscular recruitment and mechanics. J. Neural Eng. 2008, 5: 175–184.

5. Song, D., Lan, N., Loeb, G. E., and Gordon, J.: Model-based sensorimotor integration for multi-joint control, development of a virtual arm model. Ann. Biomed. Eng. 2008, 36: 1033–1048.

6. He, X., Du, Y. F., and Lan, N.: Evaluation of feedforward and feedback contributions to hand stiffness and variability in multi joint arm control. IEEE Trans. Neural Syst. Rehabil. Eng. 2013, 21: 634–647.

7. Sutton, R. S., and Barto A.G.: Reinforcement learning: An introduction. Cambridge: MIT press, 1998.

P162 Distributed plasticity in the cerebellum: how do cerebellar cortex and nuclei plasticity cooperate for learning?

Rosa Senatore, Antonio Parziale, Angelo Marcelli

Laboratory of Natural Computation, Department of Information and Electrical Engineering and Applied Mathematics, University of Salerno, 84084 Fisciano (SA), Italy

Correspondence: Rosa Senatore

BMC Neuroscience 2017, 18 (Suppl 1):P162

Different forms of synaptic plasticity have been revealed within the cerebellum (CB), and many hypotheses about their role have been proposed [1]. We used a model-based analysis to investigate the role of these forms of plasticity in three behaviors: phase reversal of the vestibulo-ocular reflex, acquisition of conditioned responses, and learning a novel limb movement. We investigated these behaviors since they involve different forms of learning: phase reversal requires modifying a preexistent stimulus-response (S-R) association according to the feedback signal provided by climbing fibers (CFs); conditioning involves learning a new S-R association according to a preexistent one between the stimulus coming from the CFs and a motor response; learning novel motor behaviors corresponds to creating new S-R associations according to the CF feedback. The analysis was carried out with a CB model that incorporates plasticity mechanisms at different stages of CB processing, both in the cortex and in the nuclei [2]. Synaptic plasticity has been simulated in both the granular (Gr) and Purkinje (PC) networks: granule cells show intrinsic plasticity depending on mossy fiber (MF) activity, and MF-Gr synapses undergo both Long Term Depression (LTD) and Long Term Potentiation (LTP) [3]; PF-PC synapses undergo both LTD and LTP, depending on PF and CF activity [4]. The model also includes synaptic plasticity involving the molecular layer interneurons (MLI) at PF-MLI synapses [5] and rebound potentiation at MLI-PC synapses [6]. Within the CB nuclei, LTD occurs at MF-NC synapses during inhibition from PCs, whereas LTP occurs during release from inhibition [7]. Our results suggest that the main contribution to CB learning is provided by the synaptic plasticity at PF-PC and MF-NC synapses.
Indeed, excluding the plasticity at the PF-PC site caused strong impairment in learning all the considered behaviors, while excluding the plasticity at the MF-NC site induced mild impairment in acquiring conditioned responses and novel limb movements, and strong impairment in phase reversal and motor adaptation. Removal of other forms of synaptic plasticity only induced slower learning. Our results also suggest that LTP at PF-PC synapses underlies the extinction phenomenon observed in conditioning, and that the saving phenomenon could be ascribed to residual plasticity within the CB cortex rather than within the CB nuclei, since saving was observed even after removal of MF-NC plasticity before reconditioning. Finally, model simulations support the view that learned associations are transferred from the CB cortex to the CB nuclei, due to the combined effect of plasticity at PF-PC synapses in the early stage of learning and at MF-NC synapses in late learning. Indeed, lesions of the PC layer or removal of PF-PC synaptic plasticity in the late learning stage did not induce any impairment in the behavior of the model, whereas removal of PF-PC synaptic plasticity in early learning impaired the learning capabilities of the model.


1. Gao Z, van Beugen BJ, De Zeeuw CI: Distributed synergistic plasticity and cerebellar learning. Nat Rev Neurosci 2012, 13:619–635.

2. Senatore R, Parziale A, Marcelli A: A computational model for investigating the role of cerebellum in acquisition and retention of motor behavior. 25th Annual Computational Neuroscience Meeting: CNS-2016. BMC Neurosci 2016, 17:64.

3. Gall D, Prestori F, Sola E, D’Errico A, Roussel C, Forti L, Rossi P, D’Angelo E: Intracellular calcium regulation by burst discharge determines bidirectional long-term synaptic plasticity at the cerebellum input stage. J Neurosci 2005, 25:4813–4822.

4. Coesmans M, Weber JT, De Zeeuw CI, Hansel C: Bidirectional parallel fiber plasticity in the cerebellum under climbing fiber control. Neuron 2004, 44:691–700.

5. Rancillac A, Crépel F: Synapses between parallel fibres and stellate cells express long-term changes in synaptic efficacy in rat cerebellum. J Physiol 2004, 554:707–720.

6. Kano M, Rexhausen U, Dreessen J, Konnerth A: Synaptic excitation produces a long-lasting rebound potentiation of inhibitory synaptic signals in cerebellar Purkinje cells. Nature 1992, 356:601–604.

7. Aizenman CD, Linden DJ: Rapid, synaptically driven increases in the intrinsic excitability of cerebellar deep nuclear neurons. Nat Neurosci 2000, 3:109–111.

P163 Ising model with conserved magnetization on the human connectome: implications for the structure-function relation in wakefulness and anesthesia

S. Stramaglia1, M. Pellicoro1, L. Angelini1, E. Amico2,3, H. Aerts2, J. Cortés4, S. Laureys3, D. Marinazzo2

1Dipartimento di Fisica, Università degli Studi Aldo Moro, Bari, and INFN, Sezione di Bari, Italy; 2Data Analysis Department, Ghent University, Ghent, Belgium; 3Coma Science Group, University of Liège, Liège, Belgium; 4Cruces Hospital and Ikerbasque Research Center, Bilbao, Spain

Correspondence: S. Stramaglia

BMC Neuroscience 2017, 18 (Suppl 1):P163

Dynamical models implemented on the large-scale architecture of the human brain may shed light on how function arises from the underlying structure. This is notably the case for simple abstract models, such as the Ising model. We compare the spin correlations of the Ising model with the empirical functional brain correlations, both at the single-link level and at the modular level, and show that the prediction is better in anesthesia than in wakefulness, in agreement with recent experiments. We show that conserving the magnetization in the Ising model dynamics (Kawasaki dynamics) leads to an improved prediction of the empirical correlations in anesthetized brains (see Figure 1). Moreover, we show that at the peak of the specific heat (the critical state) the spin correlations are minimally shaped by the underlying structural network, explaining how the best match between structure and function is obtained at the onset of criticality, as previously observed.
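The difference between the two dynamics can be sketched as follows (a sparse random graph stands in for the connectome, and both sweeps use heat-bath acceptance): Glauber dynamics flips single spins, while Kawasaki dynamics exchanges an antiparallel pair, so the total magnetization is conserved exactly.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100
A = np.triu((rng.random((n, n)) < 0.1).astype(float), 1)
A = A + A.T                           # symmetric adjacency, no self-loops

def dE_flip(s, i):
    """Energy change of flipping spin i, with E = -sum_{i<j} A_ij s_i s_j."""
    return 2.0 * s[i] * (A[i] @ s)

def glauber_sweep(s, beta):
    for i in rng.integers(0, n, n):
        if rng.random() < 1.0 / (1.0 + np.exp(beta * dE_flip(s, i))):
            s[i] = -s[i]

def kawasaki_sweep(s, beta):
    for _ in range(n):
        i, j = rng.integers(0, n, 2)
        if s[i] == s[j]:
            continue
        # exchanging an antiparallel pair = flipping both spins, correcting
        # for the doubly counted (i, j) bond
        dE = dE_flip(s, i) + dE_flip(s, j) - 4.0 * A[i, j] * s[i] * s[j]
        if rng.random() < 1.0 / (1.0 + np.exp(beta * dE)):
            s[i], s[j] = s[j], s[i]

s_g = rng.choice([-1.0, 1.0], n)
s_k = s_g.copy()
m0 = s_k.sum()
for _ in range(50):
    glauber_sweep(s_g, beta=0.5)
    kawasaki_sweep(s_k, beta=0.5)
```

Long-run spin-spin correlations from either dynamics can then be compared link-by-link with an empirical functional connectivity matrix, which is the comparison the abstract performs.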

These findings could open the way to novel perspectives when the conserved magnetization is interpreted in terms of a homeostatic principle imposed to neural activity.

Figure 1. A. Mean Squared Error in Wakefulness and Anesthesia between the empirical connectivity and the one simulated by Glauber and Kawasaki dynamics. B. Mutual Information between the modular partitions of the empirical and modelled functional networks. These quantities are depicted as a function of the inverse temperature β

Conclusions: In agreement with recent theoretical frameworks [1], our results suggest that a wide range of temperatures corresponds to criticality of the dynamical Ising system on the connectome, rather than a narrow interval centered on a critical state. In such conditions, the correlational pattern is minimally shaped by the underlying structural network. It follows that, assuming the human brain operates close to a critical regime [2], there is an intrinsic limitation to the structure-function relationship that can be observed in data. We show that empirical correlations among brain areas are better reproduced at the modular level using a model which conserves the global magnetization. The most suitable way to compare functional and structural patterns is to contrast them at the network level, using, e.g., the mutual information between partitions, as in the present work.


1. Moretti P. and Muñoz M.A.: Griffiths phases and the stretching of criticality in brain networks, Nature communications 2013, 4: 2521.

2. Chialvo D.: Emergent complex neural dynamics, Nature Physics 2010, 6: 744–750.

P164 Multiscale Granger causality analysis by à trous wavelet transform

S. Stramaglia1, I. Bassez2, L. Faes3, D. Marinazzo2

1Dipartimento di Fisica, Università degli Studi Aldo Moro, Bari, and INFN, Sezione di Bari, Italy; 2Data Analysis Department, Ghent University, Ghent, Belgium; 3BIOtech, Dept. of Industrial Engineering, University of Trento, and IRCS-PAT FBK, Trento, Italy

Correspondence: S. Stramaglia

BMC Neuroscience 2017, 18 (Suppl 1):P164

Great attention has been devoted in recent years to the identification of information flows in human brains. Since interactions occur across multiple temporal scales, it is likely that information flow will exhibit a multiscale structure: high-frequency activity, reflecting local domains of cortical processing, and low-frequency activity dynamically spread across brain regions by both external sensory input and internal cognitive events. In order to detect information flow at multiple scales, decomposition of the signals in the wavelet space has been proposed in [1]; an analytical framework for linear multivariate stochastic processes explored at different time scales has been proposed in [2]. However, the computation of multiscale measures of information dynamics may be complicated by theoretical and practical issues such as filtering and undersampling. To overcome these problems, we propose here another wavelet-based approach for multiscale causality analysis, characterized by the following properties: (i) only the candidate driver variable is wavelet transformed; (ii) the decomposition is performed using the à trous wavelet transform with a cubic B-spline filter [3]. The use of the à trous transform is suggested by its interesting properties: it is shift-invariant, its coefficients at time t are a linear combination of the time series values, and no decimation of the time series, as in the discrete wavelet transform, is performed. Granger causality examines how much the predictability of the target from its own past, in a regression model of order m, improves when past values of the driver variables are included in the regression. We propose here to measure the causality at scale s by including w(t-1,s), w(t-2,s), …, w(t-m,s) in the regression model of the target, where w(t,s) are the à trous wavelet coefficients of the driver.
In Figure 1 we depict the multiscale causality evaluated by the proposed approach on a simulated two-dimensional linear system unidirectionally coupled with lag equal to 8 and strength a: causality increases with the strength and peaks at the lag. We applied the proposed algorithm to scalp EEG signals [4] and found that the global amount of causality among signals decreases significantly as the scale s is increased. Furthermore, comparing signals corresponding to resting conditions with closed eyes and with open eyes, we found that at large scales the effective connectivity, in terms of the proposed measure, is significantly lower with eyes open.
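The decomposition of [3] can be sketched as a minimal, shift-invariant à trous transform with the cubic B-spline filter h = [1, 4, 6, 4, 1]/16; at each scale the filter is upsampled by inserting zeros ("holes"), so the coefficients stay aligned with the original samples and no decimation occurs. The reflective boundary handling below is our own choice, not prescribed by the abstract.

```python
import numpy as np

def a_trous(x, n_scales):
    """À trous wavelet decomposition with the cubic B-spline filter."""
    h = np.array([1, 4, 6, 4, 1]) / 16.0
    c = x.astype(float)
    w = []
    for s in range(n_scales):
        step = 2 ** s
        # upsampled kernel for this scale: taps of h spaced 2**s apart
        kernel = np.zeros(4 * step + 1)
        kernel[::step] = h
        c_next = np.convolve(np.pad(c, 2 * step, mode='reflect'),
                             kernel, mode='valid')
        w.append(c - c_next)          # detail coefficients at scale s+1
        c = c_next                    # smoothed signal for the next scale
    return w, c                       # details and final smooth

rng = np.random.default_rng(3)
x = rng.standard_normal(512)
w, smooth = a_trous(x, 4)
recon = smooth + sum(w)               # details + smooth rebuild the signal
```

The detail coefficients w[s-1] are exactly the w(t,s) of the driver used in the scale-s Granger regression described above.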

Figure 1. A. Granger causality in an unidirectionally coupled system is depicted as a function of the scale for several values of the coupling. B. GC values for eyes open and closed conditions from regular time series. C. GC values in the same conditions from wavelet coefficients (scale 4)


1. Lungarella M, Pitti A, Kuniyoshi K: Information transfer at multiple scales. Phys. Rev. E 2007, 76: 056117

2. Faes, L., Montalto, A., Stramaglia, S., Nollo, G., Marinazzo, D.: Multiscale analysis of information dynamics for linear multivariate processes, Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBS 2016.

3. Renaud O, Starck J-L, Murtagh F: Wavelet-Based Combined Signal Filtering and Prediction. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics 2005, 35(6):1241–1251.


P165 New (spectral) dynamic causal modeling scheme improves effective connectivity estimation within resting state networks in longitudinal data

Hannes Almgren1, Frederik Van De Steen1, Adeel Razi2,3, Daniele Marinazzo1

1Department of Data Analysis, Ghent University, Ghent, 9000, Belgium; 2The Wellcome Trust Centre for Neuroimaging, University College London, London, WC1N 3BG, UK; 3Department of Electronic Engineering, NED University of Engineering and Technology, Karachi, Pakistan

Correspondence: Hannes Almgren (

BMC Neuroscience 2017, 18 (Suppl 1):P165

Effective connectivity within resting state networks has been estimated using spectral dynamic causal modeling (spDCM) [1]. Since its initial release, spDCM has been updated to improve performance and to render it applicable to larger networks. The objective of the present study is to assess the impact of these changes on parameter estimates and stability. We therefore compared performance between an early version of DCM (v6303) and a newer version of DCM (v6801) combined with the parametric empirical Bayes (PEB) framework [2]. Both were compared regarding (1) ability to explain observed cross spectral densities (CSD), (2) estimated network structure, and (3) stability of parameter estimates. An extensive single-subject longitudinal dataset, comprising 101 resting state fMRI sessions, was analyzed [3]. Eight resting state networks were chosen for our analyses: occipital and lateral visual, auditory, somatomotor, left and right frontoparietal, default mode, and executive control. Results showed that the newer spDCM-PEB combination explained the data (i.e., CSDs) far better than the older spDCM (95.31% versus 68.31% explained variance, respectively). Furthermore, the older version often failed to yield proper estimates (i.e., because of a low proportion of explained variance or estimated connection strengths near zero) in networks consisting of two or three regions, while the newer version showed fewer such problems. Concerning average network structure across sessions, the newer spDCM-PEB combination detected asymmetric influences within networks consisting of two regions (see Figure 1). Furthermore, regions located in the medial part of the brain showed larger in- than out-connectivity. For the default mode network, consisting of four regions in the present study, both versions yielded largely similar network structures (i.e., reciprocal influences between bilateral parietal cortices, and larger in- than out-connectivity for medial areas).
However, the older version of spDCM showed a positive influence (0.21 Hz) from precuneus to medial prefrontal cortex, which was much smaller (0.05 Hz) for the newer DCM-PEB combination. Stability depended profoundly on the size of the network: parameter estimates showed higher stability in two-region networks than in larger networks for both versions.

Figure 1. Comparison of posterior parameter estimates within the auditory network. A. median posterior parameter estimates for the older version (shown in red) and the newer spDCM-PEB combination (shown in black). B and C. distribution of these parameter estimates over sessions, together with the bootstrapped high density intervals, for both the older and newer scheme


1. Friston KJ, Kahan J, Biswal B, Razi, A: A DCM for resting state fMRI. NeuroImage 2014, 94:396–407.

2. Friston KJ, Litvak V, Oswal A, Razi A, Stephan KE, van Wijk BC, Ziegler G, Zeidman P: Bayesian model reduction and empirical Bayes for group (DCM) studies. NeuroImage 2016, 128:413–431.

3. Laumann TO, Gordon EM, Adeyemo B, Snyder AZ, Joo SJ, Chen MY, Gilmore AW, McDermott KB, Nelson SM, Dosenbach NU, et al.: Functional system and areal organization of a highly sampled individual human brain. Neuron 2015, 87(3):657–670.

P166 Effective connectivity modulations of win-and loss feedback: A dynamic causal modeling study of the human connectome gambling task

Frederik Van de Steen1, Ruth Krebs2, Daniele Marinazzo1

1Department of data analysis, Ghent University, Ghent, 9000, Belgium; 2Department of experimental psychology, Ghent University, Ghent, 9000, Belgium

Correspondence: Frederik Van de Steen (

BMC Neuroscience 2017, 18 (Suppl 1):P166

The main goal of this study was to investigate changes in effective connectivity associated with reward and punishment. More specifically, changes in connectivity between the ventral striatum (VS), anterior insula (aI), anterior cingulate cortex (ACC) and occipital cortex (OCC) that are related to win- and loss- feedback were studied.

Here, fMRI data from the Human Connectome Project [1] were used. Data from 369 unrelated subjects performing a gambling task were analyzed. In short, participants played a card game in which they had to guess whether the upcoming card would be higher or lower than 5 (the range was between 1 and 9). After the gamble, feedback was provided indicating a reward, punishment, or neutral trial. The minimally preprocessed data were used and additionally spatially smoothed with a 5-mm FWHM Gaussian kernel. The images were then entered into a first-level general linear model (GLM), and summary statistic images of the first-level GLM were entered into a second-level GLM. The following two contrasts were used to identify the relevant brain regions at the group level: [Win - Neut] AND [Loss - Neut] (i.e., a conjunction), and [Win - Neut]. Based on the group-level results, time series of VS, aI, ACC and OCC were extracted for every subject and used in a subsequent dynamic causal modeling (DCM, [2]) analysis. We specified a fully connected model (i.e., all nodes reciprocally connected) in which the win and loss events were allowed to modulate all connections. The driving input consisted of all feedback events (win, loss and neutral) and entered the DCMs via OCC. The fully connected model was estimated for every subject and then used in the recently proposed parametric empirical Bayes (PEB, [3]) framework for estimating DCM parameters at the group level. Finally, we used Bayesian model reduction to obtain the best 255 nested models. Since there was no clear winning model, Bayesian model averaging (BMA) of the parameters of the 256 models (full + 255 nested) was performed. Figure 1 shows the group-level BMA modulatory parameters with a posterior probability >.95.

Conclusion: Overall, both win and loss feedback have a general increasing effect on effective connectivity. The main difference between win and loss is observed for the connection from aI to OCC, where loss feedback has a decreasing effect. In addition, only win feedback increases the connection from VS to aI. Overall, the VS appears to be a key region in conveying loss and win information across the network.

Figure 1. BMA modulatory parameters at the group level are shown for A. loss feedback; B. win feedback


This research was supported by the Fund for Scientific Research-Flanders (FWO-V), Grant FWO16/ASP_H/255.


1. Van Essen DC, et al.: The WU-Minn Human Connectome Project: an overview. NeuroImage 2013, 80:62–79.

2. Friston KJ, Harrison L, Penny W: Dynamic causal modelling. NeuroImage 2003, 19(4):1273–1302.

3. Friston KJ, et al.: Bayesian model reduction and empirical Bayes for group (DCM) studies. NeuroImage 2016, 128:413–431.

P167 Modeling global brain dynamics in brain tumor patients using the Virtual Brain

Hannelore Aerts, Daniele Marinazzo

Department of Data Analysis, Ghent University, Ghent, Belgium

Correspondence: Hannelore Aerts (

BMC Neuroscience 2017, 18 (Suppl 1):P167

Increasingly, computational models of brain activity are applied to investigate the relation between structure and function. In addition, biologically interpretable dynamical models may be used as unique predictive tools to investigate the impact of structural connectivity damage on brain dynamics. That is, individually modeled biophysical parameters could inform on alterations in patients’ local and large-scale brain dynamics, which are invisible to brain-imaging devices. In this study, we compared global biophysical model parameters between brain tumor patients and healthy controls. To this end, we used The Virtual Brain (TVB; [1]), a neuroinformatics platform that utilizes empirical structural connectivity data to create dynamic models of an individual’s brain.

Ten glioma patients (WHO grade II and III, mean age 41.1 years, 4 females; 5 from an open access dataset [2]), 13 meningioma patients (mean age 60.2 years, 11 females), three pseudo-meningioma patients (subtentorial brain tumors, mean age 58 years, 2 females) and 11 healthy partners (mean age 58.6 years, 4 females) were included in this study. From all participants, diffusion MRI, resting-state fMRI and T1-weighted MRI data were acquired. Data were preprocessed and converted to subject-specific structural and functional connectivity matrices using a modified version of the TVB preprocessing pipeline [3].

In order to simulate brain dynamics, the reduced Wong-Wang model [4] was used. This is a dynamic mean field model that consistently summarizes the realistic dynamics of a detailed spiking, conductance-based synaptic large-scale network. A subject-specific parameter space exploration was conducted to obtain an optimal correspondence between each individual's simulated and empirical functional connectivity matrix. To this end, the values of the global scaling factor G and the local feedback inhibitory synaptic coupling J_i were varied. The values of G and J_i yielding optimal correspondence were then compared between the brain tumor patient groups and the healthy controls.
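Schematically, such a parameter space exploration can be sketched as below, with the important caveat that a noisy linear rate network stands in for the reduced Wong-Wang model and that all grid values are illustrative:

```python
import numpy as np

def simulate_fc(W, G, J, n_steps=2000, dt=0.01, seed=0):
    """Toy stand-in for the TVB simulator: a noisy linear rate network
    x' = -x + G*W@x - J*x + noise, from which a simulated functional
    connectivity (correlation) matrix is computed."""
    rng = np.random.default_rng(seed)
    n = W.shape[0]
    x = np.zeros(n)
    traj = np.empty((n_steps, n))
    for t in range(n_steps):
        x = (x + dt * (-x + G * (W @ x) - J * x)
             + np.sqrt(dt) * 0.1 * rng.standard_normal(n))
        traj[t] = x
    return np.corrcoef(traj.T)

def fit_parameters(W, fc_emp, G_grid, J_grid):
    """Parameter space exploration: return the (G, J) pair maximising
    the Pearson correlation between the upper triangles of simulated
    and empirical FC, together with that correlation."""
    iu = np.triu_indices(W.shape[0], k=1)
    best = (None, None, -np.inf)
    for G in G_grid:
        for J in J_grid:
            r = np.corrcoef(simulate_fc(W, G, J)[iu], fc_emp[iu])[0, 1]
            if r > best[2]:
                best = (G, J, r)
    return best
```

Swept over a grid, the procedure recovers the parameter pair whose simulated FC best matches the empirical matrix.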

The distribution of optimal values of G and J_i per group is depicted in Figure 1. Visually, no clear group differences are apparent. Future studies will use larger sample sizes, as data collection is still ongoing and more data-sharing efforts across labs are being undertaken. In addition, local model parameter alterations in the vicinity of the lesion will be examined, since global model parameters might not be sufficiently sensitive to capture local lesion effects.

Figure 1. Distribution of optimal model parameter values per group: control subjects (CON), pseudo control subjects with subtentorial brain tumor (pCON), meningioma patients (MEN), and glioma WHO grade II and III patients (GLI). A. Global scaling factor (G); B. Local feedback inhibitory synaptic coupling (J_i)


1. P Sanz Leon, S A Knock, M M Woodman, L Domide, J Mersmann, A R McIntosh, V K Jirsa. The Virtual Brain: A simulator of primate brain network dynamics. Frontiers in Neuroinformatics 2013, 7:1–23.

2. C Pernet, K Gorgolewski, I Whittle. UK Data Archive. []

3. M Schirner, S Rothmeier, V K Jirsa, A R McIntosh, P Ritter. An automated pipeline for constructing personalized virtual brains from multimodal neuroimaging data. NeuroImage 2015, 117:343–357.

4. G Deco, A Ponce-Alvarez, P Hagmann, G L Romani, D Mantini, M Corbetta. How local excitation-inhibition ratio impacts the whole brain dynamics. The Journal of Neuroscience 2014, 34:7886–7898.

P168 Representation of Neuronal Morphologies

Lida Kanari1, Pawel Dlotko2, Martina Scolamiero3, Ran Levi4, Julian Shillcock1, Christiaan P.J. de Kock5, Kathryn Hess3 and Henry Markram1

1Blue Brain Project, École polytechnique fédérale de Lausanne, Lausanne, Switzerland; 2Department of Mathematics, Swansea University, Swansea, Wales, UK; 3Laboratory for Topology and Neuroscience at the Brain Mind Institute, École polytechnique fédérale de Lausanne, Lausanne, Switzerland; 4Institute of Mathematics, University of Aberdeen, Aberdeen, Scotland, UK; 5Department of Integrative Neurophysiology, Center for Neurogenomics and Cognitive Research, VU Universiteit Amsterdam, Amsterdam, the Netherlands

Correspondence: Lida Kanari (

BMC Neuroscience 2017, 18 (Suppl 1):P168

The shape of neuronal arborizations defines, among other aspects, their physical connectivity and functionality. Yet an efficient method for quantitatively analyzing the spatial structure of such trees has been difficult to establish. The wide diversity of neuronal morphologies in the brain, even for cells identified by experts as being of the same type, makes an objective classification scheme a challenging task.

We propose a Topological Morphology Descriptor (TMD) [1], inspired by Topological Data Analysis, to quantitatively analyze the branching shapes of neurons; it overcomes the limitations of existing techniques. The TMD algorithm maps the branches of a tree (Fig 1A) into a "barcode" (Fig 1B), encoding the morphology of the tree into a simplified topological representation that preserves enough information to be useful for comparing and distinguishing different branching patterns.
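The mapping from tree to barcode can be sketched as follows; this is our simplified reading of the TMD idea (radial distance from the soma as the function on the tree, with all but the longest-lived subtree dying at each branch point), not the authors' implementation:

```python
def tmd_barcode(parent, dist):
    """Sketch of a Topological Morphology Descriptor: parent[i] is the
    parent of node i (-1 for the soma/root) and dist[i] its radial
    distance from the soma. Each bar (d_death, d_birth) is the lifetime
    of a branch: born at its leaf's distance, dying where it merges into
    a longer-lived sibling; the longest branch dies at the soma."""
    n = len(parent)
    children = [[] for _ in range(n)]
    root = 0
    for i, p in enumerate(parent):
        if p < 0:
            root = i
        else:
            children[p].append(i)
    bars = []

    def process(v):
        if not children[v]:                # a leaf: a branch is born here
            return dist[v]
        vals = sorted((process(c) for c in children[v]), reverse=True)
        for dead in vals[1:]:              # all but the strongest branch die
            bars.append((dist[v], dead))
        return vals[0]                     # the survivor continues to the soma

    bars.append((dist[root], process(root)))
    return bars
```

For a tree with one branch point at distance 1 and leaves at distances 3 and 2, the barcode contains a short bar (1, 2) and a long bar (0, 3).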

Figure 1. Topological morphology descriptor. A. The neuronal tree is mapped into a barcode. B. Each bar represents the lifetime of a branch; its start and end distance from the soma

This method is applicable to any tree-like structure, and we demonstrate its generality by applying it to groups of mathematical random trees and neuronal morphologies. We identify the structural differences between known morphological types [2-3] as well as subtypes for human temporal cortex L2/3 pyramidal cells [4]. Our results show that the TMD of tree shapes reliably and efficiently distinguishes different shapes of trees and neurons. Therefore, the TMD provides an objective benchmark test of the quality of any grouping of branching trees into discrete morphological classes. Our results demonstrate that the TMD can enhance our understanding of the anatomy of neuronal morphologies.


1. Kanari L, Dłotko P, Scolamiero M, Levi R, Shillcock J, Hess K, Markram H, Quantifying topological invariants of neuronal morphologies, 2016, []

2. Ascoli G.A., Donohue D.E. and Halavi M., NeuroMorpho.Org: A Central Resource for Neuronal Morphologies, J. Neurosc. 2007, 27 (35): 9247–9251.

3. Markram H. Muller E., Ramaswamy S., Reimann M.W. et al., Reconstruction and Simulation of Neocortical Microcircuitry, Cell 2015, 163 (2): 456–492.

4. Mohan H., de Kock C.P.J., et al. Dendritic and Axonal Architecture of Individual Pyramidal Neurons across Layers of Adult Human Neocortex, Cereb Cortex 2015, 25 (12): 4839–4853.

P169 Firing Rate Heterogeneity and Consequences for Stimulus Estimation in the Electrosensory System

Cheng Ly1, Gary Marsat2

1Department of Statistical Sciences and Operations Research, Virginia Commonwealth University, Richmond, VA 23284, USA; 2Biology Department, West Virginia University, Morgantown, WV 26506, USA

Correspondence: Cheng Ly (

BMC Neuroscience 2017, 18 (Suppl 1):P169

Heterogeneity of neural attributes is recognized as a crucial feature of neural processing. We have therefore developed theoretical methods (based on [1]) to characterize the firing rate distribution of spiking neural networks with intrinsic and network heterogeneity [2], both of which have been widely reported in experiments. The interplay of intrinsic and network heterogeneity can lead to various levels of firing rate heterogeneity, depending on the regime.

Next, we adapt our theory to a delayed feedforward spiking network model of the electrosensory system of the weakly electric fish. Experimental recordings indicate that feedforward network input can mediate the response heterogeneity of pyramidal cells [3]. We demonstrate that structured connectivity rules, derived from our theory, can lead to statistics qualitatively similar to the experimental data. Thus, the model demonstrates that intrinsic and network attributes do not interact linearly but rather in a complex, stimulus-dependent fashion to increase or decrease neural heterogeneity and thereby shape population codes.

As evidence for heterogeneity shaping population codes, we also present preliminary work using recordings from electric fish subjected to noisy stimuli. We fit a GLM to each neuron using standard maximum likelihood methods and perform Bayesian estimation of the stimuli. We find that firing rate heterogeneity is a signature of optimal (Bayesian) estimation of noisy stimuli. Interestingly, the firing rate correlation is not an indicator of decoding performance for a given population of neurons.
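A toy version of this fit-then-decode pipeline, with the caveat that a linear-Gaussian encoding model is substituted for the actual GLM of the recordings and that all sizes and noise levels are illustrative:

```python
import numpy as np

def fit_glm(stim, rates):
    """Maximum-likelihood fit of a linear-Gaussian encoding model
    r_i = w_i * s + b_i + noise for each neuron (a simplified stand-in
    for the Poisson GLM fitted to the recordings)."""
    X = np.column_stack([stim, np.ones(len(stim))])
    W, *_ = np.linalg.lstsq(X, rates, rcond=None)
    return W  # row 0: heterogeneous gains w_i, row 1: baselines b_i

def decode_bayes(W, r, prior_var=1.0, noise_var=0.1):
    """Posterior-mean (Bayesian) estimate of a scalar stimulus from the
    population response r, under a zero-mean Gaussian prior; the
    heterogeneous gains enter through w @ w."""
    w, b = W[0], W[1]
    precision = 1.0 / prior_var + (w @ w) / noise_var
    return (w @ (r - b) / noise_var) / precision
```

With heterogeneous gains across the population, the posterior mean recovers the stimulus from a single trial of population activity.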


1. W. Nicola, C. Ly, S.A. Campbell: One-Dimensional Population Density Approaches to Recurrently Coupled Networks of Neurons with Noise. SIAM Journal on Applied Mathematics 2015, 75:2333–2360.

2. C. Ly: Firing Rate Dynamics in Recurrent Spiking Neural Networks with Intrinsic and Network Heterogeneity. Journal of Computational Neuroscience 2015, 39:311–327.

3. G. Marsat, G.J. Hupe, K.M. Allen: Heterogeneous response properties in a population of sensory neurons are structured to efficiently code naturalistic stimuli. Program # 181.20 Neuroscience Meeting Planner 2014.

P170 Knowledge Space: a community encyclopedia linking brain research concepts to data, models and literature

Tom Gillespie3, Willy Wong3, Malin Sandström1, Mathew Abrams1, Jeffrey S. Grethe3, Maryann Martone4

1INCF Secretariat, Karolinska Institute, Nobels väg 15A, 17177 Stockholm, Sweden; 2Campus Biotech, EPFL, CH-1202 Genève, Switzerland; 3Center for Research in Biological Systems, UCSD, La Jolla 92093, CA, USA; 4Neurosciences, UCSD, La Jolla 92093, CA, USA

Correspondence: Malin Sandström (

BMC Neuroscience 2017, 18 (Suppl 1):P170

KnowledgeSpace [1] is a community encyclopedia platform currently under development where neuroscience data and knowledge are synthesized. KnowledgeSpace aims to provide a global interface between current brain research concepts and the data, models and literature about them. It is an open project that welcomes participation and contributions from members of the global research community.

KnowledgeSpace version 1.0 was launched at Neuroscience 2016 in San Diego, November 12-16, with three modes of search - keyword, category and atlas-based (so far only for mouse brain). During the pre-launch phase, work focused on linking concepts to data, models, and literature from existing community resources. Current data sources include NeuroLex, Allen Institute for Brain Sciences, The Blue Brain Project, NeuroMorpho, NeuroElectro, Cell Image Library, NIF Integrated Connectivity, Ion Channel Genealogy, ModelDB, Open Source Brain, GenSat, BrainMaps, NeuronDB, The Human Brain Atlas, and PubMed. Initial content included in KnowledgeSpace covers ion channels, neuron types, and microcircuitry. For each content type, physiology, gene expression, anatomy, models, and morphology data sources are available.

Going forward we will enhance atlas representations of the mouse brain linking concepts to data, models, and literature, and an atlas representation of the human brain that links to available data, models, and literature will be implemented. Links to analysis tools will also be integrated into the KnowledgeSpace data section. The project will also develop protocols, standards, and mechanisms that allow the community to add data, analysis tools, and model content to KnowledgeSpace.

The initial development of KnowledgeSpace has been driven and supported by the International Neuroinformatics Coordinating Facility (INCF), the Neuroscience Information Framework (NIF) and the Blue Brain Project (BBP). KnowledgeSpace also represents an important component of the Neuroinformatics Platform being deployed in the Human Brain Project web portal. KnowledgeSpace is currently transitioning to a shared governance model, with a Governing Board composed of members of the neuroscience community who are currently funded to generate or share data and/or code as part of a lab, project or organization, and who will rotate off the board when their project ends.


1. KnowledgeSpace website []

P171 Evaluating the computational capacity of a cerebellum model

Robin De Gernier1, Sergio Solinas2, Christian Rössert3, Marc Haelterman1, Serge Massar1

1École polytechnique de Bruxelles, Université libre de Bruxelles, Brussels, Belgium, 1050; 2Department of Biomedical Science, University of Sassari, Sassari, Italy, 07100; 3Blue Brain Project, École polytechnique fédérale de Lausanne, Geneva, CH-1202, Switzerland

Correspondence: Robin De Gernier (

BMC Neuroscience 2017, 18 (Suppl 1):P171

The cerebellum plays an essential role in tasks ranging from motor control to higher cognitive functions (such as language processing) and receives input from many brain areas. A general framework for understanding cerebellar function is to view it as an adaptive filter [1]. Within this framework, understanding from computational and experimental studies how the cerebellum processes information, and what kinds of computations it performs, is a complex task yet to be fully accomplished. For computational studies, this reflects a need for new systematic methods to characterize the computational capacities of cerebellum models. In the present work, we address this need by applying a method borrowed from machine learning to evaluate the computational capacity of a prototypical model of the cerebellar cortical network. Using this method, we find that the model can perform both linear operations on input signals, as expected from previous work, and, more surprisingly, highly nonlinear operations.

The model that we study is a simple rate model of the cerebellar granular layer in which granule cells inhibit each other via a single-exponential synaptic connection. The resulting recurrent inhibition is an abstraction of the inhibitory feedback circuit composed of granule and Golgi cells. Purkinje cells are modelled as linear trainable readout neurons. The model was originally introduced in [2, 3] to demonstrate that models of the cerebellum that include recurrence in the granular layer are suited for timing-related tasks. Further studies carried out in [4] showed how the recurrent dynamics of the network can provide the basis for constructing temporal filters.

The method, described in detail in [5] and developed in the context of the artificial intelligence paradigm known as reservoir computing [6], consists of feeding the network model with a random time-dependent input signal and then quantifying how well a complete set of functions of the input (each function representing a different type of computation) can be reconstructed by taking a linear combination of the neuronal activations. The result is a quantitative estimate of the number of different computations that the model can carry out. We conducted simulations with 1000 granule cells. Our results show that the prototypical cerebellum model can compute both linear and highly nonlinear functions of its input. Specifically, the model is able to reconstruct Legendre polynomials up to the 10th degree. Moreover, the model can internally maintain a delayed representation of the input, with delays of up to 100 ms, and perform operations on that delayed representation. Despite their abstract nature, these two properties are essential for typical cerebellar functions, such as learning the timing of conditioned reflexes, fine-tuning nonlinear motor control tasks or, we believe, even higher cognitive functions.
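The capacity measure of [5] can be sketched with a generic rate reservoir standing in for the granular-layer model; the network size, spectral radius and the chosen targets below are illustrative, not the study's parameters:

```python
import numpy as np
from numpy.polynomial import legendre

def run_reservoir(u, n_units=100, rho=0.9, seed=0):
    """Minimal tanh rate reservoir driven by the input, with random
    recurrent weights rescaled to spectral radius rho (a generic
    stand-in for the granular-layer network)."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((n_units, n_units)) / np.sqrt(n_units)
    W *= rho / np.max(np.abs(np.linalg.eigvals(W)))
    w_in = rng.standard_normal(n_units)
    x = np.zeros(n_units)
    X = np.empty((len(u), n_units))
    for t, ut in enumerate(u):
        x = np.tanh(W @ x + w_in * ut)
        X[t] = x
    return X

def capacity(X, target, washout=100):
    """Capacity for one target function of the input: the fraction of
    the target's variance recovered by the best linear readout of the
    unit activations (Dambre et al., 2012). Lies between 0 and 1."""
    Xw = X[washout:]
    yc = target[washout:] - target[washout:].mean()
    beta, *_ = np.linalg.lstsq(Xw, yc, rcond=None)
    return 1.0 - np.mean((yc - Xw @ beta) ** 2) / np.var(yc)

def legendre_delay_target(u, degree, delay):
    """Legendre polynomial of the given degree applied to the input
    delayed by `delay` steps."""
    coeffs = np.zeros(degree + 1)
    coeffs[degree] = 1.0
    shifted = np.roll(u, delay)
    shifted[:delay] = 0.0
    return legendre.legval(shifted, coeffs)
```

Summing such capacities over degrees and delays gives the total number of distinct computations the network supports; in this toy setting the linear delayed target is reconstructed almost perfectly, while higher-degree Legendre targets are recovered partially through the tanh nonlinearity.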

In future work, we hope to confirm these abstract results by applying our cerebellum model to typical cerebellar tasks. Additionally, we will compare our results with a very recent work which studied how a model of the cerebellum could solve several machine learning tasks [7].


1. Dean P, Porril J: The cerebellar microcircuit as an adaptive filter: experimental and computational evidence. Nat Rev Neurosci 2010, 11(1): 30–43.

2. Yamazaki T, Tanaka S: Neural Modeling of an Internal Clock. Neural Comput 2005, 17(5): 1032–1058.

3. Yamazaki T, Tanaka S: The cerebellum as a liquid state machine. Neural Netw 2007, 20(3): 290–297.

4. Rössert C, Dean P, Porrill J: At the Edge of Chaos: How Cerebellar Granular Layer Network Dynamics Can Provide the Basis for Temporal Filters. PLOS Comput Biol 2015, 11(10):e1004515.

5. Dambre J, Verstraeten D, Schrauwen B, Massar S: Information processing capacity of dynamical systems. Sci Rep 2012, 2:514.

6. Lukoševičius M, Jaeger H: Reservoir computing approaches to recurrent neural network training. Computer Science Review 2009, 3:127–149.

7. Hausknecht M, Li WK, Mauk M, Stone P: Machine Learning Capabilities of a Simulated Cerebellum. IEEE Trans Neural Netw Learn Syst 2017, 28(3):510–522.

P172 Complexity of cortical connectivity promotes self-organized criticality

Valentina Pasquale1, Vito Paolo Pastore2, Sergio Martinoia2, Paolo Massobrio2

1Neuroscience and Brain Technologies Department, Istituto Italiano di Tecnologia (IIT), Genova, Italy; 2Department of Informatics, Bioengineering, Robotics, System Engineering (DIBRIS), University of Genova, Genova, Italy

Correspondence: Valentina Pasquale (

BMC Neuroscience 2017, 18 (Suppl 1):P172

Large-scale in vitro cortical networks spontaneously exhibit recurrent events of propagating spiking and bursting activity, usually termed neuronal avalanches because their size (and lifetime) distributions can be approximated by a power law, as in critical sand-pile models [1, 2] (Figure 1). However, neuronal avalanches in cultures of dissociated cortical neurons can follow three different dynamic states, namely sub-critical, critical, or super-critical, depending on several factors such as developmental stage, excitation/inhibition balance, and cell density [3]. In this work, we investigated the role of connectivity in driving spontaneous activity towards critical, sub-critical or super-critical regimes by combining experimental and computational investigations.
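The avalanche statistics referred to here can be computed along the following lines; the bin threshold and the continuous maximum-likelihood exponent estimator are our illustrative choices, not necessarily the authors' pipeline:

```python
import numpy as np

def avalanche_sizes(spike_counts):
    """Neuronal avalanches from a binned population spike-count series:
    an avalanche is a maximal run of non-empty bins, and its size is the
    total spike count within the run (as in Beggs & Plenz)."""
    sizes, current = [], 0
    for c in spike_counts:
        if c > 0:
            current += c
        elif current > 0:
            sizes.append(current)
            current = 0
    if current > 0:
        sizes.append(current)
    return sizes

def powerlaw_exponent(sizes, s_min=1.0):
    """Continuous maximum-likelihood (Hill) estimate of alpha for
    P(s) ~ s^-alpha, s >= s_min; critical avalanche size distributions
    are expected to give alpha near 1.5."""
    s = np.asarray([x for x in sizes if x >= s_min], dtype=float)
    return 1.0 + len(s) / np.sum(np.log(s / s_min))
```

Comparing the fitted exponent (and the quality of the power-law fit) against sub- and super-critical alternatives is what assigns a culture to one of the three regimes.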

Our experimental model consists of mature networks (third week of in vitro development) of dissociated cortical neurons coupled to High-Density Micro-Electrode Arrays (HD-MEAs; 3Brain, Wädenswil, Switzerland). These devices, containing 4096 microelectrodes spaced 81 µm apart, allow one to follow the emergence and propagation of neuronal avalanches with high spatio-temporal resolution. We estimated the functional connectivity of the cortical networks using cross-correlation based methods collected in the software ToolConnect [4]. In particular, our cross-correlation algorithm reliably and accurately infers functional and effective excitatory and inhibitory links in ex vivo neuronal networks, while guaranteeing the high computational performance necessary to process large-scale population recordings. To support our experimental investigations, we also developed a computational model of a neuronal network, made up of Izhikevich neurons [5] structurally connected following well-defined connectivity topologies (e.g., random, scale-free, small-world).

Simulations of the model demonstrated that the presence of hubs, the physiological balance between excitation and inhibition, and the concurrent presence of scale-free and small-world features are necessary to induce critical dynamics. We then confirmed the predictions of the model by analyzing medium/high density cortical cultures coupled to HD-MEAs, finding that networks featuring both scale-free and small-world properties (as computed from functional connectivity graphs) display critical behavior.

Figure 1. Example of electrophysiological activity of a cortical network coupled to a High-Density Micro-Electrode Array (HD-MEA)


1. Beggs JM, Plenz D: Neuronal avalanches in neocortical circuits. J Neurosci 2003, 23(35):11167–11177.

2. Bak P: How nature works. Oxford (UK): Oxford University Press; 1997.

3. Pasquale V, Massobrio P, Bologna LL, Chiappalone M, Martinoia S: Self-organization and neuronal avalanches in networks of dissociated cortical neurons. Neuroscience 2008, 153(4):1354–1369.

4. Pastore VP, Poli D, Godjoski A, Martinoia S, Massobrio P: ToolConnect: a functional connectivity toolbox for in vitro networks. Front Neuroinform 2016, 10(13).

5. Izhikevich EM: Simple model of spiking neurons. IEEE Trans Neur Net 2003, 14:1569–1572.

P173 Attractor dynamics of cortical assemblies underlying brain awakening from deep anesthesia

Cristiano Capone1,2, Núria Tort-Colet3, Maria V. Sanchez-Vives3,4, Maurizio Mattia1

1Istituto Superiore di Sanità (ISS), 00161 Rome, Italy; 2PhD Program in Physics, Sapienza University, 00185 Rome, Italy; 3Institut d’Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), 08036 Barcelona, Spain; 4Institució Catalana de Recerca i Estudis Avançats (ICREA), 08010 Barcelona, Spain

Correspondence: Cristiano Capone (

BMC Neuroscience 2017, 18 (Suppl 1):P173

Slow rhythms of activity (~1 Hz), i.e. slow-wave activity [1, 2], are a remarkably reproducible dynamical pattern with a low degree of complexity, which opens a window onto the brain's multiscale organization, on top of which cognitive functions emerge during wakefulness. Understanding how the transition to wakefulness takes place might shed light on the emergence of the rich repertoire of neuronal dynamics underlying brain computation. The sleep-wake transition is a widely studied phenomenon spanning experimental, computational and theoretical frameworks [3–5]; however, how brain state changes occur is still debated. In our previous work [6] we showed, from intracortical recordings in anesthetized rats, that sleep-like rhythms fade out as wakefulness is approached, giving rise to an alternation between slow Up/Down oscillations (SO) and awake-like (AL) activity periods. We also showed how this phase of activity-pattern bistability is captured by a mean-field rate-based model of a cortical column. Guided by this mean-field model, spiking neuron networks were devised to reproduce the electrophysiological changes displayed during the transition. The model also gave us hints about the mechanistic and dynamical nature of the observed activity patterns, suggesting that the appearance of AL periods is due to a Hopf-like transition from a limit cycle to a stable fixed point at a high level of activity, and that the AL-SO alternation is related to a slowly oscillating (~0.2 Hz) level of excitation, probably due to populations of neurons in deeper regions of the brain.

We extended our previous findings by performing a stability analysis of the competing attractors, observing a modulation of their stability that affects the dynamics of the Down-to-AL transition and the residence dynamics within the AL state. Moreover, we found that the mean-field model remarkably matches the stability modulation observed in the experiments. This match between theory and experiment further strengthens our claim that cortical assemblies of neurons undergo a Hopf bifurcation when anesthesia fades out.
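The claimed limit-cycle-to-fixed-point transition can be illustrated with the Hopf normal form; this is a toy model, not the authors' mean-field cortical-column model:

```python
import numpy as np

def hopf_radius(mu, n_steps=200_000, dt=1e-4, z0=0.1 + 0.0j):
    """Hopf normal form dz/dt = (mu + i*omega) z - |z|^2 z, integrated
    with Euler steps. For mu > 0 the state settles on a limit cycle of
    radius sqrt(mu) (oscillatory, slow-wave-like regime); for mu < 0
    the origin is a stable fixed point (awake-like regime)."""
    omega = 2.0 * np.pi
    z = z0
    for _ in range(n_steps):
        z = z + dt * ((mu + 1j * omega) * z - abs(z) ** 2 * z)
    return abs(z)
```

A small change in the bifurcation parameter around zero thus switches the system between oscillation and a steady high-activity state, mirroring the claimed sensitivity to small parameter changes.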

This observation gives important information about the intrinsic dynamical properties of the system, suggesting that it does not respond passively but is rather a strongly nonlinear component, capable of drastically changing its dynamics under small changes of the relevant parameters. This can provide a computational advantage in terms of the capability to produce a rich repertoire of network states during wakefulness.


Supported by EC FET Flagship HBP SGA1 (720270) to MM and MVSV


1. Sanchez-Vives MV, & Mattia M: Slow wave activity as the default mode of the cerebral cortex. Arch Ital Biol 2014, 152:147–155.

2. Capone C, Mattia M: Speed hysteresis and noise shaping of traveling fronts in neural fields: role of local circuitry and nonlocal connectivity. Scientific Reports 2016, 7:39611.

3. Bettinardi RG, Tort-Colet N, Ruiz-Mejias M, Sanchez-Vives MV, & Deco G: Gradual emergence of spontaneous correlated brain activity during fading of general anesthesia in rats: evidences from fMRI and local field potentials. Neuroimage 2015, 114:185–198.

4. Deco G, Hagmann P, Hudetz AG, Tononi G: Modeling resting-state functional networks when the cortex falls asleep: local and global changes. Cereb Cortex 2014, 24(12):3180–3194.

5. Steyn-Ross ML, Steyn-Ross DA, Sleigh JW: Interacting Turing-Hopf instabilities drive symmetry-breaking transitions in a mean-field model of the cortex: a mechanism for the slow oscillation. Phys Rev X 2013, 3(2):021005.

6. Capone C, Tort-Colet N, Mattia M, Sanchez-Vives MV (2016) Multistable attractor dynamics in columnar cortical networks transitioning from deep anesthesia to wakefulness. Bernstein Conference 2016.

P174 Are receptive fields in visual cortex quantitatively consistent with efficient coding?

Ali Almasi1,2, Shaun L. Cloherty4, David B. Grayden2, Yan T. Wong3,4, Michael R. Ibbotson1,5, Hamish Meffin1,5

1National Vision Research Institute, Australian College of Optometry, Melbourne, Australia; 2NeuroEngineering Laboratory, Dept. Biomedical Eng., University of Melbourne, Melbourne, Australia; 3Dept. of Physiology, Monash University, Melbourne, Australia; 4Dept. of Electrical & Computer Systems Eng., Monash University, Melbourne, Australia; 5ARC Centre of Excellence for Integrative Brain Function, University of Melbourne, Melbourne, Australia

Correspondence: Hamish Meffin

BMC Neuroscience 2017, 18 (Suppl 1):P174

Numerous studies, across different sensory modalities, suggest that the neural code employed in early stages of the cortical hierarchy can be explained in terms of Efficient Coding. This principle states that information is represented in a neural population so as to minimize redundancy. This is achieved when the features to which neurons are tuned occur in a statistically independent fashion in the sensory environment. The “statistically independent features” can be rigorously identified through methods of statistical inference, and can be associated with a cell’s receptive field (RF). Several studies using these methods have shown a qualitative similarity between predicted RFs and those found in primary visual cortex, for simple and complex cells (with linear and non-linear RF structures, respectively).

Recent methods allow direct experimental estimation of RFs. Using these methods, we report on the first quantitative evaluation of the Efficient Coding Hypothesis at the level of RF structures, including both simple and complex cells.

Experimental RF structures were estimated from recordings of single-units in the primary visual cortex of anaesthetized cats in response to presentation of Gaussian white noise. RFs were estimated from recordings assuming a General Quadratic Model for spike rate and performing maximum likelihood estimation on the response given the stimulus. Theoretical Efficient Coding RF structures were inferred by performing unsupervised learning on a set of natural images, under the assumption of Efficient Coding that evoked spike rates were statistically independent and sparsely distributed, and using the same General Quadratic Model as for the experimental RFs.
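The estimation step can be sketched as follows: under a general quadratic model the spike rate is an exponential of a quadratic form of the stimulus, and candidate parameters are scored by the Poisson log-likelihood of the observed spike counts. This is a minimal illustration with synthetic white-noise stimuli, not the authors' fitting code; the dimensionality and parameter scales are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 16  # stimulus dimensionality (e.g. a small pixel patch; illustrative)

def gqm_rate(X, Q, b, c):
    """Spike rate of a general quadratic model with exponential nonlinearity."""
    quad = np.einsum('ni,ij,nj->n', X, Q, X)   # quadratic term x^T Q x per frame
    return np.exp(quad + X @ b + c)

def poisson_loglik(counts, rate):
    """Log-likelihood of spike counts under an inhomogeneous Poisson model."""
    return float(np.sum(counts * np.log(rate) - rate))

# synthetic experiment: Gaussian white-noise stimuli, spikes from a known GQM
X = rng.standard_normal((5000, D))
A = rng.standard_normal((D, D))
Q_true = 0.05 * (A + A.T) / 2                  # symmetric quadratic kernel
b_true = 0.1 * rng.standard_normal(D)
c_true = -1.0
counts = rng.poisson(gqm_rate(X, Q_true, b_true, c_true))

# maximum-likelihood fitting climbs this objective; here we only check that the
# generating parameters outscore a constant-rate null model
ll_true = poisson_loglik(counts, gqm_rate(X, Q_true, b_true, c_true))
ll_null = poisson_loglik(counts, np.full(len(X), counts.mean()))
```

In practice both the experimental fit and the Efficient Coding inference would maximize such an objective over Q, b and c (e.g. by gradient ascent), and the recovered quadratic kernel yields the sub-RFs analyzed below.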

We recovered spatial RF structures from 94 well isolated single-units in 3 cats, of which 26 were classified as simple cells, 38 as complex cells and 30 as a mixed cell class.

The results confirmed the qualitative similarity of theoretical RF structures from Efficient Coding with those estimated experimentally. Quantitatively, however, a number of discrepancies were observed alongside the similarities. (1) RF orientation tuning was wider experimentally than theoretically (bandwidth was most frequently between 60° and 90° experimentally, while theoretically it was mostly between 30° and 60°). (2) Spatial frequency tuning was wider experimentally than theoretically (bandwidth was most frequently 2 ± 0.5 octaves experimentally, but only 1 ± 0.5 octaves theoretically). (3) For cells with more than one sub-RF it was possible to compare the tuning to orientation and spatial frequency between different sub-RFs. The difference in orientation tuning between sub-RFs showed that experimentally around 60% of cells had precisely matched orientation preferences (<15°), while in the theoretical population this proportion dropped to around 40%. (4) Experimentally, the spatial frequency preferences of sub-RFs in the same cell were also tightly matched for the majority of cells (<0.5 octaves), with a similar result in the theoretical population (<0.5 octaves). (5) Finally, the spatial phase relationships of sub-RFs were compared: experimentally, a large majority (80%) of cells had two quadratic sub-RFs that were 90° ± 15° out of phase. In the theoretical population, this spatial phase relationship was common but less prevalent (50%).

The quantitative discrepancies we found were robust to changes in meta-parameters, such as the degree of image compression in pre-processing or the source of natural images. The results suggest that the experimental RFs are sub-optimal in terms of coding efficiency. However, it is important to note that we used a deterministic model of spike rate in response to an image stimulus: a stochastic model is more realistic and may limit the coding efficiency of the theoretical result, bringing it in closer quantitative agreement with experiment.


AA acknowledges a Melbourne University Postgraduate Research Award. HM and MI acknowledge support from the Australian Research Council Centre of Excellence for Integrative Brain function.

P175 Cholinergic Modulation of DG-CA3 microcircuit dynamics and function

Luke Y. Prince1, Krasimira Tsaneva-Atanasova2,3, Jack R. Mellor1

1Centre for Synaptic Plasticity, School of Physiology, Pharmacology, and Neuroscience, University of Bristol, Bristol, BS8 1TD, UK; 2Department of Mathematics, College of Engineering, Mathematics and Physical Sciences, University of Exeter, Exeter, UK, EX4 4QF; 3EPSRC Centre for Predictive Modelling in Healthcare, University of Exeter, Exeter, UK, EX4 4QJ

Correspondence: Luke Y. Prince

BMC Neuroscience 2017, 18 (Suppl 1):P175

Dentate gyrus granule cells provide powerful feedforward excitatory drive onto a local circuit of CA3 pyramidal cells and inhibitory interneurons, and are believed to selectively activate subsets of pyramidal cells in the CA3 recurrent network for encoding and recall of memories. Cholinergic receptors provide a key means to modulate this circuit, increasing cellular excitability and altering synaptic release, but the combined action of these changes on information processing between the dentate gyrus and CA3 remains unknown. We recorded evoked monosynaptic EPSCs and disynaptic IPSCs in CA3 pyramidal cells in response to a range of stimulation frequencies and patterns, in the presence and absence of the cholinergic receptor agonist carbachol (5 μM). We found that carbachol strongly reduced IPSC amplitudes but only mildly reduced EPSC amplitudes. The short-term plasticity dynamics of these responses were used to constrain a computational model of mossy fibre driven transmission across a range of stimulation patterns. This model was then used to analyse how acetylcholine influences encoding and recall of neuronal ensembles driven by mossy fibre input in a spiking neural network model of CA3. We found that acetylcholine lowers the requirements for encoding neuronal ensembles and increases memory storage in CA3.
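The kind of short-term plasticity model typically constrained by such paired-recording data can be sketched with Tsodyks-Markram resource/utilization dynamics. The parameter values below are illustrative, not the fitted ones from the study.

```python
import numpy as np

def tm_amplitudes(isis_ms, U, tau_rec, tau_facil):
    """Relative synaptic response amplitudes for a spike train under
    Tsodyks-Markram short-term plasticity: u = utilization (facilitation),
    R = available resources (depression). isis_ms: inter-spike intervals."""
    u, R = 0.0, 1.0
    amps = []
    for dt in [None] + list(isis_ms):
        if dt is not None:                       # relax between spikes
            u *= np.exp(-dt / tau_facil)         # facilitation decays
            R = 1.0 - (1.0 - R) * np.exp(-dt / tau_rec)  # resources recover
        u += U * (1.0 - u)                       # facilitation jump at spike
        amps.append(u * R)                       # response amplitude ~ u * R
        R *= 1.0 - u                             # resource depletion at spike
    return amps

train = [50.0] * 4                               # a 20 Hz train (ISIs in ms)
dep = tm_amplitudes(train, U=0.5, tau_rec=500.0, tau_facil=50.0)   # depressing
fac = tm_amplitudes(train, U=0.1, tau_rec=50.0,  tau_facil=500.0)  # facilitating
```

Fitting would adjust (U, tau_rec, tau_facil) separately to the EPSC and IPSC trains with and without carbachol; in this model family, a drug-induced reduction of release probability corresponds to lowering U.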

P176 Subthalamic nucleus low frequency fluctuations carry information about future economic decisions in parkinsonian gamblers

Alberto Mazzoni1†, Manuela Rosa2†, Jacopo Carpaneto1, Luigi M. Romito3, Alberto Priori2,4, Silvestro Micera1,5

1Translational Neural Engineering, The Biorobotics Institute, Scuola Superiore Sant’Anna, Pontedera, 56025, Italy; 2Clinical Center for Neurostimulation, Neurotechnology and Movement Disorders Fondazione IRCCS Ca’ Granda Ospedale Maggiore Policlinico, Milan, 20122, Italy; 3Movement Disorders Department, Neurological Institute Carlo Besta, Milan, 20133, Italy; 4Department of Health Sciences, University of Milan & ASST Santi Paolo e Carlo, Milan, 20142, Italy; 5Bertarelli Foundation Chair in Translational NeuroEngineering, Institute of Bioengineering and Center for Neuroprosthetics, Ecole Polytechnique Federale De Lausanne, Lausanne, CH-1015, Switzerland

Correspondence: Alberto Mazzoni

†Equal first author contribution

BMC Neuroscience 2017, 18 (Suppl 1):P176

Dopamine replacement therapy for the treatment of Parkinson Disease (PD) has been related to an increased risk of Impulse Control Disorders (ICD), such as Gambling Disorder (GD) [1]. Previous experimental and modeling studies [2] have shown a link between ICD and specific activity of the subthalamic nucleus (STN), a standard target for Deep Brain Stimulation (DBS) therapy in advanced PD. Several brain areas involved in decision making, impulsivity and reward valuation, such as the prefrontal cortex and striatum, are interconnected with the STN, and activity in these areas might be modulated by STN DBS. Understanding the relationship between STN functioning and ICD would help develop better therapies for PD while shedding light on the mechanisms of human decision making.

To study how STN activity is modulated by gambling, we analyzed low-frequency ([1–12] Hz) fluctuations of STN LFPs recorded by DBS electrodes from PD patients during an economic decision making task. All patients were under dopamine replacement therapy, and half of them were affected by GD. In the task, patients were asked to decide between a high risk (HR) and a low risk (LR) option, the former associated with a negative expected value but with a high reward in case of a win. Reaction times were strongly affected by trial type, with GD patients quicker in taking HR decisions and non-GD patients quicker in taking LR decisions, suggesting that the decision is actually made before the options are presented. Analyzing low frequency STN LFPs, we found that the amplitude of fluctuations recorded during specific intervals preceding option presentation carried significant information about future choices on single trials in patients affected by GD, but not in unaffected patients.
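Extracting the amplitude of low-frequency LFP fluctuations in a given interval can be sketched as band-pass filtering plus a Hilbert envelope. The signal below is synthetic, and the filter order and exact pre-processing of the study are not implied; only the [1–12] Hz band is taken from the abstract.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1000.0                                    # sampling rate (Hz), illustrative
t = np.arange(0, 10.0, 1 / fs)
rng = np.random.default_rng(0)

# synthetic LFP: background noise plus a 6 Hz burst between 4 s and 6 s
lfp = 0.1 * rng.standard_normal(t.size)
burst = (t >= 4.0) & (t < 6.0)
lfp[burst] += np.sin(2 * np.pi * 6.0 * t[burst])

b, a = butter(3, [1.0, 12.0], btype='bandpass', fs=fs)
envelope = np.abs(hilbert(filtfilt(b, a, lfp)))   # low-frequency amplitude

# mean amplitude inside vs. outside the oscillatory interval
amp_burst = envelope[(t >= 4.5) & (t < 5.5)].mean()
amp_base = envelope[(t >= 1.0) & (t < 3.0)].mean()
```

Single-trial measures of this kind, computed in windows preceding option presentation, are what a decoding analysis of future choices would operate on.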

These results complement previous studies on the role of STN activity in inhibiting impulsive behavior. Beta-range STN fluctuations were found to be modulated by the level of conflict in decisions [3], while our results suggest that the lower frequencies, which are functionally correlated with different cortical areas [4], instead play a role in preventing pathological risk attraction.


This work was supported by institutional funds from Scuola Superiore Sant’Anna, by the Italian Ministry of Health (GR-2009-1594645 grant), by the Aldo Ravelli Donation for Research on Parkinson Disease, by the Bertarelli Foundation, and by institutional funds from École Polytechnique Federale de Lausanne.


1. Weintraub D, David AS, Evans AH, Grant JE, Stacy M: Clinical spectrum of impulse control disorders in Parkinson’s disease. Mov. Disord. 2015 30: 121–127.

2. Frank MJ, Samanta J, Moustafa AA, Sherman SJ: Hold Your Horses: Impulsivity, Deep Brain Stimulation, and Medication in Parkinsonism. Science 2007 318: 1309–1312.

3. Brittain JS, Watkins KE, Joundi RA, Ray NJ, Holland P, Green AL, Aziz TZ, Jenkinson N: A Role for the Subthalamic Nucleus in Response Inhibition during Conflict. J Neurosci 2012, 32:13396–13401.

4. Herz DM, Tan H, Brittain JS, Fischer P, Cheeran B, Green AL, FitzGerald J, Aziz TZ, Ashkan K, Little S, et al.: Distinct mechanisms mediate speed-accuracy adjustments in cortico-subthalamic networks. eLife 2017, 6.

P177 Data-driven computational modeling of CA1 hippocampal principal cells and interneurons

Rosanna Migliore1, Carmen Alina Lupascu1, Francesco Franchina1, Luca Leonardo Bologna1, Armando Romani2, Christian Rössert2, Sára Saray3, Jean-Denis Courcol2, Werner Van Geit2, Szabolcs Káli3, Alex Thomson4, Audrey Mercer4, Sigrun Lange4,5, Joanne Falck4, Eilif Muller2, Felix Schürmann2, and Michele Migliore1

1Institute of Biophysics, National Research Council, Palermo, Italy; 2Blue Brain Project, École Polytechnique Fédérale de Lausanne Biotech Campus, Geneva, Switzerland; 3Institute of Experimental Medicine, Hungarian Academy of Sciences, Budapest, Hungary; 4University College London, London, United Kingdom; 5University of Westminster, London, United Kingdom

Correspondence: Rosanna Migliore

BMC Neuroscience 2017, 18 (Suppl 1):P177

We present and discuss data-driven models of biophysically detailed hippocampal CA1 pyramidal cells and interneurons of the rat. The results have been obtained using the Brain Simulation Platform (BSP) of the Human Brain Project and two open-source packages, the Electrophys Feature Extraction Library (eFEL) and the Blue Brain Python Optimization Library (BluePyOpt) [1]. These have been integrated into the BSP in an intuitive graphical user interface that guides the user through all steps, from selecting the experimental data that constrain the model, to running the optimization that generates a model template and, finally, to exploring the model with in silico experiments. Electrophysiological features were extracted from somatic traces obtained from intracellular paired recordings performed with sharp electrodes on CA1 principal cells and interneurons with classical accommodating (cAC), bursting accommodating (bAC) and classical non-accommodating (cNAC) firing patterns. The extracted features, together with user selections for realistic morphological reconstructions and ion channel kinetics, were then used to automatically configure and run BluePyOpt on the Neuroscience Gateway and/or on one of the HPC systems supporting BSP operations, in this case CINECA (Bologna, Italy) and JSC (Jülich, Germany). The resulting optimized ensembles of peak conductances for the ionic currents were used to explore and validate model behavior during interactive in silico experiments carried out within the HBP Collaboratory. This modelling effort has been undertaken in the context of the Human Brain Project and constitutes one of the major steps in the workflow that is being used to build a cellular-level model of a rodent hippocampus.
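The optimization target in such a workflow is, in essence, a feature-distance score: each model feature is compared with the experimental mean and standard deviation, and the optimizer searches conductance space to minimize the summed z-scores. The toy version below is not the BluePyOpt API; the feature names and numbers are invented for illustration (real ones would come from eFEL applied to the recorded traces).

```python
# Hypothetical experimental feature statistics (mean, std); illustrative only.
target = {
    "mean_frequency": (12.0, 2.0),   # Hz
    "AP_amplitude": (80.0, 5.0),     # mV
    "voltage_base": (-65.0, 3.0),    # mV
}

def feature_score(model_features, target):
    """Sum of absolute z-scores of model features vs. experimental statistics,
    the kind of objective a BluePyOpt-style optimizer minimizes."""
    return sum(abs(model_features[name] - mean) / std
               for name, (mean, std) in target.items())

# two candidate parameter sets, summarized by the features they produce
close_model = {"mean_frequency": 11.0, "AP_amplitude": 82.0, "voltage_base": -66.0}
far_model = {"mean_frequency": 25.0, "AP_amplitude": 60.0, "voltage_base": -50.0}
```

An evolutionary optimizer would repeatedly simulate candidate conductance sets, extract their features, and keep the candidates with the lowest score.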


This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 720270


1. Van Geit W, Gevaert M, Chindemi G, Rössert C, Courcol J-D, Muller EB, Schürmann F, Segev I and Markram H (2016) BluePyOpt: Leveraging Open Source Software and Cloud Infrastructure to Optimise Model Parameters in Neuroscience. Front. Neuroinform. 10:17.

P178 The interplay between basal ganglia and cerebellum in motor adaptation

Dmitrii Todorov, Robert Capps, William Barnett, Yaroslav Molkov

Department of Mathematics and Statistics, Georgia State University, Atlanta, Georgia 30303-3083, USA

Correspondence: Dmitrii Todorov

BMC Neuroscience 2017, 18 (Suppl 1):P178

It is widely accepted that the cerebellum and basal ganglia (BG) play key roles in motor adaptation (in error-based and non-error-based adaptation, respectively) [1]. However, despite a considerable number of studies, the interactions between the BG and cerebellum are not completely understood [1]. In particular, it is experimentally difficult to dissociate the adaptation performed by the cerebellum from that performed by the BG. To do so, some studies [2] introduced perceptual perturbations that were suggested to impair the cerebellum's ability to adapt to errors and thus to promote BG-based mechanisms. To our knowledge, no mathematical model exists that explains the conditions in which visual perturbations make reinforcement learning in the BG the main mechanism of motor adaptation.

We have developed a model that integrates a phenomenological representation of the cerebellum with a previously published firing rate-based description of the BG network [3], and mimics trial-to-trial motor adaptation in 2D reaching arm movements. The cerebellum is implemented as an artificial neural network that corrects, via supervised learning, the motor program descending from motor cortex to the spinal cord.

Figure 1 below shows the model architecture. A stimulus signal arrives from prefrontal cortex (PFC) and is sent to the direct and indirect pathways of the BG. The strength of the PFC → BG connections changes through reinforcement learning mediated by dopaminergic input from the substantia nigra pars compacta (SNc), whose activity is defined by the reward prediction error (RPE) signal. The direct and indirect pathways converge at the globus pallidus internus (GPi)/substantia nigra pars reticulata (SNr), which together project to premotor cortex (PMC)/thalamus to perform action selection. There are also direct PFC → PMC connections representing habitual cue-action associations. The PMC/thalamus then projects to the motor cortex (MC) and to the cerebellum. The cerebellar output represents a correction, which is added to the motor command descending from the MC to the spinal cord. This correction is calculated as a linear transformation of the motor command. The transformation matrix is updated by the supervised learning algorithm, using the vector error provided by visual feedback. The corrected signal goes to the spinal cord neuronal network that controls a two-joint arm performing center-out reaching movements. The perceived endpoint of the movement is used to compute the vector error and/or the reward.

Figure 1. Model architecture
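The cerebellar correction described above, a learned linear transformation of the motor command updated from the visual vector error, can be sketched for a toy 2D "arm" under a visuomotor rotation. The rotation angle, the learning rate, the identity arm map, and the LMS-style update are illustrative assumptions, not components of the actual model.

```python
import numpy as np

theta = np.deg2rad(30.0)                       # hypothetical visuomotor rotation
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

W = np.zeros((2, 2))                           # cerebellar correction matrix
eta = 0.1                                      # supervised learning rate
targets = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]

errors = []
for trial in range(200):                       # alternating center-out targets
    tgt = targets[trial % 2]
    m = tgt                                    # habitual command: aim at target
    endpoint = R @ (m + W @ m)                 # perturbed, corrected movement
    err = endpoint - tgt                       # vector error (visual feedback)
    W -= eta * np.outer(err, m)                # LMS-style update of correction
    errors.append(np.linalg.norm(err))
```

Across trials W converges toward compensating the rotation, so the endpoint error shrinks; corrupting `err` (as in the perturbed-perception condition) correspondingly corrupts the learned correction, which is the regime where reinforcement-based adaptation must take over.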

Our model simulations suggest that when the perception of the vector error provided to the cerebellum is significantly perturbed, the faulty cerebellar corrections adversely affect or even completely destroy motor adaptation. We speculate, and show via simulations, that error-based learning in the cerebellum has an adaptive critic component which effectively suppresses error-based mechanisms to enable reinforcement-based motor adaptation.


1. Izawa J, Shadmehr R. Learning from sensory and reward prediction errors during motor adaptation. PLoS Comput Biol. 2011; 7(3):e1002012.

2. Gutierrez-Garralda JM, Moreno-Briseño P, Boll MC, Morgado-Valle C, Campos-Romo A, Diaz R, Fernandez-Ruiz J: The effect of Parkinson’s disease and Huntington’s disease on human visuomotor learning. European Journal of Neuroscience 2013, 38(6):2933–2940.

3. Kim T, Hamade KC, Todorov D, Barnett WH, Capps RA, Latash EM, Markin SN, Rybak IA, Molkov YI. Reward based motor adaptation mediated by basal ganglia. Frontiers in Computational Neuroscience. 2017;11.

P179 Microscopic and macroscopic dynamics of neural populations with delays

Federico Devalle1,2, Diego Pazó3, Ernest Montbrió1

1Center for Brain and Cognition, Universitat Pompeu Fabra, 08018 Barcelona, Spain; 2Department of Physics, Lancaster University, LA1 4YB Lancaster, UK; 3Instituto de Fisica de Cantabria (IFCA), CSIC-Universidad de Cantabria, 39005 Santander, Spain

Correspondence: Federico Devalle

BMC Neuroscience 2017, 18 (Suppl 1):P179

Bridging descriptions of brain activity across different scales is a major challenge for theoretical neuroscience. Numerous experimental techniques are available to measure brain activity, ranging from single cell recordings to population measurements of the average activity of large ensembles of neurons. It is often in these population-level recordings (e.g. EEG, MEG) that important phenomena are observed. A particularly relevant example is gamma oscillations, temporally coherent activity with frequencies between 30 and 100 Hz. A large body of experimental and computational work indicates that the interplay between synaptic processing and recurrent inhibition is the key ingredient for generating such oscillations, in a mechanism commonly referred to as interneuronal gamma (ING) [1, 2]. Here, we analyse the dynamics of a network of quadratic integrate-and-fire neurons with time-delayed synaptic interactions, both in the excitable and in the self-oscillatory regime. Time delays have indeed been shown to approximate the effect of synaptic kinetics [3]. Using the so-called Lorentzian ansatz [4, 5], we derive a set of two delayed firing rate equations (FREs). Due to their analytical tractability, the FREs allow us to find exact stability boundaries for the parameter regions of oscillatory (collective synchrony, CS) and asynchronous dynamics. Moreover, for inhibitory coupling, we observe a more complex oscillatory state, the so-called quasiperiodic partially synchronized state (QPS). Here, neurons are quasiperiodic and have a mean frequency different from the global frequency of the entire population, which corresponds to fast brain oscillations (f ~ 80 Hz). Interestingly, at the macroscopic level this state strongly resembles the sparsely synchronized state observed in networks of leaky integrate-and-fire neurons subjected to strong recurrent inhibition and noise [6].
However, microscopically these two states have qualitatively different dynamics, suggesting a dichotomy between microscopic and macroscopic dynamics. For a certain region of parameters, the QPS also coexists with the CS. Moreover, upon sufficiently increasing inhibition, the QPS undergoes a series of period-doubling bifurcations that eventually leads to chaos. Notably, only the collective dynamics is chaotic, while microscopically neurons are non-chaotic. Finally, we find that while excitation always leads to collective synchronous oscillations, inhibition fails to synchronize neural activity when a precise degree of heterogeneity is exceeded, consistent with previous numerical studies of heterogeneous inhibitory spiking neural networks [7].
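The FREs obtained from the Lorentzian ansatz take the form ṙ = Δ/π + 2rv, v̇ = v² + η̄ − (πr)² + Jr(t−D), for unit membrane time constant, coupling strength J, delay D, and a Lorentzian input distribution with center η̄ and half-width Δ. A minimal Euler integration with a ring buffer for the delay can be sketched as follows; the parameter values are illustrative, not those used in the study.

```python
import numpy as np

def simulate_fre(J=-5.0, delta=1.0, eta_bar=5.0, D=0.0, T=50.0, dt=1e-3):
    """Euler integration of the delayed QIF firing rate equations:
         dr/dt = delta/pi + 2*r*v
         dv/dt = v**2 + eta_bar - (pi*r)**2 + J*r(t - D)
    Returns the final (r, v)."""
    n_delay = max(1, int(round(D / dt)))       # delay line length in steps
    buf = np.full(n_delay, 0.5)                # constant history r(t < 0)
    r, v = 0.5, -0.3                           # start near the fixed point
    for k in range(int(T / dt)):
        r_delayed = buf[k % n_delay]           # value stored n_delay steps ago
        buf[k % n_delay] = r                   # store current r for later reads
        dr = delta / np.pi + 2.0 * r * v
        dv = v**2 + eta_bar - (np.pi * r) ** 2 + J * r_delayed
        r, v = r + dt * dr, v + dt * dv
    return r, v
```

With D = 0 the inhibitory network settles into the asynchronous state, the fixed point where both right-hand sides vanish; nonzero delays are what open the route to the collective oscillations, QPS, and chaotic regimes described above.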


We acknowledge support by MINECO (Spain) under project No. FIS2014-59462-P, and the project COSMOS of the European Union’s Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No. 642563.


1. Whittington MA, Traub RD, Jefferys JG: Synchronized oscillations in interneuron networks driven by metabotropic glutamate receptor activation. Nature 1995, 373:612–615.

2. Whittington MA, Traub RD, Kopell N, Ermentrout B, Buhl EH: Inhibition-based rhythms: experimental and mathematical observations on network dynamics. Int J Psychophysiol 2000, 38:315–336

3. Roxin A, Montbrió E: How effective delays shape oscillatory dynamics in neuronal networks. Physica D 2011, 240: 323–345.

4. Montbrió E, Pazó D, Roxin A: Macroscopic description for Networks of Spiking Neurons. Phys Rev X 2015, 5: 021028

5. Pazó D, Montbrió E: From Quasiperiodic Partial Synchronization to Collective Chaos in Populations of Inhibitory Neurons with Delay. Phys Rev Lett 2016, 116: 238101

6. Brunel N, Hakim V: Fast global oscillations in networks of integrate-and-fire neurons with low firing rates. Neural Comput 1999, 11:1621.

7. Wang XJ, Buzsáki G: Gamma Oscillations by Synaptic inhibition in a Hippocampal Interneuronal Network Model. J Neurosci 1996, 16(20):6402–6413.

P180 Motivation signal in anterior cingulate cortex during economic decisions

Gabriela Mochol1, Habiba Azab2, Benjamin Y. Hayden2, Rubén Moreno-Bote1

1Center for Brain and Cognition and Department of Information and Communications Technologies, University Pompeu Fabra, Barcelona, 08005, Spain; 2Department of Brain and Cognitive Sciences and Center for Visual Sciences, University of Rochester, Rochester, NY 14618, USA

Correspondence: Gabriela Mochol

BMC Neuroscience 2017, 18 (Suppl 1):P180

Anterior cingulate cortex (ACC) plays regulatory and cognitive roles. Its functions are associated with conflict and performance monitoring, regulation of strategy, and response selection, all of which depend on reward monitoring and anticipation [1]. It has been shown previously that, when the reward was certain and its proximity was cued, the animal's error rate decreased with the number of trials remaining before the reward [2]. Concurrently, the firing rate of ACC neurons gradually increased or decreased along with reward expectancy. This occurred when the reward was certain and correct decisions could only bring the animal closer to the reward. However, when certainty about the outcome was removed and no notion of reward proximity was provided, the progressive modulation of behavior and ACC activity disappeared.

Here we tested whether such a motivation signal can also be found when the reward is no longer certain and the animal's choices bring the reward closer or push it further away, while information about reward proximity remains available - a situation more common in the economic decisions of everyday life. We recorded single unit activity from dorsal ACC while the monkey performed a token gambling task. On each trial, the monkey gambled to gain a certain number of tokens, but could also lose tokens. The collection of six tokens resulted in delivery of a jackpot reward. The number of collected tokens was displayed on the monitor and known to the animal. The animal learnt the task and exhibited risk-seeking behavior, as previously reported [3]. Analysis of the behavioral data revealed that performance (percent of correct responses) depended on the number of previously collected tokens. The relation was not monotonic, with a drop in performance after reward delivery. At the same time, a significant fraction of the recorded neurons exhibited tuning to the number of previously collected tokens.

Our preliminary results suggest that ACC monitors rewards in risky conditions, and that neuronal signals could be directly related to the motivation of the animal.


The Spanish Ministry of Economy and Competitiveness IJCI-2014-21937 grant (to G. M.); the Marie Curie FP7-PEOPLE-2010-IRG grant PIRG08-GA-2010-276795, and the Spanish Ministry of Economy and Competitiveness PSI2013-44811-P grant (to R. M. B.)


1. Heilbronner SR, Hayden BY: Dorsal Anterior Cingulate Cortex: A Bottom-Up View. Annu Rev Neurosci 2016, 39: 149–170.

2. Shidara M, Richmond BJ: Anterior Cingulate: Single Neuronal Signals Related to Degree of Reward Expectancy. Science 2002, 296(5573):1483–1490.

3. Azab H, Hayden BY: Shared roles of dorsal and subgenual anterior cingulate cortices in economic decisions. bioRxiv 2016.

P181 A simple computational model of altered neuromodulation in cortico-basal ganglia dynamics underlying bipolar disorder

Pragathi Priyadharsini Balasubramani1, Srinivasa V. Chakravarthy2, Vignayanandam R. Muddapu2

1Brain and Cognitive Sciences, University of Rochester, Rochester, New York 14627, USA; 2Bhupat and Jyoti Mehta School of Biosciences, Department of Biotechnology, IIT- Madras, Chennai, TN, India

Correspondence: Srinivasa V. Chakravarthy

BMC Neuroscience 2017, 18 (Suppl 1):P181

Bipolar disorder (BPD) is characterized by oscillations alternating between manic and depressive episodes, causing swings in mood. The length of an episode in a patient’s mood cycle (the time period) can vary from hours to years. Medications popularly used for stabilizing mood include selective serotonin reuptake inhibitors and lithium therapy. This computational study focuses on serotonergic system dysfunction and, in particular, on understanding its contribution to the dynamics of the cortico-basal ganglia network (CBGN) underlying the stability and recurrence of moods. To this end, we model the disorder in a decision-making framework that chooses between actions of positive or negative affect. We propose a computational model that explores the effects of impaired serotonergic neuromodulation on the dynamics of the CBGN and relates this impairment to the manic and depressive episodes of BPD. The proposed model of BPD is derived from an earlier model that describes the roles of dopamine and serotonin in the action selection dynamics of the CBGN. In that model, rewarding actions are selected based on the Utility function, which combines Value and Risk functions as follows (eqn. 1):
$$ U_t(s_t, a_t) = Q_t(s_t, a_t) - \alpha \, \mathrm{sign}\big(Q_t(s_t, a_t)\big) \sqrt{h_t(s_t, a_t)} $$
where U, Q and h represent Utility, Value and Risk respectively, for a given state, s, and action, a, at time, t. The parameter α, which represents risk preference, is associated with serotonin action in CBGN. Value and Risk are trained by Reinforcement Learning using the Temporal Difference (TD) error, which represents dopamine in CBGN. The lumped model was later extended to a detailed network model of BG. In those models, α was a constant, whereas in the current model it varies as per the following dynamics:
$$ \dot{\alpha} = \tau_{\alpha} \left( -\alpha + A_r \bar{r} + \alpha_k \right) $$
$$ \dot{\bar{r}} = \tau_r \left( r - \bar{r} \right) $$
The variable $\bar{r}$ tracks the average reward $r$ gained through time, and eqn. 2 defines the serotonin dynamics, with the constant $\alpha_k$ indicating the basal risk-sensitivity level (eqns. 2, 3). The parameter $A_r$ denotes the amplitude of reward sensitivity; the reward history is thus proposed to modulate the dynamics of $\alpha$. When the model is run in a simple two-armed bandit task - one arm rewarding (+ve reward) and the other punitive (−ve reward), each with probability 0.5 - the network under normal conditions shows a high preference for rewarding actions. For certain ranges of reward sensitivity ($A_r$) and basal risk sensitivity ($\alpha_k$), however, the model exhibits oscillations reminiscent of BPD mood oscillations (Fig. 1). Clinical and experimental evidence supports abnormalities in serotonin levels and reward sensitivity in BPD. Specifically, high reward sensitivity with medium levels of risk sensitivity (the serotonin activity correlate, as tonic/basal levels or as induced by medication) can trigger bipolar mood oscillations. This preliminary model can be extended to a detailed network model. Future work will include expanding the CBGN with neural models of the limbic system, and predicting plausible treatment strategies for effectively dealing with the onset and progression of BPD symptoms.

Figure 1. Action (positive or negative affect) selection in CBGN model: Yellow: rewarding (+ve) action selection as in healthy controls; Green: Oscillations between +ve and –ve actions as in BPD; Blue: -ve action selection as in depression
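Eqns. 1-3 can be wired into a minimal two-armed bandit simulation. The softmax selection rule, learning rates, and parameter values below are illustrative assumptions, not the authors' settings; with reward-insensitive serotonin dynamics (A_r = 0) the network settles on the rewarding arm, as in the healthy-control regime, while scanning A_r and α_k is what produces the oscillatory (BPD-like) regimes.

```python
import numpy as np

rng = np.random.default_rng(1)

def run_bandit(n_trials=1000, A_r=0.0, alpha_k=0.3,
               lr=0.1, beta=5.0, tau_a=0.1, tau_r=0.05):
    """Two-armed bandit with utility U = Q - alpha*sign(Q)*sqrt(h) (eqn. 1)
    and slow serotonin-like dynamics for alpha (eqns. 2-3). Arm 0 pays +1 and
    arm 1 pays -1, each with probability 0.5 (zero otherwise)."""
    Q, h = np.zeros(2), np.zeros(2)            # value and risk estimates
    alpha, r_bar = alpha_k, 0.0
    choices = []
    for _ in range(n_trials):
        U = Q - alpha * np.sign(Q) * np.sqrt(h)            # eqn. 1
        p0 = 1.0 / (1.0 + np.exp(-beta * (U[0] - U[1])))   # softmax selection
        a = 0 if rng.random() < p0 else 1
        reward = (1.0 if a == 0 else -1.0) if rng.random() < 0.5 else 0.0
        delta = reward - Q[a]                  # TD error (dopamine signal)
        Q[a] += lr * delta
        h[a] += lr * (delta**2 - h[a])         # risk as outcome variance
        r_bar += tau_r * (reward - r_bar)      # eqn. 3: average reward trace
        alpha += tau_a * (-alpha + A_r * r_bar + alpha_k)  # eqn. 2
        choices.append(a)
    return np.array(choices)

choices = run_bandit()
frac_rewarding = float(np.mean(choices[-200:] == 0))
```

In this healthy-control regime the late-trial preference for the rewarding arm is strong; sweeping `A_r` and `alpha_k` is how one would probe the model for the oscillatory alternation shown in Figure 1.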

P182 Theta/alpha coordination of pre-motor and parietal networks during free behavior in rats

Medorian D. Gheorghiu1, Bartul Mimica2, Jonathan Whitlock2, Raul C. Mureșan1

1Romanian Institute of Science and Technology, Cluj-Napoca, Cluj 400552, Romania; 2Centre for Neural Computation, Kavli Institute for Systems Neuroscience, Trondheim, NO-7491, Norway

Correspondence: Medorian D. Gheorghiu

BMC Neuroscience 2017, 18 (Suppl 1):P182

Activity of posterior parietal cortex (PPC) neurons exhibits self-motion tuning to both ongoing and impending movements, which may reflect behavioral planning [1]. A major input to PPC originates from the frontal medial agranular cortex (AGm), which is believed to be involved in complex motor planning. In the monkey, Pesaran and colleagues [2] showed that fronto-parietal coherence is stronger in free-choice tasks than in instructed trials, probably activating different decision-related circuits in these areas. We therefore hypothesize that in the rat the interaction between AGm and PPC may be instrumental in coordinating decision making and motor planning. Here, we investigate the coupling strength between PPC and AGm in the theta/alpha frequency band by computing pairwise spectral coherence and phase delays across the two areas (see Figure 1) during goal-directed spatial navigation in rats. Two tasks were implemented: an instructed or “known” task, in which the rat had to run straight to a fixed well named “Home”; and an “exploratory” task, in which the rat had to search for reward delivered in “Target” wells located randomly across the arena and then run back to the Home well.

Results: As the rat stopped running and started licking at the target well, there was an increase in theta coupling strength accompanied by a gradual decrease in frequency (Figure 1A). Using the phase information, we computed the delay of PPC relative to AGm. The delay decreased sharply from ~5.5 to ~2.5 ms when the rat arrived at the target location (see Figure 1B), and gradually reset during the last 5 s the rat spent at that location (see Figure 1D). As suggested by anatomical evidence, AGm led PPC, indicating a causal interaction in which AGm coordinates the activity in PPC.

Conclusions: Our results indicate a complex regulation of oscillatory behavior in PPC and AGm during free behavior in rats. In particular, a pronounced ongoing oscillation in the theta/alpha band is expressed throughout the task and seems to be coordinated across the two areas. AGm leads PPC and both the frequency of the oscillation and the time delay between the two areas change as a function of behavioral events.

Figure 1. A and C. Time-resolved spectral coherence between PPC and AGm in the 6–10 Hz frequency band, aligned to the initiation (A) and cessation (C) of licking at the target well. B and D. Phase delays (ms) between PPC and AGm aligned to the initiation (B) and cessation (D) of licking at the target well


Acknowledgements: This work was supported by CNCS - UEFISCDI (PN-II-RU-TE-2014-4-0406 and PN-III-P3-3.6-H2020-2016-0012).


1. Whitlock JR, Sutherland RJ, Witter MP, Moser MB, Moser EI: Navigating from hippocampus to parietal cortex. PNAS 2008, 105(39):14755–14762.

2. Pesaran B, Nelson MJ, Andersen RA: Free choice activates a decision circuit between frontal and parietal cortex. Nature 2008: 406–409.

P183 Information theoretic approach towards identifying changes in cellular-level functional connectivity and synchrony across animal models of schizophrenia

Jennifer L. Zick1,2, Kelsey Schultz4, Rachael K. Blackman1,2,3, Matthew V. Chafee1,3, Theoden I. Netoff1,4

1Graduate Program in Neuroscience, University of Minnesota, Minneapolis, MN 55455 USA; 2Medical Scientist Training Program (MD/PhD), University of Minnesota, Minneapolis, MN 55455 USA; 3Brain Sciences Center, VA Medical Center, Minneapolis, MN 55417 USA; 4Department of Biomedical Engineering, University of Minnesota, Minneapolis, MN 55455 USA

Correspondence: Jennifer L. Zick (

BMC Neuroscience 2017, 18 (Suppl 1):P183

Schizophrenia has long been described as a syndrome of disordered connectivity in the brain. While originally based on clinical symptomatology, neurophysiological evidence for this concept has been found in imaging studies in humans with schizophrenia. It has also been found, in postmortem brain tissue recovered from people with schizophrenia, that cortical pyramidal neurons have a reduced density of the synaptic spines necessary for cellular communication. However, functional evidence for disconnectivity at the level of local neuronal circuits is limited. To address this question, we characterized neuronal dynamics between groups of simultaneously recorded cortical neurons in data obtained from both primate and mouse models of schizophrenia. Neural data were obtained from multielectrode recording arrays inserted into the parietal and prefrontal cortices of macaque monkeys while the animals performed a cognitive control task that measures a specific cognitive impairment in human patients with schizophrenia. Phencyclidine (PCP), an NMDA receptor (NMDAR) antagonist that has long been used as a pharmacological model of psychosis, was administered systemically on alternating days with injections of saline. In the mouse experiments, analogous data were obtained from medial prefrontal cortex in awake head-fixed mice during locomotion. Data from Nestin-promoted Dgcr8+/− mutant mice (DiGeorge syndrome critical region 8; a gene strongly associated with schizophrenia in humans and shown to produce schizophrenia-like symptomatology in mice) are compared with those obtained from wildtype littermate controls.

Cross-correlation analysis was performed on spike trains from pairs of simultaneously recorded neurons to characterize changes in synchrony between conditions. In the primate neural data, cross-correlations in the control condition frequently displayed a prominent “zero-lag” peak, representing a large number of coincident action potentials between cells that could result from common input. In the phencyclidine condition, there was a reduction in synchronous firing between pairs of cells. A similar rate-independent reduction in precise synchrony was also found in medial prefrontal cortical neuronal ensemble recordings obtained from Dgcr8 mice as compared to controls, suggesting that this may be a consistent finding related to the root pathophysiology of schizophrenic processes.
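How a zero-lag peak from common input appears in a spike-train cross-correlogram can be illustrated with synthetic Bernoulli spike trains (all rates and bin sizes below are assumptions, not the recorded data):

```python
import numpy as np

rng = np.random.default_rng(1)
n_bins = 200_000                       # e.g. 1 ms bins; illustrative
p_common, p_indep = 0.01, 0.01

# A shared ("common input") spike source drives coincident spikes in both cells
common = rng.random(n_bins) < p_common
a = common | (rng.random(n_bins) < p_indep)
b = common | (rng.random(n_bins) < p_indep)

max_lag = 20
lags = np.arange(-max_lag, max_lag + 1)
# coincidence counts of a[t] with b[t + lag] at each lag
ccg = np.array([np.sum(a[max(0, -l): n_bins - max(0, l)] &
                       b[max(0, l): n_bins - max(0, -l)]) for l in lags])

peak_lag = int(lags[np.argmax(ccg)])
print(peak_lag)  # 0: the zero-lag peak produced by common input
```

Removing the shared source (or weakening it, as PCP is argued to do) flattens the central peak while leaving the firing rates unchanged.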

To characterize deficits in synaptic communication between neurons in the disease state, we employed higher-order transfer entropy (TE) metrics to identify pairs of cells that exhibited effective connectivity (Ito et al., 2011, PLOS ONE). Consistent with the disconnection hypothesis of schizophrenia, we found that acute administration of PCP resulted in a reduction in the percentage of cell pairs identified as significantly functionally connected by TE analysis, as well as a reduction in the overall distribution of population shared information. This result suggests a cellular basis both for synaptic disconnection and for the reduced information-processing capability seen in patients with schizophrenia performing prefrontal cortex-dependent tasks. Furthermore, this result is supported by a similar reduction in both the number of functionally connected cell pairs and the overall shared information in prefrontal cortex in the Dgcr8+/− mouse genetic model of schizophrenia.
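The idea behind TE can be sketched with a first-order estimator on binarized spike trains; this toy implementation on synthetic data is far simpler than the higher-order estimator of Ito et al., and all coupling probabilities are illustrative:

```python
import numpy as np

def transfer_entropy(x, y):
    """First-order transfer entropy (bits) from binary series x to y:
    the reduction in uncertainty about y[t+1] from knowing x[t],
    beyond what y[t] already provides."""
    xt, yt, y1 = x[:-1], y[:-1], y[1:]
    te = 0.0
    for a in (0, 1):          # x[t]
        for b in (0, 1):      # y[t]
            for c in (0, 1):  # y[t+1]
                p_abc = np.mean((xt == a) & (yt == b) & (y1 == c))
                if p_abc == 0:
                    continue
                p_c_given_ab = p_abc / np.mean((xt == a) & (yt == b))
                p_c_given_b = np.mean((yt == b) & (y1 == c)) / np.mean(yt == b)
                te += p_abc * np.log2(p_c_given_ab / p_c_given_b)
    return te

rng = np.random.default_rng(2)
n = 100_000
x = (rng.random(n) < 0.2).astype(int)
y = np.zeros(n, dtype=int)
# y fires preferentially one step after x does: directed X -> Y coupling
y[1:] = ((rng.random(n - 1) < 0.05) |
         ((x[:-1] == 1) & (rng.random(n - 1) < 0.5))).astype(int)

te_xy = transfer_entropy(x, y)
te_yx = transfer_entropy(y, x)
print(te_xy > te_yx)  # True: the estimator recovers the coupling direction
```

Unlike the symmetric cross-correlogram, TE is directional, which is what makes it usable as an effective-connectivity measure between cell pairs.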

In summary, these results demonstrate a reduction in both zero-lag synchrony and cellular-level functional connectivity in two very distinct animal models of schizophrenia. It is well known that coincident firing of action potentials facilitates connectivity between neurons, and that asynchrony results in disconnection. Thus, the results presented here support the notion that alterations in precise spike timing may be an underlying factor driving reduced functional connectivity in schizophrenia, providing a new mechanistic model of disease pathophysiology.


Acknowledgements: This material is based upon work supported by the NIH (R01 MH1107491; Chafee); NRSA F30 MH108205-01A1 (Zick); NSF CAREER Award (TIN); and Medical Scientist Training Program NIH T32-008244.

P184 Neural Suppression with Deep Brain Stimulation using a Linear Quadratic Regulator

Nicholas Roberts1, Vivek Nagaraj2, Andrew Lamperski3, Theoden I. Netoff1

1Department of Biomedical Engineering, University of Minnesota, Minneapolis, MN 55455, USA; 2Graduate Program in Neuroscience, University of Minnesota, Minneapolis, MN 55455, USA; 3Department of Electrical and Computer Engineering, University of Minnesota, Minneapolis, MN 55455, USA

Correspondence: Nicholas Roberts (

BMC Neuroscience 2017, 18 (Suppl 1):P184

Current neuromodulation techniques for seizure suppression, such as vagus nerve or deep brain stimulation, have shown some clinical efficacy. Yet their application is complicated by the large parameter space of electrical stimulation settings inherent to these systems. A physician must skillfully choose stimulation parameters such as frequency, amplitude, and pulse width for each individual patient in order to effectively reduce their incidence of seizures. We demonstrate an algorithm capable of automatically generating a continuous stimulation waveform to suppress neural activity and minimize total stimulation energy.

We treat the suppression of neural activity as a linear-quadratic-Gaussian (LQG) control problem. The resulting optimal controller consists of a Kalman filter and a linear-quadratic regulator (LQR). The effectiveness of the LQG controller in suppressing seizure biomarkers was first verified in a computational model of epilepsy called Epileptor [1], which simulates local field potential (LFP) recordings within a seizure focus. We built a model of the generated LFPs using the Ho-Kalman algorithm [2] for subspace system identification. The Kalman filter estimated the state of the system and a feedback control signal provided by the LQR successfully prevented seizures during stimulation, even while varying the Epileptor model parameters.
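The LQG structure (a steady-state Kalman predictor plus an LQR gain, each obtained from a discrete algebraic Riccati equation) can be sketched on a toy two-state system. Every matrix and noise level below is illustrative, not the identified Epileptor or LFP model:

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Toy linear model of the recorded dynamics: x[t+1] = A x[t] + B u[t] + w,
# y[t] = C x[t] + v. A is a mildly unstable spiral, standing in for a
# growing pathological oscillation.
A = np.array([[1.05, 0.3], [-0.3, 1.05]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

# LQR gain: trades suppression (Q) against total stimulation energy (R)
Q, R = np.eye(2), np.array([[1.0]])
P = solve_discrete_are(A, B, Q, R)
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

# Steady-state Kalman predictor gain for estimating x from the recording y
W, V = 0.01 * np.eye(2), np.array([[0.01]])
S = solve_discrete_are(A.T, C.T, W, V)
Lp = A @ S @ C.T @ np.linalg.inv(C @ S @ C.T + V)

rng = np.random.default_rng(3)
x, xhat = np.array([2.0, 0.0]), np.zeros(2)
closed = []
for _ in range(500):
    u = -(K @ xhat).item()                 # LQR feedback on the estimate
    y = (C @ x).item() + 0.1 * rng.standard_normal()
    xhat = A @ xhat + B[:, 0] * u + Lp[:, 0] * (y - (C @ xhat).item())
    x = A @ x + B[:, 0] * u + 0.1 * rng.standard_normal(2)
    closed.append(abs(x[0]))

print(np.mean(closed[-100:]) < 1.0)  # True: activity held near the noise floor
```

By the separation principle, the estimator and regulator can be designed independently; the feedback acts on the Kalman estimate rather than on the noisy measurement directly.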

We then implemented the LQG controller in an in vivo rodent model. We stimulated the ventral hippocampal commissure while recording in the hippocampus. The Ho-Kalman algorithm was again used to build a dynamical systems model of the LFP activity based on the evoked response to Gaussian white noise stimulation. We used a three-phase experiment to test the LQG controller: 2 min of baseline activity; 2 min of closed-loop neural stimulation; and 2 min post-stimulation to check whether LFPs return to baseline levels. The stimulation waveform generated during the closed-loop phase was then replayed in “open-loop,” without state estimation from the Kalman filter. The LFP power from 1–100 Hz was used to measure performance. Our results show a significant decrease in LFP power during closed-loop stimulation, whereas open-loop stimulation produced negligible change in LFP power. The LQG controller was confirmed to be an effective tool for minimizing LFP activity within a selected frequency band. The mathematical models of neural dynamics it uses are subject-specific, and it determines stimulation waveforms based on state in order to suppress neural activity.


1. Jirsa VK, Stacey WC, Quilichini PP, Ivanov AI, Bernard C: On the nature of seizure dynamics. Brain 2014, 137 (pt. 8):2210–2230.

2. Miller DN, & de Callafon RA: Identification of linear time-invariant systems via constrained step-based realization. IFAC Proceedings Volumes 2012, 45 (16): 1155–1160.

P185 Reinforcement learning for phasic disruption of pathological oscillations in a computational model of Parkinson’s disease

Logan L. Grado1, Matthew D. Johnson1,2, Theoden I. Netoff1

1Department of Biomedical Engineering, University of Minnesota, Minneapolis, MN, 55455, United States; 2Institute for Translational Neuroscience, University of Minnesota, Minneapolis, MN, 55455, United States

Correspondence: Logan L. Grado (

BMC Neuroscience 2017, 18 (Suppl 1):P185

Deep brain stimulation (DBS) is an effective therapy for the motor symptoms of Parkinson’s disease (PD), and is often used as a complement to medication in patients who have progressed to severe stages of PD. However, programming these devices is difficult and time consuming, and DBS therapy is limited by side effects and partial efficacy [1]. Furthermore, traditional continuous DBS (cDBS) does not account for fluctuations in motor symptoms caused by factors such as sleep, attention, stress, cognitive and motor load, and current drug therapy [2], and as the patient’s state changes, so does the need for stimulation. Current cDBS strategies are incapable of adapting to the needs of patients: once the clinician sets the parameters, they do not change until the next programming visit. In this study, we have created a reinforcement learning (RL) algorithm capable of learning online how best to stimulate to reduce pathological oscillations in silico. We have developed the reinforcement learning DBS (RL-DBS) algorithm for tuning DBS parameters, and have tested it on a biophysically realistic mean-field model of the basal ganglia-thalamocortical system (BGTCS) [3], simulating parkinsonian neural activity. The RL-DBS algorithm decides when to deliver stimulus pulses based upon the real-time amplitude and phase of the pathological oscillation in order to reduce the amplitude of that oscillation. The algorithm learns which actions lead to the highest cumulative reward (i.e., reduction of oscillation amplitude). After training on the model, the RL-DBS algorithm learns both phase and amplitude selectivity to optimally reduce the pathological oscillation. It learns the expected reward for both actions (not stimulating and stimulating) as a function of the phase and amplitude of the oscillation (Figure 1A, B), and then decides which action to execute based upon the action difference (Figure 1C). Additionally, the algorithm learns to deliver bursts of stimulation phase-locked to the oscillation.
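A tabular sketch of this idea, assuming a bandit-style learner over discretized phase bins with a hypothetical phase-dependent reward; this is an illustration of phase/amplitude-selective action learning, not the authors' algorithm or the BGTCS model:

```python
import numpy as np

rng = np.random.default_rng(4)
n_phase, cost = 8, 0.2
Q = np.zeros((n_phase, 2))             # expected reward per (phase bin, action)
N = np.zeros((n_phase, 2))

def reward(bin_idx, action):
    """Toy environment: stimulating (action 1) reduces the oscillation most
    near phase pi; reward = amplitude reduction minus a stimulation cost."""
    phase = 2 * np.pi * (bin_idx + 0.5) / n_phase
    if action == 0:                    # no stimulation: no cost, no effect
        return 0.1 * rng.standard_normal()
    return -np.cos(phase) - cost + 0.1 * rng.standard_normal()

for _ in range(20_000):
    s = int(rng.integers(n_phase))                    # observed phase bin
    a = int(rng.integers(2)) if rng.random() < 0.1 else int(np.argmax(Q[s]))
    r = reward(s, a)
    N[s, a] += 1
    Q[s, a] += (r - Q[s, a]) / N[s, a]                # incremental mean update

policy = np.argmax(Q, axis=1)
print(policy)  # stimulates only in the phase bins where it pays off
```

The two columns of Q play the role of the two learned reward maps (Figure 1A, B would additionally bin by amplitude), and their difference plays the role of the action-difference map.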

We created an adaptive RL-DBS algorithm capable of learning on-line how to reduce the power of a pathological oscillation in a computational model of PD. The algorithm has the potential to deliver individualized, adaptive DBS therapy that can improve the quality of life of PD patients.

Figure 1. Learned reward maps (A, B) and action difference (C) as a function of the phase and amplitude of the oscillation. A and B show the learned reward for no stimulation and stimulation, respectively, while C shows the action difference. The algorithm selects the action with the highest expected reward. The action difference reveals that the algorithm learns both phase- and amplitude-selective stimulation


Acknowledgements: Research supported by the Systems Neuroengineering NSF IGERT Program (DGE-1069104), NIH R01-NS094206, NIH P50-NS098573, and NSF CBET-1264432.


1. Deuschl G, Paschen S, Witt K: Clinical outcome of deep brain stimulation for Parkinson’s disease. Handb Clin Neurol 2013, 116:107–128.

2. Obeso JA, Rodríguez-Oroz MC, Rodríguez M, Lanciego JL, Artieda J, Gonzalo N, Olanow CW: Pathophysiology of the basal ganglia in Parkinson’s disease. Trends Neurosci 2000, 23(10 Suppl):S8–S19.

3. van Albada SJ, Robinson PA: Mean-field modeling of the basal ganglia-thalamocortical system. I. Firing rates in healthy and parkinsonian states. J Theor Biol 2009, 257(4):642–663.

P186 Metrics for detection of delayed and directed coupling

David P. Darrow1, Theoden I. Netoff2

1Department of Neurosurgery, University of Minnesota, Minneapolis, MN 55455, USA; 2Department of Biomedical Engineering, University of Minnesota, Minneapolis, MN 55455, USA

Correspondence: David P. Darrow (

BMC Neuroscience 2017, 18 (Suppl 1):P186

Detecting delayed coupling in dynamical systems remains a challenging frontier in neuroscience. Frequently used tools such as cross-correlation have been shown to be robust against measurement noise but fail to identify coupling direction [1]. More recently developed tools such as multivariate Granger causality and various forms of transfer entropy provide methods of detecting the direction of coupling, but they may be less resilient to measurement noise and require substantially more data, depending on the signal-to-noise ratio. With the widespread use of these tools, it is important to have a complete understanding of the limitations of each metric and the circumstances of optimal use in experimental design.

To test these metrics over a salient parameter space, a linear delayed vector autoregressive model was created with probabilistic, complex coupling over probabilistic time delays. The model was run with various measurement noise strengths, numbers of nodes, and numbers of available data points. Correlation, cross-correlation, mutual information, multivariate Granger causality (MVGC), and transfer entropy (TE) were computed and compared to the true coupling adjacency matrices using an L2 metric.

Significant differences in reconstruction performance were found between metrics. MVGC was found to outperform all other metrics when the signal-to-noise ratio exceeded 0.23. Transfer entropy and correlation fared worse than maximum cross-correlation and mutual information, as summarized in Figure 1. Reconstruction error was minimally affected by the number of nodes for metrics other than MVGC and TE, among which MVGC outperformed all others. Similarly, MVGC and TE required a minimum number of samples to converge, and the required number of points was found to be a function of the number of nodes.
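A minimal version of such a test bed can be built with one delayed coupling x → y and two of the simpler metrics: lagged cross-correlation to locate the delay, and a Granger-style log variance ratio to recover the direction. The coefficients, delay, and model order below are illustrative, not the parameters of the study:

```python
import numpy as np

rng = np.random.default_rng(5)
n, d = 20_000, 3                        # samples; true coupling delay (bins)
x = np.zeros(n)
y = np.zeros(n)
for t in range(1, n):
    x[t] = 0.5 * x[t - 1] + rng.standard_normal()
    drive = 0.8 * x[t - d] if t >= d else 0.0
    y[t] = 0.5 * y[t - 1] + drive + rng.standard_normal()

# 1) Maximum cross-correlation locates the delay (but not the direction)
lags = np.arange(-10, 11)
xc = [np.corrcoef(x[10 - l: n - 10 - l], y[10: n - 10])[0, 1] for l in lags]
best_lag = int(lags[int(np.argmax(xc))])
print(best_lag)  # 3

# 2) A Granger-style log variance ratio recovers the direction: does the
#    source's past improve prediction of the target beyond its own past?
def resid_var(tgt, src, p=5):
    cols = [tgt[p - k - 1: -k - 1] for k in range(p)]
    if src is not None:
        cols += [src[p - k - 1: -k - 1] for k in range(p)]
    X = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(X, tgt[p:], rcond=None)
    return np.var(tgt[p:] - X @ beta)

gc_xy = np.log(resid_var(y, None) / resid_var(y, x))
gc_yx = np.log(resid_var(x, None) / resid_var(x, y))
print(gc_xy > gc_yx)  # True: coupling is x -> y
```

Repeating this with added measurement noise and fewer samples reproduces the qualitative trade-off the abstract describes: correlation-type metrics stay usable at low SNR but are direction-blind, while regression-based metrics are directional but data-hungry.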

Figure 1. Reconstruction error of time-lagged coupling as a function of measurement noise with standard deviations

Conclusions: Based on this work, significant disparity exists between the performance of existing methods for detecting delayed coupling. Many common tools fail to detect delayed coupling. However, even with a minimal ratio of time points to number of nodes, MVGC efficiently recovers complex and delayed coupling. Careful consideration should be given to the metrics used in experiments where coupling may be delayed or spread out over time, as measurement noise and data sample density requirements may affect experimental design.


1. Netoff TI, Carroll TL, Pecora LM, Schiff SJ: Detecting coupling in the presence of noise and nonlinearity. In: Handbook of Time Series Analysis: Recent Theoretical Developments and Applications. John Wiley & Sons; 2006.

2. Barnett L, Seth AK: The MVGC multivariate Granger causality toolbox: a new approach to Granger-causal inference. J Neurosci Methods 2014, 223:50–68.

3. Lindner M, Vicente R, Priesemann V, Wibral M: TRENTOOL: a Matlab open source toolbox to analyse information flow in time series data with transfer entropy. BMC Neurosci 2011, 12:119.

4. Barnett L, Barrett AB, Seth AK: Granger causality and transfer entropy are equivalent for Gaussian variables. Phys Rev Lett 2009, 103:238701.

P187 Insurgence of network bursting events in formed neuronal culture networks: a computational approach

Davide Lonardoni1, Hayder Amin1, Stefano Di Marco2, Alessandro Maccione1, Luca Berdondini1†, Thierry Nieus1,3†

1Neuroscience and Brain Technology Department, Fondazione Istituto Italiano di Tecnologia, Genova, Italy, 16163; 2Scienze cliniche applicate e biotecnologiche, Università dell’Aquila, L’Aquila, Italy, 67100; 3Dept. of Biomedical and Clinical Sciences “Luigi Sacco”, University of Milan, Milan, Italy

Correspondence: Davide Lonardoni (

†Co-senior authors

BMC Neuroscience 2017, 18 (Suppl 1):P187

A common property of developing neuronal systems is their intrinsic ability to generate spatiotemporally propagating spiking activity involving a large number of highly synchronously firing neurons. Primary neuronal cultures are among the experimental preparations that allow investigation of the principles underlying the generation of such spontaneous coordinated spiking activity: cell cultures self-organize during development up to the stage where they elicit stereotyped network-wide spiking activity, called network bursts. The high spatial resolution of high-density CMOS multi-electrode arrays revealed that network bursts correspond to a coordinated propagation of action potentials throughout the network [1]. Specifically, these propagations could be well clustered into a few groups differing in their ignition sites (i.e., the starting point) and propagation paths (i.e., the mean trajectory followed by the spiking activity) [2]. This finding suggests the presence of regions in charge of triggering such spontaneous events. Following this direction, we investigated the main determinants underlying the generation of network bursts in cell cultures at the mature stage. To this end, we implemented a network model made of principal cells (excitatory) and fast-spiking (inhibitory) neurons endowed with the appropriate synaptic currents (AMPA, NMDA, GABA). With minimal topological constraints on the coupling between neuronal pairs (i.e., a network structure based on the reciprocal distance among neurons), the model expressed realistic spontaneous activity that mimicked the experimental findings.
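A distance-dependent wiring rule of this kind can be sketched as follows; the unit square, exponential decay, cell counts, and E/I fraction are all assumptions for illustration, not the fitted culture model:

```python
import numpy as np

rng = np.random.default_rng(6)
n, frac_exc, lam = 400, 0.8, 0.1       # cells, excitatory fraction, length scale
pos = rng.random((n, 2))               # somata scattered on a unit square
is_exc = rng.random(n) < frac_exc      # principal cells vs fast-spiking cells

# Connection probability decays exponentially with inter-somatic distance
dist = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=2)
p_conn = np.exp(-dist / lam)
np.fill_diagonal(p_conn, 0)            # no self-connections
adj = rng.random((n, n)) < p_conn      # directed synapses

near = adj[dist < 0.1].mean()          # connection density among close pairs
far = adj[(dist > 0.5) & (dist < 1.0)].mean()
print(near > far)  # True: wiring is dominated by local connectivity
```

Even with a spatially uniform rule like this, sampling fluctuations create patches that are locally more clustered than average, which is the kind of structure the study links to burst ignition sites.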

The results obtained in this study, combining experimental datasets with our neural network computational model, show that while the synaptic contribution is mainly involved in shaping the network burst, the key player in the generation of network bursts may be found in the local properties of the neuronal network.

Specifically, with functional connectivity analysis we detected, both in simulations and in experiments, a few specific ‘hot spots’ of the networks that matched the ignition sites of the propagations. In particular, in the model, the neurons belonging to the hot spots were much more responsive than any other region to mild stimulations delivered to these regions. Although the connectivity was uniform by design, we found that the ‘hot spots’ were characterized by local graph properties (i.e., higher clustering and lower path length with respect to the remaining network) that favor the amplification of asynchronous firing and determine the onset of a network event. Our modeling study suggests that the ‘hot spots’ might naturally result from the simple constraints on the network topology and the sparseness of the network.


Acknowledgements: We acknowledge the financial support of the Future and Emerging Technologies (FET) programme within the Seventh Framework Programme for Research of the European Commission, for the SICODE project under FET-Open grant number FP7-284553 and for the NAMASEN Marie-Curie Initial Training Network under FET-Open grant number FP7-264872.


1. Berdondini L, Imfeld K, Maccione A, Tedesco M, Neukom S, Koudelka-Hep M, Martinoia S: Active pixel sensor array for high spatio-temporal resolution electrophysiological recordings from single cell to large scale neuronal networks. Lab Chip 2009, 9(18):2644–2651.

2. Gandolfo M, Maccione A, Tedesco M, Martinoia S, Berdondini L: Tracking burst patterns in hippocampal cultures with high-density CMOS-MEAs. J Neural Eng 2010, 7(5):056001.

P188 Brian2GeNN: Free GPU Acceleration for Brian 2 Users

Marcel Stimberg1, Dan F. M. Goodman2, Thomas Nowotny3

1Sorbonne Universités, UPMC Univ Paris 06, INSERM, CNRS, Institut de la Vision, Paris, France; 2Department of Electrical and Electronic Engineering, Imperial College, London, UK; 3School of Engineering and Informatics, University of Sussex, Brighton, UK

Correspondence: Thomas Nowotny (

BMC Neuroscience 2017, 18 (Suppl 1):P188

Over the last decade, graphics processing units (GPUs) have evolved into powerful, massively parallel co-processors that are increasingly used for scientific computing and machine learning. However, it has also become clear that writing efficient code for GPU accelerators is difficult, even with APIs designed for general-purpose computing such as CUDA and OpenCL. As a consequence, frameworks are being developed to make GPU acceleration available for specific applications without complex parallel code design; examples include Matlab GPU extensions [1], TensorFlow GPU support [2], and Theano GPU extensions [3]. Here we present the first public release of Brian2GeNN [4], a software package that connects the popular Brian 2 simulator [5] to the GPU enhanced neuronal networks (GeNN) framework [6] to provide effortless GPU support for computational neuroscience investigations by Brian 2 users.

Brian2GeNN was first announced at CNS*2014 and has undergone a long phase of maturation and development until its first public release this year. It is a Python-based package that allows users to deploy their Brian 2 models to a device named “genn” using the simple command “set_device(‘genn’)”. This triggers the use of Brian2GeNN, which generates code that can be executed on GPUs using GeNN. Brian2GeNN supports all common features of Brian 2, with a few exceptions such as multi-compartment models, multiple networks, or heterogeneous delays.

On this poster, we present the basic principles of how Brian2GeNN works and benchmark its performance with a number of different models on a variety of GPU accelerators. We demonstrate that, depending on the model and the accelerator, the achieved speedups can vary considerably. Brian2GeNN is open source and freely available on GitHub under GPL v2.


Acknowledgements: The development of Brian2GeNN was partially supported by EPSRC, grant EP/J019690/1.


1. Mathworks web pages [], accessed 03-03-2017.

2. TensorFlow web pages [], accessed 03-03-2017.

3. Theano documentation [], accessed 03-03-2017.

4. Brian2genn repository [], accessed 03-03-2017

5. Stimberg M, Goodman DFM, Benichoux V, Brette R: Equation-oriented specification of neural models for simulations. Front. Neuroinf. 2014, doi:

6. Yavuz E, Turner J, Nowotny T: GeNN: a code generation framework for accelerated brain simulations. Scientific Reports 2016, 6:18854. doi:

P189 Spike counts in the visual cortex consistently encode both stimuli and behavioral choices in a change-detection task

Veronika Koren1,2, Valentin Dragoi3, Klaus Obermayer1,2

1Neural Information Processing Group, Institute of Software Engineering and Theoretical Computer Science, Technische Universität Berlin, Berlin, 10587, Germany; 2Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany; 3Department of Neurobiology and Anatomy, University of Texas Medical School, Houston, Texas, 77030, US

Correspondence: Veronika Koren (

BMC Neuroscience 2017, 18 (Suppl 1):P189

In visual discrimination tasks, the subject collects information about sensory stimuli and makes behavioral decisions accordingly. In this study, we search for coding strategies in visual cortices of the macaque (Macaca mulatta) that relate to both stimuli and behavior. Multi-units within a single cortical column were recorded simultaneously in areas V1 and V4 while the subject performed a change-detection task with matching and non-matching stimuli. We assess systematic differences in the distribution of spike counts for matching vs. non-matching stimuli (detection probability) and for correct vs. incorrect behavioral performance (choice probability, [1]) at the single-cell and population levels. In addition, we estimate pairwise correlations of spike counts. The spiking signal is weakly but significantly predictive of the type of stimulus (matching vs. non-matching stimuli with correct behavioral responses) as well as of the behavioral choice (correct vs. incorrect behavioral responses to non-matching stimuli). In both areas, the effect is limited to the superficial layers of the cortical column. Detection and choice probability are consistent, the behavioral choice “match” being characterized by higher spike counts in both cases. In V1, but not in V4, the signal corresponding to the choice “match” is even statistically invariant to changes in both the type of the stimulus and the behavioral performance. In incorrect trials, neural activity in V1 is additionally characterized by a systematic bias in spike counts already at the beginning of the trial. The bias is consistent with the future behavioral choice and is only present in the deep cortical layers.
Comparing the distributions of correlation coefficients across pairs of neurons with matching and non-matching stimuli, we find that the distribution of coefficients in V4 is less variable with matching stimuli, in particular for short (0–0.5 mm) and middle-range (0.5–1 mm) inter-neuron distances. This effect could be interpreted as a fast adaptation of neural responses to two consecutive presentations of the same stimuli [2]. A change in long-range (>1 mm) correlations in V4 is observed when comparing trials with correct and incorrect behavioral performance, correlations in incorrect trials showing higher variability. In V1, we did not observe any systematic changes in spike-count correlations with different stimuli. However, correlations are significantly more variable in trials with incorrect compared to correct behavioral performance. This effect is again limited to the deep cortical layers. Higher variability of correlations in V1 might be a signature of a spontaneously generated network state that is more likely to lead to incorrect behavioral performance. Finally, we test the interactions between choice probabilities and spike-count correlations. Choice probabilities and correlations do not interact in V1, but interact weakly in V4, where cells with similar choice probabilities tend to be more strongly correlated. In summary, we observe various differences in the first- and second-order statistics of spike counts in both V1 and V4. The first-order statistics are related to coding of both stimuli and behavioral choices, while correlations appear rather to modulate the efficacy of the encoded signals.
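Choice (and detection) probability is the area under the ROC curve comparing the two spike-count distributions, equivalently a Mann-Whitney statistic. A minimal sketch on synthetic Poisson counts (the rates and trial counts are illustrative, not the recorded data):

```python
import numpy as np

def choice_probability(counts_pref, counts_other):
    """ROC-area / Mann-Whitney estimate: the probability that a spike count
    from trials ending in the preferred choice exceeds one from the other
    choice (0.5 = no choice information, 1.0 = perfect separation)."""
    a = np.asarray(counts_pref)[:, None]
    b = np.asarray(counts_other)[None, :]
    return (a > b).mean() + 0.5 * (a == b).mean()

rng = np.random.default_rng(7)
# Hypothetical spike counts: slightly higher before "match" choices
match = rng.poisson(12, 500)
nonmatch = rng.poisson(10, 500)
cp = choice_probability(match, nonmatch)
print(round(cp, 2))  # modestly above 0.5, i.e. weakly predictive
```

The same function computes detection probability when the two trial groups are defined by stimulus type (matching vs. non-matching) instead of by behavioral choice.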


Acknowledgements: This work was supported by the Deutsche Forschungsgemeinschaft (GRK1589/2).


1. Britten KH, Newsome WT, Shadlen MN, Celebrini S, Movshon JA: A relationship between behavioral choice and the visual responses of neurons in macaque MT. Visual Neurosci 1996, 13(1):87–100.

2. Gutnisky DA, Dragoi V: Adaptive coding of visual information in neural populations. Nature 2008, 452(7184): 220–224.

3. Hansen BJ, Chelaru MI, Dragoi V: Correlated variability in laminar cortical circuits. Neuron 2012, 76(3): 590–602.

4. Nienborg H, Cumming BG: Decision-related activity in sensory neurons may depend on the columnar architecture of cerebral cortex. J Neurosci 2014, 34(10):3579–3585.

P190 Local topology of connectome stabilizes critical points in mean field model

Samy Castro1,2, Mariano Fernandez3, Wael El-Deredy4, Patricio Orio1,5

1Centro Interdisciplinario de Neurociencia de Valparaíso, Universidad de Valparaíso, Valparaíso, 2360102, Chile; 2Programa de Doctorado en Ciencias, mención en Neurociencia, Facultad de Ciencias, Universidad de Valparaíso, Valparaíso, 2360102, Chile; 3Laboratorio de Electrónica Industrial, Control e Instrumentación, Universidad Nacional de La Plata, La Plata, Argentina; 4Escuela de Ingeniería Biomédica, Universidad de Valparaíso, 2362905, Valparaíso, Chile; 5Instituto de Neurociencia, Universidad de Valparaíso, Facultad de Ciencias, Universidad de Valparaíso, Valparaíso, 2360102, Chile

Correspondence: Samy Castro (

BMC Neuroscience 2017, 18 (Suppl 1):P190

The interplay between structural connectivity (SC) and neural dynamics is not yet fully understood. Applying topological analysis, the connectome approach links this anatomical network to brain function. Here we adopt a computational approach to find topological features related to the stability of global neural dynamics. A previous study of a mean field model based on the human cortex network showed at least two global neural states, with either a low or a high firing rate pattern [1, 3]. These two possible states, or bistability, emerge in the model within a range of the global coupling parameter G, limited by critical values G− and G+ [1, 3]. Within this bistable range, the model also achieves the highest correlations with empirical resting-state fMRI data. How the network connectivity pattern shapes the critical G values has not yet been investigated. Our aim is to identify local or global topological features related to the critical G values. We studied four different SC networks: a cortical parcellation of the human brain [2], a binary equivalent of the human network, a random network (RN) having the same degree distribution as the human SC, and an equivalent Watts–Strogatz small-world (SW) network. For each of the analyzed networks, the critical G values show small or null variability. We then selectively prune the edges of the networks and recalculate their critical G values to show the effect of the structural pattern in maintaining the bistable dynamics. The edges were pruned selectively based on either the degree or the k-core decomposition measure, interpreted as a local or a global topological feature, respectively. The pruning procedure is applied to the edges in one of three specific ways: i) high degree/k-core nodes, ii) random cuts, and iii) low degree/no k-core nodes. The highest shifts in critical G values are achieved when the edges of high degree or high k-core nodes are pruned. In contrast, when we prune edges belonging to low degree or no k-core nodes, the shifts in the critical G points are negligible. We interpret this to mean that the model can use either the local or the global connectivity pattern to stabilize the critical G points. Furthermore, our study shows that shifts in the critical G points are statistically equivalent when the degree distribution (but not the k-core structure) is shared, as in the binary human SC compared to the RN. Therefore, in our simulations the degree distribution, interpreted as a local connectivity feature, determines the critical G points for bistability, capturing the essential structural pattern of the network. We also show that it is possible to obtain bistability in other types of networks, suggesting that structure–dynamics relationships may obey a topological principle.
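The degree/k-core-targeted pruning can be sketched on a random graph. The graph, its density, and the pruning fraction below are illustrative, and computing the critical G values themselves would require the mean field simulation, which is omitted; the sketch only shows how targeted pruning erodes the network core:

```python
import numpy as np

rng = np.random.default_rng(8)
n = 200
adj = rng.random((n, n)) < 0.05
adj = np.triu(adj, 1)
adj = adj | adj.T                      # undirected adjacency, no self-loops

def core_numbers(a):
    """k-core decomposition: repeatedly peel off a minimum-degree node,
    recording the largest k at which each node was still present."""
    a = a.copy()
    alive = np.ones(len(a), bool)
    deg = a.sum(1).astype(int)
    core = np.zeros(len(a), int)
    k = 0
    while alive.any():
        v = int(np.argmin(np.where(alive, deg, len(a) + 1)))
        k = max(k, int(deg[v]))
        core[v] = k
        alive[v] = False
        deg[a[v] & alive] -= 1         # neighbors lose one edge
    return core

core = core_numbers(adj)
order = np.argsort(core)
low20, high20 = order[:20], order[-20:]

def prune(a, nodes):
    """Remove every edge incident to the given nodes."""
    b = a.copy()
    b[nodes, :] = False
    b[:, nodes] = False
    return b

k_high = core_numbers(prune(adj, high20)).max()
k_low = core_numbers(prune(adj, low20)).max()
print(k_high, k_low)  # cutting high-core edges erodes the core the most
```

In the study, each pruned network would then be re-simulated to locate the shifted critical G values.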


SC is recipient of a Ph.D. fellowship from CONICYT. PO is partially funded by the Advanced Center for Electronic Engineering (FB0008 CONICYT, Chile). The Centro Interdisciplinario de Neurociencia de Valparaíso (CINV) is a Millennium Institute supported by the Millennium Scientific Initiative of the Ministerio de Economía (Chile).


1. Deco G, McIntosh AR, Shen K, Hutchison RM, Menon RS, Everling S, Hagmann P, Jirsa VK: Identification of optimal structural connectivity using functional connectivity and neural modeling. J Neurosci. 2014, 34(23):7910–7916.

2. Hagmann P, Cammoun L, Gigandet X, Meuli R, Honey CJ, Van Wedeen J, Sporns O: Mapping the structural core of human cerebral cortex. PLoS Biol. 2008, 6(7):1479–1493.

3. Deco G, Ponce-Alvarez A, Mantini D, Romani GL, Hagmann P, Corbetta M: Resting-state functional connectivity emerges from structurally and dynamically shaped slow linear fluctuations. J Neurosci. 2013, 33(27): 11239–11252.

P191 How chaos in neural oscillators determines network behavior

Kesheng Xu1, Jean Paul Maidana1, Patricio Orio1,2

1Centro Interdisciplinario de Neurociencia de Valparaíso, Universidad de Valparaíso, Valparaíso, Chile; 2Facultad de Ciencias, Instituto de Neurociencia, Universidad de Valparaíso, Valparaíso, Chile

Correspondence: Patricio Orio (

BMC Neuroscience 2017, 18 (Suppl 1):P191

Chaotic dynamics of neural oscillations have been shown at the single-neuron and network levels, both in experimental data and in numerical simulations. Theoretical work suggests that chaotic dynamics enrich the behavior of neural systems by providing multiple attractors. However, the contribution of chaotic neural oscillators to relevant network behavior has not yet been systematically studied. We investigated the synchronization of neural networks composed of conductance-based neuron models that display subthreshold oscillations with regular and burst firing [1]. In this model, oscillations are driven by a combination of a persistent sodium current, a hyperpolarization-activated current (Ih) and a calcium-activated potassium current, all common currents in the CNS. Small changes in conductance densities can switch the model between chaotic and non-chaotic modes [2]. We studied the synchronization of heterogeneous networks whose conductance densities are drawn from either chaotic or non-chaotic regions of the parameter space. Measuring mean phase synchronization in a small-world network with electrical synapses, we characterized the transition from the unsynchronized to the synchronized state as connectivity strength is increased. First, we drew densities from fixed-size regions of the parameter space and found that the transition to synchronized oscillations is always smooth for chaotic oscillators but not always smooth for non-chaotic ones. The non-smooth transitions were, however, associated with a change in firing pattern from tonic to bursting. We also noticed that chaotic oscillators display a wider distribution of firing frequencies than non-chaotic oscillators, making the networks more heterogeneous. Next, we drew the conductance densities from the parameter space so as to maintain the same distribution of firing frequencies (hence the same network heterogeneity) for both chaotic and non-chaotic oscillators.
In this case, the synchronization curves are very similar, showing a second-order phase transition in both cases. However, we cannot rule out that non-chaotic oscillators become chaotic (or vice versa) when embedded in a network, because of the extra parameter associated with the electrical synapses. Finally, when the chaos-inducing Ih current is removed, the transition to synchrony occurs at a lower connectivity strength but with a similar slope.
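The transition from unsynchronized to synchronized activity as coupling strength increases can be illustrated with a minimal phase-oscillator sketch. This uses Kuramoto oscillators on a ring as a stand-in for the conductance-based neurons of the abstract; network size, coupling values, and the frequency spread are illustrative assumptions.

```python
import cmath
import math
import random

def ring_lattice(n, k):
    """Ring network: each node couples to its k nearest neighbours per side."""
    return {i: [(i + d) % n for d in range(-k, k + 1) if d != 0]
            for i in range(n)}

def order_parameter(theta):
    """Kuramoto order parameter R = |<exp(i*theta)>|: 0 = async, 1 = full sync."""
    return abs(sum(cmath.exp(1j * t) for t in theta) / len(theta))

def simulate(coupling, n=20, steps=4000, dt=0.01, seed=0):
    """Euler-integrate heterogeneous phase oscillators (stand-ins for the
    conductance-based neurons) coupled diffusively, like electrical synapses."""
    rng = random.Random(seed)
    adj = ring_lattice(n, 2)
    omega = [1.0 + 0.2 * (rng.random() - 0.5) for _ in range(n)]  # heterogeneity
    theta = [0.5 * rng.random() for _ in range(n)]   # loosely clustered start
    for _ in range(steps):
        dtheta = [omega[i] + coupling / len(adj[i]) *
                  sum(math.sin(theta[j] - theta[i]) for j in adj[i])
                  for i in range(n)]
        theta = [t + dt * d for t, d in zip(theta, dtheta)]
    return order_parameter(theta)

r_uncoupled = simulate(0.0)   # frequency heterogeneity disperses the phases
r_coupled = simulate(5.0)     # strong diffusive coupling synchronizes them
```

Sweeping the coupling value and plotting R against it would trace the synchronization curves whose shape (smooth versus abrupt) the abstract compares.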

Our results suggest that the chaotic nature of the individual oscillators may be of minor importance to the synchronization behavior of the network. Ongoing work aims to measure the chaotic nature of the whole network and how it relates to synchronization behavior.


KX is funded by Proyecto Fondecyt 3170342. PO is partially funded by the Advanced Center for Electrical and Electronic Engineering (FB0008 Conicyt, Chile). The Centro Interdisciplinario de Neurociencia de Valparaíso (CINV) is a Millennium Institute supported by the Millennium Scientific Initiative of the Ministerio de Economía (Chile).


1. Orio P., Parra A., Madrid R., González O., Belmonte C., Viana F. Role of Ih in the Firing Pattern of Mammalian Cold Thermoreceptors. J Neurophysiol 2012, 108:3009–3023

2. Xu K., Maidana JP, Caviedes M, Quero D, Aguirre P and Orio P. Hyperpolarization-activated current induces period-doubling cascades and chaos in a cold thermoreceptor model. Front Comput Neurosci 2017, 11:12. doi:

P192 STEPS 3: integrating stochastic molecular and electrophysiological neuron models in parallel simulation

Weiliang Chen1, Iain Hepburn1, Francesco Casalegno2, Adrien Devresse2, Aleksandr Ovcharenko2, Fernando Pereira2, Fabien Delalondre2, Erik De Schutter1

1Computational Neuroscience Unit, Okinawa Institute of Science and Technology Graduate University, Okinawa, Japan; 2Blue Brain Project, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland

Correspondence: Weiliang Chen (

BMC Neuroscience 2017, 18 (Suppl 1):P192

Stochastic spatial molecular reaction-diffusion simulators, such as STEPS (STochastic Engine for Pathway Simulation) [1], often face great challenges when simulating large-scale, complex neuronal pathways, owing to the massive computation the models require. The issue becomes even more critical when molecular simulation is combined with cellular electrophysiological simulation, one of the main focuses of computational neuroscience research. One example is our previous work on stochastic calcium dynamics in Purkinje cells [2], in which a biophysical calcium burst model was simulated on approximately one quarter of a Purkinje cell dendritic tree morphology using the serial implementation of the spatial Gillespie SSA and the electric field (EField) solver in STEPS 2.0. Even on a state-of-the-art desktop computer, the simulation took months to finish, significantly slowing research progress.
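The serial bottleneck above is the exact stochastic simulation algorithm itself. For readers unfamiliar with it, here is a minimal sketch of Gillespie's direct method for a single well-mixed compartment (the spatial version in STEPS runs such a system per tetrahedral sub-volume, with diffusion handled as additional events); the toy reaction system and rate constants are illustrative, not from the Purkinje cell model.

```python
import math
import random

def gillespie(x, rates, stoich, t_end, rng):
    """Gillespie's direct method: exact stochastic simulation of a
    well-mixed reaction system. `rates` are propensity functions of the
    state vector `x`; `stoich` gives each reaction's change vector."""
    t = 0.0
    while True:
        props = [r(x) for r in rates]            # reaction propensities
        total = sum(props)
        if total == 0.0:
            return t, x                          # no reaction can fire
        t += -math.log(rng.random()) / total     # exponential waiting time
        if t >= t_end:
            return t_end, x
        u, acc = rng.random() * total, 0.0
        for i, p in enumerate(props):            # pick reaction ~ propensity
            acc += p
            if u < acc:
                break
        x = [xi + s for xi, s in zip(x, stoich[i])]

# toy system: A -> B (rate k1*A), B -> A (rate k2*B)
k1, k2 = 1.0, 0.5
rates = [lambda x: k1 * x[0], lambda x: k2 * x[1]]
stoich = [(-1, +1), (+1, -1)]
_, final = gillespie([100, 0], rates, stoich, 10.0, random.Random(42))
```

Every event is drawn one at a time from the global propensity, which is what makes the serial algorithm exact but slow for large spatial models.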

One possible, yet non-trivial, approach to speeding up such simulations is parallelization. At CNS 2016 we reported our early parallel implementation of an operator-splitting solution for reaction-diffusion systems, which achieved super-linear speedup when simulating the buffer components of the above published model on a full Purkinje cell morphology. While the performance of our parallel implementation was promising, the test model had no calcium present in the system; only buffers were simulated. Since the buffers were uniformly distributed in the geometry, the load on each computing process was relatively balanced, a close-to-ideal scenario for parallel computation. The membrane potential computation, as well as the voltage-dependent reactions of the published model, was omitted because no parallel EField solver existed at the time. In a recent publication [3], we further extended the model by applying a dynamically updated calcium influx profile extracted from the published calcium burst simulation. Our results showed that in a realistic scenario with dynamic calcium influx and data recording, and without special load balancing, our parallel reaction-diffusion solution still achieves a more than 500-fold speedup with 1,000 computing processes compared to the conventional serial SSA solution.

STEPS 3 is the first public release arising from the collaboration between the CNS Unit of OIST and the Blue Brain Project of EPFL. The ongoing collaboration aims to deliver a scalable parallel solution for future integrated stochastic molecular and electrophysiological neuron modelling. Combining the parallel TetOpSplit molecular solver developed by OIST with EPFL's parallel EField solver based on the PETSc library, the new release addresses the limitations of the above test cases and allows full-scale parallel simulation of the complete Purkinje cell calcium burst model. It also contains new changes that are essential to the parallel STEPS modelling and simulation pipeline, such as improved Python bindings using Cython. In this poster, we will use this model as an example to showcase the general procedure for converting a serial STEPS simulation to its parallel counterpart using these new changes. We will also analyze the performance and scalability of our integrated solution, and discuss directions for future STEPS development.


1. Hepburn, I., Chen, W., Wils, S., and De Schutter, E. (2012). STEPS: efficient simulation of stochastic reaction–diffusion models in realistic morphologies. BMC Systems Biology 6, 36. doi:

2. Anwar, H., Hepburn, I., Nedelescu, H., Chen, W., and De Schutter, E. (2013). Stochastic calcium mechanisms cause dendritic calcium spike variability. J. Neurosci. 33, 15848–15867. doi:

3. Chen, W., and De Schutter, E. (2017). Parallel STEPS: Large Scale Stochastic Spatial Reaction-Diffusion Simulation with High Performance Computers. Front. Neuroinform. 11, 137–15. doi:

P193 A conductance-based model of cerebellar molecular layer interneurons

Peter Bratby1, Erik de Schutter1

1Okinawa Institute of Science and Technology Graduate University, 1919-1 Tancha, Onna-son, Kunigami-gun, Okinawa 904-0495, Japan

Correspondence: Peter Bratby (

BMC Neuroscience 2017, 18 (Suppl 1):P193

The cortex of the cerebellum is one of the best-characterized regions of the brain, comprising three distinct layers whose connectivity is well understood. Numerical simulations of parts of the cerebellar cortex, including the granular layer and Purkinje cell layer, have been instrumental in revealing the computational properties of the cerebellum. However, one important part of the cortex - the molecular layer - has yet to be modeled in detail.

The molecular layer comprises many thousands of parallel fibers (the long unmyelinated axons of granule cells), Purkinje cell dendrites, and a network of inhibitory interneurons termed stellate and basket cells. These interneurons were originally classified by their morphology, although modern molecular techniques indicate that they likely belong to a single class of neuron, the molecular layer interneuron (MLI). In addition to their direct excitatory connections onto Purkinje cells, parallel fibers make disynaptic connections via MLIs. Furthermore, MLIs connect with each other chemically, via GABAergic synapses, and electrically, via gap junctions. The MLIs thus form a sophisticated inhibitory network whose properties are important in shaping the output of the cerebellum.

We developed a detailed conductance-based model of an MLI, and present the results of a simulation of a small MLI network. The neuron model, built with the NEURON simulation software, comprises somatic and dendritic compartments containing distinct voltage- and calcium-dependent ion channels. Two types of synapse are simulated, representing chemical synapses and gap junctions. The connectivity and cellular geometry of the network model conform to morphological reconstructions, and the model parameters were tuned to reproduce known electrophysiological properties of MLIs, including spontaneous spiking activity, modest spike-frequency adaptation and the presence of a slow depolarization wave.
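The electrical (gap-junction) coupling between MLIs mentioned above can be illustrated with a minimal sketch: two passive single-compartment cells coupled by an ohmic gap-junction conductance. This is not the authors' NEURON model — all parameter values and units here are illustrative assumptions — but it shows why current injected into one cell depolarizes its coupled neighbour.

```python
def simulate_pair(g_gap, t_end=200.0, dt=0.025):
    """Two passive single-compartment cells coupled by an ohmic gap
    junction. Current is injected into cell 1 only; the gap junction
    pulls cell 2 along. Units (nF, uS, mV, nA, ms) are illustrative."""
    c_m, g_leak, e_leak = 1.0, 0.1, -70.0
    i_inj = 1.0                                 # injected into cell 1
    v1 = v2 = e_leak
    for _ in range(int(t_end / dt)):
        i_gap = g_gap * (v2 - v1)               # ohmic coupling current
        dv1 = (g_leak * (e_leak - v1) + i_inj + i_gap) / c_m
        dv2 = (g_leak * (e_leak - v2) - i_gap) / c_m
        v1 += dt * dv1
        v2 += dt * dv2
    return v1, v2

v1_c, v2_c = simulate_pair(g_gap=0.05)   # coupled pair
v1_u, v2_u = simulate_pair(g_gap=0.0)    # uncoupled control
```

With coupling, cell 2 is depolarized above its resting potential while cell 1 is loaded (sits below its uncoupled value) — the basic signature of electrical coupling in an MLI pair.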

P194 An Ultrasensitive ON/OFF Switch Mechanism Controls the Early Phase of Cerebellar Plasticity

Andrew R. Gallimore, Erik De Schutter

Computational Neuroscience Unit, Okinawa Institute of Science and Technology Graduate University, Onna-son, Okinawa, Japan

Correspondence: Andrew R. Gallimore (

BMC Neuroscience 2017, 18 (Suppl 1):P194

The expression of postsynaptic long-term depression (LTD) and long-term potentiation (LTP) in cerebellar Purkinje cells results from the internalisation or insertion, respectively, of postsynaptic AMPA receptors (AMPAR) [1]. LTD is induced by concurrent parallel fiber and climbing fiber stimulation of Purkinje cells, and is regulated by a complex intracellular signaling network that suppresses phosphatase activity leading to activation of a positive feedback loop that maintains PKC activity for at least 30 min [2]. LTP is dependent on nitric oxide [3], produced during parallel fiber stimulation [4], which nitrosylates N-ethylmaleimide-sensitive factor (NSF) and promotes exocytosis of AMPARs by actively disrupting the interaction between AMPAR-GluR2 and protein interacting with C-kinase 1 (PICK-1) [5, 6].

We report the largest and most sophisticated model to date of bidirectional synaptic plasticity at the PF-PC synapse. Our unified molecular model replicates both PF-PC LTD and NO/NSF-dependent LTP, as well as the sharp calcium threshold separating them. The importance of the positive feedback loop in LTD expression is now well established; however, how feedback-loop activation and deactivation are controlled has, until now, remained obscure. Model simulations reveal that the feedback loop is activated by an ultrasensitive 'on-switch' controlled by CaMKII activation. Furthermore, as predicted by experiments showing that the feedback loop is not required once the early phase of LTD induction is complete [2, 7], our model reveals a rapid and automatic 'switch-off' mechanism controlled by phosphatase activity. We are also able to replicate several experimental observations that have so far remained unexplained. These include reconciling conflicting data on the importance of nitric oxide in LTD induction: nitric oxide supports loop activation by augmenting phosphatase inhibition, but is not required when the calcium signal is high or sustained [4]. In addition, experiments have shown that selective inhibition of the cytosolic phosphatase PP2A elicits robust LTD, whereas inhibition of other phosphatases does not [8]. We show that only PP2A inhibition causes CaMKII-independent activation of the feedback loop and thus LTD induction, revealing the importance of PP2A in suppressing spontaneous loop activation under basal conditions.
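The switch-like behavior of a positive feedback loop with an ultrasensitive activation term can be caricatured in a few lines. This is a generic bistable-switch sketch, not the authors' signaling model: the Hill coefficient, rate constants, and stimulus protocol are all illustrative assumptions.

```python
def feedback_loop(stimulus, n=4, k=0.5, decay=1.0, dt=0.01, t_end=50.0):
    """dx/dt = s(t) + x^n / (k^n + x^n) - decay*x : positive feedback with
    an ultrasensitive (Hill, n=4) activation term. A brief stimulus can
    flip x from the low stable state to the high one, where it remains
    after the stimulus ends (the 'on-switch' behavior)."""
    x = 0.0
    for step in range(int(t_end / dt)):
        t = step * dt
        s = stimulus if t < 5.0 else 0.0        # transient trigger only
        activation = x**n / (k**n + x**n)       # ultrasensitive feedback
        x += dt * (s + activation - decay * x)
    return x

x_off = feedback_loop(stimulus=0.0)   # never triggered: stays low
x_on = feedback_loop(stimulus=1.0)    # triggered: latches into the high state
```

Long after the 5-unit trigger window, the triggered system still sits near its high fixed point while the untriggered one stays at zero — the hallmark of a self-sustaining loop, which in the abstract's model is engaged by CaMKII and terminated by phosphatase activity.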


1. Wang YT, Linden DJ: Expression of cerebellar long-term depression requires postsynaptic clathrin-mediated endocytosis. Neuron 2000, 25(3):635–647.

2. Tanaka K, Augustine GJ: A positive feedback signal transduction loop determines timing of cerebellar long-term depression. Neuron 2008, 59(4):608–620.

3. Lev-Ram V, Wong ST, Storm DR, Tsien RY: A new form of cerebellar long-term potentiation is postsynaptic and depends on nitric oxide but not cAMP. Proceedings of the National Academy of Sciences of the United States of America 2002, 99(12):8389–8393.

4. Bouvier G, Higgins D, Spolidoro M, Carrel D, Mathieu B, Lena C, Dieudonne S, Barbour B, Brunel N, Casado M: Burst-Dependent Bidirectional Plasticity in the Cerebellum Is Driven by Presynaptic NMDA Receptors. Cell Reports 2016, 15(1):104–116.

5. Huang Y, Man HY, Sekine-Aizawa Y, Han YF, Juluri K, Luo HB, Cheah J, Lowenstein C, Huganir RL, Snyder SH: S-nitrosylation of N-ethylmaleimide sensitive factor mediates surface expression of AMPA receptors. Neuron 2005, 46(4):533–540.

6. Hanley JG, Khatri L, Hanson PI, Ziff EB: NSF ATPase and alpha-/beta-SNAPs disassemble the AMPA receptor-PICK1 complex. Neuron 2002, 34(1):53–67.

7. Tsuruno S, Hirano T: Persistent activation of protein kinase C alpha is not necessary for expression of cerebellar long-term depression. Molecular and Cellular Neuroscience 2007, 35(1):38–48.

8. Launey T, Endo S, Sakai R, Harano J, Ito M: Protein phosphatase 2A inhibition induces cerebellar long-term depression and declustering of synaptic AMPA receptor. Proceedings of the National Academy of Sciences of the United States of America 2004, 101(2):676–681.

P195 The use of hardware accelerators in the STochastic Engine for Pathway Simulation (STEPS)

Guido Klingbeil, Erik de Schutter

Computational Neuroscience Unit, Okinawa Institute of Science and Technology, 1919-1 Tancha, Onna-son, Kunigami-gun, Okinawa 904-0495, Japan

Correspondence: Guido Klingbeil (

BMC Neuroscience 2017, 18 (Suppl 1):P195

STEPS is a stochastic reaction-diffusion simulator. Its emphasis is on accurately simulating signaling pathways [1].

The Human Brain Project (HBP) is a European project that sets out to gain long-sought insights into our brain and the processes that fundamentally make us human. A parallelised version of STEPS will be part of the Brain Simulation Platform of the HBP, efficiently simulating reaction-diffusion models in realistic morphologies [2]. The HBP will model the brain in unprecedented detail. It is becoming apparent that such large-scale and computationally expensive models are required, either to capture more realistic morphologies or to simulate more complex systems [3].

Hardware accelerators such as NVidia's graphics processing units (GPUs) or Intel's Xeon Phi are one approach to mitigating the high computational cost of such models. They are, in general, massively parallel multicore co-processors and have become a cornerstone of modern high-performance computing [4].

The hardware architectures of these two accelerator families differ significantly and thus require different software approaches. While both are programmable via the common OpenCL programming interface, important features such as unified memory or remote direct memory access (RDMA) are often supported only in the vendor-specific native programming frameworks [5, 6]. These not only need to be integrated into an overall parallel software system performing a coherent spatial simulation, but also need to scale well over several accelerators and compute nodes.

Previous research has shown that the computational power of accelerators can improve spatially homogeneous stochastic simulations by two orders of magnitude, while avoiding the limit that the small fast memory space imposes on the size of the reaction system being simulated [7].

STEPS implements a spatial version of Gillespie's stochastic simulation algorithm, computing reaction-diffusion systems on a mesh of tetrahedral sub-volumes [1, 8]. Operator-splitting techniques make it possible to separate the reactions of molecules within a sub-volume from the diffusion of molecules between sub-volumes.
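The reaction/diffusion separation can be sketched deterministically on a one-dimensional chain of sub-volumes: each step applies a reaction update inside every sub-volume, then a diffusive exchange between neighbours. This is a minimal illustration of the splitting idea only — STEPS applies it to stochastic events on tetrahedral meshes, and the decay and exchange rates here are arbitrary.

```python
def split_step(counts, k_decay, d_rate, dt):
    """One operator-splitting step on a 1-D chain of sub-volumes:
    (1) reaction substep inside each sub-volume, then
    (2) diffusion substep exchanging molecules between neighbours."""
    # reaction substep: first-order decay within each sub-volume
    counts = [c * (1.0 - k_decay * dt) for c in counts]
    # diffusion substep: explicit exchange with nearest neighbours
    n = len(counts)
    flux = [0.0] * n
    for i in range(n - 1):
        j = d_rate * dt * (counts[i] - counts[i + 1])  # down the gradient
        flux[i] -= j
        flux[i + 1] += j
    return [c + f for c, f in zip(counts, flux)]

counts = [1000.0] + [0.0] * 9          # all molecules start in sub-volume 0
no_decay = counts[:]
for _ in range(200):
    counts = split_step(counts, k_decay=0.01, d_rate=0.2, dt=0.1)
    no_decay = split_step(no_decay, k_decay=0.0, d_rate=0.2, dt=0.1)
```

Because diffusion only moves molecules between adjacent sub-volumes, each sub-volume's reaction substep is independent of the others — this locality is what makes the scheme attractive for distributing sub-volumes across accelerators.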

We are developing a layered hybrid software architecture, integrated into STEPS, that uses classic central processing units as well as multiple accelerators. Multiple sub-volumes are assigned to each accelerator. To accommodate the different hardware characteristics, NVidia GPUs are applied within a sub-volume and the Intel Xeon Phi at the level of the operator splitting. Furthermore, because the accelerators differ in their performance characteristics, load balancing at the tetrahedral-mesh level will be important.

Our architecture will be a plug-in solution for STEPS, requiring no changes to the user-facing interfaces or to other STEPS software systems.


1. Hepburn et al.: STEPS: efficient simulation of stochastic reaction-diffusion models in realistic morphologies. BMC Syst Bio 2012, 6:36.

2. The Human Brain Project Brain Simulation Platform [].

3. Anwar et al.: Stochastic Calcium Mechanisms Cause Dendritic Calcium Spike Variability. J Neurosci 2013, 33(40):15848–15867.

4. TOP500 Supercomputer Site [http://www.top500.org].

5. Khronos OpenCL Working Group: The OpenCL Specification, V 2.1, 2015.

6. NVidia: CUDA C programming guide, V 8.0, 2017, [].

7. Klingbeil et al.: Stochastic simulation of chemical reactions with cooperating threads on GPUs. (in preparation).

8. Gillespie: Exact stochastic simulation of coupled chemical reactions. J Phys Chem 1977, 81(25):2340–2361.

P196 A model of CaMKII sensitivity to the frequency of Ca2+ oscillations in Cerebellar Long Term Depression

Criseida Zamora and Erik De Schutter

Computational Neuroscience Unit, Okinawa Institute of Science and Technology Graduate University, Okinawa 904-0895, Japan

Correspondence: Criseida Zamora (

BMC Neuroscience 2017, 18 (Suppl 1):P196

Cerebellar long-term depression (LTD) is a form of synaptic plasticity involved in motor learning. The LTD signaling network includes a PKC-ERK-cPLA2 positive feedback loop and mechanisms of AMPA receptor (AMPAR) trafficking. Experimental studies suggest that Ca2+/calmodulin-dependent protein kinase II (CaMKII) is required for LTD induction [1]. Additionally, theoretical and experimental work has shown that CaMKII is sensitive to the frequency of Ca2+ oscillations [2, 3]. The activation and autophosphorylation of CaMKII by Ca2+ and calmodulin (CaM) are thought to underlie its ability to decode Ca2+ oscillations. However, the molecular mechanism by which this sensitivity contributes to LTD is not fully understood.

The CaMKII holoenzyme is a multimeric complex of 12 subunits, each of which contains a catalytic domain, a regulatory domain, and a carboxyl-terminal association domain. Owing to the combinatorial complexity of activation of this enzyme, we chose to model four subunits. We propose a model of CaMKII activation by Ca2+ within the LTD signaling network. The reactions include activation of the enzyme by Ca2+/CaM binding, intersubunit autophosphorylation at threonine residue Thr286, a Ca2+-independent activation state produced by autophosphorylation, and secondary intersubunit autophosphorylation at threonine residues Thr305/306. Noise in signaling networks plays an important role in cellular processes. CaMKII activation models have been developed previously [3], but they have not included the intrinsic stochasticity of molecular interactions.

Our lab recently developed a stochastic model of the LTD signaling network including a PKC-ERK-cPLA2 feedback loop, Raf-RKIP-MEK interactions and AMPAR trafficking [4]. We have extended this model by adding the molecular network regulating CaMKII activity and its activation. This new model was solved stochastically by STEPS (STochastic Engine for Pathway Simulation) [5] to simulate the influence of noise on the LTD signaling network.

Through stochastic modeling we observed that CaMKII can decode the frequency of Ca2+ spikes into different amounts of kinase activity during LTD induction. This result is consistent with previous studies of CaMKII sensitivity to Ca2+ oscillations [2]. Furthermore, we observed that PKC activity is highly sensitive to the frequency, amplitude, duration and number of Ca2+ oscillations, and consequently has an important effect on LTD activation. The LTD signaling network involves phosphatases and phosphodiesterases related to CaMKII activity, such as PP2A and PDE1. Our stochastic model may be useful for understanding the role of these enzymes in the sensitivity of CaMKII to the frequency of Ca2+ oscillations.
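The core intuition behind frequency decoding — activation builds during each Ca2+ pulse and decays in between, so pulses arriving faster than the decay time accumulate — can be sketched with a deterministic toy kinetic. This is not the stochastic STEPS model of the abstract; the pulse shape and the activation/deactivation rates are illustrative assumptions.

```python
def camkii_activity(freq_hz, t_end=60.0, dt=0.001,
                    pulse_width=0.2, act_rate=5.0, deact_rate=0.5):
    """Toy frequency decoder: activity a rises toward 1 during each Ca2+
    pulse and relaxes toward 0 in between. When the inter-pulse interval
    is shorter than the deactivation time constant, activation
    accumulates, so activity at the end of the run grows with frequency."""
    a = 0.0
    period = 1.0 / freq_hz
    for step in range(int(t_end / dt)):
        t = step * dt
        ca_on = (t % period) < pulse_width      # square Ca2+ pulse train
        if ca_on:
            a += dt * act_rate * (1.0 - a)      # activation during the pulse
        a -= dt * deact_rate * a                # continual deactivation
    return a

low = camkii_activity(1.0)    # sparse pulses: activity decays between them
high = camkii_activity(4.0)   # dense pulses: activation accumulates
```

The same pulse shape delivered at a higher rate leaves more residual activation, which is the qualitative behavior the full model reproduces for CaMKII.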


1. Hansel C, de Jeu M, Belmeguenai A, Houtman SH, Buitendijk GH, Andreev D, De Zeeuw CI, Elgersma Y: αCaMKII is essential for cerebellar LTD and motor learning. Neuron 2006, 51:835–843.

2. De Koninck P, Schulman H: Sensitivity of CaM kinase II to the frequency of Ca2+ oscillations. Science 1998, 279:227–230.

3. Dupont G, Houart G, De Koninck P: Sensitivity of CaM kinase II to the frequency of Ca2+ oscillations: a simple model. Cell Calcium 2003, 34:485–497.

4. Hepburn I, Jain A, Gangal H, Yamamoto Y, Tanaka-Yamamoto K, De Schutter E: A model of induction of cerebellar long-term depression including RKIP inactivation of Raf and MEK. Front Mol Neurosci 2017, 10:19.

5. Hepburn I, Chen W, Wils S, De Schutter E: STEPS: efficient simulation of stochastic reaction-diffusion models in realistic morphologies. BMC Syst Biol 2012, 6:36.

P197 Exploring the response to climbing fiber input in Purkinje neurons by a new experimental data based model

Yunliang Zang, Erik De Schutter

Computational Neuroscience Unit, Okinawa Institute of Science and Technology Graduate University, Onna-son, Okinawa, Japan

Correspondence: Yunliang Zang (

BMC Neuroscience 2017, 18 (Suppl 1):P197

Purkinje neurons receive powerful climbing fiber (CF) input from inferior olive (IO) neurons, which provides an instructive signal for cerebellar learning. The initial observation that CF input causes all-or-none responses has been questioned in recent years. However, the mechanisms of initiation and propagation of the dendritic calcium spikes evoked by CF input are still poorly understood. Here, we built a new Purkinje cell model based on available experimental data to explore dendritic and somatic responses to CF input under different conditions. All ionic current models are well constrained by experimental data.

The model's ionic currents regulate the electrophysiological properties of the Purkinje cell in a manner consistent with experimental observations. The model reproduces a plethora of experimental observations, properties that are critical for it to predict responses to excitatory and inhibitory inputs. Both simple spikes and complex spikes initiate first in the axonal initial segment (AIS). The first and second derivatives of the somatic simple spike agree with experimental data.

Using this model, we can explain the discrepancies between experimental observations from different groups regarding the spatial propagation range of dendritic calcium spikes. Dendritic spikelets can initiate and propagate in a branch-specific manner, and depolarization of the dendrites can cause secondary spikelets. We find that the timing of a spikelet is critical in determining whether it can affect somatic firing. Branch-specific dendritic spikelets can combine with concomitant excitatory and inhibitory inputs to affect somatic firing output more efficiently. Our results indicate that voltage-dependent, branch-specific spikelets may enrich the CF instructive signals for cerebellar learning.

P198 Effects of network topology perturbations on memory capacity in a hippocampal place cell model

Patrick Crotty, Eric Palmerduca

Department of Physics and Astronomy, Colgate University, Hamilton, NY 13346, USA

Correspondence: Patrick Crotty (

BMC Neuroscience 2017, 18 (Suppl 1):P198

The relationship between the structure, or topology, of a neural network and its dynamics remains largely unexplored. This relationship may be particularly significant for the place cell network in region CA3 of the hippocampus. Place cells are believed to encode position by firing when the animal is in a specific spatial location [1]. Multiple "charts" mapping place cells to locations in several different environments may be stored simultaneously in the network [2]. Given hippocampal neurogenesis and synaptic plasticity, the place cell network should be robust to small perturbations of its topology: it should not "forget" charts if the pattern of synaptic connections changes slightly. Conversely, if Alzheimer's or another neurodegenerative disease attacks the place cell network, declines in chart capacity could provide clues about the presence and progression of the disease. Using a computational model based on the place cell network model published by Azizi et al. [3], we investigated the effect of random removal of synapses on chart capacity. When small numbers of synapses were removed, chart capacity was not measurably affected, but removal of larger numbers caused chart capacity to decline (see Figure 1). Moreover, the decline in chart capacity depended on how the synapses were selected. If synapses were selected with uniform probability, chart capacity remained unaffected up to about 10% removal and then fell sharply. But if neurons, rather than synapses, were first selected with uniform probability, and synapses then randomly removed from the selected neurons, chart capacity began to fall linearly at about 5% removal. These results suggest that the chart capacity of the place cell network is indeed stable to small perturbations of its topology, and that the effects of larger disruptions depend on the underlying mechanism, i.e., whether the synapses or the cells themselves are targeted by a disease.

Figure 1. The chart capacity (M) as a function of the fraction of the synapses removed from the network (p), using two different synapse-removal algorithms. For the blue curve, synapses are selected and removed with equal probability. For the red curve, neurons are selected with equal probability, and then a random synapse is removed from the selected neuron. The dashed line is a linear fit to the random-neuron (red) curve, with slope -0.27
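The two synapse-removal algorithms compared in Figure 1 can be sketched as follows. The network here is a hypothetical random synapse list, not the Azizi et al. model, and all sizes and the removal fraction are illustrative assumptions.

```python
import random

def make_network(n_neurons, n_synapses, rng):
    """Random directed synapse list of (pre, post) pairs."""
    return [(rng.randrange(n_neurons), rng.randrange(n_neurons))
            for _ in range(n_synapses)]

def remove_uniform_synapses(synapses, p, rng):
    """Scheme 1 (blue curve): each removed synapse is drawn uniformly
    from the set of all synapses."""
    keep = rng.sample(range(len(synapses)), int((1 - p) * len(synapses)))
    return [synapses[i] for i in sorted(keep)]

def remove_via_neurons(synapses, p, n_neurons, rng):
    """Scheme 2 (red curve): pick a neuron uniformly, then delete one
    random synapse of that neuron; repeat until fraction p is removed."""
    remaining = list(synapses)
    target = int(p * len(synapses))
    while len(synapses) - len(remaining) < target:
        v = rng.randrange(n_neurons)
        mine = [i for i, (pre, _) in enumerate(remaining) if pre == v]
        if mine:                                # skip neurons with no synapses
            remaining.pop(rng.choice(mine))
    return remaining

rng = random.Random(0)
net = make_network(50, 500, rng)
after_uniform = remove_uniform_synapses(net, 0.1, rng)
after_neuron = remove_via_neurons(net, 0.1, 50, rng)
```

Both schemes remove the same number of synapses, but the neuron-first scheme concentrates losses unevenly across cells, which is the distinction the chart-capacity comparison probes.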


We thank A. Azizi and S. Cheng for helpful discussions.


1. O’Keefe J, Dostrovsky J: The hippocampus as a spatial map: preliminary evidence from unit activity in the freely-moving rat. Brain Research 1971, 34:171–175.

2. Alme CB, Miao C, Jezek K, Treves A, Moser EI, and Moser M: Place cells in the hippocampus: eleven maps for eleven rooms. Proceedings of the National Academy of Sciences 2015, 111(52):18428–18435.

3. Azizi A, Wiskott L, Cheng S: A computational model for preplay in the hippocampus. Frontiers in Computational Neuroscience 2013, 7(161):1–15.

P199 A NEST-simulated cerebellar spiking neural network driving motor learning

Alberto Antonietti1, Claudia Casellato1, Csaba Erö2, Egidio D’Angelo3, Marc-Oliver Gewaltig2, Alessandra Pedrocchi1

1Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milano, Italy; 2Blue Brain Project, Ecole Polytechnique Fédérale de Lausanne (EPFL), Biotech Campus, Geneva, Switzerland; 3Department of Brain and Behavioral Sciences, University of Pavia, Pavia, Italy

Correspondence: Alberto Antonietti (

BMC Neuroscience 2017, 18 (Suppl 1):P199

Brain organization is optimized to drive adaptive behavior. A key role in the control loop is played by the cerebellum, which implements prediction, timing and learning of motor commands through complex plasticity mechanisms [1]. However, how plasticity is engaged during behavior is still unclear. Cerebellar properties emerge in sensorimotor paradigms, such as Eye Blink Classical Conditioning (EBCC). In silico simulations based on computational models are fundamental for investigating the physiological mechanisms. We developed a cerebellar network running on NEST, a simulator for spiking neural network models [2] focused on the dynamics, size and structure of neural systems built from networks of point neurons. We built a network tailored to the mouse cerebellum, comprising 71,440 neurons: 250 mossy fibers (MF), 5,000 glomeruli (Glom), 65,600 granular cells (GR), 100 Golgi cells (GO), 400 Purkinje cells (PC), 40 inferior olive cells (IO), and 50 deep cerebellar nuclei cells (DCN). The connectivity ratios used for the 11 types of synaptic connections are reported in Table 1. Three of these synaptic types could undergo specific plastic modifications, in particular long-term potentiation and depression on different time scales. The cell numbers and connectivity were taken from the neurophysiological literature. The model was tested with a simple closed-loop simulation of the EBCC, to check the functionality of the network in a learning task [3]. In the EBCC, a Conditioned Stimulus (CS) precedes an Unconditioned Stimulus (US) by a fixed time interval. After repeated presentations of paired CS and US during the acquisition phase, the cerebellum is able to anticipate the US onset; this anticipatory action is called the Conditioned Response (CR). During the extinction phase, only the CS is provided.
Thanks to its distributed plasticity, the network was able to learn the CS-US temporal association during the acquisition trials, with fast acquisition towards 80% CR rates, and to rapidly unlearn the association during the extinction trials (Figure 1). We will extend this model to a large-scale reproduction of the mouse cerebellum, testing more complex paradigms.
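Connectivity of the kind summarized in Table 1 — each postsynaptic cell receiving a fixed number of inputs from a presynaptic population — can be sketched in plain Python. The cell counts are taken from the abstract, but the connection-building function and the convergence values used below (4 glomerulus inputs per granule cell, 8 PC inputs per DCN cell) are illustrative assumptions, not the ratios of Table 1; the actual model builds these connections in NEST.

```python
import random

POPULATIONS = {  # cell counts from the abstract
    "MF": 250, "Glom": 5000, "GR": 65600, "GO": 100,
    "PC": 400, "IO": 40, "DCN": 50,
}

def connect_fixed_indegree(pre, post, indegree, rng):
    """Give each postsynaptic cell `indegree` connections drawn at random
    from the presynaptic population (the fixed-convergence scheme a
    simulator's fixed-indegree connection rule implements)."""
    n_pre, n_post = POPULATIONS[pre], POPULATIONS[post]
    return [(rng.randrange(n_pre), j)
            for j in range(n_post) for _ in range(indegree)]

rng = random.Random(0)
# hypothetical convergence values for illustration only
glom_to_gr = connect_fixed_indegree("Glom", "GR", 4, rng)
pc_to_dcn = connect_fixed_indegree("PC", "DCN", 8, rng)
```

Divergence (outputs per presynaptic cell) then follows from the indegree and the population sizes, which is why Table 1 reports the two ratios together.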

Table 1. Connectivity between the neural groups (Convergence and Divergence). In italics the plastic sites
[Table body not recoverable from the extraction; the surviving fragments indicate a per-connection synapse-count column ("# Synapses"), a DCN entry (30%), and a total-number-of-synapses row.]


Figure 1. Behavioral outcome during the EBCC protocol, with 80 trials of acquisition and 20 trials of extinction, over 10 simulations. Solid line: median outcome; grey area: interquartile range


This work was supported by EU grants: Human Brain Project (HBP 604102) and HBP-Regione Lombardia.


1. D’Angelo E et al.: Modeling the Cerebellar Microcircuit: New Strategies for a Long-Standing Issue. Front. Cell. Neurosci. 2016; 10:176.

2. Gewaltig MO and Diesmann M: NEST (neural simulation tool). Scholarpedia 2007, 2(4):14303.

3. Antonietti et al.: Spiking Neural Network With Distributed Plasticity Reproduces Cerebellar Learning in Eye Blink Conditioning Paradigms. IEEE Trans. Biomed. Eng. 2016, 63(1):210–219.

P200 Spike-based probabilistic inference with correlated noise

Ilja Bytschok1, Dominik Dold1, Johannes Schemmel1, Karlheinz Meier1, Mihai A. Petrovici1,2

1Kirchhoff-Institute for Physics, Heidelberg University, Im Neuenheimer Feld 227, 69120 Heidelberg, Germany; 2Department of Physiology, University of Bern, Bühlplatz 5, 3012 Bern, Switzerland

Correspondence: Ilja Bytschok (

BMC Neuroscience 2017, 18 (Suppl 1):P200

It has long been hypothesized that the trial-to-trial variability in neural activity patterns plays an important role in neural information processing. A steadily increasing body of evidence suggests that the brain performs probabilistic inference to interpret and respond to sensory input [1, 2, 3]. The neural sampling hypothesis [4] interprets stochastic neural activity as sampling from an underlying probability distribution and has been shown to be compatible with biologically observed dynamical regimes of spiking neurons [5]. In these studies, high-frequency Poisson spike trains were used as a source of stochasticity, which is a common way of representing diffuse synaptic input. However, this discounts the fact that cortical neurons may share a significant portion of their presynaptic partners, which can have a profound impact on the computation these neurons are required to perform. This is not only relevant in biology, but also for artificial implementations of neural networks [6], where bandwidth constraints limit the number of available independent noise channels.

In neural sampling, the firing activity of a network of N Leaky Integrate-and-Fire (LIF) neurons is represented by a vector of binary random variables (RVs) z ∈ {0, 1}^N. In such a network, synaptic weights can be adjusted such that the network samples from a Boltzmann distribution p(z) [5]. In particular, the weights W_ij control the pairwise correlations r_ij between RVs. When receiving correlated noise, the correlations r_ij are changed in a way that cannot be directly countered by changes in W_ij. We show, however, that this is contingent on the chosen coding: when changing the state space from {0, 1}^N to {−1, 1}^N, correlated noise has the exact same effect as changes in W. Unfortunately, the {−1, 1}-coding is incompatible with neuronal dynamics, because it would require neurons to influence each other while they are silent.

However, the translation of the problem to the {−1, 1}^N space allows the formulation of a two-step compensation procedure. We show how, by chaining a bijective map from noise correlations to interaction strengths W_ij in {−1, 1}^N with a second bijective map from (W_ij, b_i) in {−1, 1}^N to (W_ij, b_i) in {0, 1}^N, it is possible to find a synaptic weight configuration that compensates for correlations induced by shared noise sources. For an artificial embedding of sampling networks, this allows a straightforward transfer between platforms with different architecture and bandwidth constraints.

Furthermore, the existence of the above mapping provides an important insight for learning. Since in the {−1, 1}-coding correlated noise can be compensated by parameter changes, and since the {−1, 1}-coding can be transformed into the {0, 1}-coding while keeping the state probabilities invariant, a learning rule for Boltzmann machines will also find the target distribution in the {0, 1}-coding, which we demonstrate in software simulations. In other words, spiking networks performing neural sampling are impervious to noise correlations when appropriately trained. This means that, if such computation happens in cortex, network plasticity does not need to take particular account of shared noise inputs.
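The invariance of state probabilities under a change of coding can be checked numerically. The sketch below (an illustration, not the authors' compensation procedure) maps pairwise parameters (J, h) of the {−1, 1}^N coding to (W, b) of the {0, 1}^N coding via W = 4J and b_i = 2h_i − 2Σ_j J_ij, and verifies that every state keeps its probability:

```python
import itertools
import numpy as np

def boltzmann_probs(W, b, states):
    """Normalized probabilities p(s) ∝ exp(0.5 s·W·s + b·s) over the states."""
    e = np.array([0.5 * s @ W @ s + b @ s for s in states])
    p = np.exp(e - e.max())
    return p / p.sum()

rng = np.random.default_rng(0)
N = 4
J = rng.normal(size=(N, N))
J = (J + J.T) / 2
np.fill_diagonal(J, 0.0)           # symmetric couplings, no self-interaction
h = rng.normal(size=N)

# Map {-1,1}^N parameters (J, h) to {0,1}^N parameters (W, b).
W = 4 * J
b = 2 * h - 2 * J.sum(axis=1)

z01 = [np.array(s) for s in itertools.product([0, 1], repeat=N)]
spm = [2 * z - 1 for z in z01]      # corresponding {-1,1}^N states

p_pm = boltzmann_probs(J, h, spm)
p_01 = boltzmann_probs(W, b, z01)
assert np.allclose(p_pm, p_01)      # identical distribution in both codings
```

The constant terms generated by the substitution s = 2z − 1 drop out in the normalization, which is why only W and b need to be remapped.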


Authors Bytschok, Dold and Petrovici contributed equally to this work. This research was supported by EU grants #269921 (BrainScaleS), #604102 (Human Brain Project) and the Manfred Stärk Foundation.


1. Körding K, Wolpert D: Bayesian integration in sensorimotor learning. Nature 2004

2. Fiser J, Berkes P, Orbán G, Lengyel M: Statistically optimal perception and learning: from behavior to neural representations. Trends in Cognitive Sciences 2010

3. Rich EL, Wallis JD: Decoding subjective decisions from orbitofrontal cortex. Nature Neuroscience 2016

4. Buesing L, Bill J, Nessler B, Maass W: Neural dynamics as sampling: a model for stochastic computation in recurrent networks of spiking neurons. PLoS Comput Biol 2011

5. Petrovici MA, Bill J, Bytschok I, Schemmel J, Meier K: Stochastic inference with spiking neurons in the high-conductance state. Physical Review E 2016

6. Furber S: Large-scale neuromorphic computing systems. Journal of Neural Engineering 2016

P201 Optimal refractoriness from a rate-distortion perspective

Hui-An Shen, Simone Carlo Surace, Jean-Pascal Pfister

Institute of Neuroinformatics, UZH and ETHZ, Zurich, CH-8057, Switzerland

Correspondence: Jean-Pascal Pfister (

BMC Neuroscience 2017, 18 (Suppl 1):P201

The information transfer from neuron to neuron through chemical synapses proceeds in two stages. In the presynaptic neuron, the (analog) membrane potential is encoded into a (digital) spike, while in the postsynaptic neuron this digital information is turned back into an (analog) depolarisation. It has been argued that for a given inhomogeneous Poisson encoder, the optimal decoder has dynamics consistent with short-term plasticity [1]. However, the optimal encoder is not known. Here, by studying the rate-distortion performance, we explore how presynaptic refractoriness influences the performance of the optimal postsynaptic decoder. First, we generalize the results of [2] and [3] by expressing the mutual information as a function of the mean natural estimation loss, in the presence of refractoriness. This expression provides a numerically stable and fast method of computing the mutual information between two high-dimensional random variables. Next, we show with numerical simulations that, for fixed firing rates ranging from 20 to 120 Hz, there is an optimal level of refractoriness that minimizes the distortion, i.e. the mean squared error of the optimal postsynaptic decoder. To test our theory, we compare this optimal level of refractoriness with that of a zebra finch HVC neuron to which the model has been fitted [4].


1. Pfister JP, Dayan P, Lengyel M: Synapses with short-term plasticity are optimal estimators of presynaptic membrane potentials. Nat Neurosci. 2010, 13(10):1271–1275.

2. Atar R, Weissman T: Mutual information, relative entropy, and estimation in the Poisson channel. IEEE Transactions on Information theory 2012, 58(3):1302–1318.

3. Liptser RS, Shiryaev AN: Statistics of Random Processes II, 2nd Edition. New York: Springer-Verlag; 2001.

4. Surace SC, Pfister JP: A statistical model for in vivo neuronal dynamics. PloS one 2015, 10(11):e0142435.

P202 Towards online accurate spike sorting for hundreds of channels

Baptiste Lefebvre, Marcel Stimberg, Olivier Marre, Pierre Yger

Institut de la Vision, INSERM UMRS 968, CNRS UMR 7210, Paris, France

Correspondence: Pierre Yger (

BMC Neuroscience 2017, 18 (Suppl 1):P202

Understanding how assemblies of neurons encode information requires recording from large populations of cells in the brain. In recent years, multi-electrode arrays and large silicon probes have been developed to record simultaneously from thousands of densely packed electrodes. Because these new devices challenge the classical way of performing spike sorting, we recently developed a fast and accurate spike sorting algorithm (available as open source software, called SpyKING CIRCUS), validated with both in vivo and in vitro ground truth experiments [1]. The software, which performs a smart clustering of the spike waveforms followed by a greedy template-matching reconstruction of the signal, scales to up to 4225 channels in parallel and solves the problem of temporally overlapping spikes. It thus appears as a general solution for sorting spikes offline from large-scale extracellular recordings.

In this work, we aim to implement this algorithm in an “online” mode, sorting spikes in real time while the data are acquired, to allow closed-loop experiments for high-density electrophysiology. To achieve this goal, we built a robust architecture for distributed asynchronous computations and propose a modified algorithm composed of two concurrent processes running continuously: 1) a “template-finding” process that extracts the cell templates (i.e. the pattern of activity evoked over many electrodes when one neuron fires an action potential) over the recent time course; 2) a “template-matching” process in which the templates are matched onto the raw data to identify the spikes. The main challenge is to update the set of templates continuously, with hundreds of electrodes and possible drifts over the time course of the experiment. A key advantage of our implementation is that it is parallelized over a computing cluster to use the computing resources optimally: all the processing steps of the algorithm (whitening, filtering, spike detection, template identification and fit) can be distributed according to the computational needs. During clustering, the most computationally demanding step, templates are detected and tracked over time using a modified version of the density-based clustering algorithm of [2] able to handle data streams. Our software is therefore a promising solution for future closed-loop experiments involving recordings with hundreds of electrodes.
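The greedy template-matching idea can be illustrated with a toy one-channel version (a sketch, not the SpyKING CIRCUS implementation; templates, threshold and signal are made up): repeatedly take the template with the largest sliding inner product against the residual, record a spike there, and subtract the template, which is what resolves temporally overlapping spikes.

```python
import numpy as np

def greedy_template_matching(signal, templates, threshold, max_spikes=100):
    """Toy greedy template matching: iteratively subtract the best-matching
    template from the residual until no correlation exceeds the threshold."""
    residual = signal.astype(float).copy()
    spikes = []
    for _ in range(max_spikes):
        best = None
        for tid, tpl in enumerate(templates):
            # sliding inner product of this template with the residual
            corr = np.correlate(residual, tpl, mode="valid")
            t = int(np.argmax(corr))
            if corr[t] > threshold and (best is None or corr[t] > best[0]):
                best = (corr[t], tid, t)
        if best is None:
            break                                  # nothing left to explain
        _, tid, t = best
        spikes.append((t, tid))
        residual[t:t + len(templates[tid])] -= templates[tid]
    return sorted(spikes), residual

# Two toy templates; the signal contains two temporally overlapping spikes.
t0 = np.array([0.0, 2.0, -1.0, 0.0])
t1 = np.array([0.0, 1.0, 1.0, -2.0])
sig = np.zeros(30)
sig[5:9] += t0
sig[7:11] += t1          # overlaps the first spike
sig[20:24] += t0
found, res = greedy_template_matching(sig, [t0, t1], threshold=4.5)
```

On this toy input the loop recovers all three spikes, including the overlapping pair, and leaves a near-zero residual.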


1. P. Yger et al., Fast and accurate spike sorting in vitro and in vivo for up to thousands of electrodes, BioRxiv 2016.

2. A. Rodriguez et al., Clustering by fast search and find of density peaks, Science 2014.

P203 Modeling orientation preference in the apical and basal trees of L2/3 V1 neurons

Athanasia Papoutsi1, Jiyoung Park2, Ryan Ash2, Stelios Smirnakis2, Panayiota Poirazi1

1IMBB, FORTH, Heraklion, Crete, 70013, Greece; 2Neurology, Baylor College of Medicine, Houston, Texas, 77030, USA

Correspondence: Athanasia Papoutsi (

BMC Neuroscience 2017, 18 (Suppl 1):P203

Pyramidal neurons receive inputs in two anatomically and functionally distinct domains [1], the apical and the basal tree. Inputs to the basal tree, due to their proximity to the soma, greatly influence neuronal output, whereas the more remote apical tree has less potential to influence somatic activity. How these inputs cooperate to form the functional output of the neuron is currently unknown. In this work, we focused on how inputs to the apical and basal trees shape orientation tuning in L2/3 V1 neurons. In particular, we investigated how dendritic integration of orientation-tuned inputs to the apical versus basal trees allows the emergence of stable neuronal orientation preference. Towards this goal, a model L2/3 V1 pyramidal neuron was implemented in the NEURON simulation environment. The passive and active properties of the model neuron were extensively validated against experimental data. Synaptic properties, numbers and distributions were also constrained by available data (Figure 1A). Using this model neuron, we investigated a) the differences in the mean orientation preferences of the two trees and b) the distributions of orientation preferences of individual synapses that allow for the emergence of orientation tuning (Figure 1B). For the parameter combinations that allow orientation tuning to emerge (Figure 1C), we found that neuronal orientation tuning largely follows the orientation tuning of the basal tree. In addition, we identified how apical versus basal dendritic tree ablation would affect neuronal tuning under the different conditions implemented. The model results provide insights into the ‘tolerance’ to different input properties at the apical and basal trees required to achieve stable orientation preference.
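A standard way to summarize the orientation preference of a set of synapses (the per-tree μ values of Figure 1B) is the double-angle circular mean, sketched below; this is generic bookkeeping for axial data, not the authors' NEURON code:

```python
import numpy as np

def circular_mean_orientation(prefs_deg, weights=None):
    """Mean orientation (0-180 deg) via the double-angle vector average:
    orientations are axial, so angles are doubled before averaging and
    halved afterwards, handling the 0/180 deg wrap-around correctly."""
    a = np.deg2rad(2 * np.asarray(prefs_deg, dtype=float))
    w = np.ones_like(a) if weights is None else np.asarray(weights, float)
    mean = np.arctan2((w * np.sin(a)).sum(), (w * np.cos(a)).sum())
    return (np.rad2deg(mean) / 2) % 180

# Synapses clustered around 30 deg: the vector average gives their mean.
print(round(circular_mean_orientation([20, 30, 40]), 1))  # 30.0
# 170 deg and 10 deg average across the wrap-around, close to 0 (mod 180),
# where a naive arithmetic mean would wrongly give 90 deg.
circular_mean_orientation([170, 10])
```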

Figure 1. A. Top: From the pool of synapses, 25% were stimulus-driven (black dots). Bottom: Indicative trace showing fluctuations of the membrane potential in the presence of background synaptic activity. Spikes are truncated for visualization purposes. B. Each tree was characterized by a μ ± σ orientation preference determined by the preferences of its individual synapses. Shown is the proportion of synapses whose orientation preference is the same as or differs from μtree, for σtree = 3, 15, 30, 45 and 60°. Grouping to the reported value (x-axis) includes ±10° differences. C. Orientation tuning curve of the model neuron (mean ff ± sem). Right: Indicative voltage traces of the neuronal responses for different bar orientations (0°, 30°, 60° and 90°)


1. Larkum ME: A cellular mechanism for cortical associations: an organizing principle for the cerebral cortex. Trends Neurosci 2012:1–11.

P204 Dual recordings in the mouse auditory brainstem and midbrain reveal differences in the processing of vocalizations

Richard A. Felix1, Alexander G. Dimitrov1,2, Christine Portfors1

1Department of Integrative Biology and Neuroscience, Washington State University Vancouver, Vancouver WA 98686, USA; 2Department of Mathematics and Statistics, Washington State University Vancouver, Vancouver WA 98686, USA

Correspondence: Alexander G. Dimitrov (

BMC Neuroscience 2017, 18 (Suppl 1):P204

Background: A normally functioning auditory system must rely on fast and precise neuronal responses to accurately represent temporal information in complex sounds. Impairments in temporal processing contribute to a variety of listening disorders, yet our understanding of the mechanisms that govern these processes remains limited. We examined how enhanced spike timing at the level of the inferior colliculus (IC) in the midbrain might underlie more efficient encoding of vocalizations than in the cochlear nucleus (CN), an earlier site in the ascending auditory pathway.

Methods: We recorded neuronal responses to conspecific vocalizations in the IC and CN of awake, normal-hearing mice that expressed Channelrhodopsin in VGlut2-positive neurons. We used an optrode that combined the recording of single unit activity with light delivery to the CN. Once a recording was established in the CN, a second electrode was placed in the IC and dual recordings were established at locations with matching frequency tuning. The CN was stimulated with light in the absence of sound to measure effects in the IC and then responses to sound stimuli were simultaneously recorded at each site. We assessed the extent of functional connectivity between CN and IC recording sites, the temporal precision of evoked spiking, and the neuronal selectivity to vocalization stimuli, using statistical and information-theoretic tools.

Results: We found that stimulating the CN with light caused evoked activity in the IC when the two recording sites had matched frequency tuning, suggesting that tonotopic organization reliably predicts functional connectivity between the sites. Despite matching frequency tuning, IC neurons exhibited greater selectivity to a common set of vocalization stimuli compared to the dorsal CN (DCN). Overall, CN responses had higher rates of evoked spiking, while IC responses were more transient and had enhanced spike timing, suggesting a shift toward the extraction of temporal information contained in vocalizations at the level of the midbrain (Figure 1).

Figure 1. Relationship between information content and response consistency in mouse DCN and IC

Conclusion: Neurons in the CN often contributed to activity recorded in the IC. Dual recordings from sites with a degree of functional connectivity, conducted under the same experimental conditions, provide a strong paradigm for comparing processing at different stages of the auditory pathway. Enhanced selectivity to vocalizations and enhanced temporal precision of responses in the IC suggest that this region may be important for encoding biologically important sounds. When auditory processing is impaired, the IC may be a subcortical site for the generation of auditory disorders typically thought to arise in the cortex.

P205 Modelling of leg decoupling in the stick insect and its possible significance for understanding the workings of the locomotor system

Silvia Daun1,2, Tibor I. Toth1

1Department of Animal Physiology, Institute of Zoology, University of Cologne, Cologne, 50674, Germany; 2Cognitive Neuroscience, Institute of Neuroscience and Medicine (INM-3), Research Center Juelich, Juelich, 52425, Germany

Correspondence: Silvia Daun (

BMC Neuroscience 2017, 18 (Suppl 1):P205

Amputation and temporary restraint of legs are widely used and accepted methods for studying the locomotor systems of insects. The animal is studied during free walking, and its walking behaviour is compared before and after the amputation. From the results, conclusions are drawn about the organization of the locomotor system of the animal in question. In the stick insect, such investigations were carried out by [1] and more recently by [2]. In the latter study, it was even observed that the front legs could reversibly be decoupled by the animal itself and used to carry out search movements; nevertheless, the hind and middle legs continued their coordinated walking. From these and other experimental observations detailed in [1] and [2], the question that naturally arises is: what mechanisms underlie the changes found in the experiments? The underlying mechanisms obviously belong to the part of the nervous system that controls and coordinates locomotion. One promising way to study them is with appropriate mathematical models. We used an existing model of coordinated stepping of the three ipsilateral legs of the stick insect [3] to mimic the various decoupling situations described in [1] and [2]. In the model, the levator-depressor neuro-muscular control networks (LD systems) of the individual legs play a pivotal role in producing coordinated stepping of the legs. We identified three main possibilities for decoupling a single leg: i) disrupting the inter-leg coordination between the legs’ LD systems; ii) blocking the normal function of the central pattern generator of the LD system of the leg to be decoupled; and iii) changing the activity of the levator and depressor motoneurones via their associated pre-motor inhibitory interneurones. Decoupling of the front leg in the model worked with any of the methods i)-iii).
It was easily reversible, in accordance with the observation that such reversible decoupling happens under natural conditions when the animal uses its front legs for searching. The hind and middle legs continued their coordinated stepping, as in the experiments [1, 2]. Decoupling of the hind leg was most effective when method iii) was used. In this case, the middle and front legs continued performing coordinated stepping irrespective of the decoupling method, in agreement with the experimental findings. In the model, the middle leg automatically took over the role of the hind leg as the origin of the coordinated stepping. Decoupling the middle leg yielded mixed results: in some cases, depending on the phase within a stepping period, the coordinated stepping of the front and hind legs was abolished; in others it was not, but its quantitative properties were changed. Both types of results were also found in the experiments [1, 2].

In conclusion, we suggest that, depending on the leg, various mechanisms can decouple it from the system of inter-leg coordination. In all cases, method iii) worked most reliably and efficiently. However, the other mechanisms (methods) may represent redundancy and can be activated, if necessary, to bring about decoupling of the leg.


This work was supported by the DFG grants to S. Daun (GR3690/2-1 and GR3690/4-1).


1. Graham D: The effect of amputation and leg restraint on the free walking coordination of the stick insect Carausius Morosus. J Comp Physiol 1977, 116:91–116.

2. Grabowska M, Godlewska E, Schmidt J, Daun-Gruhn S: Quadrupedal gaits in hexapod animals - inter-leg coordination in free walking adult stick insects. J Exp Biol 2012, 215:4255–4266.

3. Toth TI, Daun-Gruhn S: A three-leg model producing tetrapod and tripod coordination patterns of ipsilateral legs in the stick insect. J Neurophysiol 2016, 115:887–906.

P206 Spatio-temporal dynamics of key signaling molecules in growth cones

Joanna Jędrzejewska-Szmek1, Nadine Kabbani1,2, Kim T. Blackwel1,3

1Krasnow Institute, George Mason University, Fairfax, VA 22030, USA; 2School of Systems Biology, George Mason University, Fairfax, VA 22030, USA; 3Bioengineering Department, George Mason University, Fairfax, VA 22030, USA

Correspondence: Joanna Jędrzejewska-Szmek (

BMC Neuroscience 2017, 18 (Suppl 1):P206

Growth cones, guided by environmental cues, are necessary for proper neural functioning. The cues are detected by membrane-bound receptors, which in turn activate a plethora of signaling pathways. A majority of these pathways are governed by calcium, flowing into the growth cone through the plasmalemma or released from calcium stores. Both the magnitude of the calcium increase and the identity of the calcium source seem to determine neural growth and retraction [1]. Calcium exerts its control through a variety of signaling molecules that interact non-linearly. This picture is further complicated by recent findings showing that the ionotropic alpha7 nicotinic receptor (a7nAChR) also has a metabotropic function and couples to heteromeric Gq proteins. Action of a7nAChR via the Gq pathway results in calcium release from the endoplasmic reticulum (ER), modulating cytoskeletal motility and structural growth [2–4].

Experimental evidence shows that both low and high cytosolic calcium result in growth cone repulsion, whereas intermediate cytosolic calcium results in attraction. It also shows that calcium influx through the plasmalemma results in repulsion, while calcium release from internal stores results in growth. To investigate and unify these seemingly contradictory experimental observations, we developed a stochastic reaction-diffusion model of calcium-, cAMP- and Gq-activated pathways. The model allows for evaluating the role of the transient calcium influx through the channel pore (the ionotropic contribution) compared to the role of calcium release caused by activation of the Gq subtype of GTP-binding protein. Using the model, we investigated whether the combined metabotropic and ionotropic action of a7nAChR, resulting in a prolonged increase of cytosolic calcium, is responsible for the experimentally observed growth attenuation.

To test whether we can predict neurite outgrowth and retraction in response to various environmental stimuli, and to elucidate the contribution of molecular gradients, we looked at the combined action of key signaling molecules. We show that the combined activation of calcium- and cAMP-activated targets such as PP2B, PP1, CaMKII, PKA and calpain can explain the non-monotonic dependence of structural growth on calcium levels. Elucidating the mechanisms underlying synaptic growth will allow for a better understanding of neural development and regeneration.
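The class of stochastic model used here can be illustrated with a minimal Gillespie-style simulation of a single calcium-buffering reaction, Ca + B ⇌ CaB (an illustrative fragment with arbitrary rate constants, not the authors' full reaction-diffusion scheme, which also includes diffusion, cAMP and Gq pathways):

```python
import random

def gillespie_buffering(n_ca, n_b, n_cab, kf, kr, t_end, seed=1):
    """Stochastic simulation (Gillespie SSA) of Ca + B <-> CaB.
    Counts are molecule numbers; kf, kr are stochastic rate constants."""
    rng = random.Random(seed)
    t = 0.0
    while t < t_end:
        a_f = kf * n_ca * n_b          # propensity of binding
        a_r = kr * n_cab               # propensity of unbinding
        a_tot = a_f + a_r
        if a_tot == 0:
            break
        t += rng.expovariate(a_tot)    # exponential waiting time
        if rng.random() < a_f / a_tot: # pick which reaction fires
            n_ca, n_b, n_cab = n_ca - 1, n_b - 1, n_cab + 1
        else:
            n_ca, n_b, n_cab = n_ca + 1, n_b + 1, n_cab - 1
    return n_ca, n_b, n_cab

ca, b, cab = gillespie_buffering(100, 50, 0, kf=0.01, kr=0.1, t_end=10.0)
# Conservation: total buffer (free + bound) and total calcium are unchanged.
assert b + cab == 50 and ca + cab == 100
```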


This work was supported by the joint NIH-NSF CRCNS program through NSF grant 1515686.


1. Henley J, Poo M-m: Guiding neuronal growth cones using Ca2+ signals. Trends in Cell Biol 2004, 14:320–330.

2. Nordman JC, Kabbani N: Microtubule dynamics at the growth cone are mediated by α7 nicotinic receptor activation of a Gαq and IP3 receptor pathway. FASEB J 2014, 28:2995–3006.

3. King JR, Nordman JC, Bridges SP, Lin MK, Kabbani N: Identification and characterization of a G protein-binding cluster in α7 nicotinic acetylcholine receptors. J Biol Chem 2015, 290:20060–70.

4. King JR, Kabbani N: Alpha 7 nicotinic receptor coupling to heterotrimeric G proteins modulates RhoA activation, cytoskeletal motility, and structural growth. J Neurochem 2016, 138:532–45.

P207 A simulation of EMG signal generation following TMS

Bahar Moezzi1,2, Natalie Schaworonkow3, Lukas Plogmacher3, Mitchell R. Goldsworthy2,4, Brenton Hordacre2, Mark D. McDonnell1, Nicolangelo Iannella1,5, Michael C. Ridding2, Jochen Triesch3

1Computational and Theoretical Neuroscience Laboratory, School of Information Technology and Mathematical Sciences, University of South Australia, Adelaide, Australia; 2Robinson Research Institute, School of Medicine, University of Adelaide, Adelaide, Australia; 3Frankfurt Institute for Advanced Studies, Frankfurt, Germany; 4Discipline of Psychiatry, School of Medicine, University of Adelaide, Adelaide, Australia; 5School of Mathematical Sciences, University of Nottingham, Nottingham, UK

Correspondence: Bahar Moezzi (

BMC Neuroscience 2017, 18 (Suppl 1):P207

Transcranial magnetic stimulation (TMS) is a technique that allows noninvasive manipulation of neural activity and is used extensively in both clinical and basic research settings [1]. The effect of TMS on motor cortex is often measured by electromyography (EMG) recordings from a small hand muscle, such as the first dorsal interosseous (FDI). However, the details of how TMS generates responses measured with EMG are not completely understood. Here, we aim to develop a biophysically detailed computational model to study the potential mechanisms underlying the generation of EMG signals in response to TMS.

Our model comprises a feed-forward network of cortical layer 2/3 cells, which drive morphologically detailed layer 5 corticomotoneuronal cells based on [2]. The cortical layer 5 cells in turn project to a pool of motoneurons and eventually the muscle. The EMG signal is the sum of motor unit action potentials. Model parameters are tuned to match results from EMG recordings from the FDI muscle performed in four human subjects.
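The final summation step can be sketched as a convolution of each motor unit's spike train with its motor unit action potential (MUAP) waveform; the waveforms and firing times below are arbitrary placeholders, not fitted model values:

```python
import numpy as np

def simulate_emg(spike_times, muaps, n_samples):
    """EMG as the superposition of motor unit action potentials:
    each unit's spike train is convolved with its MUAP waveform."""
    emg = np.zeros(n_samples)
    for times, muap in zip(spike_times, muaps):
        train = np.zeros(n_samples)
        train[np.asarray(times)] = 1.0         # delta at each firing time
        emg += np.convolve(train, muap)[:n_samples]
    return emg

# Two motor units with arbitrary biphasic waveforms and firing times.
muap1 = np.array([0.0, 1.0, -1.0, 0.2])
muap2 = np.array([0.0, 0.5, -0.8, 0.3])
emg = simulate_emg([[10, 50], [30]], [muap1, muap2], n_samples=100)
```

Because the summation is linear, each motoneuron's contribution appears unchanged at its firing times unless waveforms overlap, in which case they superpose.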

The model successfully reproduces several properties of the experimental data. The simulated EMG signals match the experimental EMG recordings in shape and size, and vary with stimulus and contraction intensities as in the experimental data. They exhibit cortical silent periods that are close to the biological values, and reveal an interesting dependence on the characteristics of inhibitory synaptic transmission. Our model predicts neural firing patterns along the entire pathway from cortical layer 2/3 cells down to spinal motoneurons. In conclusion, our model successfully reproduces major features of EMG recordings and should be considered a viable tool for analyzing and explaining EMG signals following TMS.


1. Hallett M: Transcranial magnetic stimulation and the human brain. Nature 2000, 406:147–150.

2. Rusu CV, Murakami M, Ziemann U, Triesch J. A model of TMS-induced I-waves in motor cortex. Brain Stimul 2014, 7:401–414.

P208 The effect of LTP, LTD and non-specific LTD on the recognition of sparse noisy patterns in simplified and detailed Purkinje cell models

Reinoud Maex1, Karen Safaryan2, Volker Steuber3

1Department of Cognitive Sciences, Ecole Normale Supérieure, rue d’Ulm 25, 75005 Paris, France; 2Department of Physics and Astronomy, Knudsen Hall, University of California, Los Angeles, CA, 90095-0001, USA; 3Centre for Computer Science and Informatics Research, University of Hertfordshire, College Lane, Hatfield, AL10 9AB, United Kingdom

Correspondence: Reinoud Maex (

BMC Neuroscience 2017, 18 (Suppl 1):P208

Classic theories of cerebellar learning suggest that parallel fibre (PF) activity patterns in cerebellar cortex can be stored and recalled based on long-term depression (LTD) of PF - Purkinje cell synapses [1, 2]. As in other theories of learning in neural systems, it is commonly assumed that the weight changes are limited to activated synapses. However, it has been shown that a non-specific form of PF LTD can spread to neighbouring synapses that are inactive during learning [3]. Moreover, long-term potentiation (LTP) of PF synapses has also been found to contribute to cerebellar learning [4].

We have previously studied the effect of non-specific LTD (nsLTD) on pattern recognition and have shown that nsLTD can provide robustness against local spatial noise in the input patterns [5]. Here we extend our previous work by studying the functional role of LTP, and we investigate other determinants of pattern recognition performance such as the sparsity and number of patterns and different types of pattern noise. We compare results from numerical simulations of a morphologically realistic conductance-based Purkinje cell model (as in [2]) with those of a simple linear artificial neural network (ANN) unit. Further, to better understand the results of the numerical simulations, we perform a mathematical analysis of the pattern recognition performance of the ANN unit. As in previous work, we quantify pattern recognition performance by calculating a signal-to-noise (s/n) ratio [2, 5].

The simulations and analysis of the ANN unit predict that adding LTP to the learning rule does not affect the pattern recognition performance, given that the mean and variance of the responses, which appear in the numerator and denominator of the s/n ratio, respectively, are equally affected by LTP. In contrast, the pattern recognition performance of the Purkinje cell model was sensitive to the average synaptic weight, which determined both the spontaneous spike rate and the response to pattern presentation. Adding LTP to the Purkinje cell model made nsLTD equivalent or superior to LTD at all noise levels. Moreover, the LTP-based normalisation of weights prevented the Purkinje cell responses from becoming too weak and increased the number of patterns that could be stored for a given s/n ratio by a factor of 4. Finally, we show that our previous conclusions hold over a large range of pattern loadings and sparsities, and that local additive pattern noise can further increase the beneficial effect of nsLTD.
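One common form of such an s/n ratio (the exact definition used in [2, 5] should be treated as an assumption here) divides the squared separation of the mean responses to learned and novel patterns by the average response variance:

```python
import numpy as np

def snr(learned, novel):
    """Signal-to-noise ratio for pattern recognition:
    squared mean separation over the average response variance,
    s/n = (mu_s - mu_n)^2 / (0.5 * (var_s + var_n))."""
    learned = np.asarray(learned, dtype=float)
    novel = np.asarray(novel, dtype=float)
    num = (learned.mean() - novel.mean()) ** 2
    den = 0.5 * (learned.var() + novel.var())
    return num / den

# Hypothetical responses (e.g. spike counts) to stored vs. novel patterns.
print(snr([10, 12, 11, 9], [20, 22, 21, 19]))  # 80.0
```

A uniform scaling of all responses that shifts both means and inflates both variances equally leaves this ratio unchanged, which is the intuition behind the ANN prediction above.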


1. Marr D: A theory of cerebellar cortex. J Physiol 1969, 202:437–470.

2. Steuber V, Mittmann W, Hoebeek FE, Silver RA, De Zeeuw CI, Hausser M, De Schutter E: Cerebellar LTD and pattern recognition by Purkinje cells. Neuron 2007, 54:121–136.

3. Wang SS, Khiroug L, Augustine GJ: Quantification of spread of cerebellar long-term depression with chemical two-photon uncaging of glutamate. Proc Natl Acad Sci USA 2000, 97:8635–8640.

4. Schonewille M, Belmeguenai A, Koekkoek SK, Houtman SH, Boele HJ, van Beugen BJ, Gao Z, Badura A, Ohtsuki G, Amerika WE, Hosy E, Hoebeek FE, Elgersma Y, Hansel C, De Zeeuw CI: Purkinje cell-specific knockout of the protein phosphatase PP2B impairs potentiation and cerebellar motor learning. Neuron 2010, 67:618–628.

5. Safaryan K, Maex R, Adams RG, Davey N, Steuber V: Non-specific LTD at parallel fibre - Purkinje cell synapses in cerebellar cortex provides robustness against local spatial noise during pattern recognition. BMC Neuroscience 2011, 12:P314.

P209 Modeling causality of the smoking brain

Rongxiang Tang1, Yi-Yuan Tang2

1Department of Psychology, Washington University in St. Louis, St. Louis, MO 63130, USA; 2Department of Psychological Sciences, Texas Tech University, TX 79409, USA

Correspondence: Yi-Yuan Tang (

BMC Neuroscience 2017, 18 (Suppl 1):P209

Previous studies indicated that brain areas including the prefrontal cortex (e.g., medial prefrontal cortex, mPFC), posterior cingulate cortex (PCC) and insula are involved in smoking addiction [1]. However, functional connectivity among these regions only shows correlative relationships and does not reveal causal relationships, such as the changes in information flow across the distributed brain areas involved in smoking. In prior studies [2-3], we applied a newly developed spectral dynamic causal modeling (spDCM) method to resting state fMRI to demonstrate the causal relationships among the core regions in smoking addiction. Our results suggested that, compared to nonsmokers, smokers had reduced effective connectivity from PCC to mPFC and from the right inferior parietal lobule (R-IPL) to mPFC, higher self-inhibition within PCC, and a reduction in the amplitude of endogenous neuronal fluctuations driving the mPFC [2]. Granger causality (GC) and DCM are the two main causality methods; they have distinct but complementary aims, usefully considered in relation to the detection of functional connectivity and the identification of models of effective connectivity [4-5]. It is therefore important to compare the two models on the same dataset.

We used the dataset of college students previously reported in our study [2]. All fMRI data were collected using a 3-Tesla Siemens Skyra scanner and processed using the Data Processing Assistant for Resting-State fMRI, which is based on SPM and the Resting-State fMRI Data Analysis Toolkit [2-3]. For the fMRI analyses, we conducted the standard procedures including slice timing, motion correction, regression of WM/CSF signals and spatial normalization [3]. A standard GC analysis was then applied to test causality among key regions involved in smoking [5-6]. Based on the previous literature, in this study we specified four regions of interest within the default mode network (DMN) - medial prefrontal cortex (mPFC), posterior cingulate cortex (PCC), and bilateral inferior parietal lobule (left IPL and right IPL) - using the same coordinates as in our previous spDCM work [2]. Our results showed similar causal relationships among these brain areas.
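The GC analysis here follows [5-6]; as a generic illustration of the underlying idea (not the authors' pipeline), lag-1 Granger causality between two region time series can be estimated by comparing the residual variances of a restricted model (the target's own past) and a full model (its own past plus the source's past):

```python
import numpy as np

def granger_lag1(x, y):
    """Lag-1 Granger causality x -> y: log ratio of residual variances of the
    restricted model (y's own past) and the full model (y's and x's past)."""
    Y = y[1:]
    own = np.column_stack([np.ones(len(Y)), y[:-1]])   # restricted regressors
    full = np.column_stack([own, x[:-1]])              # add x's past
    r_own = Y - own @ np.linalg.lstsq(own, Y, rcond=None)[0]
    r_full = Y - full @ np.linalg.lstsq(full, Y, rcond=None)[0]
    return np.log(np.var(r_own) / np.var(r_full))

# Synthetic pair of "ROI" signals in which x drives y with a one-step delay.
rng = np.random.default_rng(1)
x = rng.normal(size=2000)
y = np.empty_like(x)
y[0] = 0.0
for t in range(1, len(x)):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.normal()

print(granger_lag1(x, y) > granger_lag1(y, x))  # True: x Granger-causes y
```

In practice one would use many lags and formal F-tests (as in Seth's toolbox [5]); this sketch only shows why predictability gain, rather than correlation, is the quantity of interest.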

Conclusions: GC and DCM are complementary: both are concerned with directed causal interactions. GC models dependency among observed responses, while DCM models coupling among the hidden states generating observations. Despite this fundamental difference, the two approaches may be converging.


This work was supported by the Office of Naval Research.


1. Goldstein RZ, Volkow ND: Dysfunction of the prefrontal cortex in addiction: Neuroimaging findings and clinical implications. Nat Rev Neurosci 2011, 12:652–669.

2. Tang R, Razi A, Friston KJ, Tang YY: Mapping smoking addiction using effective connectivity analysis. Frontiers in Human Neuroscience. 2016, 10:195.

3. Razi A, Kahan J, Rees G, Friston KJ: Construct validation of a DCM for resting state fMRI. Neuroimage 2015, 106:1–14.

4. Friston K, Moran R, Seth AK: Analysing connectivity with Granger causality and dynamic causal modelling. Curr Opin Neurobiol. 2013, 23:172–8.

5. Seth AK: A MATLAB toolbox for Granger causal connectivity analysis. J Neurosci Meth 2010, 186:262–273.

6. Zhao Z, Wang X, Fan M, Yin D, Sun L, Jia J, Tang C, Zheng X, Jiang Y, Wu J, Gong J: Altered effective connectivity of the primary motor cortex in stroke: a resting-state fmri study with Granger causality analysis. PLoS One. 2016, 11:e0166210.

P210 Modelling of calcium waves in astrocytic networks induced by neural activity

Darya V. Verveyko1, Alexey R. Brazhe2, Andrey Yu Verisokin1, Dmitry E. Postnov3

1Department of Theoretical Physics, Kursk State University, Kursk, 305000, Russian Federation; 2Department of Biophysics, Lomonosov Moscow State University, Moscow, 119991, Russian Federation; 3Department of Physics, Saratov State National Research University, Saratov, 410012, Russian Federation

Correspondence: Darya V. Verveyko (

BMC Neuroscience 2017, 18 (Suppl 1):P210

We propose a two-compartment model of calcium dynamics in astrocytic networks, based on the Ullah model [1]. In order to account for the specific features of different parts of the astrocyte network, we distinguish three types of modelling space: astrocyte somata with thick branches, thin branches, and extracellular space. We have developed two variants of the equation set, which differ in the relative contribution of specific ionic currents. We suppose that activation of astrocytic calcium dynamics is mediated by the extracellular space, specifically via diffusion of synaptic glutamate released by neuronal activity, which we describe as a random signal incorporating noise effects.

We have performed a number of simulation runs with different parameter sets for an individual astrocyte and for a multi-cell network. One example simulation within the computational multi-cell template is given in Figure 1. The global wave, emerging at one point, passes through a wide region of the astrocyte network. The formation of the wave shows a high degree of regularity and periodicity. There are also local regimes in which excitation waves are damped after passing through a small number of cells.

Figure 1. Calcium global wave in a multi-cell ensemble. A. Representative snapshots of spatial patterns. Numbers from 1 to 8 indicate the cells according to their involvement in the firing pattern. B, C. The time courses of cytosolic Ca2+ and IP3 concentrations, respectively

Conclusions: We have proposed an advanced model of astrocyte network dynamics that fits recent experimental findings well [2]. Specifically, we have developed model equations for intra-astrocyte calcium dynamics that take the network's specific topological features into account. We have tested the suggested approach on both an individual cell image and a multi-cellular structure. The obtained results confirm that our model is able to reproduce the evolution of spatio-temporal dynamics under neuronal activity, represented by a spatially uncorrelated and temporally randomized process of glutamate injection. In the multicellular system, a persistent self-organized rhythmicity of calcium activity in cell groups was found, which can be explained by an interplay between the refractory time of calcium excitability and noise-triggered processes.


This work is partially supported by the Ministry of Education and Science of the Russian Federation within the research project №3.9499.2017 included into the basic part of research funding assigned to Kursk State University.


1. Ullah G, Jung P, Cornell-Bell AH: Anti-phase calcium oscillations in astrocytes via inositol (1,4,5)-trisphosphate regeneration. Cell Calcium 2006, 39:197–208.

2. Falcke M: Reading the patterns in living cells - the physics of Ca2+ signaling. Adv Phys 2004, 53(3):255–440.

P211 Simulated voltage clamp: offline biophysical reconstruction of fast ionic currents in large cells with uncompensated series resistance

Cengiz Günay1,2, Gabriella Panuccio3, Michele Giugliano3, Astrid A. Prinz1

1Dept. Biology, Emory University, Atlanta, Georgia 30322, USA; 2School of Science and Technology, Georgia Gwinnett College, Lawrenceville, Georgia 30043, USA; 3Theoretical Neurobiology & Neuroengineering Lab, Dept. Biomedical Sciences, University of Antwerp, Antwerp, Belgium

Correspondence: Cengiz Günay (

BMC Neuroscience 2017, 18 (Suppl 1):P211

Characterization of ion channel kinetics from voltage-clamp experiments is inherently biased by the non-linear voltage error introduced by the resistance of the recording pipette in series with the membrane resistance (series resistance, Rs) [1]. Modern patch-clamp amplifiers provide built-in circuits for on-line Rs compensation. However, because of the nature of these circuits, it is theoretically impossible to achieve 100% Rs compensation without losing stability of the recording. Moreover, fast voltage-dependent ionic currents, like sodium (Na+) currents, require high-bandwidth operation of the Rs compensation circuit, which in turn can result in sudden oscillations of the cell membrane voltage (Vm). Consequently, Rs compensation is currently a trade-off between a commonly accepted error tolerance and the crucial need to prevent oscillations. Here, we build a novel “simulation method” as a new component of a previously developed computational framework [2] to overcome these limitations. In contrast to the amplifier’s strategy of forcing a flat voltage waveform, which is required for generating conventional current-voltage plots of peak ionic currents, we allow arbitrary voltage waveforms by simulating voltage clamp in a computational neuron model and then curve-fitting its output to the recordings to directly estimate the Hodgkin-Huxley model parameters of the channel. The kinetic parameters so obtained are used to reconstruct the unbiased current trace. We demonstrate our method using voltage-clamp recordings of Na+ currents from ‘giant’ layer V pyramidal cells of the rat primary somatosensory cortex in the presence of uncompensated, significantly high (10-20 MΩ) Rs and the low input resistance (~40 MΩ) typical of these cells, so as to maximize the compound voltage-clamp errors. As shown in Figure 1, the model computes non-linear artifact currents and predicts the actual Vm values.
When Rs compensation is a major concern for the reliability of voltage-clamp data, our approach is capable of overcoming the limitations posed by currently available hardware- and software-based Rs compensation methods, thus allowing full reconstruction of the actual current kinetics.
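The size of the problem at these resistance values can be seen from a back-of-the-envelope steady-state calculation for a passive cell (the resting and command potentials below are assumed for illustration; this is not the authors' model). The pipette (Rs) and input resistance (Rin) form a voltage divider, so the cell sees only a fraction of the command step:

```python
# Steady-state membrane voltage under voltage clamp with uncompensated Rs.
Rs = 15e6       # series resistance, mid-range of the 10-20 MOhm quoted above
Rin = 40e6      # input resistance, ~40 MOhm as in the abstract
E = -70e-3      # resting potential (assumed value)
V_cmd = -20e-3  # command potential during the step (assumed value)

# At steady state the current through Rs equals the current through Rin:
# (V_cmd - Vm)/Rs = (Vm - E)/Rin, solved for Vm.
Vm = (V_cmd / Rs + E / Rin) / (1 / Rs + 1 / Rin)
error_mV = (V_cmd - Vm) * 1e3
print(round(Vm * 1e3, 1), round(error_mV, 1))  # cell sits well over 10 mV short of command
```

With Rs comparable to Rin, the steady-state error alone exceeds 10 mV, before even considering the transient, current-dependent errors during fast Na+ activation that the simulation method is designed to reconstruct.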

Figure 1. Offline subtraction of estimated amplifier-unaccounted passive currents. A. Raw recordings of Na+ currents contaminated by uncompensated artifacts (top) recorded during the corresponding voltage steps (bottom trace). B. Passive artifacts subtracted from the current traces (top) and actual Vm (bottom) estimated using the model simulation method. Note how the actual Vm differs significantly from the desired holding voltage-steps (see panel A)


Career Award at the Scientific Interface (CASI) from the Burroughs Wellcome Fund awarded to AAP.


1. Sakmann, B., and Neher, E. Single-Channel Recording. 2nd Edition, (Springer Science & Business Media, Plenum Press, New York, 1995).

2. Günay C, Edgerton JR, Li S, Sangrey T, Prinz AA, and Jaeger D. Database analysis of simulated and recorded electrophysiological datasets with PANDORA’s toolbox. Neuroinformatics 2009, 7: 93–111.

P212 Representing and implementing cognitive sequential interactions

Pablo Varona1, Mikhail I. Rabinovich2

1Grupo de Neurocomputación Biológica, Dpto. de Ingeniería Informática, Escuela Politécnica Superior, Universidad Autónoma de Madrid, Madrid, Spain; 2BioCircuits Institute, University of California, San Diego, CA, USA

Correspondence: Pablo Varona (

BMC Neuroscience 2017, 18 (Suppl 1):P212

Cognition as observed by imaging experiments involves sequential activations of different brain regions [1]. The sequential nature of most aspects of cognition is also reflected in the progression of successive components of decision-making and behavior. In this work, we present a family of models that describe hierarchical relationships among cognitive processes represented with robust sequential dynamics. These models build heteroclinic networks based on the winnerless competition principle, where asymmetric inhibition shapes key properties for sequential information processing. The robustness of the sequential dynamics in these networks relies on stable heteroclinic channels: sequences of metastable states and their vicinities, connected by separatrices that link them in a chain.

The models described in this work are implemented with generalized Lotka-Volterra equations whose variables can represent perceived information items as well as cognitive resources such as attention, working memory and emotion [2–5]. Their hierarchical interactions give rise to binding and chunking processes. We discuss applications of these models in three different contexts: (i) the characterization of decision-making in terms of the sequential evolution of incoming information and the hierarchical organization of cognitive resources in time; (ii) the use of these models to build joint robot-human interactions, which results in an increased joint creativity of such teams; (iii) the use of these models to drive closed-loop stimulation in novel experiments to reveal healthy and pathological dynamics of cognitive processes in normal subjects and in subjects with cognitive impairments. The considered dissipative models are in general structurally stable and suitable for bifurcation analysis, which helps their interpretation in relation to experimental data. Their robustness and computational efficiency also make them adequate for real-time implementation in the proposed applications.
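A minimal sketch of winnerless competition with generalized Lotka-Volterra dynamics (the three-item system and parameter values here are illustrative, not those of the cited models): da_i/dt = a_i(σ_i − Σ_j ρ_ij a_j), with an asymmetric inhibition matrix ρ that produces a robust heteroclinic sequence of metastable states.

```python
import numpy as np

sigma = np.ones(3)
# Asymmetric inhibition: each item is strongly inhibited by one neighbour
# and weakly by the other, which yields a heteroclinic cycle 0 -> 1 -> 2 -> 0.
rho = np.array([[1.0, 2.0, 0.5],
                [0.5, 1.0, 2.0],
                [2.0, 0.5, 1.0]])

a = np.array([0.9, 0.05, 0.05])     # start near the first metastable state
dt, steps = 0.01, 60000
winners = []
for _ in range(steps):
    a = a + dt * a * (sigma - rho @ a)   # Euler step of the GLV equations
    a = np.maximum(a, 1e-9)              # keep activities positive (noise floor)
    winners.append(int(np.argmax(a)))

# The dominant item switches sequentially, visiting all three states.
print(sorted(set(winners)))
```

The small floor on the activities plays the role of noise, regularizing the heteroclinic cycle into a periodic sequence of metastable epochs; in the full models this role is played by stochastic input, and the dwell times in each state carry the cognitive timing information.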

Overall, we stress the need to interpret brain imaging experiments in the context of theoretical studies that describe information flows corresponding to sequential cognitive processes. The coarse-grained information of current imaging techniques can be matched to the variables represented in the proposed network models. The results of such analyses can lead to novel insights linking networks graphs to cognitive dynamics, and the development of novel technology for rehabilitation purposes and artificial cognition.


This work was funded by MINECO/FEDER DPI2015-65833-P ( and ONRG grant N62909-14-1-N279 (PV) and by ONR MURI 14-13-1-0205 and MURI N00014-13-1-0678 (MIR)


1. Daselaar SM, Rice HJ, Greenberg DL, Cabeza R, LaBar KS, Rubin DC. The spatiotemporal dynamics of autobiographical memory: Neural correlates of recall, emotional intensity, and reliving. Cereb. Cortex. 2008; 18:217–29.

2. Rabinovich MI, Afraimovich VS, Bick C, Varona P. Information flow dynamics in the brain. Phys. Life Rev. 2012; 9:51–73.

3. Rabinovich MI, Tristan I, Varona P. Hierarchical nonlinear dynamics of human attention. Neurosci. Biobehav. Rev. 2015; 55:18–35.

4. Rabinovich MI, Simmons AN, Varona P. Dynamical bridge between brain and mind. Trends Cogn. Sci. 2015; 19:453–461.

5. Varona P, Rabinovich MI. Hierarchical dynamics of informational patterns and decision making. Proc. R. Soc. B. 2016; 283:20160475.

P213 An integrated neuro-mechanical model of C. elegans locomotion

Jack Denham, Thomas Ranner, Netta Cohen

School of Computing, University of Leeds, Leeds, LS2 9JT, UK

Correspondence: Jack Denham (, Thomas Ranner (T., Netta Cohen (

BMC Neuroscience 2017, 18 (Suppl 1):P213

Across the animal kingdom, the generation and modulation of motor behaviour is attributed to Central Pattern Generators (CPGs), neural circuits that endogenously produce oscillations. The ubiquity of CPGs prompts the use of coupled oscillator models to describe neural activity and the generation of behaviour. However, CPGs have not been identified in the forward locomotion system of the small roundworm Caenorhabditis elegans. In this case, a proprioceptive mechanism, in which motor neurons respond to local body stretch, is thought to drive sustained body undulations. Since the wavelength and frequency of oscillations have been shown to depend on the visco-elasticity of the surrounding medium [1], it is important to include environmental effects in such locomotion models [1, 2]. This requires integrating the nervous system and body mechanics in a continuous feedback loop that is able to adapt in response to environmental changes. Here, a biologically grounded model describing neural activity (adapted from [1]) is integrated into a novel continuum soft-body model [2]. We present a dynamical-systems description of the local pattern generation mechanism with fictive proprioceptive feedback and compare this with the actual feedback in whole-body simulations. The closed-loop neuro-mechanical model is demonstrated to produce realistic travelling waves down the body in silico. The effect of the material properties of the body is also investigated.


1. Boyle JH, Berri S, Cohen N: Gait modulation in C. elegans: an integrated neuro-mechanical model. Frontiers in Computational Neuroscience 2012, 6:10.

2. Cohen N, Ranner T: A new computational method for a model of C. elegans biomechanics: Insights into elasticity and locomotion performance, arXiv:1702.04988, 2017.

P214 A computational approach to understanding functional synaptic diversity: the role of nanoscale topography of Ca2+ channels and synaptic vesicles

Maria Reva1, Nelson Rebola1, Tekla Kirizs2, Zoltan Nusser2, David DiGregorio1

1Laboratory of Dynamic Neuronal Imaging, Neuroscience Department, Institute Pasteur, Paris, France, 75015; 2Institute of Experimental Medicine, Hungarian Academy of Sciences, Budapest, Hungary, 1083

Correspondence: Maria Reva (

BMC Neuroscience 2017, 18 (Suppl 1):P214

Understanding the spatial relationship between synaptic vesicles and voltage-gated Ca2+ channels (VGCCs) is critical for deciphering the determinants of synaptic strength, time course, and plasticity. Furthermore, synaptic strength, even within a homogeneous population of synapses, is highly heterogeneous, and the underlying mechanisms are poorly understood. We hypothesize that variations in the nanoscale organization of VGCCs and synaptic vesicles contribute to the diversity of synaptic function observed throughout the brain [1]. Because VGCCs and synaptic vesicles can be as close as 10-20 nm, direct experimental observation of the spatio-temporal dynamics driving synaptic vesicle fusion remains challenging. We have therefore taken a computational approach, simulating the spatio-temporal dynamics of Ca2+-triggered vesicle fusion to examine channel-vesicle topographies that are consistent with experimental findings.

To understand the influence of topography on synaptic diversity, we performed Monte Carlo (MC) simulations designed to predict the different functional behavior of inhibitory and excitatory terminals within the cerebellar cortex. Model parameters were constrained by experimental data (such as single-channel open probability, Ca2+ buffer kinetics, etc.), leaving only the topographical arrangement of VGCCs and the location of the release sensor as variables. In addition, we analyzed replicas in which the VGCC subunit Cav2.1 was labeled. Using Ripley's analysis and mean nearest neighbor distance (NND) calculations, we concluded that the distribution of the Cav2.1 subunit was significantly different from complete spatial randomness in both excitatory and inhibitory axon terminals. Using cluster analysis, we then determined that inhibitory terminals exhibited small clusters, while the labeling on excitatory boutons appeared more amorphous. We therefore considered an arrangement based on a few simple rules: VGCCs and vesicles were placed randomly within the active zone (AZ), but with a minimal separation; we call this the exclusion zone (EZ) model. The EZ model produced channel NND distributions that were consistent with the electron microscopy data. We then performed reaction-diffusion MC simulations, considering a perimeter-coupled model for inhibitory terminals and the exclusion topography for excitatory terminals. Our simulations predicted well the experimental data on Ca2+ chelator inhibition of synaptic release (EGTA inhibition) and release probability.
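As a generic illustration of the NND comparison against complete spatial randomness (CSR) described above (this is standard point-pattern code with made-up geometry, not the authors' analysis pipeline):

```python
import numpy as np

def mean_nnd(points):
    """Mean nearest-neighbour distance of a 2D point pattern."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)           # exclude self-distances
    return d.min(axis=1).mean()

rng = np.random.default_rng(0)
L, n = 1000.0, 200                        # hypothetical 1000 x 1000 nm AZ, 200 channels
csr = rng.uniform(0, L, size=(n, 2))      # CSR-like random placement

# Clustered placement: the same number of channels packed around 10 centres.
centres = rng.uniform(0, L, size=(10, 2))
clustered = centres[rng.integers(0, 10, n)] + rng.normal(0, 15.0, size=(n, 2))

expected_csr = 0.5 / np.sqrt(n / L**2)    # theoretical mean NND under CSR
print(mean_nnd(clustered) < mean_nnd(csr))  # True: clustering shrinks the mean NND
```

A clustered pattern pulls the empirical mean NND well below the CSR benchmark, while an exclusion-zone pattern pushes it above it, which is the signature used to discriminate the two arrangements in the replica data.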

Our results suggest that inhibitory terminals use small clusters of VGCCs to drive the fusion of vesicles located at their periphery (perimeter release model), as described previously at the excitatory calyx of Held synapse [2]. In contrast, excitatory synapses made by cerebellar parallel fibers require a more random placement of up to 3 times more VGCCs within the AZ, as well as random placement of vesicles with an exclusion zone of >40 nm. We therefore suggest that the nanoscale distribution of VGCCs and synaptic vesicles differs among synapses and is a key factor underlying functional synaptic diversity.


1. Chabrol FP, Arenz A, Wiechert MT, Margrie TW, DiGregorio DA: Synaptic diversity enables temporal coding of coincident multisensory inputs in single neurons. Nat Neurosci 2015, 18(5): 718–727.

2. Nakamura Y, Harada H, Kamasawa N, Matsui K, Rothman JS, Shigemoto R, Silver RA, DiGregorio DA, Takahashi T: Nanoscale distribution of presynaptic Ca(2 +) channels and its impact on vesicular release during development. Neuron 2015, 85(1): 145–158.

P215 Is object saliency perceived differently cross-culturally: a computational modelling study

Eirini Mavritsaki1,2, Panos Rentzelas1

1Department of Psychology, Birmingham City University, Birmingham, UK; 2School of Psychology, University of Birmingham, Birmingham, UK

Correspondence: Eirini Mavritsaki (

BMC Neuroscience 2017, 18 (Suppl 1):P215

Research on cross-cultural differences in visual attention has identified that cultural membership influences performance in object perception [1, 2]. Participants from collectivist backgrounds focus more on the background (distractors) and omit target-relevant information, while participants from individualist backgrounds tend to attend to the target and omit the background information. Previous modelling work from our lab [3] predicted that cultural membership influences performance in visual search tasks. The results showed that the simulated efficiency of participants from the individualist group is significantly higher than that of participants from the collectivist group when the task is to identify a target amongst distractors in a classical easy visual search. Work in our lab subsequently confirmed these predictions. Preliminary behavioral data support the idea that the effect remains even if the target is more salient than the distractors. Here, this difference is simulated and explored further by investigating changes in the effect for different levels of saliency, using the binding Search over Time and Space (bsSoTS) computational model [4, 5] as a predictor of behavior.

bsSoTS is based on integrate-and-fire neurons that are tightly connected when they encode a specific characteristic of an item presented at one position in the visual field, and loosely connected when they encode the same characteristic for items presented at different positions in the visual field. Moreover, the model incorporates a number of synaptic currents and processes that allowed us to successfully simulate the visual search experiment [4, 5]. In research, cultural membership is usually investigated by comparing collectivist (Asian cultures) and individualist groups (Western European cultures) [1, 2]. The experiments that bsSoTS has simulated so far are based on individualist groups [4, 5]. Therefore, to simulate the difference in behavior between collectivists and individualists, we need to simulate the difference observed in collectivist cultures. To do so, we treated the coupling between the neurons that encode a specific item presented at one position in the visual field as a saliency parameter. The same parameter was used in preliminary modelling work in our lab [3].

The results showed that the saliency parameter successfully simulates the behavioral results. Additionally, further behavioral work is proposed to investigate the relationship between the different saliency levels and the observed effect.


1. Nisbet RE, Masuda T: Culture and point of view. Proceedings of the National Academy of Sciences of the United States of America 2003, 100: 11163–11170.

2. Nisbet RE, Peng K, Choi I, Norenzayan A: Culture and systems of thought: Holistic versus analytic cognition. Psychological Review 2001, 108: 291–310.

3. Mavritsaki E, Rentzelas P: Cross-cultural differences in visual attention: A computational modelling study. BMC Neuroscience, 16: 204.

4. Mavritsaki E, Humphreys GW: Temporal binding and segmentation in Visual Search: A computational neuroscience analysis. Journal of Cognitive Neuroscience 2015, 28: 1553–1567

5. Mavritsaki E, Heinke D, Allen HA, Deco G, Humphreys GW: Bridging the gap between physiology and behavior: Evidence from the sSoTS model of human visual attention. Psychological Review 2011, 118: 3–41.

P216 NeuroNLP: a natural language portal for aggregated fruit fly brain data

Nikul H. Ukani1, Adam Tomkins2, Chung-Heng Yeh1, Wesley Bruning3, Allison L. Fenichel4, Yiyin Zhou1, Yu-Chi Huang5, Dorian Florescu2, Carlos Luna Ortiz2, Paul Richmond6, Chung-Chuan Lo5, Daniel Coca2, Ann-Shyn Chiang5, Aurel A. Lazar1

1Department of Electrical Engineering, Columbia University, New York, NY 10027, USA; 2Department of Automatic Control & Systems Engineering, The University of Sheffield, Sheffield, S1 3JD, UK; 3Department of Computer Science, Columbia University, New York, NY 10027, USA; 4Data Science Institute, Columbia University, New York, NY 10027, USA; 5Brain Research Center, National Tsing Hua University, Hsinchu 30013, Taiwan; 6Department of Computer Science, The University of Sheffield, Sheffield, S1 4DP, UK

Correspondence: Aurel A. Lazar (

BMC Neuroscience 2017, 18 (Suppl 1):P216

NeuroNLP, a key application on the Fruit Fly Brain Observatory [1] platform (FFBO,, provides a modern web-based portal for navigating fruit fly brain circuit data. Increases in the availability and scale of fly connectome data demand new, scalable and accessible methods to facilitate investigation into the functions of the complex circuits being uncovered. By combining data from multiple sources into a single database with a common data model, NeuroNLP facilitates access to data from various sources simultaneously. It is built on top of the NeuroArch database [2], which codifies fly connectome data from both the FlyCircuit database [3] and the Janelia Fly Medulla data [4]. The former hosts meso-scale connectome data at the whole-brain level and the latter contains detailed, micro-scale synaptic information about the Medulla neuropil. NeuroNLP allows users to probe biological circuits in the NeuroArch database with plain English queries, such as “show glutamatergic local neurons in the left antennal lobe” and “show neurons with dendrites in the left mushroom body and axons in the fan-shaped body”, replacing the cumbersome menus prevalent in today’s neurobiological databases. This enables in-depth exploration and investigation of the structure of brain circuits, using intuitive natural language queries that are capable of revealing latent structure and information. Equipped with powerful 3D visualization, NeuroNLP standardizes tools and methods for graphical rendering, representation, and manipulation of brain circuits, while integrating with existing databases such as FlyCircuit. It currently supports queries to show, add, filter and remove neurons based on 1) the parent neuropil, 2) neuron type (local or projection), 3) dendritic/axonal arborization, 4) neurotransmitter and 5) related postsynaptic or presynaptic neurons. The graphical user interface complements the natural language queries with additional controls for exploring neural circuits.
Designed with an open-source, modular structure, it is highly scalable and extensible to additional databases and languages. Accessible through a laptop or smartphone (Figure 1) at, NeuroNLP significantly increases the accessibility of fruit fly brain data, streamlining the way we explore and interrogate distal data sources to open new avenues of research, and enrich neuroscience education.

Figure 1. Smartphone screenshot of NeuroNLP showing 16 lobula plate tangential cells. Each neuron can be cross-linked to the FlyCircuit Database (left panel)


1. Ukani NH, Yeh C-H, Tomkins A, Zhou Y, Florescu D, Ortiz CL, Huang Y-C, Wang C-T, Richmond P, Lo C-C et al., The Fruit Fly Brain Observatory: from structure to function. Neurokernel Request for Comments, Neurokernel RFC #7, 2016. DOI:

2. Givon LE, Ukani NH, Lazar AA, NeuroArch: A Graph dB for Querying and Executing Fruit Fly Brain Circuits, Neurokernel Request for Comments, Neurokernel RFC #4, 2015. DOI:

3. Chiang A-S, Lin C-Y, Chuang C-C, Chang H-M, Hsieh C-H, Yeh C-W, Shih C-T, Wu J-J, Wang G-T, Chen Y-C et al., Three-dimensional reconstruction of brain-wide wiring networks in Drosophila at single-cell resolution. Cell 2011, 21(1):1–11.

4. Takemura S, Xu CS, Lu, Z, Rivlin PK, Parag T, Olbris DJ, Plaza S, Zhao T, Katz WT, Umayam L et al., Synaptic circuits and their variations within different columns in the visual system of Drosophila. PNAS 2015, 112(44):13711–13716.

P217 Towards prediction of plasticity response to paired cTBS from resting state network connectivity

Bahar Moezzi1, Brenton Hordacre1, Mitchell R. Goldsworthy1,2, Michael C. Ridding1

1Robinson Research Institute, School of Medicine, University of Adelaide, Adelaide, Australia; 2Discipline of Psychiatry, School of Medicine, University of Adelaide, Adelaide, Australia

Correspondence: Bahar Moezzi (

BMC Neuroscience 2017, 18 (Suppl 1):P217

Paired continuous theta burst stimulation (cTBS) is a non-invasive brain stimulation technique that can induce neuroplastic change in the primary motor cortex [1]. The response shows high intersubject variability, and a marker that could predict the response would be useful in many situations. Our hypothesis is that a more strongly connected cortical network is associated with a greater plasticity response. To test this hypothesis, we quantify the correlation between graph theoretical measures of EEG connectivity and the plasticity response to paired cTBS, using state-of-the-art methodologies to provide biological markers that may predict the response to paired cTBS.

We tested eighteen healthy adults (8 male, 1 left-handed) with a mean age of 24.2 years (SD 6.0). Three minutes of continuous eyes-open resting state EEG were acquired. Baseline MEPs (n = ?) were recorded, and then paired cTBS was applied to the left primary motor cortex, followed by three blocks of 20 TMS pulses. Surface EMG was used to record the motor evoked potential from the right first dorsal interosseous (FDI) muscle. EEG data were preprocessed and artefacts removed.

Graph theory provides a method to characterize the brain as a set of nodes interconnected by a set of edges [2]. It has been suggested that an intracortical electrical-source approach to graph theoretical analysis of EEG data is superior to analysis at the surface level. The debiased weighted phase lag index is used as a measure of functional connectivity in source space among the regions of interest. The connectivity matrix is thresholded and a graph is constructed. Several graph theoretical measures, including degree, density, distance, clustering coefficient and characteristic path length, are computed. Each participant's plasticity response to paired cTBS is correlated with that participant's graph theoretical measures (at each region of interest).
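A sketch of this graph-construction step follows, with random weights standing in for the debiased weighted phase lag index values and an arbitrary threshold (the ROI count and threshold are illustrative assumptions, not the study's values):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8                                # number of regions of interest (assumed)
W = rng.uniform(0, 1, (n, n))
W = (W + W.T) / 2                    # symmetrise: undirected connectivity
np.fill_diagonal(W, 0)
A = (W > 0.5).astype(int)            # thresholded binary adjacency matrix

degree = A.sum(axis=1)               # edges per node
density = A.sum() / (n * (n - 1))    # fraction of possible edges present

# Local clustering coefficient: triangles through node i over possible
# neighbour pairs (diagonal of A^3 counts closed walks of length 3).
triangles = np.diag(A @ A @ A) / 2
pairs = degree * (degree - 1) / 2
clustering = np.divide(triangles, pairs,
                       out=np.zeros_like(triangles), where=pairs > 0)

print(degree, round(density, 2), clustering.round(2))
```

Characteristic path length additionally requires shortest paths (e.g. breadth-first search on A); per-node values of these measures are then the candidate predictors to correlate with the cTBS response.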

Preliminary analysis shows that the distance from the site of stimulation is associated with the response to paired cTBS, while degree, density, clustering coefficient and characteristic path length are not. These findings suggest that graph theoretical measures of network connectivity may have some utility in predicting the neuroplasticity response to paired cTBS.


1. Goldsworthy MR, Pitcher JB, Ridding MC: Neuroplastic modulation of inhibitory motor cortical networks by spaced theta burst stimulation protocols. Brain stimul 2013, 6:340–345.

2. Bullmore ET, Sporns O: Complex brain networks: graph theoretical analysis of structural and functional systems. Nature Rev Neurosci 2009, 10:186–98.

P218 Mathematical analysis of transient “domino effect”-like brain dynamics

Jennifer L. Creaser1, Congping Lin1, Peter Ashwin1, Jonathan T. Brown2, Thomas Ridler2

1Department of Mathematics, University of Exeter, Exeter, EX4 4QD, UK; 2Institute of Biomedical and Clinical Sciences, University of Exeter Medical School, Exeter, EX4 4PS, UK

Correspondence: Jennifer L. Creaser (

BMC Neuroscience 2017, 18 (Suppl 1):P218

Much research has been devoted to complex neurological diseases such as epilepsy and Alzheimer’s disease, yet much remains unknown. It has become clear that such diseases are associated with abnormal brain network function, including hyperexcitability. Brain network models used to study excitability are often characterized by different dynamic regimes, such as alternating rest and excited states. The transient dynamics responsible for transitions between these states are often discounted or overlooked in favour of the long-term asymptotic behaviour. However, analysis of these transitions is instrumental in understanding, for example, the onset and evolution of epileptic seizures.

We consider a model of seizure initiation represented by a network of diffusively coupled bi-stable neurones driven by noise. Nodes in the network can switch between a quiescent attractor and an active attractor due to noise fluctuations. We focus on the case of sequential escapes of nodes and the associated escape times. Understanding the factors controlling sequential transitions between attractors is important, as such transitions have been implicated in a diverse range of brain functions associated with neuronal timing, coding and integration, as well as coordination and coherence [1, 2]. Network properties such as the coupling and excitability of nodes can promote (or suppress) the escape of other nodes in the network. We aim to quantify and characterise the escape times in terms of the coupling and excitability of the nodes.
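A minimal version of such a noise-driven network of diffusively coupled bi-stable nodes can be simulated with Euler–Maruyama integration. The double-well form of the node dynamics, the ring coupling, and all parameter values below are illustrative assumptions, not the authors' model:

```python
import numpy as np

def escape_times(n=16, coupling=0.2, noise=0.5, grad=0.05,
                 dt=0.01, t_max=500.0, seed=0):
    """Euler-Maruyama simulation of diffusively coupled bistable nodes,
    x_i' = x_i - x_i^3 + nu_i + coupling * (Laplacian term) + noise,
    recording the first time each node escapes the quiescent well."""
    rng = np.random.default_rng(seed)
    x = -np.ones(n)                  # all nodes start in the quiescent well
    nu = grad * np.arange(n)         # linear excitability gradient
    esc = np.full(n, np.nan)         # escape time of each node
    for k in range(int(t_max / dt)):
        lap = np.roll(x, 1) + np.roll(x, -1) - 2.0 * x   # diffusive ring coupling
        x = x + dt * (x - x**3 + nu + coupling * lap)
        x = x + np.sqrt(dt) * noise * rng.standard_normal(n)
        newly = np.isnan(esc) & (x > 0.5)                # crossed to active state
        esc[newly] = (k + 1) * dt
    return esc
```

Sorting the returned escape times reproduces the sequential recruitment picture: more excitable nodes (larger `nu`) tend to escape first, and coupling pulls their neighbours after them.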

We apply our theoretical framework for escape times to the propagation of epileptiform activity in parasagittal brain slices containing mouse medial entorhinal cortex (mEC). We observe sequential recruitment of electrodes to the ictal-like state and can determine each electrode’s escape time, that is, its average burst start time. The sequential recruitment of electrodes to the ictal-like state can be seen as sequential escapes to an excited state in the underlying functional brain networks. We explore differences in intrinsic (node) excitability across the mEC by incorporating an excitability gradient into our prototypical bi-stable model. Figure 1 shows preliminary findings comparing the average burst start times observed in experiments (grey) and computed with the bi-stable model (black). In this presentation, I will address how a network’s structure and properties influence the sequential recruitment/escape of its nodes.

Figure 1. The average start time of ictal activity relative to the ventral-most channel, recorded along the dorso-ventral axis of the mEC in vitro using a 16-shank silicon probe array (grey), with the average start time for each channel computed using 1000 simulations of a unidirectionally coupled 16-node bi-stable system with a linear excitability gradient (black)


1. Rabinovich, MI, Pablo V: Robust transient dynamics and brain functions. Front Comput Neurosci 2011, 5: 24–33.

2. Rabinovich, MI, Ramon H, Gilles L: Transient dynamics for neural processing. Science 2008, 321(5885): 48–50.

P219 Synchronized neocortical dynamics during NREM sleep

Daniel Levenstein1,2, Brendon O. Watson2,3, György Buzsáki1,2, John Rinzel1,4

1Center for Neural Science, New York University, New York, NY, 10003, USA; 2NYU Neuroscience Institute, New York University, New York, NY, 10016, USA; 3Dept. of Psychiatry, Weill Cornell Medical Center, New York, NY, 10065, USA; 4Courant Institute for Mathematical Sciences, New York University, New York, NY, 10012, USA

Correspondence: Daniel Levenstein (

BMC Neuroscience 2017, 18 (Suppl 1):P219

During periods of behavioral quiescence such as NREM sleep, quiet wakefulness, and anesthesia, neocortical populations can show ‘synchronized dynamics’ [1]: low-frequency alternations between low-rate spiking (UP states) and population-wide inactivity (DOWN states). Previous work has indicated that these dynamics are mediated by the interaction of recurrent excitation and neuronal adaptation [1–3]. Using a Wilson-Cowan model (Figure 1A), we show that synchronized regimes arise at low levels of drive to a recurrent adapting neural population. Because both noise-induced and adaptation-induced transitions are possible, this type of oscillation can show a range of spectral properties and UP/DOWN state dwell-time statistics, which fall into 4 broad classes of synchronized regimes (Figure 1B). Using a nonparametric distribution-matching method, we find that this idealized model is able to reproduce the dwell-time statistics of UP/DOWN states from multiple behavioral contexts in vivo.
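A sketch of an adapting recurrent rate population in this spirit — a fast rate variable tracking a sigmoid of recurrent drive minus adaptation, with a slow adaptation variable tracking the rate — shows how UP/DOWN dwell times can be extracted from a simulated time course. The equations and every parameter value here are illustrative, not the authors' exact Wilson-Cowan model:

```python
import numpy as np

def simulate_updown(I=0.0, w=10.0, b=10.0, tau_r=0.005, tau_a=0.2,
                    noise=0.2, dt=0.0005, t_max=10.0, seed=1):
    """Stochastic adapting rate model: r relaxes to f(w*r - b*a + I),
    adaptation a slowly tracks r, noise perturbs the drive."""
    f = lambda u: 1.0 / (1.0 + np.exp(-u))       # sigmoid transfer function
    rng = np.random.default_rng(seed)
    steps = int(t_max / dt)
    r = np.empty(steps)
    a = np.empty(steps)
    r[0] = a[0] = 0.0
    for k in range(steps - 1):
        drive = w * r[k] - b * a[k] + I + noise * rng.standard_normal()
        r[k + 1] = r[k] + dt * (-r[k] + f(drive)) / tau_r
        a[k + 1] = a[k] + dt * (-a[k] + r[k]) / tau_a
    return r, a

def dwell_times(r, dt, thresh=0.5):
    """Durations of contiguous UP/DOWN runs, i.e. intervals between
    threshold crossings of the rate trace."""
    up = (r > thresh).astype(int)
    edges = np.flatnonzero(np.diff(up))          # indices of state switches
    return np.diff(edges) * dt
```

With these (assumed) parameters the slow adaptation produces relaxation-like UP/DOWN alternations, and `dwell_times` yields the dwell-time samples that a distribution-matching analysis would operate on.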

During NREM sleep [4], DOWN states are coincident with large deflections in the LFP/EEG in a stereotyped pattern termed the ‘slow oscillation’. Unlike synchronized dynamics in other behavioral states (e.g. [5]), we find that the NREM slow oscillation is best represented by an ‘ExcitableUP’ regime, in which noise or perturbation of a stable UP state can induce brief DOWN states (Figure 1C). Our model reveals a mechanistic basis for multiple features of NREM sleep that are thought to be related to mnemonic and homeostatic functions [6]: impulse-initiated slow waves and sequential activity at the DOWN->UP transition accompanied by gamma-band activity.

Figure 1. Synchronized dynamics in an adapting Wilson-Cowan model. A. Model schematic and equations. B. Synchronized regimes available to the model. (Left) Phase plane. (Right) Simulated time courses and dwell time distributions. C. State diagram in I-W reveals parameter domain for each synchronized regime. Color indicates similarity to NREM sleep. Solid/dashed line: saddle-node/Hopf bifurcations


1. Harris KD, Thiele A: Cortical state and attention. Nature Reviews Neuroscience 2011. 12(9):509–523.

2. Parga N, Abbott LF: Network model of spontaneous activity exhibiting synchronous transitions between up and down States. Frontiers in Neuroscience 2007; 1(1):57–66.

3. Compte A, Sanchez-Vives MV, McCormick DA, Wang XJ: Cellular and network mechanisms of slow oscillatory activity and wave propagations in a cortical network model. J. Neurophys 2003; 89(5):2707–2725.

4. Watson BO, Levenstein D, Greene JP, Gelinas JN, Buzsáki G: Network Homeostasis and State Dynamics of Neocortical Sleep. Neuron 2016; 90(4):839–852.

5. Mochol G, Hermoso-Mendizabal A, Sakata S, Harris KD, de la Rocha, J: Stochastic transitions into silence cause noise correlations in cortical circuits. PNAS 2015; 112(11):3529–3534.

6. Levenstein D, Watson BO, Rinzel J, Buzsáki G. Sleep regulation of the distribution of cortical firing rates. Current Opinion in Neurobiology 2017. In press.

P220 Accumulation process and multi-layer mechanisms of perceptual alternation in auditory streaming

Rodica Curtu1, Anh Nguyen1, John Rinzel2

1Department of Mathematics, The University of Iowa, Iowa City, IA 52242, USA; 2Courant Institute of Mathematical Sciences, New York University, New York, NY 10003, USA

Correspondence: Rodica Curtu (

BMC Neuroscience 2017, 18 (Suppl 1):P220

In daily life, the auditory system sorts mixtures of sounds from different sources into specific acoustic information by grouping acoustic events over time and forming internal representations of sound streams. A set of stimuli used intensively to study this phenomenon consists of sequences of alternating high (A) and low (B) pure tones presented as repeated triplets, ABA_ABA_… Depending on the frequency separation (df) between the two tones, subjects report either of two percepts: “integration” (a single, coherent stream of high and low tones, like a galloping rhythm) or “segregation” (two parallel, distinct streams). In our lab, the psychophysical experiment was conducted on 15 human subjects with normal hearing. They listened to repeating sequences of ABA_ triplets at df = 3, 5, 7 semitones, with a total of 675 trials per df condition. Each sequence comprised sixty 500 ms-long triplets, resulting in a 30 s-long presentation. Subjects were asked to press and hold different buttons on a keypad while they perceived integration or segregation, respectively. Data analysis revealed the time course and statistical distribution of perceptual switching. After stimulus onset, it takes several seconds for the trial-averaged probability of stream segregation to build up, and the first percept is typically integration. Subjects also report spontaneous alternations between the two percepts, and the percept durations are gamma-distributed. Furthermore, a previous study revealed similarities between build-up functions of stream segregation from psychophysical experiments (psychometric functions) and those from multi-unit recordings in monkey primary auditory cortex (area A1) (neurometric functions) [1].
In this presentation, we first demonstrate that the signal-detection model introduced in [1] to compute neurometric functions is not sufficient to produce realistic percept durations as reported experimentally. In particular, mean spike counts extracted from cortical recordings [1] were used to generate neuronal responses, which served as inputs to a signal-detection model. We show that this model produces percept durations whose distribution is exponential (not gamma) and whose means are significantly smaller than those reported experimentally. We propose an extension of this model in the form of a multi-stage feedforward auditory network with components: i) area “A1”, whose local outputs (mean spike counts) are subject to threshold-based binary classifiers (binary neurons); ii) an ensemble of binary neurons (BN) receiving local input from “A1”; and iii) two competing units (“the accumulators”) whose activities depend on the evidence accumulated from the ensemble BN for each of the two percepts, integration and segregation. The suppressed unit accumulates evidence against the current percept while the dominant unit gradually reduces its activity; both drift towards their respective thresholds.
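The accumulator stage can be caricatured as a noisy race to threshold: the suppressed unit drifts up as it gathers evidence against the current percept, the dominant unit drifts down, and a switch occurs when either crosses its bound. Unlike a memoryless detector, such a drift-to-bound process yields non-exponential first-passage (percept-duration) distributions. All rates, thresholds and noise levels below are illustrative assumptions:

```python
import numpy as np

def switch_time(evidence_rate=1.0, decay_rate=0.8, theta=10.0,
                noise=0.5, dt=0.01, t_max=100.0, seed=0):
    """First passage time of a two-unit race: the suppressed unit s drifts
    up toward threshold theta, the dominant unit d drifts down toward 0."""
    rng = np.random.default_rng(seed)
    s, d, t = 0.0, theta, 0.0
    while t < t_max:
        s += dt * evidence_rate + np.sqrt(dt) * noise * rng.standard_normal()
        d -= dt * decay_rate + np.sqrt(dt) * noise * rng.standard_normal()
        t += dt
        if s >= theta or d <= 0.0:      # either unit reaches its bound
            break
    return t

# sample a distribution of percept durations
durations = [switch_time(seed=k) for k in range(300)]
```

The resulting durations cluster around the drift time to the bound rather than piling up at zero, which is the qualitative contrast with the exponential durations of the pure signal-detection model.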

Conclusion: The proposed evidence accumulation model is able to reproduce qualitatively and quantitatively switching behavior between integration and segregation in auditory streaming. At each df the model produced percept durations whose distribution is gamma-like and whose means are comparable to those obtained in our psychophysical experiment.


This material is based upon work supported by the National Science Foundation under Grant Number CRCNS 1515678


1. Micheyl C, Tian B, Carlyon R, Rauschecker R: Perceptual organization of tone sequences in the auditory cortex of awake macaques. Neuron 2005, 48:139–148.

2. Barniv D, Nelken I: Auditory streaming as an online classification process with evidence accumulation. PLOS ONE 2015.

3. Cao R, Pastukhov A, Mattia M, Braun J: Collective Activity of Many Bistable Assemblies Reproduces Characteristic Dynamics of Multistable Perception. J Neurosci 2016, 36(26):6957–6972.

P221 The Necessity of Sleep and Wake: Synaptic Homeostasis via System-Level Plasticity and the Ascending Arousal System

Sahand Assadzadeh1,2, Peter A. Robinson1,2

1School of Physics, The University of Sydney, NSW 2006, Sydney, Australia; 2Center for Integrative Brain Function, The University of Sydney, NSW 2006, Sydney, Australia

Correspondence: Sahand Assadzadeh (

BMC Neuroscience 2017, 18 (Suppl 1):P221

One of the important proposed functions of sleep is the regulation of synaptic weights in the brain. Mounting experimental evidence indicates that, on average, synapses that are upscaled during wakefulness are downscaled during sleep, providing a possible mechanism through which synaptic stability is maintained in the brain. This is often referred to as the synaptic homeostasis hypothesis (SHH) [1]. However, the questions of how and why sleep is necessary to fulfill this function remain unanswered. Neural field theory (NFT) has shown that synaptic plasticity dynamics depend strongly on network-level effects, such as the overall system frequency response, with especially enhanced plasticity at resonances [2]. Here, NFT is used to study the system-level effects of plasticity in the corticothalamic system, where arousal states are represented parametrically by the connection strengths of the system, among other physiologically based parameters (Fig. 1). We find that the plasticity dynamics have no fixed points or closed cycles in the parameter space of the connection strengths, but that parameter subregions exist where the flows have opposite signs. Remarkably, these subregions coincide with previously identified regions corresponding to wake and slow-wave sleep, demonstrating the role of state-dependent activity in determining the sign of synaptic modification. We then show that a closed cycle in the parameter space becomes possible when the plasticity dynamics are coupled to those of the ascending arousal system (AAS), which moves the brain back and forth between sleep and wake, and thus between the opposite-flow subregions, forming a closed loop. In this picture, both wake and sleep are necessary to stabilize connection weights in the brain, because each modifies synaptic strengths in the direction opposite to the other.

Figure 1. Evolution of connection strengths around a wake-sleep cycle forming a closed loop in arousal state space. The blue line represents plastic effects during wakefulness that result in an increase of the corticothalamic and corticocortical loop gains in the corticothalamic system, with red lines corresponding to the opposite effect observed during slow-wave sleep. Thin lines indicate the action of the AAS in switching between wake and sleep states


This work was supported by the Australian Research Council under Center of Excellence for Integrative Brain Function Grant CE140100007 and Laureate Fellowship Grant FL140100025.


1. Tononi G, Cirelli C. Sleep and the Price of Plasticity: From Synaptic and Cellular Homeostasis to Memory Consolidation and Integration. Neuron. 2014; 81(1): 12–34.

2. Robinson PA. Neural field theory of synaptic plasticity. J Theor Biol. 2011; 285(1): 156–163.

P222 Low- and high-mode waking states in the corticothalamic system

Paula Sanz-Leon1,2, Peter A. Robinson1,2

1School of Physics, University of Sydney, Sydney, New South Wales, Australia; 2Center for Integrative Brain Function, University of Sydney, Sydney, New South Wales, Australia

Correspondence: Paula Sanz-Leon (

BMC Neuroscience 2017, 18 (Suppl 1):P222

A neural field model of the corticothalamic system has multistable regions with five steady-state solutions, up to three of which are linearly stable [1], and up to two of which lie within firing-rate levels considered moderate, yet normal, in adult human physiology [2]. This confirms the existence of additional arousal states beyond the traditional steady states, which have been identified with either normal or seizure-like activity [2]. The signature of these additional states, which we call H-mode states, is an overall increased level of activity of up to 35 s−1 [blue dots in Figs 1(a) and 1(b)] with respect to the canonical waking states, or L-mode states (black dots). More specifically, compared to the L-states, the H-states exhibit enhanced thalamic activity. In Fig. 1(c), mean firing rates are arranged in parallel coordinates corresponding to cortical (ϕe), reticular (ϕr), and relay-nuclei (ϕs) firing rates. This type of plot allows trends within a group to be identified and compared with those of another group. We observe that the qualitative behavior of the H-states (blue lines) is similar to that of the L-states (black lines): ϕe < ϕr and ϕs < ϕr. However, in the H-states, despite the large dispersion of relay activity, cortical activity remains relatively constant. In Fig. 1(d), we show the power spectra for both L- and H-states (black and blue lines, respectively). The H-states (i) have higher power density than the L-states over the whole frequency range (0 < f < 45 Hz); and (ii) show a five-order-of-magnitude increase in power in the high-beta and gamma bands (20–35 Hz) with respect to the baseline spectra of waking states. This last result is consistent with focused and hyperarousal states described in the literature [3]. In hyperarousal, increased thalamic activity is linked to high levels of attention, and gamma enhancement is expected due to increased activity in the relay nuclei of the thalamus.

Figure 1. Comparison of L-mode states and H-mode states from multistable regions of the corticothalamic system. Black dots and lines correspond to properties of L-states (fa < 20 s−1), while blue dots and lines are those of the H-states (fa around 30 s−1). Panels A and B are the steady states in ϕe–ϕr and ϕe–ϕs space, respectively. Panel C shows a parallel coordinate plot of the corticothalamic firing rates. Panel D shows the spectral signature of the L-states and H-states


1. Sanz-Leon P and Robinson PA: Multistability in the corticothalamic system. J. Theor. Biol. 2017 (under review)

2. Robinson PA, Rennie CJ, Wright JJ, Bahramali H, Gordon E, Rowe DL: Prediction of electroencephalographic spectra from neurophysiology. Phys. Rev. E 2001; 63:021903.

3. Grønli J, Rempe MJ, Clegern WC, Schmidt M, Wisor JP: Beta EEG reflects sensory processing in active wakefulness and homeostatic sleep in quiet wakefulness. J. Sleep Res. 2001; 25:257–268.

P223 Closed-loop temporally structured light stimulation in weakly electric fish

Caroline G. Forlim1,2, Lírio O. B. de Almeida 3, Ángel Lareo4, Reynaldo D. Pinto3, Pablo Varona4, Francisco B. Rodríguez4

1Clinic and Policlinic for Psychiatry and Psychotherapy, University Medical Center Hamburg-Eppendorf, Hamburg, 20246, Germany; 2Departamento de Física Geral, Universidade de Sao Paulo, Sao Paulo, 05508-090, Brazil; 3Instituto de Física de Sao Carlos, Universidade de Sao Paulo, Sao Carlos, 13560-970, Brazil; 4Escuela Politécnica Superior, Universidad Autonoma de Madrid, Madrid, 28049, Spain

Correspondence: Caroline G. Forlim (, Francisco B. Rodríguez (

BMC Neuroscience 2017, 18 (Suppl 1):P223

Closed-loop stimulation is a promising technique for neuroscience studies, especially in behavioral experiments [1, 2]. Weakly electric fish discharge short electric pulses or waves through an electric organ and detect small changes in the electric field using electroreceptors [1, 3]. These fish live in turbid waters and use electric sensing as an additional sense to complement vision. Their electric pulses are also used for communication, with inter-pulse intervals changing depending on the behavioral context [3]. Recently, attention has turned to their visual system [4]. However, most experiments assessing vision were conducted with periodic light flashes lasting just a few seconds, and moreover in restrained animals.

We developed the first closed-loop setup that uses temporally structured light as a stimulus for long periods in freely swimming fish. In these closed-loop protocols, the light pulses are triggered by the electrical activity monitored in real time, resulting in stimuli whose complex temporal structure is similar to that of the fish’s electrical signaling. The setup can easily be adapted to other stimulus modalities, such as mechanical, acoustic and electrical stimulation, allowing studies of multisensory integration.

Our validation protocol consisted of a 15 min control session followed by 15 min of light pulse stimulation in Gnathonemus petersii. The light stimuli were either triggered by the fish’s own electrical activity, and therefore carried its complex temporal structure, or were periodic. The main difference between the two stimuli is thus the temporal structure: the closed-loop stimulus shares the complex temporal structure of the fish’s electrical signaling, whereas the periodic stimulus encodes no temporal structure. We show that, over long stimulation periods, fish decreased their discharge rate. The decrease was more accentuated when light stimuli were triggered by the fish’s electrical activity than when they were periodic, suggesting that the information encoded in the temporal structure was meaningful to the fish and that the brain processed it distinctly from a simple periodic structure.

To the best of our knowledge, this is the first study on how light can influence the fish electrical system for long periods of time. The results give rise to important questions on the influence of light in electrocommunication and the processing of multisensory information, which can be addressed using the proposed methodology.


This work was funded by Spanish projects of Ministerio de Economía y Competitividad/FEDER TIN2014-54580-R, DPI2015-65833-P, ONRG grant N62909-14-1-N279, Spanish-Brazilian Cooperation PHB2007-0008 and 7ª Convocatoria De PROYECTOS de COOPERACION INTERUNIVERSITARIAUAM-SANTANDER con America Latina and Brazilian Agency of Conselho Nacional de Desenvolvimento Científico e Tecnológico and Fundação de Amparo à Pesquisa do Estado de São Paulo.


1. Forlim CG, Pinto RD, Varona P, Rodríguez FB. Delay-Dependent Response in Weakly Electric Fish under Closed-Loop Pulse Stimulation. PLoS ONE 2015;10:e0141007. doi:

2. Lareo A, Forlim CG, Pinto RD, Varona P, Rodriguez F. de B. Temporal Code-Driven Stimulation: Definition and Application to Electric Fish Signaling. Front Neuroinform 2016;10:41. doi:

3. Bullock TH, Hopkins CD, Popper AN, Fay RR, editors. Electroreception. vol. 21. Springer New York; 2005.

4. Pusch R, Kassing V, Riemer U, Wagner HJ, von der Emde G, Engelmann J. A grouped retina provides high temporal resolution in the weakly electric fish Gnathonemus petersii. J Physiol Paris 2013;107:84–94.

P224 Information-theoretic analysis of temporal code-driven stimulation applied to electroreception

Ángel Lareo1, Caroline Garcia Forlim2, Reynaldo D. Pinto3, Pablo Varona1, Francisco B. Rodríguez1

1Grupo de Neurocomputación Biológica, Departamento de Ingeniería Informática, Escuela Politécnica Superior, Universidad Autónoma de Madrid, Madrid, Spain; 2Clinic and Policlinic for Psychiatry and Psychotherapy, University Medical Center, Hamburg-Eppendorf, Hamburg, Germany; 3Lab. Neurodynamics/Neurobiophysics - Dept. Physics and Interdisciplinary Sciences - Institute of Physics of São Carlos, Universidade de São Paulo, São Paulo, Brazil

Correspondence: Ángel Lareo (, Francisco B. Rodríguez (

BMC Neuroscience 2017, 18 (Suppl 1):P224

Biological systems can encode information in a sequential manner, and temporal encoding gives rise to complex temporal patterns of activity. Thus, information processing in those systems can be analyzed by studying the temporal structure of event trains. This is the approach followed by a recently defined real-time stimulation methodology, temporal code-driven stimulation (TCDS) [1]. TCDS is a closed-loop stimulation protocol that digitizes and binarizes a biological signal and delivers a stimulus whenever a predefined code is detected. This code represents the sequential activity in the signal, whose meaning is the goal of the study. The methodology enables the study of changes in the information processing of a given biological system across different sessions: code-driven stimulation sessions, control sessions without stimulation, and open-loop stimulation sessions.

In order to test this methodology, a hard real-time implementation of TCDS was applied to electroreception in the weakly electric fish Gnathonemus petersii. The electromotor neurons of this animal generate electrical pulses that can be measured in a water tank using appropriate hardware [2, 3]. These signals follow a temporal coding scheme [4] in which information is encoded in the inter-pulse interval (IPI) [5]. It therefore constitutes a convenient animal model for testing closed-loop stimulation methods in an alive, freely behaving biological system. The TCDS protocol binarizes the fish’s signal by detecting the presence or absence of a pulse event during each binarization period, and uses this codification to stimulate after a preselected code is detected in the fish’s activity. Information processing in weakly electric fish was analyzed in previous studies in terms of IPI distributions [1].
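The binarization-plus-code-detection step can be sketched as follows. This is an offline illustration, not the hard real-time implementation; the bin width, the example pulse train, and the trigger code are free choices of the protocol, assumed here for demonstration:

```python
import numpy as np

def detect_code(pulse_times, code, bin_width):
    """Binarize a pulse train into fixed-width bins (1 = at least one pulse
    in the bin) and return the bin indices at which the given binary code
    has just completed -- the moments TCDS would deliver a stimulus."""
    n_bins = int(np.ceil(max(pulse_times) / bin_width)) + 1
    bits = np.zeros(n_bins, dtype=int)
    bits[(np.asarray(pulse_times) // bin_width).astype(int)] = 1
    L = len(code)
    hits = [i for i in range(L - 1, n_bins)
            if bits[i - L + 1:i + 1].tolist() == list(code)]
    return bits, hits
```

For example, pulses at 0.5, 2.5 and 3.5 s with a 1 s binarization period give the bit string 10110, and the code 1011 completes at bin index 3, which is where the stimulus would be triggered.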

We complement the analysis of the TCDS protocol with an information-theoretic measure: transitions between codes. As a proof of concept, we used 4-bit codes and selected as the trigger a code with average probability of occurrence during control sessions. Codes were grouped by the number of pulses they contain, defining three sets: low, medium and high numbers of pulses. Preliminary results of applying TCDS to electroreception in weakly electric fish indicate that it distinctly conditions the response of the system when stimulating after a predetermined code. The same conclusion is drawn by analyzing the probability of transitions between codes, as an increase in low-low transition probability is detected when the system is stimulated with the code 0101.
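The transition-probability analysis can be illustrated by estimating an empirical transition matrix between successive 4-bit codes read from the binarized signal. A non-overlapping code window is assumed here for simplicity; whether windows overlap is a protocol choice:

```python
import numpy as np

def code_transition_matrix(bits, L=4):
    """Empirical transition probabilities between successive L-bit codes,
    read with a non-overlapping sliding window over the binarized signal.
    Row i, column j holds P(next code = j | current code = i)."""
    codes = [int("".join(map(str, bits[i:i + L])), 2)
             for i in range(0, len(bits) - L + 1, L)]
    T = np.zeros((2 ** L, 2 ** L))
    for a, b in zip(codes[:-1], codes[1:]):
        T[a, b] += 1
    rows = T.sum(axis=1, keepdims=True)
    # normalize each visited row into a probability distribution
    return np.divide(T, rows, out=np.zeros_like(T), where=rows > 0)
```

Comparing such matrices between control and code-driven sessions is one way to quantify shifts such as the reported increase in low-low transition probability.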


We acknowledge support from MINECO/FEDER TIN2014-54580-R, DPI2015-65833-P ( and ONRG grant N62909-14-1-N279.


1. Lareo A, Forlim CG, Pinto RD, Varona P, Rodriguez F: Temporal Code-Driven Stimulation: Definition and Application to Electric Fish Signaling. Frontiers in Neuroinformatics 2016, 10:41.

2. Forlim CG, Pinto RD: Automatic realistic real time stimulation/recording in weakly electric fish: Long time behavior characterization in freely swimming fish and stimuli discrimination. PLoS ONE 2014, 9:e84885.

3. Forlim CG, Pinto RD, Varona P, Rodriguez FB: Delay-dependent response in weakly electric fish under closed-loop pulse stimulation. PLoS ONE 2015, 10:e0141007.

4. Baker CA, Kohashi T, Lyons-Warren AM, Ma X, Carlson BA: Multiplexed temporal coding of electric communication signals in mormyrid fishes. The Journal of experimental biology 2013, 216:2365–2379.

5. Carlson BA: Electric signaling behavior and the mechanisms of electric organ discharge production in mormyrid fish. Journal of Physiology-Paris 2002, 96:405–419.

P225 Gain control mechanism based on lateral inhibition of antennal lobe improves pattern recognition performance under wide concentration variability

Aaron Montero1, Thiago Mosqueiro2, Ramon Huerta1,2, Francisco B. Rodriguez1

1Grupo de Neurocomputación Biológica, Dpto. de Ingeniería Informática, Escuela Politécnica Superior, Universidad Autónoma de Madrid, Madrid, 28049, Spain; 2BioCircuits Institute, University of California, San Diego, La Jolla, CA 92093-0402, USA

Correspondence: Aaron Montero (, Francisco B. Rodriguez (

BMC Neuroscience 2017, 18 (Suppl 1):P225

Many animals depend on odor information for survival. Although different concentration levels produce variations in the activation patterns of olfactory receptor neurons, most animals can correctly recognize the identity of an odor regardless of its concentration. It is not yet clear what mechanisms olfactory systems employ to recognize the same stimulus regardless of concentration. Experiments suggest that in insects this concentration invariance emerges in the Antennal Lobe, where the activity of Projection Neurons remains nearly constant even as concentration changes [1]. One hypothesis is that the Local Neurons down-regulate the levels of activity (a mechanism known as gain control) by laterally inhibiting the Projection Neurons [2]. We examine the impact of this gain control mechanism on pattern recognition by designing a biologically plausible model based on the interactions between Local and Projection Neurons. For this purpose, we used a computational model that represents the olfactory system of insects as a single-hidden-layer network [3, 4, 5] with three layers: Antennal Lobe, Kenyon cells and Mushroom Body Output Neurons. To simulate the activation patterns of the Antennal Lobe at different concentration levels, we used Gaussian functions of variable height and width whose centers encode odor identity. We used datasets of 3000 patterns divided into 10 pattern classes and 3 concentration levels. To model the intrinsic variability observed in real olfactory systems, we added multiplicative white noise to these Gaussians at 3 levels (small, medium, large). A network with this gain control mechanism achieved a significantly lower classification error rate than a network without gain control, an improvement of ~45%.
A network with this gain control achieved a classification error of ~0% for pattern sets with small and medium noise and <5% for large noise. These results suggest that the gain control mechanism not only suppresses outbursts of activity from the input layer but also greatly improves learning in the Mushroom Bodies. Finally, because this mechanism does not depend on any synaptic plasticity, in agreement with the biological literature, it can also be applied to chemical sensors in electronic devices to compensate for changes in environmental conditions [6, 7].
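The gain-control hypothesis can be illustrated with a purely divisive form of lateral inhibition, in which the summed receptor input pooled by Local Neurons divisively normalizes each Projection Neuron's drive. This is a sketch of the idea, not the authors' network; the pattern and gain value are arbitrary:

```python
import numpy as np

def projection_output(orn, gain=1.0):
    """Lateral-inhibition gain control: each projection neuron's output is
    its receptor input divided by a term growing with the total input."""
    orn = np.asarray(orn, dtype=float)
    return orn / (1.0 + gain * orn.sum())

pattern = np.array([0.2, 1.0, 0.4])       # odor identity (receptor pattern)
weak = projection_output(pattern)          # low concentration
strong = projection_output(10 * pattern)   # same odor, high concentration
```

Because the normalization divides every channel by the same scalar, the relative activation pattern (odor identity) is preserved exactly across concentrations, while the absolute output is strongly compressed, which is the concentration-invariance property the model exploits.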


This research was supported by TIN2014-54580-R, BES-2011-049274, NIH grant R01GM113967 and CNPq grant 234817/2014-3.


1. Stopfer M, Jayaraman V, and Laurent G: Intensity versus identity coding in an olfactory system. Neuron 2003, 39:991–1004.

2. Olsen SR, Wilson RI: Lateral presynaptic inhibition mediates gain control in an olfactory circuit. Nature 2008 452(7190):956–960.

3. Huerta R and Nowotny T: Fast and robust learning by reinforcement signals: Explorations in the insect brain. Neural Comput. 2009, 21:2123–2151.

4. Montero A, Huerta R, and Rodriguez FB: Regulation of specialists and generalists by neural variability improves pattern recognition performance. Neurocomputing, 2015, 151:69–77.

5. Montero A, Huerta R, Rodriguez FB: Specialist neurons in feature extraction are responsible for pattern recognition process in insect olfaction. Artificial Computation in Biology and Medicine - International Work-Conference on the Interplay Between Natural and Artificial Computation (IWINAC), Elche, Spain; 2015. part I p. 58–67.

6. Trincavelli M, Vergara A, Rulkov N, Murguia JS, Lilienthal A, Huerta R: Optimizing the operating temperature for an array of mox sensors on an open sampling system. AIP Conference Proceedings, 2011, 1362:225.

7. Huerta R, Mosqueiro T, Fonollosa J, Rulkov NF, Rodriguez-Lujan I: Online decorrelation of humidity and temperature in chemical sensors for continuous monitoring. Chemometr Intell Lab Syst, 2016, 157:169–176.

P226 Maximum Relative Area as a Feature for Adaptability in ERP-based BCI Systems

Vinicio Changoluisa1,2, Pablo Varona1, Francisco B. Rodriguez1

1Grupo de Neurocomputación Biológica, Dpto. de Ingeniería Informática. Escuela Politécnica Superior, Universidad Autónoma de Madrid, Madrid, Spain; 2Universidad Politécnica Salesiana, Quito, Ecuador

Correspondence: Vinicio Changoluisa (, Francisco B. Rodriguez (

BMC Neuroscience 2017, 18 (Suppl 1):P226

Adaptive brain-computer interfaces (BCIs) have been an important research topic in recent years. However, a critical open problem is their variable performance, even within subjects. In event-related potential (ERP)-based BCIs, variability in amplitude and latency impairs the detection of ERP components. To overcome this, target and non-target stimuli are repeated several times (trials). Repetition can cause fatigue and a decrease in task performance, so achieving high accuracy with few stimuli is a challenge. We propose a methodology that helps manage variability in ERP-based BCIs by characterizing the maximum relative area under the voltage curve (maxRAUC) in the region of the EEG signal where an ERP component can be located. We call maxRAUC relative because it is the maximum value within each trial, not across all trials. The method computes maxRAUC incrementally in time for each stimulus; the stimulus with the highest value is considered the target. In this way, the differences between target and non-target stimuli are maximized. Electrodes with the highest maxRAUC in the ERP region of the signal are likely to have better characteristics for detecting the ERP effectively. Our method was tested with a linear classifier (LDA) based on the Krusienski method (KM) [1] and the dataset_IIb of the BCI competition (. This dataset contains data from one user, divided into three sessions: two training sessions (called 10 and 11) and one session to test the classifier. Users were stimulated through the P300 speller paradigm described in the competition. The electrodes with the largest maxRAUC were found over the central and frontal lobes.
We checked the influence of these electrodes on the system's adaptability and evaluated the classifier in two configurations: first, with the 8 electrodes used in KM; second, replacing Fz and Cz with the electrodes having the highest maxRAUC in each session. With this electrode selection, the accuracy of the classifier improved, reaching 100% success with a low number of trials (see Table 1). We also validated the robustness of our method by combining data from training sessions 10 and 11.
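A minimal sketch of how such an incremental area score could be computed, assuming baseline-corrected single-stimulus epochs and a P300 search window of 200-500 ms. The window bounds, the rectification step, and the function names are our illustrative choices, not the authors' exact procedure:

```python
import numpy as np

def max_rauc(epoch, fs, window=(0.2, 0.5)):
    """Accumulate the rectified voltage area incrementally inside the
    window where the ERP component (e.g. P300) is expected, and return
    the maximum cumulative value reached (our reading of maxRAUC)."""
    lo, hi = (int(t * fs) for t in window)
    seg = np.abs(epoch[lo:hi])        # rectified voltage in the ERP window
    cum = np.cumsum(seg) / fs         # running area (V*s), grown sample by sample
    return cum.max()

def pick_target(epochs, fs):
    """epochs: dict stimulus -> 1-D voltage trace.  The stimulus with
    the largest maxRAUC is declared the target."""
    scores = {s: max_rauc(v, fs) for s, v in epochs.items()}
    return max(scores, key=scores.get)
```

Ranking stimuli by this score, rather than by a fixed amplitude threshold, is what lets the criterion adapt to each session's amplitude and latency variability.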

Table 1. Trials needed to achieve 100% success in each session. Common Electrodes (CE): Pz, P3, P4, PO7, PO8, Oz. We emphasize the best results with italic font

Electrode configuration   | Session 10    | Session 11    | Session 10 + 11
KM electrodes             | Cz + Fz + CE  | Cz + Fz + CE  | Cz + Fz + CE
CE + 2 maxRAUC electrodes | C1 + FPz + CE | C1 + FC1 + CE | C3 + F1 + CE
CE + 1 maxRAUC electrode  | C3 + CE       | F1 + CE       | F1 + CE

In summary, we propose a new methodology to extract additional information from EEG electrodes that contributes to managing the adaptability of ERP-based BCIs. The method adapts to the variability of each session and helps to decrease the number of electrodes and trials necessary to achieve 100% success. The maxRAUC contributes to early detection of ERPs and further adaptation. The method can also be applied to other ERP components (N200, N100, etc.), which we leave for future work.


This work was funded by Spanish projects of Ministerio de Economía y Competitividad/FEDER TIN2014-54580-R, DPI2015-65833-P and Predoctoral Research Grants 2015-AR2Q9086 of the Government of Ecuador (SENESCYT).


1. Dean J Krusienski, Eric W Sellers, François Cabestaing, Sabri Bayoudh, Dennis J McFarland, Theresa M Vaughan, and Jonathan R Wolpaw: A comparison of classification techniques for the P300 Speller. Journal of neural engineering 2006, 3(4):299–305.

P227 Intrinsically stochastic neuron models for use in network simulations

Vinícius L. Cordeiro, César C. Ceballos, Nilton L. Kamiji, Antonio C. Roque

Departamento de Física-FFCLRP, Universidade de São Paulo, Ribeirão Preto, SP 14040-901, São Paulo, Brazil

Correspondence: Vinícius L. Cordeiro (

BMC Neuroscience 2017, 18 (Suppl 1):P227

Experimental evidence suggests that neurons are inherently stochastic systems displaying trial-to-trial response variability [1]. This stochasticity may have functional consequences on network behavior, so it is important to construct stochastic single-neuron models for use in network simulations. There are basically two ways of constructing a stochastic neuron model [2, 3]. One is to take a deterministic model, e.g. the leaky integrate-and-fire (LIF), Izhikevich or AdEx model [2], and add stochastic terms to the inputs received by the neuron. The other is to model a spike as an intrinsically stochastic event. The second way can be implemented in two different but equivalent manners: by a randomly varying spike threshold, as in the escape noise model [4], or by a spike probability function Φ(V), which depends on the membrane potential V, as in the simplified version of the Galves-Löcherbach (GL) discrete-time model [5] recently proposed by Brochini et al. [3].

Here we have considered the Brochini et al. [3] version of the GL model (from here on simply called the GL model) and empirically determined the probability function Φ(V) so that the model can describe the stochastic firing behaviors of the two most important cortical cell types, namely regular-spiking (RS) and fast-spiking (FS) neurons [6]. To determine Φ(V) for these two cell types, biophysically detailed models of RS and FS neurons were chosen from the neuron database ModelDB ( and submitted to realistic patterns of synaptic input. The detailed neuron model simulations were done in NEURON [7]. These simulations generated time series of membrane potential values V_t for the detailed RS and FS neuron models. From these time series, we determined action potential onset values V_th from the dV/dt versus V phase space using the so-called Method II of [8]. For each action potential, the voltage values above threshold were discarded; with the remaining values we constructed two distribution histograms, one for all voltage values (including V_th) and the other for threshold values only. The histograms were superposed as in Figure 12 of [9] to allow an estimate of the probability of firing for each discretization bin.
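The histogram-ratio estimate of Φ(V) described above can be sketched as follows. The bin count and the synthetic data in the usage note are illustrative; the ratio-of-histograms step follows the superposition idea of [9]:

```python
import numpy as np

def estimate_phi(v_all, v_th, bins=50):
    """Estimate the per-bin firing probability as the ratio of the
    threshold-value histogram to the histogram of all subthreshold
    voltage values, i.e. the superposition of the two histograms."""
    edges = np.histogram_bin_edges(v_all, bins=bins)
    n_all, _ = np.histogram(v_all, bins=edges)
    n_th, _ = np.histogram(v_th, bins=edges)
    with np.errstate(divide="ignore", invalid="ignore"):
        phi = np.where(n_all > 0, n_th / n_all, 0.0)  # empty bins -> 0
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, phi
```

Applied to the V_t and V_th series from the detailed simulations, `phi` gives the empirical firing probability per voltage bin that the GL model then uses as Φ(V).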

The resulting probability functions display nonlinear exponential behavior. Based on them we constructed stochastic GL models for RS and FS neurons and submitted them to simulated input currents to obtain frequency-current (FI) curves. These stochastic neuron models can be used in large-scale simulations of cortical network models.
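A toy discrete-time GL neuron with an assumed saturating-exponential Φ(V) can illustrate how such FI curves are produced; the actual fitted Φ(V) from the abstract is not reproduced here, and all parameters are illustrative:

```python
import numpy as np

def gl_fi_curve(currents, mu=0.9, v_reset=0.0, gamma=0.05, v_half=1.0,
                steps=5000, seed=0):
    """Discrete-time GL neuron in the Brochini et al. flavour: the
    membrane leaks with factor mu, integrates a constant input, and in
    each step fires with probability phi(V); firing resets V.  phi is
    taken here as a saturating exponential, an assumed form."""
    rng = np.random.default_rng(seed)
    rates = []
    for i_ext in currents:
        v, spikes = 0.0, 0
        for _ in range(steps):
            phi = 1.0 - np.exp(-gamma * max(v - v_half, 0.0))
            if rng.random() < phi:
                spikes += 1
                v = v_reset          # intrinsic stochastic spike + reset
            else:
                v = mu * v + i_ext   # leaky integration, no added input noise
        rates.append(spikes / steps)
    return np.array(rates)
```

The stochasticity lives entirely in the spike-generation step, so no noise term needs to be added to the input current, which is the point of the intrinsically stochastic formulation.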


This work was produced as part of the activities of FAPESP Research, Disseminations and Innovation Center for Neuromathematics (grant 2013/07699-0, S. Paulo Research Foundation). NLK is supported by a FAPESP postdoctoral fellowship (grant 2016/03855-5). ACR is partially supported by a CNPq fellowship (grant 306251/2014-0).


1. Longtin A: Neuronal noise. Scholarpedia 2013, 8(9):1618.

2. Gerstner W, Kistler WM, Naud R, Paninski L: Neural Dynamics: From Single Neurons to Networks and Models of Cognition. Cambridge University Press 2014.

3. Brochini L, Costa AA, Abadi M, Roque AC, Stolfi J, Kinouchi O: Phase transitions and self-organized criticality in networks of stochastic spiking neurons. Sci Rep 2016, 6:35831.

4. Gerstner W, van Hemmen L: Associative memory in a network of ‘spiking’ neurons. Network 1992, 3:139–164.

5. Galves A, Löcherbach E: Infinite systems of interacting chains with memory of variable length: a stochastic model for biological neural nets. J Stat Phys 2013, 151:896–921.

6. McCormick DA, Connors BW, Lighthall JW, Prince DA: Comparative electrophysiology of pyramidal and sparsely spiny stellate neurons of the neocortex. J Neurophysiol 1985, 54:782–806.

7. Carnevale NT, Hines ML: The NEURON Book. Cambridge University Press; 2006.

8. Sekerli M, Del Negro CA, Lee RH, Butera RJ: Estimating action potential thresholds from neuronal time-series: new metrics and evaluation of methodologies. IEEE Trans Biomed Eng 2004, 51:1665–1672.

9. Azouz R, Gray CM: Cellular mechanisms contributing to response variability of cortical neurons in vivo. J Neurosci 1999, 19:2209–2223.

P228 Modeling action potential and network effects after site-directed RNA editing of sodium channels

William W. Lytton1,2, Andrew Knox3, Joshua J. C. Rosenthal4

1Depts. of Physiology & Pharmacology and Neurology, SUNY Downstate, Brooklyn, NY 11203 USA; 2Dept. of Neurology, Kings County Hospital, Brooklyn, NY 11203 USA; 3Dept. of Neurology, University of Wisconsin, Madison, WI 53705 USA; 4Dept. of Neurobiology, Marine Biological Laboratory, Woods Hole, MA 02543 USA

Correspondence: William W. Lytton (

BMC Neuroscience 2017, 18 (Suppl 1):P228

New techniques now make it possible to modify messenger RNA and thereby modify specific proteins in vivo. Experimentally, we have edited RNA using adenosine deamination to modify the mammalian fast sodium (Naf) channel (NaV1.4) by converting a key lysine residue to arginine in the selectivity region that is part of the aspartate-glutamate-lysine-alanine motif (DEKA to DERA). This change allows the channel to be permeable to both Na and K, effectively changing the reversal potential associated with this conductance to a value intermediate between the Nernst potentials of those two ions. The degree of alteration in the Naf channel can be manipulated, producing a mixed population of native and mutated channels. We modeled the effects of this manipulation on the classical Hodgkin-Huxley model of action potential propagation in the squid axon, as well as in other axonal models closer to mammalian morphology and temperature. As expected, action potential amplitude was reduced at higher percentages of the modified Naf channel, reaching a point where an action potential could no longer be maintained at the maximal conductance provided. Action potential conduction velocity was fast (approximately 10 mm/ms) when using a high-impedance axon termination, and showed little fall-off with increased percentage of modified channel. Conduction velocity was much slower (approximately 2 mm/ms) when using a low-impedance termination, and showed a 20% fall-off with increasing percentage of the modified channel. These results were seen both at squid axon temperature and axial resistivity (6.3 °C and 34.5 Ω-cm) and at mammalian values (37 °C and 250 Ω-cm). Action potentials were formed at lower sodium channel density and conducted at greater velocity at the low temperature, where the more prolonged activation due to the slower kinetics provided increased effect at neighboring locations.
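The intermediate reversal potential of a mixed native/edited channel population can be illustrated with the GHK voltage equation. The edited channel's K+/Na+ permeability ratio and the simple conductance-weighted mixing below are our assumptions for illustration, not the paper's fitted values:

```python
import numpy as np

RT_F = 26.7  # RT/F in mV at ~37 C

def ghk_reversal(p_na, p_k, na_out=145.0, na_in=12.0, k_out=4.0, k_in=140.0):
    """GHK voltage equation for a channel permeable to Na+ and K+
    (concentrations in mM, typical mammalian values)."""
    return RT_F * np.log((p_na * na_out + p_k * k_out) /
                         (p_na * na_in + p_k * k_in))

def effective_reversal(frac_edited, p_k_edited=0.7):
    """Conductance-weighted reversal of a population mixing native
    (Na-selective) and edited (Na+K permeable, DERA) channels.  The
    weighting assumes equal unitary conductances, a simplification."""
    e_native = ghk_reversal(1.0, 0.0)        # pure Na channel, ~ +66 mV
    e_edited = ghk_reversal(1.0, p_k_edited) # mixed permeability, much lower
    return (1 - frac_edited) * e_native + frac_edited * e_edited
```

Sweeping `frac_edited` from 0 to 1 moves the effective reversal from the Na+ Nernst potential down toward a value between the Na+ and K+ potentials, which is the mechanism behind the graded loss of spike amplitude described above.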

RNA editing is being used experimentally to erase the mutations that introduce the premature termination codons that lead to cystic fibrosis. This manipulation has potential for clinical use in patients with this deadly genetic disease. Similarly, clinical manipulation of the RNA for the sodium channel has potential for use in intractable epilepsies such as Lennox-Gastaux syndrome, where neither surgical nor pharmacological intervention is generally effective.


The authors would like to acknowledge NIH support from EB02290301 (WL), EB017695 (WL), MH086638 (WL), NS087726 (JR).

P229 Movement-related delta-theta synchronization in young and elderly healthy subjects

Silvia Daun1,2, Svitlana Popovych1,2, Liqing Liu1,2, Bin A. Wang1, Tibor I. Tóth2, Christian Grefkes1,3, Gereon R. Fink1,3, Nils Rosjat1,2

1Cognitive Neuroscience, Institute of Neuroscience and Medicine (INM-3), Research Center Juelich, Juelich, 52428 Germany; 2Heisenberg Research Group of Computational Neuroscience - Modeling Neural Network Function, Department of Animal Physiology, Institute of Zoology, University of Cologne, Cologne, 50674, Germany; 3Department of Neurology, University Hospital Cologne, Cologne, 50937, Germany

Correspondence: Silvia Daun (

BMC Neuroscience 2017, 18 (Suppl 1):P229

The wealth of data showing that human motor performance is affected by normal ageing is contrasted by the dearth of data on ageing effects on the neural processes underlying action. For example, it remains to be elucidated how the different phases of an action (i.e., preparation, initiation and execution) are expressed in neural oscillations and how these are affected by normal ageing. The interest in ageing-related changes of motor performance and the neural basis thereof are governed by the quest for more detailed insights into the possible reorganization of the key phases of an action. For this reason, it is apt and timely to study ageing-dependent effects on the neural organization of motor performance in more detail. The crucial point of such investigations is the study of synchronization, a key mechanism underlying the coordination of distinct neural populations in shaping complex motor tasks.

In an earlier EEG-study [1] on young adults, we found that when generating unilateral index-finger movements, local oscillations in the δ-θ frequency band over the centroparietal, central and frontocentral regions (corresponding to the primary motor area (M1), the supplementary motor area (SMA) and the pre-motor area (PM), respectively) exhibited robust phase locking both prior to and during the movement. The local oscillations were most pronounced in the hemisphere contralateral to the moving hand in both externally and internally triggered actions. A subsequent study [2] using an identical experimental paradigm with a population of older adults found that the local phase locking in the δ-θ frequency band was also present during the motor acts of the older participants.

To investigate the neural processes underlying ageing-related dependence of the motor performance in more detail, we employed inter-regional phase-locking analysis by calculating the phase-locking values (PLVs) from the EEG records of the two data sets mentioned above. PLV measures the extent of instantaneous synchronization between two distinct brain regions.
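PLV has a standard Hilbert-transform formulation; a minimal sketch follows, assuming the signals have already been band-passed to the δ-θ range upstream:

```python
import numpy as np

def analytic_signal(x):
    """FFT-based Hilbert transform (the same construction as
    scipy.signal.hilbert): zero out negative frequencies."""
    n = len(x)
    spec = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0          # keep the Nyquist bin
    return np.fft.ifft(spec * h)

def plv(x, y):
    """Phase-locking value |<exp(i*(phi_x - phi_y))>| in [0, 1]:
    1 for a constant phase difference, ~0 for drifting phases."""
    dphi = np.angle(analytic_signal(x)) - np.angle(analytic_signal(y))
    return np.abs(np.mean(np.exp(1j * dphi)))
```

In practice the average is taken across trials at each time point rather than across time, so that transient movement-locked synchronization around movement onset is resolved.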

Our analysis revealed significant PLV in both age groups in the δ-θ frequencies around movement onset. Invariant sub-networks were established by strong PLV between brain areas involved in the motor act, which were different in older and younger subjects. More intra- and inter-hemispheric PLVs occurred in older than in younger subjects. Furthermore, data suggest that older subjects compensate for the diminished connectivity observed between contralateral M1 and SMA, and ipsilateral PM and SMA during movement preparation and execution by establishing additional intra- and inter- hemispheric connections.

Based on the above findings on local and inter-regional phase locking, we built a mathematical model consisting of phase oscillators representing two main regions of the motor network, i.e. SMA and M1. This simple model is capable of reproducing the effects of increased PLI and, independently of this, the effect of increased PLV between both regions. After extending the network model to all core motor regions and fitting the model parameters to the experimental data it will serve as a tool to make predictions on disturbed networks dynamics, e.g. decoupling of nodes.


1. Popovych S, Rosjat N, Tóth TI, Wang BA, Liu L, Abdollahi RO, Viswanathan S, Grefkes C, Fink GR, Daun S: Movement-related phase locking in the delta-theta frequency band. NeuroImage 2016, 139: 439–449.

2. Liu L, Rosjat N, Popovych S, Yeldesbay A, Wang BA, Tóth TI, Grefkes C, Fink GR, Daun S: Movement related intra-regional phase locking in the delta-theta frequency band in young and elderly subjects. Program No. 624.08. 2016. Neuroscience Meeting Planner. San Diego, CA: Society for Neuroscience, 2016. Online.

P230 ePyNN: a low cost embedded system for simulating Spiking Neural Networks

Abraham Perez-Trujillo1, Andres Espinal2, Marco A. Sotelo-Figueroa2, Ivan Cruz-Aceves3, Horacio Rostro-Gonzalez1

1Department of Electronics, University of Guanajuato, 36885 Salamanca, Guanajuato, Mexico; 2Department of Organizational Studies, University of Guanajuato, 3625 Guanajuato, Mexico; 3CONACYT, Mathematics Research Center (CIMAT), 36000 Guanajuato, Mexico

E-mail: Horacio Rostro-Gonzalez (

BMC Neuroscience 2017, 18 (Suppl 1):P230

In this work, we present a low-cost embedded system to simulate Spiking Neural Networks through PyNN [1]. PyNN is a Python library widely used in the neuroscience community that provides a common software-level interface to several existing simulators (NEURON, NEST, PCSIM and Brian), unifying their instructions and neuron model definitions. At the hardware level, it serves as a high-level interface to directly map spiking neuron models onto the SpiNNaker neuromorphic system [2]. Although SpiNNaker and other systems such as TrueNorth have demonstrated tremendous capabilities to process information as the brain does, these systems remain out of reach for the large community that wants to implement or validate simpler models on a hardware platform. In this regard, we developed ePyNN: the PyNN simulator embedded on a Raspberry Pi 3 board, which has a 1.2 GHz 64-bit quad-core ARMv8 CPU. Here, we have been able to implement a neural network with the if_curr_exp model, a leaky integrate-and-fire model with fixed threshold and exponentially decaying post-synaptic current, to generate real-time locomotion patterns expressed as spike trains for a hexapod robot [3, 4]. Specifically, we designed a network of 12 neurons, each controlling one of the degrees of freedom (servomotors) of the robot, with a specific topology that was determined offline by an evolutionary approach. Finally, ePyNN has been successfully validated on a real hexapod robot (Figure 1C) for three different locomotion gaits (walk, jog and run) running in real time (Figure 1A, B).
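For readers without a PyNN installation, the dynamics behind the if_curr_exp cell type can be sketched in plain Python. All parameter values and the helper name are illustrative, not the network's actual configuration:

```python
def lif_curr_exp(in_spikes, t_stop=100.0, dt=0.1, tau_m=20.0, tau_syn=5.0,
                 v_rest=-65.0, v_thresh=-50.0, v_reset=-65.0,
                 t_refrac=2.0, r_m=10.0, w=3.0):
    """Minimal leaky integrate-and-fire neuron with an exponentially
    decaying post-synaptic current, i.e. the dynamics behind PyNN's
    IF_curr_exp cell type.  Times in ms, voltages in mV."""
    kicks = {round(t, 1) for t in in_spikes}
    v, i_syn, refrac_until, out = v_rest, 0.0, -1.0, []
    for k in range(int(t_stop / dt)):
        t = round(k * dt, 1)
        if t in kicks:
            i_syn += w                        # current kick per input spike
        i_syn += dt * (-i_syn / tau_syn)      # exponential synaptic decay
        if t >= refrac_until:
            v += dt * ((v_rest - v) + r_m * i_syn) / tau_m
            if v >= v_thresh:                 # fixed threshold, then reset
                out.append(t)
                v = v_reset
                refrac_until = t + t_refrac
    return out
```

Twelve such units, wired with the evolved topology and driven appropriately, emit the periodic spike trains that the servomotor controllers consume.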

Figure 1. A. Biological patterns. B. Generated patterns. C. Robot + ePyNN platform


This research has been supported by the CONACYT project “Aplicación de la Neurociencia Computacional en el Desarrollo de Sistemas Robóticos Biológicamente Inspirados” (No 269798).


1. Davison AP, Bruderle D, Eppler J, Kremkow J, Muller E, Pecevski D, Perrinet L, Yger P: PyNN: A Common Interface for Neuronal Network Simulators. Front Neuroinform 2008, 2:11.

2. Furber SB, Galluppi F, Temple S, Plana LA: The SpiNNaker Project. Proceedings of the IEEE 2014, 102(5):652–665.

3. Rostro-Gonzalez H, Cerna-Garcia PA, Trejo-Caballero G, Garcia-Capulin CH, Ibarra-Manzano MA, Avina-Cervantes JG, Torres-Huitzil C: A CPG system based on spiking neurons for hexapod robot locomotion. Neurocomputing 2015, 170:47–54.

4. Espinal A, Rostro-Gonzalez H, Carpio M, Guerra-Hernandez EI, Ornelas-Rodriguez M, Sotelo-Figueroa M: Design of Spiking Central Pattern Generators for Multiple Locomotion Gaits in Hexapod Robots by Christiansen Grammar Evolution. Frontiers in Neurorobotics 2016, 10:6.

P231 Temporal structure of bilateral coherence in essential and physiological hand tremor

Martin Zapotocky1,2, Soma Chakraborty1,2, Martina Hoskovcová2, Jana Kopecká2, Olga Ulmanová2, Evžen Růžička2

1Institute of Physiology, Czech Academy of Sciences, Prague, 14220, Czech Republic; 2Department of Neurology, First Faculty of Medicine, Charles University in Prague, 120 00, Czech Republic

Correspondence: Martin Zapotocky (

BMC Neuroscience 2017, 18 (Suppl 1):P231

Pathological hand tremor is associated with a number of neurological diseases and may significantly impede motor functions in the patient. The most common pathological type is essential tremor (ET), found in 4.6% of the population aged over 65 years [1]. The neurophysiological basis of ET is still under debate, and recent literature suggests that patients with the ET diagnosis may in fact fall into several categories with distinct disease origins [2]. Detailed quantitative analysis of the features of the tremor may help in further classification and in clarifying the underlying neurophysiological mechanisms.

Depending on the underlying mechanism, the tremors in the left hand and right hand may be coupled or independent. In the previous literature on tremors, this bilateral coupling was assessed using stationary spectral coherence analysis, both on the level of hand kinematics and of muscle activity. Highly prevalent bilateral coherence was found for orthostatic [3] and psychogenic [4] tremors, while for other tremor types including ET, such coupling was only rarely reported. In our recent study [5], we used nonstationary, wavelet-based coherence analysis of kinematic recordings to show that the oscillations of the two hands are intermittently coupled in ET. We found that intervals of strong bilateral coherence, lasting for up to a dozen seconds, alternate with time intervals of insignificant coherence. We also observed intermittent bilateral coherence for physiological tremor (a normal hand oscillation of low amplitude) recorded in healthy subjects.

Here we further extend the analysis of Ref. [5], based on the same dataset of accelerometric recordings obtained from 34 ET patients and 42 healthy subjects. We analyze the distribution of durations of the bilaterally coherent time intervals extracted from wavelet analysis, and examine its dependence on the tremor type (physiological vs. essential) and on the hand position. The statistical significance of the coherence intervals is evaluated with surrogate analysis, using “natural” surrogates (the hand acceleration recorded from other subjects), as well as artificially constructed surrogates that have randomized Fourier phases but match the power spectrum and value distribution of the recorded time series [6]. We analyze separately the bilateral coupling of tremor amplitude, and evaluate its contribution to the bilateral coherence of tremor as assessed by spectral/wavelet coherence.
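The basic Fourier-phase-randomization step behind such surrogates can be sketched as follows; the full method of Ref. [6] additionally iterates to match the value distribution of the original series, a refinement omitted here:

```python
import numpy as np

def phase_surrogate(x, rng=None):
    """Surrogate with the same power spectrum as x but random Fourier
    phases, destroying any phase coupling to other recordings."""
    rng = np.random.default_rng(rng)
    n = len(x)
    spec = np.fft.rfft(x)
    phases = rng.uniform(0.0, 2.0 * np.pi, len(spec))
    phases[0] = 0.0              # keep the DC bin real
    if n % 2 == 0:
        phases[-1] = 0.0         # keep the Nyquist bin real
    return np.fft.irfft(np.abs(spec) * np.exp(1j * phases), n)
```

Coherence intervals are then called significant only when they exceed what an ensemble of such surrogates (or recordings from other subjects) produces by chance.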


Supported by Czech Science Foundation (P304/12/G069), Charles University in Prague (Progres Q27, SVV NeST III), and Czech Health Research Council (AZV 16-28119A).


1. Louis ED, Ferreira JJ: How common is the most common adult movement disorder? Update on the worldwide prevalence of essential tremor. Mov Disord 2010, 25(5):534–41.

2. Louis ED: Essential tremors: a family of neurodegenerative disorders? Arch Neurol 2009, 66(10):1202–1208.

3. Lauk M, Köster B, Timmer J, Guschlbauer B, Deuschl G, Lücking CH. Side-to-side correlation of muscle activity in physiological and pathological human tremors. Clin Neurophysiol 1999, 110:1774–1783.

4. Raethjen J, Kopper F, Govindan RB, Volkmann J, Deuschl G: Two different pathogenetic mechanisms in psychogenic tremor. Neurology 2004, 63:812–815.

5. Chakraborty S, Kopecká J, Šprdlík O, Hoskovcová M, Ulmanová O, Růžička E, Zapotocky M: Intermittent bilateral coherence in physiological and essential hand tremor. Clin Neurophysiol 2017, 128(4):622–634.

6. Schreiber T, Schmitz A: Improved surrogate data for nonlinearity tests. Phys Rev Lett 1996 77(4):635–638.

P232 Detecting joint pausiness in parallel spike trains

Matthias Gärtner1, Sevil Duvarci2, Jochen Roeper2, Gaby Schneider1

1Institute of Mathematics, Goethe-University, Frankfurt, Germany; 2Neuroscience Center, Institute of Neurophysiology, Goethe-University, Frankfurt, Germany

Correspondence: Matthias Gärtner (

BMC Neuroscience 2017, 18 (Suppl 1):P232

Transient periods with reduced neuronal discharge - called ‘pauses’ - have recently gained increasing attention. In dopamine neurons, pauses are considered important teaching signals, encoding negative reward prediction errors. Particularly simultaneous pauses are likely to have increased impact on information processing. Available methods for detecting joint pausing analyze temporal overlap of pauses across spike trains. Such techniques are threshold dependent and can fail to identify joint pauses that are easily detectable by eye, particularly in spike trains with different firing rates.

We introduce a new statistic called ‘pausiness’ that measures the degree of synchronous pausing in spike train pairs and avoids threshold-dependent identification of specific pauses. A new graphic termed the ‘cross-pauseogram’ compares the joint pausiness of two spike trains with its time shifted analogue, such that a (pausiness) peak indicates joint pausing. When assessing significance of pausiness peaks, we use a stochastic model with synchronous spikes to disentangle joint pausiness arising from synchronous spikes from additional ‘Joint Excess Pausiness’ (JEP). Parameter estimates are obtained from auto- and cross-correlograms, and statistical significance is assessed by comparison to simulated cross-pauseograms.
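To make the idea concrete, a simplified, bin-based cross-pauseogram might look like the sketch below. The published statistic is threshold-free and more refined; the binning, the empty-bin pause criterion, and the lag range here are illustrative only:

```python
import numpy as np

def cross_pauseogram(train1, train2, t_stop, bin_w=0.05, max_lag=20):
    """Bin both spike trains, mark empty bins as 'pauses', and report
    the fraction of jointly silent bins as a function of the shift
    (in bins) between the two trains.  A peak at zero lag indicates
    joint pausing.  Times in seconds."""
    edges = np.arange(0.0, t_stop + bin_w, bin_w)
    p1 = np.histogram(train1, bins=edges)[0] == 0
    p2 = np.histogram(train2, bins=edges)[0] == 0
    lags = np.arange(-max_lag, max_lag + 1)
    joint = [np.mean(p1[max(l, 0):len(p1) + min(l, 0)] &
                     p2[max(-l, 0):len(p2) + min(-l, 0)]) for l in lags]
    return lags * bin_w, np.array(joint)
```

Comparing the zero-lag value against the shifted baseline, and against simulations of a synchronous-spike model, separates joint excess pausiness from the pausiness that synchronous spiking alone would produce.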

Our new method was applied to dopamine neuron pairs recorded in the ventral tegmental area of awake behaving mice. Significant JEP was detected in about 20% of the pairs. Given the neurophysiological importance of pauses and the fact that neurons integrate multiple inputs, our findings suggest that the analysis of JEP can reveal interesting aspects in the activity of simultaneously recorded neurons.


This work was supported by the Priority Program 1665 of the DFG (DU 1433/1-1 to SD and JR, and SCHN 1370/2-1 to MG and GS), by an EMBO long-term fellowship (ALTF_210-2012 to SD), and by the German Federal

Ministry of Education and Research (BMBF, 01ZX1404B to GS).

P233 A stochastic model relates responses to bistable stimuli to underlying neuronal processes

Stefan Albert1, Katharina Schmack2, Gaby Schneider1

1Institute of Mathematics, Goethe-University, Frankfurt a.M., Germany; 2Department of Psychiatry and Psychotherapy, Charité Universitätsmedizin, Berlin, Germany

Correspondence: Stefan Albert (

BMC Neuroscience 2017, 18 (Suppl 1):P233

Viewing of ambiguous stimuli can lead to bistable perception alternating between the possible percepts. The respective response patterns differ between schizophrenic patients and healthy controls [1, 2]. At the same time, these patterns show similarities with spiking patterns of dopaminergic cells [3] that may be related to schizophrenia spectrum disorders. Specifically, oscillatory behavior [4] with single percept changes occurs during continuous viewing of ambiguous stimuli, whereas during intermittent viewing, stable, more or less regular periods alternate with bursts of percept changes.

Therefore, we propose a stochastic model that provides a link between the observed response patterns and potential underlying neuronal processes. To that end, we first develop a Hidden Markov Model that captures the observed group differences by describing switches between stable and unstable states in the intermittent presentation and using only one state in continuous presentation. Second, the model is embedded into a hierarchical model that describes potential underlying neuronal activity as difference between two competing neuronal populations similar to [5]. This differential activity is assumed here to generate switching between (i) the two conflicting percepts and between (ii) stable and unstable states with comparable mechanisms on different neuronal levels. Using only a small number of parameters, the model can be fitted to a large data set of perceptual responses of schizophrenic patients and healthy controls under continuous and intermittent stimulation. The model can closely reproduce a wide variety of response patterns and is able to capture and to provide potential neuronal mechanisms for group differences between healthy controls and schizophrenic patients such as the weaker tendency to stabilized perception in the patient group under intermittent stimulation [2].
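A minimal generative sketch of the first modeling stage, a hidden stable/unstable state modulating the per-presentation switch probability, could look like this. All probabilities are illustrative, not the fitted values:

```python
import numpy as np

def simulate_responses(n_trials, p_leave_stable=0.05, p_leave_unstable=0.3,
                       p_switch_stable=0.05, p_switch_unstable=0.6, seed=1):
    """Two-state hidden Markov sketch of intermittent viewing: the
    hidden stable/unstable state evolves as a Markov chain, and the
    probability of reporting a percept switch on each presentation
    depends on that hidden state."""
    rng = np.random.default_rng(seed)
    state, percept = "stable", 0
    percepts, states = [], []
    for _ in range(n_trials):
        if state == "stable" and rng.random() < p_leave_stable:
            state = "unstable"
        elif state == "unstable" and rng.random() < p_leave_unstable:
            state = "stable"
        p_sw = p_switch_stable if state == "stable" else p_switch_unstable
        if rng.random() < p_sw:
            percept = 1 - percept        # reported percept change
        percepts.append(percept)
        states.append(state)
    return percepts, states
```

Long stable runs punctuated by bursts of switches emerge directly from the state dwell times, which is the response pattern the intermittent-viewing condition produces; the continuous condition corresponds to collapsing the chain to a single state.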


This work was supported by the German Federal Ministry of Education and Research (BMBF, Funding number: 01ZX1404B; SA, KS, GS).


1. Schmack K, Gòmez-Carrillo de Castro A, Rothkirch M, Sekutowicz M, Rössler H, Haynes J, Heinz A, Petrovic P, Sterzer P: Delusions and the Role of Beliefs in Perceptual Inferences. J Neurosci 2013, 33(34):13701–13712.

2. Schmack K, Schnack A, Priller J, Sterzer P: Perceptual instability in schizophrenia: Probing predictive coding accounts of delusions with ambiguous stimuli. Schizophr Res Cogn 2015, 2(2):72–77.

3. Bingmer M, Schiemann J, Roeper J, Schneider G: Measuring burstiness and regularity in oscillatory spike trains. J Neurosci Methods 2011, 201: 426–437.

4. Brascamp JW, Pearson J, Blake R, van den Berg AV: Intermittent ambiguous stimuli: Implicit memory causes periodic perceptual alternations. J Vis 2009, 9(3): 1-23.

5. Gigante G, Mattia M, Braun J, Del Giudice P: Bistable Perception Modeled as Competing Stochastic Integration at Two Levels. PLoS Comput Biol 2009, 5(7): e1000430.

P234 Function and energy consumption constrain biophysical properties of neurons - an example from the auditory brainstem

Michiel Remme1,2, John Rinzel3,4, Susanne Schreiber1,2

1Institute for Theoretical Biology, Humboldt University, 10115 Berlin, Germany; 2Bernstein Center for Computational Neuroscience Berlin, Germany; 3Center for Neural Science, New York University, New York, NY 10003, United States; 4Courant Institute of Mathematical Sciences, New York University, New York, NY 10012, United States

Correspondence: Michiel Remme (

BMC Neuroscience 2017, 18 (Suppl 1):P234

Neural morphology and membrane properties vary greatly between cell types in the nervous system. While the function of neurons is thought to be the key constraint on their biophysical properties, additional constraints may further shape neuronal design and explain observed properties. Here, we focus on principal neurons of the medial superior olive (MSO) nucleus of the auditory brainstem and show that a tradeoff between a functionally relevant computation and energy consumption predicts optimal ranges of biophysical parameters.

Biophysical properties of MSO cells as well as their function are well characterized: MSO cells encode the direction of sound in the horizontal plane. Inputs to MSO cells are phase-locked to the sound waveform at each ear, and the interaural time difference (ITD) between the sound waves is used to compute source location. To achieve sensitivity to ITDs in the range of tens of μs, MSO cells have specialized membrane properties, including a very fast membrane time constant (~1 ms) and a low-threshold potassium current (IKLT), both contributing to a very short input integration window [1]. Furthermore, MSO cell function is supported by their bipolar morphology, with inputs from the two ears segregated to the two main dendrites [2].

Next to function, energy use can be assumed to significantly constrain MSO cell properties. Overall, the brain accounts for a disproportionately large part (~20%) of the energy budget, with metabolic energy being mostly spent on synaptic input, action potentials, and resting potentials [3]. MSO cells, in particular, receive inputs at very high rates (hundreds of Hz), generate action potentials at similarly high rates, and display a very leaky membrane.
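For scale, the per-spike contribution can be approximated from the Na+ charge needed to depolarize the membrane, in the spirit of [3]. Every number below is a rough illustrative value, not a quantity from this study:

```python
# Back-of-envelope ATP cost of one action potential from the
# capacitive Na+ load.  All constants are illustrative.
C_M = 0.9e-6         # specific membrane capacitance, F/cm^2
AREA = 1.0e-5        # membrane area, cm^2 (a ~1000 um^2 compartment)
DV = 0.1             # spike amplitude, V
E_CHARGE = 1.602e-19 # elementary charge, C
LOAD_FACTOR = 4.0    # excess Na+ entry beyond the capacitive minimum,
                     # because Na+ and K+ currents overlap in time

na_ions = LOAD_FACTOR * C_M * AREA * DV / E_CHARGE
atp = na_ions / 3.0  # the Na,K-ATPase extrudes 3 Na+ per ATP hydrolyzed
print(f"~{atp:.1e} ATP molecules per spike")
```

At the hundreds-of-Hz firing rates MSO cells sustain, costs of this order per spike, plus the synaptic and resting-potential costs, make energy a plausible design constraint alongside ITD sensitivity.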

Here, we quantify and contrast sensitivity of MSO cells to ITDs as well as the associated metabolic cost. We developed a simplified dendritic model of an MSO cell that includes the KLT-current. We first fit the model to experimental data from [1] and then explored how varying the morphological and membrane parameters affects performance and energy consumption. We found that most experimentally constrained parameters were close to a functional optimum; if a wider range of functionally good values was available, the fitted parameters tended towards lower energy usage. Interestingly, we found that the KLT-current increases energy costs, but strongly improves coincidence detection, beyond passive capabilities. We next explored the full parameter space by considering 100,000 models with random combinations of parameters. The experimentally constrained model was among the top 13% regarding performance and top 12% regarding energy efficiency (i.e., sensitivity per energy). Exploration of the full parameter space highlighted that two model features explain most of their performance and energy consumption: 1) the level of saturation of the driving force of the synaptic conductance inputs and 2) the width of the somatic compound EPSPs. We conclude that the neural design of MSO cells is indeed compatible with both functional and energetic constraints, with a preference of function over cost.


This work was supported by the Einstein Foundation Berlin and the German Federal Ministry of Education and Research (01GQ0901, 01GQ1403).


1. Mathews PJ, Jercog PE, Rinzel J, Scott LL, Golding NL. Control of submillisecond synaptic timing in binaural coincidence detectors by Kv1 channels. Nat Neurosci 2010. 13:601–609.

2. Agmon-Snir H, Carr CE, Rinzel J: The role of dendrites in auditory coincidence detection. Nature 1998, 393:268–272.

3. Attwell D, Laughlin SB: An energy budget for signaling in the grey matter of the brain. J Cereb Blood Flow Metab 2001, 21:1133–1145.

P235 The Brain Simulation Platform of the Human Brain Project: collaborative web applications and tools for data-driven brain models

Michele Migliore1, Carmen A. Lupascu1, Luca L. Bologna1, Rosanna Migliore1, Stefano M. Antonel2, Jean-Denis Courcol2, Felix Schürmann2

1Institute of Biophysics, National Research Council (CNR), Palermo, Italy; 2Blue Brain Project, École Polytechnique Fédérale de Lausanne (EPFL), Geneva, Switzerland

Correspondence: Michele Migliore (

BMC Neuroscience 2017, 18 (Suppl 1):P235

The Brain Simulation Platform (BSP) of the Human Brain Project (HBP) provides a large set of tools to build, reconstruct, simulate and analyze data-driven brain models in a collaborative manner (Figure 1). The available tools are organized by use cases, consisting of selected procedures illustrating specific practical examples on how to exploit the Platform capabilities to pursue scientific goals.

The platform is designed to target users with different backgrounds and expertise, such as: a) “end-users”, interested in using the platform in a user-friendly manner, b) “power-users”, able to take advantage of the platform services while integrating their own High Performance Computing resources, c) “expert-users”, who can contribute to the development of the tools, and d) “co-design developers”, who are early adopters of initial versions of the platform facilities.

In this poster, we will give an overview of the current BSP release, the services it provides and the collaborative approach underlying its design. To illustrate the potential of the platform, and how users with different backgrounds can take full advantage of its tools, we will demo a few use cases in which “end-users” and/or “expert-users” are guided step by step through Python-based Jupyter notebooks and web-application graphical interfaces (Figure 1).

Figure 1. The HBP Brain Simulation Platform web interface. A. BSP overview web page. B and C. Synaptic Events Fitting and Electrophysiological Feature Extraction GUIs, developed as a Jupyter notebook and a web app, respectively


This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 720270.

P236 A Single Pyramidal-Cell and Network Computational Model of the Hippocampal CA3 Region

Sami Utku Çelikok1, Eva M. Navarro-López2, Neslihan Serap Şengör3

1Biomedical Engineering Department, Boğaziçi University, Istanbul, 34342, Turkey; 2School of Computer Science, The University of Manchester, Manchester, M13 9PL, UK; 3Electronics and Communication Department, Istanbul Technical University, Istanbul, 34469, Turkey

Correspondence: Sami Utku Çelikok (

BMC Neuroscience 2017, 18 (Suppl 1):P236

Hippocampal subarea CA3 has long drawn attention for its major role in encoding spatial representations and episodic memories [1]. Due to the presence of rich recurrent feedback connections, CA3 has been considered to play a key role in long-term memory formation. Moreover, CA3 has long been proposed as an auto-associative network capable of pattern completion and path integration for the retrieval and storage of episodic/declarative memory traces [2]. A broad range of experimental studies supports the idea that hippocampal oscillations must be taken into consideration when investigating the region as a memory network. Empirically validated studies on freely moving rats have identified two major oscillatory patterns of hippocampal activity in a behaviour-dependent context: theta- (4–8 Hz) and gamma-band (30–100 Hz) frequency rhythms [3, 4]. In rodents and humans, gamma rhythms embedded into theta oscillations become prominent during memory functions, object exploration, and spatial navigation [1]. Considering the spiking patterns of the neurons during oscillatory regimes is key to uncovering the significance of hippocampal network oscillations in different processes. Given the broad electrophysiological repertoire of CA3 pyramidal cells, the computational description of the network requires a neural model that is simple enough to support a large hippocampal network, yet rich enough to capture complex pyramidal-cell dynamics. This is precisely what we propose here: a single-cell computational model for a CA3 pyramidal neuron that is used as the basic element of a CA3 network model able to reproduce key hippocampal oscillatory patterns. The spiking patterns of the proposed single-cell model capture essential features of well-known hippocampal spiking behaviour, such as spike broadening at the end of a burst, rebound bursting, low-frequency bursts, and high-frequency tonic spiking (Figure 1).
Moreover, the model for the CA3 population is also able to generate theta and gamma-band oscillations, known to be present in the CA3 region.

Figure 1. A. Single-cell model results. Upper-left: Initial spike generation, upper-right: rebound bursting in response to hyperpolarisation, bottom: burst-to-tonic spike transition with increased input current. B. Population model spectrograms. Upper: gamma-band oscillations in the network, bottom: theta-band oscillations in the network


1. O’Keefe J, Nadel L: The Hippocampus as a Cognitive Map. Oxford, UK: Oxford University Press; 1978.

2. Samsonovich A, McNaughton BL: Path integration and cognitive mapping in a continuous attractor neural network model. J Neurosci 1997, 17(15):5900–5920.

3. Gloveli T, Kopell N, Dugladze T: Neuronal activity patterns during hippocampal network oscillations in vitro. In: Hippocampal Microcircuit 2010, Springer 247–276.

4. Leung LS, Lopes da Silva F, Wadman WJ: Spectral characteristics of the hippocampal EEG in the freely moving rat. Clin Neurophysiol 1982, 54:203–219.

P237 Functional connectivity between prefrontal cortex and striatum showed by computational model

Rahmi Elibol, Neslihan Serap Sengor

Electronics and Communication Engineering, Istanbul Technical University, Istanbul, Turkey

Correspondence: Rahmi Elibol (

BMC Neuroscience 2017, 18 (Suppl 1):P237

It is well known that there is a strong correlation between cortical and striatal activity, especially during action selection and goal-directed behavior. The striatum projects back to the cortex through the direct and indirect pathways and via the thalamus, forming a closed loop [1]. Such structural associations of the brain are called structural connectivity, or the connectome. With the development of measurement technologies such as fMRI, more work has been carried out to establish associations between different brain areas and cognitive processes; such associations are called functional connectivity, or the functional connectome. In addition, the processes carried out at the neuronal level and/or the changes in synaptic connections that give rise to relations observed at the frequency and/or phase level are called the dynome [2]. The structural connection between cortex and striatum is already known, and their functional connectivity has been shown in experimental studies. In this work, based on the experimental results given in [3], a computational model built from dynamically connected neurons and synapses is proposed to show the dynome relation between cortex and striatum.

In the experimental studies described in [3], local field potentials (LFPs) in the prefrontal cortex and striatum were measured. Beta- and gamma-band activity was observed, and the correlation between cortical and striatal activity was quantified with the phase locking value (PLV) [3, 4]. These experimental results were reproduced with the proposed computational model, and the results given in Figure 1 are similar to the experimental ones. The simulations were carried out under conditions similar to those of the experiments: the stimuli were applied as in the experimental work, and the role of different reward quantities was investigated by changing the dopamine levels.
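The phase locking value used to quantify cortico-striatal locking can be computed in a few lines. The sketch below is purely illustrative (it is not the authors' code); it takes instantaneous-phase series as input, e.g. as obtained from a Hilbert transform of band-passed LFPs.

```python
import cmath
import math

def phase_locking_value(phases_a, phases_b):
    """PLV between two instantaneous-phase series (radians):
    PLV = |mean over n of exp(i*(phi_a[n] - phi_b[n]))|.
    1 means a perfectly constant phase difference; values near 0 mean no locking."""
    diffs = [cmath.exp(1j * (pa - pb)) for pa, pb in zip(phases_a, phases_b)]
    return abs(sum(diffs)) / len(diffs)

# A 25 Hz phase ramp sampled at 1 kHz for 1 s:
t = [2 * math.pi * 25 * k / 1000 for k in range(1000)]
locked = phase_locking_value(t, [p - 0.8 for p in t])    # fixed 0.8 rad lag
drifting = phase_locking_value(t, [0.5 * p for p in t])  # drifting difference
```

A constant phase lag gives PLV = 1, while a steadily drifting phase difference averages out towards 0.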

Figure 1. The correlation between PFC and striatal activity: A. The activity in PFC. B. The activity in striatum. C. The correlation between PFC and striatum. The activities in PFC and striatum are given as normalized firing rate values. The results show that there is a correlation between cortex and striatum


1. Alexander GE, Crutcher MD, DeLong MR: Basal ganglia-thalamocortical circuits: parallel substrates for motor, oculomotor, “prefrontal” and “limbic” functions. Progress in Brain Research 1990, 85:119–146.

2. Kopell NJ, Gritton HJ, Whittington MA, Kramer MA: Beyond the connectome: the dynome. Neuron 2014, 83(6):1319–1328.

3. Zhang Y, Pan X, Wang R, Sakagami M: Functional connectivity between prefrontal cortex and striatum estimated by phase locking value. Cogn Neurodyn 2016, 10(3):245–254.

4. Antzoulatos EG, Miller EK: Increases in functional connectivity between prefrontal cortex and striatum during category learning. Neuron 2014, 83(1):216–225.

P238 A spiking neural network model of basal ganglia-thalamocortical circuit with Brian2

Mustafa Yasir Özdemir, Neslihan Serap Şengör

Electronic-Communication Department, İstanbul Technical University, İstanbul, Turkey

Correspondence: Mustafa Yasir Özdemir (

BMC Neuroscience 2017, 18 (Suppl 1):P238

The basal ganglia, a group of subcortical nuclei, play an essential role in action selection, decision making and reward-based learning. In this work, the basal ganglia-thalamocortical circuit responsible for motor control, which gives rise to voluntary movement, is considered.

The characteristics of neuronal activity and its functional capabilities, the properties of synaptic connections, the effects of neurotransmitters such as dopamine, and the relations between different nuclei defined by pathways are all involved in realizing voluntary movement. It has long been known that abnormalities in dopamine level negatively influence basal ganglia operation, giving rise to neurological disorders such as Parkinson’s disease, Huntington’s chorea, hemiballismus, and dystonia [1].

The equations describing neuronal activity are complicated, and simulations of computational models are especially versatile for predicting neuronal activity. Computational models reflect the consequences of the various assumptions made in forming them [2]. Most computational models of basal ganglia circuits consider a specific process and only partly reflect their nature and function. In this work, an attempt is made to obtain a holistic model of the basal ganglia-thalamocortical circuit in the Brian2 environment, to ease further improvement and testing of the model by neuroscientists.

Here, a spiking neural network model is built to capture the main properties of the basal ganglia circuit. The characteristic neuronal activity of each substructure is obtained by modifying the Izhikevich neuron model [3]. The proposed model of the basal ganglia-thalamocortical circuit is also capable of showing the effect of dopamine on processing, owing to the modified striatal neurons: medium spiny neurons with different dopamine receptor types are modeled separately. The direct, indirect and hyper-direct pathways all exist in the model, and the effect of dopamine on these pathways can be observed in the simulations. Synaptic connections are configured to realize learning, and connection probabilities are set according to values reported in the literature. The model is formed with inspiration from another study [4] and realized in the Brian2 simulator.
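As a rough sketch of the building block named above, here is a minimal plain-Python Euler integration of the standard Izhikevich model [3]. The parameter values are the published regular-spiking and intrinsically bursting sets, not the authors' modified striatal variants, and the actual model runs in Brian2.

```python
def izhikevich(a=0.02, b=0.2, c=-65.0, d=8.0, i_ext=10.0,
               t_max=1000.0, dt=0.25):
    """Euler integration of the Izhikevich model; returns spike times in ms.

    dv/dt = 0.04 v^2 + 5 v + 140 - u + I
    du/dt = a (b v - u);   on spike (v >= 30 mV): v <- c, u <- u + d
    """
    v, u = c, b * c
    spikes = []
    for k in range(int(t_max / dt)):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + i_ext)
        u += dt * a * (b * v - u)
        if v >= 30.0:
            spikes.append(k * dt)
            v, u = c, u + d
    return spikes

rs_spikes = izhikevich()                # regular-spiking parameter set
ib_spikes = izhikevich(c=-55.0, d=4.0)  # intrinsically bursting parameter set
```

Changing (a, b, c, d) is what lets one neuron equation reproduce the characteristic firing of different basal ganglia substructures.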

The simulation results of the model are presented as raster plots, firing rates and time-frequency analyses. In the simulations, stimulus activity in the cortex is projected to the thalamus, and the model reveals the separate roles of the direct, indirect and hyper-direct pathways in the formation of this projection.


1. Wichmann T, DeLong MR: Deep Brain Stimulation for Neurologic and Neuropsychiatric Disorders. Neuron 2006, 52(1): 197–204.

2. Schroll H, Hamker FH: Computational models of basal-ganglia pathway functions: focus on functional neuroanatomy. Front Syst Neurosci 2013.

3. Izhikevich EM: Which model to use for cortical spiking neurons? IEEE Trans Neural Networks 2004, 15:1063–1070.

4. Çelikok U, Navarro-Lopez EM, Şengör NS: A computational model describing the interplay of basal ganglia and subcortical background oscillations during working memory processes. arXiv:1601.07740

P239 Coordinate-transformation spiking neural network for spatial navigation

Tianyi Li, Angelo Arleo, Denis Sheynikhovich

Sorbonne Universités, UPMC Univ Paris 06, INSERM, CNRS, Institut de la Vision, 17 rue Moreau, 75012 Paris, France

Correspondence: Denis Sheynikhovich (

BMC Neuroscience 2017, 18 (Suppl 1):P239

Spatial navigation in primates is thought to be mediated by neural networks linking the dorsal visual pathway (including parietal and retrosplenial cortices) and the medial temporal lobe [1]. Neurons along this pathway are sensitive to visual cues of varying complexity (from simple visual features to views of spatial scenes [2, 3]) and have been characterized to code environmental features in different reference frames (from egocentric eye- or head-centered representations early in the pathway to allocentric world-centered ones later in the pathway [3, 4]). However, neural mechanisms underlying the transformation between egocentric-visual and allocentric-spatial representations remain poorly understood.

In this work, we present a spiking-neural-network model of visuo-spatial coordinate transformation that receives realistic head-centered visual input with a limited view field. After this input is processed with V1-like orientation-sensitive neuronal filters, it is transformed to an allocentric directional frame using two mechanisms experimentally observed along the dorsal pathway. First, a head-direction signal, thought to be provided by the retrosplenial cortex, is used by the network to align egocentric input views with a world-centered directional frame [4]. Second, short-term visual working memory in the parietal network serves to link subsequent views during head rotation into a scene-like representation of visual features. The output of the coordinate-transformation network serves as input to the hippocampus, where location-sensitive neuronal responses are learned using spike-timing-dependent plasticity.
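The first mechanism, aligning an egocentric view with a world-centered frame via the head-direction signal, reduces in its simplest non-spiking reading to adding the head direction to each egocentric bearing. The toy sketch below only illustrates that geometry; it is an assumption for exposition, not the gain-field network itself.

```python
import math

def ego_to_allo(ego_bearing, head_direction):
    """Map an egocentric bearing (radians, relative to the head axis) to an
    allocentric bearing by adding the head-direction signal."""
    return (ego_bearing + head_direction) % (2 * math.pi)

# The same cue seen at egocentric bearing -30 deg while the head points at
# 0 rad, and at -60 deg after the head rotates to +30 deg, maps to one and
# the same world-centred direction:
a1 = ego_to_allo(math.radians(-30), 0.0)
a2 = ego_to_allo(math.radians(-60), math.radians(30))
```

Successive views taken at different head directions thus land in a common directional frame, which is what lets working memory stitch them into a scene-like representation.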

Neuronal activities in the model are shown to reproduce basic features of dorsal-pathway neurons. In particular, in an experimental setup mimicking an animal sitting in front of a screen, visual receptive fields of model parietal/retrosplenial neurons code features in head- or world-centered reference frames, and firing activities in the transformation network exhibit gain fields with respect to head direction, as observed in classical experiments with monkeys. In a setup where the simulated animal explores an experimental environment, modeled hippocampal cells exhibit location-sensitive firing fields after learning. These purely visual place fields are influenced by changes in the visuo-spatial environmental layout (e.g. its spatial geometry [5]), and are modulated by currently observed view [2]. Moreover, spike synchrony patterns in this model reflect environment topology [6]. This model links the processing of low-level visual features in the brain with high-level cognitive processes implicated in spatial navigation.


This research was supported by ANR - Essilor SilverSight Chair ANR-14-CHIN-0001


1. Kravitz DJ, Saleem KS, Baker CI, Mishkin M: A new neural framework for visuospatial processing. Nat Rev Neurosci. 2011, 12:217–230.

2. Ekstrom AD: Why Vision is Important to How We Navigate. Hippocampus 2015, 25:731–735.

3. Snyder LH, Grieve KL, Brotchie P, Andersen R: Separate body- and world-referenced representations of visual space in parietal cortex. Nature 1998, 394:887–891.

4. Byrne P, Becker S, Burgess N: Remembering the past and imagining the future: A neural model of spatial memory and imagery. Psychol Rev. 2007, 114:340–375.

5. Sheynikhovich D, Chavarriaga R, Strösslin T, Arleo A, Gerstner W, Strosslin T, Arleo A, Gerstner W: Is there a geometric module for spatial orientation? Insights from a rodent navigation model. Psychol Rev. 2009, 116:540–566.

6. Curto C, Itskov V: Cell Groups Reveal Structure of Stimulus Space. PLoS Comput Biol. 2008, 4:e1000205.

P240 Micro-connectomics with cognitive task selectivity

Akihiro Nakamura1, Masanori Shimono1,2

1Osaka University, Toyonaka, Osaka, Japan; 2Riken Brain Science Institute, Saitama, Japan

Correspondence: Masanori Shimono (

BMC Neuroscience 2017, 18 (Suppl 1):P240

Various cognitive functions of our brain are realized by interactions among a large number of neurons. Traditionally, the selectivity of neuronal activity to individual cognitive tasks has been studied [1]. In order to understand the function of the brain more deeply, we need to investigate the micro-connectome, a comprehensive map of the connectivity or interactions of neurons or synapses, beyond basic statistical observations of its individual elements [2]. This study reports the interactions among neurons measured from the anterior lateral motor cortex (ALM) of mice using calcium fluorescence imaging, and focuses on selectivity for cognitive planning of directed licking behaviors [3]. We reconstructed functional networks from the spiking activities of the neuron ensembles during resting periods and compared them with the motion selectivity of individual neurons (Figure 1). The network structure was characterized using graph theory [4]. Past studies [3] have reported that significant activity can be observed in layer 5 of the ALM, but the contributions of the other layers were not reported. Our connectome analyses consistently showed that, in layer 5 of the ALM, a simple connection-strength measure was significantly stronger in motion-selective neurons than in motion-nonselective cells. Surprisingly, in layer 2, a centrality measure was significantly higher in selective cells, especially contralateral-selective cells, than in non-selective cells; centrality indicates that a cell occupies an important position within the network. It has been repeatedly reported that effective connectivity, estimated from neuronal activity recorded with the calcium imaging technique during resting periods, reflects the underlying structural synaptic connectivity fairly well [5]. Therefore, our results suggest that the neurons involved in motor planning are located at highly central positions in the micro-connectome by structural design. Because of this position, they can influence a large number of neurons within, and probably beyond, the ALM. Viewed more widely, layer 5 lies on the bottom-up information flow that originally comes from the thalamus, and layer 2 lies on the top-down information flow relatively close to the output to the thalamus. Therefore, layer 2 in the micro-connectome may represent a functional role in motor planning different from that of the neuron group in layer 5. Our findings and methodological schemes will contribute to a more accurate understanding of cognitive functions, the effects of aging, and various neurodegenerative diseases.
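For readers unfamiliar with the graph measures involved [4], the sketch below illustrates two common centrality notions on a toy network. This is illustrative only; the abstract does not specify which centrality measure was used, so the choice here is an assumption for demonstration.

```python
def degree_centrality(adj):
    """Degree of each node of a binary, undirected adjacency matrix."""
    return [sum(row) for row in adj]

def eigenvector_centrality(adj, n_iter=200):
    """Eigenvector centrality by power iteration (connected, undirected graph):
    repeatedly multiply by the adjacency matrix and renormalize."""
    n = len(adj)
    x = [1.0] * n
    for _ in range(n_iter):
        y = [sum(adj[i][j] * x[j] for j in range(n)) for i in range(n)]
        scale = max(y) or 1.0
        x = [v / scale for v in y]
    return x

# Toy network: node 0 is a hub linked to all others; the rest have two links.
adj = [[0, 1, 1, 1, 1],
       [1, 0, 1, 0, 0],
       [1, 1, 0, 0, 0],
       [1, 0, 0, 0, 1],
       [1, 0, 0, 1, 0]]
deg = degree_centrality(adj)   # [4, 2, 2, 2, 2]
eig = eigenvector_centrality(adj)
```

On this toy network, the hub scores highest on both measures, which is the sense in which a highly central motor-planning neuron can influence many others.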

Figure 1. The general concept of this study. A. Neuronal activities were recorded with the calcium imaging technique while rodents were resting (or simply waiting for a task) or performing licking tasks. B. An example of an effective/functional network of neurons reconstructed from the neuronal dynamics. The differences in markers indicate differences in the responses of the neurons (neurons responding selectively to contralateral lickings (), to ipsilateral lickings (□), and neurons showing no responses to these licking behaviors ())


1. Hubel DH, Wiesel TN: Receptive fields and functional architecture of monkey striate cortex. The Journal of physiology 1968, 195(1): 215–243.

2. Shimono M, Beggs, JM: Functional clusters, hubs, and communities in the cortical microconnectome. Cerebral Cortex 2015, 25 (10): 3743–3757.

3. Li N, Chen TW, Guo ZV, Gerfen CR, Svoboda K: A motor cortex circuit for motor planning and movement. Nature 2015, 519(7541): 51–56.

4. Bullmore E, Sporns O: Complex brain networks: graph theoretical analysis of structural and functional systems. Nature Reviews Neuroscience 2009, 10(3): 186–198.

5. Stetter O, Battaglia D, Soriano J, Geisel T: Model-free reconstruction of excitatory neuronal connectivity from calcium imaging signals. PLoS Comput Biol 2012, 8(8): e1002

P241 Does reinforcement learning explain zone-allocation behavior between two competing mice?

Youngjo Song1, Sol Park1,2, Ilhwan Choi2, Jaeseung Jeong1,3, Hee-sup Shin2

1Department of Bio and Brain Engineering, KAIST, Daejeon, 34141, Republic of Korea; 2Center for Cognition and Sociality, IBS, Daejeon, 34047, Republic of Korea; 3Program of Brain and Cognitive Engineering, KAIST, Daejeon, 34141, Republic of Korea

Correspondence: Youngjo Song (

BMC Neuroscience 2017, 18 (Suppl 1):P241

In a previous study (Choi et al., in revision), we observed two mice showing cooperative-like behavior in a competitive situation over rewards. We also showed that this cooperative-like behavior enhanced mutual rewards and produced payoff equity between the two competing mice. However, the origin of this behavior is not clear. The aim of this study is therefore to address whether the cooperative-like behavior can be explained by reinforcement learning. The behavior chamber for the mice contains two light cues, each indicating one of two reward zones. A mouse gets a reward if it enters the left reward zone when the left light cue turns on, and likewise for the right reward zone when the right light cue turns on. The reward is given by wireless brain stimulation through an electrode implanted in the medial forebrain bundle (MFB), the pleasure center of the mouse brain. Once the mice had learned the meaning of the light cues, we performed a pair test in which two mice were released into one training chamber. In this experiment, 15 out of 19 pairs showed a tendency to separate and allocate their own reward zones by themselves. In other words, those mice had their own preferred sides and did not interfere with the opponent’s preferred side (we call this ‘zone-allocation behavior’). We followed the ethical guidelines of the Institutional Animal Care and Use Committee at KAIST. This behavior can be considered a heuristic rule of reciprocity and cooperation. To investigate whether reinforcement learning can explain this behavior in two competing mice, we developed a computational model based on temporal-difference (TD) learning. In the computational simulation, the environment is set up identically to the real training chamber. Each model mouse makes decisions based only on a state-action value function that is updated by the TD rule.
We found that the computational model successfully mimicked the zone-allocation behavior between two model mice. Two types of pairs were observed in our model. In the first type, the pair divided the reward zones between themselves, so that each mouse acquired its own preferred side (Figure 1A); this corresponds to a zone-allocating pair in the actual experiment. In the second type, one mouse dominated both reward zones (Figure 1B). Over repeated iterations, 75% of model mouse pairs showed the zone-allocation behavior, which is quite consistent with the experimentally observed ratio of zone-allocating pairs (69%). Moreover, we examined whether a mouse achieves this behavior when it uses model-based learning, implemented with the Dyna-Q algorithm. Zone-allocation behavior, however, could not be achieved: with model-based learning, the state-action values are updated too often, and the mouse’s behavior did not converge.
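The TD rule at the heart of the model can be sketched for a single model mouse on the cue task. This is a deliberate reduction with assumed learning parameters; the authors' model has two interacting agents in a spatial chamber.

```python
import random

def train(n_trials=2000, alpha=0.1, epsilon=0.1, seed=0):
    """Tabular TD (Q-learning) on the cue task: state = which light cue is on
    (0 = left, 1 = right), action = reward zone entered (0 = left, 1 = right);
    reward is 1 when the entered zone matches the lit cue."""
    rng = random.Random(seed)
    q = [[0.0, 0.0], [0.0, 0.0]]          # q[state][action]
    for _ in range(n_trials):
        state = rng.randrange(2)          # a cue lights up at random
        if rng.random() < epsilon:        # epsilon-greedy exploration
            action = rng.randrange(2)
        else:
            action = 0 if q[state][0] >= q[state][1] else 1
        reward = 1.0 if action == state else 0.0
        # One-step TD update; the trial ends after the choice, so the
        # target is just the immediate reward.
        q[state][action] += alpha * (reward - q[state][action])
    return q

q = train()
```

In the two-mouse simulation, the same update runs for both animals, and competition over the lit zone is what drives the Q-values apart into the zone-allocation pattern of Figure 1A.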

Figure 1. A. State-action values (Q-values) of a pair of mice showing zone-allocation behavior. For mouse 1, the Q-value for the R (right) reward zone is larger than the Q-value for the L (left) reward zone, meaning that mouse 1 prefers the R reward zone; in the same way, mouse 2 prefers the L reward zone. Moreover, the Q-value of mouse 1 for the L reward zone and the Q-value of mouse 2 for the R reward zone become less than 0.2, meaning that each mouse did not interfere with the opponent’s preferred side. B. State-action values (Q-values) of a pair of mice not showing zone-allocation behavior. The Q-value of mouse 2 for the reward zone converges to zero, meaning that mouse 2 prefers not to move, so mouse 1 obtained all the rewards

Conclusion: This computational result supports the hypothesis that the zone-allocation behavior of rodents can be explained by positive reinforcement learning (particularly model-free learning). Zone allocation might be a strategy to maximize reward and minimize cost, from the perspective of reinforcement learning in a competitive situation. We suggest that, to investigate social heuristic behavior, it might be crucial to remove the convergent egoistic characteristics of animal behavior.


1. Sutton RS, Barto AG: Reinforcement Learning: An Introduction. The MIT Press; 1998.

2. Glimcher PW, Fehr E: Neuroeconomics, 2nd Edition. Academic Press; 2014.

P242 Optimal synaptic scaling emerges from Hebbian learning rules in balanced networks

Sadra Sadeh1, Padraig Gleeson1, R. Angus Silver1

1Department of Neuroscience, Physiology and Pharmacology, University College London, London WC1E 6BT, UK

Correspondence: Sadra Sadeh (

BMC Neuroscience 2017, 18 (Suppl 1):P242

Synaptic connectivity varies widely across cell types and brain regions, and connections are formed and lost during development and learning. However, normal function cannot be maintained by simply adding or subtracting excitatory synaptic inputs onto a neuron, since this will cause neurons to become hyper- or hypo-excitable, resulting in network instability and loss of function. How then do neurons scale their synaptic input to maintain function? Theoretical work suggests that the optimal scaling of synaptic weights (J) with the number of synaptic connections per neuron (degree, K) is J ~ 1/√K [1], a result that has recently been confirmed experimentally [2]. However, the mechanisms by which such optimal scaling arises are unknown. To address this question, we implemented Hebbian-like plasticity rules at excitatory (E) and inhibitory (I) synapses in large-scale balanced spiking networks of primary visual cortex [3]. As K was increased in the networks, we found that synaptic weight decreased with a dependence of J ∝ 1/K^0.6, close to the theoretically optimal scaling [1] and closely matching that found experimentally [2]. Interestingly, optimal synaptic scaling emerged when Hebbian plasticity was present at both E and I synapses. In contrast, spiking networks relying solely on plasticity of I → E synapses to balance excitation and inhibition [4] did not exhibit optimal scaling. A simplified mean-field analysis of network dynamics explained the dependence of J on K in networks with Hebbian-like plasticity of E and I synapses, while revealing why the optimal scaling does not always hold in networks with plasticity of only I → E synapses.
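The intuition behind the 1/√K rule [1] is that it keeps the fluctuations of the summed synaptic input independent of K. A quick numeric check, illustrative only and assuming balanced ±1 presynaptic events rather than the paper's spiking network:

```python
import math
import random

def input_sd(K, n_trials=1000, seed=1):
    """Empirical sd of the summed input to a neuron receiving K balanced
    (+1/-1) presynaptic events, each weighted J = 1/sqrt(K)."""
    rng = random.Random(seed)
    J = 1.0 / math.sqrt(K)
    totals = [sum(J * rng.choice((-1.0, 1.0)) for _ in range(K))
              for _ in range(n_trials)]
    mean = sum(totals) / n_trials
    return math.sqrt(sum((t - mean) ** 2 for t in totals) / n_trials)

sds = {K: input_sd(K) for K in (100, 400, 1600)}
# Each sd stays close to 1 whatever K is, so single-neuron input statistics
# are preserved as connectivity changes.
```

With J ∝ 1/K instead, the same fluctuations would shrink as 1/√K and the balanced fluctuation-driven regime would be lost.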

Irrespective of the initial weights and number of synaptic connections, spiking networks with Hebbian-like plasticity of E and I synapses robustly self-regulated, through recurrent inhibition and learning, into a low-activity regime in which the E neuronal population exhibited a long-tailed distribution of activity. Notably, this was accompanied by higher activity and lower selectivity of I neurons, consistent with experimental observations. Examination of the input-output relationship of individual current-based or conductance-based neurons revealed that optimal synaptic scaling robustly preserved neuronal gain as the number of synaptic inputs was altered. Moreover, contrast-invariant input tuning curves translated to contrast-invariant output tuning curves only when the optimal (1/√K) scaling of weights was preserved. Our results thus suggest that Hebbian learning at both E and I connections is necessary for preserving cortical computation and function during changes in synaptic connectivity. These findings have important implications for cortical function during development, and cortical dysfunction during brain diseases.


Funded by the Wellcome Trust and the ERC.


1. van Vreeswijk C, Sompolinsky H: Chaos in neuronal networks with balanced excitatory and inhibitory activity. Science 1996, 274(5293):1724–1726.

2. Barral J, Reyes AD: Synaptic scaling rule preserves excitatory-inhibitory balance and salient neuronal network dynamics. Nat Neurosci 2016, 19(12):1690–1696.

3. Sadeh S, Clopath C, Rotter S: Emergence of Functional Specificity in Balanced Networks with Synaptic Plasticity. PLoS Comput Biol 2015, 11(6): e1004307.

4. Vogels TP, Sprekeler H, Zenke F, Clopath C, Gerstner W: Inhibitory Plasticity Balances Excitation and Inhibition in Sensory Pathways and Memory Networks. Science 2011, 334(6062):1569–1573.

P243 Deciphering the contributions of oriens-lacunosum/moleculare (OLM) cells during local field potential (LFP) theta rhythms in CA1 hippocampus

Alexandra Pierri Chatzikalymniou1,2 Frances K. Skinner1,3,2

1Krembil Research Institute, University Health Network, Toronto, ON, Canada; 2Department of Physiology, University of Toronto, Toronto, ON, Canada; 3Department of Medicine (Neurology), University of Toronto, Toronto, ON, Canada

Correspondence: Alexandra Pierri Chatzikalymniou (

BMC Neuroscience 2017, 18 (Suppl 1):P243

In the hippocampus, one of the most prevalent LFP rhythms is the 3–12 Hz “theta” oscillation [1]. This LFP theta rhythm is tightly correlated with spatial navigation, episodic memory and rapid eye movement (REM) sleep [1]. Recent work by Goutagny and colleagues [4] showed that theta rhythms emerge in the CA1 region of an intact in vitro hippocampus preparation due to local interactions between hippocampal interneurons and pyramidal (PYR) cells. Oriens-lacunosum/moleculare (OLM) cells are a major class of GABAergic interneurons in the hippocampus [5]. In addition to inhibiting distal dendrites of PYR cells in stratum LM, OLM cells disinhibit PYR cells in stratum radiatum, an inner to middle layer, by inhibiting interneurons that target PYR cells in that region [5].

Our goal is to examine the contributions of OLM cells to ongoing LFP theta rhythms in the context of the intact in vitro preparation using computational modeling. We use network models of OLM cells, bistratified cells (BiCs), and basket/axo-axonic cells (BC/AACs) that target PYR cells in specific layers [3], and assess the role of OLM cells as their interactions with BiCs and the PYR cell vary. We find that the LFP power is mostly affected by changes in the synaptic conductance from OLM cells to BiCs rather than by synaptic conductance changes from BiCs to OLM cells, indicating a more important role for the former. This observation suggests that progressive inhibition of OLM cells, and thus a progressive decrease of their synaptic inputs onto the PYR cell, does not strongly alter LFP characteristics, whereas progressive inhibition of BiCs does. Decomposition of the LFP signal reveals that fluctuations in power occur due to BiC and BC/AAC synaptic inputs onto the PYR cell rather than to OLM cell synaptic inputs onto the PYR cell. Selective removal of either OLM cells or BiCs/BCs/AACs reveals minimal contribution of the OLM cells to the total LFP power across the dendritic tree. Conversely, the BiC/BC/AAC-generated LFP component comprises approximately 90% of the total signal. Furthermore, changes in synaptic weights from OLM cells to the PYR cell do not produce substantial changes in the LFP.

Brain rhythms can be considered as representations of brain function [1, 2]. Given that particular inhibitory cell populations and abnormalities in theta rhythms are associated with disease states [2], it is important to understand the cellular contributions to LFP theta rhythm modulations. Our results show that OLM cells prominently contribute to local LFP theta through their interactions with other local inhibitory cell types. Decomposition of the LFP reveals little contribution of synaptic inputs from OLM cells onto the PYR cell. In CA1 PYR cells, distal and middle apical dendrites comprise two distinct dendritic domains with separate branching [6]. Since we find that maximum LFP power is recorded around the soma and the proximal dendrites, OLM cell contributions to LFP theta can be understood in the context of the cytoarchitectonic separation of distal and proximal dendrites in PYR cells, which prevents distal inhibitory inputs from effectively propagating to the soma.


Supported by NSERC Canada, Margaret J. Santalo Fellowship (Physiology, Univ Toronto) and SciNet HPC.


1. Buzsáki G: Theta oscillations in the hippocampus. Neuron 2002, 33:325–340.

2. Colgin L: Rhythms of the hippocampal network. Nat Rev Neurosci 2016, 17:239–249.

3. Ferguson KA, Huh CYL, Amilhon B, Williams S, Skinner FK: Network models provide insight into how oriens-lacunosum-moleculare (OLM) and bistratified cell (BSC) interactions influence local CA1 theta rhythms. Front Syst Neurosci 2015, 9:110.

4. Goutagny R, Jackson J, Williams S: Self-generated theta oscillations in the hippocampus. Nat Neurosci 2009, 12:1491–1493.

5. Maccaferri G: Stratum oriens horizontal interneurone diversity and hippocampal network dynamics. J Physiol 2005, 562.1:73–80.

6. Spruston N: Pyramidal neurons: dendritic structure and synaptic integration. Nat Rev Neurosci 2008, 9:206–221.

P244 Nonlinear optimal control of brain networks

Lazaro M. Sanchez-Rodriguez, Roberto C. Sotero

Hotchkiss Brain Institute and Department of Radiology, University of Calgary, Calgary, Alberta, Canada, T2N 1N4

Correspondence: Lazaro M. Sanchez-Rodriguez (

BMC Neuroscience 2017, 18 (Suppl 1):P244

The problem of controlling brain networks has been the focus of several recent studies given its relationship to brain stimulation. In this work, we introduce the State-Dependent Riccati Equation (SDRE) formalism [1] for the computation of optimal control signals in nonlinear brain networks. Firstly, the optimal input for the abatement of epileptic-like activity in the model proposed in [2] was calculated (see Figure 1B). Additionally, we looked at higher-dimensional systems consisting of coupled autonomous Duffing oscillators (see Figure 1, panels C-E). In the linear case our results are in agreement with those obtained in [3]. However, as the strength of the non-linearity increases, the fraction of the networks that can be controlled is generally lower whereas the cost of controlling the systems grows. Thus, we find evidence supporting the use of realistic nonlinear modeling of electrical neural activity in the design of optimal controllers for brain networks.

Figure 1. SDRE-optimal control of the networks. A. General scheme. B. Controlling the model in [2]. As soon as the control signal (top right corner) is sent, the diseased solution (in red) is driven to normal background activity. C. Typical trajectory for a controlled network of autonomous Duffing oscillators coupled through a scale-free connectivity matrix. Stimuli are applied to the lowest-degree nodes (one third of the total number of nodes in the network). D. Expected cost for the control over 25 scale-free networks (N = 100, mean degree ≈ 6). The numbers over each of the error bars indicate the fraction of the realizations of the network in which control is achieved as the non-linearity (coefficient of the cubic term) is changed. For strengths past 125, none of the networks can be controlled. In this case, the costs are infinitely high in theory. They are represented as red asterisks at the top of the panel. E. Analogous to D, for randomizations of the previously computed scale-free networks
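The SDRE idea can be sketched for a single Duffing oscillator: write the nonlinear dynamics in the state-dependent factorization dx/dt = A(x)x + Bu and solve a pointwise algebraic Riccati equation for the feedback at each state. This is an illustrative sketch with assumed parameters, not the authors' implementation (which treats networks of coupled oscillators):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Assumed double-well Duffing coefficients: x'' + delta*x' + alpha*x + beta*x^3 = u
alpha, beta, delta = -1.0, 1.0, 0.3
B = np.array([[0.0], [1.0]])          # control enters the velocity equation
Q, R = np.eye(2), np.array([[1.0]])   # state and control costs

def sdre_step(x, dt=1e-3):
    # State-dependent factorization A(x): the cubic term is absorbed into A
    A = np.array([[0.0, 1.0],
                  [-alpha - beta * x[0] ** 2, -delta]])
    P = solve_continuous_are(A, B, Q, R)   # Riccati solve at the current state
    u = -np.linalg.solve(R, B.T @ P @ x)   # u = -R^-1 B^T P(x) x
    return x + dt * (A @ x + (B @ u).ravel())

x = np.array([1.5, 0.0])                   # start away from the origin
for _ in range(20000):
    x = sdre_step(x)
print(np.linalg.norm(x))                   # state driven toward the origin
```

Re-solving the Riccati equation at every state is what distinguishes SDRE from a fixed linear-quadratic regulator and what makes it applicable once the cubic non-linearity is strong.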


1. Jayaram A, Tadi M: Synchronization of chaotic systems based on SDRE method. Chaos Solitons Fractals 2006, 28:707–715.

2. Taylor PN, Thomas J, Sinha N, Dauwels J, Kaiser M, Thesen T, Ruths J: Optimal control based seizure abatement using patient derived connectivity. Front. Neurosci 2015, 9:1–10.

3. Liu YY, Slotine JJ, Barabási AL: Controllability of complex networks. Nature 2011, 473:167–173.

P245 An inhibitory microcircuit that amplifies the redistribution of somatic and dendritic inhibition

Loreen Hertäg1, Owen Mackwood1, Henning Sprekeler1

1Modelling of Cognitive Processes, Berlin Institute of Technology and Bernstein Center for Computational Neuroscience, Berlin, 10587, Germany

Correspondence: Loreen Hertäg (

BMC Neuroscience 2017, 18 (Suppl 1):P245

GABAergic interneurons constitute only a small fraction of neurons in the brain, but their importance for brain function is undeniable [1]. Moreover, they display a large diversity in their biophysical, physiological and anatomical properties [2], suggesting a functional ‘division of labor’. However, the computational roles of the various interneuron types, and how these roles are supported by their individual properties, remain largely unknown.

A striking difference between inhibitory cell types is that they form synapses onto different compartments of their postsynaptic targets. Parvalbumin- (PV) and somatostatin (SOM)-expressing interneurons, in particular, seem to predominantly target the perisomatic regions and the dendrites, respectively. As SOM and PV cells are also connected, it has been suggested that inhibition can be dynamically redistributed between the dendrites and somata of pyramidal cells (PCs) [3, 4]. Here, we argue that a different cortical sub-circuit consisting of SOM- and vasoactive intestinal peptide (VIP)-expressing interneurons is optimized to control this redistribution by amplifying small top-down control signals.

To support this hypothesis, we performed a mathematical analysis and simulations of a network model comprising excitatory PCs and inhibitory PV, SOM and VIP neurons. The connectivity in the circuit was chosen according to experimental findings [4]. We show that the SOM-VIP circuit can serve as an amplifier that translates small top-down signals onto VIP cells [5, 6] into large changes in the somato-dendritic distribution of inhibition onto PCs. Taken to the extreme, the circuit can generate winner-take-all (WTA) dynamics that implement a binary switch for somato-dendritic inhibition.
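A minimal threshold-linear rate sketch of this amplification (the weights and inputs below are assumed values, not the parameters of the model in the abstract): with mutual SOM–VIP inhibition of strength w, a small top-down input to VIP shifts the VIP−SOM rate difference by a factor 1/(1 − w), i.e. 10-fold for w = 0.9; for w > 1 the symmetric state destabilizes and the circuit becomes a winner-take-all switch.

```python
import numpy as np

def steady_rates(top_down, w=0.9, I0=1.0, T=20000, dt=0.01):
    f = lambda v: np.maximum(v, 0.0)       # threshold-linear transfer
    som = vip = 0.0
    for _ in range(T):                     # Euler integration to steady state
        som_new = som + dt * (-som + f(I0 - w * vip))
        vip_new = vip + dt * (-vip + f(I0 + top_down - w * som))
        som, vip = som_new, vip_new
    return som, vip

print(steady_rates(0.0))   # symmetric baseline: equal SOM and VIP rates
print(steady_rates(0.1))   # VIP-SOM difference ~1.0, i.e. 10x the 0.1 input
```

The 10-fold gain follows directly from the linear fixed-point equations: the antisymmetric mode decays with rate (1 − w), so its steady-state response to an input difference is amplified by 1/(1 − w).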

Furthermore, we interpret key properties of the SOM-VIP sub-circuit in the light of this hypothesis. We show that the striking lack of recurrent inhibition, as well as the presence of short-term synaptic facilitation (STF) observed among VIP and SOM cells, strengthens the amplification properties of the network. Artificially including recurrent inhibitory connections within the VIP or SOM populations not only weakens the amplification, but can also lead to pathological conditions in which almost all cells within each population are silenced. These pathological states are not observed when firing rate adaptation, which is indeed a common feature of SOM and VIP neurons, is included.

In summary, our analysis shows that the SOM-VIP sub-circuit is well suited to redistribute inhibition onto soma and dendrites of excitatory PC neurons by amplifying small changes in the input signal to VIP cells. The synaptic and neural properties, including lack of recurrence, presence of STF and firing rate adaptation, underpin this computation by strengthening the amplification properties and/or avoiding pathological states.


The project is funded by the German Federal Ministry for Education and Research, FKZ 01GQ1201.


1. Isaacson JS, Scanziani M: How inhibition shapes cortical activity. Neuron 2011, 72(2):231–243.

2. Tremblay R, Lee S, Rudy B: GABAergic interneurons in the neocortex: from cellular properties to circuits. Neuron 2016, 91(2):260–292.

3. Pouille F, Scanziani M: Routing of spike series by dynamic circuits in the hippocampus. Nature 2004, 429(6993):717–723.

4. Pfeffer CK, Xue M, He M, Huang ZJ, Scanziani M: Inhibition of inhibition in visual cortex: the logic of connections between molecularly distinct interneurons. Nat Neurosci 2013, 16(8):1068–1076.

5. Lee S, Kruglikov I, Huang ZJ, Fishell G, Rudy B: A disinhibitory circuit mediates motor integration in the somatosensory cortex. Nat Neurosci 2013, 16(11):1662–1670.

6. Pi HJ, Hangya B, Kvitsiani D, Sanders JI, Huang ZJ, Kepecs A: Cortical interneurons that specialize in disinhibitory control. Nature 2013, 503(7477):521–524.

P246 Learning grid cells in recurrent neural networks

Steffen Puhlmann1, Simon N. Weber1,2, Henning Sprekeler1,2

1MKP, Modelling of cognitive processes, Berlin Institute of Technology, 10587 Berlin, Germany; 2Bernstein Center for Computational Neuroscience, 10115 Berlin, Germany

Correspondence: Steffen Puhlmann (

BMC Neuroscience 2017, 18 (Suppl 1):P246

Grid cells are spatially tuned neurons in the entorhinal cortex, whose spatial firing fields tessellate the environment with a hexagonal lattice. The mechanisms that underlie this highly symmetric firing pattern are currently subject to intense debate [1]. As an alternative to attractor and oscillatory interference models that perform path integration and assume a specific connectivity [1], we recently suggested that grid cells could be learned in a feedforward network by interacting excitatory and inhibitory plasticity on spatially modulated inputs [2]. A central prerequisite for the suggested mechanism is that inhibitory inputs have a broader spatial tuning than their excitatory counterparts. Given that recurrent inhibition is abundant in entorhinal cortex [3] and spatially tuned [4], we reasoned that this broadened inhibition could be the result of recurrent processing.

To corroborate this hypothesis, we analyzed a recurrent network model consisting of excitatory and inhibitory rate neurons. For the sake of the argument, only the excitatory neurons in the network receive external, spatially modulated excitatory input. All synapses in the network are plastic, with Hebbian plasticity on the excitatory synapses and homeostatic plasticity on the inhibitory synapses [5]. When exposing the network to inputs that mimic the movement of an animal on a linear track, a large fraction of cells in the recurrent network rapidly develops a grid-like firing pattern. We find that the underlying mechanism is robust to details of the spatial input tuning and that the spatial scale of the resulting grids is primarily determined by the spatial autocorrelation length of inputs. Based on insights from earlier work on the interaction of excitatory and inhibitory synaptic plasticity [6, 2], we identify key mechanisms in the circuit that are required for the formation of grid cells: 1) a smooth, saturating nonlinearity in the interneurons, which ensures that their spatial tuning is broader than the tuning of their excitatory drive, and 2) sufficiently many and diverse excitatory inputs to the inhibitory neurons.
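Mechanism (1) can be illustrated in a few lines (the Gaussian drive below is an assumed stand-in for spatially tuned input, not the model's actual input statistics): a smooth saturating nonlinearity makes the interneuron's output tuning broader than that of its excitatory drive.

```python
import numpy as np

x = np.linspace(-1, 1, 201)                # spatial position on a linear track
drive = np.exp(-x ** 2 / (2 * 0.1 ** 2))   # narrow excitatory drive
rate = np.tanh(4 * drive)                  # saturating interneuron output

def width(y):
    """Full width at half maximum on the grid x."""
    return np.ptp(x[y > y.max() / 2])

print(width(drive), width(rate))           # output tuning is ~1.7x broader
```

Because the nonlinearity compresses the peak while leaving the flanks nearly linear, the half-maximum crossing moves outward, which is exactly the broadened inhibitory tuning the grid-formation mechanism requires.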

Based on these findings, we suggest that grid cells could be bootstrapped from a large variety of spatially modulated excitatory inputs to a recurrent network of excitatory and inhibitory neurons with synaptic plasticity on all synapses.


The project is funded by the German Federal Ministry for Education and Research, FKZ 01GQ1201.


1. Giocomo LM, Moser MB, Moser EI: Computational models of grid cells. Neuron 2011, 71(4):589–603.

2. Weber SN, Sprekeler H: Learning place cells, grid cells and invariances: A unifying model. bioRxiv 2017, 102525.

3. Couey JJ, Witoelar A, Zhang SJ, Zheng K, Ye J, Dunn B, Czajkowski R, Moser MB, Moser EI, Roudi Y, et al.: Recurrent inhibitory circuitry as a mechanism for grid formation. Nat Neurosci 2013, 16(3):318–324.

4. Buetfering C, Allen K, Monyer H: Parvalbumin interneurons provide grid cell-driven recurrent inhibition in the medial entorhinal cortex. Nat Neurosci 2014, 17(5):710–718.

5. Vogels TP, Sprekeler H, Zenke F, Clopath C, Gerstner W: Inhibitory plasticity balances excitation and inhibition in sensory pathways and memory networks. Science 2011, 334 (6062):1569–1573.

6. Clopath C, Vogels TP, Froemke RC, Sprekeler H: Receptive field formation by interacting excitatory and inhibitory synaptic plasticity. bioRxiv 2016, 066589.

P247 A model of perceptual learning, biases, and roving

David Higgins1,2, Henning Sprekeler1,2

1Modelling of Cognitive Processes, TU Berlin, 10587, Germany; 2Bernstein Center for Computational Neuroscience, Berlin, 10115, Germany

Correspondence: David Higgins (

BMC Neuroscience 2017, 18 (Suppl 1):P247

Roving is a random task-sequencing paradigm in perceptual learning, whereby multiple tasks are learned in a randomly interleaved sequence. For certain experiments, such as bisection tasks, human subjects appear to be unable to learn the individual tasks under roving conditions [1]. In general, theoretical descriptions of perceptual learning experiments have resorted to approaches involving tuning of inputs, using either recurrence or suppression [2, 3]. However, these approaches have exhibited only partial success in tackling roving. In 2012, Herzog et al. [4] proposed a theoretically inspired explanation involving a constant drift in synaptic efficacies in the system (unsupervised bias), due to an inability to maintain accurate task-specific estimates of performance. This leads to a failure to learn using feedback. We update this approach with additional features that, though adding realism, tend to counteract the action of the unsupervised bias. We then use this model to examine whether the unsupervised bias is sufficient to explain roving.
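The core of the unsupervised-bias argument from [4] can be demonstrated in a few lines (the per-task success rates below are assumed for illustration): under roving, a single task-unspecific critic converges to the mixture performance, so the modulatory learning signal (R − R̄) acquires a systematic, task-dependent drift even when individual trials carry no usable information.

```python
import numpy as np

rng = np.random.default_rng(0)
p = {"A": 0.8, "B": 0.4}                 # assumed per-task success rates
Rbar = 0.5                               # single running reward estimate
drift = {"A": 0.0, "B": 0.0}             # accumulated (R - Rbar) per task
for _ in range(10000):
    task = rng.choice(["A", "B"])        # roving: randomly interleaved tasks
    R = float(rng.random() < p[task])    # binary reward on this trial
    drift[task] += R - Rbar              # what a reward-modulated Hebbian term integrates
    Rbar += 0.01 * (R - Rbar)            # critic tracks the average over both tasks
print(drift)                             # systematically positive for A, negative for B
```

The easier task accumulates a constant positive modulatory signal and the harder one a constant negative signal, regardless of trial-by-trial outcomes; fed into a Hebbian weight update, this drift produces the unbounded weight growth that our normalisation term is designed to counteract.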

The proof-of-concept model proposed in Herzog et al. [4] does indeed lead to a failure to learn correctly during roving, but although it fails due to the posited unsupervised bias in the learning rule, the implementation relies on unbounded weight growth, an unrealistic phenomenon. We introduce a simple weight normalisation term to counteract the unbounded weight growth, and implement a cognitive bias, often observed in human subjects, towards 50:50 presentation ratios. We thus arrive at a more appropriate model of human perceptual learning performance. Our model (i) learns correctly on a single bisection or vernier task, (ii) fails to learn during roving of multiple tasks, (iii) exhibits the human tendency towards 50:50 ratios of choice, thus failing when a 75:25 ratio is used, and (iv) correctly learns when informed of the altered presentation ratio, similarly to human subjects (unpublished data). A further extension to the original model, operating on a much slower timescale, allows the task critic system to learn over time to separately identify the tasks. This ultimately leads to learning of the initially unlearnable tasks, as seen in [5].

Our model can be seen as the distillation of the mechanism of failure to learn due to the unsupervised bias. Consistent with intuitions within the perceptual learning community, our model indicates that the degree of overlap in task representations, combined with the unsupervised bias, leads to the difference in outcomes between successful transfer learning versus failure. Interestingly, a cognitive bias in the task presentation ratio appears to be quite helpful in a range of presentation paradigms, often counteracting the unsupervised bias and rescuing potential failures to learn correctly. Our work would combine quite well with the more detailed work of Liu et al. [6] to provide a full model of perceptual learning in the visual system.


1. Otto TU, Herzog MH, Fahle M, Zhaoping L: Perceptual Learning with Spatial Uncertainties. Vision Research 2006, 46(19):3223–3233.

2. Zhaoping L, Herzog MH, Dayan P: Nonlinear Ideal Observation and Recurrent Preprocessing in Perceptual Learning. Network 2003, 14(2):233–247.

3. Schäfer R, Vasilaki E, Senn W: Adaptive Gain Modulation in V1 Explains Contextual Modifications during Bisection Learning. PLoS Comput Biol 2009, 5(12):e1000617.

4. Herzog MH, Aberg KC, Frémaux N, Gerstner W, Sprekeler H: Perceptual Learning, Roving and the Unsupervised Bias. Vision Research 2012, 61:95–99.

5. Parkosadze K, Otto TU, Malania M, Kezeli A, Herzog M: Perceptual Learning of Bisection Stimuli under Roving: Slow and Largely Specific. Journal of Vision 2008, 8(1):5.

6. Liu J, Dosher BA, Lu ZL: Augmented Hebbian Reweighting Accounts for Accuracy and Induced Bias in Perceptual Learning with Reverse Feedback. Journal of Vision 2015, 15(10):10.

P248 Presynaptic inhibition provides a rapid stabilization of recurrent excitation in the face of plasticity

Laura B. Naumann1,2, Henning Sprekeler1,2

1Modelling of Cognitive Processes, Berlin Institute of Technology, Berlin, Germany; 2Bernstein Center for Computational Neuroscience, Berlin, Germany

Correspondence: Laura B. Naumann (

BMC Neuroscience 2017, 18 (Suppl 1):P248

Synaptic plasticity in recurrent neural networks is believed to underlie learning and memory in the brain. One practical problem with this hypothesis is that recurrent excitation forms a positive feedback loop that can easily be destabilized by synaptic plasticity. Numerous homeostatic mechanisms have been suggested to stabilize plastic recurrent networks [1], but recent computational work indicates that all these mechanisms share a major caveat: effective rate stabilization requires a homeostatic process that operates on the order of seconds, while experimentally observed mechanisms such as synaptic scaling occur over much longer timescales [2].

Here, we suggest presynaptic inhibition as an alternative homeostatic process, which does not suffer from this discrepancy in timescales. Experimental studies have revealed that excess network activity can trigger an inhibition of transmitter release at excitatory synapses through the activation of presynaptic GABAB receptors, which effectively weakens synaptic strength [3]. This attenuation of recurrent interactions has been observed to be fully reversible and acts on timescales of hundreds of milliseconds, thus constituting a candidate mechanism for the rapid compensation of synaptic changes.

To highlight the beneficial properties of presynaptic inhibition in excitatory recurrent circuits, we analyzed a simple rate-based recurrent network model. Presynaptic inhibition is mimicked by multiplicatively scaling down recurrent excitatory weights in response to excess population activity. Using analytical and numerical methods, we show that presynaptic inhibition ensures a gradual increase of firing rates with growing recurrent excitation, even for very strong recurrence (Fig. 1A). An in-depth mathematical analysis of the underlying dynamical system further reveals that the stability of non-zero fixed points (Fig. 1A, filled markers) is largely independent of model parameters. In contrast, classical subtractive postsynaptic inhibition is unable to control recurrent excitation once it has surpassed a critical value (Fig. 1B). Moreover, we investigate the conditions under which presynaptic inhibition can stabilize recurrent networks when Hebbian assemblies are imprinted.

In summary, the multiplicative character of presynaptic inhibition provides a powerful homeostatic mechanism to rapidly reduce effective recurrent interactions while retaining synaptic weights and hence conserving the underlying connectivity. It might therefore set the stage for stable learning without interfering with plasticity at the level of single synapses.

Figure 1. Steady-state firing rates as a function of recurrent strength for different input intensities I_ext. A. Presynaptic inhibition. B. Postsynaptic inhibition
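The comparison in Figure 1 can be reproduced with a one-population rate model (the functional forms and parameters below are assumed for illustration, not the authors'): presynaptic inhibition scales the recurrent weight down multiplicatively with population activity, whereas postsynaptic inhibition subtracts from the input.

```python
import numpy as np

def simulate(w, I_ext, mode, T=2000, dt=0.1, tau=10.0, theta=5.0, c=0.5):
    r = 0.0
    for _ in range(T):
        if mode == "presyn":   # effective weight w / (1 + r/theta)
            drive = w * r / (1.0 + r / theta) + I_ext
        else:                  # subtractive postsynaptic inhibition
            drive = (w - c) * r + I_ext
        r += dt / tau * (-r + max(drive, 0.0))
        if r > 1e4:            # treat as runaway excitation
            return np.inf
    return r

for w in (0.5, 1.0, 2.0, 4.0):
    print(w, simulate(w, 1.0, "presyn"), simulate(w, 1.0, "subtr"))
# presynaptic rates grow gradually with w; subtractive control diverges
# once the net recurrent gain w - c exceeds 1
```

The multiplicative scaling keeps the effective recurrent gain below one at high rates for any w, which is why the presynaptic steady states remain finite and stable where the subtractive model blows up.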


1. Abbott LF, Nelson SB: Synaptic plasticity: taming the beast. Nat Neurosci 2000, 3:1178–1183.

2. Zenke F, Gerstner W: Hebbian plasticity requires compensatory processes on multiple timescales. Phil Trans R Soc B 2017, 372(1715):20160259.

3. Urban-Ciecko J, Fanselow EE, Barth AL: Neocortical Somatostatin Neurons Reversibly Silence Excitatory Transmission via GABAB Receptors. Curr Biol 2015, 25(6):722–731.

P249 A grid score for individual spikes of grid cells

Simon N. Weber1,2, Henning Sprekeler1,2

1Berlin Institute of Technology, 10587 Berlin, Germany; 2Bernstein Center for Computational Neuroscience, 10115 Berlin, Germany

Correspondence: Simon N. Weber (

BMC Neuroscience 2017, 18 (Suppl 1):P249

The location-specific firing of cells in the entorhinal cortex is subject to extensive experimental and theoretical research. When classifying the tuning properties of entorhinal cells, researchers distinguish between grid cells, i.e., cells whose firing locations form a hexagonal grid, and cells that fire periodically but without hexagonal symmetry [1–3]. This classification requires a measure for the symmetry of spatially modulated firing patterns — a grid score. The most established grid score is computed in multiple stages [e.g., 4]. Spike locations are transformed into a rate map. Subsequently, an autocorrelogram of the rate map is cropped, rotated and correlated with its unrotated copy. The final grid score is obtained from the resulting correlation-vs-angle function at selected angles. This procedure results in a global grid score for the firing pattern, whose exact value depends on the parameter choices required at each stage.
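The multi-stage procedure described above can be sketched on a synthetic hexagonal rate map. This is a hedged illustration: the angle set and the omission of the cropping step are our own parameter choices, which is precisely the kind of arbitrariness the established score suffers from.

```python
import numpy as np
from scipy.ndimage import rotate
from scipy.signal import fftconvolve

def hex_rate_map(n=101, scale=0.25):
    """Synthetic hexagonal pattern: sum of three plane waves 60 deg apart."""
    x, y = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n))
    ks = [np.array([np.cos(a), np.sin(a)]) / scale
          for a in (0.0, np.pi / 3, 2 * np.pi / 3)]
    return sum(np.cos(2 * np.pi * (k[0] * x + k[1] * y)) for k in ks)

def grid_score(rate_map):
    m = rate_map - rate_map.mean()
    ac = fftconvolve(m, m[::-1, ::-1], mode="same")     # autocorrelogram
    def corr_at(angle):
        r = rotate(ac, angle, reshape=False)
        return np.corrcoef(ac.ravel(), r.ravel())[0, 1]
    # hexagonal symmetry: high correlation at 60/120 deg, low at 30/90/150
    return (min(corr_at(60), corr_at(120))
            - max(corr_at(30), corr_at(90), corr_at(150)))

print(grid_score(hex_rate_map()))   # clearly positive for a hexagonal map
```

Each stage (rate-map construction, cropping, interpolation during rotation) introduces parameters that shift the resulting global score, which motivates the per-spike local score proposed here.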

Here we suggest a new approach that computes a local grid score — and the local grid orientation — for each individual spike, directly from spike locations. We compare it to established grid scores and show that it is at least as reliable in quantifying the global grid score of the spike pattern and robust to noise on the spike locations. The score enables the plotting of spike locations, color coded with the local grid score or the local orientation of the grid and could thus simplify the visualization of experimental data. More specifically, it could be used to quantify and highlight recent experimental findings, like boundary effects on the structure of grids in asymmetric enclosures [5], drifts in grid orientation along the arena [6] or the preferred alignment of grids to one of the boundaries [6]. The grid score is applicable to any n-fold symmetry.

We provide a public Python package (using SciPy and NumPy) that efficiently determines the grid score directly from spike locations.


Funded by the German Federal Ministry for Education and Research, FKZ 01GQ1201.


1. Krupic J, Burgess N, O’Keefe J: Neural representations of location composed of spatially periodic bands. Science 2012, 337(6096):853–857.

2. Buetfering C, Allen K, Monyer H: Parvalbumin interneurons provide grid cell-driven recurrent inhibition in the medial entorhinal cortex. Nat Neurosci 2014, 17(5):710–718.

3. Kropff E, Carmichael JE, Moser MB, Moser EI. Speed cells in the medial entorhinal cortex. Nature 2015, 523(7561), 419–424.

4. Sargolini F, Fyhn M, Hafting T, McNaughton BL, Witter MP, Moser MB, Moser EI: Conjunctive representation of position, direction, and velocity in entorhinal cortex. Science 2006, 312(5774):758–762.

5. Krupic J, Bauza M, Burton S, Barry C, O’Keefe J: Grid cell symmetry is shaped by environmental geometry. Nature 2015, 518(7538), 232–235.

6. Stensola T, Stensola H, Moser MB, Moser EI: Shearing-induced asymmetry in entorhinal grid cells. Nature 2015, 518(7538), 207–212.

P250 Cortical circuits implement optimal integration of context

Ramakrisnan Iyer, Stefan Mihalas

Allen Institute for Brain Science, Seattle, WA, 98109, USA

Correspondence: Stefan Mihalas (

BMC Neuroscience 2017, 18 (Suppl 1):P250

Neurons in the primary visual cortex (V1) predominantly respond to a patch of the visual input, their classical receptive field. These responses are modulated by the visual input in the surround [1]. This reflects the fact that features in natural scenes do not occur in isolation: lines and surfaces are generally continuous, and the surround provides context for the information in the classical receptive field. It is generally assumed that the information in the near surround is transmitted via lateral connections between neurons in the same area [1]. A series of large-scale efforts have recently described the relation between lateral connectivity and visually evoked responses and found like-to-like connectivity between excitatory neurons [2, 3]. Additionally, specific cell-type connectivity for inhibitory neuron types has been described [4]. However, current normative models of cortical function based on sparsity [5] or saliency [6] predict functional inhibition between similarly tuned neurons. What computations are consistent with the observed structure of the lateral connections between the excitatory and diverse types of inhibitory neurons? We combined natural scene statistics [7] and mouse V1 neuron responses [8] to compute the lateral connections and computations of individual neurons that would optimally integrate information from the classical receptive field with that from the surround. The direct implementation requires single neurons to make complex computations on their inputs. While it is possible for such computations to be implemented by the dendritic trees, we show that an approximation can be achieved with relatively simple neurons.
We show that this network has “like-to-like” lateral connections between excitatory neurons similar to those observed [2, 3], a distance dependence of connections similar to the observed one [9], and requires three classes of inhibitory neurons: one performing local normalization, one surround inhibition, and one gating the inhibition from the surround, similar to anatomical [4] and physiological studies. This method generates an entire connectivity matrix for lateral connections in a layer in a purely unsupervised fashion, such that it generates testable hypotheses for connectome studies. Additionally, when these lateral connections are implemented in a neuronal network, the reconstruction of natural scenes is significantly improved. For images with different statistics, such as independent and identically distributed random patches, using a natural scene prior hurts reconstruction. However, an additional gating mechanism allows optimal reconstruction for this type of features as well. We hypothesize that this computation, the optimal integration of contextual cues, is a general property of cortical circuits, and that the rules constructed for mouse V1 generalize to other areas and species.
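A toy stand-in for the correlation logic behind like-to-like lateral weights (1/f noise substitutes for natural images, Gabor filters for V1 receptive fields; all parameters are assumed): filters with the same orientation at nearby positions respond with correlated signs, whereas orthogonal ones do not, so correlation-derived lateral connections come out "like-to-like".

```python
import numpy as np

rng = np.random.default_rng(1)

def pink_noise_image(n=64):
    f = np.fft.fftfreq(n)
    fx, fy = np.meshgrid(f, f)
    amp = 1.0 / np.maximum(np.hypot(fx, fy), 1.0 / n)   # ~1/f amplitude spectrum
    phase = np.exp(2j * np.pi * rng.random((n, n)))
    return np.real(np.fft.ifft2(amp * phase))

def gabor(theta, n=15, lam=6.0, sigma=3.0):
    x, y = np.meshgrid(np.arange(n) - n // 2, np.arange(n) - n // 2)
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

def response(img, filt, row, col):
    n = filt.shape[0]
    return np.sum(img[row:row + n, col:col + n] * filt)

g_same, g_orth = gabor(0.0), gabor(np.pi / 2)
same, orth = [], []
for _ in range(500):
    img = pink_noise_image()
    r0 = response(img, g_same, 20, 20)
    same.append((r0, response(img, g_same, 24, 20)))    # same orientation, nearby
    orth.append((r0, response(img, g_orth, 24, 20)))    # orthogonal orientation
c_same = np.corrcoef(np.array(same).T)[0, 1]
c_orth = np.corrcoef(np.array(orth).T)[0, 1]
print(c_same, c_orth)   # like-oriented pairs correlate far more strongly
```

Setting lateral weights proportional to such response correlations already yields a like-to-like, distance-dependent connectivity; the full method in the abstract goes further by deriving the optimal integration and its inhibitory cell classes.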


1. Angelucci A, Bressloff PC: Contribution of feedforward, lateral and feedback connections to the classical receptive field center and extra-classical receptive field surround of primate V1 neurons. Prog Brain Res 2006, 154:93–120.

2. Ko H, Hofer SB, Pichler B, Buchanan KA, Sjöström PJ, Mrsic-Flogel TD: Functional specificity of local synaptic connections in neocortical networks. Nature 2011, 473(7345):87–91.

3. Lee WC, Bonin V, Reed M, Graham BJ, Hood G, Glattfelder K, Reid RC: Anatomy and function of an excitatory network in the visual cortex. Nature 2016, 532(7599):370–374.

4. Jiang X, Shen S, Cadwell CR, Berens P, Sinz F, Ecker AS, Patel S, Tolias AS: Principles of connectivity among morphologically defined cell types in adult neocortex. Science 2015, 350.

5. Olshausen BA, Field DJ: Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature 1996, 381(6583):607–609.

6. Coen-Cagli R, Dayan P, Schwartz O: Cortical Surround Interactions and Perceptual Salience via Natural Scene Statistics. PLoS Comput Biol 2012, 8(3):e1002405.

7. Martin D, Fowlkes C, Tal D, and Malik J. A Database of Human Segmented Natu